Preface published in: Multidisciplinary Perspectives on Human-AI Team Trust
Edited by Nicolo' Brandizzi, Morgan Elizabeth Bailey, Carolina Centeio Jorge, Myke C. Cohen, Francesco Frattolillo and Alan Richard Wagner
[Interaction Studies 26:2] 2025
► pp. 151–163
Preface
Multidisciplinary perspectives on human-AI team trust
Nicolò Brandizzi | Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), Sankt Augustin | Lamarr Institute for Machine Learning and Artificial Intelligence
Published online: 27 February 2026
https://doi.org/10.1075/is.00025.edi
Article outline
- 1. The need for multidisciplinary perspectives on human-AI team trust
- 1.1 Trust development and measurement
- 1.2 Psychological and interaction dynamics
- 1.3 Transparency, communication, and design
- 1.4 Ethics, bias, and organizational considerations
- 2. Overview of contributions
- 2.1 Trust signals and communication
- 2.2 Computational and modeling approaches
- 2.3 Conceptual, ethical, and organizational perspectives
- 3. Looking forward
References
Afroogh, S., Akbari, A., Malone, E., & Langarizadeh, M. (2024). Trust in AI: Progress, challenges, and future directions. Humanities and Social Sciences Communications, 11 (1), 1568.
Bailey, M. E., Gancz, B., & Pollick, F. E. (2026). The effect of emojis and AI reliability on team performance and trust in human-AI teams. Interaction Studies, 26 (2) (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust).
Carragher, D. J., Sturman, D., & Hancock, P. J. (2024). Trust in automation and the accuracy of human-algorithm teams performing one-to-one face matching tasks. Cognitive Research: Principles and Implications, 9 (41).
Coester, U., Anderle, L., & Pohlmann, N. (2026). Trustworthiness needs for the use of AI solutions in business: Ethical and empirical considerations. Interaction Studies, 26 (2) (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust).
Cohen, M. C., Chiou, E. K., & Cooke, N. J. (2026). Trusting machine teammates: The role of personifying and objectifying language in team communication. Interaction Studies, 26 (2) (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust).
Cohen, M. C., Kim, N., Ba, Y., Pan, A., Bhatti, S., Salehi, P., Sung, J., Blasch, E., Mancenido, M. V., & Chiou, E. K. (2025). PADTHAI-MM: Principles-based approach for designing trustworthy, human-centered AI using the MAST methodology. AI Magazine, 46 (1), e70000.
de Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human-robot teams. International Journal of Social Robotics, 12 (2), 459–478.
de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22 (3), 331–349.
Duan, W., Zhou, S., Scalia, M. J., Freeman, G., Gorman, J., Tolston, M., McNeese, N. J., & Funke, G. (2025). Understanding the processes of trust and distrust contagion in human-AI teams: A qualitative approach. Computers in Human Behavior, 165, 108560.
European Commission. (2021). Coordinated plan on artificial intelligence 2021 review (tech. rep.). European Commission. [URL] (Accessed: 2025-09-04).
Georganta, E., & Ulfert, A.-S. (2024). Would you trust an AI team member? Team trust in human-AI teams. Journal of Occupational and Organizational Psychology, 97 (3), 1212–1241.
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14 (2), 627–660.
High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI (tech. rep.). European Commission. [URL] (Accessed: 2025-09-04).
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57 (3), 407–434.
Jiang, L., Hwang, J. D., Bhagavatula, C., Bras, R. L., Forbes, M., Borchardt, J., Liang, J., Etzioni, O., Sap, M., & Choi, Y. (2021). Delphi: Towards machine ethics and norms. arXiv preprint arXiv:2110.07574.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399.
Kucukosmanoglu, M., Johnson, C. J., Pollard, K., Chhan, D., Lakhmani, S. G., Forster, D., Conklin, S., Brooks, J., Crowell, H. P., & Krausman, A. (2026). Exploring trust in AI-supported military teams using sentiment analysis. Interaction Studies, 26 (2) (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust).
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46 (1), 50–80.
Lu, G., Lu, J., Yao, S., & Yip, Y. J. (2009). A review on computational trust models for multi-agent systems. The Open Information Science Journal, 2, 18–25.
Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8 (4), 277–301.
Malle, B. F., & Ullman, D. (2021). Chapter 1 — A multidimensional conception and measure of human-robot trust. In C. S. Nam & J. B. Lyons (Eds.), Trust in human-robot interaction (pp. 3–25). Academic Press.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20 (3), 709–734.
McNeese, N. J., Demir, M., Chiou, E. K., & Cooke, N. J. (2021). Trust and team performance in human-autonomy teaming. International Journal of Electronic Commerce, 25 (1), 51–72.
Momen, A., Tossell, C. C., Walliser, J. C., Niemyer, R., Tolston, M., Funke, G. J., & de Visser, E. J. (2026). Perceived trustworthiness and moral competence of a GenAI-enabled ethical robot advisor. Interaction Studies, 26 (2) (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust).
Musick, G., O’Neill, T. A., Schelble, B. G., McNeese, N. J., & Henke, J. B. (2021). What happens when humans believe their teammate is an AI? An investigation into humans teaming with autonomy. Computers in Human Behavior, 122, 106852.
Naikar, N., Hoffman, R., Roth, E. M., Klein, G., Militello, L. G., & Dominguez, C. (2025). Should we make AI more tool-like or teammate-like? Journal of Cognitive Engineering and Decision Making, 15553434251346904.
Nguyen, D., Cohen, M. C., Kao, H.-T., Engberon, G., Penafiel, L., Lynch, S., & Volkova, S. (2026). Exploratory models of human-AI teams: Leveraging human digital twins to investigate trust development. Interaction Studies, 26 (2) (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust).
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52 (3), 381–410.
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., et al. (2019). Machine behaviour. Nature, 568 (7753), 477–486.
Rezaei Khavas, Z., Kotturu, M. R., Ahmadzadeh, S. R., & Robinette, P. (2024). Do humans trust robots that violate moral trust? ACM Transactions on Human-Robot Interaction, 13 (2), 25:1–25:30.
Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26 (5), 2749–2767.
Schmutz, J. B., Outland, N., Kerstan, S., Georganta, E., & Ulfert, A.-S. (2024). AI-teaming: Redefining collaboration in the digital era. Current Opinion in Psychology, 58, 101837.
Seeber, I., Bittner, E., Briggs, R. O., De Vreede, T., De Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., et al. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57 (2), 103174.
Shneiderman, B. (1989). A nonanthropomorphic style guide: Overcoming the Humpty-Dumpty syndrome. The Computing Teacher, 16 (7), 5.
Snow, T. (2021). From satisficing to artificing: The evolution of administrative decision-making in the age of the algorithm. Data & Policy, 3, e3.
Tabassi, E. (2023, January 26). Artificial intelligence risk management framework (AI RMF 1.0). National Institute of Standards and Technology.
The Guardian. (2023, May). UK schools bewildered by AI and do not trust tech firms, headteachers say (Accessed: 2025-09-04).
The Guardian. (2025a, July). Medical charlatans have existed through history. But AI has turbocharged them (Accessed: 2025-09-04).
The Guardian. (2025c, August). Therapists warn AI chatbots risk harming mental health support (Accessed: 2025-09-04).
The Guardian. (2025d, May). Yes, AI will eventually replace some workers. But that day is still a long way off (Accessed: 2025-09-04).
Tielman, M., Bailey, M. E., Frattolillo, F., Centeio Jorge, C., Ulfert, A.-S., & Meyer-Vitali, A. (2026). Multidisciplinary perspectives on human-AI team trust. Interaction Studies.
UNESCO. (2021). Recommendation on the ethics of artificial intelligence (tech. rep.). United Nations Educational, Scientific and Cultural Organization. [URL] (Accessed: 2025-09-04).
