Article published in: Multidisciplinary Perspectives on Human-AI Team Trust
Edited by Nicolò Brandizzi, Morgan Elizabeth Bailey, Carolina Centeio Jorge, Myke C. Cohen, Francesco Frattolillo and Alan Richard Wagner
[Interaction Studies 26:2] 2025
pp. 164–199
Multidisciplinary perspectives on human‑AI team trust
Published online: 27 February 2026
https://doi.org/10.1075/is.24048.tie
Abstract
Human-AI teamwork is no longer a topic of the future. Given the importance of trust in human teams, the question arises of how trust functions in human-AI teams. Although trust has long been studied from a human-centred perspective (e.g. in psychology and philosophy), from a computational perspective, and from the perspective of human trust in AI (e.g. in human-computer interaction), the study of trust in human-AI interaction in a team setting is still a novel field. For this reason, the
MULTITTRUST (Multidisciplinary perspectives on Human-AI Team Trust) workshop series was founded. In this paper, we present the
main outcomes after three editions. Our contributions are: an overview of the shared language of concepts and definitions; an
outline of the main open research challenges; and methodological guidelines for further studies in meaningful human-AI team trust.
These three contributions form a foundational roadmap towards a better understanding of trust in human-AI team interactions.
Article outline
- 1. Introduction
- 2. Concepts and definitions
- 2.1 Trust
- 2.1.1 Trust across disciplines
- 2.2 Trustworthiness
- 2.3 Trust calibration
- 2.4 Artificial trust
- 2.5 Team trust
- 3. Research challenges
- 3.1 Trust calibration
- Challenge. How can AI understand a human’s trust?
- Challenge. How can AI influence a human’s trust?
- Challenge. How do we deal with the subjective and dynamic nature of trust when trying to observe it?
- Challenge. Can we separate trust in an agent from trust in the humans behind it?
- Challenge. Can we quantify how much trust is appropriate in a given situation?
- Challenge. How can we compare human trust to an agent’s trustworthiness in a meaningful way?
- Challenge. What strategies should AI systems use to increase or dampen trust?
- Challenge. Are there trade-offs to more calibrated trust, which might not be worth it?
- Challenge. How can trust calibration take into account changing contexts as AI itself evolves?
- 3.2 Artificial trust
- Challenge. What can we observe to base artificial trust on?
- Challenge. How do we deal with the lack of ground truth?
- Challenge. How do we determine the right goal of artificial trust?
- Challenge. How do we make an agent transparent about artificial trust?
- 3.3 Team trust
- Challenge. How do human trust and artificial trust differ, and how can they be compared and integrated in team trust?
- Challenge. Do beliefs exist that are shared between humans and AI?
- Challenge. Should we strive to model human-AI team trust after team trust in humans, or is it something completely new?
- 4. Measures and methods
- 4.1 Trust measures
- 4.2 Research setting
- 5. Conclusions
- Author’s contributions
- Acknowledgements
- Notes
