Article published in: Multidisciplinary Perspectives on Human-AI Team Trust
Edited by Nicolò Brandizzi, Morgan Elizabeth Bailey, Carolina Centeio Jorge, Myke C. Cohen, Francesco Frattolillo and Alan Richard Wagner
[Interaction Studies 26:2] 2025
pp. 357–385
The effect of emojis and AI reliability on team performance and trust in human-AI teams
Published online: 27 February 2026
https://doi.org/10.1075/is.24045.bai
Abstract
The increasing integration of Artificial Intelligence (AI) into human teams necessitates a deeper understanding of
how to foster effective collaboration. This study investigates how incorporating emojis, as a representation of emotional
intelligence, into AI communication influences human-AI team dynamics. Specifically, the study examined how emojis impact human
trust in AI teammates, whether different types of emojis yield varied outcomes, and how emoji use affects the perceived
performance of both AI and human teammates. A controlled experiment was conducted with participants who collaborated with a
simulated AI teammate on a geographic location identification task. The AI teammate’s reliability and the use of emojis were
manipulated across different experimental conditions. Results showed that neither the AI teammate’s reliability nor the use of
emojis significantly influenced participants’ explicit trust ratings of the AI teammate. These findings highlight the complex
interplay of trust, perception, and emotional cues in human-AI team (HAT) collaboration.
Article outline
- 1.Introduction
- 1.1Human-AI teams
- 1.2Performance and reliability
- 1.3Trust
- 1.4Emojis
- 1.5Aims of current study
- 2.Methods
- 2.1Participants
- 2.2Materials
- Equipment and materials
- Experimental stimuli
- 2.3Procedure
- 2.4Developing the linear mixed model for analysis
- 2.5Model specification
- 2.6Post-hoc analysis
- 3.Results
- 3.1Demographic and baseline analysis
- 3.2Condition performance
- 3.3Post-hoc analysis
- 3.4Descriptive statistics
- 3.5Propensity to trust
- 3.6Influence ratings
- 3.7Trust rating
- 3.8AI performance ratings
- 3.9Human performance ratings
- 3.10Questionnaire data
- 3.10.1Trust in AI questionnaire
- 3.10.2The Godspeed questionnaire
- 4.Discussion
- 4.1Overview
- 4.2Task performance
- 4.3Trust and influence
- 4.4Teammate performance ratings
- 4.5Godspeed perceptions
- 4.6AI reliability
- 4.7Ethical considerations
- 4.8Applications
- 4.9Limitations and future research
- Author contributions
