Article published in: Multidisciplinary Perspectives on Human-AI Team Trust
Edited by Nicolò Brandizzi, Morgan Elizabeth Bailey, Carolina Centeio Jorge, Myke C. Cohen, Francesco Frattolillo and Alan Richard Wagner
[Interaction Studies 26:2] 2025
pp. 200–228
Trusting machine teammates
The role of personifying and objectifying language in team communication
Published online: 27 February 2026
https://doi.org/10.1075/is.24050.coh
Abstract
Team communication content can provide insights into teammates’ coordination processes and perceptions of one
another. Using a simulated aircraft reconnaissance team task testbed, we investigate how personifying and objectifying
communication content relate to people’s trust in and anthropomorphism of machine teammates and to overall team performance. A
total of 44 participants were paired and assigned to one of two unique team roles alongside a synthetic pilot agent. Instances of
verbal personifications and objectifications that occurred during the task were captured and compared to team performance, as well
as questionnaire responses related to participants’ trust in, and anthropomorphizing of, the synthetic pilot. Verbal
personifications were not correlated with trust and anthropomorphism but converged for the two human roles over time, along with a
convergence in trust towards the synthetic agent. Verbal objectifications, on the other hand, were negatively correlated with
perceived trustworthiness and anthropomorphism of a teammate. Neither verbal personifications nor objectifications were found to
be related to team performance. Our findings suggest that people verbally personify machines to ease communication, and that the
same processes that underlie tendencies to verbally personify and objectify machines are related to those that influence trust and
anthropomorphism.
Article outline
- 1. Introduction
- 2. Background
- 2.1 Team communication and team cognition
- 2.2 Trust and anthropomorphism in HMT communication
- 2.3 Personification & objectification in HMT communication
- 3. Method
- 3.1 Remotely piloted aircraft system synthetic task environment
- 3.2 Experimental design
- 3.3 Participants
- 3.4 Procedure
- 3.5 Measures
- 3.5.1 Verbal personifications and objectifications
- 3.5.2 Self-reported trust and anthropomorphism
- 3.5.3 Team performance
- 3.6 Hypotheses
- 4. Results
- 4.1 Survey, verbal, & performance measure correlations
- 4.1.1 Verbal personifications & survey measures
- 4.1.2 Verbal objectifications & survey measures
- 4.1.3 Verbal behaviors & team performance
- 4.1.4 Verbal personifications & verbal objectifications
- 4.2 Survey & verbal trends over time & between roles
- 4.2.1 Survey measures
- 4.2.2 Verbal measures
- 5. Discussion
- 5.1 Verbal personifications
- 5.1.1 Verbal personifications & gendered attributions
- 5.1.2 Verbal personifications & the role of teammate roles
- 5.2 Verbal objectifications
- 5.2.1 Verbal objectifications, negative perceptions, & dehumanization
- 5.2.2 Verbal objectifications and personifications as non-orthogonal measures
- 5.3 Limitations & recommendations for future work
- 6. Conclusion
- Acknowledgments