Article published in: Interaction Studies: Online-First Articles
Anthropomorphism, dependency, and trust in Generative Artificial Intelligence
Exploring the trust-privacy paradox through dual-theory integration
Published online: 17 March 2026
https://doi.org/10.1075/is.25116.maz
Abstract
As Generative Artificial Intelligence (GAI) becomes increasingly integrated into daily life, understanding how
users develop trust in these systems while navigating privacy concerns is critical. This study examines how perceived
anthropomorphism, privacy concerns, and dependency influence trust in GAI, drawing on Privacy Calculus Theory (PCT) and Media
Dependency Theory (MDT). The findings reveal that users trust GAI more when they perceive it as human-like, but privacy concerns
reduce trust, creating a trust-privacy paradox. However, GAI dependency moderates these relationships, strengthening the positive
effect of anthropomorphism on trust while weakening the negative impact of privacy concerns. Additionally, privacy concerns
partially mediate the relationship between anthropomorphism and trust, suggesting that users who perceive AI as human-like worry
less about privacy risks. By integrating PCT and MDT, this study offers a comprehensive framework to understand how trust in AI
evolves, not just through rational cost-benefit evaluations (PCT) but also through behavioral adaptation based on dependency
(MDT). These insights have practical implications for AI developers and policymakers, emphasizing the need for human-centered AI
design, privacy safeguards, and ethical guidelines to foster sustained trust in AI-driven interactions while addressing user
concerns.
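The moderated mediation structure summarized above can be expressed, as a minimal sketch, in standard moderated-regression form. The symbols TR (trust in GAI), ANT (perceived anthropomorphism), PC (privacy concerns), and DEP (GAI dependency) are illustrative shorthand assumed here, not the article's own notation:

% Hypothesized model in moderated-regression form (illustrative only;
% variable symbols are assumptions, not the article's notation).
TR = \beta_0 + \beta_1\,ANT + \beta_2\,PC + \beta_3\,DEP
     + \beta_4\,(ANT \times DEP) + \beta_5\,(PC \times DEP) + \varepsilon
PC = \gamma_0 + \gamma_1\,ANT + \upsilon   % partial mediation path
% Per the reported findings: \beta_1 > 0 (anthropomorphism raises trust),
% \beta_2 < 0 (privacy concerns lower it), \beta_4 > 0 and \beta_5 > 0
% (dependency strengthens the positive path and offsets the negative one),
% and \gamma_1 < 0 (perceived human-likeness dampens privacy concerns).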
Article outline
- 1. Introduction
- 2. Literature review
- 2.1 Theoretical framework
- 2.2 Perceived anthropomorphism and trust in generative AI
- 2.3 Anthropomorphism, privacy, and trust in GAI
- 2.4 Moderating role of generative AI dependency
- 3. Methodology
- 3.1 Information of study samples
- 3.2 Data collection
- 3.3 Development of measurement scale
- 3.4 Composite reliability & validity of measurement scale
- 3.5 Testing model fit
- 4. Data analysis & results
- 4.1 Descriptive & correlation analysis
- 4.2 Measurement of structural model
- 4.3 Slope test for moderation
- 5. Discussion & implications
- 5.1 Discussion of research findings
- 5.2 Theoretical & practical implications
- 5.3 Study limitations and future research directions
- 6. Conclusion