References
Afroogh, S., Akbari, A., Malone, E., Kargar, M., & Alambeigi, H. (2024). Trust in AI: progress, challenges, and future directions. Humanities and Social Sciences Communications, 11(1), 1–30.
Ali, H., & Aysan, A. F. (2025). Ethical dimensions of generative AI: a cross-domain analysis using machine learning structural topic modeling. International Journal of Ethics and Systems, 41(1), 3–34.
Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2024). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 40(5), 1251–1266.
Ball-Rokeach, S. J., & DeFleur, M. L. (1976). A dependency model of mass-media effects. Communication Research, 3(1), 3–21.
Başer, M. Y., Büyükbeşe, T., & Durmaz, Y. (2024). “Yes, It’s Cute, But How Can I Be Sure It’s Safe or Not?” Investigating the Intention to Use Service Robots in the Context of Privacy Calculus. International Journal of Human–Computer Interaction, 40(20), 6151–6166.
Bellovin, S. M. (2024). Degenerative AI? IEEE Security & Privacy, 22(3), 88–88.
Benitez, J., Henseler, J., Castillo, A., & Schuberth, F. (2020). How to perform and report an impactful analysis using partial least squares: Guidelines for confirmatory and explanatory IS research. Information & Management, 57(2), 103168.
Carillo, K., Scornavacca, E., & Za, S. (2017). The role of media dependency in predicting continuance intention to use ubiquitous media systems. Information & Management, 54(3), 317–335.
Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To be or not to be… human? Theorizing the role of human-like competencies in conversational artificial intelligence agents. Journal of Management Information Systems, 39(4), 969–1005.
Chen, X., Wu, M., Cheng, C., & Mou, J. (2024). Weighing user’s privacy calculus on personal information disclosure: the moderating effect of social media identification. Online Information Review.
Chen, Y., & Esmaeilzadeh, P. (2024). Generative AI in medical practice: in-depth exploration of privacy and security challenges. Journal of Medical Internet Research, 26, e53008.
Chen, Z., Gong, Y., Huang, R., & Lu, X. (2024). How does information encountering enhance purchase behavior? The mediating role of customer inspiration. Journal of Retailing and Consumer Services, 78, 103772.
Culnan, M. J., & Armstrong, P. K. (1999). Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science, 10(1), 104–115.
Ding, Y., & Najaf, M. (2024). Interactivity, humanness, and trust: a psychological approach to AI chatbot adoption in e-commerce. BMC Psychology, 12(1), 595.
Dwork, C., & Minow, M. (2022). Distrust of artificial intelligence: Sources & responses from computer science & law. Daedalus, 151(2), 309–321.
Hair Jr., J. F., Sarstedt, M., Hopkins, L., & Kuppelwieser, V. G. (2014). Partial least squares structural equation modeling (PLS-SEM). European Business Review, 26(2), 106–121.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Golda, A., Mekonen, K., Pandey, A., Singh, A., Hassija, V., Chamola, V., & Sikdar, B. (2024). Privacy and Security Concerns in Generative AI: A Comprehensive Survey. IEEE Access.
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy. IEEE Access.
Ha, Q.-A., Chen, J. V., Uy, H. U., & Capistrano, E. P. (2021). Exploring the Privacy Concerns in Using Intelligent Virtual Assistants under Perspectives of Information Sensitivity and Anthropomorphism. International Journal of Human–Computer Interaction, 37(6), 512–527.
Hagendorff, T. (2024). Mapping the Ethics of Generative AI: A Comprehensive Scoping Review. Minds and Machines, 34(4), 39.
Henrique, B. M., & Santos Jr., E. (2024). Trust in artificial intelligence: Literature review and main path analysis. Computers in Human Behavior: Artificial Humans, 2(1), 100043.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135.
Hsieh, S. H., & Lee, C. T. (2024). The AI humanness: how perceived personality builds trust and continuous usage intention. Journal of Product & Brand Management.
Hsu, C.-L., & Lin, J. C.-C. (2008). Acceptance of blog usage: The roles of technology acceptance, social influence and knowledge sharing motivation. Information & Management, 45(1), 65–74.
Huang, S., Lai, X., Ke, L., Li, Y., Wang, H., Zhao, X., Dai, X., & Wang, Y. (2024). AI Technology panic — is AI Dependence Bad for Mental Health? A Cross-Lagged Panel Model and the Mediating Roles of Motivations for AI Use Among Adolescents. Psychology Research and Behavior Management, 17, 1087–1102.
Hyun Baek, T., & Kim, M. (2023). Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telematics and Informatics, 83, 102030.
Kim, J., & Im, I. (2023). Anthropomorphic response: Understanding interactions between humans and artificial intelligence agents. Computers in Human Behavior, 139, 107512.
Kim, J. S., & Baek, T. H. (2024). Motivational determinants of continuance usage intention for generative AI: an investment model approach for ChatGPT users in the United States. Behaviour & Information Technology, 1–17.
Kim, J. S., Kim, M., & Baek, T. H. (2024). Enhancing User Experience With a Generative AI Chatbot. International Journal of Human–Computer Interaction, 1–13.
Krügel, S., Ostermaier, A., & Uhl, M. (2022). Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions. Philosophy & Technology, 35(1), 17.
Lim, S., & Shim, H. (2022). No secrets between the two of us: Privacy concerns over using AI agents. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 16(4).
Liu, S., & Wang, L. (2016). Influence of managerial control on performance in medical information system projects: the moderating role of organizational environment and team risks. International Journal of Project Management, 34(1), 102–116.
Maseeh, H. I., Jebarajakirthy, C., Pentecost, R., Arli, D., Weaven, S., & Ashaduzzaman, M. (2021). Privacy concerns in e-commerce: A multilevel meta-analysis. Psychology & Marketing, 38(10), 1779–1798.
Menard, P., & Bott, G. J. (2025). Artificial intelligence misuse and concern for information privacy: New construct validation and future directions. Information Systems Journal, 35(1), 322–367.
Patrizi, M., Šerić, M., & Vernuccio, M. (2024). Hey Google, I trust you! The consequences of brand anthropomorphism in voice-based artificial intelligence contexts. Journal of Retailing and Consumer Services, 77, 103659.
Piller, F. T., Srour, M., & Marion, T. J. (2024). Generative AI, Innovation, and Trust. The Journal of Applied Behavioral Science, 60(4), 613–622.
Rohmah, N. N. M., & Rahmawati, E. (2023). Interpretation of Media System Dependency Theory on Financial Technology. Jurnal ASPIKOM, 8(1), 153–168.
Seberger, J. S. (2024). Stuck in Isabel’s Garden: Anthropomorphic Metaphors and the Oversimplification of Digital Privacy. Proc. ACM Hum.-Comput. Interact., 8(CSCW2), Article 408.
Shahzad, M. F., Xu, S., & Asif, M. (2024). Factors affecting generative artificial intelligence, such as ChatGPT, use in higher education: An application of technology acceptance model. British Educational Research Journal.
Teo, Z. L., Quek, C. W. N., Wong, J. L. Y., & Ting, D. S. W. (2024). Cybersecurity in the generative artificial intelligence era. Asia-Pacific Journal of Ophthalmology, 13(4), 100091.
Tschopp, M., Gieselmann, M., & Sassenberg, K. (2023). Servant by default? How humans perceive their relationship with conversational AI. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 17(3).
van Es, K., & Nguyen, D. (2024). “Your friendly AI assistant”: the anthropomorphic self-representations of ChatGPT and its implications for imagining AI. AI & SOCIETY.
Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7–30.
Wang, T., Zhang, Y., Qi, S., Zhao, R., Xia, Z., & Weng, J. (2024). Security and Privacy on Generative Data in AIGC: A Survey. ACM Comput. Surv., 57(4), Article 82.
Westerman, D., Edwards, A. P., Edwards, C., Luo, Z., & Spence, P. R. (2020). I-It, I-Thou, I-Robot: The perceived humanness of AI in human-machine communication. Communication Studies, 71(3), 393–408.
Wilkowska, W., Otten, S., Maidhof, C., & Ziefle, M. (2024). Trust Conditions and Privacy Perceptions in the Acceptance of Ambient Technologies for Health-Related Purposes. International Journal of Human–Computer Interaction, 40(22), 7784–7799.
Zhou, T., & Zhang, C. (2024). Examining generative AI user addiction from a CAC perspective. Technology in Society, 78, 102653.