Article published in: Interaction Studies
Vol. 22:1 (2021), pp. 55–80
What’s to bullying a bot?
Correlates between chatbot humanlikeness and abuse
Published online: 17 September 2021
https://doi.org/10.1075/is.20002.kei
Abstract
In human-chatbot interaction, users casually and regularly offend and abuse the chatbot they are interacting with. The current paper explores the relationship between chatbot humanlikeness on the one hand and sexual advances and verbal aggression by the user on the other. 283 conversations between the Cleverbot chatbot and its users were harvested and analysed. Our results showed higher counts of user verbal aggression and sexual comments towards Cleverbot when Cleverbot appeared more humanlike in its behaviour. Caution is warranted in interpreting the results, however, as no experimental manipulation was conducted and causality thus cannot be inferred. Nonetheless, the findings are relevant both for research on the abuse of conversational agents and for the development of effective approaches to discourage or prevent verbal aggression by chatbot users.
Article outline
- 1. Introduction
- 1.1 Literature
- 1.1.1 Incidence: Nonhuman conversational agents suffer more abuse than humans
- 1.1.2 Ethical implications
- 1.1.3 Need for a theoretical framework of chatbot abuse
- 1.2 Current study
- 2. Method
- 2.1 Procedure
- 2.2 Dataset
- 2.2.1 Data collection
- 2.3 Coding procedure
- 2.4 Variables
- 2.4.1 Coded variables
- Verbal aggression (Offence)
- Sexual references (SexRef)
- Nonsensical reply by Cleverbot (Nonsense)
- Insult by Cleverbot (Insult)
- Claiming to be (not) human (Humanity/Non-human user)
- Gender
- 2.4.2 Other variables
- Turn count
- Approximate Loebner test (Loebner)
- 3. Results
- 3.1 Preliminary analyses
- 3.1.1 Conversation descriptives
- 3.1.2 Agreement
- 3.2 Main analyses
- 3.2.1 Verbal aggression
- 3.2.2 Sexual comments
- 4. Discussion
- 4.1 Limitations
- 4.2 Conclusion
- Acknowledgements
- Notes
