Article published in: Social Cues in Robot Interaction, Trust and Acceptance
Edited by Alessandra Rossi, Kheng Lee Koay, Silvia Moros, Patrick Holthaus and Marcus Scheunemann
[Interaction Studies 20:3] 2019
► pp. 530–560
Reshaping human intention in Human-Robot Interactions by robot moves
A comparative analysis of HMM and OOM methods
Published online: 18 November 2019
https://doi.org/10.1075/is.18068.dur
Abstract
This paper outlines the methodology and experiments associated with reshaping human intentions based on
robot movements within Human-Robot Interactions (HRIs). Although the estimation of human intentions is well studied in the
literature, reshaping intentions through robot-initiated interactions is a significant new branch of HRI research. In this
paper, we analyze how estimated human intentions can be deliberately changed through cooperation with mobile robots in real
Human-Robot environments. This paper proposes an intention-reshaping system that uses either Observable Operator Models
(OOMs) or Hidden Markov Models (HMMs) to estimate human intentions and to decide which moves a robot should perform to reshape
previously estimated human intentions into desired ones. At the low level, the system tracks the locations of all mobile
agents using cameras. We test our system on videos taken in a real HRI environment that was developed as our experimental
setup. The results show that OOMs are faster than HMMs and that both models give correct decisions for the test sequences.
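Both model families score an observation sequence and can therefore be used to pick the most likely intention model. As a rough illustration only (not the authors' implementation; the toy transition, emission, and initial-state parameters below are assumptions), the following sketch computes a sequence likelihood with an HMM via the forward algorithm and with an OOM via a product of observable operators. The OOM here is the one obtained from the HMM by setting each operator to diag(B[:, o]) Aᵀ, which matches the HMM exactly when the initial distribution is stationary:

```python
import numpy as np

def hmm_likelihood(pi, A, B, obs):
    """P(obs | HMM) via the forward algorithm.
    pi: initial state distribution, A: transition matrix (row-stochastic),
    B[i, o]: probability of emitting symbol o from state i."""
    alpha = pi * B[:, obs[0]]          # alpha_1(j) = pi_j * B_j(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # alpha_{t+1} = (alpha_t A) ∘ B[:, o]
    return float(alpha.sum())

def oom_likelihood(w0, taus, sigma, obs):
    """P(obs | OOM) = sigma · tau_{o_T} · ... · tau_{o_1} · w0."""
    w = w0
    for o in obs:                      # apply observable operators in order
        w = taus[o] @ w
    return float(sigma @ w)

# Toy two-state, two-symbol model (illustrative numbers, not from the paper).
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # hidden-state transitions
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
pi = np.array([4/7, 3/7])                # stationary distribution of A

# Equivalent OOM built from the HMM parameters.
taus = {o: np.diag(B[:, o]) @ A.T for o in (0, 1)}
sigma = np.ones(2)

obs = [0, 1, 0, 0]
p_hmm = hmm_likelihood(pi, A, B, obs)
p_oom = oom_likelihood(pi, taus, sigma, obs)
```

For intention classification, one such model would be trained per intention class and the class with the highest sequence likelihood selected; the OOM evaluation is a plain chain of matrix-vector products, which is one reason OOMs can be evaluated quickly.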
Article outline
- 1. Introduction
- 2. Related works
- 3. Application scenario and experimental setup
- Experimental application scenario
- 4. Proposed methodology
- 4.1 Extraction of intention-related features from human actions
- 4.1.1 Tracking human and robots
- 4.2 Intention estimation
- 4.2.1 Application of Hidden Markov Models (HMM)
- 4.2.2 Application of Observable Operator Model (OOM)
- 4.3 Intention reshaping
- 5. Results
- 5.1 Results of intention reshaping
- 5.2 Comparison results of methods for intention estimation
- 6. Conclusion and discussion
