Article published In: Epigenetic robotics
Edited by Giorgio Metta and Luc Berthouze
[Interaction Studies 7:2] 2006
► pp. 171–196
Reinforcing robot perception of multi-modal events through repetition and redundancy
Published online: 29 June 2006
https://doi.org/10.1075/is.7.2.05fit
For a robot to be capable of development it must be able to explore its environment and learn from its experiences. It must find (or create) opportunities to experience the unfamiliar in ways that reveal properties valid beyond the immediate context. In this paper, we develop a novel method for using the rhythm of everyday actions as a basis for identifying the characteristic appearance and sounds associated with objects, people, and the robot itself. Our approach is to identify and segment groups of signals in individual modalities (sight, hearing, and proprioception) based on their rhythmic variation, then to identify and bind causally-related groups of signals across different modalities. By including proprioception as a modality, this cross-modal binding method applies to the robot itself, and we report a series of experiments in which the robot learns about the characteristics of its own body.
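The binding step described in the abstract can be illustrated with a minimal sketch: estimate the dominant period of each modality's signal (here via an FFT peak), then pair signals from different modalities whose periods agree within a tolerance. The function names, the FFT-peak period estimator, and the tolerance threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dominant_period(signal, fs):
    """Estimate the dominant period (seconds) of a 1-D signal from the
    peak of its magnitude spectrum, excluding the DC component.
    (Illustrative stand-in for the paper's rhythm-detection stage.)"""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak = np.argmax(spectrum[1:]) + 1  # skip the DC bin
    return 1.0 / freqs[peak]

def bind_by_rhythm(modalities, fs, tol=0.1):
    """Pair signals from different modalities whose dominant periods
    agree within relative tolerance `tol` (hypothetical binding rule)."""
    periods = {name: dominant_period(sig, fs) for name, sig in modalities.items()}
    names = list(periods)
    bound = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = periods[names[i]], periods[names[j]]
            if abs(a - b) / max(a, b) < tol:
                bound.append((names[i], names[j]))
    return periods, bound

# Demo: a 2 Hz "shaking" motion that is both seen and heard,
# plus an unrelated 5 Hz source that should not be bound to it.
fs = 100.0  # sampling rate in Hz
t = np.arange(0, 4, 1.0 / fs)
vision = np.sin(2 * np.pi * 2.0 * t)
sound = np.sin(2 * np.pi * 2.0 * t + 0.3)   # same rhythm, phase-shifted
other = np.sin(2 * np.pi * 5.0 * t)

periods, bound = bind_by_rhythm(
    {"vision": vision, "sound": sound, "other": other}, fs)
print(bound)  # → [('vision', 'sound')]
```

Only the vision and sound channels share a roughly 0.5 s period, so they are bound together while the unrelated 5 Hz source is left out, mirroring the abstract's idea of binding causally-related signal groups across modalities.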
Keywords: sensor fusion, humanoid robotics, body schema, object recognition
Cited by (7)
Zeng, Hui & Jiaqi Luo
Zambelli, Martina, Antoine Cully & Yiannis Demiris
Zhou, Xuefeng, Hongmin Wu, Juan Rojas, Zhihao Xu & Shuai Li
Park, Daehyung, Zackory Erickson, Tapomayukh Bhattacharjee & Charles C. Kemp
Yonekura, Kenta, Chyon Hae Kim, Kazuhiro Nakadai, Hiroshi Tsujino & Kazuhito Yokoi
Arsénio, Artur, Hugo Serra, Rui Francisco, Fernando Nabais, João Andrade & Eduardo Serrano
This list is based on CrossRef data as of 30 March 2026. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers. Any errors therein should be reported to them.
