In: Eye Tracking and Multidisciplinary Studies on Translation
Edited by Callum Walker and Federico M. Federici
[Benjamins Translation Library 143] 2018
pp. 185–201
Chapter 9. Visual attention distribution in intralingual respeaking
An eye-tracking study
Published online: 16 October 2018
https://doi.org/10.1075/btl.143.09sza
Abstract
Respeaking is a method of producing real-time subtitles for live television programmes, enabling access to the media for people who are deaf or hard of hearing, or who support their viewing with subtitles. Respeaking requires a set of skills akin to both interpreting and subtitling, including multitasking and the management of concurrent sources of incoming information. In this chapter we present the results of an eye-tracking study on the visual attention of respeakers during an intralingual respeaking task (Polish to Polish). We tested 57 people while they were respeaking a 5-minute news programme. Participants also underwent a short proof-reading task. There were three groups of participants: interpreters, translators and a control group of people with no interpreting/translation experience. We examined the number of fixations and the mean fixation duration on the major screen areas: picture, subtitles, dictation area and subtitle panel. We found that translators with experience in subtitling managed their visual attention most efficiently: they fixated more often on key screen areas, and their fixations were longer than those of the other groups. They also achieved the highest score on the proof-reading task, which suggests they could work in respeaking as editors.
Article outline
- 1. Introduction
- 1.1 Respeaking
- 1.2 Visual attention distribution
- 1.3 Eye tracking
- 2. Method
- 2.1 Participants
- 2.2 Materials
- 2.2.1 Respeaking test
- 2.2.2 Proof-reading test
- 2.2.3 Procedure
- 2.3 Eye-tracking data analysis
- 3. Results and discussion
- 3.1 Mean fixation duration
- 3.2 The number of fixations
- 3.3 Proof-reading test
- 4. Conclusions
- Acknowledgements
- References
