Article published in: Interpreting
Vol. 23:2 (2021), pp. 168–191
The impact of visible lip movements on silent pauses in simultaneous interpreting
Published online: 29 March 2021
https://doi.org/10.1075/intp.00061.gie
Abstract
Simultaneous interpreting requires interpreters to listen to a source text while producing the target text in a
second language. In addition, the interpreter needs to process various types of visual input, which may further increase the
already high cognitive load. A study with 14 students of interpreting was conducted to investigate the impact of a speaker’s
visible lip movements on cognitive load in simultaneous interpreting by analysing the duration of silent pauses in the target
texts. Background noise masking the source speech was introduced as a control condition for cognitive load. Silent pause durations
were shorter when interpreters saw the speaker’s lip movements, which indicates that interpreters benefitted from visual input.
Furthermore, silent pause durations were longer with noise, which suggests that comparative silent pause durations can indicate
changes in cognitive load.
Article outline
- 1. Introduction
- 2. Theoretical background
- 2.1 Visual input in simultaneous interpreting
- 2.2 Manipulating speech perception with background noise
- 2.3 Disfluencies as an indicator of cognitive load
- 3. Empirical study
- 3.1 Participants
- 3.2 Material
- 3.3 Procedure
- 3.4 Data analysis
- 3.4.1 Subjective reports
- 3.4.2 Silent pause durations
- 3.5 Results
- 3.5.1 Subjective reports
- 3.5.2 Silent pause durations
- 4. Discussion
- 4.1 Limitations of the study
- 4.2 Potential and limitations of silent pauses in the target text as a cognitive load indicator
- 5. Conclusion
- Acknowledgements
- Notes
