Article published in: Interpreting 26:2 (2024), pp. 282–315
Edited by Franz Pöchhacker and Minhua Liu
The augmented interpreter
An exploratory study of the usability of augmented reality technology in interpreting
Published online: 3 October 2024
https://doi.org/10.1075/intp.00108.gie
Abstract
Computer-assisted interpreting (CAI) tools use speech recognition and machine translation to display numbers and
names on a screen or automatically suggest renditions for technical terms. One way to improve the usability of CAI tools may be to
use augmented reality (AR) technology, which allows information to be displayed wherever convenient. Instead of having to look
down at a tablet or a laptop, the interpreter can see the term or number projected directly into their field of vision, allowing
them to maintain their focus on the speaker and the audio input. In this study, we investigated the affordances of AR in
simultaneous interpreting. Nine professional conference interpreters each interpreted two technical talks: one with numerals,
proper nouns and suggestions for technical terms automatically shown on an AR display and the other with an MS Word glossary on a
laptop. The results suggest a potential use case for AR technology in interpreting but highlight practical
limitations: discomfort in wearing the AR equipment, a lack of ergonomic and intuitive interaction with virtual
objects, and distraction and interference with the interpreting process caused by the additional visual input.
Article outline
- Introduction
- Computer-assisted interpreting
- CAI with AR technology
- Method
- Participants
- Source texts and terminology
- Equipment and application
- Procedure
- Data analysis
- Participants’ notes
- Rendition of critical items
- Interviews
- Results
- Familiarity with the speech topics and terminology management software
- Preparation
- Rendering of critical items
- Preferred position of the term box and technical issues
- Experiences using the AR technology
- Discussion
- Conclusion
- Acknowledgements
- Notes
- References
