Article published in: Audiovisual Translation: Theoretical and methodological challenges
Edited by Yves Gambier and Sara Ramos Pinto
[Target 28:2] 2016
pp. 206–221
Machine translation quality in an audiovisual context
Published online: 9 August 2016
https://doi.org/10.1075/target.28.2.03bur
Abstract
The volume of Audiovisual Translation (AVT) is increasing to meet the rising demand for data that needs to be accessible around the world. Machine Translation (MT) is one of the most innovative technologies to be deployed in the field of translation, but it is still too early to predict how it can support the creativity and productivity of professional translators in the future. Currently, MT is more widely used in (non-AV) text translation than in AVT. In this article, we discuss MT technology and demonstrate why its use in AVT scenarios is particularly challenging. We also present some potentially useful methods and tools for measuring MT quality that have been developed primarily for text translation. The ultimate objective is to bridge the gap between the tech-savvy AVT community, on the one hand, and researchers and developers in the field of high-quality MT, on the other.
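The automatic metrics discussed in the article (e.g., BLEU and METEOR; see Papineni et al. 2002 and Banerjee and Lavie 2005 in the references) score MT output by its n-gram overlap with a human reference translation. As a rough illustration of how such a metric works, the following is a minimal, unsmoothed sentence-level BLEU sketch; a real evaluation would use an established implementation with smoothing and support for multiple references.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Unsmoothed sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty.

    A simplified sketch for illustration only: production implementations
    add smoothing, tokenization rules, and multiple-reference support.
    """
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # Clip each candidate n-gram count by its count in the reference,
        # so repeating a correct word cannot inflate the score.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        if total == 0 or overlap == 0:
            return 0.0  # unsmoothed BLEU is zero if any precision is zero
        precisions.append(overlap / total)
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Such string-overlap scores are cheap to compute but, as the article argues, correlate only loosely with the quality judgments of professional translators, which motivates analytic frameworks like MQM.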
Article outline
- 1. Introduction
- 2. Background: Statistical Machine Translation in a nutshell
- 2.1 The challenge of assessing MT Quality
- 2.2 What MT does best and why
- 3. Problems impacting the automatic translation of subtitles
- 3.1 Domain and genre
- 3.2 Lack of visual context
- 3.3 Oral style
- 3.4 Lack of context
- 4. Measuring Machine Translation quality
- 4.1 Quality evaluation in MT Research
- 4.2 Multidimensional Quality Metrics (MQM)
- 5. Summary
- Acknowledgments
- Notes
References
Avramidis, Eleftherios, Aljoscha Burchardt, Christian Federmann, Maja Popović, Cindy Tscherwinka, and David Vilar. 2012. “Involving Language Professionals in the Evaluation of Machine Translation.” In Proceedings of LREC 2012, 1127–1130. [URL]. Accessed December 12, 2015.
Banerjee, Satanjeev, and Alon Lavie. 2005. “METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments.” In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, ed. by Jade Goldstein, Alon Lavie, Chin-Yew Lin, and Clare Voss, 65–72. Ann Arbor, MI: Association for Computational Linguistics.
Bywood, Lindsay, Martin Volk, Mark Fishel, and Panayota Georgakopoulou. 2013. “Parallel Subtitle Corpora and their Applications in Machine Translation and Translatology.” In Corpus Linguistics and AVT: in Search of an Integrated Approach, special issue of Perspectives: Studies in Translatology 21 (4): 1–16.
De Sousa, Sheila C. M., Wilker Aziz, and Lucia Specia. 2011. “Assessing the Post-Editing Effort for Automatic and Semi-Automatic Translations of DVD Subtitles.” In Proceedings of the International Conference on Recent Advances in Natural Language Processing, ed. by Galia Angelova, Kalina Bontcheva, Ruslan Mitkov, and Nikolai Nikolov, 97–103. [URL]. Accessed December 22, 2015.
Díaz-Cintas, Jorge, and Aline Remael. 2007. Audiovisual Translation, Subtitling. Manchester: St. Jerome.
Etchegoyhen, Thierry, Lindsay Bywood, Mark Fishel, Panayota Georgakopoulou, Jie Jiang, Gerard van Loenhout, Arantza del Pozo, Mirjam Sepesy Maucec, Anja Turner, and Martin Volk. 2014. “Machine Translation for Subtitling: A Large-Scale Evaluation.” In Proceedings of LREC 2014, 46–53. [URL]. Accessed December 22, 2015.
Lommel, Arle, Aljoscha Burchardt, and Hans Uszkoreit. 2014. “Multidimensional Quality Metrics (MQM): A Framework for Declaring and Describing Translation Quality Metrics.” Tradumàtica: tecnologies de la traducció 12: 455–463.
Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. “BLEU: A Method for Automatic Evaluation of Machine Translation.” In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318. [URL]. Accessed December 22, 2015.
Popović, Maja. 2011a. “Hjerson: An Open Source Tool for Automatic Error Classification of Machine Translation Output.” The Prague Bulletin of Mathematical Linguistics 96: 59–68.
Popović, Maja. 2011b. “Morphemes and POS Tags for N-gram Based Evaluation Metrics.” In Proceedings of the Sixth Workshop on Statistical Machine Translation, 104–107. [URL]. Accessed December 22, 2015.
Romero-Fresco, Pablo, and Juan Martínez Pérez. 2015. “Accuracy Rate in Live Subtitling – the NER Model.” In Audiovisual Translation in a Global Context: Mapping an Ever-changing Landscape, ed. by Jorge Díaz Cintas, and Rocío Baños Pinero, 28–50. London: Palgrave Macmillan. [URL](draft). Accessed November 4, 2015.
Rubin, Ann D. 1978. “A Theoretical Taxonomy of the Differences between Oral and Written Language.” Center for the Study of Reading Technical Report 35.
Shah, Kashif, Eleftherios Avramidis, Ergun Biçici, and Lucia Specia. 2013. “QuEst – Design, Implementation and Extensions of a Framework for Machine Translation Quality Estimation.” The Prague Bulletin of Mathematical Linguistics 100: 19–30.
Vilar, David, Jia Xu, Luis Fernando d’Haro, and Hermann Ney. 2006. “Error Analysis of Statistical Machine Translation Output.” In Proceedings of LREC 2006, 697–702. [URL]. Accessed December 22, 2015.
