In:Translation in Transition: Between cognition, computing and technology
Edited by Arnt Lykke Jakobsen and Bartolomé Mesa-Lao
[Benjamins Translation Library 133] 2017
pp. 207–233
Chapter 8. Translation technology and learner performance
Professionally-oriented translation quality assessment with three translation technologies
Published online: 30 September 2017
https://doi.org/10.1075/btl.133.08mor
This chapter examines how a three-dimensional translation quality assessment grid (based on error type, effect and criticality) can be used to assess student translation performance with three different tools (standalone TM system, speech recognition and post-edited machine translation). The study was professionally oriented, using a technical English-language source text, short deadlines for completion of each translation, and professional quality criteria. Group and individual performance in the translation of five 500-word extracts was assessed for quality and efficiency, with and without translation tools, using our assessment grid. The factors affecting group and individual performance, and possible correlations between tool and performance, were studied. The potential usefulness of the grid as a fine-grained training and professional assessment tool is discussed.
Article outline
- 1. Introduction
- 2. Related research
- 3. Experimental design
- 4. The TRASILT three-dimensional, functional assessment grid
- 4.1 Development
- 4.2 Principles
- 4.3 Grid dimensions
- 4.3.1 Error typology
- 4.3.2 Effect typology
- 4.3.3 Degree of criticality
- 4.3.4 Adjustments
- 5. Results and discussion
- 5.1 Overall and individual results in relation to benchmark performance
- 5.1.1 Overall results per method
- 5.1.2 Individual student performance
- 5.2 Factors potentially affecting student performance
- 5.2.1 Technical and organisational factors
- 5.2.2 Attitudes towards the translation technologies used
- 5.3 Impact of translation tool or method on student performance, according to error typology and effect
- 6. General conclusion
References
Bowker, Lynne, and Melissa Ehgoetz. 2007. “Exploring User Acceptance of Machine Translation Output: A Recipient Evaluation.” In Across Boundaries: International Perspectives on Translation Studies, ed. by Dorothy Kenny, and Kyongjoo Ryou, 209–224. Cambridge: Cambridge Scholars Publishing.
Callison-Burch, Chris, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. “(Meta-) Evaluation of Machine Translation.” In Proceedings of the Second Workshop on Statistical Machine Translation, ed. by the Association for Computational Linguistics, 136–158. Prague, Czech Republic.
Chall, Jeanne S., and Edgar Dale. 2000. Readability Revisited: The New Dale-Chall Readability Formula. Brookline, MA: Brookline Books.
Daems, Joke, Lieve Macken, and Sonia Vandepitte. 2013. “Quality as the sum of its parts: A two-step approach for the identification of translation problems and translation quality assessment for HT and MT+PE.” In MT Summit XIV Workshop on Post-editing Technology and Practice, Proceedings, ed. by Sharon O’Brien, Michel Simard, and Lucia Specia, 63–71. European Association for Machine Translation.
Doherty, Stephen, and Sharon O’Brien. 2014. “Assessing the Usability of Raw Machine Translated Output: A User-Centered Study Using Eye Tracking.” International Journal of Human-Computer Interaction 30 (1): 40–51.
Dragsted, Barbara, Inger M. Mees, and Inge Gorm Hansen. 2011. “Speaking your translation: students’ first encounter with speech recognition technology.” The International Journal for Translation & Interpreting Research 3 (1): 1–43.
François, Thomas, Laëtitia Brouwers, Hubert Naets, and Cédrick Fairon. 2014. “AMESURE: une plateforme de lisibilité pour les textes administratifs” [AMESURE: a readability platform for administrative texts]. In Actes de la 21ème conférence sur le Traitement Automatique des Langues Naturelles (TALN).
Kincaid, J. Peter, Robert P. Fishburne, Jr., Richard L. Rogers, and Brad S. Chissom. 1975. “Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel.” Research Branch Report 8–75. Millington, TN: Naval Technical Training Command.
Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. “Moses: Open Source Toolkit for Statistical Machine Translation.” In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ed. by the Association for Computational Linguistics, 177–180.
Nord, Christiane. 1997. Translating As A Purposeful Activity: Functionalist Approaches Explained. Manchester: St. Jerome.
O'Brien, Sharon. 2012. “Towards a Dynamic Quality Evaluation Model for Translation.” Journal Of Specialised Translation 17: 55–77.
Reiss, Katharina. 1981. “Type, Kind and Individuality of Text: Decision Making in Translation”, translated by Susan Kitron. Poetics Today 2 (4): 121–131.
Roux, Franck-Emmanuel, Jean-Baptiste Durand, Emilie Réhault, Samuel Planton, Louisa Draper, and Jean-François Démonet. 2013. “The neural basis for writing from dictation in the temporoparietal cortex.” Cortex 50: 64–75.
Toudic, Daniel, Katell Hernandez Morin, Fabienne Moreau, Franck Barbin, and Gaëlle Phuez-Fabris. 2014. “Du contexte didactique aux pratiques professionnelles: proposition d’une grille multicritères pour l’évaluation de la qualité en traduction spécialisée.” ILCEA 19.
Toudic, Daniel, Katell Hernandez Morin, and Fabienne Moreau. 2016. “Impact de deux approches technologiques sur un panel d’apprentis traducteurs: aide ou obstacle sur le chemin du sens?” In Actes du colloque Tralogy II - Anticiper les technologies pour la traduction. [URL].
Toury, Gideon. 1995. Descriptive Translation Studies and Beyond. Amsterdam: John Benjamins.
Vermeer, Hans J. 1979. “Vom ‘richtigen’ Übersetzen.” Mitteilungsblatt für Dolmetscher und Übersetzer 25 (4): 2–8.