Article published in: Interpreting
Vol. 24:1 (2022), pp. 59–83
Assessing spoken-language interpreting
The method of comparative judgement
Published online: 1 October 2021
https://doi.org/10.1075/intp.00068.han
Abstract
In this study, we applied and evaluated a scoring method known as comparative judgement to assess spoken-language interpreting. This methodological exploration represents an extension of previous efforts to optimise scoring methods for assessing interpreting. Essentially, comparative judgement requires judges to compare two similar objects and make a binary decision about their relative qualities. To evaluate its reliability, validity and usefulness in the assessment of interpreting, we recruited two groups of judges (novice and experienced) to assess 66 two-way English/Chinese interpretations based on a computerised comparative judgement system. Our data analysis shows that the new method produced reliable and valid results across judge types and interpreting directions. However, the judges held polarised opinions about the method’s usefulness: while some considered it convenient, efficient and reliable, the opposite view was expressed by others. We discuss the results by providing an integrated analysis of the data collected, outline the perceived drawbacks and propose possible solutions to the drawbacks. We call for more evidence-based, substantive investigation into comparative judgement as a potentially useful method for assessing spoken-language interpreting in certain settings.
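The scoring method described above rests on the Bradley–Terry model (the standard statistical basis for comparative judgement): each judge's binary decision is treated as a "win" for one interpretation over the other, and a latent quality score is estimated for every interpretation from the full set of pairwise outcomes. The following is a minimal illustrative sketch, not the authors' computerised system; the items and judgement data are hypothetical, and the fitting routine is a simple minorisation–maximisation iteration.

```python
# Illustrative Bradley-Terry scaling of pairwise comparative judgements.
# Hypothetical data: each tuple (winner, loser) records one judge's binary
# decision that the first interpretation is better than the second.
# This sketches the standard model, not the system used in the study.
import math
from collections import defaultdict

judgements = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("B", "A"), ("A", "B"), ("C", "B"),
]

items = sorted({x for pair in judgements for x in pair})
wins = defaultdict(int)   # number of comparisons each item won
pairs = defaultdict(int)  # number of comparisons per unordered pair
for winner, loser in judgements:
    wins[winner] += 1
    pairs[frozenset((winner, loser))] += 1

# Minorisation-maximisation update for item strengths p_i:
#   p_i <- W_i / sum_j [ n_ij / (p_i + p_j) ]
p = {i: 1.0 for i in items}
for _ in range(200):
    new_p = {}
    for i in items:
        denom = sum(
            pairs[frozenset((i, j))] / (p[i] + p[j])
            for j in items
            if j != i and pairs[frozenset((i, j))] > 0
        )
        new_p[i] = wins[i] / denom if denom else p[i]
    total = sum(new_p.values())
    p = {i: v / total for i, v in new_p.items()}  # normalise strengths

# Comparative judgement typically reports quality on a logit scale.
scores = {i: math.log(p[i]) for i in items}
ranking = sorted(items, key=scores.get, reverse=True)
print(ranking)
```

In practice, reliability statistics (e.g. scale separation reliability) are then computed over these estimated scores; adaptive systems additionally choose which pair to present next so that each comparison is maximally informative.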
Article outline
- 1. Introduction
- 2. Comparative judgement
- 2.1 An introduction to comparative judgement
- 2.2 A comparative judgement approach to assessing spoken-language interpreting
- 2.3 Potential research gaps
- 3. Research questions
- 4. Method
- 4.1 Interpreting recordings
- 4.2 Participants
- 4.3 Online comparative judgement platform
- 4.4 Judge preparation and training
- 4.5 Procedures for comparative judgement
- 4.6 Post-hoc interview
- 4.7 Data analysis
- 5. Results
- 5.1 Reliability evidence
- 5.2 Validity evidence
- 5.3 Judges’ perceived usefulness
- 6. Discussion
- 6.1 Reliability
- 6.2 Validity
- 6.3 Perceived usefulness
- 6.4 Further analysis of potential drawbacks of comparative judgement
- 7. Conclusion
- Notes
Cited by eight other publications
Han, Chao & Yueqing Wang
2025. Conducting replication in translation and interpreting studies. Target. International Journal of Translation Studies 37:3, pp. 444 ff.
Sickinger, Rebecca, Tineke Brunfaut & John Pill
Wang, Jingwen & Ying Zheng
Guo, Wei, Xun Guo, Junkang Huang & Sha Tian
Thwaites, Peter, Charalambos Kollias & Magali Paquot
Thwaites, Peter & Magali Paquot
Lu, Xiaolei & Chao Han
2023. Automatic assessment of spoken-language interpreting based on machine-translation evaluation metrics. Interpreting. International Journal of Research and Practice in Interpreting 25:1, pp. 109 ff.
This list is based on CrossRef data as of 12 December 2025 and may not be complete. Sources presented here have been supplied by the respective publishers; any errors therein should be reported to them.
