Article published in: Learner Corpus Research for Pedagogical Purposes
Edited by Sandra Götz and Sylviane Granger
[International Journal of Learner Corpus Research 10:1] 2024
► pp. 183–215
Expressions of epistemic stance in computer-mediated L2 speaking assessment
A corpus-based approach
Available under the Creative Commons Attribution (CC BY) 4.0 license.
For any use beyond this license, please contact the publisher at rights@benjamins.nl.
Open Access publication of this article was funded through a Transformative Agreement with Lancaster University.
Published online: 28 June 2024
https://doi.org/10.1075/ijlcr.00044.gab
Abstract
Learner and L2 user corpora are increasingly valued in language testing and assessment as they can inform test
design, revision, and validation. This paper illustrates the benefits of using an L2 corpus to explore patterns of epistemic
stance marking in computer-mediated speaking tests with no live human interlocutor. Drawing on the British
Council-Lancaster Aptis Corpus – comprising over 630,000 words of L2 speech – we explored the frequency of epistemic
stance markers (adverbial, adjectival, and verbal) across proficiency levels and speaking task types. The analysis revealed that
epistemic stance was prevalent in test-taker discourse and that frequency was influenced by L2 proficiency and task type. The
findings demonstrate that computer-mediated speaking tests can elicit expressions of epistemic stance in a comparable way to tests
which involve human-human interaction. Implications are drawn for examiner training, test preparation, and an enriched
understanding of the elements of pragmatic competence that can be elicited in computer-mediated speaking assessment.
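The kind of corpus-based frequency analysis described in the abstract — counting epistemic stance markers and normalizing per 1,000 words so that groups of different sizes can be compared — can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the marker inventory, function names, and toy transcripts are invented for demonstration.

```python
# Illustrative sketch of normalized ESM frequency counting.
# The marker set below is a hypothetical fragment; the study's full
# inventory of adverbial, adjectival, and verbal markers is not reproduced.
ESM_MARKERS = {"maybe", "probably", "perhaps", "think", "believe", "sure"}

def esm_per_thousand(tokens):
    """Return epistemic stance marker frequency per 1,000 tokens."""
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.lower() in ESM_MARKERS)
    return hits / len(tokens) * 1000

# Toy transcripts grouped by (invented) proficiency band.
transcripts = {
    "B1": "maybe it is good I think it is probably fine".split(),
    "C1": "it is arguably efficient and I believe it works".split(),
}
for level, tokens in transcripts.items():
    print(level, round(esm_per_thousand(tokens), 1))
```

In a real analysis the counts would come from a tagged corpus queried in a concordancer such as #LancsBox or Sketch Engine (both cited below), with manual disambiguation of polysemous items (e.g. epistemic vs. non-epistemic "think"); the per-1,000-word normalization is the standard way to compare frequencies across proficiency levels and task types of unequal length.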
Article outline
- 1.Introduction
- 2.Applying a corpus-based approach to explore computer-mediated L2 speaking assessment
- 3.Stance-taking and epistemic stance
- 4.Exploring epistemic stance in the Aptis Speaking Test
- 5.Methodology
- 5.1Research questions
- 5.2Data
- 5.3Analytic procedure
- 6.Results
- 6.1Frequency and distribution of ESMs in the Aptis Corpus (RQ1)
- 6.2The interaction between the task type and proficiency in the frequency of the epistemic stance markers
- 7.Discussion
- 7.1Discussion of findings
- 7.2Implications and applications for language testing and assessment
- 7.3Challenges and further research
- 8.Conclusion
- Acknowledgements
- Notes
- References
References
Aijmer, K. (2004). Pragmatic markers in spoken interlanguage. Worlds of Words. A Tribute to Arne Zettersten. Special Issue of Nordic Journal of English Studies, 3(1), 173–190.
Aijmer, K. (2011). Well I'm not sure I think… The use of well by non-native speakers. International Journal of Corpus Linguistics, 16(2), 231–254.
Bae, J., & Lee, Y. S. (2011). The validation of parallel test forms: 'Mountain' and 'beach' picture series for assessment of language skills. Language Testing, 28(2), 155–177.
Barker, F., Salamoura, A., & Saville, N. (2015). Learner corpora and language testing. In S. Granger, G. Gilquin, & F. Meunier (Eds.), The Cambridge Handbook of Learner Corpus Research (pp. 511–533). Cambridge University Press.
Biber, D., Johansson, S., Leech, G., Conrad, S., & Finegan, E. (1999). Longman grammar of spoken and written English. Longman.
Biber, D. (2006). University language: A corpus-based study of spoken and written registers. John Benjamins.
Brezina, V. (2012). Epistemic markers in university advisory sessions: Towards a local grammar of epistemicity. Unpublished PhD dissertation. University of Auckland.
Brezina, V., Weill-Tessier, P., & McEnery, A. (2020). #LancsBox (Version 5) [software]. [URL]
British Council (2023). Aptis. Guide for teachers. [electronic resource]. [URL]
Brooks, L., & Swain, M. (2014). Contextualizing performances: Comparing performances during TOEFL iBT™ and real-life academic speaking activities. Language Assessment Quarterly, 11(4), 353–373.
Callies, M., & Götz, S. (2015). Learner corpora in language testing and assessment: Prospects and challenges. In M. Callies, & S. Götz (Eds.), Learner corpora in language testing and assessment (pp. 1–10). John Benjamins.
Chapelle, C. A., & Lee, H. (2021). Conceptions of validity. In G. Fulcher, & L. Harding (Eds.), The Routledge Handbook of language testing (2nd ed., pp. 17–31). Routledge.
Crosthwaite, P. R., & Raquel, M. (2019). Validating an L2 academic group oral assessment: Insights from a spoken learner corpus. Language Assessment Quarterly, 16(1), 39–63.
Cushing, S. T. (2017). Corpus linguistics in language testing research. Language Testing, 34(4), 441–449.
Du Bois, J. W. (2007). The stance triangle. In R. Englebretson (Ed.), Stancetaking in discourse: Subjectivity, evaluation, interaction (pp. 139–182). John Benjamins.
Fox Tree, J. E. (2010). Discourse markers across speakers and settings. Language and Linguistics Compass, 4(5), 269–281.
Fuller, J. M. (2003). The influence of speaker roles on discourse marker use. Journal of Pragmatics, 35(1), 23–45.
Gablasova, D. (2021). Corpora for second language assessments. In P. Winke, & T. Brunfaut (Eds.), The Routledge handbook of Second Language Acquisition and Language Testing (pp. 45–53). Routledge.
Gablasova, D., Brezina, V., Harding, L., & Dunlea, J. (2020). The British Council – Lancaster Aptis Speaking corpus [electronic dataset]. Lancaster University.
Gablasova, D., Brezina, V., McEnery, T., & Boyd, E. (2017). Epistemic stance in spoken L2 English: The effect of task and speaker style. Applied Linguistics, 38(5), 613–637.
Gablasova, D., & Brezina, V. (2015). Does speaker role affect the choice of epistemic adverbials in L2 speech? Evidence from the Trinity Lancaster Corpus. In J. Romero-Trillo (Ed.), Yearbook of corpus linguistics and pragmatics 2015 (pp. 117–136). Springer.
Gablasova, D., Harding, L., Brezina, V., & Dunlea, J. (2023, July). Talking to an imagined interlocutor: Interactional and interpersonal discourse features in computer-mediated semi-direct speaking assessment. Paper presented at the CL2023 conference, Lancaster University.
Galaczi, E. D. (2014). Interactional competence across proficiency levels: How do learners manage interaction in paired speaking tests? Applied Linguistics, 35(5), 553–574.
Granger, S., Dupont, M., Meunier, F., Naets, H., & Paquot, M. (2020). The International Corpus of Learner English (Version 3). Presses universitaires de Louvain.
Gray, B., & Biber, D. (2012). Current conceptions of stance. In K. Hyland, & C. Guinda (Eds.), Stance and voice in written academic genres (pp. 15–33). Palgrave Macmillan.
Gyllstad, H., & Snoder, P. (2021). Exploring learner corpus data for language testing and assessment purposes: The case of verb + noun collocations. In S. Granger (Ed.), Perspectives on the L2 phrasicon. The view from learner corpora (pp. 49–71). Multilingual Matters.
Hasselgren, A. (1994). Lexical teddy bears and advanced learners: A study into the ways Norwegian students cope with English vocabulary. International Journal of Applied Linguistics, 4(2), 237–258.
He, L., & Dai, Y. (2006). A corpus-based investigation into the validity of the CET-SET group discussion. Language Testing, 23(3), 370–401.
Hunston, S., & Thompson, G. (Eds.) (2000). Evaluation in text: Authorial stance and the construction of discourse. Oxford University Press.
Isaacs, T. (2018). Fully automated speaking assessment: Changes to proficiency testing and the role of pronunciation. In O. Kang, R. I. Thomson, & J. Murphy (Eds.), The Routledge Handbook of English pronunciation (pp. 570–584). Routledge.
Iwashita, N., May, L., & Moore, P. (2017). Features of discourse and lexical richness at different performance levels in the APTIS speaking test (AR-G/2017/2). [URL]
Iwashita, N., May, L., & Moore, P. J. (2021). Operationalising interactional competence in computer-mediated speaking tests. In M. R. Salaberry, & A. R. Burch (Eds.), Assessing speaking in context: Expanding the construct and its applications (pp. 283–302). Multilingual Matters.
Johansen, S. H. (2020). Hedging in spoken conversations by Norwegian learners of English. Nordic Journal of Language Teaching and Learning, 8(2), 27–48.
Kärkkäinen, E. (2003). Epistemic stance in English conversation: A description of its interactional functions, with a focus on 'I think'. John Benjamins.
Kärkkäinen, E. (2006). Stance taking in conversation: From subjectivity to intersubjectivity. Text & Talk, 26(6), 699–731.
Kärkkäinen, E. (2010). Position and scope of epistemic phrases in planned and unplanned American English. In G. Kaltenböck, W. Mihatsch, & S. Schneider (Eds.), New approaches to hedging (pp. 203–236). Brill.
Kilgarriff, A., Baisa, V., Bušta, J., Jakubíček, M., Kovář, V., Michelfeit, J., Rychlý, P., & Suchomel, V. (2014). The Sketch Engine: Ten years on. Lexicography, 1(1), 7–36.
Kizu, M., Gyogi, E., & Dougherty, P. (2022). Epistemic stance in L2 English discourse: The development of pragmatic strategies in study abroad. Applied Pragmatics, 4(1), 33–62.
LaFlair, G. T., & Staples, S. (2017). Using corpus linguistics to examine the extrapolation inference in the validity argument for a high-stakes speaking assessment. Language Testing, 34(4), 451–475.
Lam, P. W. (2009). The effect of text type on the use of so as a discourse particle. Discourse Studies, 11(3), 353–372.
Larsson, T., Callies, M., Hasselgård, H., Laso, N. J., Van Vuuren, S., Verdaguer, I., & Paquot, M. (2020). Adverb placement in EFL academic writing: Going beyond syntactic transfer. International Journal of Corpus Linguistics, 25(2), 156–185.
Leedham, M., & Cai, G. (2013). Besides… on the other hand: Using a corpus approach to explore the influence of teaching materials on Chinese students' use of linking adverbials. Journal of Second Language Writing, 22(4), 374–389.
Liao, S. (2009). Variation in the use of discourse markers by Chinese teaching assistants in the US. Journal of Pragmatics, 41(7), 1313–1328.
Nakatsuhara, F., May, L., Inoue, C., Willcox-Ficzere, E., Westbrook, C., & Spiby, R. (2021). Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests. British Council.
Neary-Sundquist, C. (2013). Task type effects on pragmatic marker use by learners at varying proficiency levels. L2 Journal, 5(2), 1–21.
Ockey, G. J. (2009). Developments and challenges in the use of computer-based testing for assessing second language ability. The Modern Language Journal, 93, 836–847.
Ockey, G. J., & Chukharev-Hudilainen, E. (2021). Human versus computer partner in the paired oral discussion test. Applied Linguistics, 42(5), 924–944.
O'Loughlin, K. J. (2001). The equivalence of direct and semi-direct speaking tests. Cambridge University Press.
O'Sullivan, B. (2015). Linking the Aptis reporting scales to the CEFR. Aptis Technical Report (TR/2015/003). British Council. [URL]
O'Sullivan, B., Dunlea, J., Spiby, R., Westbrook, C., & Dunn, K. (2020). Aptis General technical manual [Version 2.2]. British Council.
O'Sullivan, B., Weir, C., & Saville, N. (2002). Using observation checklists to validate speaking-test tasks. Language Testing, 19(1), 33–56.
Quaid, E., & Barrett, A. (2020). Towards the future of computer-assisted language testing: Assessing spoken performance through semi-direct tests. In B. Zou, & M. Thomas (Eds.), Recent developments in technology-enhanced and computer-assisted language learning (pp. 208–235). IGI Global.
Roever, C., & Ikeda, N. (2022). What scores from monologic speaking tests can (not) tell us about interactional competence. Language Testing, 39(1), 7–29.
Roever, C., & Kasper, G. (2018). Speaking in turns and sequences: Interactional competence as a target construct in testing speaking. Language Testing, 35(3), 331–355.
Roever, C., & McNamara, T. (2006). Language testing: The social dimension. International Journal of Applied Linguistics, 16(2), 242–258.
Römer, U. (2017). Language assessment and the inseparability of lexis and grammar: Focus on the construct of speaking. Language Testing, 34(4), 477–492.
Salsbury, T., & Bardovi-Harlig, K. (2000). Oppositional talk and the acquisition of modality in L2 English. In B. Swierzbin, F. Morris, M. E. Anderson, C. A. Klee, & E. Tarone (Eds.), Social and cognitive factors in second language acquisition (pp. 57–76). Cascadilla Press.
Shohamy, E. (1994). The validity of direct versus semi-direct oral tests. Language Testing, 11(2), 99–123.
Staples, S., Biber, D., & Reppen, R. (2018). Using corpus-based register analysis to explore the authenticity of high-stakes language exams: A register comparison of TOEFL iBT and disciplinary writing tasks. The Modern Language Journal, 102(2), 310–332.
Timpe-Laughlin, V., Wain, J., & Schmidgall, J. (2015). Defining and operationalizing the construct of pragmatic competence: Review and recommendations. ETS Research Report Series, 2015(1), 1–43.