Article published in: Applied Pragmatics, Vol. 7:1 (2025), pp. 56–81
Ontological realism as a validity criterion in second-language strategic competence assessment
Published online: 5 August 2025
https://doi.org/10.1075/ap.21010.oco
Abstract
Strategic competence, conceptualized as the ability to put semantic, grammatical, and pragmatic knowledge to use, is a key element in models of communicative language proficiency but remains a difficult construct to assess in language tests. In the oral proficiency interview (OPI), strategic competence is typically assessed through role-plays with a complication. Assessment of test-taker performance on the role-play is subjective and contingent on raters accurately identifying interactional evidence of strategic competence. Accordingly, validation of the strategic competence exhibited in role-plays has been mostly interpretive. To obtain evidential support for an interpretive argument that role-plays can indeed isolate and provide assessment evidence of strategic competence, this study applies the criterion of ontological realism. To that end, eleven samples of English-as-a-foreign-language OPI role-plays with a complication were judged by 52 untrained English native speakers. Evidence in support of the ontological validity of assessing strategic competence via role-plays is presented through analyses of the untrained raters’ judgments, augmented by quantitative analyses that identify sources of variation among the raters, including an additional post-study round of coding in which the notion of “success” in the role-plays was examined at a finer grain than dichotomous decisions allow.
Article outline
- 1. Introduction
- 2. Literature review
  - 2.1 Strategic competence
  - 2.2 The OPI role-play task
  - 2.3 Validity
  - 2.4 Current study
- 3. Method
  - 3.1 Participants
  - 3.2 Instrument
  - 3.3 Procedure
- 4. Results
  - 4.1 Logistic regression analyses
  - 4.2 Analysis of interaction
- 5. Discussion
- 6. Conclusion
