Seeing first person changes gesture but saying first person does not
Published online: 19 October 2018
https://doi.org/10.1075/gest.00014.par
Abstract
Events with a motor action component (e.g., handling an object) tend to evoke gestures from the point of view of a character (character viewpoint, or CVPT), while events with a path component (moving through space) tend to evoke gestures from the point of view of an observer (observer viewpoint, or OVPT). Events that combine both components (e.g., rowing a boat across a lake) seem to evoke both types of gesture, but it is unclear why narrators use one or the other. We carried out two manipulations to test whether gestural viewpoint can be shifted. Participants read a series of stories and retold them in two conditions. In the image condition, story sentences were presented with images from either the actor’s perspective (actor version) or the observer’s perspective (observer version). In the linguistic condition, the same sentences were presented in either the second person (you…) or the third person point of view (he/she…). The second person led participants to use the first person (I) in retelling. Gestures produced during retelling were coded as CVPT or OVPT. Participants produced significantly more CVPT gestures after seeing images from the point of view of an actor, but the linguistic manipulation did not affect viewpoint in gesture. Neither manipulation affected overall gesture rate or co-occurring speech. We relate these findings to frameworks in which motor action and mental imagery are linked to viewpoint in gesture.
Keywords: gesture, viewpoint, perspective, embodiment
Article outline
- Method
  - Materials
    - Image manipulation condition
    - Linguistic manipulation stories
    - Norming
  - Participants
  - Coding
    - Reliability
- Results
  - Gesture
  - Speech
- Discussion
- Conclusions
- Note
