
Understanding Events by Eye and Ear: Agent and Verb Drive Non-anticipatory Eye Movements in Dynamic Scenes.

Authors: de Almeida RG, Di Nardo J, Antal C, von Grünau MW


Affiliations

1 Department of Psychology, Concordia University, Montreal, QC, Canada.
2 Department of Linguistics, Yale University, New Haven, CT, United States.

Description


Front Psychol. 2019;10:2162


Abstract

As Macnamara (1978) once asked, how can we talk about what we see? We report on a study manipulating realistic dynamic scenes and sentences aiming to understand the interaction between linguistic and visual representations in real-world situations. Specifically, we monitored participants' eye movements as they watched video clips of everyday scenes while listening to sentences describing these scenes. We manipulated two main variables. The first was the semantic class of the verb in the sentence and the second was the action/motion of the agent in the unfolding event. The sentences employed two verb classes, causatives (e.g., break) and perception/psychological verbs (e.g., notice), which impose different constraints on the nouns that serve as their grammatical complements. The scenes depicted events in which agents either moved toward a target object (always the referent of the verb-complement noun), away from it, or remained neutral performing a given activity (such as cooking). Scenes and sentences were synchronized such that the verb onset corresponded to the first video frame of the agent motion toward or away from the object. Results show effects of agent motion but weak verb-semantic restrictions: causatives draw more attention to potential referents of their grammatical complements than perception verbs only when the agent moves toward the target object. Crucially, we found no anticipatory verb-driven eye movements toward the target object, contrary to studies using non-naturalistic and static scenes. We propose a model in which linguistic and visual computations in real-world situations occur largely independently of each other during the early moments of perceptual input, but rapidly interact at a central, conceptual system using a common, propositional code. Implications for language use in real-world contexts are discussed.

PMID: 31649574 [PubMed]


Keywords: event comprehension; eye movements; language-vision interaction; modularity; sentence comprehension; situated language processing; verb meaning; visual world paradigm


Links

PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31649574?dopt=Abstract

DOI: 10.3389/fpsyg.2019.02162