National Preprint Platform (国家预印本平台)

Neural evidence for the prediction of animacy features during language comprehension: Evidence from MEG and EEG Representational Similarity Analysis

Source: bioRxiv
Abstract

It has been proposed that people generate probabilistic predictions at multiple levels of linguistic representation during language comprehension. Here we used magnetoencephalography (MEG) and electroencephalography (EEG) in combination with Representational Similarity Analysis (RSA) to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as participants read three-sentence scenarios in which the verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns. The broader context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the spatial similarity pattern of the brain activity measured by MEG and EEG following the verbs until just before the presentation of the nouns. We found clear and converging evidence across the MEG and EEG datasets that the spatial pattern of neural activity following animate-constraining verbs was more similar than the spatial pattern following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflects the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether it was possible to predict a specific word on the basis of the prior discourse context. This provides strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.

Significance Statement

Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a “head start”, so that language comprehension is faster and more efficient. While most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context “they cautioned the…”, we know that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here we used EEG and MEG techniques to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
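The core RSA logic described in the abstract — computing the pairwise spatial similarity of sensor-level activity patterns within a condition, then comparing the average similarity across conditions — can be illustrated with a minimal sketch. This is a toy simulation with NumPy, not the authors' actual pipeline: the trial counts, sensor count, and simulated data are invented for illustration, and the animate condition is given an artificial shared spatial component so that its within-condition similarity comes out higher, mirroring the reported effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors = 40, 64  # hypothetical trial and sensor counts

# Toy data: animate-constraining trials share a common spatial component,
# inanimate-constraining trials are independent noise.
shared = rng.normal(size=n_sensors)
animate = shared + 0.8 * rng.normal(size=(n_trials, n_sensors))
inanimate = rng.normal(size=(n_trials, n_sensors))

def mean_pairwise_similarity(patterns):
    """Average Pearson correlation between the spatial (across-sensor)
    patterns of every pair of trials -- one within-condition RSA value."""
    corr = np.corrcoef(patterns)          # trials x trials correlation matrix
    iu = np.triu_indices_from(corr, k=1)  # upper triangle, excluding diagonal
    return corr[iu].mean()

sim_animate = mean_pairwise_similarity(animate)
sim_inanimate = mean_pairwise_similarity(inanimate)
print(sim_animate, sim_inanimate)
```

In the actual studies this similarity value would be computed at each time point between verb and noun onset and then tested across participants; here the single comparison simply shows the direction of the effect (higher spatial similarity following animate-constraining verbs).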

Wlotko Edward, Schoot Lotte, Warnke Lena, Wang Lin, Kuperberg Gina R., Alexander Edward, Kim Minjae

Department of Psychology, Tufts University
Moss Rehabilitation Research Institute
Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
Psychology Department, Morrissey College of Arts and Sciences, Boston College

DOI: 10.1101/709394

Linguistics

Wlotko Edward, Schoot Lotte, Warnke Lena, Wang Lin, Kuperberg Gina R., Alexander Edward, Kim Minjae. Neural evidence for the prediction of animacy features during language comprehension: Evidence from MEG and EEG Representational Similarity Analysis [EB/OL]. (2025-03-28) [2025-05-05]. https://www.biorxiv.org/content/10.1101/709394