Grants and Contributions:

Title:
Predicting Speech: Cortical Oscillations and Dynamics
Agreement Number:
RGPIN
Agreement Value:
$115,000.00
Agreement Date:
May 10, 2017 -
Organization:
Natural Sciences and Engineering Research Council of Canada
Location :
Ontario, Other, CA
Reference Number:
GC-2017-Q1-03026
Agreement Type:
grant
Report Type:
Grants and Contributions
Additional Information:

Grant or award applying to more than one fiscal year. (2017-2018 to 2022-2023)

Recipient Legal Name:
Monahan, Philip (University of Toronto)
Program:
Discovery Grants Program - Individual
Program Purpose:

Speech is a complex and variable medium, yet when we listen in our native language, comprehension appears effortless and automatic. At the same time, through experience with our native language, we acquire a wealth of knowledge about how it is structured at the level of sounds, words, and sentences. Recent work has begun to emphasize the importance of this knowledge during real-time language processing. One overarching mechanism that could underlie the effortless nature of language comprehension is the use of prior knowledge of our language’s structure to predict upcoming information at all levels of linguistic analysis, thereby facilitating the processing of future input. My five-year research program aims to identify how this prediction is encoded in the temporal and spatial dynamics of neural oscillations during speech perception: the earliest stages of language comprehension. In particular, this research builds on recent findings that have identified a brain marker of prediction during speech perception.

My long-term research goal is to integrate findings from linguistics with cognitive neuroscience to understand the brain bases of speech perception. My short-term research goals focus on the “when”, “what”, and “where” of prediction during speech perception and are centred on the following three research objectives:

  1. Spatial Localization of Prediction: Where in the brain is this marker of prediction located, does it overlap with other networks involved in speech perception, and how does it relate to other indices of speech recognition?
  2. Speech Cues that Drive Prediction: Which aspects of speech sound representations contribute to driving this predictive mechanism? While my previous work has demonstrated a role for the contrastive articulatory properties of speech sounds, could other cues also be at play, including speech sound co-occurrences, statistical cues, and word structure?
  3. Language Specificity of Prediction: Are these predictive mechanisms general, or do they rely solely on the linguistic properties of speech sounds (e.g., music vs. speech, different speakers, different speaking rates)? Is the prediction of different types of information encoded in distinct oscillatory brain patterns?

This research has applications for the construction of biologically plausible circuits that model language processing, which are necessary for Artificial Intelligence research. It can also aid in the rehabilitation of various speech-language motor deficits by pinpointing the speech properties required for accurate perception and production, and in aphasias, where the deficit may partially result from an inability of aphasic brains to synchronize with speech cues when talking to another individual. These latter two applications have major impacts in Canada, both as we become more sensitive to speech-motor deficits in children and as our population ages and, with it, the number of cases of aphasia grows.