Grants and contributions:
Grant or award applying to more than one fiscal year. (2017-2018 to 2022-2023)
Impact. Understanding language processing is critical to the well-being of every Canadian. The importance of this topic is exemplified by the $4.5 billion that provincial education ministries across Canada spend on special language-related programs, including language training for newcomers, and by the major industrial opportunities that brain-inspired models create for artificial intelligence (AI), such as the $10 billion in revenue generated by Microsoft’s natural language AI. An improved theory of the brain’s language system will therefore enable a range of innovative approaches to language instruction, remediation, and AI.
Plan. My research focuses on advancing our understanding of the neural basis of word processing by incrementally building and testing neural network models of language made up of groups of simulated neurons. The models that I develop improve upon classic, highly influential neural network models by incorporating additional principles from systems and cellular neuroscience. My long-term objectives and associated short-term aims will break new ground by building upon my prior research on the following key aspects of language:
Objective 1: Develop a unified theory of learning, representation, and generalization of language in neural networks: This work will examine how neural networks generalize the regularities of a language to newly learned words (a new word like gint probably rhymes with the regular pronunciation of mint and lint, not the exceptional pronunciation of pint). This research will inform how teaching a sample of words can yield widespread generalization to an entire language during first and second language acquisition.
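As a loose intuition for how regularities can generalize to unseen words, the gint/mint/pint example can be sketched with a single-layer perceptron over onset letters. This toy is far simpler than the multi-layer models the proposal describes; the training words, features, and labels are illustrative assumptions only.

```python
from collections import defaultdict

# Toy training set: "-int" words with a regular pronunciation (rhyming
# with mint) and one exception (pint). Labels: 0 = regular, 1 = exception.
TRAIN = [("mint", 0), ("lint", 0), ("hint", 0), ("tint", 0), ("pint", 1)]

weights = defaultdict(float)  # one weight per onset letter
bias = 0.0

def predict(word):
    """Classify a word's pronunciation class from its onset letter."""
    return 1 if weights[word[0]] + bias > 0 else 0

# Standard perceptron updates; this converges because the toy set is
# linearly separable (only the onset 'p' signals the exception).
for _ in range(10):
    for word, label in TRAIN:
        error = label - predict(word)
        if error:
            weights[word[0]] += error
            bias += error

def pronounce(word):
    return "rhymes with mint" if predict(word) == 0 else "rhymes with pint"

print(pronounce("gint"))  # → rhymes with mint
```

Because the unseen onset g carries no learned weight, the model falls back on the dominant regular pattern, mirroring the generalization described above.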
Objective 2: Create a model of language processing validated in multiple languages: Modelling research to date has focused primarily on English. This research will study how neural networks adapt to the properties of different languages to explain key cross-linguistic differences (e.g., the differential sensitivity to letter position in Hebrew vs. English readers). In turn, this will inform how neural networks are shaped by the properties of a range of languages, which can improve language instruction and remediation in many languages.
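One way to see how a model’s input coding determines its sensitivity to letter position is to compare two toy coding schemes: rigid slot coding (each letter bound to a fixed position) versus open-bigram coding (a word as its set of ordered letter pairs). This sketch and its example words are my own illustration, not the models proposed here.

```python
def slot_similarity(a, b):
    """Proportion of positions where the two words share a letter."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def open_bigrams(word):
    """All ordered letter pairs in the word, regardless of adjacency."""
    return {word[i] + word[j] for i in range(len(word)) for j in range(i + 1, len(word))}

def bigram_similarity(a, b):
    """Jaccard overlap of the two words' open-bigram sets."""
    inter = open_bigrams(a) & open_bigrams(b)
    union = open_bigrams(a) | open_bigrams(b)
    return len(inter) / len(union)

# A transposed-letter neighbour looks far more similar under flexible
# open-bigram coding than under rigid slot coding.
print(slot_similarity("judge", "jugde"))               # 0.6
print(round(bigram_similarity("judge", "jugde"), 2))   # 0.82
```

A reader whose system behaves more like the slot coder would be more disrupted by letter transpositions, which is the kind of cross-linguistic difference in position sensitivity mentioned above.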
Objective 3: Develop a model of ambiguous word processing and task performance: Simulations and empirical work related to this objective will determine whether a range of effects associated with semantically ambiguous words (e.g., bank, which refers to a river edge in some contexts and a financial institution in others) are due to the time-course of meaning selection, or to how the decision system taps into the language system. These investigations will inform theories of comprehension impairments and contribute to industrial applications related to how brain-inspired AI systems identify a word’s meaning.
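The bank example can be made concrete with a minimal context-overlap heuristic for meaning selection. The sense inventory and context words below are hand-picked assumptions for illustration; real models of meaning selection, including those this proposal will develop, are far richer.

```python
# Toy sense inventory: each meaning of an ambiguous word is paired with a
# small, hand-picked set of associated context words (illustrative only).
SENSES = {
    "bank": {
        "river edge": {"river", "water", "shore", "fishing"},
        "financial institution": {"money", "loan", "deposit", "teller"},
    },
}

def select_meaning(word, context_words):
    """Pick the sense whose associated words overlap most with the context."""
    senses = SENSES[word]
    return max(senses, key=lambda s: len(senses[s] & set(context_words)))

print(select_meaning("bank", "she fished from the river bank".split()))
# → river edge
```

Even this crude heuristic shows why the timing question matters: a system could commit to a sense as context words arrive, or hold both senses active until a downstream decision process forces a choice.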