Grants and Contributions:

Title:
Scene interpretation for sensory substitution and intelligent agents: a neurocomputational approach
Agreement number:
RGPIN
Agreement value:
$120,000.00
Agreement date:
May 10, 2017
Organization:
Natural Sciences and Engineering Research Council of Canada
Location:
Québec, Other, CA
Reference number:
GC-2017-Q1-03125
Agreement type:
grant
Report type:
Grants and Contributions
Additional information:

Grant or award applicable to more than one fiscal year. (2017-2018 to 2022-2023)

Legal name of recipient:
ROUAT, Jean (Université de Sherbrooke)
Program:
Discovery Grants Program - Individual
Program purpose:

The long-term goal of my research is the development of assistive technologies based on sensory substitution devices. Thanks to the great plasticity of the brain and its densely interconnected cortical areas, it is possible to complement and substitute one sensory modality with another. However, current sensory substitution systems cannot analyze visual or auditory scenes: they essentially describe the environment without interpreting it. They therefore overload the user's hearing or vision, which can be uncomfortable or even unhealthy. Interpretation of auditory, visual and multimodal scenes is also greatly needed for the development of intelligent agents that interact with their environment and with human beings. At present, however, scene interpretation for intelligent agents amounts to little more than a limited description of a scene and is restricted to very specific applications.
Through this research program I want to push research in multimodal scene interpretation as far as possible over the next five years.
To maximize impact, I will frequently transfer results, as the research progresses, to applications in sensory substitution (FQRNT équipe) and in artificial intelligence for interacting agents (CHIST-ERA IGLU).
To minimize the environmental footprint, the design and realization of the algorithms will favour architectures with the lowest power consumption. To this end, I will focus during the last two years on implementations based on networks of spiking neurons.
I am restricting the research program to sources with visual characteristics (moving sources of given shape, texture and colour) and auditory characteristics (sources of given timbre and pitch). I therefore assume that the sources can be segregated based on the time evolution of multimodal representations derived from these characteristics.
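As a rough illustration of this assumption, one way to segregate sources is to cluster per-frame multimodal feature vectors, so that frames carrying similar shape/texture/colour and timbre/pitch descriptors are grouped into the same source. The sketch below uses a simple k-means-style procedure; the function name, feature layout, and parameters are my own illustrative assumptions, not part of the research program.

```python
import numpy as np

def segregate_sources(features, n_sources, n_iter=50):
    """Coarse first-pass segregation: cluster per-frame multimodal feature
    vectors (rows of `features`); frames assigned to the same centroid are
    treated as belonging to the same source.  Illustrative sketch only."""
    features = np.asarray(features, dtype=float)
    # Deterministic farthest-point initialization of the centroids.
    centroids = [features[0]]
    for _ in range(1, n_sources):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centroids],
                   axis=0)
        centroids.append(features[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        # Assign each frame to its nearest centroid.
        dist = np.linalg.norm(features[:, None, :] - centroids[None, :, :],
                              axis=-1)
        labels = dist.argmin(axis=1)
        # Move each centroid to the mean of its assigned frames.
        for k in range(n_sources):
            if np.any(labels == k):
                centroids[k] = features[labels == k].mean(axis=0)
    return labels, centroids
```

In practice the feature vectors would evolve over time, and the clustering would track their trajectories rather than treat frames independently; this static version only conveys the basic idea.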
The research is conducted in the context of the cognitive action/perception loop, in which the scene interpretation system is mobile and can move to reduce ambiguities. The scene interpretation research will be organized around five tasks:
1) Bottom-up segregation with online learning of simple features, to coarsely segregate the multimodal objects in the scene in a first pass;
2) Perceptual learning to build complex multimodal features and representations that preserve the perceptual organization of characteristics (shape, timbre, pitch);
3) Semantic characterization of objects;
4) Scene analysis and interpretation;
5) Low-power neuromorphic implementations.
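Task 5 targets networks of spiking neurons. As a minimal sketch of their basic building block, the code below simulates a leaky integrate-and-fire (LIF) neuron; all parameter values and names are illustrative assumptions, not those of the research program.

```python
import numpy as np

def lif_neuron(input_drive, dt=1e-3, tau=20e-3, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over a sequence of
    input drive values; returns the membrane trace and spike time steps."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_drive):
        # Leaky integration: dv/dt = (v_rest - v + i_in) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:      # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset        # membrane potential resets after the spike
        trace.append(v)
    return np.array(trace), spikes
```

Because such neurons communicate only through sparse binary spikes, networks built from them can be mapped onto event-driven neuromorphic hardware whose power consumption scales with spike activity, which is the motivation stated above for favouring low-power architectures.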
Sensory substitution has very strong potential for "exercising" and rewiring the brain, as well as for the development of new brain prostheses. Intelligent agents that can perceive their environment through scene analysis are a cornerstone of the new AI economy and industry. There is also a strong need to develop a Canadian industry of spiking neural networks and neuromorphic systems.