Grants and Contributions:

Title:
Integrating object segmentation for robust object tracking
Agreement Number:
RGPIN
Agreement Value:
$120,000.00
Agreement Date:
May 10, 2017 -
Organization:
Natural Sciences and Engineering Research Council of Canada
Location:
Québec, Other, CA
Reference Number:
GC-2017-Q1-02235
Agreement Type:
grant
Report Type:
Grants and Contributions
Additional Information:

Grant or award covering more than one fiscal year. (2017-2018 to 2022-2023)

Recipient Legal Name:
Amer, Aishy (Université Concordia)
Program:
Discovery Grants Program - Individual
Program Purpose:

Video object tracking is an active research field with applications as diverse as augmented reality, self-navigating systems (drones or self-driving cars), human-computer interaction, hyperlinked social video, and athlete performance monitoring. Tracking is performed on raw video signals, typically captured from a single camera; different types of objects can be tracked, such as persons, athletes, vehicles, animals, or ships. Tracking objects across real-world video signals poses many challenges, such as variations in object appearance due to scale change, occlusion, articulation, deformation, and interactions among multiple objects.
Recent object-tracking algorithms have advanced the field on some of these challenges. In this proposal, I will address the following research challenges to advance the state of the art in single-object tracking: 1) tracking drift: what should be done when a tracker drifts from the target, and how can such drift be detected? 2) variable object features: how can we deal with features, such as color histograms, that vary significantly during tracking? 3) object scale change: can we model scale change explicitly, or rather implicitly, for example through integration with object segmentation and detection?
To detect drift of a specific tracker, I propose to analyze its internal state by monitoring its latent parameters over time. For tracker-independent drift detection, I will integrate object segmentation and objectness measures (the likelihood that an image region is an object). To correct drift, I will integrate object segmentation and object detection (localization) into tracking as drift happens. The use of segmentation is motivated by recent findings on the human visual system, which seems to rely on the spatial and temporal spacing of objects for effective tracking. I will specifically use object segmentation and localization to explore present-future (or space-time) interactions between video objects.
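As a concrete illustration of one piece of this plan, the minimal sketch below (in Python, assuming NumPy) flags tracker-independent drift by monitoring an appearance-consistency signal: the Bhattacharyya distance between the initial target's color histogram and the histogram of the currently predicted region. The DriftMonitor class, the distance threshold, and the patience window are illustrative assumptions, not details taken from the proposal.

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Per-channel color histogram of an image patch, L1-normalized."""
    hist = np.concatenate(
        [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
         for c in range(patch.shape[-1])]
    ).astype(np.float64)
    return hist / max(hist.sum(), 1e-12)

def bhattacharyya(p, q):
    """Bhattacharyya distance between two normalized histograms (0 = identical)."""
    return float(-np.log(np.sum(np.sqrt(p * q)) + 1e-12))

class DriftMonitor:
    """Flags likely drift when appearance similarity to the initial target
    stays below a threshold for several consecutive frames."""

    def __init__(self, template_hist, dist_threshold=0.35, patience=5):
        self.template = template_hist
        self.threshold = dist_threshold
        self.patience = patience        # consecutive bad frames before flagging
        self.bad_frames = 0

    def update(self, patch):
        d = bhattacharyya(self.template, color_histogram(patch))
        self.bad_frames = self.bad_frames + 1 if d > self.threshold else 0
        return self.bad_frames >= self.patience  # True -> probable drift

# Synthetic usage: the predicted box slides off the dark target onto
# bright background after frame 3, and drift is flagged a few frames later.
rng = np.random.default_rng(0)
target = rng.integers(0, 100, size=(32, 32, 3))
monitor = DriftMonitor(color_histogram(target))
for t in range(8):
    patch = target if t < 3 else rng.integers(155, 255, size=(32, 32, 3))
    if monitor.update(patch):
        print(f"drift suspected at frame {t}")
```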
To address the problem of features that vary across a video sequence, I will use machine learning, e.g., online multi-kernel learning with support vector machines, to confirm or reject features online.
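The proposal names online multi-kernel learning with support vector machines; the sketch below substitutes a simpler, self-contained stand-in, a Hedge-style multiplicative-weights update, to show how candidate features could be confirmed or rejected online. The OnlineFeatureSelector class, learning rate, and rejection floor are assumptions made for the example.

```python
import numpy as np

class OnlineFeatureSelector:
    """Multiplicative-weights (Hedge-style) confirmation/rejection of features.

    Each frame, every candidate feature (e.g., color histogram, gradient
    orientation, texture) reports a matching score; once the frame is
    resolved, each feature receives a loss in [0, 1] measuring how wrong
    it was. Weights decay exponentially with accumulated loss, and a
    feature is rejected when its weight falls below a floor.
    """

    def __init__(self, n_features, eta=0.5, floor=0.02):
        self.w = np.full(n_features, 1.0 / n_features)
        self.eta = eta        # aggressiveness of the decay
        self.floor = floor    # rejection threshold (tuned for a few features)
        self.active = np.ones(n_features, dtype=bool)

    def combine(self, scores):
        """Fused matching score: weighted vote of the still-active features."""
        w = np.where(self.active, self.w, 0.0)
        return float(w @ np.asarray(scores) / max(w.sum(), 1e-12))

    def update(self, losses):
        """Decay, renormalize, and reject features that fell below the floor."""
        self.w = self.w * np.exp(-self.eta * np.asarray(losses))
        self.w = self.w / self.w.sum()
        self.active &= self.w >= self.floor

# Example: feature 2 (say, a color histogram under a lighting change)
# keeps disagreeing with the tracking outcome and is eventually rejected.
sel = OnlineFeatureSelector(n_features=3)
for _ in range(10):
    sel.update([0.1, 0.1, 0.9])
print(sel.active)   # -> [ True  True False]
```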
I will handle object scale change by monitoring object states in space and time, through homography transformations, and by using depth as an additional feature.
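To make the scale-monitoring idea concrete, the sketch below recovers a frame-to-frame scale factor from matched keypoints as the ratio of their RMS spreads about the centroids, which equals the similarity-transform scale when the motion is approximately a similarity; in a full system, a RANSAC-fitted homography (e.g., OpenCV's cv2.findHomography) would take its place. The function name and synthetic data are illustrative.

```python
import numpy as np

def estimate_scale_change(prev_pts, curr_pts):
    """Relative scale of the target between two frames.

    prev_pts, curr_pts: (N, 2) arrays of matched keypoint coordinates.
    Returns the ratio of RMS point spreads about the two centroids, which
    recovers the scale factor when the frame-to-frame motion of the target
    is well approximated by a similarity transform.
    """
    p = prev_pts - prev_pts.mean(axis=0)
    c = curr_pts - curr_pts.mean(axis=0)
    return float(np.sqrt((c ** 2).sum() / max((p ** 2).sum(), 1e-12)))

# Synthetic check: shrink a point cloud by 2x and translate it.
rng = np.random.default_rng(1)
prev_pts = rng.normal(size=(50, 2)) * 20 + 100
curr_pts = 0.5 * (prev_pts - 100) + np.array([140.0, 90.0])
print(estimate_scale_change(prev_pts, curr_pts))   # ~0.5
```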
The availability of many datasets, ground-truth data, and metrics for object tracking will enable effective evaluation of the new approaches. I anticipate that the developed methods will complement existing online learning approaches, adding an extra level of robustness to tracking-assisted video applications. Interest in widely applicable (robust) yet fast object tracking is substantial. With over 20 years of research and industrial experience, I am confident that I can advance knowledge in this area and transfer it to related sectors of the Canadian video technology industry.