Grants and Contributions:
Grant or scholarship awarded spanning more than one fiscal year. (2017-2018 to 2022-2023)
Automatic computer/robot vision for manufacturing, health care, security, and multi-media is a very active area of research, but performance on real data still falls short in quality, speed, or both. With the explosion in the availability of increasingly diverse digital media and computational resources, the number of potential applications has grown significantly, but so have the size and complexity of the data. Despite significant progress in the last 10-15 years, the computer vision and biomedical image analysis communities are still looking for new algorithms and mathematical models even for basic problems such as segmentation, reconstruction, and detection. Robustness, computational efficiency, and scalability of algorithms are crucial factors. Efficient optimization methods for high-order regularization constraints in the context of large real data with high-dimensional features remain a challenge.
My approach to computer vision and biomedical image analysis is largely based on discrete models and combinatorial optimization methods. My past research produced many powerful combinatorial algorithms (e.g. graph cuts, α-expansion, facility location) and discrete approximation methods (e.g. based on bound optimization or trust region) that compute either globally optimal or provably good solutions for mathematically justified high-order graphical models in a wide range of problems in vision and biomedical imaging. These optimization methods led to breakthrough results on difficult problems such as N-D image segmentation, multi-camera stereo, texture synthesis, motion analysis, object detection/recognition, and thin-structure estimation.
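To make the graph-cut idea concrete, the following is a minimal illustrative sketch, not the proposal's actual algorithms: binary segmentation of a toy 1-D signal posed as an s-t min-cut, with squared-distance unary costs and a Potts smoothness term. The function `graph_cut_segment`, the signal values, the model means, and the smoothness weight are all assumptions introduced for illustration; the min-cut itself is delegated to NetworkX.

```python
# Illustrative sketch (assumed example, not the author's code): binary
# segmentation of a 1-D signal via an s-t min-cut, graph-cut style.
import networkx as nx

def graph_cut_segment(signal, fg_mean, bg_mean, smoothness):
    """Label each sample foreground (1) / background (0) via min-cut."""
    G = nx.DiGraph()
    s, t = "s", "t"
    for i, v in enumerate(signal):
        # Unary (data) terms: squared distance to each model mean.
        # Edge s->i is cut when i lands on the sink (background) side,
        # so its capacity is the cost of labeling i background, and
        # symmetrically for i->t.
        G.add_edge(s, i, capacity=(v - bg_mean) ** 2)
        G.add_edge(i, t, capacity=(v - fg_mean) ** 2)
        if i > 0:
            # Pairwise Potts term: penalize a label change between
            # neighbours, in either direction.
            G.add_edge(i - 1, i, capacity=smoothness)
            G.add_edge(i, i - 1, capacity=smoothness)
    _, (reachable, _) = nx.minimum_cut(G, s, t)
    # Samples left on the source side of the cut are foreground.
    return [1 if i in reachable else 0 for i in range(len(signal))]

labels = graph_cut_segment([0.1, 0.2, 0.9, 0.8, 0.15],
                           fg_mean=1.0, bg_mean=0.0, smoothness=0.3)
# The two samples near 1.0 form a foreground run: [0, 0, 1, 1, 0]
```

The global optimum of this pairwise energy is recovered exactly by the min-cut, which is the property that makes such constructions attractive in the first place; the high-order priors discussed below are precisely the cases where this simple reduction no longer applies.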
There are many reasons to continue research in discrete algorithms for regularization, which is my long-term goal. Alternative continuous approaches require GPU implementations just to reach running times comparable to discrete methods. They are also less stable and less repeatable, which explains the lack of publicly available code, in contrast to combinatorial algorithms (with 20-30 daily downloads from my group's web site). Other alternative methodologies lack geometric justification, making it impossible to integrate principled structural/topological constraints. The problems I plan to work on in the next five years concern efficient optimization for high-order priors, structured partially-ordered labeling, curvature-based vessel extraction, regularization constraints for clustering in high-dimensional feature spaces, and the integration of principled fast multi-object segmentation techniques with machine learning methodologies for object classification based on kernel SVMs and neural networks.