Grants and contributions:
Grant or award applying to more than one fiscal year. (2017-2018 to 2022-2023)
Machine learning is a set of techniques for programming computers by feeding them data, so that they can help with tasks where our understanding of how to turn data into decisions is limited. As such, machine learning is a key technology for addressing challenges wherever they arise, whether in science, economics, manufacturing, technology development or any other area. As in other sub-disciplines of computing science, the role of theory in machine learning is to guide the design and analysis of algorithms. Such learning theory helps to determine which problems can or cannot be learned efficiently and, when a problem is "learnable", how much data is needed to reach a desired performance level.
Most research in machine learning focuses on worst-case guarantees, which leaves a significant gap between the predictions of existing theory and the everyday experience of machine learning practitioners. In particular, over many years (if not decades), practitioners have collected ample evidence that, in practice, algorithms with meager or no worst-case guarantees often perform quite well on particular tasks of practical interest, while provably near worst-case optimal learning algorithms can behave poorly on the same tasks. The most recent surge of examples involves deep neural networks, whose unprecedented performance is anything but expected given our current state of knowledge.
A potential solution to this dilemma is to develop algorithms that have the ability to adapt to the "easiness", or "regularities", of data, if and when such regularities exist. Just as we expect a "clever" algorithm to solve a linear system of equations with fewer algebraic operations when the underlying matrix is triangular, adaptive algorithms are expected to make better use of information when run on data that has some extra structure.
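To make the analogy concrete, here is a minimal illustrative sketch (not part of the proposal itself): when the coefficient matrix is upper triangular, back substitution solves the system in O(n^2) arithmetic operations, whereas a general dense solve costs O(n^3).

```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for an upper-triangular U in O(n^2) operations,
    exploiting the extra structure that a general O(n^3) solver ignores."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Only entries U[i, i+1:] are nonzero to the right of the diagonal.
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [0.0, 0.0, 5.0]])
b = np.array([13.0, 9.0, 10.0])
x = back_substitution(U, b)  # agrees with the general-purpose solver
```

The general-purpose routine `np.linalg.solve` returns the same answer here, but cannot exploit the triangular structure; the adaptive algorithms envisioned in the proposal are meant to gain an analogous advantage on structured data.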
Adaptivity is a much-studied idea in both statistics and machine learning. However, so far adaptivity has been studied in a case-by-case fashion, and there is no comprehensive theory to help one design and analyze adaptive algorithms.
The first goal of this proposal is to fill this void. In particular, the main aim is to develop a robust theory of optimally adaptive learning algorithms, and to demonstrate the usefulness of the theory by applying it to specific learning scenarios. The robustness of the new theory will come from making minimal assumptions on the data-generating mechanism, borrowing ideas from the framework of online learning, while the notion of optimal adaptivity arises from the novel idea of studying how well any learning algorithm, amongst those that are robust in some worst-case sense, can behave on a single, individual problem instance. The main benefit of this approach is that adaptivity is defined without relying on ad-hoc notions of data regularity; instead, it leads to "natural" notions of regularity.