Grants and Contributions:
Grant or scholarship awarded applying to more than one fiscal year. (2017-2018 to 2022-2023)
Optimization is the study of extreme values – in one form or another, it is ubiquitous in both theoretical and applied scientific inquiry. In combinatorial optimization one might wish to minimize the total distance travelled when visiting a fixed set of cities (the travelling salesman problem) or the total miles of asphalt required to ensure one can drive from one city to any other (the minimum spanning tree problem). In integer programming one wishes to satisfy demand using a minimum number of units: perhaps the number of buses needed in an urban transit system, or the number of cell towers required for full coverage of a city. For a more organic example, consider the trajectories of river systems, which twist and turn around obstacles en route to the ocean. A river's route is not a Euclidean shortest path, but it is optimal in that, at least locally, it approximately minimizes the energy expended by the water in its trajectory from source to sea.
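As a concrete toy instance of such a problem on random data, the sketch below (purely illustrative, assuming Python with NumPy; the function name `random_mst_length` and the choice of Prim's algorithm are not drawn from the text above) computes the total edge length of a Euclidean minimum spanning tree built on points scattered uniformly at random in the unit square.

```python
import numpy as np

def random_mst_length(n, seed=None):
    """Total edge length of the Euclidean minimum spanning tree on n points
    placed uniformly at random in the unit square (Prim's algorithm)."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))                      # the random input instance
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True                             # grow the tree from vertex 0
    dist = np.linalg.norm(pts - pts[0], axis=1)   # distance of each vertex to the tree
    total = 0.0
    for _ in range(n - 1):
        dist[in_tree] = np.inf                    # never re-select a tree vertex
        v = int(np.argmin(dist))                  # closest vertex outside the tree
        total += dist[v]
        in_tree[v] = True
        dist = np.minimum(dist, np.linalg.norm(pts - pts[v], axis=1))
    return total

# The optimal value is itself a random variable; repeating the experiment
# probes its typical size and fluctuations as the instance grows.
print([round(random_mst_length(1000, seed=s), 3) for s in range(3)])
```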
There are vast differences in algorithmic difficulty for different optimization problems. However, the worst-case performance of an algorithm is often a poor gauge of its efficiency on real-world problems, where the input is the product of complex interactions between large numbers of independent factors. This is the impetus behind my research program, which focuses on the analysis and development of algorithms for optimization problems on random data. Remarkably, the probabilistic analysis of many optimization problems leads directly to the heart of modern probability; often, the same probabilistic models arise in describing diverse physical, biological and information systems. Pointing out such “universal” behavior is one of the key conceptual insights that modern probability has to offer to the sciences.
Some of the specific questions I aim to answer are:
* For which optimization problems over random data is global asymptotic behavior essentially determined by local interactions? When this is the case, what does it imply about fluctuation sizes?
* Many constraint satisfaction problems (CSPs) are expected to obey the so-called one-step replica symmetry breaking (1-RSB) heuristic from spin glasses. This corresponds to a very simple correlation structure, equivalent to that of extremes in branching processes (see the branching random walk sketch following this list). Is there a systematic way of reverse-engineering the correlation structure from the CSP description? This would yield efficient approximation algorithms for hard CSPs on random data.
* Does the now-well-understood behavior of maxima in log-correlated Gaussian random fields predict that of more general log-correlated fields? What is the fine behavior of extremes in Markov random fields?
* Is there a large deviations theory for combinatorial optimization problems over random data?
* How does competition for resources affect front propagation speed in branching systems? This relates to the fluctuations of nonlocal Fisher-KPP-type integro-differential equations (see the front-speed sketch below).
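To make the comparison with extremes in branching processes concrete, the following sketch (an illustrative toy, assuming binary branching with independent standard Gaussian displacements; not a model taken from the questions above) simulates a branching random walk and records its running maximum, the prototypical example of extremes over a hierarchical, log-correlated correlation structure.

```python
import numpy as np

def brw_maxima(generations, seed=None):
    """Running maximum of a binary branching random walk: every particle
    splits into two children, each displaced by an independent N(0, 1) step."""
    rng = np.random.default_rng(seed)
    positions = np.zeros(1)                   # generation 0: one particle at the origin
    maxima = []
    for _ in range(generations):
        positions = np.repeat(positions, 2) + rng.standard_normal(2 * positions.size)
        maxima.append(positions.max())
    return np.array(maxima)

# The maximum grows linearly in the number of generations, with a logarithmic
# correction; this is the signature behaviour of log-correlated extremes.
print(brw_maxima(20, seed=0)[-5:])
```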
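For the last question, a minimal finite-difference sketch of the classical local Fisher-KPP equation u_t = u_xx + u(1 - u) is given below (illustrative only: the equations of actual interest are nonlocal, and the grid sizes and the function name fkpp_front_speed are chosen purely for demonstration). It shows how a travelling front emerges from a step initial condition and how its speed can be estimated numerically.

```python
import numpy as np

def fkpp_front_speed(L=200.0, nx=2000, t_max=60.0):
    """Explicit finite-difference solution of u_t = u_xx + u(1 - u) on [0, L],
    started from a step profile; returns a numerical estimate of the front speed."""
    dx = L / nx
    dt = 0.4 * dx ** 2                        # respects the explicit-scheme stability bound
    x = np.arange(nx) * dx
    u = (x < 10.0).astype(float)              # initial condition: invaded region on the left
    times, fronts = [], []
    t = 0.0
    while t < t_max:
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        u = u + dt * (lap + u * (1.0 - u))
        u[0], u[-1] = 1.0, 0.0                # pin the boundary values
        t += dt
        times.append(t)
        fronts.append(x[np.argmax(u < 0.5)])  # leftmost point where u drops below 1/2
    half = len(times) // 2                    # estimate the speed from the second half of the run
    return (fronts[-1] - fronts[half]) / (times[-1] - times[half])

print(fkpp_front_speed())                     # the classical asymptotic front speed is 2
```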