Grants and Contributions:
Grant or award spanning more than one fiscal year (2017-2018 to 2022-2023).
Summary of Proposal: Large-scale Application of Automatic Differentiation in Computational Finance (and beyond)
The field of Automatic Differentiation (AD) has taken great strides in recent years. Nevertheless, AD is not heavily used for large-scale problems, often due to efficiency concerns. This is true in many application areas, including economics and finance. For example, large portfolios of variable annuities can take many hours to evaluate by standard methods; hedging, which typically requires the determination of derivatives, would require significantly more time. Clearly, any rapid hedging method based on derivatives is problematic for such portfolios. This poses a serious challenge for effective risk management. Similar challenges exist for other portfolios, e.g., Credit Value Adjustment (CVA) computations.
We propose research to increase the efficiency and applicability of AD to large-scale structured problems. While our proposed advances are applicable to many branches of computational science and engineering, we emphasize computational finance and risk management applications.
AD technology has become more efficient in recent years. We have been involved in the adaptation of AD methodology to problems with structure, working under the assumption that most practical large-scale problems are structured. A general and practical definition of a structured problem is given in ([1], 4.16). The idea is that the function can be written as a (partially ordered) sequence of steps -- each step is a nonlinear mapping in itself and is a (sub-) function of previously defined (intermediate) variables. This structure definition captures composite function computations (i.e., a sequence of chained computations), generalized partially separable functions, and (nested) Monte Carlo functions, among many other common structures.
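To make the structure definition concrete, the following minimal Python sketch writes a composite function F(x) = f3(f2(f1(x))) as an explicit sequence of steps, each a nonlinear mapping of previously defined intermediate variables. The step functions are illustrative placeholders, not examples taken from [1].

```python
import numpy as np

def f1(x):
    # Step 1: an elementwise nonlinear mapping of the inputs.
    return np.sin(x)

def f2(y1):
    # Step 2: depends only on the intermediate variable y1.
    return y1 ** 2 + y1

def f3(y2):
    # Step 3: reduce to a scalar objective.
    return np.sum(np.exp(y2))

def F(x):
    # The composite (structured) function: a partially ordered
    # sequence of steps with named intermediate variables.
    y1 = f1(x)
    y2 = f2(y1)
    return f3(y2)

value = F(np.ones(4))
```

Writing the evaluation code in this staged form is what exposes the structure that a structured-AD tool can exploit.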
As indicated in previous work, this structure covers many applications and is often the most natural way to express the evaluation code. Given code that exposes this structure, AD can be applied in a "slice-by-slice" manner to gain significant efficiency in both space and time. Generally, this gain arises because the Jacobians of the component functions are either sparse (i.e., hidden sparsity) or compact (with few rows), whereas the Jacobian of the overall (original objective) function is often dense.
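The slice-by-slice idea can be sketched as follows for the three-step chain above: by the chain rule the overall Jacobian is the product J3 @ J2 @ J1, and each factor is either diagonal (the elementwise steps, i.e., hidden sparsity) or a single row (the scalar reduction, i.e., compact), so the derivative can be propagated one slice at a time without ever forming a dense matrix. This is an illustrative hand-coded sketch under those assumptions, not ADMAT code.

```python
import numpy as np

def F_and_grad(x):
    # Forward sweep: evaluate the steps, saving intermediates.
    y1 = np.sin(x)
    y2 = y1 ** 2 + y1
    f = np.sum(np.exp(y2))
    # Reverse sweep, slice by slice. Each step's Jacobian is kept
    # as the diagonal (or single row) it really is, so the cost is
    # O(n) per slice rather than O(n^2) for a dense Jacobian.
    d3 = np.exp(y2)            # row Jacobian of the sum-of-exp step
    d2 = (2 * y1 + 1) * d3     # chain through step 2 (diagonal)
    d1 = np.cos(x) * d2        # chain through step 1 (diagonal)
    return f, d1

f, g = F_and_grad(np.ones(4))
```

The gradient `g` can be checked against a finite-difference approximation of F; the point of the structured approach is that the same slice-by-slice propagation scales to long chains and large n.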
In conclusion, the determination of derivatives is a fundamental computational task in much of computational science and engineering. The ideas we propose here can have a very significant impact across these areas - certainly with respect to applied optimization problems, as well as our target class: sensitivity problems in computational finance/risk management.
[1] Thomas F. Coleman and Wei Xu, Automatic Differentiation in MATLAB Using ADMAT with Applications, SIAM, 2016.