Adaptive Gaussian Process Regression for Bayesian inverse problems


Paolo Villani, Jörg F. Unger, Martin Weiser

Abstract

We introduce a novel adaptive Gaussian Process Regression (GPR) methodology for the efficient construction of surrogate models for Bayesian inverse problems with expensive forward model evaluations. An adaptive design strategy optimizes both the positioning and the simulation accuracy of the training data, reducing the computational cost of generating that data without compromising the fidelity of the resulting posterior distributions of the parameters. The method interleaves a goal-oriented active learning algorithm, which selects evaluation points and tolerances based on their expected impact on the Kullback-Leibler divergence between the surrogate and the true posterior, with Markov Chain Monte Carlo sampling of the posterior. The performance benefit of the adaptive approach is demonstrated on two simple test problems.
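To make the interleaving described above concrete, the following is a minimal, self-contained sketch in Python. It is not the authors' implementation: the toy forward model, the GP hyperparameters, and the acquisition rule (maximum surrogate variance along the chain, a crude stand-in for the expected Kullback-Leibler impact) are illustrative assumptions, and the adaptive simulation tolerances of the paper are omitted.

```python
# Hedged sketch of an interleaved active-learning / MCMC loop. All names
# (toy_forward_model, gp_fit, etc.) are illustrative, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def toy_forward_model(theta):
    # Cheap stand-in for an expensive forward simulation (hypothetical).
    return np.sin(3.0 * theta) + 0.5 * theta

def gp_fit(X, y, ell=0.3, sigma_f=1.0, noise=1e-4):
    # Standard GP regression with a squared-exponential kernel.
    K = sigma_f**2 * np.exp(-0.5 * (X[:, None] - X[None, :])**2 / ell**2)
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    def predict(Xs):
        Ks = sigma_f**2 * np.exp(-0.5 * (Xs[:, None] - X[None, :])**2 / ell**2)
        mean = Ks @ alpha
        v = np.linalg.solve(L, Ks.T)
        var = sigma_f**2 - np.sum(v**2, axis=0)
        return mean, np.maximum(var, 1e-12)
    return predict

def log_posterior(theta, predict, y_obs, noise_obs=0.05):
    # Surrogate log-posterior: GP-based likelihood with a standard normal prior.
    mean, var = predict(np.atleast_1d(theta))
    return (-0.5 * (y_obs - mean[0])**2 / (noise_obs**2 + var[0])
            - 0.5 * theta**2)

def mcmc(log_post, n=2000, step=0.3):
    # Plain random-walk Metropolis sampling of the surrogate posterior.
    theta, lp = 0.0, log_post(0.0)
    chain = []
    for _ in range(n):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

# Synthetic observation and a small initial design.
theta_true = 0.8
y_obs = toy_forward_model(theta_true)
X = np.array([-1.5, 0.0, 1.5])
y = toy_forward_model(X)

for it in range(5):
    predict = gp_fit(X, y)
    chain = mcmc(lambda t: log_posterior(t, predict, y_obs))
    # Acquisition: among posterior samples, add the point where the surrogate
    # is most uncertain (a simple proxy for the expected KL impact).
    candidates = chain[::50]
    _, var = predict(candidates)
    x_new = candidates[np.argmax(var)]
    X = np.append(X, x_new)
    y = np.append(y, toy_forward_model(x_new))  # the "expensive" evaluation

print(f"posterior mean {chain.mean():.3f} (true value {theta_true})")
```

Each outer iteration refits the surrogate, samples its posterior by MCMC, and adds one training point where the surrogate is least certain over the sampled posterior; in the paper this selection is instead driven by the expected reduction of the Kullback-Leibler divergence and additionally assigns a simulation tolerance to each new evaluation.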

Article Details

How to Cite
Villani, P., Unger, J., & Weiser, M. (2024). Adaptive Gaussian Process Regression for Bayesian inverse problems. Proceedings of the Conference Algoritmy, 214–224. Retrieved from http://www.iam.fmph.uniba.sk/amuc/ojs/index.php/algoritmy/article/view/2175/1043
Section: Articles
