Solving Ordinary Differential Equations using Artificial Neural Networks - A study on the solution variance


Toni Schneidereit, Michael Breuß

Abstract

Solving differential equations can be realised with simple artificial neural network architectures. Several methods make use of trial solutions with different construction approaches and can provide reliable results. However, the many parameters, the choice of optimisation method and random weight initialisation lead to a non-constant variance with respect to the exact solution. To our knowledge, this variance has not been studied yet. We investigate several parameters, as well as constant versus random weight initialisation, for two solution methods in order to determine their reliability when backpropagation and ADAM optimisation are used.
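The following is a minimal sketch, not the authors' implementation, of the trial-solution idea referred to in the abstract: a trial solution of the form ŷ(t) = y₀ + t·N(t; w) satisfies the initial condition by construction, and the network weights are trained to minimise the squared ODE residual at collocation points. The test equation y' = -y with y(0) = 1, the network size, the learning rate, the number of epochs and the fixed seed are illustrative assumptions; PyTorch's Adam optimiser and automatic differentiation stand in for the ADAM optimisation and backpropagation discussed in the paper.

# Minimal sketch (illustrative, not the paper's configuration) of the
# trial-solution approach for y'(t) = -y(t), y(0) = 1, exact solution exp(-t).
import torch

torch.manual_seed(0)                      # constant seed: reproducible weight initialisation

# Small feedforward network N(t; w): one hidden layer with sigmoid activation
net = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.Sigmoid(),
    torch.nn.Linear(10, 1),
)

y0 = 1.0                                  # initial condition y(0) = 1
t = torch.linspace(0.0, 2.0, 20).reshape(-1, 1)
t.requires_grad_(True)                    # needed to differentiate the trial solution w.r.t. t

optimiser = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(5000):
    optimiser.zero_grad()
    # Trial solution y_hat(t) = y0 + t * N(t; w) fulfils the initial condition by construction
    y_hat = y0 + t * net(t)
    # dy_hat/dt via automatic differentiation
    dy_dt = torch.autograd.grad(y_hat, t, grad_outputs=torch.ones_like(y_hat),
                                create_graph=True)[0]
    # Squared residual of the ODE y' = -y at the collocation points
    loss = torch.mean((dy_dt + y_hat) ** 2)
    loss.backward()
    optimiser.step()

# Maximum absolute deviation from the exact solution y(t) = exp(-t)
with torch.no_grad():
    y_exact = torch.exp(-t)
    print(float(torch.max(torch.abs(y0 + t * net(t) - y_exact))))

Re-running this sketch with different seeds (i.e. random weight initialisation) instead of a fixed seed gives a feel for the kind of solution variance the paper studies.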


How to Cite
Schneidereit, T., & Breuß, M. (2020). Solving Ordinary Differential Equations using Artificial Neural Networks - A study on the solution variance. Proceedings of the Conference Algoritmy, 21-30. Retrieved from http://www.iam.fmph.uniba.sk/amuc/ojs/index.php/algoritmy/article/view/1547/811
