
DDPS | Scientific Machine Learning through the Lens of Physics-Informed Neural Networks

Description: Traditional approaches to scientific computation have undergone remarkable progress, but they still operate under stringent requirements: precise knowledge of the underlying physical laws and of boundary and/or initial conditions, and often time-consuming workflows such as mesh generation and long-term simulation. On top of these limitations, high-dimensional problems governed by parameterized PDEs are difficult to tackle, and seamlessly incorporating noisy data remains a challenge for solving inverse problems efficiently. Physics-informed machine learning (PIML) has emerged as a promising alternative for addressing these problems. In this talk, we will discuss a particular type of PIML method, namely, physics-informed neural networks (PINNs). We review some of the current capabilities and limitations of PINNs and discuss diverse applications where PINNs have proved very effective compared to traditional approaches. We also discuss scalable extensions of the vanilla PINN method, such as conservative PINNs (cPINNs) and extended PINNs (XPINNs), for big data and/or large models. Finally, we will discuss a unified and scalable framework for causal sweeping strategies for PINNs and their temporal decompositions.
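The core PINN idea mentioned in the abstract can be illustrated with a minimal sketch. The toy problem below (the ODE u'(t) = -u(t) with u(0) = 1, and the two candidate functions) is an illustrative choice, not taken from the talk: the physics-informed loss combines the squared ODE residual at collocation points with a boundary-condition penalty, so the exact solution scores near zero while a poor candidate does not.

```python
import math

# Collocation points in [0, 1] for the toy ODE u'(t) = -u(t), u(0) = 1.
ts = [i / 20 for i in range(21)]
dt = 1e-5  # step for the central-difference derivative approximation

def pinn_loss(u):
    """Physics-informed loss: mean squared ODE residual plus boundary penalty."""
    residual = sum((
        (u(t + dt) - u(t - dt)) / (2 * dt)  # approximate u'(t)
        + u(t)                              # residual of u' + u = 0
    ) ** 2 for t in ts) / len(ts)
    boundary = (u(0.0) - 1.0) ** 2          # penalize violating u(0) = 1
    return residual + boundary

exact = lambda t: math.exp(-t)  # the true solution of the toy ODE
wrong = lambda t: 1.0 - t       # a candidate that only fits the boundary

print(pinn_loss(exact))  # ≈ 0 (only finite-difference error remains)
print(pinn_loss(wrong))  # noticeably larger (≈ 0.34)
```

In an actual PINN, u would be a neural network, the derivative would come from automatic differentiation rather than finite differences, and this loss would be minimized over the network's weights by gradient descent; cPINNs and XPINNs extend this by splitting the domain into subdomains with interface conditions.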

Bio: Ameya Jagtap is an Assistant Professor of Applied Mathematics (Research) at Brown University, USA. He received his PhD and Master's degrees, both in Aerospace Engineering, from the Indian Institute of Science (IISc), India. He then joined the Tata Institute of Fundamental Research – Centre for Applicable Mathematics (TIFR-CAM), India, as a postdoctoral research fellow. Later, he moved to Brown University to pursue his postdoctoral research in the Division of Applied Mathematics.

Due to his interdisciplinary background in mechanical/aerospace engineering, applied mathematics, and computation, a key focus of his research is developing data- and physics-driven scientific machine learning algorithms applicable to a wide range of problems in computational physics. His expertise lies in scientific machine learning, deep learning, data/physics-driven deep learning techniques with multi-fidelity data, uncertainty quantification/propagation, multi-scale and multi-physics simulations, computational continuum mechanics (solids, fluids, and acoustics), spectral/finite element methods, WENO/DG schemes, and domain decomposition techniques. He is also interested in developing novel artificial neural network architectures that yield faster convergence.

DDPS webinar: https://www.librom.net/ddps.html

💻 LLNL News: https://www.llnl.gov/news
📲 Instagram: https://www.instagram.com/livermore_lab
🤳 Facebook: https://www.facebook.com/livermore.lab
🐤 Twitter: https://twitter.com/Livermore_Lab
🔔 Subscribe: / livermorelab

About LLNL: Lawrence Livermore National Laboratory has a mission of strengthening the United States’ security through development and application of world-class science and technology to: 1) enhance the nation’s defense, 2) reduce the global threat from terrorism and weapons of mass destruction, and 3) respond with vision, quality, integrity and technical excellence to scientific issues of national importance. Learn more about LLNL: https://www.llnl.gov/.

LLNL-VIDEO-847740