CRUNCH Seminars

January 5, 2024:

Presentation #1: Neural Operator Learning Enhanced Physics-informed Neural Networks for solving differential equations with sharp solutions, Professor Mao, Xiamen University

Link: https://youtu.be/7NNyjWxp2zQ?feature=shared

Abstract: In this talk, Professor Mao presents numerical results for forward and inverse problems of PDEs with sharp solutions, obtained using deep neural network-based methods. In particular, he has developed a deep operator learning enhanced PINN for PDEs with sharp solutions, which can be asymptotically approached by problems with smooth solutions. First, the smooth problems are solved with deep operator learning, adopting the DeepONet framework. The pre-trained DeepONet is then combined with a PINN to solve the sharp problem. The effectiveness of the method is demonstrated on several equations, including the viscous Burgers equation, cavity flow, and the Navier-Stokes equations. Furthermore, the method is applied to ill-posed problems with insufficient boundary conditions.
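
The two-stage recipe, pre-train an operator surrogate on smooth problems and let a PINN correct it on the sharp target, can be sketched compactly. The following is a minimal toy (a plain MLP stands in for the pre-trained DeepONet, and the boundary/initial loss terms are omitted), not the speaker's implementation:

```python
import torch

torch.manual_seed(0)
nu = 0.002  # small viscosity: the "sharp" target Burgers problem

# Stand-in for a DeepONet pre-trained on smooth (large-viscosity) problems;
# it is frozen and only supplies the smooth baseline prediction.
smooth_op = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                                torch.nn.Linear(64, 1))
for p in smooth_op.parameters():
    p.requires_grad_(False)

# PINN correction trained on the sharp problem on top of the frozen baseline.
correction = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                                 torch.nn.Linear(64, 1))

def u(xt):  # xt columns: (x, t)
    return smooth_op(xt) + correction(xt)

def burgers_residual(xt):
    """Residual of u_t + u*u_x - nu*u_xx = 0 via automatic differentiation."""
    xt = xt.clone().requires_grad_(True)
    uv = u(xt)
    g = torch.autograd.grad(uv.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, :1], g[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t + uv * u_x - nu * u_xx

opt = torch.optim.Adam(correction.parameters(), lr=1e-3)
for _ in range(2000):
    xt = torch.rand(256, 2) * torch.tensor([2.0, 1.0]) + torch.tensor([-1.0, 0.0])
    loss = burgers_residual(xt).pow(2).mean()  # plus BC/IC terms in practice
    opt.zero_grad(); loss.backward(); opt.step()
```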

Presentation #2: Physics-Informed Parallel Neural Networks with Self-Adaptive Loss Weighting for the Identification of Structural Systems, Rui Zhang, Pennsylvania State University

Link: https://youtu.be/7NNyjWxp2zQ?feature=shared

Abstract: Rui Zhang has developed a framework of physics-informed parallel neural networks (PIPNNs) for the identification of continuous structural systems described by systems of partial differential equations. PIPNNs integrate the physics of the system into the loss function of the NNs, enabling the simultaneous updating of both unknown structural and NN parameters while the loss function is minimized. The PIPNNs framework accommodates structural discontinuities by dividing the computational domain into subdomains, each uniquely represented through a parallelized and interconnected NN architecture. Furthermore, the framework incorporates a self-adaptive weighted loss function based on neural tangent kernel (NTK) theory. The self-adaptive weights, determined from the eigenvalues of the NTK matrix of the PIPNNs, dynamically adjust the convergence rates of the individual loss terms to achieve balanced convergence while requiring less training data. This advancement is particularly beneficial for inverse problem-solving and structural identification, as the NTK matrix reflects the training progress of both unknown structural and NN parameters. The PIPNNs framework is verified, and its accuracy assessed, on numerical examples of several continuous structural systems, including bars, beams, and plates.
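
The NTK-based weighting idea can be illustrated in a few lines. The sketch below is a hedged toy (a single small network and a contrived ODE residual, not the PIPNNs code): each loss term's weight is derived from the trace of its NTK block J J^T, so that slowly converging terms receive larger weights.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
params = list(net.parameters())

def ntk_trace(residuals):
    """trace(J J^T) for one loss term, J = Jacobian of residuals w.r.t. params."""
    trace = torch.zeros(())
    for r in residuals:
        grads = torch.autograd.grad(r, params, retain_graph=True)
        trace = trace + sum(g.pow(2).sum() for g in grads)
    return trace

x_pde = torch.linspace(0.0, 1.0, 16).reshape(-1, 1).requires_grad_(True)
u = net(x_pde)
u_x = torch.autograd.grad(u.sum(), x_pde, create_graph=True)[0]
res_pde = (u_x + u).flatten()                        # residual of toy ODE u' + u = 0
res_bc = (net(torch.zeros(1, 1)) - 1.0).flatten()    # boundary residual u(0) = 1

tr_pde, tr_bc = ntk_trace(res_pde), ntk_trace(res_bc)
w_pde = ((tr_pde + tr_bc) / tr_pde).detach()         # NTK-balanced weights,
w_bc = ((tr_pde + tr_bc) / tr_bc).detach()           # recomputed periodically
loss = w_pde * res_pde.pow(2).mean() + w_bc * res_bc.pow(2).mean()
```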

January 12, 2024:

Presentation #1: PPDONet: Deep Operator Networks for forward and inverse problems in astronomy, Shunyuan Mao, University of Victoria

Link: https://youtu.be/_IhB9R33zCk?feature=shared

Abstract: This talk presents Shunyuan Mao's research on applying deep operator networks (DeepONets) to fluid dynamics in astronomy. The focus is on protoplanetary disks, the gaseous disks surrounding young stars that are the birthplaces of planets. The physical processes in these disks are governed by the Navier-Stokes (NS) equations. Traditional numerical methods for solving these equations are computationally expensive, especially when modeling many systems for tasks such as exploring parameter spaces or inferring parameters from observations. Shunyuan Mao addresses this issue by using DeepONets to rapidly map PDE parameters to their solutions. His tool, the Protoplanetary Disk Operator Network (PPDONet), significantly reduces the computational cost, predicting field solutions within seconds, a task that would otherwise require hundreds of CPU hours. The utility of the tool is demonstrated in two key applications: 1) its swift solution predictions facilitate the exploration of relationships between PDE parameters and observables extracted from field solutions; 2) when integrated with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), DeepONets effectively address inverse problems by efficiently inferring PDE parameters from unseen solutions.
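
The surrogate-plus-CMA-ES inverse loop is easy to sketch. In the hedged toy below, a Gaussian-bump "field" stands in for the trained PPDONet surrogate; only the pattern of wrapping a fast surrogate in an evolution strategy is illustrated:

```python
import numpy as np
import cma  # pip install cma

rng = np.random.default_rng(0)

def surrogate(theta):
    """Stand-in for a trained DeepONet: parameters -> field on a grid."""
    x = np.linspace(0.0, 1.0, 128)
    return theta[0] * np.exp(-((x - theta[1]) ** 2) / 0.02)

# synthetic "observation" with the true parameters [1.3, 0.6] plus noise
observed = surrogate(np.array([1.3, 0.6])) + 0.01 * rng.normal(size=128)

def misfit(theta):
    return float(np.mean((surrogate(np.asarray(theta)) - observed) ** 2))

es = cma.CMAEvolutionStrategy(x0=[1.0, 0.5], sigma0=0.2)
es.optimize(misfit)       # each evaluation costs milliseconds, not CPU hours
print(es.result.xbest)    # recovered parameters, close to [1.3, 0.6]
```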

Presentation #2: Physics-informed neural networks for solving phonon Boltzmann transport equations, Dr. Tengfei Luo, University of Notre Dame

Link: https://youtu.be/_IhB9R33zCk?feature=shared

Abstract: The phonon Boltzmann transport equation (pBTE) has been proven capable of precisely predicting heat conduction in sub-micron electronic devices. However, numerically solving the pBTE is extremely computationally costly due to its high dimensionality, especially when phonon dispersion and time evolution are considered. In this study, physics-informed neural networks (PINNs) are used to solve the pBTE for multiscale non-equilibrium thermal transport problems both efficiently and accurately. In particular, a PINN framework is devised to predict the phonon energy distribution by minimizing the residuals of the governing equations, boundary conditions, and initial conditions, without the need for any labeled training data. From the phonon energy distribution predicted by the PINN, temperature and heat flux can then be obtained. In addition, geometric parameters, such as the characteristic length scale, are included as part of the input to the PINN, which makes the model capable of predicting heat distribution across different length scales. Beyond the pBTE, Dr. Tengfei Luo has also extended the applicability of the PINN framework to modeling coupled electron-phonon (e-ph) transport; e-ph coupling and transport are ubiquitous in modern electronic devices, and the coupled electron and phonon Boltzmann transport equations (BTEs) hold great potential for simulating thermal transport in metal and semiconductor systems.
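
The key trick, feeding the length-scale parameter to the network so one model covers many scales, can be sketched on a much simpler transport model. The toy below uses a steady 1D gray BGK-type equation with the Knudsen number Kn as an extra input; this is an illustrative assumption, not the dispersion-resolved pBTE framework of the talk:

```python
import numpy as np
import torch

# Gauss-Legendre nodes/weights in the direction cosine mu, for the angular
# average that defines the equilibrium energy distribution.
nodes, weights = np.polynomial.legendre.leggauss(8)

net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))

def e(x, mu, kn):  # inputs are (N, 1) tensors; Kn encodes the length scale
    return net(torch.cat([x, mu, kn], dim=1))

def equilibrium(x, kn):
    """e0(x) = (1/2) * integral of e over mu in [-1, 1], by quadrature."""
    return 0.5 * sum(w * e(x, torch.full_like(x, float(m)), kn)
                     for m, w in zip(nodes, weights))

def bte_residual(x, mu, kn):
    """Steady 1D gray BGK-type model: mu * de/dx = (e0 - e) / Kn."""
    x = x.clone().requires_grad_(True)
    ev = e(x, mu, kn)
    de_dx = torch.autograd.grad(ev.sum(), x, create_graph=True)[0]
    return mu * de_dx - (equilibrium(x, kn) - ev) / kn

x = torch.rand(128, 1)
mu = torch.rand(128, 1) * 2 - 1
kn = 10 ** (torch.rand(128, 1) * 2 - 1)       # Knudsen numbers from 0.1 to 10
loss = bte_residual(x, mu, kn).pow(2).mean()  # plus boundary-condition terms
```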

January 19, 2024:

Presentation #1: U-DeepONet: U-Net Enhanced Deep Operator Network for Geologic Carbon Sequestration, Waleed Diab, Khalifa University

Link: https://youtu.be/AUPou43OuYo?feature=shared

Abstract: The Fourier Neural Operator (FNO) and the Deep Operator Network (DeepONet) are by far the most popular neural operator learning algorithms. FNO enjoys an edge in popularity due to its ease of use, especially with high-dimensional data. However, a lesser-acknowledged feature of DeepONet is its modularity: the user has the flexibility to choose the kind of neural network used in the trunk and/or branch of the DeepONet. This is beneficial because it has been shown many times that different types of problems require different network architectures for effective learning. In this work, Waleed Diab takes advantage of this feature by carefully designing a more efficient neural operator based on the DeepONet architecture. He introduces the U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. In addition, the proposed U-DeepONet trains significantly faster than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming fewer computational resources. Waleed also shows that the U-DeepONet is more data efficient and generalizes better than both the U-FNO and the Fourier-MIONet.
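
The modularity being exploited is simply that the branch of a DeepONet can be any network. The sketch below is a toy stand-in (a two-level convolutional branch with one skip connection, far smaller than the actual U-DeepONet) showing how a U-Net-style branch over the input field combines with an MLP trunk over query coordinates:

```python
import torch

class ToyUNetBranch(torch.nn.Module):
    """Tiny U-Net-flavored encoder-decoder producing p branch coefficients."""
    def __init__(self, p=64):
        super().__init__()
        self.down = torch.nn.Conv2d(1, 8, 3, stride=2, padding=1)
        self.up = torch.nn.ConvTranspose2d(8, 8, 4, stride=2, padding=1)
        self.skip = torch.nn.Conv2d(1, 8, 1)       # U-Net-style skip path
        self.head = torch.nn.Linear(8, p)

    def forward(self, a):                          # a: (batch, 1, H, W) input field
        h = torch.relu(self.up(torch.relu(self.down(a))) + self.skip(a))
        return self.head(h.mean(dim=(2, 3)))       # (batch, p)

branch = ToyUNetBranch()
trunk = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 64))

def deeponet(a, xy):
    """Standard DeepONet combination: branch-trunk inner product."""
    return branch(a) @ trunk(xy).T                 # (batch, n_points)

a = torch.rand(4, 1, 32, 32)   # batch of input fields (e.g. permeability maps)
xy = torch.rand(100, 2)        # query coordinates
pred = deeponet(a, xy)         # (4, 100) predicted field values
```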

January 26, 2024:

Presentation #1: Physics-informed neural networks for quantum control, Dr. Ariel Norambuena, Pontifical Catholic University

Link: https://youtu.be/Ci85LdBM_J0?feature=shared

Abstract: In this talk, Dr. Norambuena will introduce a computational method for optimal quantum control problems using physics-informed neural networks (PINNs). Motivated by recent advances in open quantum systems and quantum computing, he will discuss the relevance of PINNs for finding realistic and robust control fields. Through this talk, we will learn about the flexibility and universality of PINNs for solving different quantum control problems, and about their main advantages compared to standard control techniques.
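
As a flavor of the formulation, a quantum control problem can be cast as a PINN by letting one network represent the state and another the control, trained jointly on the Schrodinger residual plus boundary terms. The sketch below is a hedged toy (a single qubit with one real control field and a state-transfer objective, not the systems treated in the talk):

```python
import torch

state = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 4))   # Re/Im of psi1, psi2
control = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1)) # control field Omega(t)

def schrodinger_residual(t):
    """Residual of i dpsi/dt = H(t) psi with H = [[0, Omega], [Omega, 0]],
    written in real/imaginary components."""
    t = t.clone().requires_grad_(True)
    psi = state(t)
    w = control(t)[:, 0]
    d = [torch.autograd.grad(psi[:, k].sum(), t, create_graph=True)[0][:, 0]
         for k in range(4)]
    re1, im1, re2, im2 = psi[:, 0], psi[:, 1], psi[:, 2], psi[:, 3]
    return torch.stack([d[0] - w * im2, d[1] + w * re2,
                        d[2] - w * im1, d[3] + w * re1], dim=1)

T = 1.0
t0, tT = torch.zeros(1, 1), torch.full((1, 1), T)
opt = torch.optim.Adam(list(state.parameters()) + list(control.parameters()), lr=1e-3)
for _ in range(3000):
    t = torch.rand(128, 1) * T
    loss = (schrodinger_residual(t).pow(2).mean()
            + (state(t0) - torch.tensor([[1.0, 0.0, 0.0, 0.0]])).pow(2).mean()   # psi(0) = |0>
            + (state(tT) - torch.tensor([[0.0, 0.0, 1.0, 0.0]])).pow(2).mean())  # target psi(T) = |1>
    opt.zero_grad(); loss.backward(); opt.step()
```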

February 2, 2024:

Presentation #1: Efficient and Physically Consistent Surrogate Modeling of Chemical Kinetics Using Deep Operator Networks, Anuj Kumar, North Carolina State University

Link: https://youtu.be/UYzU7q37tPk?feature=shared

Abstract: In this talk, Anuj Kumar presents a new combustion-chemistry acceleration scheme he has developed for reacting flow simulations, utilizing deep operator networks (DeepONets). The scheme, implemented on a subset of thermochemical scalars crucial for the chemical system's evolution, advances the current solution vector by adaptive time steps. In addition, the original DeepONet architecture is modified to incorporate the parametric dependence of the stiff ODEs associated with chemical kinetics. Unlike previous DeepONet training approaches, training is conducted over short time windows, using intermediate solutions as initial states. An additional framework of latent-space kinetics identification with a modified DeepONet is proposed, which enhances computational efficiency and widens the applicability of the scheme. The scheme is demonstrated on the "simple" chemical kinetics of hydrogen oxidation and on the more complex chemical kinetics of n-dodecane at high and low temperatures. The proposed framework accurately learns the chemical kinetics and efficiently reproduces species and temperature temporal profiles, and a very large speed-up with good extrapolation capability is observed. A further framework is proposed that incorporates physical constraints, such as total mass and elemental conservation, into the training of the DeepONet for a subset of thermochemical scalars of complex reaction mechanisms. Leveraging the strong correlation between the full set of scalars and the subset, this framework establishes an accurate and physically consistent mapping, demonstrated on the chemical kinetics of CH4 oxidation.
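
The scheme's autoregressive use of the operator surrogate is the part that sketches easily: a network maps the current thermochemical state and a step size to the state one window later, and is applied repeatedly. Everything below is a toy stand-in (random weights, fixed step) rather than the trained model:

```python
import torch

n_scalars = 9   # e.g. a reduced hydrogen-oxidation state plus temperature
step_net = torch.nn.Sequential(torch.nn.Linear(n_scalars + 1, 128), torch.nn.Tanh(),
                               torch.nn.Linear(128, n_scalars))

def advance(state, dt):
    """One surrogate step: maps (state(t), dt) to state(t + dt)."""
    dt_col = torch.full((state.shape[0], 1), dt)
    return state + step_net(torch.cat([state, dt_col], dim=1))  # residual update

# Autoregressive rollout over short windows, mirroring the training protocol
# described above (the real scheme chooses dt adaptively; fixed here).
state = torch.rand(4, n_scalars)
t, t_end, dt = 0.0, 1e-3, 1e-5
while t < t_end:
    state = advance(state, dt)
    t += dt
```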

Presentation #2: SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training, Kazem Meidani, Carnegie Mellon University

Link: https://youtu.be/UYzU7q37tPk?feature=shared

Abstract: In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, existing models typically specialize in either the numeric or the symbolic domain and are usually trained in a supervised manner tailored to specific tasks. This approach neglects the substantial benefits that could arise from a task-agnostic unified understanding of symbolic equations and their numeric counterparts. To bridge the gap, Kazem Meidani introduces SNIP, a Symbolic-Numeric Integrated Pre-training framework, which employs joint contrastive learning between the symbolic and numeric domains, enhancing their mutual similarities in the pre-trained embeddings. By performing latent-space analysis, he observes that SNIP provides cross-domain insights into the representations, revealing that symbolic supervision enhances the embeddings of numeric data and vice versa. He evaluates SNIP across diverse tasks, including symbolic-to-numeric mathematical property prediction and numeric-to-symbolic equation discovery, commonly known as symbolic regression. The results show that SNIP transfers effectively to various tasks, consistently outperforming fully supervised baselines and competing strongly with established task-specific methods, especially in few-shot learning scenarios where available data is limited.
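
The joint contrastive objective is of the standard CLIP-style InfoNCE form; the sketch below shows that loss alone, with the symbolic and numeric encoders (transformers in SNIP) left as assumed black boxes producing the two embedding batches:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(sym_emb, num_emb, temperature=0.07):
    """Align embeddings of matched (symbolic, numeric) pairs within a batch."""
    sym = F.normalize(sym_emb, dim=1)
    num = F.normalize(num_emb, dim=1)
    logits = sym @ num.T / temperature      # (batch, batch) similarity matrix
    labels = torch.arange(sym.shape[0])     # the i-th pair is the positive
    # symmetric cross-entropy: symbolic->numeric and numeric->symbolic
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

# usage with placeholder encoder outputs for a batch of 32 paired samples
loss = contrastive_loss(torch.randn(32, 256), torch.randn(32, 256))
```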

February 9, 2024:

Presentation #1: Neural oscillators for generalization of physics-informed machine learning, Taniya Kapoor, TU Delft

Link: https://youtu.be/zJExHI-MYvE?feature=shared

Abstract: A primary challenge of physics-informed machine learning (PIML) is generalization beyond the training domain, especially when dealing with complex physical problems represented by partial differential equations (PDEs). This work aims to enhance the generalization capabilities of PIML, facilitating practical, real-world applications where accurate predictions in unexplored regions are crucial. Taniya Kapoor leverages the inherent causality and temporal sequential characteristics of PDE solutions to fuse PIML models with recurrent neural architectures based on systems of ordinary differential equations, referred to as neural oscillators. By effectively capturing long-time dependencies and mitigating the exploding and vanishing gradient problem, neural oscillators foster improved generalization in PIML tasks. Extensive experimentation involving time-dependent nonlinear PDEs and biharmonic beam equations demonstrates the efficacy of the proposed approach. Incorporating neural oscillators outperforms existing state-of-the-art methods on benchmark problems across various metrics. Consequently, the proposed method improves the generalization capabilities of PIML, providing accurate solutions for extrapolation and prediction beyond the training data.
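
A neural oscillator cell, in the spirit of coupled oscillatory RNNs, discretizes a second-order ODE for the hidden state; the bounded oscillatory dynamics are what mitigate exploding and vanishing gradients. The cell below is a generic sketch of that family, not the paper's exact architecture:

```python
import torch

class OscillatorCell(torch.nn.Module):
    """One step of y'' = tanh(W [y, y', u]) - gamma*y - eps*y', discretized
    with an explicit step of size dt (y is the hidden state, z = y')."""
    def __init__(self, input_dim, hidden_dim, dt=0.05, gamma=1.0, eps=0.01):
        super().__init__()
        self.W = torch.nn.Linear(2 * hidden_dim + input_dim, hidden_dim)
        self.dt, self.gamma, self.eps = dt, gamma, eps

    def forward(self, u, y, z):
        acc = torch.tanh(self.W(torch.cat([y, z, u], dim=1)))
        z = z + self.dt * (acc - self.gamma * y - self.eps * z)
        y = y + self.dt * z
        return y, z

cell = OscillatorCell(input_dim=1, hidden_dim=32)
y = z = torch.zeros(8, 32)
for u_t in torch.rand(50, 8, 1):   # a length-50 input sequence, batch of 8
    y, z = cell(u_t, y, z)
```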

February 16, 2024:

Presentation #1: DeepONet Based Preconditioning Strategies for Solving Parametric Linear Systems of Equations, Alena Kopanicakova, Brown University

Link: https://youtu.be/_ziSqwA8NzM?feature=shared

Abstract: Dr. Kopanicakova introduces a new class of hybrid preconditioners for solving parametric linear systems of equations. The proposed preconditioners are constructed by hybridizing the deep operator network, namely DeepONet, with standard iterative methods. Exploiting the spectral bias of neural networks, DeepONet-based components are harnessed to address low-frequency error components, while conventional iterative methods are employed to mitigate high-frequency error components. Dr. Kopanicakova's preconditioning framework comprises two distinct hybridization approaches: the direct preconditioning (DP) approach and the trunk basis (TB) approach. In the DP approach, DeepONet is used to approximate the action of an inverse operator on a vector during each preconditioning step. In contrast, the TB approach extracts basis functions from the trained DeepONet to construct a map to a smaller subspace, in which the low-frequency component of the error can be effectively eliminated. Her numerical results demonstrate that the TB approach enhances the convergence of Krylov methods by a large margin compared to standard non-hybrid preconditioning strategies. Moreover, the proposed hybrid preconditioners are robust across a wide range of model parameters and problem resolutions.
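
The TB idea reads naturally as a two-level preconditioner. The sketch below is a simplified stand-in (a random matrix plays the role of the trunk basis extracted from a trained DeepONet, and a 1D Laplacian is the system): a Galerkin coarse solve in span(P) removes low-frequency error, and a Jacobi sweep handles high frequencies.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n, k = 200, 12
# 1D Laplacian stencil as the linear system; P stands in for the trunk basis.
A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
P = np.random.default_rng(0).standard_normal((n, k))
b = np.ones(n)

A_c = P.T @ A @ P           # small Galerkin coarse matrix (k x k)
D_inv = 1.0 / np.diag(A)

def apply_M(r):
    """Two-level action: coarse correction in span(P) + one Jacobi sweep."""
    e_c = P @ np.linalg.solve(A_c, P.T @ r)   # low-frequency correction
    return e_c + D_inv * (r - A @ e_c)        # high-frequency smoothing

x, info = gmres(A, b, M=LinearOperator((n, n), matvec=apply_M))
```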

February 23, 2024:

Presentation #1: Density physics-informed neural networks reveal sources of cell heterogeneity in signal transduction, Jae Kyoung Kim, KAIST

Link: https://youtu.be/dq_-iUrMhiY?feature=shared

Abstract: In this talk, Dr. Jae Kyoung Kim introduces Density Physics-Informed Neural Networks (Density-PINNs) for inferring probability distributions from time-series data. Density-PINNs leverage Rayleigh distributions as kernels and a variational autoencoder for noise filtering. Dr. Kim demonstrates the power of Density-PINNs by analyzing single-cell gene expression data from sixteen promoters regulated by unknown pathways during the antibiotic stress response. By inferring the probability distributions of gene expression patterns, Density-PINNs successfully identify key signaling pathways crucial for consistent cellular responses, offering a valuable strategy for treatment optimization.
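
The kernel construction is the concrete part: an unknown response-time distribution is represented as a convex mixture of Rayleigh kernels whose weights and scales are learned. The snippet below sketches only that representation (toy parameters, no VAE denoising stage):

```python
import numpy as np

def rayleigh_pdf(t, sigma):
    """Rayleigh kernel with scale sigma."""
    return (t / sigma**2) * np.exp(-t**2 / (2 * sigma**2))

def mixture_density(t, weights, sigmas):
    """Density as a convex combination of Rayleigh kernels; a softmax keeps
    the mixture weights positive and summing to one during optimization."""
    w = np.exp(weights) / np.exp(weights).sum()
    return sum(wi * rayleigh_pdf(t, si) for wi, si in zip(w, sigmas))

t = np.linspace(0.01, 10.0, 200)
rho = mixture_density(t, weights=np.array([0.2, -0.1]), sigmas=[1.0, 3.0])
```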

March 1, 2024:

Presentation #1: Lax pairs informed neural networks solving integrable systems, Chen Yong, East China Normal University

Link: https://youtu.be/rKvekSv8j0Q?feature=shared

Abstract: Lax pairs are one of the most important features of integrable systems. In this talk, Dr. Chen proposes Lax pairs informed neural networks (LPINNs) tailored for integrable systems with Lax pairs, built on novel network architectures and loss functions and comprising LPINN-v1 and LPINN-v2. The most noteworthy advantage of LPINN-v1 is that it can transform the solving of a complex integrable system into the solving of its simpler Lax pair, thereby simplifying the study of integrable systems; it not only efficiently solves data-driven localized wave solutions, but also obtains the spectral parameters and corresponding spectral functions in the Lax pair. Building on LPINN-v1, LPINN-v2 additionally incorporates the compatibility condition (zero-curvature equation) of the Lax pair; its major advantage is the ability to solve and explore high-accuracy data-driven localized wave solutions and associated spectral problems for all integrable systems with Lax pairs. The numerical experiments in this work involve several important and classic low- and high-dimensional integrable systems, with abundant localized wave solutions and their Lax pairs, including the soliton of the Korteweg-de Vries (KdV) equation and the modified KdV equation, the rogue wave solution of the nonlinear Schrödinger equation, the kink solution of the sine-Gordon equation, the non-smooth peakon solution of the Camassa-Holm equation and the pulse solution of the short pulse equation, as well as the line-soliton solution of the Kadomtsev-Petviashvili equation and the lump solution of the high-dimensional KdV equation. The innovation of this work lies in the pioneering integration of the Lax pairs of integrable systems into deep neural networks, presenting a fresh methodology and pathway for investigating data-driven localized wave solutions and the spectral problems of Lax pairs.
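
To make the Lax-pair loss concrete: for the KdV equation, the spatial half of the Lax pair is the spectral problem -phi_xx + u*phi = lambda*phi, and the LPINN-v1 idea is to train networks for u and phi together with a trainable lambda on its residual. The sketch below is a toy of that single ingredient, not Dr. Chen's LPINN code:

```python
import torch

u_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
phi_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
lam = torch.nn.Parameter(torch.tensor(-0.5))   # trainable spectral parameter

def spectral_residual(xt):
    """Residual of -phi_xx + u*phi - lam*phi; xt columns are (x, t)."""
    xt = xt.clone().requires_grad_(True)
    phi = phi_net(xt)
    dphi = torch.autograd.grad(phi.sum(), xt, create_graph=True)[0][:, :1]
    d2phi = torch.autograd.grad(dphi.sum(), xt, create_graph=True)[0][:, :1]
    return -d2phi + u_net(xt) * phi - lam * phi

xt = torch.rand(256, 2) * 2.0 - 1.0
res = spectral_residual(xt)
norm = (phi_net(xt).pow(2).mean() - 1.0).pow(2)   # keeps phi away from zero
loss = res.pow(2).mean() + norm   # optimized over u_net, phi_net, and lam,
                                  # together with data and PDE loss terms
```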

March 8, 2024:

Presentation #1: Can Physics-Informed Neural Networks beat the Finite Element Method?, Jonas Latz, University of Manchester

Link: https://youtu.be/bgsqCTgF24w?feature=shared

Abstract: Partial differential equations (PDEs) play a fundamental role in the mathematical modelling of many processes and systems in the physical, biological and other sciences. To simulate such processes and systems, the solutions of PDEs often need to be approximated numerically; the finite element method (FEM), for instance, is a standard methodology. The recent success of deep neural networks at various approximation tasks has motivated their use in the numerical solution of PDEs. These so-called physics-informed neural networks (PINNs) and their variants have been shown to successfully approximate a large range of partial differential equations. So far, PINNs and the FEM have mainly been studied in isolation from each other. In this work, Dr. Latz compares the two methodologies in a systematic computational study. He employed both methods to numerically solve various linear and nonlinear partial differential equations, namely Poisson in 1D, 2D, and 3D, Allen-Cahn in 1D, and semilinear Schrödinger in 1D and 2D, and then compared computational costs and approximation accuracies. In terms of solution time and accuracy, PINNs were not able to outperform the finite element method in this study, although in some experiments they were faster at evaluating the solved PDE.

Presentation #2: On flows and diffusions: from many-body Fokker-Planck to stochastic interpolants, Nicholas Boffi, Courant Institute of Mathematical Sciences

Link: https://youtu.be/bgsqCTgF24w?feature=shared

Abstract: Given a stochastic differential equation, its corresponding Fokker-Planck equation is generically intractable to solve, because its high dimensionality prohibits the application of standard numerical techniques. In this talk, Dr. Boffi will exploit an analogy between the Fokker-Planck equation and modern generative models from machine learning to develop an algorithm for its solution in high dimension. The method enables the computation of previously intractable quantities of interest, such as the entropy production rate of active matter systems, which quantifies the magnitude of nonequilibrium effects. Dr. Boffi will then highlight how insight from the Fokker-Planck equation facilitates the development of a new class of generative models known as stochastic interpolants, which generalize state-of-the-art diffusion models in several key ways that can be leveraged to improve practical performance.
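
The analogy can be made concrete through the probability-flow ODE: the Fokker-Planck solution is transported by the deterministic velocity b(x, t) - D * grad log rho(x, t), with the score grad log rho supplied by a learned model. The sketch below uses a known Gaussian score as a stand-in for such a model:

```python
import torch

def flow_step(x, t, dt, drift, score, D=1.0):
    """One Euler step of the probability-flow ODE dX/dt = b(X,t) - D*score(X,t)."""
    return x + dt * (drift(x, t) - D * score(x, t))

# Toy usage: Ornstein-Uhlenbeck drift with its exact stationary Gaussian score
# standing in for a learned score model. At equilibrium the two terms cancel,
# the flow velocity is zero, and the entropy production rate vanishes.
drift = lambda x, t: -x
score = lambda x, t: -x
x = torch.randn(1000, 2)          # samples representing rho at t = 0
for k in range(100):
    x = flow_step(x, k * 0.01, 0.01, drift, score)
```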

March 15, 2024:

Presentation #1: A Python module for easily and efficiently solving problems with the Theory of Functional Connections, Carl Leake, Texas A&M University

Link: https://youtu.be/qDB66Vt1JH4?feature=shared

Abstract: Theory of Functional Connections (TFC) is a functional interpolation framework that can be used to solve a wide variety of problems, e.g., boundary value problems. The tfc Python module, the focus of this talk, is designed to help its users solve problems with TFC easily and efficiently: easily here refers to the time it takes the user to write a Python script to solve their problem and efficiently refers to the computational efficiency of said script. The tfc module leverages the automatic differentiation and just-in-time compilation capabilities of the JAX library to do this. In addition, the module provides other convenience, quality-of-life, and sanity-checking capabilities that reduce/alleviate the most common errors users make when numerically solving problems with TFC.
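
To illustrate the style of problem the module targets, here is a generic TFC constrained expression written directly in JAX (not the tfc package's own API): for y'' = -y on [0, 1] with y(0) = 0 and y(1) = sin(1), the constrained expression satisfies the boundary conditions exactly for any free function g, and only the residual is minimized.

```python
import jax
import jax.numpy as jnp

y0, y1 = 0.0, float(jnp.sin(1.0))

def constrained_expr(x, g):
    """y(x) = g(x) + (1-x)*(y0 - g(0)) + x*(y1 - g(1)) meets both BCs for any g."""
    return g(x) + (1 - x) * (y0 - g(0.0)) + x * (y1 - g(1.0))

def residual(theta, x):
    g = lambda s: jnp.polyval(theta, s)        # free function: a polynomial
    y = lambda s: constrained_expr(s, g)
    return jax.grad(jax.grad(y))(x) + y(x)     # residual of y'' + y = 0

xs = jnp.linspace(0.0, 1.0, 32)
res_vec = lambda th: jax.vmap(lambda x: residual(th, x))(xs)

# The residual is linear in theta here, so it can be minimized in a single
# linear least-squares solve, as TFC typically does for linear problems.
theta0 = jnp.zeros(6)
A = jax.jacobian(res_vec)(theta0)              # (32, 6) design matrix
b = -res_vec(theta0)
theta, *_ = jnp.linalg.lstsq(A, b)

y = jax.vmap(lambda x: constrained_expr(x, lambda s: jnp.polyval(theta, s)))(xs)
err = jnp.max(jnp.abs(y - jnp.sin(xs)))        # exact solution is sin(x)
```

The tfc module automates the construction of such constrained expressions (with Chebyshev-type bases and JAX-compiled derivatives) and handles nonlinear problems by nonlinear least squares, which the plain linear solve above only hints at.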

March 22, 2024:

Presentation #1: Domain decomposition for physics-informed neural networks, Alexander Heinlein, Delft University of Technology

Link: https://youtu.be/087Y9pLFNqI?feature=shared

Abstract: Physics-informed neural networks (PINNs) are a class of methods for solving differential equation-based problems using a neural network as the discretization. Introduced by Raissi et al. in [6], they combine the pioneering collocation approach for neural network functions introduced by Lagaris et al. in [4] with the incorporation of data via an additional loss term. PINNs are very versatile, as they do not require an explicit mesh, allow for the solution of parameter identification problems, and are well suited for high-dimensional problems. However, the training of a PINN model is generally not very robust and may require extensive hyperparameter tuning. In particular, due to the so-called spectral bias, training PINN models is notoriously difficult when scaling up to large computational domains as well as for multiscale problems. In this talk, overlapping domain decomposition-based techniques for PINNs are discussed. Compared with other domain decomposition techniques for PINNs, in the finite basis physics-informed neural networks (FBPINNs) approach [5], the coupling is done implicitly via the overlapping regions and does not require additional loss terms. Using the classical Schwarz domain decomposition framework, a very general framework that also allows for multi-level extensions can be introduced [1]. The method outperforms classical PINNs on several types of problems, including multiscale problems, in terms of both accuracy and efficiency. Furthermore, the combination of the multi-level domain decomposition strategy with multifidelity stacking PINNs [3], as introduced in [2] for time-dependent problems, will be discussed. It can be observed that combining multifidelity stacking PINNs with a domain decomposition in time clearly improves the reference results without a domain decomposition. (A minimal sketch of the FBPINN-style window coupling is given after the references below.)

References:
[1] Dolean, Victorita, et al. "Multilevel domain decomposition-based architectures for physics-informed neural networks." arXiv preprint arXiv:2306.05486 (2023).
[2] Heinlein, Alexander, et al. "Multifidelity domain decomposition-based physics-informed neural networks for time-dependent problems." arXiv preprint arXiv:2401.07888 (2024).
[3] Howard, Amanda A., et al. "Stacked networks improve physics-informed training: applications to neural networks and deep operator networks." arXiv preprint arXiv:2311.06483 (2023).
[4] Lagaris, Isaac E., Aristidis Likas, and Dimitrios I. Fotiadis. "Artificial neural networks for solving ordinary and partial differential equations." IEEE Transactions on Neural Networks 9.5 (1998): 987-1000.
[5] Moseley, Ben, Andrew Markham, and Tarje Nissen-Meyer. "Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations." Advances in Computational Mathematics 49.4 (2023): 62.
[6] Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations." Journal of Computational Physics 378 (2019): 686-707.
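
The implicit coupling in FBPINNs [5] amounts to window functions that form a partition of unity over overlapping subdomains, with the global solution assembled as a window-weighted sum of subdomain networks, so no interface loss terms are needed. A minimal 1D, two-subdomain sketch (not the papers' code):

```python
import torch

# Two subdomain networks on [0, 1], overlapping near x = 0.5.
nets = [torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1)) for _ in range(2)]
centers, half_width = [0.25, 0.75], 0.35

def window(x, c):
    """Smooth bump supported around [c - half_width, c + half_width]."""
    return (torch.sigmoid((x - c + half_width) / 0.05)
            * torch.sigmoid((c + half_width - x) / 0.05))

def u_global(x):
    """Window-weighted sum of subdomain networks; the normalized windows
    form a partition of unity, so the coupling is implicit in the overlap."""
    ws = [window(x, c) for c in centers]
    total = ws[0] + ws[1]
    return sum(w / total * net(x) for w, net in zip(ws, nets))

# u_global is then trained with an ordinary PINN loss; no interface terms.
x = torch.rand(64, 1)
print(u_global(x).shape)   # (64, 1)
```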

Presentation #2: Physics-based and data-driven methods for precision medicine in computational cardiology, Matteo Salva

Link: