CRUNCH Seminars

January 5, 2024:

Presentation #1: Neural Operator Learning Enhanced Physics-informed Neural Networks for solving differential equations with sharp solutions, Professor Mao, Xiamen University

Link: https://youtu.be/7NNyjWxp2zQ?feature=shared

Abstract: In the talk, I shall present some numerical results for the forward and inverse problems of PDEs with sharp solutions by using deep neural network-based methods. In particular, we developed a deep operator learning enhanced PINN for PDEs with sharp solutions, which can be asymptotically approached by problems with smooth solutions. First, we solve the smooth problems using deep operator learning, adopting the DeepONet framework. Then we combine the pre-trained DeepONet and the PINN to solve the sharp problem. We demonstrate the effectiveness of the present method on several equations, including the viscous Burgers equation, cavity flow, and the Navier-Stokes equations. Furthermore, we solve ill-posed problems with insufficient boundary conditions using the present method.
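
As a rough illustration of this workflow (not the speaker's code), the sketch below assumes a DeepONet pre-trained on smooth problems and couples it to a PINN for a sharp viscous Burgers problem; the regularization-style coupling, the network sizes, and names such as branch, trunk, and nu are assumptions made for the example.

import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.Tanh()]
    return nn.Sequential(*layers[:-1])            # drop the final activation

class DeepONet(nn.Module):
    # Maps an input function sampled at m sensors plus a query point (x, t) to u(x, t).
    def __init__(self, m=100, p=64):
        super().__init__()
        self.branch = mlp([m, 128, 128, p])       # encodes the input function
        self.trunk = mlp([2, 128, 128, p])        # encodes the query point (x, t)
    def forward(self, u_sensors, xt):
        return (self.branch(u_sensors) * self.trunk(xt)).sum(-1, keepdim=True)

deeponet = DeepONet()            # assume this has been pre-trained on smooth problems
pinn = mlp([2, 64, 64, 64, 1])   # standard PINN taking (x, t) -> u
nu = 0.01 / torch.pi             # viscosity of the sharp target problem (assumed value)

def pde_residual(net, xt):
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx              # viscous Burgers residual

def total_loss(xt_colloc, u_sensors, xt_data, u_data):
    res = pde_residual(pinn, xt_colloc).pow(2).mean()   # PDE residual on the sharp problem
    # Pull the PINN toward the pre-trained DeepONet's smooth prediction,
    # and fit the available boundary/initial data.
    smooth_prior = (pinn(xt_data) - deeponet(u_sensors, xt_data).detach()).pow(2).mean()
    data_fit = (pinn(xt_data) - u_data).pow(2).mean()
    return res + smooth_prior + data_fit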

Presentation #2: Physics-Informed Parallel Neural Networks with Self-Adaptive Loss Weighting for the Identification of Structural Systems, Rui Zhang, Pennsylvania State University

Link: https://youtu.be/7NNyjWxp2zQ?feature=shared

Abstract: We have developed a physics-informed parallel neural networks (PIPNNs) framework for the identification of continuous structural systems described by a system of partial differential equations. PIPNNs integrate the physics of the system into the loss function of the NNs, enabling the simultaneous updating of both unknown structural and NN parameters while minimizing the loss function. The PIPNNs framework accommodates structural discontinuities by dividing the computational domain into subdomains, each uniquely represented through a parallelized and interconnected NN architecture. Furthermore, the PIPNNs framework incorporates a self-adaptive weighted loss function based on Neural Tangent Kernel (NTK) theory. The self-adaptive weights, determined from the eigenvalues of the NTK matrix of the PIPNNs, dynamically adjust the convergence rates of each loss term to achieve balanced convergence while requiring less training data. This advancement is particularly beneficial for inverse problem-solving and structural identification, as the NTK matrix reflects the training progress of both unknown structural and NN parameters. The PIPNNs framework is verified and its accuracy assessed through numerical examples of several continuous structural systems, including bars, beams, and plates.
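
A minimal sketch of the NTK-trace idea behind such self-adaptive weights is given below (assumptions: a handful of scalar residual terms and a weight rule based on NTK traces; the eigenvalue-based rule used in PIPNNs may differ in detail).

import torch

def ntk_trace(residuals, params):
    # trace(J J^T) equals the sum over samples of the squared gradient norm;
    # a naive loop over residual samples is adequate for a small illustration.
    tr = torch.zeros(())
    for r in residuals:
        grads = torch.autograd.grad(r, params, retain_graph=True, allow_unused=True)
        tr = tr + sum(g.pow(2).sum() for g in grads if g is not None)
    return tr

def self_adaptive_weights(residual_terms, params):
    # residual_terms: list of 1-D tensors, one per loss term (PDE residual,
    # boundary terms, measurement misfit, ...). For PDE-residual terms the spatial
    # derivatives must be computed with create_graph=True so these gradients exist.
    traces = [ntk_trace(r, params) for r in residual_terms]
    total = sum(traces)
    # Terms whose NTK block has a small trace converge slowly, so they receive
    # larger weights; this balances the convergence rates across terms.
    return [(total / (t + 1e-12)).detach() for t in traces]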

January 12, 2024:

Presentation #1: PPDONet: Deep Operator Networks for forward and inverse problems in astronomy, Shunyuan Mao, University of Victoria

Link: https://youtu.be/_IhB9R33zCk?feature=shared

Abstract: This talk presents our research on applying Deep Operator Networks (DeepONets) to fluid dynamics in astronomy. The focus is specifically on protoplanetary disks — the gaseous disks surrounding young stars, which are the birthplaces of planets. The physical processes in these disks are governed by Navier-Stokes (NS) equations. Traditional numerical methods for solving these equations are computationally expensive, especially when modeling multiple systems for tasks such as exploring parameter spaces or inferring parameters from observations. We address this issue by using DeepONets to rapidly map PDE parameters to their solutions. Our development, Protoplanetary Disk Operator Network (PPDONet), significantly reduces computational cost, predicting field solutions within seconds, a task that would typically require hundreds of CPU hours. The utility of this tool is demonstrated in two key applications: 1) Its swift solution predictions facilitate the exploration of relationships between PDE parameters and observables extracted from field solutions. 2) When integrated with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), DeepONets effectively address inverse problems by efficiently inferring PDE parameters from unseen solutions.
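
To make the second application concrete, here is a toy sketch (not PPDONet itself) of pairing a trained parameter-to-field surrogate with CMA-ES to recover parameters from an observed field; the surrogate function and parameter names are stand-ins invented for the example, and the cma package (pip install cma) supplies the optimizer.

import numpy as np
import cma

def surrogate(params):
    # Placeholder for a trained network mapping parameters (e.g. viscosity,
    # planet-to-star mass ratio) to a 1-D field profile; abs() keeps the toy
    # model well behaved for negative trial values.
    alpha, q = abs(params[0]), params[1]
    x = np.linspace(0.0, 1.0, 256)
    return np.exp(-x / (alpha + 1e-3)) * (1.0 + q * np.sin(8 * np.pi * x))

observed = surrogate([0.05, 0.3])   # synthetic "observation" with known truth

def misfit(params):
    return float(np.mean((surrogate(params) - observed) ** 2))

es = cma.CMAEvolutionStrategy([0.1, 0.1], 0.2)   # initial guess and step size
while not es.stop():
    candidates = es.ask()                        # sample candidate parameter sets
    es.tell(candidates, [misfit(c) for c in candidates])
print("recovered parameters:", es.result.xbest)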

February 2, 2024:

Presentation #1: Efficient and Physically Consistent Surrogate Modeling of Chemical Kinetics Using Deep Operator Networks, Anuj Kumar, North Carolina State University

Link: https://youtu.be/UYzU7q37tPk?feature=shared

Abstract: In the talk, we’ll explore a new combustion chemistry acceleration scheme we’ve developed for reacting flow simulations, utilizing deep operator networks (DeepONets). The scheme, implemented on a subset of thermochemical scalars crucial for the chemical system’s evolution, advances the current solution vector by adaptive time steps. In addition, the original DeepONet architecture is modified to incorporate the parametric dependence of the stiff ODEs associated with chemical kinetics. Unlike previous DeepONet training approaches, our training is conducted over short time windows, using intermediate solutions as initial states. An additional framework of latent-space kinetics identification with the modified DeepONet is proposed, which enhances computational efficiency and widens the applicability of the proposed scheme. The scheme is demonstrated on the “simple” chemical kinetics of hydrogen oxidation and the more complex chemical kinetics of n-dodecane at high and low temperatures. The proposed framework accurately learns the chemical kinetics and efficiently reproduces species and temperature temporal profiles. Moreover, a very large speed-up with good extrapolation capability is also observed with the proposed scheme. An additional framework is proposed that incorporates physical constraints, such as total mass and elemental conservation, into the training of the DeepONet for a subset of thermochemical scalars of complex reaction mechanisms. Leveraging the strong correlation between the full set and the subset of scalars, the framework establishes an accurate and physically consistent mapping. The framework is demonstrated on the chemical kinetics of CH4 oxidation.
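
A small sketch of the short-window, autoregressive use of such an operator surrogate follows (assumptions: a trained model kinetics_onet that maps the current thermochemical state and a step size to the state one window later; the names and the adaptive-step schedule are placeholders, not the speaker's implementation).

import torch

@torch.no_grad()
def rollout(kinetics_onet, y0, dt_schedule):
    # Advance the thermochemical state y0 through a sequence of adaptive steps.
    # Each window starts from the intermediate solution, mirroring training over
    # short time windows with intermediate solutions as initial states.
    y, trajectory = y0, [y0]
    for dt in dt_schedule:
        y = kinetics_onet(y, torch.as_tensor([dt], dtype=y.dtype))
        # A projection or renormalization step could be applied here to enforce
        # constraints such as total mass and elemental conservation.
        trajectory.append(y)
    return torch.stack(trajectory)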

Presentation #2: SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training, Kazem Meidani, Carnegie Mellon University

Abstract: In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, existing models typically specialize in either numeric or symbolic domains and are usually trained in a supervised manner tailored to specific tasks. This approach neglects the substantial benefits that could arise from a task-agnostic unified understanding between symbolic equations and their numeric counterparts. To bridge the gap, we introduce SNIP, a Symbolic-Numeric Integrated Pre-training, which employs joint contrastive learning between symbolic and numeric domains, enhancing their mutual similarities in the pre-trained embeddings. By performing latent space analysis, we observe that SNIP provides cross-domain insights into the representations, revealing that symbolic supervision enhances the embeddings of numeric data and vice versa. We evaluate SNIP across diverse tasks, including symbolic-to-numeric mathematical property prediction and numeric-to-symbolic equation discovery, commonly known as symbolic regression. Results show that SNIP effectively transfers to various tasks, consistently outperforming fully supervised baselines and competing strongly with established task-specific methods, especially in few-shot learning scenarios where available data is limited.
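
The joint contrastive objective described here can be illustrated with a short CLIP-style sketch (not the SNIP code); the symbolic and numeric encoders are assumed to produce paired (batch, d) embeddings, and the temperature is an arbitrary value chosen for the example.

import torch
import torch.nn.functional as F

def contrastive_loss(sym_emb, num_emb, temperature=0.07):
    sym = F.normalize(sym_emb, dim=-1)            # unit-norm symbolic embeddings
    num = F.normalize(num_emb, dim=-1)            # unit-norm numeric embeddings
    logits = sym @ num.t() / temperature          # pairwise cosine similarities
    targets = torch.arange(sym.size(0), device=sym.device)
    # Symmetric InfoNCE: each symbolic view should be most similar to its own
    # numeric view, and vice versa, which aligns the two embedding spaces.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))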