CRUNCH Seminars
November 8, 2024:
Presentation #1: Autonomous Materials Discovery: Bridging Experiments and Theory Through Optimized Rewards, Sergei Kalinin, University of Tennessee, Knoxville
Link: https://youtu.be/SGPuj3yutm8?feature=shared
Abstract: The trajectory of scientific research worldwide is guided by long-term objectives, ranging from curiosity-driven fundamental discoveries in physics to the applied challenges of enhancing materials and devices for a broad spectrum of applications. The development of automated and cloud labs, rapid cloudification of existing experimental tools, and broad adoption of edge control enabled by Python APIs necessitate developing the principles for orchestrating the operation of these tools. The implementation of autonomous experimental workflows in automated and hybrid laboratories in turn requires the establishment of robust reward functions and their seamless integration across various domains. Should these reward functions be universally established, the entirety of experimental efforts could be conceptualized as optimization problems. Here, we present our latest advancements in the development of autonomous research systems based on electron and scanning probe microscopy, as well as for automated materials synthesis. We identify several categories of reward functions that are discernible during the experimental process, encompassing fundamental physical discoveries, the elucidation of correlative structure-property relationships, and the optimization of microstructures. The operationalization of these reward functions on autonomous microscopes is demonstrated, as well as the need for, and strategies for, human-in-the-loop intervention. Utilizing these classifications, we construct a framework that facilitates the integration of multiple optimization workflows, demonstrated through the synchronous orchestration of diverse characterization tools across a shared chemical space, and the concurrent navigation of costly experiments and models that adjust for epistemic uncertainties between them. Our findings lay the groundwork for the integration of multiple discovery cycles, ranging from rapid, laboratory-level exploration within relatively low-dimensional spaces and strong basic physics priors to more gradual, manufacturing-level optimization in highly complex parameter spaces underpinned by poorly known and phenomenological physical models. A very tempting opportunity this research opens is the further use of LLMs to create probabilistic reward functions.
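To make the "experiments as optimization problems" framing concrete, the sketch below shows a minimal reward-driven acquisition loop: a Gaussian-process surrogate is fit to the rewards measured so far and an upper-confidence-bound rule picks the next experiment. The one-dimensional control variable, the RBF kernel width, and the measure_reward stand-in are illustrative assumptions, not the speaker's actual microscopy or synthesis workflow.

```python
import numpy as np

def rbf_kernel(A, B, length=0.2):
    """Squared-exponential kernel between two sets of scalar control settings."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def measure_reward(x):
    """Placeholder 'experiment'; in practice this is a microscope or synthesis run."""
    return np.sin(6 * x) + 0.1 * np.random.randn()

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 201)        # discretized control parameter
X = list(rng.uniform(0.0, 1.0, 3))             # a few seed experiments
y = [measure_reward(x) for x in X]

for step in range(20):                         # autonomous experiment loop
    Xa, ya = np.array(X), np.array(y)
    K = rbf_kernel(Xa, Xa) + 1e-4 * np.eye(len(Xa))
    Ks = rbf_kernel(candidates, Xa)
    mean = Ks @ np.linalg.solve(K, ya)
    var = 1.0 - np.einsum("ij,ij->i", Ks @ np.linalg.inv(K), Ks)
    ucb = mean + 2.0 * np.sqrt(np.maximum(var, 0.0))   # reward-seeking acquisition
    x_next = candidates[np.argmax(ucb)]
    X.append(x_next)
    y.append(measure_reward(x_next))           # run the (simulated) experiment

print("best control setting found:", X[int(np.argmax(y))])
```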
November 1, 2024:
Presentation #1: DeepNetBeam: A Framework for the Analysis of Functionally Graded Porous Beams and a Physics-Informed Neural Operator Based on the Principle of Least Action, Mohammad Sadegh Eshaghi Khanghah, Leibniz University Hannover
Link: https://youtu.be/yaU5gWIWn_M?feature=shared
Abstract:
Topic 1:
DeepNetBeam: A Framework for the Analysis of Functionally Graded Porous Beams
Abstract: This study investigates different Scientific Machine Learning (SciML) approaches for the analysis of functionally graded (FG) porous beams and compares them under a new framework. The beam material properties are assumed to vary as an arbitrary continuous function. The methods consider the output of a neural network/operator as an approximation to the displacement fields and derive the equations governing beam behavior based on the continuum formulation. The methods are implemented in the framework and formulated by three approaches: (a) the vector approach leads to a Physics-Informed Neural Network (PINN), (b) the energy approach brings about the Deep Energy Method (DEM), and (c) the data-driven approach, which results in a class of Neural Operator methods. Finally, a neural operator has been trained to predict the response of the porous beam with functionally graded material under any porosity distribution pattern and any arbitrary traction condition. The results are validated with analytical and numerical reference solutions. The data and code accompanying this manuscript will be publicly available at https://github.com/eshaghi-ms/DeepNetBeam.
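The contrast between approaches (a) and (b) can be illustrated on a much simpler 1D model problem than the functionally graded beam. The sketch below assumes homogeneous Dirichlet conditions that are hard-enforced; the network sizes, the manufactured load, and the Monte Carlo quadrature are placeholders, and the framework in the talk operates on the beam's continuum formulation rather than this toy equation.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def u(x):
    # hard-enforce u(0) = u(1) = 0 so both losses only see the interior physics
    return x * (1.0 - x) * net(x)

f = lambda x: torch.pi ** 2 * torch.sin(torch.pi * x)   # manufactured load for -u'' = f

def pinn_loss(x):
    """(a) vector/strong-form approach: mean squared PDE residual u'' + f."""
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    return ((uxx + f(x)) ** 2).mean()

def dem_loss(x):
    """(b) energy approach (DEM): minimize the potential energy of the solution."""
    val = u(x)
    ux = torch.autograd.grad(val.sum(), x, create_graph=True)[0]
    return (0.5 * ux ** 2 - f(x) * val).mean()           # Monte Carlo quadrature on [0,1]

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(2000):
    x = torch.rand(256, 1, requires_grad=True)           # collocation points
    opt.zero_grad()
    loss = dem_loss(x)        # swap in pinn_loss(x) for the strong-form variant
    loss.backward()
    opt.step()
```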
Topic 2:
Variational Physics-Informed Neural Operator (VINO) for Learning Partial Differential Equations
This study proposes the Variational Physics-Informed Neural Operator (VINO), a neural operator method designed for solving Partial Differential Equations (PDEs) by minimizing the variational form of PDEs. Unlike existing methods such as the physics-informed neural operator (PINO) and physics-informed DeepONets, which rely on data-driven training, this method can be trained without any paired input-output data, resulting in improved performance and accuracy. By enabling domain discretization, the variational form allows VINO to overcome key challenges in neural operators, including numerical integration and differentiation in loss computation, through analytical integration and differentiation over a discretized domain.
Presentation #2: Physically Constrained Regression for Equation Inference, Dr. Roman O Grigoriev, Georgia Tech
Link: https://youtu.be/yaU5gWIWn_M?feature=shared
Abstract: Generative machine learning offers unprecedented capabilities to discover physical laws encoded in the form of easily interpretable evolution equations and constitutive relations from noisy and occasionally incomplete data. This talk will discuss a general equation inference framework that can be used to synthesize a complete hydrodynamic model of continuous or discrete systems such as fluids and active matter. Dr. Grigoriev will illustrate the power and flexibility of this framework using several applications, including inference of a complete system of governing equations describing an experimental active nematic system and validation of direct numerical simulations. In conclusion, Dr. Grigoriev will discuss how this framework can be used for subgrid-scale modeling of multi-scale phenomena such as fluid turbulence.
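As a rough illustration of data-driven equation inference in general (sparse regression over a library of candidate terms, not Dr. Grigoriev's specific physically constrained formulation), the toy sketch below recovers a scalar evolution law from noisy samples using sequentially thresholded least squares; the candidate library and threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of a state u and its time derivative (the "data").
u = rng.uniform(-1.0, 1.0, 400)
dudt = -0.5 * u + 2.0 * u**3 + 1e-3 * rng.standard_normal(u.size)   # hidden truth

# Candidate library of terms that the evolution equation might contain.
library = np.column_stack([np.ones_like(u), u, u**2, u**3, u**4])
names = ["1", "u", "u^2", "u^3", "u^4"]

# Sequentially thresholded least squares: fit, zero out small coefficients, refit.
coef = np.linalg.lstsq(library, dudt, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    keep = ~small
    coef[keep] = np.linalg.lstsq(library[:, keep], dudt, rcond=None)[0]

print("inferred du/dt =",
      " + ".join(f"{c:.2f}*{n}" for c, n in zip(coef, names) if c != 0.0))
```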
October 25, 2024:
Presentation #1: Physics-Informed Holomorphic Neural Networks (PIHNNs): Solving 2D linear elasticity problems, Matteo Calafa, Aarhus University
Link: https://youtu.be/37mDjIVfSho?feature=shared
Abstract: Matteo introduces Physics-Informed Holomorphic Neural Networks (PIHNNs), an innovative approach for solving boundary value problems characterized by solutions expressible through holomorphic functions. His focus is on plane linear elasticity, where he leverages the Kolosov-Muskhelishvili representation to develop complex-valued neural networks capable of fulfilling stress and displacement boundary conditions while inherently satisfying the governing equations. The network architecture is carefully designed to ensure that approximations respect the Cauchy-Riemann conditions through specific choices of layers and activation functions. Additionally, Matteo proposes a novel weight initialization technique to address the challenge of vanishing or exploding gradients during training. Compared to standard Physics-Informed Neural Networks (PINNs), this approach offers several advantages, including more efficient training—requiring evaluations only on the domain’s boundary—lower memory requirements due to a reduced number of training points, and the smoothness of the learned solution.
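A minimal sketch of the core architectural idea, under the assumption that stacking complex-linear layers with a holomorphic activation (here exp) keeps the whole map holomorphic, so its output respects the Cauchy-Riemann conditions by construction; the layer sizes are placeholders, and the boundary-loss construction from the two Kolosov-Muskhelishvili potentials is only indicated in a comment.

```python
import torch

torch.manual_seed(0)

class HolomorphicMLP(torch.nn.Module):
    """Complex-valued MLP made of complex-linear maps and a holomorphic activation,
    so z -> phi(z) is holomorphic by construction and its real/imaginary parts
    automatically satisfy the Cauchy-Riemann (and hence Laplace) equations."""
    def __init__(self, width=16, depth=3):
        super().__init__()
        dims = [1] + [width] * depth + [1]
        self.weights = torch.nn.ParameterList(
            [torch.nn.Parameter(0.1 * torch.randn(a, b, dtype=torch.cfloat))
             for a, b in zip(dims[:-1], dims[1:])])
        self.biases = torch.nn.ParameterList(
            [torch.nn.Parameter(torch.zeros(b, dtype=torch.cfloat)) for b in dims[1:]])

    def forward(self, z):
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            z = torch.exp(z @ W + b)    # exp is holomorphic, unlike ReLU/abs/conjugate
        return z @ self.weights[-1] + self.biases[-1]

model = HolomorphicMLP()
xy = torch.rand(8, 2)                              # boundary points (x, y)
z = torch.complex(xy[:, :1], xy[:, 1:])            # z = x + i*y
phi = model(z)                                     # candidate complex potential
# Training would only penalize the mismatch of stresses/displacements built from
# phi (and a second potential) on the boundary; the governing equations inside
# the domain hold automatically.
```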
October 18, 2024:
Presentation #1: AttnPINNs: Physics-informed neural networks under the self-attention mechanism for solving partial differential equations, Dr. Zhenya Yan, Chinese Academy of Sciences
Link: https://youtu.be/vyz-tz6sKEY?feature=shared
Abstract: Physics-informed neural networks (PINNs) have been widely applied in solving various physical models. However, they have a considerable probability of failure when simulating dynamical systems with multi-scale, high-frequency, or chaotic behaviors. The possible reason might be that the majority of PINN methodologies regard space-time as a unified entity, thereby neglecting temporal dependencies across previous or subsequent time steps. In this report, they propose an advanced network structure, referred to as AttnPINNs, by stacking self-attention blocks behind a pre-trained PINN. Zhenya Yan introduces a sequence operator to transform the inputs from spatio-temporal points into sequential formats and a self-attention layer to capture the correlation of solution sequences. Furthermore, they provide a rigorous proof of convergence and compare AttnPINNs with other advanced architectures (e.g., PINNs, QRes, First-Layer Sine, and PINNsFormer), indicating that the introduction of self-attention blocks alone is sufficient to bring about significant performance improvements. Meanwhile, the numerical experiment results show that AttnPINNs demonstrate advanced performance and outperform most of the other strategies on a wide range of PDEs whose solutions tend to have abrupt changes or exhibit multi-scale, high-frequency, and chaotic properties.
Presentation #2: Improving Spectral Bias in Neural Operators with Diffusion Models, Vivek Oommen, Brown University
Link: https://youtu.be/vyz-tz6sKEY?feature=shared
Abstract: Vivek integrates neural operators with diffusion models to address the spectral limitations of neural operators in surrogate modeling of turbulent flows. While neural operators offer computational efficiency, they exhibit deficiencies in capturing high-frequency flow dynamics, resulting in overly smooth approximations. To overcome this, he conditions diffusion models on neural operators to enhance the resolution of turbulent structures. His approach is validated for different neural operators on diverse datasets, including a high Reynolds number jet flow simulation and experimental Schlieren velocimetry. The proposed method significantly improves the alignment of predicted energy spectra with true distributions compared to neural operators alone. Additionally, proper orthogonal decomposition analysis demonstrates enhanced spectral fidelity in space-time. This work establishes a new paradigm for combining generative models with neural operators to advance surrogate modeling of turbulent systems, and it can be used in other scientific applications that involve microstructure and high-frequency content.
Presentation #3: SympGNNs: Symplectic Graph Neural Networks for identifying high-dimensional Hamiltonian systems and node classification, Alan John Varghese, Brown University
Link: https://youtu.be/vyz-tz6sKEY?feature=shared
Abstract: Existing neural network models to learn Hamiltonian systems, such as SympNets, although accurate in low dimensions, struggle to learn the correct dynamics for high-dimensional many-body systems. Herein, Alan introduces Symplectic Graph Neural Networks (SympGNNs) that can effectively handle system identification in high-dimensional Hamiltonian systems, as well as node classification. SympGNNs combine symplectic maps with permutation equivariance, a property of graph neural networks. Specifically, he proposes two variants of SympGNNs: i) GSympGNN and ii) LA-SympGNN, arising from different parameterizations of the kinetic and potential energy. Alan demonstrates the capabilities of SympGNN on two physical examples: a 40-particle coupled harmonic oscillator, and a 2000-particle molecular dynamics simulation in a two-dimensional Lennard-Jones potential. Furthermore, Alan demonstrates the performance of SympGNN in the node classification task, achieving accuracy comparable to the state of the art. Alan also empirically shows that SympGNN can overcome the oversmoothing and heterophily problems, two key challenges in the field of graph neural networks.
October 11, 2024:
Presentation #1: A Gaussian Process Framework for Operator Learning, Carlos Mora, University of California
Link: https://youtu.be/yFAxA6vPECA?feature=shared
Abstract: In this presentation, Carlos introduces an operator learning framework based on Gaussian Processes (GPs) to approximate mappings between function spaces. His method is accompanied by a theoretical justification and provides a first-of-its-kind mechanism to simultaneously integrate the strengths of neural operators, such as DeepONet or the Fourier Neural Operator (FNO), with kernel methods for operator learning. Carlos Mora's proposed GP framework can be efficiently trained by minimizing a loss function derived from maximum likelihood estimation, leveraging the Kronecker product to exploit the structure of datasets in operator learning. Unlike other kernel methods for operator learning, the framework accounts for correlations not only in the input function space but also in the support of the target function space. This unique feature allows the incorporation of the physics of the system (including PDEs, boundary conditions, and initial conditions) directly into the loss function through automatic differentiation. Through an extensive set of benchmarks in operator learning, it is demonstrated that the zero-mean GP-based framework provides competitive performance while requiring drastically fewer parameters to estimate than common neural operators. Furthermore, when using a neural operator as the mean function, the method is able to consistently outperform state-of-the-art techniques and enhance the performance of the neural operator standalone. Finally, it is demonstrated that the model effectively combines data and physics, resulting in improved overall performance.
Presentation #2: Topology Optimization via Physics-informed Gaussian Processes, Amin Yousefpour, University of California
Link: https://youtu.be/yFAxA6vPECA?feature=shared
Abstract: Topology optimization (TO) is a mathematical approach for optimizing the performance of structures by designing material distribution within a predefined domain under specific constraints. TO approaches are typically computationally expensive since they are nested, involving iterative design updates where each step requires solving a system of partial differential equations (PDEs) to simulate the structure's response. Another common thread in existing TO methods is that they rely heavily on meshing the structure, since numerical solvers need to discretize the design domain. In contrast to these existing methods, the authors introduce a simultaneous and mesh-free TO approach that unifies the design and analysis steps into a single optimization loop. Their method is grounded in Gaussian processes (GPs) that incorporate deep neural networks as their mean functions. Their method is inherently mesh-independent and significantly aids in (1) satisfying equality constraints in the design problem, (2) minimizing gray areas, which are unfavorable in real-world applications, and (3) simplifying the inverse design by reducing the sensitivity of neural networks to factors such as random initialization, architecture type, and choice of optimizer. To show the impact of their work, they evaluate the performance of their approach against COMSOL on a few benchmark examples.
Presentation #3: Input Encoding for Operator Learning and Neural Partial Differential Equation Solvers, Shirin Hosseinmardi, University of California
Link: https://youtu.be/yFAxA6vPECA?feature=shared
Abstract: Deep neural networks (DNNs) are increasingly used to solve partial differential equations (PDEs) that naturally arise while modeling a wide range of systems and physical phenomena. However, the accuracy of such DNNs decreases as the PDE complexity increases, and they also suffer from spectral bias, as they tend to learn the low-frequency solution characteristics. To address these issues, the authors introduce Parametric Grid Convolutional Attention Networks (PGCANs), which can solve PDE systems without leveraging any labeled data in the domain. The main idea of PGCAN is to parameterize the input space with a grid-based encoder whose parameters are connected to the output via a DNN decoder that leverages attention to prioritize feature training. The encoder provides a localized learning ability and uses convolution layers to avoid overfitting and to improve the rate of information propagation from the boundaries to the interior of the domain. They test the performance of PGCAN on a wide range of PDE systems and show that it effectively addresses spectral bias and provides more accurate solutions compared to competing methods. They also sketch ideas on how PGCANs can be used for operator learning.
October 4, 2024:
Presentation #1: Reconstructing Turbulent Multi-Phase Flow States from Inertial Particle Tracks, Dr. Samuel Grauer, Pennsylvania State University
Link: https://youtu.be/JoUkqN7TjKI?feature=shared
Abstract: Physics-informed neural networks (PINNs) are simple, robust tools for inverse problems in fluid dynamics. There is significant interest in utilizing PINNs to reconstruct turbulent flows from experimentally measured Lagrangian particle tracks. Flow reconstruction becomes particularly challenging in the context of multi-phase, multi-physics flow, especially when the governing equations contain unknown parameters. Such scenarios often arise in particle tracking experiments involving high-speed flows or natural tracers like droplets, bubbles, and snowflakes. This talk will explore strategies for flow reconstruction that accommodate broadband flow states, inertial particle transport, and compliant surfaces. The focus will be on flows governed by the Navier–Stokes and Maxey–Riley (MR) equations, with one MR equation per particle that is parameterized by the particle’s size and density. Solutions are obtained by training parallel PINNs, parameterized surfaces, and a particle model with hard constraints on the particle kinematics. The transition of flow reconstruction from an ill-posed problem to a well-posed one will be examined through the lens of spectral error analysis.
September 27, 2024:
Presentation #1: Error and Error Bounds Estimation for Physics-Informed Neural Networks with Residuals, Augusto Tomás Chantada, University of Buenos Aires
Link: https://youtu.be/t3zvWHpwDY4?feature=shared
Abstract: Physics-Informed Neural Networks (PINNs) have gained widespread adoption across various fields in science and engineering. However, a robust method for estimating the error in PINNs’ predictions without relying on a reference solution has yet to be fully developed for all kinds of problems. This talk introduces a method that achieves this feat for certain types of ordinary differential equations (ODEs) and partial differential equations (PDEs). This method requires only the trained PINNs, their residuals, and the governing equations used during training. Furthermore, it allows for estimating a bound on the error, which is crucial in applications where underestimating the error is not acceptable. This advancement enhances the reliability of PINNs and broadens their applicability in real-world scenarios where reference solutions are either unavailable or computationally expensive to compute.
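The flavor of the approach can be seen on a linear ODE u' = a(t)u + b(t): if the initial condition is satisfied exactly, the error e = u_exact - u_PINN obeys e' = a(t)e - r(t), where r is the training residual, so the error can be integrated from the residual alone. In the sketch below, the "trained network" is a hand-written stand-in, and the talk's method covers broader classes of ODEs/PDEs and genuine error bounds.

```python
import numpy as np

# Suppose a PINN solved u' = a(t)*u + b(t), u(0) = 1, and we can evaluate both the
# network u_nn(t) and its residual r(t) = u_nn'(t) - a(t)*u_nn(t) - b(t).
a = lambda t: -1.0
b = lambda t: np.sin(t)

# Stand-ins for the trained network and its residual (normally from autodiff).
u_nn = lambda t: np.exp(-t) + 0.01 * t * (1 - t)        # imperfect "PINN" solution
du_nn = lambda t: -np.exp(-t) + 0.01 * (1 - 2 * t)
r = lambda t: du_nn(t) - a(t) * u_nn(t) - b(t)

# The error e = u_exact - u_nn obeys e' = a(t)*e - r(t) with e(0) = 0, so it can be
# integrated from the residual alone, with no reference solution.
ts = np.linspace(0.0, 1.0, 1001)
e = np.zeros_like(ts)
for i in range(len(ts) - 1):
    dt = ts[i + 1] - ts[i]
    e[i + 1] = e[i] + dt * (a(ts[i]) * e[i] - r(ts[i]))  # forward Euler

print("estimated max |error| on [0, 1]:", np.abs(e).max())
```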
Presentation #2: Learning nonlocal constitutive models with neural networks, Xuhi Zhou, Virginia Tech
Abstract: Constitutive and closure models play important roles in computational mechanics and computational physics in general. Classical constitutive models for solid and fluid materials are typically local, algebraic equations or flow rules describing the dependence of stress on the local strain and/or strain rate. Closure models, such as those describing Reynolds stress in turbulent flows and laminar-turbulent transition, can involve transport PDEs. Such models play roles similar to constitutive relations, but they are more challenging to develop and calibrate, as they describe nonlocal mappings and often contain many submodels. Xuhi Zhou's objective is to develop nonlocal constitutive models using neural networks. Inspired by the structure of the exact solutions to linear transport PDEs, he initially proposes a convolutional neural network (CNN) to represent region-to-point mappings for nonlocal constitutive models. The range of nonlocal dependence and the convolution structure are derived from the formal solution to transport equations. The CNN-based nonlocal constitutive model is trained with data, demonstrating the predictive capability of the proposed method. Moreover, the proposed network learns the embedded submodel without using data from that level, thanks to its interpretable mathematical structure. However, constitutive modeling requires objectivity (invariance under changes in the material frame), a criterion that CNN-based models fail to meet. To address this, he develops the vector-cloud neural network (VCNN), where the closure variable at a point depends on a set of surrounding points (referred to as a cloud). The VCNN-based nonlocal constitutive model is frame-independent and adaptable to arbitrary discretizations. The merits of the proposed network are demonstrated on both scalar and tensor transport PDEs on parameterized periodic hill geometries and on data from direct numerical simulations.
September 20, 2024:
Presentation #1: Transfer learning-based physics-informed neural networks for magnetostatic field simulation with domain variations, Jonathan Lippert, TU Darmstadt
Link: https://youtu.be/IXxLvoWIx6w?feature=shared
Abstract: Physics-informed neural networks (PINNs) provide a new class of mesh-free methods for solving differential equations. However, due to their long training times, PINNs are currently not as competitive as established numerical methods. A promising approach to bridge this gap is transfer learning (TL), that is, reusing the weights and biases of readily trained neural network models to accelerate model training for new learning tasks. This work applies TL to improve the performance of PINNs in the context of magnetostatic field simulation, in particular to resolve boundary value problems with geometrical variations of the computational domain. The suggested TL workflow consists of three steps: (a) A numerical solution based on the finite element method (FEM). (b) A neural network that approximates the FEM solution using standard supervised learning. (c) A PINN initialized with the weights and biases of the pre-trained neural network and further trained using the deep Ritz method. The FEM solution and its neural network-based approximation refer to a computational domain of fixed geometry, while the PINN is trained for a geometrical variation of the domain. The TL workflow is first applied to Poisson’s equation on different 2D domains and then to a 2D quadrupole magnet model. Comparisons against randomly initialized PINNs reveal that the performance of TL is ultimately dependent on the type of geometry variation considered, leading to significantly improved convergence rates and training times for some variations, but also to no improvement or even to performance deterioration in other cases.
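A compressed sketch of the three-step TL workflow described above, with random stand-ins for the FEM data, a generic gradient-energy term in place of the full magnetostatic deep Ritz functional, and a simple coordinate stretch as the "geometry variation"; none of these specific choices come from the talk itself.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))

# Step (a): FEM solution on the reference geometry (placeholder arrays here).
x_fem = torch.rand(2000, 2)                     # nodes of the reference domain
u_fem = torch.sin(torch.pi * x_fem[:, :1])      # stand-in for the FEM field

# Step (b): supervised pre-training of the network on the FEM solution.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    ((net(x_fem) - u_fem) ** 2).mean().backward()
    opt.step()

# Step (c): keep the weights and continue training as a PINN (deep Ritz energy)
# on the *varied* geometry; only the collocation points and the loss change.
def ritz_energy(x):
    x = x.clone().requires_grad_(True)
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return (0.5 * (grad_u ** 2).sum(dim=1)).mean()   # generic energy-type term

opt = torch.optim.Adam(net.parameters(), lr=1e-4)    # smaller LR for fine-tuning
for _ in range(1000):
    x_new = torch.rand(2000, 2) * torch.tensor([1.2, 1.0])   # stretched domain
    opt.zero_grad()
    ritz_energy(x_new).backward()
    opt.step()
```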
Presentation #2: SMLE: Safe Machine Learning via Embedded Overapproximation, Matteo Francobaldi, University of Bologna
Link: https://youtu.be/IXxLvoWIx6w?feature=shared
Abstract: Despite the success that Machine Learning systems, in particular Neural Networks, have witnessed during the last decade, they still lack formal guarantees on their behavior, which are crucial requirements for their adoption in regulated or safety-critical scenarios such as autonomous driving. This is why a wide range of frameworks have been proposed to ensure the satisfaction of safety properties in ML systems. These frameworks, however, still struggle to scale to real-world use cases, due to the computational complexity of verifying and enforcing formal specifications in modern neural models. To address this challenge, Matteo Francobaldi introduces SMLE (Safe Machine Learning via Embedded overapproximation), a novel framework consisting of: 1) a simple neural architecture that facilitates the verification of formal properties, and 2) a dedicated training algorithm that, by leveraging this simplification, is able to scale to practical applications and to produce safe-by-design systems. By evaluating the approach on a set of different properties, in both regression and classification, he shows that the price for full satisfaction guarantees is only a slight deterioration in accuracy.
September 13, 2024:
Presentation #1: Secure Foundation Models for Any Resolution and Any Physics Simulations, Dr. Noseong Park, Korea Advanced Institute of Science and Technology (KAIST)
Link: N/A
Abstract: Physics simulations are closely related to our daily lives, ranging from weather forecasting to virtual product designs. In this talk, Dr. Park will first explore recent advancements in foundation models for physics simulations. Dr. Park will then present a foundation model that i) protects training data from membership and model inversion attacks, even if its parameters are exposed, ii) solves any partial differential equations (PDEs) across various fields, and iii) operates across any spatiotemporal resolutions. Designing and training foundation models involve multiple aspects, so Dr. Park will also detail their approach—from data collection to pre-training and fine-tuning these models.
September 6, 2024:
Presentation #1: Application of Multi-Fidelity Modeling Based on Nonlinear Autoregressive Gaussian Process Regression for the Prediction of Structural Dynamics, Dr. Eirini Katsidoniotaki, Massachusetts Institute of Technology (MIT)
Link: https://youtu.be/Zvb_d9hKiyg?feature=shared
Abstract: The Nonlinear Autoregressive Gaussian Process (NARGP) regression represents a class of multi-fidelity nonlinear information fusion algorithms. This approach enables accurate inference of quantities of interest by effectively combining low-fidelity model realizations with a limited set of high-fidelity observations. NARGP is highly effective at learning complex, nonlinear, and spatially dependent cross-correlations between models of differing fidelity. Despite its validation in benchmark problems, the application of NARGP in real-world scenarios has not been widely explored. Dr. Katsidoniotaki's research leverages NARGP to predict the dynamics of flexible marine structures interacting with ocean waves and currents. By utilizing data from numerical simulations to capture system trends and field sensor measurements to reflect actual system behavior, highly accurate predictions of structural deformations and loads are achieved under diverse marine conditions. The model significantly corrects low-fidelity solutions to align with real observations rapidly and at a low computational cost. This modeling technique can be seamlessly integrated into digital twins for real-time monitoring of marine structure dynamics. Such integration is crucial for informed decision-making and autonomous remote operations in marine environments.
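A minimal NARGP-style sketch using scikit-learn: a first GP is trained on plentiful low-fidelity data, and a second GP takes the input together with the low-fidelity prediction, learning the nonlinear cross-correlation between fidelities. The two analytic functions stand in for numerical simulations and field measurements, and for brevity only the level-1 posterior mean is propagated (the full method propagates its uncertainty as well).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f_lo = lambda x: np.sin(8 * x)                          # cheap model (many samples)
f_hi = lambda x: (x - 0.2) * np.sin(8 * x) ** 2 + x     # true system (few samples)

x_lo = rng.uniform(0, 1, (50, 1)); y_lo = f_lo(x_lo).ravel()
x_hi = np.linspace(0, 1, 8)[:, None]; y_hi = f_hi(x_hi).ravel()

# Level 1: GP on the low-fidelity data alone.
gp_lo = GaussianProcessRegressor(RBF(0.1), alpha=1e-6).fit(x_lo, y_lo)

# Level 2: GP whose inputs are (x, low-fidelity prediction at x); this is where the
# nonlinear, space-dependent cross-correlation between fidelities is learned.
aug_hi = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])
gp_hi = GaussianProcessRegressor(RBF([0.1, 1.0]), alpha=1e-6).fit(aug_hi, y_hi)

# Multi-fidelity prediction at new points.
x_new = np.linspace(0, 1, 200)[:, None]
aug_new = np.hstack([x_new, gp_lo.predict(x_new)[:, None]])
y_pred, y_std = gp_hi.predict(aug_new, return_std=True)
print("max predictive std:", y_std.max())
```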
August 30, 2024:
Presentation #1: SOC-MartNet: A Martingale Neural Network for the Hamilton-Jacobi-Bellman Equation without Explicit inf_u H in Stochastic Optimal Controls, Dr. Wei Cai, Southern Methodist University
Link: https://youtu.be/ZneE7B-5qQ8?feature=shared
Abstract: In this talk, Dr. Cai presents a martingale-based neural network, SOC-MartNet, for solving high-dimensional Hamilton-Jacobi-Bellman (HJB) equations where no explicit expression is needed for the infimum of the Hamiltonian, inf_{u∈U} H(t,x,u,z,p), and stochastic optimal control problems (SOCPs) with controls on both drift and volatility. Dr. Cai reformulates the HJB equations for the value function by training two neural networks, one for the value function and one for the optimal control, with the help of two stochastic processes: a Hamiltonian process and a cost process. The control and value networks are trained such that the associated Hamiltonian process is minimized to satisfy the minimum principle of a feedback SOCP, and the cost process becomes a martingale, thus ensuring that the value function network is the solution to the corresponding HJB equation. Moreover, to enforce the martingale property for the cost process, Dr. Cai employs an adversarial network and constructs a loss function characterizing the projection property of the conditional expectation condition of the martingale. Numerical results show that the proposed SOC-MartNet is effective and efficient for solving HJB-type equations and SOCPs with dimensions up to 2000 in a small number of epochs (fewer than 20) or stochastic gradient iterations (fewer than 2000) of training.
August 23, 2024:
Presentation #1: Bridging the gap between isogeometric analysis and deep operator learning, Dr. Matthias Möller, Delft University of Technology, The Netherlands
Link: https://youtu.be/oBck0CTJQAs?feature=shared
Abstract: Isogeometric Analysis (IgA), introduced by Hughes et al. in 2005, has revived the vision of design-through-analysis (DTA) originally proposed by Augustitus et al. in 1977. DTA means the fully virtual creation, analysis, and optimization of engineering designs, which requires bidirectional exchange of data between computer-aided design (CAD) and engineering analysis (CAE) tools. While IgA aims to bridge the gap between CAD and CAE through the use of spline-type basis functions throughout the entire process, the full potential of DTA is held back by the high computational costs of simulation-based analysis tools, which hinder truly interactive DTA workflows. In this presentation Dr. Möller will briefly review the mathematical basics of IgA and present a novel approach – IgANets – that integrates the concept of deep operator learning into the isogeometric framework. In particular, Dr. Möller will show that IgANets can be interpreted as a network-based variant of least-squares collocation IgA (Lin et al. 2020), thereby inheriting its consistency and convergence properties. Dr. Möller will moreover present a software prototype that enables the collaborative creation and analysis of designs across multiple end-user devices, including tablets and VR/XR headsets.
Presentation #2: Neural PDEs for Robot Motion Planning, Dr. Ahmed Qureshi, Purdue University
Link: https://youtu.be/oBck0CTJQAs?feature=shared
Abstract: Motion planning is a crucial aspect of robot intelligence, involving finding a path for a robot to move from its starting position to a goal position while avoiding collisions. Although traditional planning methods are available, they are computationally expensive and suffer from the curse of dimensionality. Recent advancements have resulted in imitation learning-based motion planners that can generate solutions much faster than traditional methods. However, these learning-based methods require a significant amount of expert trajectories for training, which are computationally expensive to produce. To address this issue, this talk will discuss a new class of physics-informed neural motion planners. These methods directly learn to solve the Eikonal partial differential equation (PDE) for motion planning and do not rely on expert trajectories for training. The results demonstrate that these new approaches outperform state-of-the-art traditional and imitation learning-based motion planning methods in terms of computational planning speed, path quality, and success rates. Additionally, data generation for these physics-informed methods takes only a few minutes, compared to hours or days for imitation learning-based methods.
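A hedged sketch of the central idea, training a network to satisfy the Eikonal equation |grad T(x)| * S(x) = 1 so that T approximates the travel time from a start state; the speed model around a circular obstacle, the network size, and the hard-enforced T(start) = 0 are illustrative choices rather than the planners presented in the talk.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1), torch.nn.Softplus())

start = torch.tensor([[0.1, 0.1]])

def travel_time(x):
    # T(start) = 0 enforced by construction: scale the network by distance to start
    return torch.norm(x - start, dim=1, keepdim=True) * net(x)

def speed(x):
    # slow region around a circular obstacle at (0.5, 0.5) (illustrative speed model)
    d = torch.norm(x - torch.tensor([[0.5, 0.5]]), dim=1, keepdim=True)
    return torch.clamp(d - 0.15, 0.01, 1.0)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):
    x = torch.rand(512, 2).requires_grad_(True)
    T = travel_time(x)
    gradT = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    # Eikonal residual: |grad T| * S(x) = 1, so the loss drives |grad T| toward 1/S
    loss = ((gradT.norm(dim=1, keepdim=True) * speed(x) - 1.0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# A path is then recovered by descending grad T from the goal back to the start.
```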
August 16, 2024:
Presentation #1: More General Edge Learning KAN to Explain and Explore the Brain, and a Comparison of CEKAN and PSPINN in Disease Dynamics, Junbo Tao, Harbin Institute of Technology
Link: https://youtu.be/xX-MdA85KJ0?feature=shared
Abstract: Junbo Tao and colleagues at the Harbin Institute of Technology first solved disease dynamics using CEKAN (Constant Edge Kolmogorov-Arnold Networks) and PSPINN (Point Superimposition Physics-Informed Neural Network) almost four years ago. The kernel functions of CEKAN include the exponential numbers of confirmed and removed individuals, in line with the Kolmogorov-Arnold representation theorem; the shared weights on the edges include the infection rate, reinfection rate, and cure rate; and the 'tanh' activation function is used at the edge nodes. Their March 2022 arXiv preprint (v1) is an upgraded version of KAN that considers fine-grained variants calculated from the residual or the gradient of the MSE loss. The improved KAN, called PNN (Plasticity Neural Networks) or ELKAN (Edge Learning KAN), incorporates edge learning and trimming. Junbo Tao was inspired not by the Kolmogorov-Arnold representation theorem but by brain science. ELKAN explains the brain by mapping its variables onto different types of neurons: learning edges correspond to synaptic strength rebalancing and the phagocytosis of synapses by glial cells, kernel functions represent neuron and synapse discharges, and different neurons and edges symbolize brain regions; this corresponds to the classical brain. PSPINN forms an edge from a center point and its adjacent nodes and calculates shared weights using back-propagation and superimposition. The architecture of PSPINN lies between PINN and KAN: a center point covers surrounding nodes to form an edge, coincident edges share some nodes or a center point, and the shared weights of an edge are calculated by back-propagation from its center point and surrounding nodes, with each node's shared weights coming from the center point of another edge. The activation functions on an edge are transmitted not by the nodes but by the center point. Every point on an edge is either a center point or a node, and both arise from the degradation of coincident edges. PSPINN superimposes the shared weights of each node and the center point of the current edge, and decides whether to update each node's shared weights through the shared weight of the center point and the node, based on a comparison of the residuals of the node and the center point on that edge; this corresponds to the non-classical brain. Based on turbulent energy flow in hierarchical brain regions of cognitive dynamics, ELKAN is more general and can explore mechanisms such as consciousness, Alzheimer's disease, memory, heart-brain quantum entanglement, brain aging, depressive disorder, prejudice, schizophrenia, and Hebb's cell assembly hypothesis. In tests using cosine similarity, ELKAN is significantly better than CEKAN. Junbo Tao also presented simulations of PSPINN, CEKAN (Constant Edge KAN), and DEKAN (Decreasing Edge KAN) on the SIR model of disease dynamics. Even though the non-classical PSPINN involves more fine-grained calculations and considers point-superimposition calculations, its simulations are much better than those of the classical CEKAN and DEKAN, with fewer iterations and less run time.
August 9, 2024:
Presentation #1: Simulating large-scale from the molecular scale with machine learning: an exploration of fluid systems, Dr. Peiyuan Gao, Pacific Northwest National Laboratory (PNNL)
Link: https://youtu.be/x7MTmkbejBU?feature=shared
Abstract: The field of multiscale modeling and simulation has recently embraced the use of machine learning. Enhancements in these machine learning-aided models strive to extend the spatial and temporal scale of simulations while maintaining a high level of accuracy. In this talk, Dr. Gao will offer a brief introduction to a multiscale modeling framework, particularly focusing on his work on the integration of machine learning methodologies, specifically neural networks. Dr. Gao will also present some applications of the framework in the investigation of the thermodynamics and dynamics of fluids. The results highlight the significant potential of machine learning-aided multiscale models for applications in thermodynamic state theory of fluids.
August 2, 2024:
Presentation #1: KFAC for PINNs, Dr. Marius Zeinhofer, University of Freiburg
Link: https://www.youtube.com/watch?v=Quj-8jDIqyc
Abstract: In this talk, Dr. Zeinhofer explores the theoretical benefits of natural gradient descent for training Physics-Informed Neural Networks and related methods from an infinite-dimensional perspective, adhering to the principle “first optimize, then discretize.” This viewpoint led to the recently developed Energy Natural Gradient Descent (ENGD), which requires a dense linear solve at each optimization step. To scale ENGD efficiently to millions of trainable parameters, Dr. Zeinhofer proposes a Kronecker-Factored approximation of the ENGD matrix that is computationally efficient to invert and store. This approximation leverages Taylor mode autodiff and views the computation of input derivatives as the forward pass of an expanded neural network with weight-sharing layers. Dr. Zeinhofer showcases the method’s efficiency and scalability through various examples.
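The linear-algebra reason a Kronecker-factored approximation scales is the identity (A ⊗ B) vec(X) = vec(B X Aᵀ), so applying the inverse of the big matrix only requires solving with the small factors. The sketch below checks this numerically on random symmetric positive-definite factors; it is not the actual KFAC-for-PINNs curvature (which involves Taylor-mode derivatives and weight-sharing layers), just the mechanism that makes the approximation cheap to invert and store.

```python
import numpy as np

rng = np.random.default_rng(0)
nA, nB = 40, 50                       # factor sizes; the full matrix is 2000 x 2000

# Kronecker-factored curvature approximation C ~ A kron B (both SPD).
A = rng.standard_normal((nA, nA)); A = A @ A.T + nA * np.eye(nA)
B = rng.standard_normal((nB, nB)); B = B @ B.T + nB * np.eye(nB)
g = rng.standard_normal(nA * nB)      # flattened gradient, column-major (Fortran) order

# Naive natural-gradient step: build and solve the full 2000 x 2000 system.
x_dense = np.linalg.solve(np.kron(A, B), g)

# Kronecker-factored step: (A kron B)^-1 vec(G) = vec(B^-1 G A^-1) for symmetric A,
# so only the small factors are ever inverted/solved.
G = g.reshape(nB, nA, order="F")
X = np.linalg.solve(B, G) @ np.linalg.inv(A)
x_kfac = X.flatten(order="F")

print("max difference between the two steps:", np.abs(x_dense - x_kfac).max())
```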
July 26, 2024:
Presentation #1: Multiscale particle simulation of nonequilibrium gas flows and data-driven discovery of governing equations, Dr. Zhang Jun, Beihang University
Link: https://www.youtube.com/watch?v=i-FdW5CXhyI
Abstract: The simulation of non-equilibrium gas flows has garnered significant interest in modern engineering problems, notably in micro-electro-mechanical systems and aerospace engineering. The direct simulation Monte Carlo (DSMC) method has been very successful for the simulation of rarefied gas flows. However, due to the limitation of cell sizes and time steps, DSMC requires extraordinary computational resources for the simulation of near-continuum flows. Dr. Zhang Jun presents a novel method called the unified stochastic particle (USP) method, which can be implemented using much larger time steps and cell sizes by coupling the effects of molecular movements and collisions. Various applications have demonstrated that the USP method can improve computational efficiency by several orders of magnitude compared to DSMC. On the other hand, extending the application of macroscopic equations to nonequilibrium gas flows is also intriguing. It is known that in strong nonequilibrium flows, linear constitutive relations break down, and thus the Navier-Stokes-Fourier equations are no longer applicable. Dr. Zhang Jun presents their recent work on data-driven discovery of governing equations by combining multiscale particle simulations and two types of machine learning methods: sparse regression and gene expression programming (GEP). Specifically, Dr. Zhang proposes a novel dimensional homogeneity constrained gene expression programming (DHC-GEP) method. For the shock wave structure, the derived constitutive relations using DHC-GEP are more accurate than conventional equations over a wide range of Knudsen numbers and Mach numbers.
Presentation #2: One Factor to Bind the Cross-Section of Returns, Dr. Nicola Borri & Dr. Aleh Tsyvinski, LUISS University
Link: https://www.youtube.com/watch?v=i-FdW5CXhyI
Abstract: Dr. Nicola Borri & Dr. Aleh Tsyvinski propose a new non-linear single-factor asset pricing model. Despite its parsimony, this model represents exactly any non-linear model with an arbitrary number of factors and loadings – a consequence of the Kolmogorov-Arnold representation theorem. It features only one pricing component, comprising a nonparametric link function of the time-dependent factor and factor loading, which Dr. Borri and Dr. Tsyvinski jointly estimate with sieve-based estimators. Using 171 assets across major classes, Dr. Borri's and Dr. Tsyvinski's model delivers superior cross-sectional performance with a low-dimensional approximation of the link function. Most known finance and macro factors become insignificant when controlling for their single factor.
July 19, 2024:
Presentation #1: Toward Efficient Neuromorphic Computing, Sen Lu, Penn State University
Link: https://www.youtube.com/watch?v=i-FdW5CXhyI
Abstract: Spiking Neural Networks (SNNs) are considered to be the third generation of artificial neural networks due to their unique temporal, event-driven characteristics. By leveraging bio-plausible spike-based computing between neurons in tandem with sparse, on-demand computation, SNNs can demonstrate orders-of-magnitude gains in power efficiency on neuromorphic hardware in contrast to traditional Machine Learning (ML) methods. This seminar reviews some of Sen Lu's recent proposals in the domain of neuromorphic SNN algorithms from an overarching system-science perspective, with an end-to-end co-design focus spanning algorithms, hardware, and applications. Sen Lu will specifically discuss SNN designs in the extreme quantization regime and neuroevolutionary optimized SNNs, along with scaling deep unsupervised learning in SNN models. Leveraging the sparse, event-driven operation of SNNs, Sen Lu demonstrates significant energy savings of SNNs in applications that match their computing style, such as event-driven sensors and cybersecurity attack detection, among others. The talk outlines opportunities for designing hybrid neuromorphic platforms where leveraging the benefits of both traditional ML methods and neuroscience concepts in training and architecture design choices can actualize SNNs to their fullest potential.
July 12, 2024:
Presentation #1: Physics-informed neural network for simulation of problems in dynamic linear elasticity, Venkatesh Gopinath and Vijay Kag, Bosch Research, India
Link: https://www.youtube.com/watch?v=6dCc7OYPjFo
Abstract: This work presents the physics-informed neural network (PINN) model applied to dynamic problems in solid mechanics, focusing on both forward and inverse problems and, in particular, showing how a PINN model can be used efficiently for material identification in a dynamic setting. Linear continuum elasticity is assumed. Results are shown for a two-dimensional (2D) plane strain problem before the same techniques are applied to a three-dimensional (3D) problem. The training data consist of solutions computed with the finite element method. The study shows that PINN models are accurate, robust, and computationally efficient, especially as surrogate models for material identification problems, and it employs state-of-the-art techniques from the PINN literature that improve on the vanilla PINN implementation. Based on these results, it is believed that the developed framework can be readily adapted to computational platforms for solving multiple dynamic problems in solid mechanics.
Presentation #2: Geometric deep learning and 3D field predictions using Deep Operator Network, Jimmy He, Ansys Inc.
Link: https://www.youtube.com/watch?v=6dCc7OYPjFo
Abstract: Data-driven deep learning models have been widely used as surrogate models for traditional numerical simulations. Besides material and geometric nonlinearities, one of the biggest challenges in creating surrogate models for engineering simulations is the varying geometries of the problem domains. The shape of an engineering design affects the resulting field distribution, and accurate, generalizable encoding of the geometries plays a vital role in a successful surrogate model. Geometric deep learning, which focuses on capturing different input geometries, has been studied intensively in the literature, with methods like graph neural networks and implicit neural representations being developed. This work enhances the Deep Operator Network (DeepONet) architecture with key elements from geometric deep learning, such as the signed distance function and the sinusoidal activation (SIREN), to further enhance the network's spatial awareness of varying geometries. Intermediate data fusion is introduced between the branch and trunk networks, which improves the model prediction accuracy. This novel architecture, called Geom-DeepONet, is benchmarked against the classical PointNet and the vanilla DeepONet models. Geom-DeepONet shows a much smaller GPU memory usage footprint than PointNet and has the highest accuracy of the three models. Unlike PointNet, once trained, Geom-DeepONet can generate predictions on geometries discretized by arbitrary numbers of nodes and elements. Compared to finite element simulations, the predictions can be 10^5 times faster. Geom-DeepONet also demonstrates superior generalizability to dissimilar shapes compared to the vanilla DeepONet, which makes it a viable candidate to be used as a surrogate model for rapid preliminary design screening.
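A stripped-down sketch of a DeepONet whose trunk input is augmented with the signed distance function and which uses sine (SIREN-style) activations, two of the ingredients described above; the intermediate branch-trunk data fusion of the actual Geom-DeepONet is omitted, and the geometry parameterization and tensor shapes are assumptions made only for illustration.

```python
import torch

torch.manual_seed(0)

class Sine(torch.nn.Module):
    def forward(self, x):
        return torch.sin(30.0 * x)               # SIREN-style periodic activation

class GeomDeepONetSketch(torch.nn.Module):
    """Branch encodes a geometry descriptor (design parameters); trunk encodes query
    points augmented with their signed distance to the surface; the dot product of
    the two embeddings gives the predicted field value at each query point."""
    def __init__(self, n_geom_params=4, width=64, p=32):
        super().__init__()
        self.branch = torch.nn.Sequential(torch.nn.Linear(n_geom_params, width), Sine(),
                                          torch.nn.Linear(width, p))
        self.trunk = torch.nn.Sequential(torch.nn.Linear(3 + 1, width), Sine(),
                                         torch.nn.Linear(width, p))

    def forward(self, geom_params, points, sdf):
        b = self.branch(geom_params)                      # (batch, p)
        t = self.trunk(torch.cat([points, sdf], dim=-1))  # (batch, n_pts, p)
        return torch.einsum("bp,bnp->bn", b, t)           # field at the query points

model = GeomDeepONetSketch()
geom = torch.rand(8, 4)                                   # e.g. radii/thickness per design
pts = torch.rand(8, 500, 3)                               # query nodes (any count works)
sdf = torch.rand(8, 500, 1) - 0.5                         # signed distance at the nodes
field = model(geom, pts, sdf)                             # (8, 500) stress-like output
```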
July 5, 2024:
Presentation #1: On the use of “conventional” unconstrained minimization solvers for training regression problems in Scientific Machine Learning, Stefano Zampini, KAUST
Link: https://www.youtube.com/watch?v=taEnrJIpl1g
Abstract: In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for the analysis, by means of deep-learning techniques, of data produced by computational science and engineering applications. At the core of these methods is the supervised training algorithm to learn the neural network realization, a highly non-convex optimization problem that is usually solved using stochastic gradient methods. However, distinct from deep-learning practice, scientific machine-learning training problems feature a much larger volume of smooth data and better characterizations of the empirical risk functions, which make them suited for conventional solvers for unconstrained optimization. In this talk, we introduce PETScML, a lightweight software framework built on top of the Portable and Extensible Toolkit for Scientific computation (PETSc) to bridge the gap between deep-learning software and conventional solvers for unconstrained minimization. Using PETScML, we empirically demonstrate the superior efficacy of a trust region method based on the Gauss-Newton approximation of the Hessian in improving the generalization errors arising from regression tasks when learning surrogate models for a wide range of scientific machine-learning techniques and test cases. All the conventional solvers tested, including L-BFGS and inexact Newton with line-search, compare favorably, either in terms of cost or accuracy, with the adaptive first-order methods used to validate the surrogate models.
Presentation #2: On Sampling Tasks with Langevin Dynamics, Haoyang Zheng, Purdue University
Link: https://www.youtube.com/watch?v=taEnrJIpl1g
Abstract: Langevin dynamics, driven by Brownian motion, are a class of stochastic processes widely utilized in various machine learning sampling tasks. This discussion will explore sampling from gradient Langevin dynamics using Markov Chain Monte Carlo (MCMC), variant algorithms such as underdamped Langevin dynamics (ULD) and replica exchange stochastic gradient Langevin dynamics (reSGLD), as well as their applications in reinforcement learning (Thompson sampling) and constrained sampling. First, an accelerated approximate Thompson sampling algorithm based on ULD is introduced. Under smoothness and convexity conditions, it is demonstrated theoretically and empirically that the algorithm reduces sample complexity from O(d) to O(√d) and achieves O(log(N)) regret, where d is the number of model parameters and N is the number of times actions are selected. reSGLD is an effective sampler for non-convex learning on large-scale datasets. However, it may stagnate when the high-temperature chain explores the distribution tails too deeply. To address this, reflected reSGLD (r2SGLD) is proposed, which incorporates reflection steps within a bounded domain to enhance constrained non-convex exploration. Both theoretical and empirical evidence underscores its significance in improving simulation efficiency.
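A minimal sketch of the constrained-sampling ingredient: an overdamped Langevin update on a toy double-well target, with a reflection step in the spirit of r2SGLD folding iterates back into the bounded domain [-2, 2]; the replica-exchange and stochastic-gradient aspects, and the actual targets from the talk, are left out.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_target(x):
    """Gradient of log pi(x) for a double-well density pi(x) ~ exp(-(x^2 - 1)^2)."""
    return -4.0 * x * (x**2 - 1.0)

x, step = 0.0, 1e-3
samples = []
for _ in range(50_000):
    # overdamped Langevin (SGLD-style) update driven by the gradient
    x = x + step * grad_log_target(x) + np.sqrt(2.0 * step) * rng.standard_normal()
    # reflection step (the "r2" idea): fold the iterate back into [-2, 2]
    while x < -2.0 or x > 2.0:
        x = -4.0 - x if x < -2.0 else 4.0 - x
    samples.append(x)

print("sample mean and variance:", np.mean(samples), np.var(samples))
```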
June 28, 2024:
Presentation #1: Simulation-Calibrated Scientific Machine Learning, Yiping Lu, Courant Institute of Mathematical Sciences, New York University
Link: https://www.youtube.com/watch?v=pnulf2VBeVs
Abstract: Machine learning (ML) has achieved great success in a variety of applications, suggesting a new way to build flexible, universal, and efficient approximators for complex high-dimensional data. These successes have inspired many researchers to apply ML to other scientific applications such as industrial engineering, scientific computing, and operational research, where similar challenges often occur. However, the luminous success of ML is overshadowed by persistent concerns that the mathematical theory of large-scale machine learning, especially deep learning, is still lacking and that the trained ML predictor is always biased. This seminar introduces a novel framework of (S)imulation-(Ca)librated (S)cientific (M)achine (L)earning (SCaSML), which can leverage the structure of physical models to achieve the following goals: 1) make unbiased predictions even based on biased machine learning predictors; 2) beat the curse of dimensionality with an estimator that suffers from it. The SCaSML paradigm combines a (possibly) biased machine learning algorithm with a de-biasing step designed using rigorous numerical analysis and stochastic simulation. Theoretically, the aim is to understand whether the SCaSML algorithms are optimal and what factors (e.g., smoothness, dimension, and boundedness) determine the improvement in the convergence rate. Empirically, different estimators are introduced that enable unbiased and trustworthy estimation of physical quantities with a biased machine learning estimator. Applications include but are not limited to estimating the moments of a function, simulating high-dimensional stochastic processes, uncertainty quantification using bootstrap methods, and randomized linear algebra.
Presentation #2: HPINNs: Gradient is not enough! You need curvature., Mostafa Abbaszadeh, Amirkabir University of Technology in Tehran, Iran
Link: https://www.youtube.com/watch?v=pnulf2VBeVs
Abstract: Deep learning has proven to be an effective tool for solving partial differential equations (PDEs) through Physics-Informed Neural Networks (PINNs). PINNs embed the PDE residual into the neural network's loss function and have been successfully used to solve various forward and inverse PDE problems. However, the first generation of PINNs often suffers from limited accuracy, necessitating the use of extensive training points. Prior work on Gradient-Enhanced PINNs suggested that the gradient of the residual should be zero because the residual itself should be zero. This work proposes an enhanced method for improving the accuracy and training efficiency of PINNs. By creating a smooth, flat landscape for residual losses and ensuring zero residual curvature, the approach improves the network's ability to learn from residuals more effectively. Hutchinson trace estimation is employed to calculate the curvature, further refining the loss function. Extensive experiments demonstrate that the method significantly outperforms existing approaches, including Gradient-Enhanced PINNs (gPINNs). The results show improved accuracy and efficiency in solving PDEs, highlighting the effectiveness of the approach.
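A sketch of how Hutchinson trace estimation can be used to penalize residual curvature without ever forming a Hessian, assuming a toy first-order PDE residual and an arbitrary weight on the curvature term; the actual loss design, PDEs, and weighting in the talk may differ.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def residual(x):
    """Residual of the toy transport equation u_x + u_y = sin(x + y)."""
    u = net(x)
    grads = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return grads.sum(dim=1, keepdim=True) - torch.sin(x.sum(dim=1, keepdim=True))

def hutchinson_curvature(x, n_probes=4):
    """Estimate tr(Hessian_x of the residual) per point with Rademacher probes,
    using only Hessian-vector products instead of the full Hessian."""
    est = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(x).sign()                                    # +/-1 probe
        g = torch.autograd.grad(residual(x).sum(), x, create_graph=True)[0]
        hv = torch.autograd.grad((g * v).sum(), x, create_graph=True)[0]  # H v
        est = est + (v * hv).sum(dim=1, keepdim=True)                     # v^T H v
    return est / n_probes

x = torch.rand(128, 2, requires_grad=True)
loss = (residual(x) ** 2).mean() + 1e-3 * (hutchinson_curvature(x) ** 2).mean()
loss.backward()   # gradients flow into the network weights as usual
```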
June 21, 2024:
Presentation #1: FastVPINNs: Tensor-Driven Acceleration of VPINNs for Complex Geometries, Dr. Sashikumaar Ganesan, Divij Tirthhankar Ghose, & Thivin Anandh, Department of Computational and Data Sciences, IISc Bangalore
Link: https://youtu.be/YAxf4gOdehQ?feature=shared
Abstract: Variational Physics-Informed Neural Networks (VPINNs) solve partial differential equations (PDEs) using a variational loss function, similar to Finite Element Methods. While hp-VPINNs are generally more effective than PINNs, they are computationally intensive and do not scale well with increasing element counts. This work introduces FastVPINNs, a tensor-based framework that significantly reduces training time and handles complex geometries. Optimized tensor operations in FastVPINNs achieve up to a 100-fold reduction in median training time per epoch compared to traditional hp-VPINNs. With the right hyperparameters, FastVPINNs can outperform conventional PINNs in both speed and accuracy, particularly for problems with high-frequency solutions. The proposed method will be demonstrated with scalar and vector problems, showcasing its versatility and effectiveness in various applications.
June 14, 2024:
Presentation #1: Score-based Diffusion Models in Hilbert Spaces, Dr. Sungbin Lim, Korea University
Link: https://youtu.be/HmcjUq9DNO4?feature=shared
Abstract: Diffusion models have recently gained significant attention in probabilistic machine learning due to their theoretical properties and impressive applications in generative AI, including Stable Diffusion and DALL-E. This talk will provide a brief introduction to the theory of score-based diffusion models in Euclidean space. It will also present recent findings on score-based generative modeling in infinite-dimensional spaces, based on the time reversal theory of diffusion processes in Hilbert space.
June 7, 2024:
Presentation #1: On the Mathematical Foundations of Deep Learning Methods for Solving Partial Differential Equations, Dr. Aras Bacho, Ludwig Maximilian University of Munich
Link: https://youtu.be/XkZ_IX_0y7Q?feature=shared
Abstract: Partial Differential Equations are essential for modeling phenomena across various domains, including physics, engineering, and finance. However, despite centuries of theoretical evolution, solving PDEs remains a challenge, both from theoretical and numerical perspectives. Traditional approaches, such as Finite Element Methods, Finite Difference Methods, and Spectral Methods, often reach their limits when faced with problems in high dimensions and with significant nonlinearity. The advent of high computational power and the availability of large datasets have made Machine Learning methods, particularly Deep Learning, a hope for practically overcoming these obstacles. Innovations such as Physics-Informed Neural Networks, Operator Networks, Neural Operators, and the Deep Ritz Method, among others, offer new pathways. Yet, the theoretical foundation of these methods is still in its infancy. In this presentation, Dr. Aras Bacho will present some recently obtained theoretical results underpinning such methods.
May 31, 2024:
Presentation #1: From Optimization to Generalization Analysis for Deep Information Bottleneck, Dr. Shujian Yu, Vrije Universiteit Amsterdam
Link: https://youtu.be/YoRQb3-veMs?feature=shared
Abstract: The information bottleneck (IB) approach is a popular method for improving the generalization of deep neural networks (DNNs). Essentially, it aims to find a minimal sufficient representation t of an input variable x that is relevant for predicting a desirable response variable y, by striking a trade-off between a compression term I(x;t) and a prediction term I(y;t), where I refers to the mutual information (MI). However, optimizing IB remains a challenging problem. In this talk, Dr. Shujian Yu first discusses the IB principle for the regression problem and develops a new way to parameterize IB with DNNs, by replacing the Kullback-Leibler (KL) divergence with the Cauchy-Schwarz (CS) divergence. By doing so, Dr. Yu moves away from mean squared error (MSE) loss-based regression and eases the estimation of MI terms by avoiding variational approximations or distributional assumptions. Dr. Yu observes the improved generalization ability of his proposed CS-IB on benchmark datasets. Dr. Yu then delves deeper to demonstrate the benefits of the IB method by relating the compression term I(x;t) to generalization errors using a recently developed generalization error bound. Finally, Dr. Yu discusses enhancing this bound by substituting I(x;t) with loss entropy, which not only offers computational tractability but also provides quantitatively tighter estimates, particularly for large neural networks.
Presentation #2: Exploring the applicability and the optimization process of Physics Informed Neural Networks, Jorge Urbán Gutiérrez, University of Alicante & University of Valencia
Link: https://youtu.be/YoRQb3-veMs?feature=shared
Abstract: Recent advancements in Physics-Informed Neural Networks (PINNs) have positioned them as serious contenders in the domain of computational physics, challenging the longstanding monopoly held by classical numerical methods. Their disruptive potential stems from their innate ability to integrate domain-specific physics principles with the powerful learning capabilities of neural networks. Jorge Urbán Gutiérrez studies the applicability of PINNs to diverse scenarios, such as the simultaneous solution of partial differential equations under varied boundary conditions and source terms, or problems where the differential equations are difficult to solve with finite differences. Furthermore, by introducing minor but mathematically motivated changes into the optimization process, Jorge Urbán Gutiérrez substantially improves the accuracy of PINNs for a variety of physical problems, suggesting ample room for advancement in this field.
May 24, 2024:
Presentation #1: Physics-enhanced deep surrogate models for partial differential equations, Raphael Pestourie, Georgia Tech
Link: https://youtu.be/4PP6074RO1M?feature=shared
Abstract: Surrogate models leverage data to efficiently predict a property of a partial differential equation. By accelerating the evaluation of a target property, they enable the discovery of new engineering solutions. However, in the context of supervised learning, the benefit of surrogate models is hampered by their training costs. These costs are often dominated by data generation, and the curse of dimensionality makes them prohibitive as the number of input parameters increases. Dr. Pestourie will present physics-enhanced deep surrogate models (PEDS), which combine a neural network generator and a low-fidelity solver for partial differential equations. Trained end-to-end to match high-fidelity data, the neural network learns to generate the input that will make the low-fidelity solver accurate for the target property. The geometries that are generated by the neural network can be inspected and interpreted because they are the inputs of a physical simulation. The low-fidelity solver introduces a physical bias by computing the low-fidelity solution of the governing partial differential equation. In low-data regimes, Dr. Pestourie shows on several examples that PEDS reduces the data need by at least two orders of magnitude compared to a supervised neural network. The low-fidelity solver makes PEDS slower than a neural network. However, Dr. Pestourie reports for multiple examples that PEDS is 100 to 10,000 times faster than the high-fidelity solvers. Many questions remain open regarding this methodology. Dr. Pestourie will present some insights on why it works and discuss challenges and future opportunities.
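A minimal sketch of the PEDS idea, assuming a toy 1D steady diffusion problem as the differentiable low-fidelity solver and placeholder high-fidelity data; the generator architecture, target property, and training details are illustrative only, not Dr. Pestourie's implementation.

    import torch
    import torch.nn as nn

    N = 16                                   # coarse low-fidelity grid size

    class PEDS(nn.Module):
        def __init__(self):
            super().__init__()
            # Generator: maps design parameters to a positive coarse conductivity field
            self.generator = nn.Sequential(
                nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, N), nn.Softplus())

        def lowfi_solve(self, kappa):
            # Differentiable low-fidelity solver: 1D steady diffusion
            # -d/dx(kappa du/dx) = 1 with u(0) = u(1) = 0, finite differences.
            h = 1.0 / (N + 1)
            k_pad = torch.cat([kappa[:1], kappa, kappa[-1:]])      # N + 2 values
            k_face = 0.5 * (k_pad[:-1] + k_pad[1:])                # N + 1 face values
            D = (torch.diag(torch.ones(N + 1))[:, :N]
                 - torch.diag(torch.ones(N), -1)[:, :N]) / h       # difference operator
            A = D.T @ torch.diag(k_face) @ D                       # N x N stiffness matrix
            u = torch.linalg.solve(A, torch.ones(N))
            return h * u.sum()                                     # target property: integral of u

    model = PEDS()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Placeholder "high-fidelity" data: (design parameters, target property)
    params, targets = torch.rand(32, 2), torch.rand(32, 1)

    for step in range(200):                                        # end-to-end training
        kappas = model.generator(params)                           # (32, N) generated fields
        preds = torch.stack([model.lowfi_solve(k) for k in kappas]).unsqueeze(1)
        loss = ((preds - targets) ** 2).mean()                     # match high-fidelity data
        opt.zero_grad(); loss.backward(); opt.step()
    print("final loss:", loss.item())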
Presentation #2: From Theory to Therapy: Leveraging Universal Physics-Informed Neural Networks for Model Discovery in Quantitative Systems Pharmacology, Mohammad Kohandel, University of Waterloo
Link: https://youtu.be/4PP6074RO1M?feature=shared
Abstract: Physics-Informed Neural Networks (PINNs) have demonstrated remarkable capabilities in reconstructing solutions for differential equations and performing parameter estimations. This talk introduces Universal Physics-Informed Neural Networks (UPINNs), an advanced variant of PINNs that includes an additional neural network designed to identify unknown, hidden terms within differential equations. UPINNs are particularly effective at uncovering these hidden terms from sparse and noisy data. Furthermore, UPINNs can be integrated with symbolic regression to derive closed-form expressions for these terms. The presentation will explore how UPINNs are applied to model the dynamics of chemotherapy drugs, an area primarily addressed by Quantitative Systems Pharmacology (QSP). QSP often requires extensive manual analysis and relies on simplifying assumptions. By utilizing UPINNs, we identify the unknown components in the differential equations that dictate chemotherapy pharmacodynamics, enhancing model accuracy with both synthetic and real experimental data.
May 17, 2024:
Presentation #1: Hyperdimensional Computing for Efficient, Robust, and Interpretable Cognitive Learning, Dr. Mohsen Imani, University of California Irvine
Link: N/A
Abstract: There are several challenges with today’s AI systems, including lack of interpretability, being extremely data-hungry, and inefficiency in performing learning tasks. In this talk, Dr. Mohsen Imani will present a new brain-inspired computing system that supports various learning and cognitive tasks while offering transparency and significantly higher computational efficiency and robustness than existing platforms. Dr. Imani’s platform utilizes HyperDimensional Computing (HDC), an alternative computation method that implements principles of brain functionality for high-efficiency and noise-tolerant computation. HDC is motivated by the observation that the human brain operates on high-dimensional data representations. It mimics important functionalities of the human memory model with vector operations, which are computationally tractable and mathematically rigorous in describing human cognition. A key advantage of HDC is its training capability in one or a few shots, where data are learned from a few examples in a single pass over the training data, instead of requiring many iterations. These features make HDC a promising solution for today’s embedded devices with limited resources and for future computing systems facing high noise and variability issues. Dr. Imani will demonstrate how his hyperdimensional cognitive framework can detect complex scenarios, such as shoplifting, that are challenging for today’s AI systems to generalize.
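As a flavor of how HDC represents and compares information with vector operations, here is a small self-contained sketch using binary hypervectors, XOR binding, majority bundling, and Hamming similarity; the dimensionality, encoding scheme, and toy data are illustrative rather than Dr. Imani's actual framework.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 10_000                                       # hypervector dimensionality

    def random_hv():
        return rng.integers(0, 2, D, dtype=np.int8)  # random binary hypervector

    def bind(a, b):
        return np.bitwise_xor(a, b)                  # binding: XOR for binary HVs

    def bundle(hvs):
        return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.int8)  # majority vote

    def similarity(a, b):
        return 1.0 - np.mean(a != b)                 # 1 - normalized Hamming distance

    # Encode two classes from a few examples each (one-/few-shot style learning)
    feature_ids = [random_hv() for _ in range(4)]    # one id hypervector per feature

    def encode(sample_bits):
        # bind each feature id with its (binary) value, then bundle into one HV
        return bundle([bind(fid, np.full(D, b, dtype=np.int8))
                       for fid, b in zip(feature_ids, sample_bits)])

    class_a = bundle([encode([1, 0, 1, 0]), encode([1, 0, 1, 1])])
    class_b = bundle([encode([0, 1, 0, 1]), encode([0, 1, 1, 1])])
    query = encode([1, 0, 1, 0])
    print("sim to A:", similarity(query, class_a),
          "sim to B:", similarity(query, class_b))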
May 10, 2024:
Presentation #1: Spatiotemporal Learning of High-dimensional Cell Fate, Dr. Qing Nie, University of California
Link: https://youtu.be/qwlVYnsxb9E?feature=shared
Abstract: Cells make fate decisions in response to dynamic environments, and multicellular structures emerge from multiscale interplays among cells and genes in space and time. Recent single-cell genomics technology provides an unprecedented opportunity to profile cells for all their genes. While those measurements provide high-dimensional gene expression profiles for all cells, the experimental techniques often lead to a loss of critical spatiotemporal information for individual cells. Is it possible to infer temporal relationships among cells from single or multiple snapshots? How can spatial interactions among cells, for example cell-cell communication, be recovered? In this talk, Dr. Qing Nie will give a short overview of his group's newly developed tools based on dynamical models and machine-learning methods, with a focus on inference and analysis of transitional properties of cells and cell-cell communication using both high-dimensional single-cell and spatial transcriptomics data. After the overview, Dr. Yutong Sha will present details of a method called TIGON that is designed to connect a small number of snapshot datasets. This method combines high-dimensional PDEs, optimal transport, and machine learning approaches to reconstruct continuous temporal trajectories of high-dimensional cell fate.
Presentation #2: Discovering slow manifolds arising from fast-slow systems via Physics- Informed Neural Networks, Dr. Dimitrios Patsatzis, National Technical University of Athens
Link: https://youtu.be/qwlVYnsxb9E?feature=shared
Abstract: Slow Invariant Manifolds (SIMs) are low-dimensional topological spaces parameterizing the long-term behavior of complex dynamical systems characterized by the action of multiple timescales. The framework of Geometric Singular Perturbation Theory (GSPT) has traditionally been used for computing SIM approximations, tackling either stiff systems where the timescale splitting is explicitly known (singularly perturbed systems) or, more generally, fast-slow systems, where this information is not available. In this seminar, Dr. Patsatzis will present a Physics-Informed Neural Network (PINN) approach for discovering SIM approximations in the context of GSPT for both of the above classes of dynamical systems. The resulting SIM functionals are of explicit form and thus facilitate the construction and numerical integration of reduced-order models (ROMs). In comparison to classic model reduction techniques, such as QSSA, PEA, and CSP, the PINN approach provides SIM approximations of equivalent or even higher accuracy. Most importantly, he will demonstrate that the accuracy of the PINN approach is not affected by the magnitude of the perturbation parameter ε, or by the distance from the boundaries of the underlying SIM, two factors that critically affect the accuracy of the traditional methodologies.
May 3, 2024:
Presentation #1: Integrating PDE operators into neural network architecture in a multi-resolution manner for spatiotemporal prediction, Xin-Yang Liu, University of Notre Dame
Link: https://youtu.be/5kXiuq_sCK4?feature=shared
Abstract: Traditional data-driven deep learning models often struggle with high training costs, error accumulation, and poor generalizability for learning complex physical processes. Physics-informed deep learning (PiDL) addresses these challenges by incorporating physical principles into the model. Most PiDL approaches regularize training by embedding governing equations into the loss function, yet this process heavily depends on extensive hyperparameter tuning to balance each loss term. As an alternative strategy, Xin-Yang Liu proposes leveraging physics prior knowledge by ‘baking’ the discretized governing equations into the neural network architecture. This is achieved through the connection between partial differential equation (PDE) operators and network structures, resulting in a neural differentiable modeling framework using differentiable programming. Embedding discretized PDEs through convolutional residual networks in a multi-resolution setting significantly improves generalizability and long-term prediction accuracy, outperforming conventional black-box models. In this talk, Xin-Yang Liu will introduce the original multi-resolution PDE-integrated neural network architecture and its extension inspired by finite volume methods. This extension leverages the conservative property of finite volumes on the global scale and the strong learnability of neural operators on the local scale. Xin-Yang Liu demonstrates the effectiveness of both methods on several spatiotemporal systems governed by PDEs, including the diffusion equation, Burgers’ equation, the Kuramoto–Sivashinsky equation, and the Navier-Stokes equations. These approaches achieve superior performance in predicting spatiotemporal dynamics, surpassing purely black-box deep learning counterparts and offering a promising avenue for emulating complex dynamic systems with improved accuracy and efficiency.
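A minimal sketch of the "bake the discretized PDE into the architecture" idea: a fixed finite-difference Laplacian stencil is implemented as a convolution layer and composed with a small trainable correction. The diffusion equation, stencil, and hybrid layout are illustrative, not the speaker's actual multi-resolution architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Discretized Laplacian as a fixed 3x3 convolution stencil
    lap_kernel = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3)

    class DiffusionStep(nn.Module):
        # One explicit Euler step of u_t = nu * Laplacian(u), hard-wired into the network
        def __init__(self, nu=0.1, dx=1.0, dt=0.1):
            super().__init__()
            self.register_buffer("kernel", lap_kernel / dx ** 2)
            self.nu, self.dt = nu, dt

        def forward(self, u):                                # u: (batch, 1, H, W)
            lap = F.conv2d(F.pad(u, (1, 1, 1, 1), mode="replicate"), self.kernel)
            return u + self.dt * self.nu * lap

    class HybridModel(nn.Module):
        # Fixed PDE step plus a small trainable convolutional correction (illustrative)
        def __init__(self):
            super().__init__()
            self.pde = DiffusionStep()
            self.correction = nn.Conv2d(1, 1, kernel_size=3, padding=1)

        def forward(self, u):
            return self.pde(u) + self.correction(u)

    u0 = torch.rand(4, 1, 32, 32)
    print(HybridModel()(u0).shape)                           # torch.Size([4, 1, 32, 32])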
April 26, 2024:
Presentation #1: Structure-conforming Operator Learning via Transformers, Shuhao Cao, Purdue University
Link: https://youtu.be/h6d7ayfMSww?feature=shared
Abstract: GPT, Stable Diffusion, AlphaFold 2, and other state-of-the-art deep learning models all use a neural architecture called the “Transformer”. Since the emergence of the “Attention Is All You Need” paper by Google, the Transformer has become the ubiquitous architecture in deep learning. At the Transformer’s heart and soul is the “attention mechanism”. In this talk, we shall dissect the attention mechanism through the lens of traditional numerical methods, such as Galerkin methods and hierarchical matrix decomposition. We will report some numerical results on designing attention-based neural networks according to the structure of a problem in traditional scientific computing, such as inverse problems for the Neumann-to-Dirichlet operator (EIT) or multiscale elliptic problems. Progress within different communities will be surveyed to address some open problems on the mathematical properties of the attention mechanism in Transformers, as well as to design new neural operators for scientific computing problems.
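For readers unfamiliar with the numerical-methods view of attention, below is a hedged sketch of a softmax-free, Galerkin-style attention layer whose cost is linear in the sequence length; the dimensions and the placement of the layer normalization are illustrative and may differ from the architectures discussed in the talk.

    import torch
    import torch.nn as nn

    class GalerkinAttention(nn.Module):
        # Softmax-free attention: Q (K^T V) / n, with layer norm on keys and values
        def __init__(self, dim):
            super().__init__()
            self.q = nn.Linear(dim, dim)
            self.k = nn.Linear(dim, dim)
            self.v = nn.Linear(dim, dim)
            self.norm_k = nn.LayerNorm(dim)
            self.norm_v = nn.LayerNorm(dim)

        def forward(self, x):                          # x: (batch, n_points, dim)
            n = x.shape[1]
            q = self.q(x)
            k = self.norm_k(self.k(x))
            v = self.norm_v(self.v(x))
            return q @ (k.transpose(1, 2) @ v) / n     # O(n d^2) instead of O(n^2 d)

    x = torch.rand(2, 1024, 64)
    print(GalerkinAttention(64)(x).shape)              # torch.Size([2, 1024, 64])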
Presentation #2: Exploring the Intersection of Diffusion Models and (Partial) Differential Equation Solving, Chieh-Hsin Lai, Sony AI
Link: https://youtu.be/h6d7ayfMSww?feature=shared
Abstract: Diffusion models, pioneers in Generative AI, have significantly propelled the creation of synthetic images, audio, 3D objects/scenes, and proteins. Beyond their role in generation, these models have found practical applications in tasks like media content editing/restoration, as well as in diverse domains such as robotics learning. In this talk, Dr. Chieh-Hsin Lai explores the origins of diffusion models and their role in solving differential equations (DE), as discussed by Song et al. in ICLR 2020. Dr. Lai introduces FP-Diffusion (Lai et al. ICML 2023), which enhances the model by aligning it with its underlying mathematical structure, the Fokker-Planck (FP) equation. Additionally, he will discuss limitations related to slow sampling speeds in thousand-step generation, motivating the introduction of the Consistency Trajectory Model (CTM) (Kim & Lai et al. ICLR 2024). The goal is to inspire mathematical research into diffusion models and deep learning methods for solving (partial) differential equations.
April 19, 2024:
Presentation #1: PirateNets: Physics-informed Deep Learning with Residual Adaptive Networks, Sifan Wang, University of Pennsylvania
Link: https://youtu.be/Rvgn_-DFpUE?feature=shared
Abstract: While physics-informed neural networks (PINNs) have become a popular deep learning framework for tackling forward and inverse problems governed by partial differential equations (PDEs), their performance is known to degrade when larger and deeper neural network architectures are employed. Dr. Sifan Wang’s study identifies the root of this counter-intuitive behavior in the use of multi-layer perceptron (MLP) architectures with unsuitable initialization schemes, which result in poor trainability of the network derivatives and ultimately lead to an unstable minimization of the PDE residual loss. To address this, Dr. Wang introduces Physics-informed Residual Adaptive Networks (PirateNets), a novel architecture designed to facilitate stable and efficient training of deep PINN models. PirateNets leverage a novel adaptive residual connection, which allows the networks to be initialized as shallow networks that progressively deepen during training. Dr. Wang also shows that the proposed initialization scheme allows appropriate inductive biases corresponding to a given PDE system to be encoded into the network architecture. Dr. Wang provides comprehensive empirical evidence showing that PirateNets are easier to optimize and can gain accuracy from considerably increased depth, ultimately achieving state-of-the-art results across various benchmarks.
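The core architectural idea can be sketched as a residual block whose contribution is gated by a trainable scalar initialized to zero, so the network effectively starts shallow and deepens during training; this is an illustrative simplification under assumed layer sizes, not the exact PirateNet block.

    import torch
    import torch.nn as nn

    class AdaptiveResidualBlock(nn.Module):
        # x + alpha * F(x), with alpha initialized at zero so the block starts as identity
        def __init__(self, width):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                     nn.Linear(width, width), nn.Tanh())
            self.alpha = nn.Parameter(torch.zeros(1))

        def forward(self, x):
            return x + self.alpha * self.net(x)

    class DeepPINNBackbone(nn.Module):
        # A deep stack of gated residual blocks; at initialization it behaves shallowly
        def __init__(self, in_dim=2, width=128, depth=8, out_dim=1):
            super().__init__()
            self.lift = nn.Linear(in_dim, width)
            self.blocks = nn.ModuleList(AdaptiveResidualBlock(width) for _ in range(depth))
            self.head = nn.Linear(width, out_dim)

        def forward(self, xt):
            h = torch.tanh(self.lift(xt))
            for blk in self.blocks:
                h = blk(h)
            return self.head(h)

    print(DeepPINNBackbone()(torch.rand(16, 2)).shape)   # torch.Size([16, 1])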
Presentation #2: Tackling the Curse of Dimensionality with Physics-Informed Neural Networks, Zheyuan Hu, National University of Singapore
Link: https://youtu.be/Rvgn_-DFpUE?feature=shared
Abstract: The curse of dimensionality taxes computational resources heavily, with computational cost increasing exponentially as the dimension grows. This poses great challenges in solving high-dimensional partial differential equations (PDEs), as Richard E. Bellman first pointed out over 60 years ago. While there has been some recent success in solving numerical PDEs in high dimensions, such computations are prohibitively expensive, and true scaling of general nonlinear PDEs to high dimensions has never been achieved. Zheyuan Hu has developed new methods for scaling up physics-informed neural networks (PINNs) to solve arbitrary high-dimensional and high-order PDEs. The first method, called Stochastic Dimension Gradient Descent (SDGD), decomposes the gradient of the PDE and PINN residual into pieces corresponding to different dimensions and randomly samples a subset of these dimensional pieces in each iteration of training the PINN. Furthermore, inspired by the Hessian trace operator in second-order PDEs, Zheyuan introduces Hutchinson Trace Estimation (HTE) to accelerate and scale up PINNs. Zheyuan demonstrates how SDGD and HTE can be unified, as well as their differences. Lastly, with the recently developed high-dimensional PDE solvers, Zheyuan conducts extensive experiments on Hamilton-Jacobi-Bellman, Fokker-Planck, and other nonlinear PDEs. He demonstrates the respective algorithms on various PDEs and scales up PINNs to 100,000 dimensions, with training completed in a few hours or even minutes.
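To illustrate the Hutchinson Trace Estimation ingredient, the sketch below estimates the Laplacian (trace of the Hessian) of a network output in high dimension using Rademacher probe vectors instead of forming all second derivatives; the network, dimension, and sample count are placeholders, and SDGD's dimension subsampling is not shown.

    import torch

    def hutchinson_laplacian(u_fn, x, num_samples=4):
        # Estimate trace(Hessian of u) per batch point via E_v[v^T H v], v Rademacher
        x = x.requires_grad_(True)
        u = u_fn(x).sum()
        grad = torch.autograd.grad(u, x, create_graph=True)[0]        # (batch, d)
        est = 0.0
        for _ in range(num_samples):
            v = (torch.randint(0, 2, x.shape) * 2 - 1).to(x.dtype)    # Rademacher probe
            # Hessian-vector product via a second differentiation of grad . v
            hvp = torch.autograd.grad((grad * v).sum(), x, create_graph=True)[0]
            est = est + (hvp * v).sum(dim=1)                          # v^T H v per point
        return est / num_samples                                      # (batch,)

    d = 1000                                                          # high-dimensional input
    net = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 1))
    x = torch.rand(8, d)
    lap_est = hutchinson_laplacian(lambda z: net(z), x)               # enters a PINN residual loss
    print(lap_est.shape)                                              # torch.Size([8])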
April 12, 2024:
Presentation #1: Stochastic Thermodynamics of Learning Parametric Probabilistic Models, Shervin Parsi, City University of New York
Link: https://youtu.be/9H2jVWWKFGM?feature=shared
Abstract: Dr. Shervin Parsi has formulated a family of machine learning problems as the time evolution of parametric probabilistic models (PPMs), inherently rendering a thermodynamic process. His primary motivation is to leverage the rich toolbox of the thermodynamics of information to assess the information-theoretic content of learning a probabilistic model. Dr. Parsi first introduces two information-theoretic metrics, memorized information (M-info) and learned information (L-info), which trace the flow of information during the learning process of PPMs. Then, he demonstrates that the accumulation of L-info during the learning process is associated with entropy production, and that the parameters serve as a heat reservoir in this process, capturing learned information in the form of M-info.
Presentation #2: Resolution invariant deep operator network for PDEs with complex geometries, Yue Qiu, College of Mathematics and Statistics of Chongqing University
Link: https://youtu.be/9H2jVWWKFGM?feature=shared
Abstract: Neural operators (NOs) are discretization-invariant deep learning methods with functional output that can approximate any continuous operator. NOs have demonstrated superiority over other deep learning methods in solving partial differential equations (PDEs). However, the domain of the input function needs to be identical to that of the output, which limits applicability. For instance, the widely used Fourier neural operator (FNO) fails to approximate the operator that maps a boundary condition to the PDE solution. To address this issue, Dr. Yue Qiu proposes a novel framework called the resolution-invariant deep operator (RDO) that decouples the spatial domains of the input and output. RDO is motivated by the Deep Operator Network (DeepONet) and, compared with DeepONet, it does not require retraining the network when the input/output is changed. RDO takes functional input and its output is also functional, so it keeps the resolution-invariance property of NOs. It can also resolve PDEs with complex geometries, whereas NOs fail. Various numerical experiments demonstrate the advantage of this method over DeepONet and FNO.
April 5, 2024:
Presentation #1: U-Net-PINN for 3D lithographic simulations and nano-optical design, Vlad Medvedev, Fraunhofer IISB
Link: https://youtu.be/Q82Es_qA1Os?feature=shared
Abstract: The increasing demands on computational lithography and imaging in the design and optimization of lithography processes necessitate rigorous modeling of EUV light diffracted from the mask. Traditional numerical solvers are inefficient for large-scale technology problems, while deep neural networks rely on a huge amount of expensive rigorously simulated or measured data. To overcome these constraints, Dr. Medvedev explores the potential of physics-informed neural networks (PINNs) as a promising solution for addressing complex optical problems in EUV lithography and accurate modeling of light diffraction from reflective EUV masks. The coupling of the predicted diffraction spectrum with image simulations enables the evaluation of PINN performance in terms of relevant lithographic metrics. The capabilities of the established PINN approach to simulate typical 3D mask effects, including non-telecentricities, shifts of the best focus position, and image blur, will be demonstrated. Dr. Medvedev’s study demonstrates a real benefit of PINNs: unlike numerical solvers, once trained, a generalized PINN can simulate light scattering in several milliseconds, without re-training and independently of problem complexity.
Presentation #2: Exploring the Frontiers of Computational Medicine, Yixiang Deng, Ragon Institute of Mass General, MIT, and Harvard University
Link: https://youtu.be/Q82Es_qA1Os?feature=shared
Abstract: Computational models have greatly improved how we understand complex biological systems. Yet, the variety of these systems prohibits a one-size-fits-all solution. Hence, to effectively tackle the specific challenges posed by varying contexts within computational medicine, we must tailor our computational strategies, whether they be data-driven, knowledge-driven, or a hybrid approach integrating the two. In this talk, Dr. Deng will dissect the unique strengths and situational superiority of each modeling paradigm in computational medicine. First, Dr. Deng will show how to provide accurate predictions and distill novel biological knowledge using data-driven models. Next, he will demonstrate how to validate observed disease-mediated changes in blood rheology via knowledge-driven models. Additionally, he will discuss patient-specific decision-making enabled by a hybrid model. Dr. Deng will conclude by focusing on crucial factors, such as age and sex, that are essential to tailoring treatments in precision medicine, and on how to synergistically integrate data-driven, knowledge-driven, and hybrid models to tackle these challenges.
March 29, 2024:
Presentation #1: Modeling Fracture using Physics-Informed Deep Learning, Manav Manav, ETH Zurich
Link: https://youtu.be/mB1lWmecbro?feature=shared
Abstract: Phase-field modeling of fracture, a promising approach to model fracture, recasts the problem of fracture as a variational problem which completely determines the fracture process including crack nucleation, propagation, bifurcation, and coalescence, and obviates the need for ad-hoc conditions. In this approach, a phase field is introduced which regularizes a crack. It is, however, a nonlocal model which introduces a small length scale. Resolving this length scale in computation is expensive. Hence, uncertainty quantification, design optimization, material parameter identification, among others, using this approach become prohibitively expensive. Deep learning offers a potential pathway to address this challenge.
As an initial step in this direction, Dr. Manav explores the application of physics-informed deep learning to phase-field fracture modeling with the aim of capturing various fracture processes [1]. Nonconvexity of the variational energy, and the initiation and evolution of fields with sharp gradients governed by this energy, are the two key challenges to learning the solution field. Dr. Manav uses the deep Ritz method (DRM), in which training of the network representing the solution field proceeds by directly minimizing the variational energy of the system. Guided by these challenges, Dr. Manav constructs a network and selects an optimization scheme to learn the solution accurately. He also elucidates the challenges in learning the solution field with the same level of domain discretization as needed in finite element analysis and suggests ways to overcome them. Finally, Dr. Manav solves several benchmark problems from the phase-field fracture literature, exhibiting the capability of the approach to capture crack nucleation, propagation, kinking, branching, and coalescence. The details of the model and the challenges in obtaining the correct solution will be discussed.
References:
[1] Manav, M., Molinaro, R., Mishra, S., & De Lorenzis, L. “Phase-field modeling of complex fracture processes using physics-informed deep learning,” In preparation.
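A minimal deep Ritz training loop in the spirit of the DRM approach described above, using a simple Dirichlet-type energy on the unit square as a stand-in for the much richer phase-field fracture energy; the network size, sampling strategy, and the omitted boundary treatment are illustrative only.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    def energy(x):
        # Monte Carlo estimate of a variational energy: 0.5*|grad u|^2 - f*u on [0,1]^2
        # (the phase-field fracture energy has elastic and crack-surface terms instead)
        x = x.requires_grad_(True)
        u = net(x)
        grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        f = 1.0                                          # constant body force, for illustration
        return (0.5 * (grad ** 2).sum(dim=1) - f * u.squeeze(1)).mean()

    for step in range(100):                              # deep Ritz: minimize the energy directly
        x = torch.rand(1024, 2)                          # collocation points in the domain
        loss = energy(x)                                 # + boundary penalty terms in practice
        opt.zero_grad(); loss.backward(); opt.step()
    print("final energy estimate:", loss.item())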
March 22, 2024:
Presentation #1: Domain decomposition for physics-informed neural networks, Alexander Heinlein, Delft University of Technology
Link: https://youtu.be/087Y9pLFNqI?feature=shared
Abstract: Physics-informed neural networks (PINNs) are a class of methods for solving differential equation-based problems using a neural network as the discretization. They were introduced by Raissi et al. in [6] and combine the pioneering collocation approach for neural network functions introduced by Lagaris et al. in [4] with the incorporation of data via an additional loss term. PINNs are very versatile as they do not require an explicit mesh, allow for the solution of parameter identification problems, and are well-suited for high-dimensional problems. However, the training of a PINN model is generally not very robust and may require a lot of hyperparameter tuning. In particular, due to the so-called spectral bias, the training of PINN models is notoriously difficult when scaling up to large computational domains as well as for multiscale problems. In this talk, overlapping domain decomposition-based techniques for PINNs are discussed. Compared with other domain decomposition techniques for PINNs, in the finite basis physics-informed neural networks (FBPINNs) approach [5], the coupling is done implicitly via the overlapping regions and does not require additional loss terms. Using the classical Schwarz domain decomposition framework, a very general framework that also allows for multi-level extensions can be introduced [1]. The method outperforms classical PINNs on several types of problems, including multiscale problems, both in terms of accuracy and efficiency. Furthermore, the combination of the multi-level domain decomposition strategy with multifidelity stacking PINNs [3], as introduced in [2] for time-dependent problems, will be discussed. It can be observed that the combination of multifidelity stacking PINNs with a domain decomposition in time clearly improves the reference results without a domain decomposition.
References:
[1] Dolean, Victorita, et al. “Multilevel domain decomposition-based architectures for physics-informed neural networks.” arXiv preprint arXiv:2306.05486 (2023).
[2] Heinlein, Alexander, et al. “Multifidelity domain decomposition-based physics-informed neural networks for time-dependent problems.” arXiv preprint arXiv:2401.07888 (2024).
[3] Howard, Amanda A., et al. “Stacked networks improve physics-informed training: applications to neural networks and deep operator networks.” arXiv preprint arXiv:2311.06483 (2023).
[4] Lagaris, Isaac E., Aristidis Likas, and Dimitrios I. Fotiadis. “Artificial neural networks for solving ordinary and partial differential equations.” IEEE Transactions on Neural Networks 9.5 (1998): 987-1000.
[5] Moseley, Ben, Andrew Markham, and Tarje Nissen-Meyer. “Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations.” Advances in Computational Mathematics 49.4 (2023): 62.
[6] Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.” Journal of Computational Physics 378 (2019): 686-707.
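A hedged 1D sketch of the overlapping domain decomposition idea: each subdomain has its own small network, and the global solution is assembled through normalized window functions forming a partition of unity. Window shape, subdomain placement, and network sizes are illustrative, not the FBPINN implementation of [5] or the multi-level scheme of [1].

    import torch
    import torch.nn as nn

    class SubdomainPINN1D(nn.Module):
        # One small network per overlapping subdomain, blended by smooth windows
        def __init__(self, centers, width=0.3):
            super().__init__()
            self.centers = torch.tensor(centers)
            self.width = width
            self.nets = nn.ModuleList(
                nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
                for _ in centers)

        def windows(self, x):                        # smooth bump per subdomain
            w = torch.exp(-((x - self.centers) / self.width) ** 2)
            return w / w.sum(dim=1, keepdim=True)    # normalize: partition of unity

        def forward(self, x):                        # x: (n, 1)
            w = self.windows(x)                      # (n, num_subdomains)
            outs = torch.cat([net(x) for net in self.nets], dim=1)
            return (w * outs).sum(dim=1, keepdim=True)

    model = SubdomainPINN1D(centers=[0.0, 0.25, 0.5, 0.75, 1.0])
    x = torch.linspace(0, 1, 100).unsqueeze(1)
    u = model(x)                                     # combine with a PDE residual loss as usual
    print(u.shape)                                   # torch.Size([100, 1])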
Presentation #2: Physics-based and data-driven methods for precision medicine in computational cardiology, Matteo Salvador, Stanford University
Link: https://youtu.be/087Y9pLFNqI?feature=shared
Abstract: In recent years, blending physics-based modeling with data-driven methods has had a major impact on computational medicine. Several frameworks have been proposed to create certified digital replicas of the cardiovascular system. These computational pipelines include multiscale and multiphysics mathematical models based on rigorous differential equations, scientific machine learning methods to build accurate and efficient surrogate models, sensitivity analysis, and robust parameter estimation with uncertainty quantification. In this seminar, Dr. Salvador will use cardiac mathematical models for electrophysiology, active and passive mechanics, and hemodynamics, combined with various artificial intelligence-based methods, such as Latent Neural Ordinary Differential Equations, Branched Latent Neural Maps, and Latent Dynamics Networks, to learn complex time and space-time physical processes underlying these systems of ordinary and partial differential equations. He will use these reduced-order models to infer physics-based parameters from cell to organ scale, with uncertainty quantification in a Bayesian framework, while fitting clinical data such as 12-lead electrocardiograms and pressure-volume loops for human hearts. These computational tools represent important contributions to digital twinning in computational cardiology.
March 15, 2024:
Presentation #1: A Python module for easily and efficiently solving problems with the Theory of Functional Connections, Carl Leake, Texas A&M University
Link: https://youtu.be/qDB66Vt1JH4?feature=shared
Abstract: Theory of Functional Connections (TFC) is a functional interpolation framework that can be used to solve a wide variety of problems, e.g., boundary value problems. The tfc Python module, the focus of this talk, is designed to help its users solve problems with TFC easily and efficiently: easily here refers to the time it takes the user to write a Python script to solve their problem and efficiently refers to the computational efficiency of said script. The tfc module leverages the automatic differentiation and just-in-time compilation capabilities of the JAX library to do this. In addition, the module provides other convenience, quality-of-life, and sanity-checking capabilities that reduce/alleviate the most common errors users make when numerically solving problems with TFC.
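To convey the underlying idea (this is not the tfc module's API, which builds constrained expressions with JAX), here is a plain-Python sketch of a TFC constrained expression for a 1D boundary value problem with u(0) = a and u(1) = b: the boundary conditions are satisfied exactly for any choice of the free function, which is what a solver then optimizes.

    import numpy as np

    # TFC-style constrained expression:
    #   u(x) = g(x) + (1 - x) * (a - g(0)) + x * (b - g(1))
    # satisfies u(0) = a and u(1) = b exactly for ANY free function g.
    a, b = 1.0, 3.0

    def g(x, theta):
        # free function: here a simple polynomial with placeholder coefficients theta
        return sum(c * x ** k for k, c in enumerate(theta))

    def u(x, theta):
        return g(x, theta) + (1 - x) * (a - g(0.0, theta)) + x * (b - g(1.0, theta))

    theta = np.random.randn(5)                # arbitrary coefficients
    x = np.linspace(0.0, 1.0, 11)
    print(u(0.0, theta), u(1.0, theta))       # -> 1.0 3.0, regardless of theta
    print(u(x, theta))                        # candidate solution values in between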
March 8, 2024:
Presentation #1: Can Physics-Informed Neural Networks beat the Finite Element Method?, Jonas Latz, University of Manchester
Link: https://youtu.be/bgsqCTgF24w?feature=shared
Abstract: Partial differential equations play a fundamental role in the mathematical modelling of many processes and systems in the physical, biological, and other sciences. To simulate such processes and systems, the solutions of PDEs often need to be approximated numerically. The finite element method, for instance, is a standard methodology for doing so. The recent success of deep neural networks at various approximation tasks has motivated their use in the numerical solution of PDEs. These so-called physics-informed neural networks and their variants have been shown to successfully approximate a large range of partial differential equations. So far, physics-informed neural networks and the finite element method have mainly been studied in isolation from each other. In this work, Dr. Latz compares the two methodologies in a systematic computational study. He employed both methods to numerically solve various linear and nonlinear partial differential equations: Poisson in 1D, 2D, and 3D, Allen-Cahn in 1D, and semilinear Schrödinger in 1D and 2D. He then compared computational costs and approximation accuracies. In terms of solution time and accuracy, physics-informed neural networks were not able to outperform the finite element method in this study. In some experiments, however, they were faster at evaluating the solved PDE.
Presentation #2: On flows and diffusions: from many-body Fokker-Planck to stochastic interpolants, Nicholas Boffi, Courant Institute of Mathematical Sciences
Link: https://youtu.be/bgsqCTgF24w?feature=shared
Abstract: Given a stochastic differential equation, its corresponding Fokker-Planck equation is generically intractable to solve, because its high dimensionality prohibits the application of standard numerical techniques. In this talk, Dr. Boffi will exploit an analogy between the Fokker-Planck equation and modern generative models from machine learning to develop an algorithm for its solution in high dimension. The method enables the computation of previously intractable quantities of interest, such as the entropy production rate of active matter systems, which quantifies the magnitude of nonequilibrium effects. Dr. Boffi will then highlight how insight from the Fokker-Planck equation facilitates the development of a new class of generative models known as stochastic interpolants, which generalize state-of-the-art diffusion models in several key ways that can be leveraged to improve practical performance.
March 1, 2024:
Presentation #1: Lax pairs informed neural networks solving integrable systems, Chen Yong, East China Normal University
Link: https://youtu.be/rKvekSv8j0Q?feature=shared
Abstract: Lax pairs are one of the most important features of integrable systems. In this talk, Dr. Yong proposes Lax pairs informed neural networks (LPINNs) tailored for integrable systems with Lax pairs, designed with novel network architectures and loss functions and comprising LPINN-v1 and LPINN-v2. The most noteworthy advantage of LPINN-v1 is that it can transform the solving of complex integrable systems into the solving of simpler Lax pairs, thereby simplifying the study of integrable systems; it not only efficiently solves for data-driven localized wave solutions, but also obtains spectral parameters and the corresponding spectral functions in Lax pairs. On the basis of LPINN-v1, Dr. Yong additionally incorporates the compatibility condition/zero-curvature equation of Lax pairs in LPINN-v2, whose major advantage is the ability to solve and explore high-accuracy data-driven localized wave solutions and associated spectral problems for all integrable systems with Lax pairs. The numerical experiments in this work involve several important and classic low-dimensional and high-dimensional integrable systems, abundant localized wave solutions, and their Lax pairs, including the soliton of the Korteweg-de Vries (KdV) equation and the modified KdV equation, the rogue wave solution of the nonlinear Schrodinger equation, the kink solution of the sine-Gordon equation, the non-smooth peakon solution of the Camassa-Holm equation, the pulse solution of the short pulse equation, as well as the line-soliton solution of the Kadomtsev-Petviashvili equation and the lump solution of the high-dimensional KdV equation. The innovation of this work lies in the pioneering integration of the Lax pairs of integrable systems into deep neural networks, thereby presenting a fresh methodology and pathway for investigating data-driven localized wave solutions and the spectral problems of Lax pairs.
February 23, 2024:
Presentation #1: Density physics-informed neural networks reveal sources of cell heterogeneity in signal transduction, Jae Kyoung Kim, KAIST
Link: https://youtu.be/dq_-iUrMhiY?feature=shared
Abstract: In this talk, Dr. Jae Kyoung Kim introduces Density-Physics Informed Neural Networks (Density-PINNs) for inferring probability distributions from timeseries data. Density-PINNs leverage Rayleigh distributions as kernels and a variational autoencoder for noise filtering. Dr. Kim demonstrates the power of Density-PINNs by analyzing single-cell gene expression data from sixteen promoters regulated by unknown pathways during antibiotic stress response. By inferring the probability distributions of gene expression patterns, Density-PINNs successfully identify key signaling pathways crucial for consistent cellular responses, offering a valuable strategy for treatment optimization.
February 16, 2024:
Presentation #1: DeepOnet Based Preconditioning Strategies For Solving Parametric Linear Systems of Equations, Alena Kopanicakova, Brown University
Link: https://youtu.be/_ziSqwA8NzM?feature=shared
Abstract: Dr. Kopanicakova introduces a new class of hybrid preconditioners for solving parametric linear systems of equations. The proposed preconditioners are constructed by hybridizing the deep operator network, namely DeepONet, with standard iterative methods. Exploiting the spectral bias, DeepONet-based components are harnessed to address low-frequency error components, while conventional iterative methods are employed to mitigate high-frequency error components. Dr. Kopanicakova’s preconditioning framework comprises two distinct hybridization approaches: the direct preconditioning (DP) and trunk basis (TB) approaches. In the DP approach, DeepONet is used to approximate the action of an inverse operator on a vector during each preconditioning step. In contrast, the TB approach extracts basis functions from the trained DeepONet to construct a map to a smaller subspace, in which the low-frequency component of the error can be effectively eliminated. Dr. Kopanicakova’s numerical results demonstrate that utilizing the TB approach enhances the convergence of Krylov methods by a large margin compared to standard non-hybrid preconditioning strategies. Moreover, the proposed hybrid preconditioners exhibit robustness across a wide range of model parameters and problem resolutions.
February 9, 2024:
Presentation #1: Neural oscillators for generalization of physics-informed machine learning, Taniya Kapoor, TU Delft
Link: https://youtu.be/zJExHI-MYvE?feature=shared
Abstract: A primary challenge of physics-informed machine learning (PIML) is its generalization beyond the training domain, especially when dealing with complex physical problems represented by partial differential equations (PDEs). This paper aims to enhance the generalization capabilities of PIML, facilitating practical, real-world applications where accurate predictions in unexplored regions are crucial. Taniya Kapoor leverages the inherent causality and temporal sequential characteristics of PDE solutions to fuse PIML models with recurrent neural architectures based on systems of ordinary differential equations, referred to as neural oscillators. Through effectively capturing long-time dependencies and mitigating the exploding and vanishing gradient problem, neural oscillators foster improved generalization in PIML tasks. Extensive experimentation involving time-dependent nonlinear PDEs and biharmonic beam equations demonstrates the efficacy of the proposed approach. Incorporating neural oscillators outperforms existing state-of-the-art methods on benchmark problems across various metrics. Consequently, the proposed method improves the generalization capabilities of PIML, providing accurate solutions for extrapolation and prediction beyond the training data.
February 2, 2024:
Presentation #1: Efficient and Physically Consistent Surrogate Modeling of Chemical Kinetics Using Deep Operator Networks, Anuj Kumar, North Carolina State University
Link: https://youtu.be/UYzU7q37tPk?feature=shared
Abstract: In the talk, Anuj Kumar explores a new combustion chemistry acceleration scheme he has developed for reacting flow simulations, utilizing deep operator networks (DeepONets). The scheme, implemented on a subset of thermochemical scalars crucial for the chemical system’s evolution, advances the current solution vector by adaptive time steps. In addition, the original DeepONet architecture is modified to incorporate the parametric dependence of the stiff ODEs associated with chemical kinetics. Unlike previous DeepONet training approaches, his training is conducted over short time windows, using intermediate solutions as initial states. An additional framework of latent-space kinetics identification with a modified DeepONet is proposed, which enhances the computational efficiency and widens the applicability of the proposed scheme. The scheme is demonstrated on the “simple” chemical kinetics of hydrogen oxidation and the more complex chemical kinetics of n-dodecane at high and low temperatures. The proposed framework accurately learns the chemical kinetics and efficiently reproduces species and temperature temporal profiles. Moreover, a very large speed-up with good extrapolation capability is also observed with the proposed scheme. An additional framework for incorporating physical constraints, such as total mass and elemental conservation, into the training of DeepONets for a subset of thermochemical scalars of complex reaction mechanisms is also proposed. Leveraging the strong correlation between the full set and the subset of scalars, the framework establishes an accurate and physically consistent mapping. The framework is demonstrated on the chemical kinetics of CH4 oxidation.
Presentation #2: SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training, Kazem Meidani, Carnegie Mellon University
Link: https://youtu.be/UYzU7q37tPk?feature=shared
Abstract: In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, existing models typically specialize in either numeric or symbolic domains and are usually trained in a supervised manner tailored to specific tasks. This approach neglects the substantial benefits that could arise from a task-agnostic unified understanding between symbolic equations and their numeric counterparts. To bridge the gap, Dr. Meidani introduces SNIP, a Symbolic-Numeric Integrated Pre-training framework, which employs joint contrastive learning between symbolic and numeric domains, enhancing their mutual similarities in the pre-trained embeddings. By performing latent space analysis, Dr. Meidani observes that SNIP provides cross-domain insights into the representations, revealing that symbolic supervision enhances the embeddings of numeric data and vice versa. He evaluates SNIP across diverse tasks, including symbolic-to-numeric mathematical property prediction and numeric-to-symbolic equation discovery, commonly known as symbolic regression. Results show that SNIP effectively transfers to various tasks, consistently outperforming fully supervised baselines and competing strongly with established task-specific methods, especially in few-shot learning scenarios where available data is limited.
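A minimal sketch of the joint contrastive pre-training ingredient, written as a CLIP-style symmetric InfoNCE loss between paired symbolic and numeric embeddings; the encoders, embedding sizes, and temperature are hypothetical, and the details of the actual SNIP objective may differ.

    import torch
    import torch.nn.functional as F

    def symmetric_contrastive_loss(sym_emb, num_emb, temperature=0.07):
        # Pull matched (symbolic, numeric) pairs together, push mismatched pairs apart
        sym = F.normalize(sym_emb, dim=1)
        num = F.normalize(num_emb, dim=1)
        logits = sym @ num.t() / temperature            # (batch, batch) similarity matrix
        labels = torch.arange(sym.shape[0])             # i-th expression matches i-th data set
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.t(), labels))

    # sym_emb / num_emb would come from a symbolic-expression encoder and a
    # numeric-data encoder, respectively (random placeholders shown here)
    loss = symmetric_contrastive_loss(torch.randn(32, 128), torch.randn(32, 128))
    print(loss.item())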
January 26, 2024:
Presentation #1: Physics-informed neural networks for quantum control, Dr. Ariel Norambuena, Pontifical Catholic University
Link: https://youtu.be/Ci85LdBM_J0?feature=shared
Abstract: In this talk, Dr. Norambuena will introduce a computational method for optimal quantum control problems using physics-informed neural networks (PINNs). Motivated by recent advances in open quantum systems and quantum computing, he will discuss the relevance of PINNs for finding realistic and robust control fields. Through this talk, we will learn about the flexibility and universality of PINNs to solve different quantum control problems, showing the main advantages of PINNs compared to standard control techniques.
January 19, 2024:
Presentation #1: U-DeepONet: U-Net Enhanced Deep Operator Network for Geologic Carbon Sequestration, Waleed Diab, Khalifa University
Link: https://youtu.be/AUPou43OuYo?feature=shared
Abstract: Fourier Neural Operator (FNO) and Deep Operator Network (DeepONet) are by far the most popular neural operator learning algorithms. FNO seems to enjoy an edge in popularity due to its ease of use, especially with high dimensional data. However, a lesser-acknowledged feature of DeepONet is its modularity. This feature allows the user the flexibility of choosing the kind of neural network to be used in the trunk and/or branch of the DeepONet. This is beneficial because it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, Waleed Diab will take advantage of this feature by carefully designing a more efficient neural operator based on the DeepONet architecture. Waleed will introduce U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. In addition, the proposed U-DeepONet is significantly more efficient in training times than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming less computational resources. Waleed also shows that the U-DeepONet is more data efficient and better at generalization than both the U-FNO and the Fourier-MIONet.
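For context on the modularity mentioned above, here is a minimal DeepONet sketch in which the prediction is the inner product of a branch network (encoding the input function at sensor locations) and a trunk network (encoding the query coordinate); in U-DeepONet the plain branch MLP shown here would be replaced by a U-Net, and all sizes below are illustrative.

    import torch
    import torch.nn as nn

    class DeepONet(nn.Module):
        # branch encodes the input function; trunk encodes the query point;
        # the output is their inner product over p latent channels
        def __init__(self, num_sensors=100, coord_dim=2, p=64):
            super().__init__()
            self.branch = nn.Sequential(nn.Linear(num_sensors, 128), nn.ReLU(),
                                        nn.Linear(128, p))
            self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.Tanh(),
                                       nn.Linear(128, p))

        def forward(self, u_sensors, y):
            # u_sensors: (batch, num_sensors), y: (batch, n_points, coord_dim)
            b = self.branch(u_sensors)                   # (batch, p)
            t = self.trunk(y)                            # (batch, n_points, p)
            return torch.einsum("bp,bnp->bn", b, t)      # (batch, n_points)

    model = DeepONet()
    out = model(torch.rand(8, 100), torch.rand(8, 50, 2))
    print(out.shape)                                     # torch.Size([8, 50])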
January 12, 2024:
Presentation #1: PPDONet: Deep Operator Networks for forward and inverse problems in astronomy, Shunyuan Mao, University of Victoria
Link: https://youtu.be/_IhB9R33zCk?feature=shared
Abstract: This talk presents Shunyuan Mao’s research on applying Deep Operator Networks (DeepONets) to fluid dynamics in astronomy. The focus is specifically on protoplanetary disks, the gaseous disks surrounding young stars that are the birthplaces of planets. The physical processes in these disks are governed by the Navier-Stokes (NS) equations. Traditional numerical methods for solving these equations are computationally expensive, especially when modeling multiple systems for tasks such as exploring parameter spaces or inferring parameters from observations. Shunyuan Mao addresses this issue by using DeepONets to rapidly map PDE parameters to their solutions. His development, the Protoplanetary Disk Operator Network (PPDONet), significantly reduces computational cost, predicting field solutions within seconds, a task that would typically require hundreds of CPU hours. The utility of this tool is demonstrated in two key applications: 1) its swift solution predictions facilitate the exploration of relationships between PDE parameters and observables extracted from field solutions; 2) when integrated with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), DeepONets effectively address inverse problems by efficiently inferring PDE parameters from unseen solutions.
Presentation #2: Physics-informed neural networks for solving phonon Boltzmann transport equations, Dr. Tengfei Luo, University of Notre Dame
Link: https://youtu.be/_IhB9R33zCk?feature=shared
Abstract: The phonon Boltzmann transport equation (pBTE) has been proven capable of precisely predicting heat conduction in sub-micron electronic devices. However, numerically solving the pBTE is extremely computationally costly due to its high dimensionality, especially when phonon dispersion and time evolution are considered. In this study, physics-informed neural networks (PINNs) are used to solve the pBTE for multiscale non-equilibrium thermal transport problems both efficiently and accurately. In particular, a PINN framework is devised to predict the phonon energy distribution by minimizing the residuals of the governing equations, boundary conditions, and initial conditions without the need for any labeled training data. With the phonon energy distribution predicted by the PINN, temperature and heat flux can then be obtained. In addition, geometric parameters, such as the characteristic length scale, are also considered as part of the input to the PINN, which makes the model capable of predicting heat distribution at different length scales. Beyond the pBTE, Dr. Tengfei Luo has also extended the applicability of the PINN framework to modeling coupled electron-phonon (e-ph) transport. Electron-phonon coupling and transport are ubiquitous in modern electronic devices, and the coupled electron and phonon Boltzmann transport equations (BTEs) hold great potential for the simulation of thermal transport in metal and semiconductor systems.
January 5, 2024:
Presentation #1: Neural Operator Learning Enhanced Physics-informed Neural Networks for solving differential equations with sharp solutions, Professor Mao, Xiamen University
Link: https://youtu.be/7NNyjWxp2zQ?feature=shared
Abstract: In the talk, Professor Mao will present some numerical results for forward and inverse problems of PDEs with sharp solutions, obtained using deep neural network-based methods. In particular, he has developed a deep operator learning enhanced PINN for PDEs with sharp solutions, which can be asymptotically approached by problems with smooth solutions. Firstly, Professor Mao solves the smooth problems using deep operator learning, adopting the DeepONet framework. Then he combines the pre-trained DeepONet with a PINN to solve the sharp problem. Professor Mao demonstrates the effectiveness of the present method by testing several equations, including the viscous Burgers equation, cavity flow, as well as the Navier-Stokes equations. Furthermore, he solves ill-posed problems with insufficient boundary conditions using the present method.
Presentation #2: Physics-Informed Parallel Neural Networks with Self-Adaptive Loss Weighting for the Identification of Structural Systems, Rui Zhang, Pennsylvania State University
Link: https://youtu.be/7NNyjWxp2zQ?feature=shared
Abstract: Rui Zhang has developed a physics-informed parallel neural networks (PIPNNs) framework for the identification of continuous structural systems described by a system of partial differential equations. PIPNNs integrate the physics of the system into the loss function of the NNs, enabling the simultaneous updating of both unknown structural and NN parameters during the process of minimizing the loss function. The PIPNNs framework accommodates structural discontinuities by dividing the computational domain into subdomains, each uniquely represented through a parallelized and interconnected NN architecture. Furthermore, the PIPNNs framework incorporates a self-adaptive weighted loss function based on Neural Tangent Kernel (NTK) theory. The self-adaptive weights, determined from the eigenvalues of the NTK matrix of the PIPNNs, dynamically adjust the convergence rates of each loss term to achieve a balanced convergence while requiring less training data. This advancement is particularly beneficial for inverse problem-solving and structural identification, as the NTK matrix reflects the training progress of both unknown structural and NN parameters. The PIPNNs framework is verified, and its accuracy is assessed, through numerical examples of several continuous structural systems, including bars, beams, and plates.