Featured Researches

Computational Physics

Kinetic modeling of multiphase flow based on simplified Enskog equation

A new kinetic model for multiphase flow is presented within the framework of the discrete Boltzmann method (DBM). In a significant departure from previous DBMs, the model adopts a bottom-up approach. The effects of molecular size and the repulsive potential are described by the Enskog collision model, while the attractive potential is obtained through a mean-field approximation. The molecular interactions, which give rise to the non-ideal equation of state and surface tension, are introduced directly as an external force term. Several typical benchmark problems, including Couette flow, the two-phase coexistence curve, the Laplace law, phase separation, and the collision of two droplets, are simulated to verify the model. In particular, for two types of droplet collisions, the strengths of two non-equilibrium effects, $\bar{D}^{*}_{2}$ and $\bar{D}^{*}_{3}$, defined through the second- and third-order non-conserved kinetic moments of $(f - f^{eq})$, are comparatively investigated, where $f$ ($f^{eq}$) is the (equilibrium) distribution function. Interestingly, during the collision process $\bar{D}^{*}_{2}$ is always significantly larger than $\bar{D}^{*}_{3}$, and $\bar{D}^{*}_{2}$ can be used both to identify the different stages of the collision process and to distinguish different types of collisions. The modeling method can be extended directly to a higher-order model for cases where the non-equilibrium effect is strong and the linear constitutive law of viscous stress is no longer valid.
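For context, a hedged sketch of how such non-equilibrium strengths are typically constructed in DBM studies is given below; the precise discrete velocity set and normalization used in this work are those of the paper, not the generic form shown here.

```latex
% Hedged sketch (general DBM form): non-conserved kinetic moments of the
% deviation from equilibrium, taken in the peculiar velocity v* = v - u,
% and their magnitudes used as non-equilibrium strength measures.
\Delta^{*}_{2,\alpha\beta} = \sum_i \left(f_i - f_i^{eq}\right) v^{*}_{i\alpha} v^{*}_{i\beta},
\qquad
\Delta^{*}_{3,\alpha\beta\gamma} = \sum_i \left(f_i - f_i^{eq}\right) v^{*}_{i\alpha} v^{*}_{i\beta} v^{*}_{i\gamma},
\qquad
\bar{D}^{*}_{2} = \sqrt{\sum_{\alpha\beta} \left(\Delta^{*}_{2,\alpha\beta}\right)^{2}},
\quad
\bar{D}^{*}_{3} = \sqrt{\sum_{\alpha\beta\gamma} \left(\Delta^{*}_{3,\alpha\beta\gamma}\right)^{2}}.
```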

Read more
Computational Physics

Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics

Including prior knowledge is important for effective machine learning models in physics, and is usually achieved by explicitly adding loss terms or constraints on model architectures. Prior knowledge embedded in the physics computation itself rarely draws attention. We show that solving the Kohn-Sham equations when training neural networks for the exchange-correlation functional provides an implicit regularization that greatly improves generalization. Two separations suffice for learning the entire one-dimensional H$_2$ dissociation curve within chemical accuracy, including the strongly correlated region. Our models also generalize to unseen types of molecules and overcome self-interaction error.

Read more
Computational Physics

Large scale simulation of pressure induced phase-field fracture propagation using Utopia

Non-linear phase-field models are increasingly used for the simulation of fracture propagation. The numerical simulation of fracture networks of realistic size requires the efficient parallel solution of large coupled non-linear systems. Although efficient iterative multilevel methods for these types of problems are available in principle, they are not widely used in practice due to the complexity of their parallel implementation. Here, we present Utopia, an open-source C++ library for parallel non-linear multilevel solution strategies. Utopia provides the advantages of high-level programming interfaces while at the same time offering a framework to access low-level data structures without breaking code encapsulation. Complex numerical procedures can be expressed with few lines of code and evaluated by different implementations, libraries, or computing hardware. In this paper, we investigate the parallel performance of our implementation of the recursive multilevel trust-region (RMTR) method based on the Utopia library. RMTR is a globally convergent multilevel solution strategy designed to solve non-convex constrained minimization problems. In particular, we solve pressure-induced phase-field fracture propagation in large and complex fracture networks. Solving such problems is deemed challenging even for a few fractures; here, however, we consider networks of realistic size with up to 1000 fractures.

Read more
Computational Physics

Latent-space time evolution of non-intrusive reduced-order models using Gaussian process emulation

Non-intrusive reduced-order models (ROMs) have recently generated considerable interest as computationally efficient counterparts of nonlinear dynamical systems emerging from various domain sciences. They provide a low-dimensional emulation framework for systems that may be intrinsically high-dimensional, accomplished through a construction algorithm that is purely data-driven. It is no surprise, therefore, that the algorithmic advances of machine learning have led to non-intrusive ROMs with greater accuracy and computational gains. However, by bypassing an equation-based evolution, the interpretability of the ROM framework often suffers. This becomes more problematic when black-box deep learning methods are used, which are notorious for lacking robustness outside the physical regime of the observed data. In this article, we propose a novel latent-space interpolation algorithm based on Gaussian process regression. Notably, the reduced-order evolution of the system is parameterized by control parameters, allowing interpolation in parameter space. The procedure also treats time as a continuous variable, enabling temporal interpolation: it provides information, with quantified uncertainty, about the full-state evolution at a finer resolution than that used for training the ROMs. We assess the viability of this algorithm for an advection-dominated system given by the inviscid shallow water equations.
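The construction can be illustrated with a minimal, hedged sketch: a POD basis is extracted from snapshots, and a Gaussian process regressor evolves the latent coefficients in time, returning a predictive standard deviation as the uncertainty estimate. The toy traveling-wave data and all variable names below are illustrative assumptions, not the authors' implementation, and the control-parameter dimension is omitted for brevity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Illustrative snapshot data: rows are spatial DOFs, columns are time samples.
# (A toy traveling pulse stands in for the output of a shallow-water solver.)
x = np.linspace(0.0, 1.0, 200)
t_train = np.linspace(0.0, 1.0, 40)
snapshots = np.array([np.exp(-200.0 * (x - 0.2 - 0.5 * t) ** 2) for t in t_train]).T

# 1) Linear dimensionality reduction (POD via SVD): keep a few modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 8
basis = U[:, :r]                      # (n_x, r) POD basis
latent = basis.T @ snapshots          # (r, n_t) latent trajectories

# 2) Non-intrusive latent evolution: regress each latent coefficient on time
#    with a GP, giving both temporal interpolation and uncertainty estimates.
gps = []
for k in range(r):
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.1),
                                  normalize_y=True)
    gp.fit(t_train.reshape(-1, 1), latent[k])
    gps.append(gp)

# 3) Query at a finer temporal resolution than the training snapshots.
t_query = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
preds = [gp.predict(t_query, return_std=True) for gp in gps]
latent_mean = np.array([m for m, _ in preds])     # (r, n_query)
latent_std = np.array([sd for _, sd in preds])    # per-mode predictive std

# Reconstruct the full state from the predicted latent coefficients.
state_mean = basis @ latent_mean
print(state_mean.shape, latent_std.max())
```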

Read more
Computational Physics

Lateral Drop Rebound on Hydrophobic and Chemically Heterogeneous Surface

A drop rebounding from a hydrophobic and chemically heterogeneous surface is investigated using the multiphase lattice Boltzmann method. The rebound behavior depends on the degree of hydrophobicity and heterogeneity of the surface. When the surface is homogeneous, the drop rebounds vertically, and the rebound height increases with the surface hydrophobicity. When the surface consists of two regions of different hydrophobicity, the drop rebounds laterally towards the less hydrophobic side. This asymmetric rebound occurs because the unbalanced Young's force exerted on the contact line by the more hydrophobic side is greater than that exerted by the less hydrophobic side. A set of momentum-distribution contours illustrates the dynamic process of drop spreading, shrinking, and rebounding. This work advances the understanding of the rebound mechanism of a drop impacting a surface and provides a guiding strategy for precisely controlling the lateral behavior of rebounding drops through the degree of hydrophobicity and the surface heterogeneity.

Read more
Computational Physics

Learning Compact Physics-Aware Delayed Photocurrent Models Using Dynamic Mode Decomposition

Radiation-induced photocurrent in semiconductor devices can be simulated using complex physics-based models, which are accurate but computationally expensive. This presents a challenge for implementing device characteristics in high-level circuit simulations, where it is computationally infeasible to evaluate detailed models for multiple individual circuit elements. In this work, we demonstrate a procedure for learning compact delayed-photocurrent models that are efficient enough to implement in large-scale circuit simulations but remain faithful to the underlying physics. Our approach uses Dynamic Mode Decomposition (DMD), a system-identification technique based on the singular value decomposition that learns reduced-order discrete-time dynamical systems from time-series data. To obtain physics-aware device models, we simulate the excess carrier density induced by radiation pulses by numerically solving the ambipolar diffusion equation, and then use the simulated internal state as training data for the DMD algorithm. Our results show that the significantly reduced-order delayed-photocurrent models obtained via this method accurately approximate the dynamics of the internal excess carrier density -- which can be used to calculate the induced current at the device boundaries -- while remaining compact enough to incorporate into larger circuit simulations.
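As a hedged illustration of the system-identification step, the sketch below runs standard exact DMD on synthetic snapshot data standing in for the simulated carrier-density state; the synthetic data, rank choice, and function names are illustrative and not taken from the paper.

```python
import numpy as np

def dmd(X, Y, rank):
    """Standard (exact) DMD: fit a linear map Y ~ A X from snapshot pairs,
    with A represented in a rank-truncated SVD basis of X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :rank], np.diag(s[:rank]), Vh[:rank].conj().T
    A_tilde = Ur.conj().T @ Y @ Vr @ np.linalg.inv(Sr)   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vr @ np.linalg.inv(Sr) @ W               # exact DMD modes
    return A_tilde, eigvals, modes

# Synthetic "internal state" time series: two decaying oscillations standing
# in for simulated excess-carrier-density profiles after a radiation pulse.
t = np.linspace(0.0, 10.0, 201)
x = np.linspace(0.0, 1.0, 100)
data = (np.exp(-0.3 * t)[None, :] * np.sin(2 * np.pi * x)[:, None] * np.cos(3.0 * t)[None, :]
        + np.exp(-0.1 * t)[None, :] * np.sin(4 * np.pi * x)[:, None] * np.sin(1.5 * t)[None, :])

X, Y = data[:, :-1], data[:, 1:]          # snapshot pairs x_k -> x_{k+1}
A_tilde, eigvals, modes = dmd(X, Y, rank=6)

# The discrete-time eigenvalues encode decay rates and frequencies of the
# reduced model; |lambda| < 1 corresponds to a decaying (delayed) response.
print(np.abs(eigvals))
```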

Read more
Computational Physics

Learning Thermodynamically Stable and Galilean Invariant Partial Differential Equations for Non-equilibrium Flows

In this work, we develop a method for learning interpretable, thermodynamically stable, and Galilean invariant partial differential equations (PDEs) based on the conservation-dissipation formalism of irreversible thermodynamics. As governing equations for non-equilibrium flows in one dimension, the learned PDEs are parameterized by fully connected neural networks and satisfy the conservation-dissipation principle automatically. In particular, they are hyperbolic balance laws and are Galilean invariant. The training data are generated from a kinetic model with smooth initial data. Numerical results indicate that the learned PDEs achieve good accuracy over a wide range of Knudsen numbers. Remarkably, the learned dynamics give satisfactory results for randomly sampled discontinuous initial data and for Sod's shock tube problem, even though they are trained only with smooth initial data.
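The structural idea can be written in a hedged, generic form: conserved variables obey conservation laws, while additional dissipative variables obey balance laws with relaxation-type sources. The closures $f_\theta$, $g_\theta$, $q_\theta$ below are illustrative notation for neural-network parameterizations, not the paper's exact variables.

```latex
% Generic conservation-dissipation structure (hedged sketch, not the paper's
% exact closures): u denotes conserved densities, v dissipative variables.
\partial_t u + \partial_x f_\theta(u, v) = 0,
\qquad
\partial_t v + \partial_x g_\theta(u, v) = q_\theta(u, v),
```

with $q_\theta$ constrained so that a convex entropy function is dissipated, and the closures depending only on Galilean-invariant combinations of the state.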

Read more
Computational Physics

Learning Unknown Physics of non-Newtonian Fluids

We extend the physics-informed neural network (PINN) method to learn viscosity models of two non-Newtonian systems (polymer melts and suspensions of particles) using only velocity measurements. The PINN-inferred viscosity models agree with the empirical models for shear rates with large absolute values but deviate for shear rates near zero, where the analytical models have an unphysical singularity. Once a viscosity model is learned, we use the PINN method to solve the momentum conservation equation for non-Newtonian fluid flow using only the boundary conditions.
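A hedged sketch of the general idea follows: one network represents the velocity field, another the unknown viscosity as a function of shear rate, and the momentum balance is enforced as an autodiff residual while only velocity data enter the data loss. The 1D channel-flow setup, network sizes, and toy measurements are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Minimal sketch: steady 1D channel flow driven by a constant pressure
# gradient, d/dy( eta(du/dy) * du/dy ) = dp/dx.  One network models the
# velocity profile u(y), another the unknown viscosity eta(shear rate).
u_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
eta_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())  # keep eta > 0

dpdx = -1.0
y_col = torch.linspace(0.0, 1.0, 64).reshape(-1, 1)    # collocation points
y_obs = torch.linspace(0.0, 1.0, 9).reshape(-1, 1)
u_obs = 0.5 * y_obs * (1.0 - y_obs)                    # toy velocity "measurements"

opt = torch.optim.Adam(list(u_net.parameters()) + list(eta_net.parameters()), lr=1e-3)

for step in range(2000):
    opt.zero_grad()

    # Physics residual of the momentum balance via automatic differentiation.
    y = y_col.clone().requires_grad_(True)
    u = u_net(y)
    du = torch.autograd.grad(u, y, torch.ones_like(u), create_graph=True)[0]
    tau = eta_net(du) * du                              # shear stress
    dtau = torch.autograd.grad(tau, y, torch.ones_like(tau), create_graph=True)[0]
    loss_pde = ((dtau - dpdx) ** 2).mean()

    # Data loss on velocity only (the viscosity itself is never observed).
    loss_data = ((u_net(y_obs) - u_obs) ** 2).mean()

    loss = loss_pde + 10.0 * loss_data
    loss.backward()
    opt.step()

print(float(loss_pde), float(loss_data))
```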

Read more
Computational Physics

Learning Variational Data Assimilation Models and Solvers

This paper addresses variational data assimilation from a learning point of view. Data assimilation aims to reconstruct the time evolution of some state given a series of observations, possibly noisy and irregularly sampled. Using the automatic-differentiation tools embedded in deep learning frameworks, we introduce end-to-end neural network architectures for data assimilation. The architecture comprises two key components: a variational model and a gradient-based solver, both implemented as neural networks. A key feature of the proposed end-to-end learning architecture is that the NN models may be trained using both supervised and unsupervised strategies. Our numerical experiments on the Lorenz-63 and Lorenz-96 systems report a significant gain with respect to a classic gradient-based minimization of the variational cost, both in terms of reconstruction performance and optimization complexity. Intriguingly, we also show that the variational models derived from the true Lorenz-63 and Lorenz-96 ODE representations may not lead to the best reconstruction performance. We believe these results may open new research avenues for the specification of assimilation models in geoscience.
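To make the end-to-end structure concrete, here is a hedged sketch: a variational cost combines an observation term with a neural dynamical prior, a gradient-descent solver is unrolled so that its output stays differentiable, and the whole chain is trained on a supervised reconstruction loss for a toy Lorenz-63 trajectory. The cost terms, solver, step counts, and network sizes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Toy ground truth: a short Lorenz-63 trajectory (explicit Euler). ---
def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return torch.stack([sigma * (x[1] - x[0]),
                        x[0] * (rho - x[2]) - x[1],
                        x[0] * x[1] - beta * x[2]])

dt, T = 0.01, 200
x_true = [torch.tensor([1.0, 1.0, 1.0])]
for _ in range(T - 1):
    x_true.append(x_true[-1] + dt * lorenz63(x_true[-1]))
x_true = torch.stack(x_true)                       # (T, 3)

# Noisy observations of the first component only, every 5 steps.
obs_idx = torch.arange(0, T, 5)
y_obs = x_true[obs_idx, 0] + 0.5 * torch.randn(len(obs_idx))

# --- Learned components: a dynamical prior M_theta and a solver step size. ---
M_theta = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 3))
log_lr = nn.Parameter(torch.tensor(-2.0))          # learned gradient-step size

def variational_cost(x):
    # Observation term plus dynamical-prior term, both differentiable.
    obs_term = ((x[obs_idx, 0] - y_obs) ** 2).mean()
    dyn_term = ((x[1:] - M_theta(x[:-1])) ** 2).mean()
    return obs_term + dyn_term

def solve(x0, n_iter=10):
    # Gradient-based solver unrolled so its output stays differentiable with
    # respect to the parameters of M_theta and the learned step size.
    x = x0
    for _ in range(n_iter):
        g = torch.autograd.grad(variational_cost(x), x, create_graph=True)[0]
        x = x - torch.exp(log_lr) * g
    return x

opt = torch.optim.Adam(list(M_theta.parameters()) + [log_lr], lr=1e-3)
for step in range(100):
    opt.zero_grad()
    x0 = torch.zeros(T, 3, requires_grad=True)     # crude first guess
    x_hat = solve(x0)
    loss = ((x_hat - x_true) ** 2).mean()          # supervised reconstruction loss
    loss.backward()
    opt.step()
print(float(loss))
```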

Read more
Computational Physics

Learning the Effective Dynamics of Complex Multiscale Systems

Simulations of complex multiscale systems are essential for science and technology, ranging from weather forecasting to aircraft design. The predictive capability of a simulation hinges on its capacity to capture the governing system dynamics. Large-scale simulations resolving all spatiotemporal scales provide invaluable insight at a high computational cost. Simulations using reduced-order models, in turn, are affordable, but their veracity often hinges on linearisation and/or heuristics. Here we present a novel systematic framework for learning the effective dynamics (LED) of complex systems with multiple spatiotemporal scales and forecasting them accurately. The framework fuses advanced machine learning algorithms with equation-free approaches. It deploys autoencoders to obtain a mapping between fine- and coarse-grained representations of the system and learns to forecast the latent-space dynamics using recurrent neural networks. We compare the LED framework with existing approaches on a number of benchmark problems and demonstrate a reduction in computational effort by several orders of magnitude without sacrificing accuracy.
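A hedged sketch of this pipeline is shown below: an autoencoder maps a toy high-dimensional field to a low-dimensional latent state, an LSTM learns one-step latent forecasting, and predictions are decoded back to the fine representation. The traveling-wave data, layer sizes, and training setup are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy high-dimensional data: snapshots of a traveling wave (a stand-in for a
# fine-grained multiscale simulation), shape (time, n_fine).
n_fine, T, d_latent = 128, 400, 4
s = torch.linspace(0, 1, n_fine)
t = torch.linspace(0, 4, T)
data = torch.sin(2 * math.pi * (s[None, :] - 0.25 * t[:, None]))

# Coarse-graining map and its inverse, learned jointly with the latent dynamics.
encoder = nn.Sequential(nn.Linear(n_fine, 64), nn.Tanh(), nn.Linear(64, d_latent))
decoder = nn.Sequential(nn.Linear(d_latent, 64), nn.Tanh(), nn.Linear(64, n_fine))
latent_rnn = nn.LSTM(input_size=d_latent, hidden_size=32, batch_first=True)
readout = nn.Linear(32, d_latent)

params = (list(encoder.parameters()) + list(decoder.parameters())
          + list(latent_rnn.parameters()) + list(readout.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(500):
    opt.zero_grad()
    z = encoder(data)                                   # (T, d_latent)
    recon_loss = ((decoder(z) - data) ** 2).mean()      # autoencoder term

    # One-step latent forecasting: predict z_{t+1} from the latent history.
    h, _ = latent_rnn(z[:-1].unsqueeze(0))              # (1, T-1, 32)
    z_pred = readout(h.squeeze(0))                      # (T-1, d_latent)
    forecast_loss = ((z_pred - z[1:]) ** 2).mean()

    loss = recon_loss + forecast_loss
    loss.backward()
    opt.step()

# Forecast in the latent space, then decode back to the fine representation.
with torch.no_grad():
    h, _ = latent_rnn(encoder(data[:100]).unsqueeze(0))
    x_next = decoder(readout(h[:, -1]))                 # predicted next fine state
print(float(recon_loss), float(forecast_loss), x_next.shape)
```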

Read more
