Featured Research

Computational Engineering, Finance, and Science

A Perspective on Machine Learning Methods in Turbulence Modelling

This work presents a review of the current state of research in data-driven turbulence closure modeling. It offers a perspective on the challenges and open issues, but also on the advantages and promises of machine learning methods applied to parameter estimation, model identification, closure term reconstruction and beyond, mostly from the perspective of Large Eddy Simulation and related techniques. We stress that consistency of the training data, the model, the underlying physics and the discretization is a key issue that needs to be considered for a successful ML-augmented modeling strategy. In order to make the discussion useful for non-experts in either field, we introduce both the modeling problem in turbulence and the prominent ML paradigms and methods in a concise and self-consistent manner. We then present a survey of current data-driven model concepts and methods, highlight important developments and put them into the context of the discussed challenges.
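
As a deliberately minimal illustration of the parameter-estimation paradigm mentioned above — a synthetic sketch, not a method from any specific paper surveyed — one can fit a single Smagorinsky-type coefficient to data by least squares. The filter width, "true" coefficient, and noise level below are all made up; the model `eps_sgs = (C*delta)^2 |S|^3` is the standard Smagorinsky dissipation estimate, which is linear in `theta = (C*delta)^2`:

```python
import numpy as np

# Fit a Smagorinsky-type coefficient C in eps_sgs = (C*delta)^2 * |S|^3
# from synthetic noisy data; the model is linear in theta = (C*delta)^2.
rng = np.random.default_rng(0)
delta = 0.1                          # filter width (made up)
S = rng.uniform(1.0, 10.0, 200)      # resolved strain-rate magnitude samples
C_true = 0.17                        # "true" coefficient used to generate the data
eps_sgs = (C_true * delta) ** 2 * S ** 3 + 0.01 * rng.normal(size=S.size)

X = S ** 3                           # regressor: eps = theta * |S|^3
theta = (X @ eps_sgs) / (X @ X)      # closed-form least-squares estimate
C_fit = np.sqrt(theta) / delta       # recover the coefficient
```

Even this toy case shows the consistency issue the review stresses: the fitted coefficient is only meaningful relative to the filter width and the discrete operators used to evaluate the strain rate.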


A Physics-Constrained Data-Driven Approach Based on Locally Convex Reconstruction for Noisy Database

Physics-constrained data-driven computing is an emerging hybrid approach that integrates universal physical laws with data-driven models of experimental data for scientific computing. A new data-driven simulation approach coupled with a locally convex reconstruction, termed local convexity data-driven (LCDD) computing, is proposed to enhance accuracy and robustness against noise and outliers in data sets. In this approach, for a given state obtained by the physical simulation, the corresponding optimum experimental solution is sought by projecting the state onto the associated local convex manifold reconstructed from the nearest experimental data. This learning process of the local data structure is less sensitive to noisy data and consequently yields better accuracy. A penalty relaxation is also introduced to recast the local learning solver as a non-negative least-squares problem that can be solved effectively. The reproducing kernel approximation with stabilized nodal integration is employed for the solution of the physical manifold, allowing reduced stress-strain data at the discrete points for enhanced effectiveness of the LCDD learning solver. Due to its inherent manifold-learning properties, LCDD performs well for high-dimensional data sets that are relatively sparse in real-world engineering applications. Numerical tests demonstrate that LCDD improves accuracy by nearly one order of magnitude compared to the standard distance-minimization data-driven scheme when dealing with noisy databases, and that linear exactness is achieved when the local stress-strain relation is linear.
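
The local projection step can be sketched as follows (our reading of the abstract, not the authors' implementation): a state x is projected onto the convex hull of its k nearest experimental points, with the partition-of-unity constraint relaxed by a penalty beta and non-negativity retained. The penalised problem is solved here by projected gradient descent rather than a dedicated NNLS solver; k, beta, and the synthetic data are made up:

```python
import numpy as np

# Project a state x onto the (penalty-relaxed) convex hull of its k nearest
# data points: minimise ||D w - x||^2 + beta * (1 - sum(w))^2 subject to w >= 0.
def local_convex_projection(x, data, k=4, beta=10.0, iters=2000):
    d2 = ((data - x) ** 2).sum(axis=1)
    D = data[np.argsort(d2)[:k]].T               # (dim, k) nearest neighbours
    A = D.T @ D + beta * np.ones((k, k))         # quadratic form of the objective
    b = D.T @ x + beta * np.ones(k)
    lr = 0.9 / np.linalg.norm(A, 2)              # stable step size
    w = np.full(k, 1.0 / k)                      # start at the local centroid
    for _ in range(iters):
        w = np.maximum(0.0, w - lr * (A @ w - b))  # gradient step, then project
    return D @ w, w

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, (50, 2))            # synthetic "experimental" points
x = np.array([0.5, 0.5])                         # state from the physical solver
proj, w = local_convex_projection(x, data)
```

Because the weights are non-negative and sum approximately to one, the reconstructed point stays inside the local convex envelope of the data, which is what gives the scheme its robustness to outliers.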


A Physics-Guided Neural Network Framework for Elastic Plates: Comparison of Governing Equations-Based and Energy-Based Approaches

One of the obstacles hindering the scaling-up of the initial successes of machine learning in practical engineering applications is the dependence of the accuracy on the size of the database that "drives" the algorithms. Incorporating already-known physical laws into the training process can significantly reduce the size of the required database. In this study, we establish a neural network-based computational framework to characterize the finite deformation of elastic plates, which in classic theories is described by the Föppl--von Kármán (FvK) equations with a set of boundary conditions (BCs). A neural network is constructed by taking the spatial coordinates as the input and the displacement field as the output to approximate the exact solution of the FvK equations. The physical information (PDEs, BCs, and potential energies) is then incorporated into the loss function, and a pseudo dataset is sampled without knowing the exact solution to train the neural network. The prediction accuracy of the modeling framework is carefully examined by applying it to four different loading cases: in-plane tension with non-uniformly distributed stretching forces, in-plane central-hole tension, out-of-plane deflection, and buckling under compression. Three ways of formulating the loss function are compared: 1) purely data-driven, 2) PDE-based, and 3) energy-based. Through comparison with finite element simulations, it is found that all three approaches can characterize the elastic deformation of plates with satisfactory accuracy if trained properly. Compared with incorporating the PDEs and BCs in the loss, using the total potential energy shows certain advantages in terms of the simplicity of hyperparameter tuning and computational efficiency.
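
The energy-based approach can be illustrated on a 1-D analogue (a sketch under our own assumptions, not the paper's FvK plate setup): minimise the total potential energy of a clamped bar over nodal displacements by plain gradient descent, with no PDE residual and no reference solution in the loss. All parameter values are made up:

```python
import numpy as np

# 1-D analogue of the energy-based formulation: minimise
#   Pi[u] = int_0^L (EA/2) (u')^2 dx - F * u(L)
# for a bar clamped at x=0 and pulled by a force F at x=L.
EA, L, F, n = 1.0, 1.0, 0.3, 21
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
u = np.zeros(n)                       # u[0] = 0 is the essential boundary condition

def energy(u):
    du = np.diff(u) / h               # strain in each element
    return 0.5 * EA * h * np.sum(du ** 2) - F * u[-1]

for _ in range(20000):
    du = np.diff(u) / h
    g = np.zeros(n)
    g[:-1] -= EA * du                 # assemble dPi/du_i element by element
    g[1:] += EA * du
    g[-1] -= F
    g[0] = 0.0                        # keep the clamped node fixed
    u -= 0.02 * g

# exact solution of this toy problem: u(x) = F * x / (EA)
```

Note how the traction BC at x=L enters the energy naturally through the work term, while only the essential BC needs explicit enforcement — one reason the energy-based loss simplifies hyperparameter tuning.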


A Principled Approach to Design Using High Fidelity Fluid-Structure Interaction Simulations

A high fidelity fluid-structure interaction (FSI) simulation may require many days to run, on hundreds of cores. This poses a serious burden, in terms of both time and cost, when such simulations must be repeated, e.g. for the purpose of design optimization. In this paper we present strategies based on (constrained) Bayesian optimization (BO) to alleviate this burden. BO is a numerical optimization technique based on Gaussian processes (GP) that is able to converge efficiently (with minimal calls to the expensive FSI models) towards a globally optimal design, as gauged using a black-box objective function. In this study we present a principled design evolution that moves from FSI model verification, through a series of Bridge Simulations (bringing the verification case incrementally closer to the application), in order to identify material properties for an underwater, unmanned, autonomous vehicle (UUAV) sail plane. We achieve fast convergence towards an optimal design, using a small number of FSI simulations (a dozen at most), even when selecting over several design parameters, and while respecting optimization constraints.
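
The BO loop itself can be sketched in one dimension (a generic sketch, not the paper's constrained FSI setup): a GP with an RBF kernel serves as the surrogate, and expected improvement selects the next evaluation. The toy objective and all hyperparameters below are made up:

```python
import numpy as np
from math import erf, sqrt, pi

def expensive(x):                          # stand-in for a costly FSI simulation
    return (x - 0.3) ** 2 + 0.02 * np.sin(15 * x)

def gp_posterior(Xs, ys, Xq, ell=0.15, jitter=1e-6):
    # Zero-mean GP regression with an RBF kernel of length scale ell.
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(Xs, Xs) + jitter * np.eye(len(Xs))
    Kq = k(Xq, Xs)
    mu = Kq @ np.linalg.solve(K, ys)
    var = 1.0 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    # EI for minimisation: (best - mu) * Phi(z) + sd * phi(z), z = (best - mu)/sd.
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.array([erf(v / sqrt(2)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sd * phi

X = np.array([0.0, 0.5, 1.0])              # initial "expensive" evaluations
y = expensive(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(10):                        # a handful of expensive calls in total
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, expensive(x_next))
```

The acquisition function is what keeps the number of expensive calls small: it spends evaluations only where the surrogate predicts either a low mean or high uncertainty.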


A Probabilistic Graphical Model Foundation for Enabling Predictive Digital Twins at Scale

A unifying mathematical formulation is needed to move from one-off digital twins built through custom implementations to robust digital twin implementations at scale. This work proposes a probabilistic graphical model as a formal mathematical representation of a digital twin and its associated physical asset. We create an abstraction of the asset-twin system as a set of coupled dynamical systems, evolving over time through their respective state-spaces and interacting via observed data and control inputs. The formal definition of this coupled system as a probabilistic graphical model enables us to draw upon well-established theory and methods from Bayesian statistics, dynamical systems, and control theory. The declarative and general nature of the proposed digital twin model makes it rigorous yet flexible, enabling its application at scale in a diverse range of application areas. We demonstrate how the model is instantiated to enable a structural digital twin of an unmanned aerial vehicle (UAV). The digital twin is calibrated using experimental data from a physical UAV asset. Its use in dynamic decision making is then illustrated in a synthetic example where the UAV undergoes an in-flight damage event and the digital twin is dynamically updated using sensor data. The graphical model foundation ensures that the digital twin calibration and updating process is principled, unified, and able to scale to an entire fleet of digital twins.
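
A toy instance of such an asset-twin update might look as follows (our illustration, not the paper's UAV twin): the digital state is a discrete damage level whose belief is propagated through a transition model and updated from noisy sensor data by Bayes' rule. The transition matrix, sensor model, and data stream are all made up:

```python
import numpy as np

states = np.array([0, 1, 2])                  # damage level: none / moderate / severe
T = np.array([[0.95, 0.05, 0.00],             # p(s' | s): slow degradation model
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

def likelihood(obs, noise=0.5):               # p(obs | s): Gaussian sensor model
    return np.exp(-0.5 * ((obs - states) / noise) ** 2)

belief = np.array([1.0, 0.0, 0.0])            # twin starts certain: undamaged
for obs in [0.1, 0.2, 1.1, 0.9, 1.0]:         # sensor stream (damage event at t=3)
    belief = belief @ T                       # predict through the transition model
    belief *= likelihood(obs)                 # assimilate the observation
    belief /= belief.sum()                    # renormalise the posterior
```

The same predict-update structure is what the graphical-model formulation makes explicit, so calibration and in-flight updating are instances of one inference procedure.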


A Smoothed Particle Hydrodynamics Mini-App for Exascale

Smoothed Particle Hydrodynamics (SPH) is a particle-based, meshfree, Lagrangian method used to simulate multidimensional fluids with arbitrary geometries, most commonly employed in astrophysics, cosmology, and computational fluid dynamics (CFD). It is expected that these computationally demanding numerical simulations will significantly benefit from the up-and-coming Exascale computing infrastructures, which will perform 10^18 FLOP/s. In this work, we review the status of a novel SPH-EXA mini-app, which is the result of an interdisciplinary co-design project between the fields of astrophysics, fluid dynamics and computer science, whose goal is to enable SPH simulations to run on Exascale systems. The SPH-EXA mini-app merges the main characteristics of three state-of-the-art parent SPH codes (namely ChaNGa, SPH-flow, SPHYNX) with state-of-the-art (parallel) programming, optimization, and parallelization methods. The proposed SPH-EXA mini-app is a C++14 lightweight and flexible header-only code with no external software dependencies. Parallelism is expressed via multiple programming models, which can be chosen at compilation time with or without accelerator support, for a hybrid process+thread+accelerator configuration. Strong- and weak-scaling experiments on a production supercomputer show that the SPH-EXA mini-app can be efficiently executed with up to 267 million particles, and with up to 65 billion particles in total on 2,048 hybrid CPU-GPU nodes.
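
The core SPH operation can be sketched in its generic textbook form (not SPH-EXA's implementation): the density at particle i is a kernel-weighted sum over neighbours, rho_i = sum_j m_j W(|r_i - r_j|, h), shown here in 1-D with a cubic-spline kernel:

```python
import numpy as np

# 1-D cubic-spline (M4) SPH kernel with compact support 2h.
def cubic_spline_1d(r, h):
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                    # 1-D normalisation constant
    w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

x = np.linspace(0.0, 1.0, 101)                 # uniformly spaced particles
m = np.full(x.size, 1.0 / x.size)              # equal masses, total mass 1
h = 2.0 * (x[1] - x[0])                        # smoothing length ~ 2 spacings
rho = (m[None, :] * cubic_spline_1d(x[:, None] - x[None, :], h)).sum(axis=1)
# away from the ends, rho recovers the uniform density ~ 1.0
```

In a production code this all-pairs sum is replaced by a neighbour search over the kernel's compact support, which is precisely the part that dominates the parallelization effort at scale.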


A Two-Stage Reconstruction of Microstructures with Arbitrarily Shaped Inclusions

The main goal of our research is to develop an effective method, with a wide range of applications, for the statistical reconstruction of heterogeneous microstructures with compact inclusions of any shape, such as highly irregular grains. The devised approach uses multi-scale extended entropic descriptors (ED) that quantify the degree of spatial non-uniformity of configurations of finite-sized objects. This technique is an innovative development of previously elaborated entropy methods for statistical reconstruction. Here, we discuss the two-dimensional case, but the method can be generalized to three dimensions. In the first stage, the developed procedure creates a set of black synthetic clusters that serve as surrogate inclusions. The clusters have the same individual areas and interfaces as their target counterparts, but random shapes. Then, from a given number of easy-to-generate synthetic cluster configurations, we choose the one with the lowest value of a cost function that we define using extended ED. In the second stage, we make a significant change to the standard technique of simulated annealing (SA): instead of swapping pixels of different phases, we randomly move each of the selected synthetic clusters. To demonstrate the accuracy of the method, we reconstruct and analyze two-phase microstructures with irregular inclusions of silica in a rubber matrix as well as stones in cement paste. The results show that the two-stage reconstruction (TSR) method provides convincing realizations for these complex microstructures. The advantages of TSR include the ease of obtaining synthetic microstructures, very low computational cost, and satisfactory statistical mapping of inclusion shapes. Finally, its simplicity should greatly facilitate independent applications.
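
The second TSR stage can be sketched as follows (a strong simplification: the cost below is a crude non-uniformity proxy, the variance of cluster counts over a coarse grid, not the authors' entropic descriptors): simulated annealing whose moves displace whole synthetic clusters instead of swapping pixels:

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(pos, bins=4):
    # Non-uniformity proxy: variance of cluster counts over a bins x bins grid.
    counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                                  bins=bins, range=[[0, 1], [0, 1]])
    return counts.var()

pos = rng.uniform(0.0, 1.0, (32, 2))             # synthetic cluster centres
c = c0 = cost(pos)
best, T = c, 1.0
for _ in range(4000):
    i = rng.integers(len(pos))
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(0, 0.05, 2)) % 1.0   # move one whole cluster
    ct = cost(trial)
    if ct < c or rng.random() < np.exp((c - ct) / T):      # Metropolis acceptance
        pos, c = trial, ct
        best = min(best, c)
    T *= 0.999                                             # geometric cooling
```

Moving whole clusters keeps each inclusion's area and interface intact by construction, which is exactly what the pixel-swapping variant of SA cannot guarantee.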


A Unified Finite Strain Theory for Membranes and Ropes

The finite strain theory is reformulated in the frame of the Tangential Differential Calculus (TDC) resulting in a unification in a threefold sense. Firstly, ropes, membranes and three-dimensional continua are treated with one set of governing equations. Secondly, the reformulated boundary value problem applies to parametrized and implicit geometries. Therefore, the formulation is more general than classical ones as it does not rely on parametrizations implying curvilinear coordinate systems and the concept of co- and contravariant base vectors. This leads to the third unification: TDC-based models are applicable to two fundamentally different numerical approaches. On the one hand, one may use the classical Surface FEM where the geometry is discretized by curved one-dimensional elements for ropes and two-dimensional surface elements for membranes. On the other hand, it also applies to recent Trace FEM approaches where the geometry is immersed in a higher-dimensional background mesh. Then, the shape functions of the background mesh are evaluated on the trace of the immersed geometry and used for the approximation. As such, the Trace FEM is a fictitious domain method for partial differential equations on manifolds. The numerical results show that the proposed finite strain theory yields higher-order convergence rates independent of the numerical methodology, the dimension of the manifold, and the geometric representation type.
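
The two basic TDC objects behind this unification are standard; for a manifold Γ with unit normal vector n, the tangential projector and the tangential gradient of a scalar field u (defined through any smooth extension ũ of u into a neighbourhood of Γ) read — a sketch of the standard definitions, not the paper's full strain measures:

```latex
P = \mathbf{I} - \mathbf{n} \otimes \mathbf{n},
\qquad
\nabla_{\Gamma} u = P \, \nabla \tilde{u} .
```

Because P and n are available for both parametrized and implicit (level-set) geometries, quantities built from the tangential gradient can be written without curvilinear coordinates or co- and contravariant base vectors, which is what makes the same governing equations usable in both Surface FEM and Trace FEM.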


A block-coupled Finite Volume methodology for problems of large strain and large displacement

A nonlinear block-coupled Finite Volume methodology is developed for the large-displacement, large-strain regime. The new methodology uses the same normal and tangential face derivative discretisations found in the original fully coupled cell-centred Finite Volume solution methodology for linear elasticity, meaning that existing block-coupled implementations may easily be extended to include finite strains. Details are given of the novel approach, including the use of the Newton-Raphson procedure on a residual functional defined using the linear momentum equation. A number of 2-D benchmark cases show that, compared with a segregated procedure, the new approach exhibits errors many orders of magnitude smaller and a much higher convergence rate.
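
The Newton-Raphson iteration on a residual functional can be sketched on a scalar toy problem (not the block-coupled FV discretisation itself); the nonlinear "stiffness" R(u) = k*u + c*u^3 - f and its analytic tangent below are made up for illustration:

```python
# Newton-Raphson on a residual R(u) = 0 with the analytic tangent dR/du.
def newton(R, dR, u0, tol=1e-12, max_it=50):
    u = u0
    for _ in range(max_it):
        r = R(u)
        if abs(r) < tol:
            break
        u -= r / dR(u)                # Newton update: u <- u - J^{-1} R(u)
    return u

k, c, f = 2.0, 0.5, 3.0               # made-up material and load parameters
u = newton(lambda u: k * u + c * u**3 - f,
           lambda u: k + 3 * c * u**2, u0=1.0)
```

In the block-coupled setting the scalar tangent dR/du becomes the Jacobian of the fully coupled discretised momentum equation, and each Newton step solves one block-coupled linear system; the quadratic convergence near the solution is what underlies the reported accuracy advantage over segregated iteration.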


A combined XFEM phase-field computational model for crack growth without remeshing

This paper presents an adaptive strategy for phase-field simulations with transition to fracture. The phase-field equations are solved only in small subdomains around crack tips to determine propagation, while an XFEM discretization is used in the rest of the domain to represent sharp cracks, enabling the use of a coarser discretization there and therefore reducing the computational cost. Crack-tip subdomains move as cracks propagate in a fully automatic process. The same computational mesh is used throughout the simulation, with an h-refined approximation in the elements in the crack-tip subdomains. Continuity of the displacement between the refined subdomains and the XFEM region is imposed in weak form via Nitsche's method. The robustness of the strategy is shown for several numerical examples in 2D and 3D, including branching and coalescence tests.
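
For reference, the symmetric Nitsche terms that weakly impose continuity across an interface Γ take the following form for a scalar Poisson model problem (a generic sketch; the paper applies the method to the displacement field of the coupled elasticity problem):

```latex
a_N(u, v) = - \int_{\Gamma} \{\!\{ \partial_n u \}\!\} \, [\![ v ]\!] \,\mathrm{d}s
            - \int_{\Gamma} \{\!\{ \partial_n v \}\!\} \, [\![ u ]\!] \,\mathrm{d}s
            + \frac{\gamma}{h} \int_{\Gamma} [\![ u ]\!] \, [\![ v ]\!] \,\mathrm{d}s ,
```

where [[·]] denotes the jump across Γ, {{·}} the average, h the local mesh size, and γ a penalty parameter chosen large enough for stability. Unlike a pure penalty method, the consistency terms keep the formulation variationally consistent, so the coupling does not degrade the convergence order of the refined subdomain.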
