Featured Research

Computational Physics

Assessment of absorbed power density and temperature rise for nonplanar body model under electromagnetic exposure above 6 GHz

The averaged absorbed power density (APD) and temperature rise in body models with nonplanar surfaces were computed for electromagnetic exposure above 6 GHz. Different calculation schemes for the averaged APD were investigated. Additionally, a novel compensation method for correcting the heat convection rate on the air/skin interface in voxel human models was proposed and validated. The compensation method can be easily incorporated into bioheat calculations and does not require information regarding the normal direction of the boundary voxels, in contrast to a previously proposed method. The APD and temperature rise were evaluated using models of a two-dimensional cylinder and a three-dimensional partial forearm. The heating factor, defined as the ratio of the temperature rise to the APD, was calculated using different APD averaging schemes. The computed heating factors exhibited different frequency and curvature dependences across the averaging schemes. For body models with curvature radii of >30 mm and at frequencies of >20 GHz, the differences in the heating factors among the APD schemes were small.
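The heating factor described above is a simple ratio once an averaging scheme is fixed. A minimal sketch of one possible scheme, using made-up one-dimensional surface profiles as stand-ins for the paper's 2-D cylinder and 3-D forearm models:

```python
import numpy as np

# Hypothetical skin-surface profiles (illustrative values only; the paper
# evaluates 2-D/3-D geometries with several spatial-averaging schemes).
apd = np.array([10.0, 12.0, 11.0, 9.0])      # absorbed power density, W/m^2
delta_t = np.array([0.50, 0.60, 0.55, 0.45]) # steady-state temperature rise, K

# One simple averaging scheme: spatial mean of the APD over the
# evaluation area. The heating factor is the peak temperature rise
# divided by the averaged APD.
apd_avg = apd.mean()
heating_factor = delta_t.max() / apd_avg     # K per (W/m^2)
print(round(heating_factor, 4))              # -> 0.0571
```

Different averaging schemes change `apd_avg`, and hence the heating factor, which is the comparison the abstract carries out across frequencies and curvatures.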

Atomic-Level Features for Kinetic Monte Carlo Models of Complex Chemistry from Molecular Dynamics Simulations

The high computational cost of evaluating atomic interactions recently motivated the development of computationally inexpensive kinetic models, which can be parametrized from MD simulations of complex chemistry involving thousands of species, or of other processes, and accelerate the prediction of the chemical evolution by up to four orders of magnitude. Such models go beyond the commonly employed potential-energy-surface fitting methods in that they are aimed purely at describing kinetic effects. So far, such kinetic models have utilized molecular descriptions of reactions and have been constrained to reproduce only molecules previously observed in MD simulations. Therefore, these descriptions fail to predict the reactivity of unobserved molecules, for example in the case of large molecules or solids. Here we propose a new approach for the extraction of reaction mechanisms and reaction rates from MD simulations, namely the use of atomic-level features. Using the complex chemical network of hydrocarbon pyrolysis as an example, we demonstrate that kinetic models built using atomic features are able to explore chemical reaction pathways never observed in the MD simulations used to parametrize them. Atomic-level features are shown to enable the construction of reaction mechanisms and the estimation of reaction rates for unknown molecular species from elementary atomic events. By comparing the models' ability to extrapolate to longer simulation timescales and different chemical compositions than those used for parameterization, we demonstrate that kinetic models employing atomic features retain the same level of accuracy and transferability as those based on molecular species, while being more compact and requiring less data to parametrize. We also find that atomic features can better describe the formation of large molecules, enabling the simultaneous description of small molecules and condensed phases.
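The core idea, rates keyed to atomic-level features rather than whole molecules, can be sketched with a toy Gillespie-style kinetic Monte Carlo loop. The feature classes (coordination numbers) and rate constants below are illustrative assumptions, not the paper's parametrization:

```python
import math
import random

random.seed(1)

# Atom counts per feature class (here: a carbon atom's coordination
# number) and a per-atom bond-breaking rate for each class (1/ps).
# Both are illustrative stand-ins for rates learned from MD data.
state = {1: 50, 2: 30, 3: 20}
rate_per_atom = {1: 0.0, 2: 0.05, 3: 0.20}

def kmc_step(state, t):
    """Advance the system by one stochastic bond-breaking event."""
    propensities = {f: n * rate_per_atom[f] for f, n in state.items()}
    total = sum(propensities.values())
    if total == 0.0:
        return t  # no reactive atoms left
    t += -math.log(1.0 - random.random()) / total  # exponential waiting time
    r = random.random() * total
    for f, a in propensities.items():
        if a == 0.0:
            continue
        r -= a
        if r <= 0.0:
            state[f] -= 1                           # atom loses one bond...
            state[f - 1] = state.get(f - 1, 0) + 1  # ...drops one class down
            break
    return t

t = 0.0
for _ in range(25):
    t = kmc_step(state, t)
print(sum(state.values()), round(t, 2))
```

Because events are defined at the atomic level, the same rate table applies to molecules (or solids) never seen during parametrization, which is the transferability advantage the abstract emphasizes.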

Atomistic and mean-field estimates of effective stiffness tensor of nanocrystalline materials of hexagonal symmetry

The anisotropic core-shell model of a nano-grained polycrystal is extended to estimate the effective elastic stiffness of several metals with hexagonal crystal lattice symmetry. In this approach, the bulk nanocrystalline material is described as a two-phase medium with different properties for the grain boundary zone and the grain core. While the grain core is anisotropic, the boundary zone is isotropic and has a thickness defined by the cutoff radius of a corresponding atomistic potential for the considered metal. The predictions of the proposed mean-field model are verified against simulations performed using the Large-scale Atomic/Molecular Massively Parallel Simulator, the Embedded Atom Model, and the molecular statics method. The effect of the grain size on the overall elastic moduli of a nanocrystalline material with a random distribution of orientations is analysed.
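The grain-size dependence follows from simple geometry: the boundary shell occupies a larger volume fraction of small grains. A scalar rule-of-mixtures sketch (a Voigt average with mock moduli, not the paper's tensorial scheme) illustrates the trend:

```python
def volume_fractions(R, t):
    """Volume fractions of core and shell for a spherical grain of
    radius R with a boundary shell of thickness t (set, in the paper,
    by the cutoff radius of the atomistic potential)."""
    f_core = ((R - t) / R) ** 3
    return f_core, 1.0 - f_core

def voigt_average(M_core, M_shell, R, t):
    """Simple rule-of-mixtures (Voigt) estimate of an effective modulus."""
    f_core, f_shell = volume_fractions(R, t)
    return f_core * M_core + f_shell * M_shell

# Mock moduli in GPa: a stiff anisotropic core (here collapsed to a
# scalar) and a softer isotropic boundary zone. Smaller grains are
# more compliant because the shell fraction is larger.
print(voigt_average(60.0, 40.0, R=10.0, t=1.0))   # small grain
print(voigt_average(60.0, 40.0, R=100.0, t=1.0))  # large grain
```

The actual model works with full stiffness tensors and a self-consistent averaging over random orientations; this sketch only captures the core/shell volume-fraction effect.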

Attenuating the fermion sign problem in path integral Monte Carlo simulations using the Bogoliubov inequality and thermodynamic integration

Accurate thermodynamic simulations of correlated fermions using path integral Monte Carlo (PIMC) methods are of paramount importance for many applications, such as the description of ultracold atoms, electrons in quantum dots, and warm dense matter. The main obstacle is the fermion sign problem (FSP), which leads to an exponential increase in computation time with both increasing system size and decreasing temperature. Very recently, Hirshberg et al. [J. Chem. Phys. 152, 171102 (2020)] proposed alleviating the FSP based on the Bogoliubov inequality. In the present work, we extend this approach by adding a parameter that controls the perturbation, allowing for an extrapolation to the exact result. In this way, we can also use thermodynamic integration to obtain an improved estimate of the fermionic energy. As a test system, we choose electrons in 2D and 3D quantum dots and find in some cases a speed-up exceeding 10^6 compared to standard PIMC, while retaining a relative accuracy of ∼0.1%. Our approach is quite general and can readily be adapted to other simulation methods.
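The extrapolation-plus-integration idea can be sketched numerically. With mock energies E(ξ) measured at a few values of a perturbation-control parameter ξ (all numbers below are invented for illustration, and ξ = 1 stands in for the exact limit), a polynomial extrapolation and a trapezoidal thermodynamic integration look like:

```python
import numpy as np

# Mock energy estimates at sampled values of the control parameter xi.
# In the actual method these come from PIMC runs; here they are
# fabricated to lie on a smooth curve purely for illustration.
xi = np.array([0.0, 0.2, 0.4, 0.6])
E = np.array([3.00, 3.10, 3.24, 3.42])

coeffs = np.polyfit(xi, E, 2)        # quadratic fit in xi
E_extrap = np.polyval(coeffs, 1.0)   # extrapolate to the exact limit

# Trapezoidal thermodynamic integration of dE/dxi over the sampled
# range recovers the energy change across that range.
dE = np.gradient(E, xi)
delta_E = float(np.sum(0.5 * (dE[1:] + dE[:-1]) * np.diff(xi)))
print(round(float(E_extrap), 3), round(delta_E, 3))
```

The real method extrapolates Monte Carlo estimates with statistical error bars, so the fit order and sampled range matter; this sketch only shows the mechanics.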

Automatic transformation of irreducible representations for efficient contraction of tensors with cyclic group symmetry

Tensor contractions are ubiquitous in computational chemistry and physics, where tensors generally represent states or operators and contractions are transformations. In this context, the states and operators often preserve physical conservation laws, which are manifested as group symmetries in the tensors. These group symmetries imply that each tensor has block sparsity and can be stored in a reduced form. For nontrivial contractions, the memory footprint and cost are lowered, respectively, by a linear and a quadratic factor in the number of symmetry sectors. State-of-the-art tensor contraction software libraries exploit this opportunity by iterating over blocks or using general block-sparse tensor representations. Both approaches entail overhead in performance and code complexity. With intuition aided by tensor diagrams, we present a technique, irreducible representation alignment, which enables efficient handling of Abelian group symmetries via only dense tensors, by using contraction-specific reduced forms. This technique yields a general algorithm for arbitrary group symmetric contractions, which we implement in Python and apply to a variety of representative contractions from quantum chemistry and tensor network methods. As a consequence of relying on only dense tensor contractions, we can easily make use of efficient batched matrix multiplication via Intel's MKL and distributed tensor contraction via the Cyclops library, achieving good efficiency and parallel scalability on up to 4096 Knights Landing cores of a supercomputer.
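The block sparsity that symmetry induces, and the payoff of aligning the symmetry index, can be seen in the simplest toy case: a Z_2 charge conservation law on matrices, where each operator is one dense block per sector and a contraction only pairs matching sectors. (The paper handles general cyclic groups and arbitrary tensor contractions; this two-sector matrix example is only a sketch of the idea.)

```python
import numpy as np

rng = np.random.default_rng(0)

# With a Z_2 conservation law, a symmetric matrix decomposes into one
# dense block per charge sector q; a contraction pairs only blocks with
# matching sectors. Storing the sector as a leading axis "aligns" the
# irreducible representations, so the whole contraction becomes a single
# batched dense matmul instead of a loop over blocks.
n_sectors, n = 2, 4
A = rng.standard_normal((n_sectors, n, n))  # A[q]: block acting in sector q
B = rng.standard_normal((n_sectors, n, n))

C_loop = np.stack([A[q] @ B[q] for q in range(n_sectors)])  # block-by-block
C_batched = A @ B                                           # one batched matmul
print(np.allclose(C_loop, C_batched))  # -> True
```

The batched form is what lets the authors delegate the work to efficient dense libraries (batched MKL matmuls, Cyclops) without block-sparse bookkeeping.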

Autonomous Materials Discovery Driven by Gaussian Process Regression with Inhomogeneous Measurement Noise and Anisotropic Kernels

A majority of experimental disciplines face the challenge of exploring large, high-dimensional parameter spaces in search of new scientific discoveries. Materials science is no exception; the wide variety of synthesis, processing, and environmental conditions that influence material properties gives rise to particularly vast parameter spaces. Recent advances have increased the efficiency of materials discovery by progressively automating the exploration process. Methods for autonomous experimentation have become sophisticated enough to explore multi-dimensional parameter spaces efficiently and with minimal human intervention, freeing scientists to focus on interpretation and big-picture decisions. Gaussian process regression (GPR) techniques have emerged as the method of choice for steering many classes of experiments. We have recently demonstrated the positive impact of GPR-driven decision-making algorithms on autonomously steering experiments at a synchrotron beamline. However, due to the complexity of the experiments, GPR often cannot be used in its most basic form but rather has to be tuned to account for the special requirements of the experiments. Two requirements are of particular importance: inhomogeneous measurement noise (input-dependent, or non-i.i.d.) and anisotropic kernel functions, the two concepts that we tackle in this paper. Our synthetic and experimental tests demonstrate the importance of both concepts for experiments in materials science and the benefits that result from including them in the autonomous decision-making process.
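Both ingredients enter a GP in well-defined places: per-dimension length scales in the kernel and a per-point noise variance on the diagonal of the training covariance. A self-contained sketch with invented data and hyperparameters (not the paper's models or measurements):

```python
import numpy as np

def anisotropic_rbf(X1, X2, lengthscales):
    """RBF kernel with a separate length scale per input dimension."""
    d = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return np.exp(-0.5 * np.sum(d * d, axis=-1))

# Mock measurements at four points in a 2-D parameter space.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 0.2, 1.3])

# Inhomogeneous (input-dependent) noise: each point gets its own
# variance on the diagonal, instead of one shared value.
noise_var = np.array([1e-4, 1e-2, 1e-4, 1e-2])

# Anisotropic kernel: the second dimension varies more slowly.
ls = np.array([0.5, 2.0])

K = anisotropic_rbf(X, X, ls) + np.diag(noise_var)
alpha = np.linalg.solve(K, y)

def predict(X_new):
    """GP posterior mean at new parameter-space points."""
    return anisotropic_rbf(np.atleast_2d(X_new), X, ls) @ alpha

print(predict([[0.5, 0.5]]))
```

The posterior mean automatically trusts low-noise points more and correlates points more strongly along the long-length-scale dimension, which is exactly how these two concepts shape the autonomous decision-making.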

Bayesian Force Fields from Active Learning for Simulation of Inter-Dimensional Transformation of Stanene

We present a way to dramatically accelerate Gaussian process models for interatomic force fields based on many-body kernels by mapping both forces and uncertainties onto functions of low-dimensional features. This allows for automated active learning of models combining near-quantum accuracy, built-in uncertainty, and a constant cost of evaluation comparable to classical analytical models, capable of simulating millions of atoms. Using this approach, we perform large-scale molecular dynamics simulations of the stability of the stanene monolayer. We discover an unusual phase transformation mechanism of 2D stanene, in which ripples lead to nucleation of bilayer defects and densification into a disordered multilayer structure, followed by formation of bulk liquid at high temperature or nucleation and growth of the 3D bcc crystal at low temperature. The presented method opens possibilities for the rapid development of fast, accurate, uncertainty-aware models for simulating long-time, large-scale dynamics of complex materials.
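The mapping step, which buys the constant evaluation cost, amounts to tabulating the GP posterior on a grid of the low-dimensional feature and interpolating at run time. A toy sketch with a single pair-distance feature and invented data (the paper uses many-body kernels and also maps the uncertainties):

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """1-D RBF kernel; length scale is an illustrative assumption."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

r_train = np.array([0.8, 1.0, 1.2, 1.5, 2.0])    # feature: pair distance
f_train = np.array([5.0, 1.0, -0.5, -0.8, -0.1]) # mock "force" labels

K = rbf(r_train, r_train) + 1e-8 * np.eye(len(r_train))
alpha = np.linalg.solve(K, f_train)

def gp_force(r):
    """Exact GP mean; cost grows with the training-set size."""
    return rbf(np.atleast_1d(r), r_train) @ alpha

grid = np.linspace(0.8, 2.0, 1000)  # tabulate the GP once on the feature
table = gp_force(grid)

def mapped_force(r):
    """Constant-cost surrogate: interpolate the precomputed table."""
    return np.interp(r, grid, table)

print(abs(float(mapped_force(1.1)) - float(gp_force(1.1)[0])))
```

Once mapped, each force call is a table lookup independent of how much training data the GP saw, which is what makes million-atom MD feasible.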

Bayesian optimization with improved scalability and derivative information for efficient design of nanophotonic structures

We propose combining forward shape derivatives with an iterative inversion scheme for Bayesian optimization to find optimal designs of nanophotonic devices. This approach widens the range of applicability of Bayesian optimization to situations where a larger number of iterations is required and where derivative information is available. This was previously impractical because the computational effort required to identify the next evaluation point in the parameter space became much larger than the actual evaluation of the objective function. We demonstrate an implementation of the method by optimizing a waveguide edge coupler.

Benchmark Computation of Morphological Complexity in the Functionalized Cahn-Hilliard Gradient Flow

Reductions of the self-consistent mean-field theory model of amphiphilic molecules in solvent lead to a singular family of Functionalized Cahn-Hilliard (FCH) energies. We modify the energy, removing singularities to stabilize the computation of the gradient flows, and develop a series of benchmark problems that emulate the "morphological complexity" observed in experiments. These benchmarks investigate the delicate balance between the rate of arrival of amphiphilic materials onto an interface and the least-energy mechanism to accommodate the arriving mass. The result is a trichotomy of responses in which two-dimensional interfaces grow by a regularized motion against curvature, by pearling bifurcations, or by curve-splitting directly into networks of interfaces. We evaluate second-order predictor-corrector time-stepping schemes for spectral spatial discretization. The schemes are based on backward differentiation and are either fully implicit, with Preconditioned Steepest Descent (PSD) solves for the nonlinear system, or linearly implicit, with standard Implicit-Explicit (IMEX) or Scalar Auxiliary Variable (SAV) approaches to the nonlinearities. All schemes use a fixed local truncation error to generate adaptive time stepping. Each scheme requires proper preconditioning to achieve robust performance that can enhance efficiency by several orders of magnitude. The nonlinear PSD scheme achieves the smallest global discretization error at fixed local truncation error; however, the IMEX scheme is the most computationally efficient as measured by the number of Fast Fourier Transform calls required to achieve a desired global error. The performance of the SAV scheme mirrors that of IMEX, at roughly half the computational efficiency.
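The linearly implicit IMEX idea for a spectral gradient flow can be sketched on the plain Cahn-Hilliard equation u_t = Δ(u³ - u) - ε²Δ²u, a structural stand-in for the FCH flow (parameters and initial data are illustrative): the stiff biharmonic term is treated implicitly in Fourier space, the nonlinearity explicitly.

```python
import numpy as np

# Periodic 1-D spectral discretization; parameters are illustrative.
N, L, eps, dt = 128, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # wavenumbers
lap = -k**2                                 # Laplacian symbol

def imex_step(u):
    """One linearly implicit IMEX step: the stiff biharmonic term is
    solved implicitly (a diagonal solve in Fourier space); the
    nonlinearity Lap(u^3 - u) is advanced explicitly."""
    n_hat = np.fft.fft(u**3 - u)
    u_hat = np.fft.fft(u)
    u_hat = (u_hat + dt * lap * n_hat) / (1.0 + dt * eps**2 * k**4)
    return np.real(np.fft.ifft(u_hat))

u = 0.1 * np.cos(x)  # small perturbation of the mixed state
for _ in range(100):
    u = imex_step(u)
print(float(np.abs(u).max()))
```

Note that the k = 0 mode is untouched by both terms, so the scheme conserves mass exactly, and each step costs only a few FFTs, which is why the benchmarks count FFT calls as the efficiency metric.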

Benchmarking MD systems simulations on the Graphics Processing Unit and Multi-Core Systems

Molecular dynamics enables a complex system to be simulated and analyzed at the molecular and atomic levels. Simulations can run for long periods of time, even months, so graphics processing units (GPUs) and multi-core systems are used to overcome this impediment. This paper compares these two kinds of systems. The first uses the graphics processing unit via CUDA with the OpenMM molecular dynamics package and OpenCL, which allows the kernels to run on the GPU. This simulation uses a new thermostat that combines the Berendsen thermostat with Langevin dynamics. The second uses the molecular dynamics simulation and energy minimization package GROMACS, parallelized through MPI (Message Passing Interface) on multi-core systems. This simulation uses another new thermostat algorithm, isotropic dissipative particle dynamics (DPD-ISO). Both thermostats are innovative, based on a new theory developed by us. Results show that parallelization on multi-core systems performs up to 33 times faster than the graphics processing unit. In both cases, the system temperature was maintained close to the reference value. For the CUDA GPU simulation, the fastest runtime was obtained with four processors, the simulation being 3.67 times faster than with a single processor.
