
Publication


Featured research published by Sophia Lefantzi.


Archive | 2011

DAKOTA: a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

Michael S. Eldred; Dena M. Vigil; Keith R. Dalbey; William J. Bohnhoff; Brian M. Adams; Laura Painton Swiler; Sophia Lefantzi; Patricia Diane Hough; John P. Eddy

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods.
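
The coupling between DAKOTA and a simulation code is typically a black-box driver: DAKOTA writes a parameters file, invokes a user-supplied script, and reads the responses back from a results file. Below is a minimal sketch of such a driver in Python, assuming DAKOTA's standard (non-aprepro) parameters-file format, two variables named x1 and x2, and a single response labeled response_fn_1; the model function is a placeholder for the real simulation.

    #!/usr/bin/env python3
    # Minimal sketch of a DAKOTA analysis driver (standard params format assumed).
    # DAKOTA invokes it as: driver.py <params_file> <results_file>
    import sys

    def read_params(path):
        """Parse '<value> <descriptor>' pairs from the DAKOTA parameters file."""
        params = {}
        with open(path) as f:
            n = int(f.readline().split()[0])      # '<n> variables' header line
            for _ in range(n):
                value, name = f.readline().split()[:2]
                params[name] = float(value)
        return params

    def model(p):
        """Placeholder objective; replace with a call to the real simulation."""
        return (p["x1"] - 1.0) ** 2 + (p["x2"] - 2.0) ** 2

    if __name__ == "__main__":
        params_file, results_file = sys.argv[1], sys.argv[2]
        with open(results_file, "w") as f:
            f.write(f"{model(read_params(params_file)):.12e} response_fn_1\n")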


IEEE International Conference on High Performance Computing, Data, and Analytics | 2006

A Component Architecture for High-Performance Scientific Computing

Benjamin A. Allan; Robert C. Armstrong; David E. Bernholdt; Felipe Bertrand; Kenneth Chiu; Tamara L. Dahlgren; Kostadin Damevski; Wael R. Elwasif; Thomas Epperly; Madhusudhan Govindaraju; Daniel S. Katz; James Arthur Kohl; Manoj Kumar Krishnan; Gary Kumfert; J. Walter Larson; Sophia Lefantzi; Michael J. Lewis; Allen D. Malony; Lois C. McInnes; Jarek Nieplocha; Boyana Norris; Steven G. Parker; Jaideep Ray; Sameer Shende; Theresa L. Windus; Shujia Zhou

The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
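
The central CCA abstraction is that components interact only through "provides" ports (interfaces a component implements) and "uses" ports (interfaces it calls), matched by a framework at assembly time. The sketch below illustrates this pattern schematically in Python; it is not the actual gov.cca/Babel API, and all component and port names are invented for illustration.

    # Schematic illustration of CCA-style provides/uses ports (not the gov.cca API).

    class Framework:
        """Toy framework: matches 'uses' ports to registered 'provides' ports."""
        def __init__(self):
            self._provided = {}

        def add_provides_port(self, name, impl):
            self._provided[name] = impl

        def get_port(self, name):
            return self._provided[name]       # a real framework also type-checks

    class IntegratorComponent:
        """Provides a 'TimeIntegration' port."""
        def advance(self, state, dt):
            return [s + dt for s in state]    # placeholder physics

    class DriverComponent:
        """Uses the 'TimeIntegration' port without knowing who provides it."""
        def __init__(self, framework):
            self.integrator = framework.get_port("TimeIntegration")

        def run(self):
            state = [0.0, 1.0]
            for _ in range(3):
                state = self.integrator.advance(state, 0.1)
            return state

    fw = Framework()
    fw.add_provides_port("TimeIntegration", IntegratorComponent())
    print(DriverComponent(fw).run())          # components are swappable at assembly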


Archive | 2006

Parallel PDE-Based Simulations Using the Common Component Architecture

Lois Curfman McInnes; Benjamin A. Allan; Robert C. Armstrong; Steven J. Benson; David E. Bernholdt; Tamara L. Dahlgren; Lori Freitag Diachin; Manojkumar Krishnan; James Arthur Kohl; J. Walter Larson; Sophia Lefantzi; Jarek Nieplocha; Boyana Norris; Steven G. Parker; Jaideep Ray; Shujia Zhou

The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations.


Combustion Theory and Modelling | 2007

A CSP and tabulation-based adaptive chemistry model

Jeremiah C. Lee; Habib N. Najm; Sophia Lefantzi; Jaideep Ray; Michael Frenklach; Mauro Valorani; Dimitris A. Goussis

We demonstrate the feasibility of a new strategy for the construction of an adaptive chemistry model that is based on an explicit integrator stabilized by an approximation of the Computational Singular Perturbation (CSP) slow-manifold projector. We examine the effectiveness and accuracy of this technique first using a model problem with variable stiffness. We assess the effect of approximating the CSP slow manifold by either reusing the CSP vectors calculated in previous steps or drawing them from a pre-built tabulation. We find that while accuracy is preserved, the associated CPU cost is reduced substantially by this method. We use two ignition simulations – hydrogen–air and heptane–air mixtures – to demonstrate the feasibility of using the new method to handle realistic kinetic mechanisms. We test the effect of utilizing an approximation of the CSP slow manifold and find that its use preserves the order of the explicit integrator, produces no degradation in accuracy, and results in a scheme that is competitive with traditional implicit integration. Further analysis of the performance data demonstrates that tabulation of the CSP slow manifold provides an increasing level of efficiency as the size of the mechanism increases. From the software engineering perspective, all the machinery developed is Common Component Architecture compliant, giving the software a distinct advantage in ease of maintainability and flexibility in its utilization. Extension of this algorithm is underway to implement an automated tabulation of the CSP slow manifold for a detailed chemical kinetic system, either off-line or on-line with a reactive flow simulation code.
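
The stabilization idea can be stated compactly: if the fast right eigenvectors of the chemical Jacobian form the columns of A_f and the corresponding dual basis vectors form the rows of B_f, the slow-manifold projector is P = I - A_f B_f, and the explicit integrator advances dy/dt = P g(y) instead of dy/dt = g(y). Below is a minimal numpy sketch of one such step, using a toy linear source term and a crude eigenvalue-magnitude cutoff to classify fast modes; the actual method uses CSP refinement, reuse of previously computed vectors, and tabulation.

    import numpy as np

    def csp_projected_step(y, g, jac, dt, fast_cutoff):
        """One explicit Euler step of dy/dt = P g(y), with P the CSP
        slow-manifold projector built from the Jacobian's fast modes."""
        lam, A = np.linalg.eig(jac(y))        # right eigenvectors as columns
        B = np.linalg.inv(A)                  # dual basis vectors as rows
        fast = np.abs(lam) > fast_cutoff      # crude fast/slow classification
        P = np.eye(len(y)) - A[:, fast] @ B[fast, :]
        return y + dt * (P @ g(y)).real

    # Toy stiff linear system: one fast mode (-1e4) and one slow mode (-1).
    J0 = np.array([[-1.0e4, 0.0], [0.0, -1.0]])
    g = lambda y: J0 @ y
    jac = lambda y: J0
    y = np.array([1.0, 1.0])
    for _ in range(10):                       # dt far exceeds the fast time scale
        y = csp_projected_step(y, g, jac, dt=0.01, fast_cutoff=1.0 / 0.01)
    print(y)   # the projector freezes the fast mode; the slow mode decays stably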


International Parallel and Distributed Processing Symposium | 2003

Using the Common Component Architecture to design high performance scientific simulation codes

Sophia Lefantzi; Jaideep Ray; Habib N. Najm

We present a design and proof-of-concept implementation of a component-based scientific simulation toolkit for hydrodynamics. We employed the Common Component Architecture, a minimalist, low-latency component model, as our paradigm for developing a set of high-performance parallel components for simulating flows on structured adaptively refined meshes. Our findings demonstrate that the architecture is sufficiently flexible and simple to allow an intuitive and straightforward decomposition of a complex monolithic code into easy-to-implement components. The result is a set of stand-alone, independent components from which a simulation code is assembled. Our results show that the component architecture imposes negligible overhead on single-processor performance, while scaling to multiple processors remains unaffected.


SIAM Journal on Scientific Computing | 2007

Using High-Order Methods on Adaptively Refined Block-Structured Meshes: Derivatives, Interpolations, and Filters

Jaideep Ray; Christopher A. Kennedy; Sophia Lefantzi; Habib N. Najm

Block-structured adaptive mesh refinement (AMR) is used for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order derivative, interpolation, and filter stencils and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by the discretized operators for derivatives, interpolations, and filters. We develop general expressions for high-order derivative, interpolation, and filter stencils applicable in multiple dimensions, using a Fourier approach, to facilitate high-order block-structured AMR implementations. These stencils are derived under the assumption that the fields that they are applied to are smooth. For a given derivative stencil (and thus a given order of accuracy), the necessary interpolation order is found to be dependent on the highest spatial derivative in the PDE being solved. This is demonstrated empirically, using one- and two-dimensional model equations, by observing the increase in delivered accuracy as the order of accuracy of the interpolation stencil is increased. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of derivative, interpolation, and filter stencils. The procedure devised here is modular and its various pieces, i.e., the derivative, interpolation, and filter stencils, can be extended independently to higher orders and dimensions (in the case of interpolation stencils). The application of these methods often requires nontrivial logic, especially near domain boundaries; this logic, along with the stencils used in this study, is available as a freely downloadable software library.
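
As a concrete instance of the stencils discussed above: the standard fourth-order centered first-derivative stencil is f'_i ≈ (-f_{i+2} + 8 f_{i+1} - 8 f_{i-1} + f_{i-2}) / (12 h). The minimal numpy sketch below applies it in the interior of a 1D mesh and checks the expected fourth-order error decay on a smooth field; the boundary and coarse-fine interface logic that the paper notes is nontrivial is omitted here.

    import numpy as np

    def d1_fourth_order(f, h):
        """Fourth-order centered first derivative on interior points."""
        return (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)

    # Convergence check: the max error should drop ~16x per mesh halving.
    for n in (32, 64, 128):
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        h = x[1] - x[0]
        err = np.max(np.abs(d1_fourth_order(np.sin(x), h) - np.cos(x)[2:-2]))
        print(n, err)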


Progress in Computational Fluid Dynamics | 2005

A component-based toolkit for simulating reacting flows with high order spatial discretisations on structured adaptively refined meshes

Sophia Lefantzi; Jaideep Ray; Christopher A. Kennedy; Habib N. Najm

We present an innovative methodology for developing scientific and mathematical codes for computational studies of reacting flow. High-order (>2) spatial discretisations are combined, for the first time, with multi-level block structured adaptively refined meshes (SAMR) to resolve regions of high gradients efficiently. Within the SAMR context, we use 4th order spatial discretisations to achieve the desired numerical accuracy while maintaining a shallow grid hierarchy. We investigate in detail the pairing between the order of the spatial discretisation and the order of the interpolant, and their effect on the overall order of accuracy. These new approaches are implemented in a high performance, component-based architecture (Common Component Architecture) and achieve software re-usability, flexibility and modularity. The high-order approach and the software design are demonstrated and validated on three test cases modelled as reaction-diffusion systems of increasing complexity. We also demonstrate that the 4th order SAMR approach can be computationally more economical than second-order approaches.
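
The pairing between discretisation order and interpolant order can be made concrete with coarse-to-fine midpoint interpolation stencils: the centered 4-point stencil (-f_{-1} + 9 f_0 + 9 f_1 - f_2)/16 is fourth-order accurate, while the centered 6-point stencil (3 f_{-2} - 25 f_{-1} + 150 f_0 + 150 f_1 - 25 f_2 + 3 f_3)/256 is sixth-order. The numpy sketch below measures both error decays on a smooth field; which one an SAMR code needs depends, as reported above, on the highest spatial derivative in the PDE.

    import numpy as np

    def midpoint4(f):
        """4-point centered midpoint interpolation, O(h^4)."""
        return (-f[:-3] + 9.0 * f[1:-2] + 9.0 * f[2:-1] - f[3:]) / 16.0

    def midpoint6(f):
        """6-point centered midpoint interpolation, O(h^6)."""
        return (3.0 * f[:-5] - 25.0 * f[1:-4] + 150.0 * f[2:-3]
                + 150.0 * f[3:-2] - 25.0 * f[4:-1] + 3.0 * f[5:]) / 256.0

    # Errors should drop ~16x (4-point) and ~64x (6-point) per mesh doubling.
    for n in (16, 32, 64):
        x = np.linspace(0.0, 1.0, n)
        f = np.sin(2.0 * np.pi * x)
        mid = 0.5 * (x[:-1] + x[1:])          # fine points between coarse points
        exact = np.sin(2.0 * np.pi * mid)
        e4 = np.max(np.abs(midpoint4(f) - exact[1:-1]))
        e6 = np.max(np.abs(midpoint6(f) - exact[2:-2]))
        print(n, e4, e6)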


OCEANS Conference | 2011

Verifying marine-hydro-kinetic energy generation simulations using SNL-EFDC

Scott C. James; Sophia Lefantzi; Janet Barco; Erick Johnson; Jesse D. Roberts

Increasing interest in marine hydrokinetic (MHK) energy has led to significant research regarding optimal placement of emerging technologies to maximize energy capture and minimize effects on the marine environment. Understanding the changes to the near- and far-field hydrodynamics is necessary to assess optimal placement. MHK projects will convert energy (momentum) from the system, altering water velocities and potentially water quality and sediment transport as well. Maximum site efficiency for MHK power projects must be balanced against the requirement of avoiding environmental harm. This study is based on previous modifications to an existing flow, sediment-dynamics, and water-quality code (SNL-EFDC), where a simulation of an experimental flume is used to qualify, quantify, and visualize the influence of MHK energy generation. Turbulence and device parameters are calibrated against wake data from a flume experiment at the University of Southampton (L. Myers and A. S. Bahaj, “Near wake properties of horizontal axis marine current turbines,” in Proceedings of the 8th European Wave and Tidal Energy Conference, 2009, pp. 558–565) to produce verified simulations of MHK-device energy removal. To achieve a realistic velocity deficit within the wake of the device, parametric studies using the nonlinear, model-independent parameter estimators PEST and DAKOTA were compared to determine parameter sensitivities and optimal values for various constants in the flow and turbulence-closure equations. The sensitivity analyses revealed that the Smagorinsky subgrid-scale horizontal momentum diffusion constant and the k-ε kinetic energy dissipation rate constant (Cε4) were the two most important parameters influencing wake profile and dissipation at 10 or more device diameters downstream, as they strongly influence how the wake mixes with the bulk flow. These results verify the model, which can now be used to perform MHK-array distribution and optimization studies.
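
Generically, the calibration loop treats the turbulence-closure constants as decision variables, runs the flow model, and minimizes the misfit between simulated and measured wake velocity deficits. The Python sketch below illustrates that loop with scipy; run_wake_model is a hypothetical placeholder standing in for an SNL-EFDC run, and the "measured" deficits are invented for illustration (the study itself used PEST and DAKOTA against the Myers and Bahaj flume data).

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical stand-in for an SNL-EFDC run: maps (Smagorinsky constant,
    # C_eps4) to wake velocity deficits at a few downstream stations.
    def run_wake_model(c_smag, c_eps4, x_over_d=np.array([4.0, 7.0, 10.0])):
        mixing = 0.5 * c_smag + 0.3 * c_eps4      # toy wake-mixing strength
        return np.exp(-mixing * x_over_d)         # toy deficit decay with distance

    measured = np.array([0.55, 0.35, 0.22])       # invented "flume" observations

    def misfit(theta):
        c_smag, c_eps4 = theta
        return np.sum((run_wake_model(c_smag, c_eps4) - measured) ** 2)

    result = minimize(misfit, x0=[0.1, 1.0], method="Nelder-Mead")
    print(result.x, result.fun)                   # calibrated constants and misfit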


Computational Fluid and Solid Mechanics 2003: Proceedings of the Second MIT Conference on Computational Fluid and Solid Mechanics, June 17–20, 2003 | 2003

A component-based scientific toolkit for reacting flows

Sophia Lefantzi; Jaideep Ray

This chapter presents a design and proof-of-concept implementation of a component-based scientific simulation toolkit for reacting flows. It employs the Common Component Architecture (CCA) as a basis for developing a set of scientific and mathematical components for simulating reacting flow on structured adaptively refined meshes. Experiments show the CCA to be sufficiently flexible and simple to allow straightforward design and development of components by computational scientists. Decomposition of the code into subsystems was done first coarsely along numerical-algorithm lines and then finely along physical models. The component architecture imposes negligible overhead in comparison with a traditional code and does not adversely affect parallel scalability. One of the reacting-flow test cases discussed is a 2D reaction-diffusion problem, in which the ignition test case was expanded to include spatial terms to model diffusion.
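
The 2D reaction-diffusion test case pairs a diffusion operator with an ignition-type source term, i.e., u_t = D ∇²u + R(u). A minimal numpy sketch of one explicit time step on a uniform periodic mesh follows; the second-order Laplacian and the cubic source term are illustrative simplifications, whereas the toolkit itself used fourth-order discretisations on SAMR grids.

    import numpy as np

    def step_reaction_diffusion(u, D, dt, h):
        """One explicit Euler step of u_t = D*laplacian(u) + R(u),
        with periodic boundaries and an illustrative cubic source term."""
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
        R = u * u * (1.0 - u)                  # ignition-like bistable source
        return u + dt * (D * lap + R)

    n, h, D = 64, 1.0 / 64, 1.0e-3
    u = np.zeros((n, n))
    u[28:36, 28:36] = 1.0                      # hot ignition kernel
    dt = 0.2 * h * h / D                       # within the explicit stability limit
    for _ in range(200):
        u = step_reaction_diffusion(u, D, dt, h)
    print(u.max(), u.mean())                   # the kernel spreads and reacts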


44th AIAA Fluid Dynamics Conference | 2014

Bayesian calibration of a k-ε turbulence model for predictive jet-in-crossflow simulations

Jaideep Ray; Sophia Lefantzi; Srinivasan Arunajatesan; Lawrence Dechant

We propose a Bayesian method to calibrate parameters of a RANS model to improve its predictive skill in jet-in-crossflow simulations. The method is based on the hypotheses that (1) informative parameters can be estimated from experiments of flow configurations that display the same strongly vortical features as jet-in-crossflow interactions and (2) one can construct surrogates of RANS models for certain judiciously chosen RANS outputs which serve as calibration variables (alternatively, experimental observables). We estimate three k-ε parameters (Cμ, Cε2, Cε1) from Reynolds stress measurements from an incompressible flow-over-a-square-cylinder experiment. The k-ε parameters are estimated as a joint probability density function. Jet-in-crossflow simulations performed with (Cμ, Cε2, Cε1) samples drawn from this distribution are seen to provide far better predictions than those obtained with nominal parameter values. We also find a (Cμ, Cε2, Cε1) combination which provides < 15% error in a number of performance metrics; in contrast, the errors obtained with nominal parameter values may exceed 60%.
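
The workflow can be illustrated with a small random-walk Metropolis sampler: a cheap surrogate stands in for the RANS model, and the posterior over (Cμ, Cε2, Cε1) is sampled against the calibration observations. In the sketch below the surrogate, the observations, the noise level, and the prior bounds are all invented placeholders; the paper constructed its surrogates from judiciously chosen RANS outputs.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear surrogate mapping theta = (C_mu, C_eps2, C_eps1)
    # to two observables; a placeholder for the paper's RANS surrogates.
    def surrogate(theta):
        return np.array([theta @ [1.0, 0.5, -0.3], theta @ [0.2, 1.0, 0.4]])

    obs = np.array([0.62, 2.48])                    # invented observations
    sigma = 0.05                                    # assumed observation noise
    lo = np.array([0.06, 1.7, 1.2])                 # invented uniform prior box
    hi = np.array([0.12, 2.1, 1.6])

    def log_post(theta):
        if np.any(theta < lo) or np.any(theta > hi):
            return -np.inf                          # outside the prior support
        r = surrogate(theta) - obs
        return -0.5 * np.sum(r * r) / sigma**2      # Gaussian log-likelihood

    theta, chain = 0.5 * (lo + hi), []
    for _ in range(20000):                          # random-walk Metropolis
        prop = theta + 0.01 * rng.standard_normal(3)
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(theta)
    post = np.array(chain[5000:])                   # discard burn-in
    print(post.mean(axis=0), post.std(axis=0))      # joint posterior summary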

Collaboration


Dive into Sophia Lefantzi's collaboration.

Top Co-Authors

Jaideep Ray, Sandia National Laboratories
Lawrence Dechant, Sandia National Laboratories
Habib N. Najm, Sandia National Laboratories
Benjamin A. Allan, Sandia National Laboratories
Sean Andrew McKenna, Sandia National Laboratories
Jeremiah C. Lee, Sandia National Laboratories
Mauro Valorani, Sapienza University of Rome
Brian M. Adams, Sandia National Laboratories