Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andrey Prokopenko is active.

Publication


Featured research published by Andrey Prokopenko.


Parallel Processing Letters | 2014

Towards Extreme-Scale Simulations for Low Mach Fluids with Second-Generation Trilinos

Paul Lin; Matthew Tyler Bettencourt; Stefan P. Domino; Travis C. Fisher; Mark Hoemmen; Jonathan Joseph Hu; Eric Todd Phipps; Andrey Prokopenko; Sivasankaran Rajamanickam; Christopher Siefert; Stephen Kennon

Trilinos is an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. While Trilinos was originally designed for scalable solutions of large problems, the fidelity needed by many simulations is significantly greater than what one could have envisioned two decades ago. When problem sizes exceed a billion elements, even scalable applications and solver stacks require a complete revision. The second-generation Trilinos employs C++ templates in order to solve arbitrarily large problems. We present a case study of the integration of Trilinos with a low Mach fluids engineering application (SIERRA low Mach module/Nalu). Through the use of improved algorithms and better software engineering practices, we demonstrate good weak scaling for up to a nine billion element large eddy simulation (LES) problem on unstructured meshes with a 27 billion row matrix on 524,288 cores of an IBM Blue Gene/Q platform.
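
The "C++ templates" remark above refers to templating the solver stack on its scalar and index (ordinal) types, so the same code can address more than 2^31 matrix rows once a 64-bit global ordinal is plugged in; in Trilinos this is the role of Tpetra's Scalar/LocalOrdinal/GlobalOrdinal template parameters. The sketch below is a generic illustration of that idea with made-up class and type names, not the actual Tpetra API.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sparse-matrix skeleton templated on its ordinal types.
// With a 32-bit GlobalOrdinal the global row count is capped near 2.1 billion;
// instantiating the same code with a 64-bit ordinal removes the cap without
// changing any of the numerical algorithms built on top of it.
template <typename Scalar, typename LocalOrdinal, typename GlobalOrdinal>
class CrsMatrixSketch {
public:
  explicit CrsMatrixSketch(GlobalOrdinal numGlobalRows)
    : numGlobalRows_(numGlobalRows) {}

  GlobalOrdinal numGlobalRows() const { return numGlobalRows_; }

private:
  GlobalOrdinal numGlobalRows_;           // global size: may exceed 2^31
  std::vector<LocalOrdinal> colIndices_;  // per-process indices stay 32-bit
  std::vector<Scalar>       values_;
};

int main() {
  // A 27-billion-row system, as in the LES runs above, needs 64-bit global indices.
  CrsMatrixSketch<double, std::int32_t, std::int64_t> A(27'000'000'000LL);
  return A.numGlobalRows() > 0 ? 0 : 1;
}
```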


International Parallel and Distributed Processing Symposium | 2014

Towards Extreme-Scale Simulations with Next-Generation Trilinos: A Low Mach Fluid Application Case Study

Paul Lin; Matthew Tyler Bettencourt; Stefan P. Domino; Travis C. Fisher; Mark Hoemmen; Jonathan Joseph Hu; Eric Todd Phipps; Andrey Prokopenko; Sivasankaran Rajamanickam; Christopher Siefert; Eric C Cyr; Stephen Kennon

Trilinos is an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. While the original version of Trilinos was designed for highly scalable solutions for large problems, the need for increasingly higher fidelity simulations has pushed the problem sizes beyond what could have been envisioned two decades ago. When problem sizes exceed a billion elements, even highly scalable applications and solver stacks require a complete revision. The next-generation Trilinos employs C++ templates in order to solve arbitrarily large problems and enable extreme-scale simulations. We present a case study that involves integration of Trilinos with an engineering application (Sierra low Mach module/Nalu), involving the simulation of low Mach fluid flow for problems of size up to nine billion elements. Through the use of improved algorithms and better software engineering practices, we demonstrate good weak scaling for the matrix assembly and solve for the engineering application for up to a nine billion element fluid flow large eddy simulation (LES) problem on unstructured meshes with a 27 billion row matrix on 131,072 cores of a Cray XE6 platform.


Numerical Linear Algebra With Applications | 2017

An algebraic multigrid method for Q2−Q1 mixed discretizations of the Navier–Stokes equations

Andrey Prokopenko; Raymond S. Tuminaro

Algebraic multigrid (AMG) preconditioners are considered for discretized systems of partial differential equations (PDEs) where unknowns associated with different physical quantities are not necessarily colocated at mesh points. Specifically, we investigate a Q2−Q1 mixed finite element discretization of the incompressible Navier–Stokes equations where the number of velocity nodes is much greater than the number of pressure nodes. Consequently, some velocity degrees of freedom (DOFs) are defined at spatial locations where there are no corresponding pressure DOFs. Thus, AMG approaches leveraging this colocated structure are not applicable. This paper instead proposes an automatic AMG coarsening that mimics certain pressure/velocity DOF relationships of the Q2−Q1 discretization. The main idea is to first automatically define coarse pressures in a somewhat standard AMG fashion and then to carefully (but automatically) choose coarse velocity unknowns so that the spatial location relationship between pressure and velocity DOFs resembles that on the finest grid. To define coefficients within the intergrid transfers, an energy minimization AMG (EMIN-AMG) is utilized. EMIN-AMG is not tied to specific coarsening schemes and grid transfer sparsity patterns, and so it is applicable to the proposed coarsening. Numerical results highlighting solver performance are given on Stokes and incompressible Navier–Stokes problems.
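
For context, the two ingredients named in the abstract can be summarized in standard AMG notation (my notation, not necessarily the paper's): the coarse operator is the Galerkin product built from the prolongator P, and an energy-minimization prolongator chooses the entries of P by minimizing the energy of its columns subject to a prescribed sparsity pattern and exact interpolation of the near-nullspace modes B.

```latex
% Coarse-grid operator and coarse-grid correction
A_c = P^{T} A P, \qquad
x \leftarrow x + P \, A_c^{-1} P^{T} (b - A x)

% Energy-minimization prolongator (standard formulation, sketched here):
% e_j is the j-th canonical basis vector, B / B_c are the fine / coarse
% near-nullspace modes, and the sparsity pattern S of P is fixed a priori.
\min_{P} \sum_{j} \| P e_j \|_{A}^{2}
\quad \text{s.t.} \quad P B_c = B, \ \ \operatorname{pattern}(P) \subseteq \mathcal{S}
```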


Archive | 2014

MueLu User's Guide for Trilinos Version 11.12.

Jonathan Joseph Hu; Andrey Prokopenko; Tobias Wiesner; Christopher Siefert; Raymond S. Tuminaro

This is the official user guide for the MueLu multigrid library in Trilinos version 11.12. This guide provides an overview of MueLu, its capabilities, and instructions for new users who want to start using MueLu with a minimum of effort. Detailed information is given on how to drive MueLu through its XML interface. Links to more advanced use cases are given. This guide gives information on how to achieve good parallel performance, as well as how to introduce new algorithms. Finally, readers will find a comprehensive listing of available MueLu options. Any options not documented in this manual should be considered strictly experimental.
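
As a flavor of the XML-driven interface the guide describes, the sketch below assembles the corresponding options as a Teuchos::ParameterList in C++. The parameter names ("verbosity", "max levels", "coarse: max size", "multigrid algorithm", "smoother: type") follow my recollection of MueLu's simplified input deck and may differ between Trilinos versions, and the factory call mentioned in the comment is an assumption rather than a quote from the guide; the guide itself is the authoritative reference.

```cpp
// Sketch: MueLu-style options built as a Teuchos::ParameterList.
// In practice these entries usually live in an XML file that MueLu reads at
// run time; the names below reflect the simplified interface and should be
// checked against the user guide for the Trilinos version in use.
#include <iostream>
#include <Teuchos_ParameterList.hpp>

int main() {
  Teuchos::ParameterList mueluParams("MueLu");
  mueluParams.set("verbosity", "high");          // amount of screen output
  mueluParams.set("max levels", 10);             // cap on hierarchy depth
  mueluParams.set("coarse: max size", 1000);     // stop coarsening below this size
  mueluParams.set("multigrid algorithm", "sa");  // smoothed aggregation
  mueluParams.set("smoother: type", "RELAXATION");

  // The list would then be handed, together with the assembled matrix, to
  // MueLu's preconditioner factory (MueLu::CreateTpetraPreconditioner in
  // recent Trilinos versions; an assumption, not a quote from the guide).
  mueluParams.print(std::cout);
  return 0;
}
```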


Archive | 2016

Ifpack2 User's Guide 1.0

Andrey Prokopenko; Christopher Siefert; Jonathan Joseph Hu; Mark Hoemmen; Alicia Marie Klinvex


Archive | 2013

Toward Flexible Scalable Algebraic Multigrid Solvers.

Raymond S. Tuminaro; Erik G. Boman; Jonathan Joseph Hu; Andrey Prokopenko; Christopher Siefert; Paul H. Tsuji; Jeremie Gaidamour; Luke N. Olson; Jacob B. Schroder; Badri Hiriyur; David E. Keyes; Haim Waisman


Archive | 2018

Deploy threading in Nalu solver stack.

Andrey Prokopenko; Stephen R. Thomas; Kasia Swirydowicz; Shreyas Ananthan; Jonathan Joseph Hu; Alan B. Williams; Michael A. Sprague


Archive | 2017

Existing Fortran interfaces to Trilinos in preparation for exascale ForTrilinos development

Katherine J. Evans; Mitchell Young; Benjamin Collins; Seth R. Johnson; Andrey Prokopenko; Michael A. Heroux

Collaboration


Dive into Andrey Prokopenko's collaboration.

Top Co-Authors

Jonathan Joseph Hu, Sandia National Laboratories
Raymond S. Tuminaro, Sandia National Laboratories
Christopher Siefert, Sandia National Laboratories
Paul Lin, Sandia National Laboratories
Eric C Cyr, Sandia National Laboratories
Jeremie Gaidamour, Sandia National Laboratories
Mark Hoemmen, Sandia National Laboratories
Eric Todd Phipps, Sandia National Laboratories
Erik G. Boman, Sandia National Laboratories