Publications


Featured research published by Maria Eleftheriou.


Proceedings of the National Academy of Sciences of the United States of America | 2007

Destruction of long-range interactions by a single mutation in lysozyme

Ruhong Zhou; Maria Eleftheriou; Ajay K. Royyuru; B. J. Berne

We propose a mechanism, based on a ≥10-μs molecular dynamics simulation, for the surprising misfolding of hen egg-white lysozyme caused by a single mutation (W62G). Our simulations of the wild-type and mutant lysozymes in 8 M urea solution at biological temperature (with both pH 2 and 7) reveal that the mutant structure is much less stable than that of the wild type, with the mutant showing larger fluctuations and less native-like contacts. Analysis of local contacts reveals that the Trp-62 residue is the key to a cooperative long-range interaction within the wild type, where it acts like a bridge between two neighboring basic residues. Thus, a native-like cluster or nucleation site can form near these residues in the wild type but not in the mutant. The time evolution of the secondary structure also exhibits a quicker loss of the β-sheets in the mutant than in the wild type, whereas some of the α-helices persist during the entire simulation in both the wild type and the mutant in 8 M urea (even though the tertiary structures are basically all gone). These findings, while supporting the general conclusions of a recent experimental study by Dobson and coworkers [Klein-Seetharaman J, Oikawa M, Grimshaw SB, Wirmer J, Duchardt E, Ueda T, Imoto T, Smith LJ, Dobson CM, Schwalbe H (2002) Science 295:1719–1722], provide a detailed but different molecular picture of the misfolding mechanism.


IBM Journal of Research and Development | 2005

Scalable framework for 3D FFTs on the Blue Gene/L supercomputer: implementation and early performance measurements

Maria Eleftheriou; Blake G. Fitch; Aleksandr Rayshubskiy; T. J. C. Ward; Robert S. Germain

This paper presents results on a communications-intensive kernel, the three-dimensional fast Fourier transform (3D FFT), running on the 2,048-node Blue Gene®/L (BG/L) prototype. Two implementations of the volumetric FFT algorithm were characterized, one built on the Message Passing Interface library and another built on an active packet Application Program Interface supported by the hardware bring-up environment, the BG/L advanced diagnostics environment. Preliminary performance experiments on the BG/L prototype indicate that both of our implementations scale well up to 1,024 nodes for 3D FFTs of size 128 × 128 × 128. The performance of the volumetric FFT is also compared with that of the Fastest Fourier Transform in the West (FFTW) library. In general, the volumetric FFT outperforms a port of the FFTW Version 2.1.5 library on large-node-count partitions.
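The volumetric algorithm described above rests on the fact that a 3D FFT factors into three rounds of 1D FFTs, one per axis, with data redistributions between rounds. A minimal serial sketch of this factorization (the parallel all-to-all phases are only indicated in comments; sizes and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x = rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n))

# Serial sketch of the volumetric 3D FFT: three rounds of 1D FFTs,
# one along each axis. In the parallel implementation, each round is
# preceded by an all-to-all redistribution so every node holds complete
# lines ("pencils") of data along the currently active axis.
y = np.fft.fft(x, axis=0)
y = np.fft.fft(y, axis=1)
y = np.fft.fft(y, axis=2)

# The three rounds of 1D transforms agree with a direct 3D FFT.
assert np.allclose(y, np.fft.fftn(x))
```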


Conference on High Performance Computing (Supercomputing) | 2006

Blue Matter: approaching the limits of concurrency for classical molecular dynamics

Blake G. Fitch; Aleksandr Rayshubskiy; Maria Eleftheriou; T. J. Christopher Ward; Mark E. Giampapa; Michael C. Pitman; Robert S. Germain

This paper describes a novel spatial-force decomposition for N-body simulations for which we observe O(sqrt(p)) communication scaling. This has enabled Blue Matter to approach the effective limits of concurrency for molecular dynamics using particle-mesh (FFT-based) methods for handling electrostatic interactions. Using this decomposition, Blue Matter running on Blue Gene/L has achieved simulation rates in excess of 1000 time steps per second and demonstrated significant speed-ups to O(1) atom per node. Blue Matter employs a communicating sequential process (CSP) style model with application communication state machines compiled to hardware interfaces. The scalability achieved has enabled methodologically rigorous biomolecular simulations on biologically interesting systems, such as membrane-bound proteins, whose time scales dwarf previous work on those systems. Major scaling improvements require exploration of alternative algorithms for treating the long-range electrostatics.
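The O(sqrt(p)) communication scaling can be made concrete with a simplified counting argument. The sketch below is illustrative only, not Blue Matter's actual decomposition: in a row/column-style 2D decomposition of pairwise-interaction work across p nodes, each node exchanges data only with the nodes sharing its row and column, so per-node partner counts grow as O(sqrt(p)) rather than O(p):

```python
import math

# Illustrative partner count for a hypothetical sqrt(p) x sqrt(p)
# decomposition: each node communicates with its row mates and column
# mates only, i.e. 2*(sqrt(p) - 1) partners instead of p - 1.
def partners_2d(p):
    q = math.isqrt(p)      # assume p is a perfect square
    return 2 * (q - 1)

for p in (256, 1024, 4096, 16384):
    print(p, partners_2d(p))
```

At p = 16384 nodes this gives 254 partners per node, versus 16383 for a naive all-to-all exchange.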


International Journal of Parallel Programming | 2002

Demonstrating the Scalability of a Molecular Dynamics Application on a Petaflops Computer

George S. Almasi; Calin Cascaval; José G. Castaños; Monty M. Denneau; Wilm E. Donath; Maria Eleftheriou; Mark E. Giampapa; C. T. Howard Ho; Derek Lieber; José E. Moreira; Dennis M. Newns; Marc Snir; Henry S. Warren

The IBM Blue Gene/C parallel computer aims to demonstrate the feasibility of a cellular architecture computer with millions of concurrent threads of execution. One of the major challenges in this project is showing that applications can successfully scale to this massive amount of parallelism. In this paper we demonstrate that the simulation of protein folding using classical molecular dynamics falls in this category. Starting from the sequential version of a well known molecular dynamics code, we developed a new parallel implementation that exploited the multiple levels of parallelism present in the Blue Gene/C cellular architecture. We performed both analytical and simulation studies of the behavior of this application when executed on a very large number of threads. As a result, we demonstrate that this class of applications can execute efficiently on a large cellular machine.


International Conference on Computational Science | 2006

Blue Matter: strong scaling of molecular dynamics on Blue Gene/L

Blake G. Fitch; Aleksandr Rayshubskiy; Maria Eleftheriou; T. J. Christopher Ward; Mark E. Giampapa; Yuriy Zhestkov; Michael C. Pitman; Frank Suits; Alan Grossfield; Jed W. Pitera; William C. Swope; Ruhong Zhou; Scott E. Feller; Robert S. Germain

This paper presents strong scaling performance data for the Blue Matter molecular dynamics framework using a novel n-body spatial decomposition and a collective communications technique implemented on both MPI and low level hardware interfaces. Using Blue Matter on Blue Gene/L, we have measured scalability through 16,384 nodes with measured time per time-step of under 2.3 milliseconds for a 43,222 atom protein/lipid system. This is equivalent to a simulation rate of over 76 nanoseconds per day and represents an unprecedented time-to-solution for biomolecular simulation as well as continued speed-up to fewer than three atoms per node. On a smaller, solvated lipid system with 13,758 atoms, we have achieved continued speedups through fewer than one atom per node and less than 2 milliseconds/time-step. On a 92,224 atom system, we have achieved floating point performance of over 1.8 TeraFlops/second on 16,384 nodes. Strong scaling of fixed-size classical molecular dynamics of biological systems to large numbers of nodes is necessary to extend the simulation time to the scale required to make contact with experimental data and derive biologically relevant insights.
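The reported throughput figures can be sanity-checked with a short conversion. The MD timestep is not stated in the abstract; a 2 fs step is assumed here as a typical value for such systems, which reproduces a rate close to the quoted 76 ns/day:

```python
# Back-of-the-envelope check of the reported simulation rate.
timestep_fs = 2.0              # assumed timestep (not stated in the abstract)
wall_per_step_s = 2.3e-3       # "under 2.3 milliseconds" per time step
steps_per_day = 86400 / wall_per_step_s
ns_per_day = steps_per_day * timestep_fs * 1e-6
print(f"{ns_per_day:.1f} ns/day")
```

At exactly 2.3 ms per step this yields about 75 ns/day; since the measured step time was below 2.3 ms, the achieved rate exceeds this, consistent with the reported "over 76 nanoseconds per day."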


European Conference on Parallel Processing (Euro-Par) | 2005

Performance measurements of the 3D FFT on the Blue Gene/L supercomputer

Maria Eleftheriou; Blake G. Fitch; Aleksandr Rayshubskiy; T. J. Christopher Ward; Robert S. Germain

This paper presents performance characteristics of a communications-intensive kernel, the complex data 3D FFT, running on the Blue Gene/L architecture. Two implementations of the volumetric FFT algorithm were characterized, one built on the MPI library using an optimized collective all-to-all operation [2] and another built on a low-level System Programming Interface (SPI) of the Blue Gene/L Advanced Diagnostics Environment (BG/L ADE) [17]. We compare the current results to those obtained using a reference MPI implementation (MPICH2 ported to BG/L with unoptimized collectives) and to a port of version 2.1.5 of the FFTW library [14]. Performance experiments on the Blue Gene/L prototype indicate that both of our implementations scale well, and the current MPI-based implementation shows a speedup of 730 on 2048 nodes for 3D FFTs of size 128 × 128 × 128. Moreover, the volumetric FFT outperforms the FFTW port by a factor of 8 for a 128 × 128 × 128 complex FFT on 2048 nodes.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2003

A Volumetric FFT for BlueGene/L

Maria Eleftheriou; José E. Moreira; Blake G. Fitch; Robert S. Germain

BlueGene/L is a massively parallel supercomputer organized as a three-dimensional torus of compute nodes. A fundamental challenge in harnessing the new computational capabilities of BlueGene/L is the design and implementation of numerical algorithms that scale effectively on thousands of nodes. A computational kernel of particular importance is the Fast Fourier Transform (FFT) of three-dimensional data. In this paper, we present the approach we are taking in BlueGene/L to produce a scalable FFT implementation. We rely on a volume decomposition of the data to take advantage of the toroidal communication topology. We present experimental results using an MPI-based implementation of our algorithm, in order to test the basic tenets behind our decomposition and to allow experimentation on existing platforms. Our preliminary results indicate that our algorithm scales well on as many as 512 nodes for three-dimensional FFTs of size 128 × 128 × 128.


International Conference on Supercomputing | 2001

Demonstrating the scalability of a molecular dynamics application on a Petaflop computer

George S. Almasi; Calin Cascaval; José G. Castaños; Monty M. Denneau; Wilm E. Donath; Maria Eleftheriou; Mark E. Giampapa; C. T. Howard Ho; Derek Lieber; José E. Moreira; Dennis M. Newns; Marc Snir; Henry S. Warren

The IBM Blue Gene project has undertaken the development of a cellular architecture computer with millions of concurrent threads of execution. One of the major challenges of this project is demonstrating that applications can successfully exploit this massive amount of parallelism. Starting from the sequential version of a well-known molecular dynamics code, we developed a new application that exploits the multiple levels of parallelism in the Blue Gene cellular architecture. We perform both analytical and simulation studies of the behavior of this application when executed on a very large number of threads. As a result, we demonstrate that this class of applications can execute efficiently on a large cellular machine.


IBM Journal of Research and Development | 2005

Early performance data on the Blue Matter molecular simulation framework

Robert S. Germain; Yuriy Zhestkov; Maria Eleftheriou; Aleksandr Rayshubskiy; Frank Suits; T. J. C. Ward; Blake G. Fitch

Blue Matter is the application framework being developed in conjunction with the scientific portion of the IBM Blue Gene® project. We describe the parallel decomposition currently being used to target the Blue Gene/L machine and discuss the application-based trace tools used to analyze the performance of the application. We also present the results of early performance studies, including a comparison of the performance of the Ewald and the particle-particle particle-mesh (P3ME) methods, compare the measured performance of some key collective operations with the limitations imposed by the hardware, and discuss some future directions for research.


IBM Journal of Research and Development | 2008

Blue Matter: scaling of N-body simulations to one atom per node

Blake G. Fitch; Aleksandr Rayshubskiy; Maria Eleftheriou; T. J. C. Ward; Mark E. Giampapa; Mike Pitman; Jed W. Pitera; William C. Swope; Robert S. Germain

N-body simulations present some of the most interesting challenges in the area of massively parallel computing, especially when the object is to improve the time to solution for a fixed-size problem. The Blue Matter molecular simulation framework was developed specifically to address these challenges, to explore programming models for massively parallel machine architectures in a concrete context, and to support the scientific goals of the IBM Blue Gene® Project. This paper reviews the key issues involved in achieving ultrastrong scaling of methodologically correct biomolecular simulations, particularly the treatment of the long-range electrostatic forces present in simulations of proteins in water and membranes. Blue Matter computes these forces using the particle-particle particle-mesh Ewald (P3ME) method, which breaks the problem up into two pieces, one that requires the use of three-dimensional fast Fourier transforms with global data dependencies and another that involves computing interactions between pairs of particles within a cutoff distance. We summarize our exploration of the parallel decompositions used to compute these finite-ranged interactions, describe some of the implementation details involved in these decompositions, and present the evolution of strong-scaling performance achieved over the course of this exploration, along with evidence for the quality of simulation achieved.
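The two-piece split that P3ME performs on the Coulomb interaction rests on an exact identity: 1/r decomposes into a short-range term erfc(βr)/r, handled directly within a cutoff, and a smooth long-range term erf(βr)/r, handled on a mesh with 3D FFTs. A minimal numerical check of the identity (β and the sample radii below are arbitrary illustrative values):

```python
import math

# Ewald/P3ME splitting of the Coulomb kernel: for any splitting
# parameter beta, erfc(beta*r)/r + erf(beta*r)/r == 1/r exactly.
# The first term decays rapidly (short-range, direct sum within a
# cutoff); the second is smooth (long-range, computed via 3D FFTs).
beta = 0.35
for r in (1.0, 2.5, 7.0):
    short_range = math.erfc(beta * r) / r
    long_range = math.erf(beta * r) / r
    assert abs(short_range + long_range - 1.0 / r) < 1e-12
print("split is exact")
```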
