Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Edward Seidel is active.

Publication


Featured research published by Edward Seidel.


IEEE International Conference on High Performance Computing, Data and Analytics | 2011

The International Exascale Software Project roadmap

Jack J. Dongarra; Pete Beckman; Terry Moore; Patrick Aerts; Giovanni Aloisio; Jean Claude Andre; David Barkai; Jean Yves Berthou; Taisuke Boku; Bertrand Braunschweig; Franck Cappello; Barbara M. Chapman; Xuebin Chi; Alok N. Choudhary; Sudip S. Dosanjh; Thom H. Dunning; Sandro Fiore; Al Geist; Bill Gropp; Robert J. Harrison; Mark Hereld; Michael A. Heroux; Adolfy Hoisie; Koh Hotta; Zhong Jin; Yutaka Ishikawa; Fred Johnson; Sanjay Kale; R.D. Kenway; David E. Keyes

Over the last 20 years, the open-source community has provided more and more software on which the world’s high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and key integration of technologies necessary to make them work together smoothly and efficiently, both within individual petascale systems and between different systems. It seems clear that this completely uncoordinated development model will not provide the software needed to support the unprecedented parallelism required for peta/exascale computation on millions of cores, or the flexibility required to exploit new hardware models and features, such as transactional memory, speculative execution, and graphics processing units. This report describes the work of the community to prepare for the challenges of exascale computing, ultimately combining their efforts in a coordinated International Exascale Software Project.


IEEE International Conference on High Performance Computing, Data and Analytics | 2002

The Cactus Framework and Toolkit: Design and Applications

Tom Goodale; Gabrielle Allen; Gerd Lanfermann; Joan Masso; Thomas Radke; Edward Seidel; John Shalf

We describe Cactus, a framework for building a variety of computing applications in science and engineering, including astrophysics, relativity and chemical engineering. We first motivate by example the need for such frameworks to support multi-platform, high performance applications across diverse communities. We then describe the design of the latest release of Cactus (Version 4.0), a complete rewrite of earlier versions, which enables highly modular, multi-language, parallel applications to be developed by single researchers and large collaborations alike. Making extensive use of abstractions, we detail how we are able to provide the latest advances in computational science, such as interchangeable parallel data distribution and high performance IO layers, while hiding most details of the underlying computational libraries from the application developer. We survey how Cactus 4.0 is being used by various application communities, and describe how it will also enable these applications to run on the computational Grids of the near future.
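
The modularity the abstract describes can be pictured with a small sketch. The code below is only an illustration of the pattern, not the actual Cactus API: application modules ("thorns" in Cactus terminology) are registered with a core framework (the "flesh") that binds them to an interchangeable I/O backend chosen at startup. All names here are invented for exposition.

# Illustrative sketch only; this is NOT the real Cactus API.
from abc import ABC, abstractmethod

class IOLayer(ABC):
    """Interchangeable I/O backend, hidden from the application modules."""
    @abstractmethod
    def write(self, varname, values):
        ...

class ASCIIWriter(IOLayer):
    def write(self, varname, values):
        print(varname, *values)

class Framework:
    """Plays the role of the core ('flesh'): wires modules to backends."""
    def __init__(self, io):
        self.io = io
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def evolve(self, state, steps):
        for _ in range(steps):
            for module in self.modules:    # each module advances the state
                state = module(state)
            self.io.write("state", state)  # backend is swappable
        return state

# A 'thorn'-style module is any callable mapping state -> state.
decay = lambda u: [0.9 * x for x in u]

app = Framework(io=ASCIIWriter())  # swap in another IOLayer without
app.register(decay)                # touching the physics code
app.evolve([1.0, 2.0, 3.0], steps=3)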


IEEE International Conference on High Performance Computing, Data and Analytics | 2001

The Cactus Worm: Experiments with Dynamic Resource Discovery and Allocation in a Grid Environment

Gabrielle Allen; David Sigfredo Angulo; Ian T. Foster; Gerd Lanfermann; Chuang Liu; Thomas Radke; Edward Seidel; John Shalf

The ability to harness heterogeneous, dynamically available grid resources is attractive to typically resource-starved computational scientists and engineers, as in principle it can increase, by significant factors, the number of cycles that can be delivered to applications. However, new adaptive application structures and dynamic runtime system mechanisms are required if we are to operate effectively in grid environments. To explore some of these issues in a practical setting, the authors are developing an experimental framework, called Cactus, that incorporates both adaptive application structures for dealing with changing resource characteristics and adaptive resource selection mechanisms that allow applications to change their resource allocations (e.g., via migration) when performance falls outside specified limits. The authors describe the adaptive resource selection mechanisms and describe how they are used to achieve automatic application migration to “better” resources following performance degradation. The results provide insights into the architectural structures required to support adaptive resource selection. In addition, the authors suggest that the Cactus Worm affords many opportunities for grid computing.
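
The migration policy sketched in the abstract, checkpoint and move when performance falls outside specified limits, reduces to a small control loop. The sketch below is hypothetical; the monitor, the threshold, and the site names are all invented for illustration and are not taken from the Cactus Worm code.

# Hypothetical migration-on-degradation control loop.
import random

def measured_rate():
    """Stand-in for a real performance monitor (iterations per second)."""
    return random.uniform(40.0, 160.0)

def step_with_migration(site, floor, candidates):
    rate = measured_rate()
    if rate >= floor:
        return site                       # within limits: keep computing
    # Outside the limit: a real implementation would checkpoint the state
    # here, then restart from the checkpoint on the chosen resource.
    new_site = random.choice([s for s in candidates if s != site])
    print(f"{rate:5.1f} it/s below {floor:.1f}: migrating {site} -> {new_site}")
    return new_site

site = "siteA"
for _ in range(5):
    site = step_with_migration(site, floor=100.0,
                               candidates=["siteA", "siteB", "siteC"])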


Conference on High Performance Computing (Supercomputing) | 2001

Supporting Efficient Execution in Heterogeneous Distributed Computing Environments with Cactus and Globus

Gabrielle Allen; Thomas Dramlitsch; Ian T. Foster; Nicholas T. Karonis; Matei Ripeanu; Edward Seidel; Brian R. Toonen

Improvements in the performance of processors and networks make it both feasible and interesting to treat collections of workstations, servers, clusters, and supercomputers as integrated computational resources, or Grids. However, the highly heterogeneous and dynamic nature of such Grids can make application development difficult. Here we describe an architecture and prototype implementation for a Grid-enabled computational framework based on Cactus, the MPICH-G2 Grid-enabled message-passing library, and a variety of specialized features to support efficient execution in Grid environments. We have used this framework to perform record-setting computations in numerical relativity, running across four supercomputers and achieving scaling of 88% (1140 CPUs) and 63% (1500 CPUs). The problem size we were able to compute was about five times larger than any previous run. Further, we introduce and demonstrate adaptive methods that automatically adjust computational parameters during run time, to dramatically increase the efficiency of a distributed Grid simulation, without modification of the application and without any knowledge of the underlying network connecting the distributed computers.
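
The run-time adaptation mentioned at the end of the abstract can be pictured as a feedback controller on a communication parameter. The parameter chosen below, ghost-zone width, which trades message frequency for message size, is an assumption for illustration, not necessarily the parameter tuned in these runs.

# Hedged sketch of run-time parameter adaptation; the tuned parameter and
# thresholds are assumptions, not taken from the Cactus/MPICH-G2 runs.
def tune_ghost_width(width, latency_ms):
    """Wider ghost zones mean fewer, larger messages: useful when
    wide-area latency dominates; shrink again on fast local links."""
    if latency_ms > 10.0 and width < 8:
        return width + 1      # latency-bound: communicate less often
    if latency_ms < 1.0 and width > 1:
        return width - 1      # LAN-like latency: smaller halos suffice
    return width

width = 1
for latency in [25.0, 18.0, 12.0, 0.5]:   # measured link latencies (ms)
    width = tune_ghost_width(width, latency)
    print(f"latency {latency:5.1f} ms -> ghost width {width}")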


IEEE International Conference on High Performance Computing, Data and Analytics | 2003

Enabling Applications on the Grid: A GridLab Overview

Gabrielle Allen; Tom Goodale; Thomas Radke; Michael Russell; Edward Seidel; Kelly Davis; Konstantinos Dolkas; Nikolaos D. Doulamis; Thilo Kielmann; Andre Merzky; Jarek Nabrzyski; Juliusz Pukacki; John Shalf; Ian J. Taylor

Grid technology is rapidly emerging. Still, there is an evident shortage of real Grid users, mostly due to the lack of a “critical mass” of widely deployed and reliable higher-level Grid services, tailored to application needs. The GridLab project aims to provide fundamentally new capabilities for applications to exploit the power of Grid computing, thus bridging the gap between application needs and existing Grid middleware. We present an overview of GridLab, a large-scale, EU-funded Grid project spanning over a dozen groups in Europe and the US. We first outline our vision of Grid-empowered applications and then discuss GridLab’s general architecture and its Grid Application Toolkit (GAT). We illustrate how applications can be Grid-enabled with the GAT and discuss GridLab’s scheduler as an example of GAT services.
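
The GAT idea, a single application-facing interface with middleware-specific bindings resolved at run time, follows the adaptor pattern. The sketch below illustrates that pattern only; the class and method names are invented and do not reproduce the real Grid Application Toolkit API.

# Adaptor-pattern sketch of a GAT-like abstraction (names invented).
class FileAdaptor:
    def copy(self, src, dst):
        raise NotImplementedError

class LocalCopy(FileAdaptor):
    def copy(self, src, dst):
        print(f"local copy: {src} -> {dst}")     # trivial fallback

class GridCopy(FileAdaptor):
    def copy(self, src, dst):
        print(f"grid transfer: {src} -> {dst}")  # stand-in for middleware

def bind_adaptor(grid_service_available):
    # The toolkit, not the application, selects the concrete adaptor.
    return GridCopy() if grid_service_available else LocalCopy()

adaptor = bind_adaptor(grid_service_available=False)
adaptor.copy("result.h5", "archive:/runs/result.h5")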


High Performance Distributed Computing | 2000

The Cactus Code: A Problem Solving Environment for the Grid

Gabrielle Allen; Werner Benger; Tom Goodale; Hans-Christian Hege; Gerd Lanfermann; Andre Merzky; Thomas Radke; Edward Seidel; John Shalf

Cactus is an open source problem solving environment designed for scientists and engineers. Its modular structure facilitates parallel computation across different architectures and collaborative code development between different groups. The Cactus Code originated in the academic research community, where it has been developed and used over many years by a large international collaboration of physicists and computational scientists. We discuss how the intensive computing requirements of physics applications now using the Cactus Code encourage the use of distributed computing and metacomputing, describe the development and experiments which have already been performed with Cactus, and detail how its design makes it an ideal application test-bed for Grid computing.


Physical Review D | 2003

Gauge conditions for long-term numerical black hole evolutions without excision

Miguel Alcubierre; Bernd Brügmann; Denis Pollney; Edward Seidel; Ryoji Takahashi

We extend previous work on 3D black hole excision to the case of distorted black holes, with a variety of dynamic gauge conditions that are able to respond naturally to the spacetime dynamics. We show that the combination of excision and gauge conditions we use is able to drive highly distorted, rotating black holes to an almost static state at late times, with well-behaved metric functions, without the need for any special initial conditions or analytically prescribed gauge functions. Further, we show for the first time that one can extract accurate waveforms from these simulations, with the full machinery of excision or no excision, and dynamic gauge conditions. The evolutions can be carried out for long times, far exceeding the longevity and accuracy of even better-resolved 2D codes. While traditional 2D codes show errors in quantities such as the apparent horizon mass of over 100% by t ≈ 100M, and crash by t ≈ 150M, with our new techniques the same systems can be evolved for many hundreds of M in full 3D with errors of only a few percent.
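
For orientation, dynamic gauge conditions of the kind used in this line of work are commonly of the "1+log" slicing and Gamma-driver shift type. A representative form, taken from the standard numerical-relativity literature rather than quoted from this paper, is

\[
\partial_t \alpha = -2\alpha K, \qquad
\partial_t \beta^i = \tfrac{3}{4}\, B^i, \qquad
\partial_t B^i = \partial_t \tilde{\Gamma}^i - \eta\, B^i,
\]

where \alpha is the lapse, K the trace of the extrinsic curvature, \beta^i the shift, \tilde{\Gamma}^i the conformal connection functions, and \eta a damping parameter.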


Physical Review D | 2005

Three-dimensional relativistic simulations of rotating neutron-star collapse to a Kerr black hole

Luca Baiotti; Ian Hawke; Pedro J. Montero; Frank Löffler; Luciano Rezzolla; Nikolaos Stergioulas; José A. Font; Edward Seidel

We present a new three-dimensional fully general-relativistic hydrodynamics code using high-resolution shock-capturing techniques and a conformal traceless formulation of the Einstein equations. Besides presenting a thorough set of tests which the code passes with very high accuracy, we discuss its application to the study of the gravitational collapse of uniformly rotating neutron stars to Kerr black holes. The initial stellar models are relativistic polytropes which are either secularly or dynamically unstable, with angular velocities ranging from slow rotation to the mass-shedding limit. We investigate the gravitational collapse by carefully studying not only the dynamics of the matter, but also that of the trapped surfaces, i.e., of both the apparent and event horizons formed during the collapse. The use of these surfaces, together with the dynamical horizon framework, allows for a precise measurement of the black-hole mass and spin. The ability to successfully perform these simulations for sufficiently long times relies on excising a region of the computational domain which includes the singularity and is within the apparent horizon. The dynamics of the collapsing matter is strongly influenced by the initial amount of angular momentum in the progenitor star and, for initial models with sufficiently high angular velocities, the collapse can lead to the formation of an unstable disc in differential rotation. All of the simulations performed with uniformly rotating initial data and a polytropic or ideal-fluid equation of state show no evidence of shocks or of the presence of matter on stable orbits outside the black hole.
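
The "conformal traceless formulation" refers to BSSN-type variables. For reference, the standard decomposition (general background, not specific to this paper) reads

\[
\phi = \tfrac{1}{12}\ln\det\gamma_{ij}, \qquad
\tilde{\gamma}_{ij} = e^{-4\phi}\gamma_{ij}, \qquad
K = \gamma^{ij}K_{ij}, \qquad
\tilde{A}_{ij} = e^{-4\phi}\!\left(K_{ij} - \tfrac{1}{3}\gamma_{ij}K\right),
\]

so that the evolved conformal metric \tilde{\gamma}_{ij} has unit determinant and \tilde{A}_{ij} is trace free.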


Physical Review Letters | 1995

New formalism for numerical relativity

Carles Bona; Joan Masso; Edward Seidel; J. Stela

We present a new formulation of the Einstein equations that casts them in an explicitly first order, flux-conservative, hyperbolic form. We show that this can now be done for a wide class of time slicing conditions, including maximal slicing, making it potentially very useful for numerical relativity. This development permits, for the first time, the application to the Einstein equations of advanced numerical methods developed to solve the fluid dynamics equations, without overly restricting the time slicing. The full set of characteristic fields and speeds is explicitly given.
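
Schematically, and in standard notation rather than quoted verbatim from the paper, a first order flux-conservative hyperbolic system and the Bona-Massó slicing family it admits take the form

\[
\partial_t \mathbf{u} + \partial_k \mathbf{F}^k(\mathbf{u}) = \mathbf{S}(\mathbf{u}),
\qquad
\left(\partial_t - \beta^i\partial_i\right)\alpha = -\alpha^2 f(\alpha)\, K,
\]

where \mathbf{u} collects the evolved fields and f(\alpha) > 0 selects the slicing: f = 1 gives harmonic slicing and f = 2/\alpha the "1+log" slicing.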


High Performance Distributed Computing | 2001

The Astrophysics Simulation Collaboratory: a science portal enabling community software development

Michael Russell; Gabrielle Allen; Gregory Daues; Ian T. Foster; Edward Seidel; J. Novotny; John Shalf; G. von Laszewski

Grid Portals, based on standard web technologies, are emerging as important and useful user interfaces to computational and data Grids. Grid Portals enable Virtual Organizations, composed of distributed researchers, to collaborate and access resources more efficiently and seamlessly. The Astrophysics Simulation Collaboratory (ASC) Grid Portal provides a framework that enables researchers in the field of numerical relativity to study astrophysical phenomena by making use of the Cactus computational toolkit. We examine user requirements and describe the design and implementation of the ASC Grid Portal.

Collaboration


Dive into Edward Seidel's collaborations.

Top Co-Authors

Peter Anninos (Lawrence Livermore National Laboratory)

Wai Mo Suen (Washington University in St. Louis)

John Shalf (Lawrence Berkeley National Laboratory)

Andre Merzky (Louisiana State University)

Miguel Alcubierre (National Autonomous University of Mexico)