Publication
Featured research published by Jean-Pierre Prost.
conference on high performance computing (supercomputing) | 1993
Dror G. Feitelson; Peter F. Corbett; Jean-Pierre Prost; Sandra Johnson Baylor
The Vesta parallel file system is intended to solve the I/O problems of massively parallel multicomputers executing numerically intensive scientific applications. It provides parallel access from the applications to files distributed across multiple storage nodes in the multicomputer, thereby exposing an opportunity for high-bandwidth data transfer across the multicomputer's low-latency network. The Vesta interface provides a user-defined parallel view of file data, which gives users some control over the layout of data. This is useful for tailoring data layout to match common access patterns. The interface also allows user-defined partitioning and repartitioning of files without moving data among storage nodes. Libraries with higher-level interfaces that hide the layout details, while exploiting the power of parallel access, may be implemented above the basic interface. It is shown how collective I/O operations can be implemented, and six parallel access modes to Vesta files are defined. Each mode has unique characteristics in terms of how the processes share the file and how their accesses are interleaved. The combination of user-defined file partitioning and the six access modes gives users very versatile parallel file access.
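The partitioning idea in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the actual Vesta API: records are striped round-robin across storage cells, and each process opens an interleaved "view" of the file that selects a disjoint subset of records without relocating any data.

```python
# Illustrative sketch of Vesta-style logical partitioning (function names
# are invented for this example, not taken from Vesta itself).

def cell_of_record(record, num_cells):
    """Round-robin striping: which storage cell holds this record."""
    return record % num_cells

def subfile_view(process, num_processes, total_records):
    """Records visible to one process under an interleaved partition."""
    return list(range(process, total_records, num_processes))

# Two processes partition an 8-record file striped over 4 cells.
view_a = subfile_view(0, 2, 8)   # records 0, 2, 4, 6
view_b = subfile_view(1, 2, 8)   # records 1, 3, 5, 7
assert set(view_a) & set(view_b) == set()           # views are disjoint
assert sorted(view_a + view_b) == list(range(8))    # together they cover the file
assert cell_of_record(6, 4) == 2                    # record 6 lives on cell 2
```

Because a view is defined purely by indices, repartitioning among processes only changes the mapping functions, never the placement of data on storage nodes, which is the property the abstract emphasizes.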
conference on high performance computing (supercomputing) | 2001
Jean-Pierre Prost; Richard R. Treumann; Richard Hedges; Bin Jia; Alice Koniges
MPI-IO/GPFS is an optimized prototype implementation of the I/O chapter of the Message Passing Interface (MPI) 2 standard. It uses the IBM General Parallel File System (GPFS) Release 3 as the underlying file system. This paper describes optimization features of the prototype that take advantage of new GPFS programming interfaces. It also details how collective data access operations have been optimized by minimizing the number of messages exchanged in sparse accesses and by increasing the overlap of communication with file access. Experimental results show a performance gain. A study of the impact of varying the number of tasks running on the same node is also presented.
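One of the optimizations named above, minimizing the number of messages exchanged in sparse accesses, amounts to coalescing many small (offset, length) requests into a few contiguous ranges before they are shipped between tasks and the file system. A minimal sketch of that coalescing step, with invented function names and no claim of matching the prototype's actual algorithm:

```python
def coalesce(requests, gap_threshold=0):
    """Merge sorted (offset, length) requests whose inter-request gap is
    at most gap_threshold, so one large access (and one message)
    replaces many small ones."""
    merged = []
    for off, length in sorted(requests):
        if merged and off - (merged[-1][0] + merged[-1][1]) <= gap_threshold:
            prev_off, prev_len = merged[-1]
            # Extend the previous range to cover this request.
            merged[-1] = (prev_off, max(prev_len, off + length - prev_off))
        else:
            merged.append((off, length))
    return merged

# Five sparse accesses collapse into two contiguous ranges.
reqs = [(0, 4), (4, 4), (8, 4), (100, 8), (108, 8)]
assert coalesce(reqs) == [(0, 12), (100, 16)]
```

With a nonzero `gap_threshold`, nearby but non-adjacent requests can also be merged, trading some wasted transfer for fewer round trips.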
Ibm Systems Journal | 1995
P. F. Corbett; D. G. Feitelson; Jean-Pierre Prost; George S. Almasi; Sandra Johnson Baylor; A. S. Bolmarcich; Y. Hsu; Julian Satran; Marc Snir; R. Colao; B. D. Herr; J. Kavaky; T. R. Morgan; A. Zlotek
Parallel computer architectures require innovative software solutions to utilize their capabilities. This statement is true for system software no less than for application programs. File system development for the IBM SP product line of computers started with the Vesta research project, which introduced the ideas of parallel access to partitioned files. This technology was then integrated with a conventional Advanced Interactive Executive™ (AIX™) environment to create the IBM AIX Parallel I/O File System product. We describe the design and implementation of Vesta, including user interfaces and enhancements to the control environment needed to run the system. Changes to the basic design that were made as part of the AIX Parallel I/O File System are identified and justified.
european conference on parallel processing | 2000
Jean-Pierre Prost; Richard R. Treumann; Richard Hedges; Alice Koniges; Alison B. White
MPI-IO/GPFS is a prototype implementation of the I/O chapter of the Message Passing Interface (MPI) 2 standard. It uses the IBM General Parallel File System (GPFS) as the underlying file system. This paper describes the features of this prototype that support its high performance. The use of hints allows tailoring the use of the file system to the application's needs.
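Hints of the kind mentioned above are passed to the MPI-IO layer as string key/value pairs in an MPI_Info object at file-open time. The sketch below only illustrates the mechanism; the helper is invented, and the hint names shown ("IBM_largeblock_io", "cb_buffer_size") are examples of such keys, not a verified list for this prototype.

```python
# Hypothetical helper that selects MPI-IO hint strings from a coarse
# description of the application's access pattern. In real MPI code the
# resulting pairs would be set on an MPI_Info object before MPI_File_open.

def pick_hints(large_contiguous_blocks, collective_buffer_bytes):
    hints = {}
    if large_contiguous_blocks:
        # Advertise that tasks issue large, block-aligned accesses.
        hints["IBM_largeblock_io"] = "true"
    # Size of the intermediate buffer used for collective buffering.
    hints["cb_buffer_size"] = str(collective_buffer_bytes)
    return hints

hints = pick_hints(True, 16 * 1024 * 1024)
assert hints["IBM_largeblock_io"] == "true"
assert hints["cb_buffer_size"] == "16777216"
```

The point of the hint mechanism is exactly what the abstract states: the same MPI-IO program can be tuned to the file system without changing its I/O calls.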
international parallel processing symposium | 1995
Dror G. Feitelson; Peter F. Corbett; Jean-Pierre Prost
Vesta is an experimental parallel file system implemented on the IBM SP1. Its main features are support for parallel access from multiple application processes to a file, and the ability to partition and re-partition the file data among these processes. This paper reports on a set of experiments designed to evaluate Vesta's performance, including basic single-node performance and performance using parallel access with different file partitioning schemes. The results show that bandwidth scales with the number of I/O nodes accessed, and that orthogonal partitioning schemes achieve essentially the same performance. In many cases performance equals the disk hardware limit; this is largely attributed to prefetching and write-behind in the I/O nodes.
Ibm Systems Journal | 2004
Dikran S. Meliksetian; Jean-Pierre Prost; Amarjit S. Bahl; Irwin Boutboul; David P. Currier; Sebastien Fibra; Jean-Yves Girard; Kevin M. Kassab; Jean-Luc Lepesant; Colm Malone; Paul Manesco
This paper presents the design and implementation of intraGrid, an experimental grid based on the Globus Toolkit™ and deployed on the IBM intranet. The architecture and the main components of intraGrid are described. Then, the major technical challenges and their solutions are reviewed, including software packaging and distribution, the interface for administrative tasks, and the design and implementation of the three major services: information services, management services, and job submission services. The paper also describes the extensions and modifications to intraGrid that were required to create the ISD grid, a grid that is used for joint projects with customers and thus requires access by external users. The paper reviews the use of intraGrid by various teams of IBM researchers to date and outlines the plans for future applications. The work in progress to migrate intraGrid to an OGSA-based (Open Grid Services Architecture) grid is also described.
software product lines | 1994
Hubertus Franke; Peter H. Hochschild; Pratap Pattnaik; Jean-Pierre Prost; Marc Snir
A complete prototype implementation of MPI on the IBM Scalable POWERparallel systems SP1 and SP2 is discussed. This implementation achieves essentially the same performance as the native EUI library, although MPI is much larger. The paper describes the implementation of EUI on the SP1/SP2, the modifications required to implement MPI, initial performance measurements, and directions for future work.
european conference on parallel processing | 2001
Nicholas K. Allsopp; John F. Hague; Jean-Pierre Prost
The Integrated Forecast System (IFS) code is a parallel MPI application running on multiple tasks; during execution, a specified number of these tasks write output to a single global file at the end of each of several output time intervals, so output may be written many times during a given run. With an appropriate choice of parallel writing routine, the overhead of writing to disk can be effectively hidden from the computation. We show how this is possible through careful use of MPI-IO routines on top of the IBM General Parallel File System (GPFS).
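The overlap described above can be sketched without MPI at all: let the write of one interval's output proceed in the background while the next interval's computation runs. The sketch below uses plain Python threads as a stand-in; in MPI-IO the same effect is obtained with split collective calls such as MPI_File_write_all_begin / MPI_File_write_all_end.

```python
# Minimal sketch (threads, not MPI-IO) of hiding output behind computation.
import threading

log = []

def write_output(step, data):
    log.append((step, len(data)))   # stand-in for writing to the global file

results = []
for step in range(3):
    data = [step] * 1000            # stand-in for one interval's computation
    writer = threading.Thread(target=write_output, args=(step, data))
    writer.start()                  # output proceeds in the background...
    results.append(sum(data))       # ...while computation continues
    writer.join()                   # finish the write before reusing the buffer

assert results == [0, 1000, 2000]
assert sorted(log) == [(0, 1000), (1, 1000), (2, 1000)]
```

The join before buffer reuse mirrors the MPI-IO rule that a buffer handed to a split collective must not be touched until the matching end call completes.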
parallel computing | 2005
Jean-Pierre Prost; L. Berman; R. Chang; Murthy V. Devarakonda; M. Haynos; Wen-Syan Li; Y. Li; I. Narang; J. Unger; Dinesh C. Verma
Grid computing, in the commercial space, builds upon a set of management disciplines that aim at mapping available resource capabilities to application workloads, according to the requirements these workloads impose and the business goals they must fulfill. This paper presents innovative technologies, developed at IBM Research, that address key issues found in commercial grid environments. These technologies fall into four main areas: workload virtualization, information virtualization, provisioning and orchestration, and application development.
annual simulation symposium | 1993
Jean-Pierre Prost; Shlomo Kipnis
Simulation of program traces is widely used in studying the impact of a system design on the behavior of application programs. In this paper, we present a multilevel approach for simulating program traces. This approach provides the flexibility of selecting a desirable balance between the accuracy and the cost of simulation. We describe a case study of applying this approach to simulating traces of distributed-memory programs. The case study involves defining a hierarchical model of a parallel computer, defining a corresponding hierarchy of simulation levels, and defining the interface between successive simulation levels. The described case study resulted in a tool aimed at assisting designers of algorithms and applications for the IBM Vulcan distributed-memory parallel computer.
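The multilevel idea above can be sketched as two simulation levels over the same trace: a cheap coarse level that charges a fixed cost per event, and a finer level that refines communication events with a size-dependent latency model. The levels and cost model here are illustrative, not those of the Vulcan tool.

```python
# Hypothetical two-level trace simulator: each level consumes the same
# event trace but models it at a different accuracy/cost trade-off.

trace = [("compute", 100), ("send", 4096), ("compute", 50), ("send", 1024)]

def simulate_coarse(trace, event_cost=10):
    """Level 1: every event costs the same; very fast, very inaccurate."""
    return len(trace) * event_cost

def simulate_fine(trace, alpha=5, beta=0.01):
    """Level 2: compute events use traced times; sends use a
    latency + bandwidth model (alpha + beta * bytes)."""
    total = 0.0
    for kind, value in trace:
        if kind == "compute":
            total += value
        elif kind == "send":
            total += alpha + beta * value
    return total

assert simulate_coarse(trace) == 40
assert abs(simulate_fine(trace) - 211.2) < 1e-6
```

The interface between levels is simply the shared trace format: a designer can run the coarse level over a whole program, then rerun only suspect phases at the finer level, which is the balance of accuracy against cost the abstract describes.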