
Publications


Featured research published by Paul C. Messina.


IEEE International Conference on High Performance Computing, Data, and Analytics | 1989

The Perfect Club Benchmarks: Effective Performance Evaluation of Supercomputers

Michael W. Berry; Da-Ren Chen; Peter F. Koss; David J. Kuck; Sy-Shin Lo; Yingxin Pang; Lynn Pointer; R. Roloff; Ahmed H. Sameh; E. Clementi; Shaoan Chin; David J. Schneider; Geoffrey C. Fox; Paul C. Messina; David Walker; C. Hsiung; Jim Schwarzmeier; K. Lue; Steven A. Orszag; F. Seidl; O. Johnson; R. Goodrum; Joanne L. Martin

This report presents a methodology for measuring the performance of supercomputers. It includes 13 Fortran programs that total over 50,000 lines of source code. They represent applications in several areas of engineering and scientific computing, and in many cases the codes are currently being used by computational research and development groups. We also present the PERFECT Fortran standard, a set of guidelines that allow portability to several types of machines. Furthermore, we present some performance measures and a methodology for recording and sharing results among diverse users on different machines. The results presented in this paper should not be used to compare machines, except in a preliminary sense. Rather, they are presented to show how the methodology has been applied, and to encourage others to join us in this effort. The results should be regarded as the first step toward our objective, which is to develop a publicly accessible database of performance information of this type.
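
As a rough illustration of the kind of measurement such a methodology standardizes, the sketch below times a stand-in kernel and reports a MFLOP/s rate. The kernel and its flop count are hypothetical placeholders, not part of the actual 13-program suite.

```python
# Minimal sketch of whole-kernel performance measurement: wall-clock time
# plus a flop count yields a MFLOP/s rate. Hypothetical kernel, not PERFECT code.
import time

def kernel(n):
    """Stand-in compute kernel: a dense dot product (2*n flops)."""
    s = 0.0
    x = [1.0] * n
    y = [2.0] * n
    for i in range(n):
        s += x[i] * y[i]
    return s

n = 1_000_000
t0 = time.perf_counter()
kernel(n)
elapsed = time.perf_counter() - t0
flops = 2 * n  # one multiply + one add per element
print(f"time = {elapsed:.4f} s, rate = {flops / elapsed / 1e6:.1f} MFLOP/s")
```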


Proceedings Seventh Heterogeneous Computing Workshop (HCW'98) | 1998

Implementing distributed synthetic forces simulations in metacomputing environments

Sharon Brunett; Dan M. Davis; Thomas D. Gottschalk; Paul C. Messina; Carl Kesselman

A distributed, parallel implementation of the widely used Modular Semi-Automated Forces (ModSAF) Distributed Interactive Simulation (DIS) is presented, with scalable parallel processors (SPPs) used to simulate more than 50,000 individual vehicles. The single-SPP code is portable and has been used on a variety of different SPP architectures for simulations with up to 15,000 vehicles. A general metacomputing framework for DIS on multiple SPPs is discussed and results are presented for an initial system using explicit Gateway processes to manage communications among the SPPs. These 50K-vehicle simulations utilized 1,904 processors at six sites across seven time zones, including platforms from three manufacturers. Ongoing activities to both simplify and enhance the metacomputing system using Globus are described.
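
The Gateway idea can be sketched in miniature: each site runs a relay that fans locally generated entity-state updates out to its peers. The sketch below is a single-process, in-memory simplification with invented names (Gateway, publish, drain), not the paper's wide-area implementation.

```python
# Hedged sketch of the Gateway pattern for linking multiple SPP sites.
# Real gateways ran as separate processes over wide-area links; queues
# stand in for those links here.
import queue

class Gateway:
    """Relays entity-state updates between a local simulation and remote sites."""
    def __init__(self, site):
        self.site = site
        self.inbox = queue.Queue()  # updates arriving from remote gateways
        self.peers = []             # remote Gateway instances

    def publish(self, update):
        """Called by local simulators: fan an update out to every remote site."""
        for peer in self.peers:
            peer.inbox.put((self.site, update))

    def drain(self):
        """Deliver remotely produced updates to the local simulation."""
        delivered = []
        while not self.inbox.empty():
            delivered.append(self.inbox.get())
        return delivered

# Two-site example: a vehicle update from site A becomes visible at site B.
a, b = Gateway("A"), Gateway("B")
a.peers, b.peers = [b], [a]
a.publish({"vehicle": 42, "pos": (1.0, 2.0)})
print(b.drain())  # [('A', {'vehicle': 42, 'pos': (1.0, 2.0)})]
```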


Applied Physics Letters | 1998

Multimillion-atom molecular dynamics simulation of atomic level stresses in Si(111)/Si3N4(0001) nanopixels

Martina E. Bachlechner; Andrey Omeltchenko; Aiichiro Nakano; Rajiv K. Kalia; Priya Vashishta; Ingvar Ebbsjö; A. Madhukar; Paul C. Messina

Ten million atom multiresolution molecular-dynamics simulations are performed on parallel computers to determine atomic-level stress distributions in a 54 nm nanopixel on a 0.1 µm silicon substrate. Effects of surfaces, edges, and lattice mismatch at the Si(111)/Si3N4(0001) interface on the stress distributions are investigated. Stresses are found to be highly inhomogeneous in the nanopixel. The top surface of silicon nitride has a compressive stress of +3 GPa and the stress is tensile, –1 GPa, in silicon below the interface.
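
For readers unfamiliar with atomic-level stress, the usual quantity behind such maps is the per-atom virial stress; the sketch below computes its pairwise (potential-energy) term for a Lennard-Jones toy system. The estimator, potential, and volume normalization here are generic textbook assumptions, not the paper's multimillion-atom machinery.

```python
# Hedged sketch: pairwise term of the per-atom virial stress tensor,
# sigma_i = (1/V_i) * (1/2) * sum_j r_ij (x) f_ij, for a Lennard-Jones toy system.
import numpy as np

def pair_virial_stress(pos, eps=1.0, sigma=1.0, atom_volume=1.0):
    """3x3 virial stress tensor per atom: pairwise Lennard-Jones term only."""
    n = len(pos)
    stress = np.zeros((n, 3, 3))
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]          # vector from atom j to atom i
            d = np.linalg.norm(r)
            # LJ force magnitude along r: F = 24*eps*(2*(s/d)**12 - (s/d)**6)/d
            f_mag = 24 * eps * (2 * (sigma / d) ** 12 - (sigma / d) ** 6) / d
            f = f_mag * r / d            # force on atom i due to atom j
            w = 0.5 * np.outer(r, f)     # half the bond virial goes to each atom
            stress[i] += w
            stress[j] += w
    return stress / atom_volume

pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.1, 0.0]])
print(pair_virial_stress(pos)[0])        # stress tensor on atom 0
```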


Concurrency and Computation: Practice and Experience | 1990

Benchmarking advanced architecture computers

Paul C. Messina; Clive F. Baillie; Edward W. Felten; Paul G. Hipes; Roy Williams; Arnold Alagar; Anke Kamrath; Robert H. Leary; Wayne Pfeiffer; Jack M. Rogers; David W. Walker

Recently, a number of advanced architecture machines have become commercially available. These new machines promise better cost/performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the CRAY X-MP, in terms of maximum performance. This paper describes the methodology and results of a pilot study of the performance of a broad range of advanced architecture computers using a number of complete scientific application programs. The computers evaluated include: (1) shared-memory bus architecture machines such as the Alliant FX/8, the Encore Multimax, and the Sequent Balance and Symmetry; (2) shared-memory network-connected machines such as the Butterfly; (3) distributed-memory machines such as the NCUBE, Intel, and Jet Propulsion Laboratory (JPL)/Caltech hypercubes; (4) very long instruction word machines such as the Cydrome Cydra-5; (5) SIMD machines such as the Connection Machine; and (6) ‘traditional’ supercomputers such as the CRAY X-MP, CRAY-2, and SCS-40. Seven application codes from a number of scientific disciplines have been used in the study, although not all the codes were run on every machine. The methodology and guidelines for establishing a standard set of benchmark programs for advanced architecture computers are discussed. The CRAYs offer the best performance on the benchmark suite; the shared-memory multiprocessor machines generally permitted some parallelism, and when coupled with substantial floating-point capabilities (as in the Alliant FX/8 and Sequent Symmetry), provided an order of magnitude less speed than the CRAYs. Likewise, the early generation hypercubes studied here generally ran slower than the CRAYs, but permitted substantial parallelism from each of the application codes.
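
A minimal sketch of the bookkeeping behind such a comparison, assuming made-up timings: given wall-clock times for one code on several machines, derive relative speeds against a reference machine. The figures below are hypothetical placeholders, not the paper's data.

```python
# Hedged sketch: relative-speed table from per-machine wall-clock times.
times = {                      # seconds to run one benchmark code (hypothetical)
    "CRAY X-MP": 10.0,
    "Alliant FX/8": 95.0,
    "NCUBE (64 nodes)": 120.0,
}
ref = "CRAY X-MP"
for machine, t in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{machine:18s} {t:7.1f} s   speed relative to {ref}: {times[ref] / t:.2f}x")
```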


IEEE International Conference on High Performance Computing, Data, and Analytics | 1997

A Distributed Web-Based Metacomputing Environment

Giovanni Aloisio; Massimo Cafaro; Roy Williams; Paul C. Messina

Remote-sensing data about the Earth's environment are being created and stored at an ever-increasing rate. To disseminate these valuable data, they must be delivered in a usable form to those who can interpret them and to those who would learn to interpret them. The aim of this paper is to demonstrate how remote-sensing data can be delivered using a web browser as a front end, with distributed high-performance computing services and replicated data archives. In particular, we show the implementation of supervised, on-the-fly SAR processing: this goal is achieved by a heterogeneous distributed computing environment in which high-performance technologies and web interfaces are effectively utilized to provide animated, supervised, custom-processed data. The architecture is designed to support both high- and low-speed networking, and both supercomputers and workstations, with the result supporting both professional and casual users.
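
The architectural idea, a thin web front end dispatching work to a high-performance back end, can be sketched as follows. The URL scheme and the process_sar stub are hypothetical illustrations, not the paper's actual interface.

```python
# Hedged sketch: a web front end that hands a request to a (stubbed)
# high-performance processing service.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def process_sar(region):
    """Stub for the remote SAR processing service (hypothetical)."""
    return f"processed SAR scene for region {region}".encode()

class Front(BaseHTTPRequestHandler):
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query)
        body = process_sar(q.get("region", ["unknown"])[0])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Try: GET http://localhost:8000/?region=etna
    HTTPServer(("localhost", 8000), Front).serve_forever()
```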


The Journal of Supercomputing | 1996

A quantitative study of parallel scientific applications with explicit communication

Robert Cypher; Alex Ho; Smaragda Konstantinidou; Paul C. Messina

This paper studies the behavior of scientific applications running on distributed memory parallel computers. Our goal is to quantify the floating point, memory, I/O, and communication requirements of highly parallel scientific applications that perform explicit communication. In addition to quantifying these requirements for fixed problem sizes and numbers of processors, we develop analytical models for the effects of changing the problem size and the degree of parallelism for several of the applications. The contribution of our paper is that it provides quantitative data about real parallel scientific applications in a manner that is largely independent of the specific machine on which the application was run. Such data, which are clearly very valuable to an architect who is designing a new parallel computer, were not previously available. For example, the majority of research papers in interconnection networks have used simulated communication loads consisting of fixed-size messages. Our data, which show that using such simulated loads is unrealistic, can be used to generate more realistic communication loads.
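
As an example of the kind of analytical scaling model described, consider a hypothetical 2D grid code on P processors with an N x N problem: per-processor computation scales as N^2/P while halo-exchange communication scales as N/sqrt(P), so the communication-to-computation ratio grows as sqrt(P)/N. This generic stencil model is an assumption for illustration, not one of the paper's applications.

```python
# Hedged sketch: communication-to-computation ratio for a hypothetical
# 2D stencil code as the degree of parallelism P grows.
import math

def comm_to_comp(N, P):
    comp = N * N / P             # interior points updated per processor
    comm = 4 * N / math.sqrt(P)  # halo cells exchanged per processor
    return comm / comp

for P in (16, 64, 256):
    print(f"P={P:4d}  ratio={comm_to_comp(4096, P):.5f}")
```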


Concurrency and Computation: Practice and Experience | 1991

Parallel computing in the 1980s—one person's view

Paul C. Messina

This paper is a survey of activities related to parallel computing that took place primarily during the 1980s. The major areas covered are hardware, software, and performance measurement and characterization. Emphasis is on identifying the major milestones of the decade and on commercial computers, rather than on presenting a comprehensive survey of the field. The material is treated from the user's point of view.


Archive | 1998

Some Perspectives on High-Performance Mathematical Software

Daniela di Serafino; Lucia Maddalena; Paul C. Messina; Almerico Murli

In this paper we trace the state of the art of high-performance mathematical software, that is, mathematical software for high-performance computing environments. Our overview is not meant to be exhaustive; rather, we provide examples of software products and related projects that are representative of the evolution aimed at exploiting the new features of advanced computing environments. We also discuss some issues concerning the design and implementation of mathematical software that are introduced by the complex and highly varied nature of advanced computer architectures. Special attention is given to high-performance software for nonlinear optimization.


Concurrency and Computation: Practice and Experience | 1995

Pipeline optimizations of the prime factor algorithm

Rosa Albrizio; Albino Mazzone; Nicola Veneziani; Paul C. Messina; Giovanni Aloisio; Mario A. Bochicchio

The prime factor algorithm (PFA) is an efficient discrete Fourier transform (DFT) computation algorithm used when the sequence length can be decomposed into mutually prime factors. Following our previous results on PFA decomposition carried out at Caltech on hypercube machines, we present in this paper a pipeline PFA implementation suitable for multiprocessor systems with distributed memory. This implementation achieves high efficiency and speed-up when processing multiple sequences of data. The paper shows how an optimized structure can be obtained when the concurrency between computation and communication is exploited at each node of the pipeline. Experimental results obtained on transputer-based structures and on the Intel Touchstone Delta system are also reported.
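
A compact sketch of the underlying Good-Thomas factorization, assuming nothing about the authors' pipeline code: for N = N1*N2 with gcd(N1, N2) = 1, Chinese-remainder index maps split the length-N DFT into independent N1- and N2-point DFTs with no twiddle factors, which is what makes a stage-by-stage pipeline decomposition possible.

```python
# Hedged sketch of the Good-Thomas prime factor DFT (requires Python >= 3.8
# for pow(a, -1, m)). Verified against numpy's FFT below.
import numpy as np

def pfa_dft(x, N1, N2):
    """Length-(N1*N2) DFT via the prime factor decomposition, gcd(N1, N2) = 1."""
    N = N1 * N2
    n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
    A = x[(N2 * n1 + N1 * n2) % N]                 # CRT input index map
    B = np.fft.fft(np.fft.fft(A, axis=1), axis=0)  # N2-point, then N1-point DFTs
    K1 = N2 * pow(N2, -1, N1)                      # K1 = 1 (mod N1), 0 (mod N2)
    K2 = N1 * pow(N1, -1, N2)                      # K2 = 0 (mod N1), 1 (mod N2)
    X = np.empty(N, dtype=complex)
    X[(K1 * n1 + K2 * n2) % N] = B                 # CRT output index map
    return X

x = np.random.rand(12)
assert np.allclose(pfa_dft(x, 3, 4), np.fft.fft(x))  # N = 12 = 3 * 4
```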


Lecture Notes in Computer Science | 1997

Metacomputing and Data-Intensive Applications

Paul C. Messina

Metacomputing, the concurrent use of multiple network-linked computers for solving application problems, is gaining increasing popularity. Much of the early work in metacomputing focused on harnessing greater processing power than could be found at a single site or on combining heterogeneous computer architectures to exploit the best features of each for a given problem. Another compelling rationale for metacomputing is the acquisition, manipulation, and analysis of data that are stored remotely or acquired by distant instruments. When the data to be accessed and processed are voluminous, we refer to the applications as data intensive.

Collaboration


Dive into Paul C. Messina's collaborations. Top co-authors:

Geoffrey C. Fox, Indiana University Bloomington
Roy Williams, California Institute of Technology
Almerico Murli, University of Naples Federico II
Sharon Brunett, California Institute of Technology
A. Madhukar, University of Southern California
Aiichiro Nakano, University of Southern California
Amir Fijany, Jet Propulsion Laboratory
Dan M. Davis, University of Southern California
David W. Walker, University of South Carolina