Publication


Featured research published by Tomasz Haupt.


Journal of Parallel and Distributed Computing | 1994

Compiling Fortran 90D/HPF for distributed memory MIMD computers

Zeki Bozkus; Alok N. Choudhary; Geoffrey C. Fox; Tomasz Haupt; Sanjay Ranka; Min-You Wu

Distributed memory multiprocessors are increasingly being used to provide high performance for advanced scientific applications. Distributed memory machines offer significant advantages over their shared memory counterparts in terms of cost and scalability, though it is widely accepted that they are difficult to program given the current state of software technology. Currently, distributed memory machines are programmed using a node language and a message passing library. This process is tedious and error prone because the user must carry out data distribution, and the communication required for non-local data accesses, by hand.

This paper describes an advanced compiler that can generate efficient parallel programs when the source programming language naturally represents an application's parallelism. Fortran 90D/HPF, described here, is such a language. In Fortran 90D/HPF, parallelism is represented with parallel constructs such as array operations, where statements, forall statements, and intrinsic functions, and the language provides directives for data distribution. Fortran 90D/HPF thus gives the programmer powerful tools to express a problem with natural data parallelism. To validate this hypothesis, a prototype Fortran 90D/HPF compiler was implemented. The compiler is organized around several major units: language parsing, partitioning of data and computation, communication detection, and code generation. To generate appropriate communication calls, the compiler recognizes the presence of communication patterns in the computations; specifically, this involves a number of tests on the relationships among the subscripts of the arrays in a statement. The compiler includes a specially designed algorithm to detect communication and to generate the appropriate collective communication calls for executing array assignments and forall statements. It also performs several types of communication and computation optimizations to improve the performance of the generated code.

Empirical measurements show that the performance of the code produced by the Fortran 90D/HPF compiler is comparable to that of corresponding hand-written codes on several systems. We hope that this work assists in the widespread adoption of parallel computing technology and leads to a more attractive and powerful software development environment that supports the application parallelism many users need.
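
As a rough, hypothetical illustration of the subscript tests mentioned above (not the compiler's actual implementation; the function name and the shift/gather classification are assumptions for this sketch), the choice of collective communication for a block-distributed assignment can be modeled as follows:

    # Illustrative sketch in Python: classify the communication an owner-computes
    # compiler would insert for an assignment of the form a(i) = f(b(i + offset))
    # when both arrays use an identical BLOCK distribution.
    def classify_communication(lhs_offset: int, rhs_offset: int, block_size: int) -> str:
        shift = rhs_offset - lhs_offset
        if shift == 0:
            return "no communication: referenced data is locally owned"
        if abs(shift) < block_size:
            return f"boundary (overlap/shift) exchange of {abs(shift)} elements"
        return "general gather using a precomputed communication schedule"

    print(classify_communication(0, 1, block_size=25))   # nearest-neighbour stencil
    print(classify_communication(0, 40, block_size=25))  # reference crosses block owners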


Conference on High Performance Computing (Supercomputing) | 1993

Fortran 90D/HPF compiler for distributed memory MIMD computers: Design, implementation, and performance results

Zeki Bozkus; Alok N. Choudhary; Geoffrey C. Fox; Tomasz Haupt; Sanjay Ranka

Fortran 90D/HPF is a data parallel language with special directives that enable users to specify data alignment and distribution. The authors describe the design and implementation of a Fortran 90D/HPF compiler. Techniques for data and computation partitioning, communication detection and generation, and the run-time support for the compiler are discussed. Initial performance results for the compiler are presented. It is believed that the methodology for handling data distribution, computation partitioning, and communication system design, as well as the overall compiler design, can be used by implementors of other HPF compilers.
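
For concreteness, the block partitioning of data and computation described above can be sketched generically; this is an illustrative reconstruction under the owner-computes rule, not the compiler's actual code, and the function name is an assumption:

    # Illustrative sketch in Python: the contiguous index range a processor owns
    # under a BLOCK distribution of an n-element array over p processors.
    def block_bounds(n: int, p: int, rank: int) -> tuple[int, int]:
        base, extra = divmod(n, p)
        lo = rank * base + min(rank, extra)
        hi = lo + base - 1 + (1 if rank < extra else 0)
        return lo, hi          # inclusive 0-based bounds

    # A 100-element array on 4 processors: each owns 25 contiguous elements.
    print([block_bounds(100, 4, r) for r in range(4)])  # [(0, 24), (25, 49), (50, 74), (75, 99)]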


Conference on High Performance Computing (Supercomputing) | 1998

WebFlow - High-Level Programming Environment and Visual Authoring Toolkit for High Performance Distributed Computing

Erol Akarsu; Geoffrey C. Fox; Wojtek Furmanski; Tomasz Haupt

We developed a platform-independent, three-tier system called WebFlow. The visual authoring tools implemented in the front end, integrated with a middle-tier network of servers based on industry standards and following the distributed-object paradigm, facilitate seamless integration of commodity software components. We add high performance to these commodity systems by using the GLOBUS metacomputing toolkit as the back end. We have explained these ideas in general terms before; here, for the first time, we describe a fully operational example that is expected to be deployed in an NCSA Alliance Grand Challenge.


Proceedings of the ACM 1999 conference on Java Grande | 1999

The gateway system: uniform Web based access to remote resources

Geoffrey C. Fox; Tomasz Haupt; Erol Akarsu; Alexey Kalinichenko; Kang-Seok Kim; Praveen Sheethalnath; Choon-Han Youn

Exploiting our experience developing the WebFlow system, we designed the Gateway system to provide seamless and secure access to computational resources at ASC MSRC. The Gateway follows our commodity-components strategy and is implemented as a modern three-tier system. Tier 1 is a high-level front end for visual programming, steering, run-time data analysis, and visualization, built on top of Web and object-oriented commodity standards. Distributed-object-based, scalable, and reusable Web server and object broker middleware forms Tier 2. Back-end services comprise Tier 3; in particular, access to high-performance computational resources is provided by implementing the emerging metacomputing API standard.
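
The tier structure described above can be sketched generically. The classes and method names below are hypothetical illustrations of the three-tier pattern, not the Gateway system's actual interfaces, and the back-end call is only a stub standing in for a metacomputing toolkit:

    # Hypothetical three-tier request flow in Python (not Gateway/WebFlow code).
    from dataclasses import dataclass

    @dataclass
    class JobRequest:                     # assembled by the Tier-1 visual front end
        application: str
        target_resource: str

    class BackendResource:                # Tier 3: a compute resource behind a metacomputing API
        def __init__(self, name: str):
            self.name = name
        def submit(self, job: JobRequest) -> str:
            # A real back end would call a metacomputing toolkit here; this stub
            # merely acknowledges the submission.
            return f"job '{job.application}' queued on {self.name}"

    class MiddleTierServer:               # Tier 2: commodity object server brokering requests
        def __init__(self):
            self.resources = {}
        def register(self, resource: BackendResource) -> None:
            self.resources[resource.name] = resource
        def dispatch(self, job: JobRequest) -> str:
            return self.resources[job.target_resource].submit(job)

    server = MiddleTierServer()
    server.register(BackendResource("hpc-cluster"))
    print(server.dispatch(JobRequest("cfd-solver", "hpc-cluster")))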


Physical Review Letters | 1998

Boosted three-dimensional black-hole evolutions with singularity excision

Gregory B. Cook; M. F. Huq; Scott Klasky; Mark A. Scheel; A. M. Abrahams; Arlen Anderson; Peter Anninos; Thomas W. Baumgarte; Nigel T. Bishop; Steven Brandt; James C. Browne; K. Camarda; Matthew W. Choptuik; R. R. Correll; Charles R. Evans; L. S. Finn; Geoffrey C. Fox; R. Gomez; Tomasz Haupt; L. E. Kidder; Pablo Laguna; W. Landry; Luis Lehner; J. Lenaghan; R. L. Marsa; Joan Masso; Richard A. Matzner; S. Mitra; P. Papadopoulos; Manish Parashar

Binary black-hole interactions provide potentially the strongest source of gravitational radiation for detectors currently under development. We present some results from the Binary Black Hole Grand Challenge Alliance three-dimensional Cauchy evolution module. These constitute essential steps towards modeling such interactions and predicting gravitational radiation waveforms. We report on single black-hole evolutions and the first successful demonstration of a black hole moving freely through a three-dimensional computational grid via a Cauchy evolution: a hole moving through roughly 6M at 0.1c over a total evolution of duration near 60M.
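
The quoted figures are mutually consistent: in geometrized units (G = c = 1), with distances and times measured in units of the hole's mass M,

    d \approx v \, t = 0.1 \times 60M = 6M.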


IEEE International Conference on High Performance Computing, Data, and Analytics | 1999

WebFlow: a framework for Web based metacomputing

Tomasz Haupt; Erol Akarsu; Geoffrey C. Fox

We developed a platform-independent, three-tier system called WebFlow. The visual authoring tools implemented in the front end, integrated with a middle-tier network of servers based on CORBA and following the distributed-object paradigm, facilitate seamless integration of commodity software components. We add high performance to these commodity systems by using the GLOBUS metacomputing toolkit as the back end.


Physical Review Letters | 1998

Gravitational wave extraction and outer boundary conditions by perturbative matching

Andrew Abrahams; Luciano Rezzolla; M. E. Rupright; Arlen Anderson; Peter Anninos; Thomas W. Baumgarte; Nigel T. Bishop; Steven Brandt; James C. Browne; K. Camarda; Matthew W. Choptuik; Gregory B. Cook; R. R. Correll; Charles R. Evans; L. S. Finn; Geoffrey C. Fox; R. Gomez; Tomasz Haupt; M. F. Huq; L. E. Kidder; Scott Klasky; Pablo Laguna; W. Landry; Luis Lehner; J. Lenaghan; R. L. Marsa; Joan Masso; Richard A. Matzner; S. Mitra; P. Papadopoulos

We present a method for extracting gravitational radiation from a three-dimensional numerical relativity simulation and, using the extracted data, for providing outer boundary conditions. The method treats dynamical gravitational variables as nonspherical perturbations of Schwarzschild geometry. We discuss a code that implements this method and present results of tests performed with a three-dimensional numerical relativity code.
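
Schematically, such a perturbative treatment rests on splitting the metric into the Schwarzschild background plus a small nonspherical perturbation expanded in tensor spherical harmonics; the standard form of that split is stated here for orientation only and is not notation taken from the paper:

    g_{\mu\nu} = g^{\mathrm{Schw}}_{\mu\nu} + h_{\mu\nu},
    \qquad
    h_{\mu\nu} = \sum_{\ell, m} h^{(\ell m)}_{\mu\nu}(t, r)\,\big[\text{tensor harmonics}\big]_{\ell m}(\theta, \varphi),
    \qquad
    |h_{\mu\nu}| \ll |g^{\mathrm{Schw}}_{\mu\nu}|.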


Conference on High Performance Computing (Supercomputing) | 1996

Particle-in-Cell Simulation Codes in High Performance Fortran

Erol Akarsu; Kivanc Dincer; Tomasz Haupt; Geoffrey C. Fox

Particle-in-Cell (PIC) plasma simulation codes model the interaction of charged particles with the surrounding electrostatic and magnetic fields. PIC's computational requirements place it among the grand-challenge problems facing the high-performance computing community. In this paper we present the implementation of 1-D and 2-D electrostatic PIC codes in High Performance Fortran (HPF) on an IBM SP-2. We used one of the most successful commercial HPF compilers currently available and augmented the compiler's missing HPF functions with extrinsic routines where necessary. We obtained near-linear speed-up in execution time and performance comparable to that of native message-passing implementations on the same platform.
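
To make the algorithm concrete, a minimal 1-D electrostatic PIC cycle (charge deposition, field solve, particle push) can be sketched as follows; this is a generic NumPy illustration with assumed normalizations, not the authors' HPF implementation:

    # Minimal 1-D electrostatic PIC sketch (periodic domain, cloud-in-cell weighting,
    # FFT Poisson solve, leapfrog push). Normalized units; not the paper's HPF code.
    import numpy as np

    ng, npart, L, dt, steps = 64, 10000, 2 * np.pi, 0.1, 50
    dx = L / ng
    q, qm = -L / npart, -1.0              # particle charge and charge-to-mass ratio

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, L, npart)        # positions
    v = 0.1 * rng.standard_normal(npart)  # velocities

    for _ in range(steps):
        # 1. Deposit particle charge onto the grid with linear (CIC) weights.
        g = x / dx
        i0 = np.floor(g).astype(int) % ng
        w = g - np.floor(g)
        rho = np.bincount(i0, weights=q * (1 - w), minlength=ng) \
            + np.bincount((i0 + 1) % ng, weights=q * w, minlength=ng)
        rho = rho / dx + 1.0              # add a neutralizing ion background

        # 2. Field solve: Gauss's law dE/dx = rho, solved in Fourier space.
        k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
        rho_k = np.fft.fft(rho)
        E_k = np.zeros_like(rho_k)
        E_k[1:] = -1j * rho_k[1:] / k[1:]
        E = np.real(np.fft.ifft(E_k))

        # 3. Gather the field at particle positions and push (leapfrog).
        E_p = (1 - w) * E[i0] + w * E[(i0 + 1) % ng]
        v += qm * E_p * dt
        x = (x + v * dt) % L

    print("field energy:", 0.5 * np.sum(E**2) * dx)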


Future Generation Computer Systems | 1999

Web based metacomputing

Tomasz Haupt; Erol Akarsu; Geoffrey C. Fox; Wojtek Furmanski

Programming tools that are simultaneously sustainable, highly functional, robust, and easy to use have been hard to come by in the HPCC arena. This is partially due to the difficulty of developing sophisticated customized systems for what is a relatively small part of the worldwide computing enterprise. We have therefore developed a new strategy, termed High Performance commodity computing (HPcc) [G. Fox, W. Furmanski, HPcc as high performance commodity computing, in: I. Foster, C. Kesselman (Eds.), Building National Grid, http://www.npac.syr.edu/users/gcf/HPcc/HPcc.html], which builds HPCC programming tools on top of the remarkable new software infrastructure being built for the commercial Web and distributed-object areas. We add high performance to commodity systems using a multi-tier architecture with the Globus metacomputing toolkit as the back end of a middle tier of commodity Web and object servers. We have demonstrated a fully functional prototype of WebFlow during the Alliance’98 meeting.


Conference on High Performance Computing (Supercomputing) | 1993

An interactive remote visualization environment for an electromagnetic scattering simulation on a high performance computing system

Gang Cheng; Yinghua Lu; Geoffrey C. Fox; Kim Mills; Tomasz Haupt

An integrated interactive visualization environment was created for an electromagnetic scattering (EMS) simulation, coupling a graphical user interface (GUI) for run-time input of simulation parameters and 3-D rendering output on a graphical workstation with computational modules running on a parallel supercomputer and two workstations. The Application Visualization System (AVS) was used as the integrating software to facilitate both networking and scientific data visualization. Using the EMS simulation as a case study, the authors explore the AVS dataflow methodology as a natural way to integrate data visualization, parallel systems, and heterogeneous computing. Major issues in integrating this remote visualization system are discussed, including task decomposition, system integration, concurrency control, and a high-level distributed programming model for the data visualization environment.

Collaboration


Dive into Tomasz Haupt's collaborations.

Top Co-Authors


Geoffrey C. Fox

Indiana University Bloomington


Gregory Henley

Mississippi State University

Igor Zhuk

Mississippi State University


Michael S. Mazzola

Mississippi State University
