Publication


Featured research published by Robert T. Hood.


Proceedings of the IEEE | 1993

The ParaScope parallel programming environment

Keith D. Cooper; Mary W. Hall; Robert T. Hood; Ken Kennedy; Kathryn S. McKinley; John M. Mellor-Crummey; Linda Torczon; Scott K. Warren

The ParaScope parallel programming environment, developed to support scientific programming of shared-memory multiprocessors, is described. It includes a collection of tools that use global program analysis to help users develop and debug parallel programs. The focus is on ParaScope's compilation system. The compilation system extends the traditional single-procedure compiler by providing a mechanism for managing the compilation of complete programs. The ParaScope editor brings both compiler analysis and user expertise to bear on program parallelization. The debugging system detects and reports timing-dependent errors, called data races, in executions of parallel programs. A project aimed at extending ParaScope to support programming in FORTRAN D, a machine-independent parallel programming language for use with both distributed-memory and shared-memory parallel computers, is described.


Conference on High Performance Computing (Supercomputing) | 1990

Parallel program debugging with on-the-fly anomaly detection

Robert T. Hood; Ken Kennedy; John M. Mellor-Crummey

An approach for parallel debugging that coordinates static analysis with efficient on-the-fly access anomaly detection is described. On-the-fly instrumentation mechanisms are being developed for the structured synchronization primitives of Parallel Computing Forum (PCF) Fortran, the emerging standard for parallel Fortran. The proposed instrumentation techniques guarantee that one can isolate schedule-dependent behavior in a schedule-independent fashion. The result is that a single instrumented execution will either report sources of schedule-dependent behavior, or it will validate that all executions of the program on the same data compute the same result. When an instrumented execution is being used solely to find sources of schedule-dependent behavior, its cost can be reduced by slicing out computations that do not contribute to race conditions. Ongoing efforts to incorporate the proposed debugging approach in the ParaScope environment are described.
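The idea behind on-the-fly access anomaly detection can be illustrated with a toy monitor that records reads and writes to shared variables within one parallel region and flags conflicting accesses by different threads. This is a minimal sketch, not the paper's PCF Fortran instrumentation; the class and method names are invented for illustration.

```python
from collections import defaultdict

class AccessMonitor:
    """Toy on-the-fly anomaly detector for one parallel region.

    Tracks which logical threads have read or written each shared
    variable; any read/write or write/write pair involving two
    different threads in the same region is reported as a race."""

    def __init__(self):
        self.reads = defaultdict(set)   # var -> thread ids that read it
        self.writes = defaultdict(set)  # var -> thread ids that wrote it
        self.races = []

    def read(self, thread, var):
        # A read races with a prior write by any other thread.
        for w in self.writes[var]:
            if w != thread:
                self.races.append(("read-write", var, thread, w))
        self.reads[var].add(thread)

    def write(self, thread, var):
        # A write races with any prior access by another thread.
        for w in self.writes[var]:
            if w != thread:
                self.races.append(("write-write", var, thread, w))
        for r in self.reads[var]:
            if r != thread:
                self.races.append(("write-read", var, thread, r))
        self.writes[var].add(thread)

monitor = AccessMonitor()
monitor.write(0, "x")   # thread 0 writes x
monitor.read(1, "x")    # thread 1 reads x in the same region -> race
print(monitor.races)    # [('read-write', 'x', 1, 0)]
```

A real detector must also account for the synchronization primitives that order accesses; the sketch above assumes the two accesses are unordered.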


IEEE International Conference on High Performance Computing, Data, and Analytics | 1988

ParaScope: A Parallel Programming Environment

C. David Callahan; Keith D. Cooper; Robert T. Hood; Ken Kennedy; Linda Torczon

The ParaScope programming environment, under development at Rice University, has been designed to assist in the formulation, implementation, and debugging of parallel Fortran programs. In its implementation, ParaScope merges the technologies of automatic parallelism detection and integrated programming environments. This paper discusses the issues that underlie the design of ParaScope's editor, compiler, and debugger. The editor includes mechanisms for viewing and manipulating the program's dependence structure. The compilation system uses information from the various tools in the programming environment to optimize entire programs for specific parallel architectures. The debugging system includes facilities for remote debugging on parallel machines, using information from the editor and compiler to help isolate schedule-dependent errors.


Software Engineering Symposium on Practical Software Development Environments | 1987

Efficient recompilation of module interfaces in a software development environment

Hausi A. Müller; Robert T. Hood; Ken Kennedy

This paper presents global interface analysis algorithms that analyze and limit the effects of an editing change to a basic interface of a software system. The algorithms address the deficiencies of the traditional compilation rule found in strongly typed, separately compiled programming languages, which often forces the recompilation of modules that are not at all affected by a change to a basic interface. The algorithms assume a software development environment that provides efficient access to the compilation dependencies and the module interfaces of the components being implemented. The algorithms are designed to operate on recursive compilation dependencies, since separate compilation of recursive inter-module dependencies can easily be implemented in such an environment.
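The refinement over the traditional rule can be sketched as follows: instead of recompiling every client of a changed module, recompile only clients that actually reference one of the symbols whose interface changed. The module layout and function name below are hypothetical, not the paper's algorithms.

```python
# Hypothetical module layout: each module records, per import, which
# symbols of that import it actually references.
modules = {
    "main": {"imports": {"list": {"insert"}, "io": {"print_line"}}},
    "list": {"imports": {}},
    "io":   {"imports": {}},
}

def needs_recompilation(changed_module, changed_symbols):
    """Recompile a client only if it references a symbol whose
    interface actually changed (cf. the traditional rule, which
    would recompile every client of changed_module)."""
    out = set()
    for name, info in modules.items():
        used = info["imports"].get(changed_module, set())
        if used & changed_symbols:
            out.add(name)
    return out

# Changing list.insert forces recompiling main; changing an unused
# entry point of the same module forces nothing.
print(needs_recompilation("list", {"insert"}))  # {'main'}
print(needs_recompilation("list", {"delete"}))  # set()
```

A full implementation would close this check transitively over the compilation-dependence graph, including the recursive dependencies the paper targets; the sketch shows only the per-edge test.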


Bulletin of Mathematical Biology | 1986

Optimized homology searches of the gene and protein sequence data banks

Charles B. Lawrence; Daniel A. Goldman; Robert T. Hood

A strategy is presented for searching the gene and protein sequence data banks which combines the use of two previously described algorithms. The implementation of this strategy is thoroughly evaluated with respect to sensitivity, specificity and speed. The establishment of standard benchmarks for comparing programs that search the sequence data banks for homology is proposed.
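The general shape of such a combined strategy is a fast filtering pass that selects candidate sequences, followed by a more sensitive scoring pass over only the candidates. The abstract does not name the two algorithms, so the k-mer filter and shared-k-mer score below are stand-ins chosen for illustration.

```python
# Illustrative two-stage homology search: a cheap k-mer filter
# prunes the data bank, then a scoring pass ranks the survivors.

def kmers(seq, k=3):
    """Set of all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def search(query, database, k=3, min_shared=2):
    qk = kmers(query, k)
    # Stage 1: keep only sequences sharing enough k-mers with the query.
    candidates = [s for s in database if len(qk & kmers(s, k)) >= min_shared]
    # Stage 2: rank candidates; here the shared-k-mer count stands in
    # for a sensitive alignment score.
    return sorted(candidates, key=lambda s: -len(qk & kmers(s, k)))

db = ["ACGTACGT", "TTTTTTTT", "ACGTTTTT"]
print(search("ACGTACGA", db))  # ['ACGTACGT', 'ACGTTTTT']
```

The trade-off the paper evaluates, sensitivity versus speed, lives in the filter threshold: a looser `min_shared` passes more candidates to the expensive stage.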


Parallel Computing | 1988

Parallel Programming Support in ParaScope

David Callahan; Keith D. Cooper; Robert T. Hood; Ken Kennedy; Linda Torczon; Scott K. Warren

The first vector supercomputers appeared on the market in the early to mid seventies. Yet, because of the lag in developing supporting software, it is only recently that vectorizing compilers powerful enough to effectively utilize vector hardware have been developed.


Programming Language Design and Implementation | 1987

Selective interpretation as a technique for debugging computationally intensive programs

Benjamin B. Chase; Robert T. Hood

As part of Rice University's project to build a programming environment for scientific software, we have built a facility for program execution that solves some of the problems inherent in debugging large, computationally intensive programs. By their very nature such programs do not lend themselves to full-scale interpretation. In moderation, however, interpretation can be extremely useful during the debugging process. In addition to discussing the particular benefits that we expect from interpretation, this paper addresses how interpretive techniques can be effectively used in conjunction with the execution of compiled code. The same implementation technique that permits interpretation to be incorporated as part of execution will also permit the execution facility to be used for debugging parallel programs running on a remote machine.
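The mixed-mode idea can be sketched with a dispatcher that runs most functions as compiled code but routes functions flagged for debugging through a tiny interpreter that can trace each step. All names here (`run`, `interpret`, the step format) are invented for the sketch, not taken from the Rice implementation.

```python
# Toy mixed execution: compiled path vs. traceable interpreted path.

def compiled_square(x):
    return x * x

# "Interpreted" version of the same function, as primitive steps a
# debugger can single-step and trace.
square_steps = [("mul", "x", "x")]

def interpret(steps, env, trace):
    for op, a, b in steps:
        if op == "mul":
            result = env[a] * env[b]
            trace.append((op, env[a], env[b], result))
    return result

def run(name, x, debug=set(), trace=None):
    """Dispatch: interpret functions selected for debugging,
    run compiled code for everything else."""
    if name in debug:
        return interpret(square_steps, {"x": x}, trace)
    return compiled_square(x)

print(run("square", 5))                             # 25, compiled path
t = []
print(run("square", 5, debug={"square"}, trace=t))  # 25, interpreted
print(t)                                            # [('mul', 5, 5, 25)]
```

The payoff matches the paper's premise: the expensive interpreted machinery is paid for only on the small part of the program under scrutiny, while the bulk runs at compiled speed.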


SIGPLAN Notices | 1985

Efficient abstractions for the implementation of structured editors

Robert T. Hood

This paper investigates the use of abstract recursive data structures and operations in the implementation of a structured program editor. The value-oriented semantics of the proposed constructs simplify the implementation of important features such as version control and an unbounded undo operation. Since the constructs can be implemented efficiently, their use in the structured program editor does not significantly affect its performance.
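Value-oriented semantics makes unbounded undo almost free: every edit produces a new immutable version, and undo simply steps back through the retained versions. The minimal sketch below uses Python tuples as the immutable value; the `Editor` class is hypothetical, not the paper's abstractions.

```python
# Unbounded undo via value semantics: versions are immutable, and
# each edit pushes a new version rather than mutating in place.

class Editor:
    def __init__(self):
        self.history = [()]  # every version is an immutable tuple of lines

    def insert(self, index, line):
        cur = self.history[-1]
        # Build a new version; the old one stays intact in history.
        self.history.append(cur[:index] + (line,) + cur[index:])

    def undo(self):
        if len(self.history) > 1:
            self.history.pop()

    @property
    def text(self):
        return self.history[-1]

ed = Editor()
ed.insert(0, "program p")
ed.insert(1, "end")
ed.undo()
print(ed.text)   # ('program p',)
```

Copying whole tuples is the naive version of this; the efficiency claim in the paper rests on recursive data structures that share unchanged substructure between versions, so each edit costs far less than a full copy.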


High Performance Computing and Communications | 2016

Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

Subhash Saini; Robert T. Hood; Johnny Chang; John Baron

We present a performance evaluation, conducted on a production supercomputer, of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements at all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5% accuracy, and of Overflow to within 10%.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2008

Benchmarking the Columbia Supercluster

Robert T. Hood; Rupak Biswas; Johnny Chang; M. Jahed Djomehri; Haoqiang Jin

Columbia, NASA's 10,240-processor supercluster, has been ranked as one of the fastest computers in the world since November 2004. In this paper we examine the performance characteristics of its production subclusters, which are typically configurations ranging in size from 512 to 2048 processors. We evaluate floating-point performance, memory bandwidth, and message passing communication speeds using a subset of the HPC Challenge benchmarks, the NAS Parallel Benchmarks, and a computational fluid dynamics application. Our experimental results quantify the performance improvement resulting from changes in interconnect bandwidth, processor speed, and cache size across the different types of SGI Altix 3700s that constitute Columbia. We also report on our experiments that investigate the performance impact of processors sharing a path to memory. Finally, our tests of the different interconnect fabrics available indicate substantial promise for scaling applications to run on configurations of more than 512 CPUs.
