
Publication


Featured research published by Susan Flynn Hummel.


Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) | 1999

Implementing Jalapeño in Java

Bowen Alpern; Clement Richard Attanasio; Anthony Cocchi; Derek Lieber; Stephen Edwin Smith; Ton Ngo; John J. Barton; Susan Flynn Hummel; Janice C. Sheperd; Mark F. Mergen

Jalapeño is a virtual machine for Java™ servers written in Java. A running Java program involves four layers of functionality: the user code, the virtual machine, the operating system, and the hardware. By drawing the Java / non-Java boundary below the virtual machine rather than above it, Jalapeño reduces the boundary-crossing overhead and opens up more opportunities for optimization. To get Jalapeño started, a boot image of a working Jalapeño virtual machine is concocted and written to a file. Later, this file can be loaded into memory and executed. Because the boot image consists entirely of Java objects, it can be concocted by a Java program that runs in any JVM. This program uses reflection to convert the boot image into Jalapeño's object format. A special MAGIC class allows unsafe casts and direct access to the hardware. Methods of this class are recognized by Jalapeño's three compilers, which ignore their bytecodes and emit special-purpose machine code. User code will not be allowed to call MAGIC methods, so Java's integrity is preserved. A small non-Java program is used to start up a boot image and as an interface to the operating system. Java's programming features (object orientation, type safety, automatic memory management) greatly facilitated development of Jalapeño. However, we also discovered some of the language's limitations.
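As a rough illustration of the MAGIC mechanism described above, the following Java sketch shows what a compiler-intercepted class of unsafe primitives might look like. The class name, method names, and signatures here are hypothetical, not the actual Jalapeño API; the bodies are placeholders, since a Jalapeño-style compiler would ignore them and emit special-purpose machine code instead.

    // Hypothetical sketch of a MAGIC-style class. A Jalapeño-like compiler
    // would recognize these methods and replace their bytecodes with
    // special-purpose machine code; the bodies below only guard against
    // accidental execution under an ordinary JVM.
    final class Magic {
        private Magic() {}

        // Reinterpret an object reference as a raw address (an unsafe cast).
        static int objectAsAddress(Object o) {
            throw new UnsupportedOperationException("compiler intrinsic only");
        }

        // Load a word directly from memory at the given address.
        static int loadWord(int address) {
            throw new UnsupportedOperationException("compiler intrinsic only");
        }
    }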


Concurrency and Computation: Practice and Experience | 1998

High‐performance parallel programming in Java: exploiting native libraries

Vladimir Getov; Susan Flynn Hummel; Sava Mintchev

With most of today's fast scientific software written in Fortran and C, Java has a lot of catching up to do. In this paper we discuss how new Java programs can capitalize on high-performance libraries for other languages. With the help of a tool, we have automatically created Java bindings for several standard libraries: MPI, BLAS, BLACS, PBLAS and ScaLAPACK. The purpose of the additional software layer introduced by the bindings is to resolve the interface problems between different programming languages, such as data type mapping, pointers, and multidimensional arrays. For evaluation, performance results are presented for Java versions of two benchmarks from the NPB and PARKBENCH suites on the IBM SP2 using JDK and IBM's high-performance Java compiler, and on the Fujitsu AP3000 using Toba, a Java-to-C translator. The results confirm that fast parallel computing in Java is indeed possible.
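The bindings described in the paper were generated automatically by a tool; the hand-written JNI-style sketch below is only meant to convey the general shape of such a binding layer. The class name, library name, and the one-to-one wrapping of a BLAS dot-product routine are illustrative assumptions, not the paper's generated interface.

    // Sketch of a JNI-style Java binding to a native BLAS routine.
    // Running it requires a native library (here assumed to be named
    // "blasbinding") that forwards to the Fortran/C BLAS, handling array
    // pinning and data-type mapping on the native side.
    public class Blas {
        static {
            System.loadLibrary("blasbinding"); // e.g. libblasbinding.so
        }

        // ddot: double-precision dot product of x and y, each of length n,
        // with strides incx and incy, mirroring the BLAS calling convention.
        public static native double ddot(int n, double[] x, int incx,
                                         double[] y, int incy);

        public static void main(String[] args) {
            double[] x = {1.0, 2.0, 3.0};
            double[] y = {4.0, 5.0, 6.0};
            System.out.println(Blas.ddot(3, x, 1, y, 1)); // expect 32.0
        }
    }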


Conference on High Performance Computing (Supercomputing) | 1995

Balancing Processor Loads and Exploiting Data Locality in N-Body Simulations

Ioana Banicescu; Susan Flynn Hummel

Although N-body simulation algorithms are amenable to parallelization, performance gains from execution on parallel machines are difficult to obtain due to load imbalances caused by irregular distributions of bodies. In general, there is a tension between balancing processor loads and maintaining locality, as the dynamic re-assignment of work necessitates access to remote data. Fractiling is a dynamic scheduling scheme that simultaneously balances processor loads and maintains locality by exploiting the self-similarity properties of fractals. Fractiling is based on a probabilistic analysis, and thus accommodates load imbalances caused by predictable phenomena, such as irregular data, and unpredictable phenomena, such as data-access latencies. In experiments on a KSR1, the performance of N-body simulation codes was improved by as much as 53% by fractiling. Performance improvements were obtained on uniform and nonuniform distributions of bodies, underscoring the need for a scheduling scheme that accommodates system-induced variance. Because the fractiling scheme is orthogonal to the N-body algorithm, we could use simple codes that discretize space into equal-size subrectangles (2-d) or subcubes (3-d) as the base algorithms.
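Fractiling's probabilistic chunk sizing descends from factoring-style self-scheduling, in which processors grab successively smaller chunks of work: early large chunks keep scheduling overhead low, while later small chunks smooth out imbalance. The sketch below shows only that chunk-size sequence, under the simplifying assumption that each batch of chunks takes half the remaining work; it omits the fractal tiling that provides locality.

    // Sketch of a factoring-style chunk schedule of the kind fractiling
    // builds on: each batch of p chunks covers half of the remaining
    // iterations, so chunk sizes shrink geometrically.
    import java.util.ArrayList;
    import java.util.List;

    public class FactoringSchedule {
        // Chunk sizes for n iterations on p processors.
        static List<Integer> chunks(int n, int p) {
            List<Integer> sizes = new ArrayList<>();
            int remaining = n;
            while (remaining > 0) {
                int chunk = Math.max(1, remaining / (2 * p));
                for (int i = 0; i < p && remaining > 0; i++) {
                    int c = Math.min(chunk, remaining);
                    sizes.add(c);
                    remaining -= c;
                }
            }
            return sizes;
        }

        public static void main(String[] args) {
            // 1000 iterations on 4 processors:
            // [125, 125, 125, 125, 62, 62, 62, 62, 31, ...]
            System.out.println(chunks(1000, 4));
        }
    }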


Concurrency and Computation: Practice and Experience | 1997

SPMD programming in Java

Susan Flynn Hummel; Ton Ngo; Harini Srinivasan

We consider the suitability of the Java concurrent constructs for writing high-performance SPMD code for parallel machines. More specifically, we investigate implementing a financial application in Java on a distributed-memory parallel machine. Although Java was not expressly targeted at such applications and architectures, we conclude that efficient implementations are feasible. Finally, we propose a library of Java methods to facilitate SPMD programming.
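To give a flavor of the SPMD style discussed here, the sketch below has every thread run the same code on its own slice of an array and synchronize at a barrier before one thread reduces the partial results. It uses java.util.concurrent.CyclicBarrier as a stand-in for barrier support; this is an illustrative modern idiom, not the library the paper proposes.

    // Minimal SPMD-style sketch: P threads, identical code, data-parallel
    // slices, a barrier between the compute and reduce phases.
    import java.util.concurrent.CyclicBarrier;

    public class SpmdSum {
        public static void main(String[] args) throws InterruptedException {
            final int P = 4, N = 1_000_000;
            final double[] data = new double[N];
            java.util.Arrays.fill(data, 1.0);
            final double[] partial = new double[P];
            final CyclicBarrier barrier = new CyclicBarrier(P);

            Thread[] threads = new Thread[P];
            for (int id = 0; id < P; id++) {
                final int me = id;
                threads[me] = new Thread(() -> {
                    // Each "process" sums its contiguous block.
                    int lo = me * (N / P), hi = (me + 1) * (N / P);
                    double s = 0;
                    for (int i = lo; i < hi; i++) s += data[i];
                    partial[me] = s;
                    try { barrier.await(); } catch (Exception e) { return; }
                    // After the barrier, thread 0 reduces the partial sums.
                    if (me == 0) {
                        double total = 0;
                        for (double x : partial) total += x;
                        System.out.println("sum = " + total);
                    }
                });
                threads[me].start();
            }
            for (Thread t : threads) t.join();
        }
    }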


LCR | 1996

Load Balancing and Data Locality Via Fractiling: An Experimental Study

Susan Flynn Hummel; Ioana Banicescu; Chui-Tzu Wang; Joel Wein

In order to fully exploit the power of a parallel computer, an application must be distributed onto processors so that, as much as possible, each has an equal-sized, independent portion of the work. There is a tension between balancing processor loads and maximizing locality, as the dynamic re-assignment of work necessitates access to remote data. Fractiling is a dynamic scheduling scheme that simultaneously balances processor loads and maintains locality by exploiting the self-similarity properties of fractals.
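One standard example of a self-similar ordering is the Morton (Z-order) curve, sketched below: interleaving the bits of 2-D tile coordinates yields a 1-D schedule in which nearby positions tend to be nearby in space, which is the locality property fractiling exploits. This is shown only to illustrate fractal self-similarity, not as the paper's exact tiling scheme.

    // Morton (Z-order) indexing: a self-similar traversal of a 2-D grid.
    public class MortonOrder {
        // Interleave the low 16 bits of x and y into a single Morton index.
        static int morton(int x, int y) {
            int z = 0;
            for (int i = 0; i < 16; i++) {
                z |= ((x >> i) & 1) << (2 * i);
                z |= ((y >> i) & 1) << (2 * i + 1);
            }
            return z;
        }

        public static void main(String[] args) {
            // Print the visit order of a 4x4 grid of tiles; the recursive
            // Z pattern is visible in the output.
            for (int y = 0; y < 4; y++) {
                StringBuilder row = new StringBuilder();
                for (int x = 0; x < 4; x++)
                    row.append(String.format("%3d", morton(x, y)));
                System.out.println(row);
            }
        }
    }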


IBM Systems Journal | 2000

The Jalapeño virtual machine

Bowen Alpern; C. R. Attanasio; John J. Barton; Michael G. Burke; Perry Cheng; Jong-Deok Choi; Anthony Cocchi; Stephen J. Fink; David Grove; Michael Hind; Susan Flynn Hummel; Derek Lieber; Vassily Litvinov; Mark F. Mergen; Ton Ngo; James R. Russell; Vivek Sarkar; Mauricio J. Serrano; Janice C. Shepherd; S. E. Smith; Vugranam C. Sreedhar; Harini Srinivasan; John Whaley


IBM Systems Journal | 2001

Blue Gene: a vision for protein science using a petaflop supercomputer

Frances E. Allen; George S. Almasi; Wanda Andreoni; D. Beece; B. J. Berne; Arthur A. Bright; José R. Brunheroto; Călin Caşcaval; José G. Castaños; Paul W. Coteus; Paul G. Crumley; Alessandro Curioni; Monty M. Denneau; Wilm E. Donath; Maria Eleftheriou; Blake G. Fitch; B. Fleischer; C. J. Georgiou; Robert S. Germain; Mark E. Giampapa; Donna L. Gresh; Manish Gupta; Ruud A. Haring; H. Ho; Peter H. Hochschild; Susan Flynn Hummel; T. Jonas; Derek Lieber; G. Martyna; K. Maturu


IBM Systems Journal | 2000

Implementing Jalapeño in Java

Bowen Alpern; C. Richard Attanasio; John J. Barton; Michael G. Burke; Perry Cheng; Jin-ho Choi; Anthony Cocchi; Stephen J. Fink; David Grove; Michael Hind; Susan Flynn Hummel; Derek Lieber; Vassily Litvinov; Mark F. Mergen; Ton Ngo; James R. Russell; Vivek Sarkar; Mauricio J. Serrano; Janice C. Shepherd; Stephen P. Smith; Vugranam C. Sreedhar; Harini Srinivasan; John Whaley


Archive | 1996

Hierarchical Tiling: A Methodology for High Performance

Lawrence E. Carter; Jeanne Ferrante; Susan Flynn Hummel; Bowen Alpern; Kang-Su Gatlin


PPSC | 1995

Efficient Parallelism via Hierarchical Tiling

Larry Carter; Jeanne Ferrante; Susan Flynn Hummel
