Philip J. Hatcher
University of New Hampshire
Publications
Featured research published by Philip J. Hatcher.
PLOS Computational Biology | 2010
Vaughn S. Cooper; Samuel H. Vohr; Sarah C. Wrocklage; Philip J. Hatcher
In bacterial genomes composed of more than one chromosome, one replicon is typically larger, harbors more essential genes than the others, and is considered primary. The greater variability of secondary chromosomes among related taxa has led to the theory that they serve as an accessory genome for specific niches or conditions. By this rationale, purifying selection should be weaker on genes on secondary chromosomes because of their reduced necessity or usage. To test this hypothesis we selected bacterial genomes composed of multiple chromosomes from two genera, Burkholderia and Vibrio, and quantified the evolutionary rates (dN and dS) of all orthologs within each genus. Both evolutionary rate parameters were faster among orthologs found on secondary chromosomes than those on the primary chromosome. Further, in every bacterial genome with multiple chromosomes that we studied, genes on secondary chromosomes exhibited significantly weaker codon usage bias than those on primary chromosomes. Faster evolution and reduced codon bias could in turn result from global effects of chromosome position, as genes on secondary chromosomes experience reduced dosage and expression due to their delayed replication, or selection on specific gene attributes. These alternatives were evaluated using orthologs common to genomes with multiple chromosomes and genomes with single chromosomes. Analysis of these ortholog sets suggested that inherently fast-evolving genes tend to be sorted to secondary chromosomes when they arise; however, prolonged evolution on a secondary chromosome further accelerated substitution rates. In summary, secondary chromosomes in bacteria are evolutionary test beds where genes are weakly preserved and evolve more rapidly, likely because they are used less frequently.
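To make the codon-usage comparison concrete, the sketch below computes relative synonymous codon usage (RSCU), a standard measure of codon bias, for a single coding sequence. This is a minimal Java illustration, not the authors' analysis pipeline; the two-family codon table fragment and the input sequence are placeholder assumptions.

```java
import java.util.*;

// Minimal sketch: relative synonymous codon usage (RSCU) for one CDS.
// RSCU = observed count of a codon / mean count over its synonymous family.
// Values near 1.0 indicate weak bias; values far from 1.0, strong bias.
public class CodonBias {
    // Tiny fragment of the standard genetic code, enough to illustrate.
    static final Map<String, List<String>> FAMILIES = Map.of(
        "Phe", List.of("TTT", "TTC"),
        "Gly", List.of("GGT", "GGC", "GGA", "GGG"));

    static Map<String, Double> rscu(String cds) {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i + 3 <= cds.length(); i += 3)
            counts.merge(cds.substring(i, i + 3), 1, Integer::sum);
        Map<String, Double> out = new LinkedHashMap<>();
        for (List<String> family : FAMILIES.values()) {
            double total = 0;
            for (String codon : family) total += counts.getOrDefault(codon, 0);
            double mean = total / family.size();
            for (String codon : family)
                if (mean > 0) out.put(codon, counts.getOrDefault(codon, 0) / mean);
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical CDS fragment, for illustration only.
        System.out.println(rscu("TTTTTTTTCGGTGGTGGAGGG"));
    }
}
```

Under the paper's hypothesis, genes on secondary chromosomes would show RSCU values closer to 1.0 (weaker preference for particular synonymous codons) than their counterparts on the primary chromosome.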
Parallel Computing | 2001
Gabriel Antoniu; Luc Bougé; Philip J. Hatcher; Mark MacBeth; Keith McGuigan; Raymond Namyst
Our work combines Java compilation to native code with a run-time library that executes Java threads in a distributed-memory environment. This allows a Java programmer to view a cluster of processors as executing a single Java virtual machine. The separate processors are simply resources for executing Java threads with true parallelism, and the run-time system provides the illusion of a shared memory on top of the private memories of the processors. The environment we present is available on top of several UNIX systems and can use a large variety of communication interfaces thanks to the high portability of its run-time system. To evaluate our approach, we compare serial C, serial Java, and multithreaded Java implementations of a branch-and-bound solution to the minimal-cost map-coloring problem. All measurements have been carried out on two platforms using two different communication interfaces: SISCI/SCI and MPI-BIP/Myrinet.
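The benchmark's structure can be sketched with plain Java threads sharing a best-cost bound; under the system described, code of this shape would run unchanged with the threads spread across the cluster. The four-region map, per-color costs, and class names below are illustrative assumptions, not the paper's benchmark code.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a multithreaded branch-and-bound for minimal-cost map coloring.
// Each Java thread explores the subtree rooted at one color choice for
// region 0; all threads share (and prune against) the best cost found so far.
public class MapColorBB {
    static final int[][] ADJ = {          // illustrative 4-region adjacency
        {0, 1, 1, 0}, {1, 0, 1, 1}, {1, 1, 0, 1}, {0, 1, 1, 0}};
    static final int[] COLOR_COST = {1, 2, 3};   // hypothetical per-color costs
    static final AtomicInteger best = new AtomicInteger(Integer.MAX_VALUE);

    static boolean ok(int[] coloring, int region, int color) {
        for (int r = 0; r < region; r++)
            if (ADJ[region][r] == 1 && coloring[r] == color) return false;
        return true;
    }

    static void search(int[] coloring, int region, int cost) {
        if (cost >= best.get()) return;              // prune against shared bound
        if (region == ADJ.length) {
            best.accumulateAndGet(cost, Math::min);  // atomically improve bound
            return;
        }
        for (int c = 0; c < COLOR_COST.length; c++)
            if (ok(coloring, region, c)) {
                coloring[region] = c;
                search(coloring, region + 1, cost + COLOR_COST[c]);
            }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[COLOR_COST.length];
        for (int c = 0; c < workers.length; c++) {
            final int color = c;
            workers[c] = new Thread(() -> {
                int[] coloring = new int[ADJ.length];
                coloring[0] = color;
                search(coloring, 1, COLOR_COST[color]);
            });
            workers[c].start();
        }
        for (Thread t : workers) t.join();
        System.out.println("minimal cost = " + best.get());
    }
}
```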
ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 1988
Michael J. Quinn; Philip J. Hatcher; Karen C. Jourdenais
A data-parallel language such as C* has a number of advantages over conventional hypercube programming languages. The algorithm design process is simpler, because (1) message passing is invisible, (2) race conditions are nonexistent, and (3) the data can be put into a one-to-one correspondence with the virtual processors. Since data are mapped to virtual processors, rather than physical processors, it is easier to move algorithms implemented on one size hypercube to a larger or smaller system. We outline the design of a C* compiler for a hypercube multicomputer. Our design goals are to minimize the amount of time spent synchronizing, limit the number of interprocessor communications, and make each physical processor's emulation of a set of virtual processors as efficient as possible. We have hand-translated three benchmark programs and compared their performance with that of ordinary C programs. All three programs (matrix multiplication, LU decomposition, and hyperquicksort) achieve reasonable speedup on a commercial hypercube, even when solving problems of modest size. On a 64-processor NCUBE/7, the C* matrix multiplication program achieves a speedup of 27 when multiplying two 64 × 64 matrices, the hyperquicksort program achieves a speedup of 10 when sorting 16,384 integers, and LU decomposition attains a speedup of 7 when decomposing a 256 × 256 system of linear equations. We believe the degradation in machine performance resulting from the use of a data-parallel language will be more than compensated for by the increase in programmer productivity.
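The emulation goal can be sketched simply: each physical processor runs an ordinary loop over the block of virtual processors it owns, so one data-parallel statement costs one pass over local data. The sketch below is in Java for readability (the compiler described emits C); the block decomposition, rank, and names are illustrative assumptions.

```java
// Sketch of how one physical processor emulates its share of virtual
// processors: a data-parallel statement such as "dest = a + b" over n
// virtual processors compiles into a plain loop over the locally owned slice.
public class VpEmulation {
    public static void main(String[] args) {
        int n = 64 * 64;          // virtual processors (one per data element)
        int p = 64, me = 5;       // physical processors; this node's rank
        double[] a = new double[n], b = new double[n], dest = new double[n];

        // Block decomposition: VPs [lo, hi) live on processor 'me'.
        int per = (n + p - 1) / p;
        int lo = me * per, hi = Math.min(n, lo + per);

        // The emulation loop the compiler emits for one parallel statement.
        for (int vp = lo; vp < hi; vp++)
            dest[vp] = a[vp] + b[vp];
        // Synchronization and communication are inserted only where the
        // source program requires them, not after every statement.
    }
}
```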
Cluster Computing and the Grid | 2005
Gabriel Antoniu; Philip J. Hatcher; Mathieu Jan; David Noblet
The arrival of the P2P model has opened many new avenues for research within the field of distributed computing. This is mainly due to important practical features, such as support for volatility and high scalability. Several generic P2P libraries have been proposed for building higher-level services. In order to judge the appropriateness of using a generic P2P library for a given application type, an experimental performance evaluation of the provided functionalities is unavoidable. Very few analyses of this kind have been reported, as most evaluations are limited to complexity analyses and to simulations. Such experimental analyses are important, especially when using P2P software in a grid computing context, where applications may have precise efficiency requirements. In this paper, we focus on JXTA, which provides generic building blocks and protocols intended to serve as a basis for specialized P2P services and applications. We perform a performance evaluation of the three communication layers (endpoint, pipe and socket) over a fast Ethernet local-area network, for recent versions of the J2SE and C bindings of JXTA. We provide a detailed analysis explaining the behavior of these three layers and we give hints showing how to efficiently use them.
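The measurement methodology can be sketched with a simple ping-pong harness: time many round trips of fixed-size messages and report the average. The version below uses plain java.net sockets purely to illustrate the method; it does not reproduce the JXTA endpoint, pipe, or socket APIs, and the message size and iteration count are arbitrary assumptions.

```java
import java.io.*;
import java.net.*;

// Generic round-trip-time harness of the kind used to benchmark a
// messaging layer: an echo peer and a timed ping-pong loop.
public class RttBench {
    public static void main(String[] args) throws IOException {
        final int iters = 1000, size = 1024;        // illustrative parameters
        ServerSocket server = new ServerSocket(0);  // echo peer, same host

        // Echo side: read each message and send it straight back.
        new Thread(() -> {
            try (Socket s = server.accept()) {
                byte[] buf = new byte[size];
                DataInputStream in = new DataInputStream(s.getInputStream());
                OutputStream out = s.getOutputStream();
                for (int i = 0; i < iters; i++) {
                    in.readFully(buf);
                    out.write(buf);
                }
            } catch (IOException e) { e.printStackTrace(); }
        }).start();

        // Measuring side: time iters round trips, report the mean.
        try (Socket s = new Socket("localhost", server.getLocalPort())) {
            s.setTcpNoDelay(true);
            byte[] buf = new byte[size];
            DataInputStream in = new DataInputStream(s.getInputStream());
            OutputStream out = s.getOutputStream();
            long t0 = System.nanoTime();
            for (int i = 0; i < iters; i++) {
                out.write(buf);
                in.readFully(buf);
            }
            long t1 = System.nanoTime();
            System.out.printf("avg round trip, %d-byte messages: %.1f us%n",
                              size, (t1 - t0) / 1000.0 / iters);
        }
        server.close();
    }
}
```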
ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 1991
Philip J. Hatcher; Anthony J. Lapadula; Robert R. Jones; Michael J. Quinn; Ray J. Anderson
We describe our third-generation C* compiler for hypercube multicomputers. This compiler generates code suitable for execution on both the nCUBE 3200 and the Intel iPSC/2. The compiler incorporates new optimizations and utilizes an improved set of communication primitives. It supports a variety of standard domain decomposition primitives, and it also allows the programmer to specify a custom mapping of data to the distributed memories of the hypercube. The performance of this compiler on benchmark programs demonstrates that high efficiency can be achieved executing SIMD code on multicomputer architectures.
IEEE Transactions on Parallel and Distributed Systems | 1991
Philip J. Hatcher; Michael J. Quinn; Anthony J. Lapadula; Bradley K. Seevers; Ray J. Anderson; Robert R. Jones
The inadequacies of conventional parallel languages for programming multicomputers are identified. The C* language is briefly reviewed, and a compiler that translates C* programs into C programs suitable for compilation and execution on a hypercube multicomputer is presented. Results illustrating the efficiency of executing data-parallel programs on a hypercube multicomputer are reported. They show the speedup achieved by three hand-compiled C* programs executing on an nCUBE 3200 multicomputer. The first two programs, Mandelbrot set calculation and matrix multiplication, have a high degree of parallelism and a simple control structure. The C* compiler can generate relatively straightforward code with performance comparable to hand-written C code. Results for a C* program that performs Gaussian elimination with partial pivoting are also presented and discussed.
Symposium on Principles of Programming Languages | 1986
Philip J. Hatcher; Thomas W. Christopher
High-quality local code generation is one of the most difficult tasks the compiler-writer faces. Even if register allocation decisions are postponed and common subexpressions are ignored, instruction selection on machines with complex addressing can be quite difficult. Efficient and general algorithms have been developed to do instruction selection, but these algorithms fail to always find optimal solutions. Instruction selection algorithms based on dynamic programming or complete enumeration always find optimal solutions, but seem to be too costly to be practical. This paper describes a new instruction selection algorithm, and its prototype implementation, based on bottom-up tree pattern-matching. This algorithm is both time and space efficient, and is capable of doing optimal instruction selection for the DEC VAX-11 with its rich set of addressing modes.
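The flavor of cost-based bottom-up selection can be conveyed with a toy dynamic program over an expression tree: label every node with the cheapest rule that leaves its value in a register, letting a larger pattern (here, an immediate-operand add standing in for a rich addressing mode) beat compositions of smaller ones. The three "instructions" and their costs below are illustrative assumptions, not the VAX-11 pattern set from the paper.

```java
// Toy cost-based tree-pattern instruction selector, in the bottom-up
// dynamic-programming style the paper builds on.
public class TreeSelect {
    static final class Node {
        final String op; final Node l, r;   // "const", "+", or "*"
        int cost; String rule;              // cheapest way to get a register
        Node(String op, Node l, Node r) { this.op = op; this.l = l; this.r = r; }
    }

    // Label each node with the cheapest instruction sequence that leaves
    // its value in a register.
    static void label(Node n) {
        if (n == null) return;
        label(n.l); label(n.r);
        switch (n.op) {
            case "const" -> { n.cost = 1; n.rule = "MOV #imm,reg"; }
            case "+" -> {
                // Generic register-register add...
                n.cost = n.l.cost + n.r.cost + 1; n.rule = "ADD reg,reg";
                // ...beaten when the right operand is a constant that folds
                // into an immediate operand (one pattern covers two nodes).
                if (n.r.op.equals("const") && n.l.cost + 1 < n.cost) {
                    n.cost = n.l.cost + 1; n.rule = "ADD #imm,reg";
                }
            }
            case "*" -> { n.cost = n.l.cost + n.r.cost + 1; n.rule = "MUL reg,reg"; }
            default -> throw new IllegalStateException(n.op);
        }
    }

    public static void main(String[] args) {
        // (c1 * c2) + c3 : the add should fold c3 as an immediate operand.
        Node t = new Node("+",
                new Node("*", new Node("const", null, null),
                              new Node("const", null, null)),
                new Node("const", null, null));
        label(t);
        System.out.println("root: " + t.rule + ", total cost " + t.cost);
    }
}
```

Because every node is labeled in a single bottom-up pass, the approach stays time and space efficient while still choosing the globally cheapest cover of the tree.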
Genome Biology and Evolution | 2010
Kenneth M. Flynn; Samuel H. Vohr; Philip J. Hatcher; Vaughn S. Cooper
In bacterial chromosomes, the position of a gene relative to the single origin of replication generally reflects its replication timing, how often it is expressed, and consequently, its rate of evolution. However, because some archaeal genomes contain multiple origins of replication, bias in gene dosage caused by delayed replication should be minimized and hence the substitution rate of genes should associate less with chromosome position. To test this hypothesis, six archaeal genomes from the genus Sulfolobus containing three origins of replication were selected, conserved orthologs were identified, and the evolutionary rates (dN and dS) of these orthologs were quantified. Ortholog families were grouped by their consensus position and designated by their proximity to one of the three origins (O1, O2, O3). Conserved orthologs were concentrated near the origins and most variation in genome content occurred distant from the origins. Linear regressions of both synonymous and nonsynonymous substitution rates on distance from replication origins were significantly positive, the rates being greatest in the region furthest from any of the origins and slowest among genes near the origins. Genes near O1 also evolved faster than those near O2 and O3, which suggests that this origin may fire later in the cell cycle. Increased evolutionary rates and gene dispensability are strongly associated with reduced gene expression caused in part by reduced gene dosage during the cell cycle. Therefore, in this genus of Archaea as well as in many Bacteria, evolutionary rates and variation in genome content associate with replication timing.
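The core statistical step, an ordinary least-squares regression of substitution rate on distance from the nearest origin, is simple enough to sketch directly. The input arrays below are placeholders for illustration, not the study's data.

```java
// Minimal ordinary-least-squares sketch: regress substitution rate (e.g.,
// dS) on distance from the nearest replication origin.
public class RateVsDistance {
    static double[] ols(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        return new double[] { slope, intercept };
    }

    public static void main(String[] args) {
        double[] distanceKb = { 10, 150, 300, 450, 600 };   // hypothetical
        double[] dS = { 0.20, 0.25, 0.31, 0.35, 0.42 };     // hypothetical
        double[] fit = ols(distanceKb, dS);
        // A significantly positive slope would indicate faster evolution
        // with increasing distance from the origin, as the paper reports.
        System.out.printf("slope=%.5f per kb, intercept=%.3f%n", fit[0], fit[1]);
    }
}
```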
Communications of the ACM | 2001
Thilo Kielmann; Philip J. Hatcher; Luc Bougé; Henri E. Bal
Java has become increasingly popular as a general-purpose programming language. Current Java implementations focus mainly on the portability and interoperability required for Internet-centric client/server computing. Key to Java's success is its intermediate "bytecode" representation, which can be exchanged and executed by Java Virtual Machines (JVMs) on almost any computing platform. However, along with that popularity has come an increasing need for an efficient execution mode. For sequential execution, just-in-time compilers improve application performance [4]. But high-performance computing applications typically require multiple-processor systems, so efficient interprocessor communication is also needed, in addition to efficient sequential execution. As an OO language, Java uses method invocation as its main communication concept; for example, inside a single JVM, concurrent threads of control can communicate through synchronized method invocations. On a multiprocessor system with shared memory (SMP), this approach allows for some limited form of parallelism by mapping threads to different physical processors. For distributed-memory systems, Java offers the concept of a remote method invocation (RMI). With RMI, the method invocation, along with its parameters and results, is transferred across a network to and from the serving object on a remote JVM (see the sidebar "Remote Method Invocation"). With these built-in concepts for concurrency and distributed-memory communication, Java provides a unique opportunity for a widely accepted general-purpose language with a large base of existing code and programmers to also suit the needs of parallel (high-performance) computing. Unfortunately, Java is not yet widely perceived by programmers as such, due to the …
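A minimal sketch of the RMI mechanism the article refers to, using the standard java.rmi API; the Adder interface and the "adder" binding name are illustrative, not from the article.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// A remote interface: its methods may be invoked across the network.
interface Adder extends Remote {
    int add(int a, int b) throws RemoteException;
}

public class RmiDemo {
    static class AdderImpl implements Adder {
        public int add(int a, int b) { return a + b; }
    }

    public static void main(String[] args) throws Exception {
        // Server side: export the object and advertise it in a registry.
        Adder stub = (Adder) UnicastRemoteObject.exportObject(new AdderImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("adder", stub);

        // Client side: obtain the stub by name; the invocation, its
        // arguments, and the result all travel to and from the serving JVM.
        Adder remote = (Adder) LocateRegistry.getRegistry("localhost", 1099)
                                             .lookup("adder");
        System.out.println("2 + 3 = " + remote.add(2, 3));
        System.exit(0);   // registry thread is non-daemon; end the demo JVM
    }
}
```

Both sides run in one JVM here only for compactness; in practice the lookup happens on a different machine, and the stub marshals each call over the network.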
European Conference on Parallel Processing | 2000
Gabriel Antoniu; Luc Bougé; Philip J. Hatcher; Mark MacBeth; Keith McGuigan; Raymond Namyst
Our work combines Java compilation to native code with a run-time library that executes Java threads in a distributed-memory environment. This allows a Java programmer to view a cluster of processors as executing a single Java virtual machine. The separate processors are simply resources for executing Java threads with true concurrency, and the run-time system provides the illusion of a shared memory on top of the private memories of the processors. The environment we present is available on top of several UNIX systems and can use a large variety of network protocols thanks to the high portability of its run-time system. To evaluate our approach, we compare serial C, serial Java, and multithreaded Java implementations of a branch-and-bound solution to the minimal-cost map-coloring problem. All measurements have been carried out on two platforms using two different network protocols: SISCI/SCI and MPI-BIP/Myrinet.