Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Seth Abraham is active.

Publication


Featured research published by Seth Abraham.


Journal of Parallel and Distributed Computing | 1991

The twisted cube topology for multiprocessors: a study in network asymmetry

Seth Abraham; Krishnan Padmanabhan

The twisted cube topology is a variant of the binary hypercube structure for multiprocessors, with the same amount of hardware but a diameter of only (d + 1)/2 in a cube of dimension d. It has a distributed routing algorithm that is slightly more complex than that for the hypercube. However, we demonstrate in this paper that the main drawback of the network is that it is asymmetric, and this fact has significant consequences for the dynamic performance of the system. We examine the effects of these asymmetries as well as the overall performance of this new structure as a case study in the architecture of better topologies for direct-connected multiprocessors. We find that the twisted cube delivers an improvement in performance over the hypercube, but not nearly as much as the reduction in diameter.
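As a back-of-the-envelope illustration of the diameter reduction quoted in the abstract, the sketch below tabulates the hypercube diameter d against the twisted cube's (d + 1)/2 for odd d. It is a minimal script based only on the formulas above, not on the paper's routing algorithm.

def hypercube_diameter(d):
    return d

def twisted_cube_diameter(d):
    # the abstract quotes a diameter of (d + 1)/2; twisted cubes are usually
    # defined for odd d, so integer division is exact here
    return (d + 1) // 2

print(f"{'d':>3} {'hypercube':>10} {'twisted cube':>13}")
for d in range(3, 16, 2):
    print(f"{d:>3} {hypercube_diameter(d):>10} {twisted_cube_diameter(d):>13}")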


Journal of Parallel and Distributed Computing | 1991

Performance of multicomputer networks under pin-out constraints

Seth Abraham; Krishnan Padmanabhan

When the performance of multiprocessor or multicomputer interconnection networks is modeled in the literature, it is usually assumed that communication links have a channel wide enough to accommodate the entire message, so that switch-to-switch transfers can take place in one cycle. In practice, however, the channel width is constrained by real-world considerations. These constraints affect the choice of network topology and mandate more sophisticated switching technologies. In this paper, we evaluate the performance of the family of multidimensional mesh topologies (which includes the hypercube) under the constant pin-out constraint. We assign a channel width to each system so that the product of the node degree and the channel width is constant, and then analyze the stochastic performance of these systems using three schemes: message switching, virtual cut-through switching, and partial cut-through switching. Our analyses are more accurate than those appearing in the literature, and the results show that under the constant pin-out constraint, higher dimensionality is more important than wider channel width.
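To make the constant pin-out constraint concrete, here is a minimal sketch that assigns a channel width so that node degree times channel width stays fixed. The pin budget and the assumption of 2d links per node (a mesh/torus-style network) are illustrative choices, not values from the paper.

# Illustrative only: channel width under a constant pin-out budget.
# Assumption: a d-dimensional mesh/torus node has degree 2 * d; the pin
# budget below is a made-up number, not a value from the paper.

PIN_BUDGET = 256

for d in (1, 2, 3, 6, 10):
    degree = 2 * d
    channel_width = PIN_BUDGET // degree  # keep degree * width (roughly) constant
    print(f"dimension={d:2d}  node degree={degree:2d}  channel width={channel_width:3d} bits")

Higher dimensionality buys more links at the cost of narrower channels, which is the trade-off the abstract evaluates.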


international symposium on performance analysis of systems and software | 2005

Architectural Characterization of Processor Affinity in Network Processing

Annie P. Foong; Jason M. Fung; Donald Newell; Seth Abraham; Peggy Irelan; Alex Lopez-Estrada

Network protocol stacks, in particular TCP/IP software implementations, are known for their inability to scale well in general-purpose monolithic operating systems (OS) for SMP. Previous researchers have experimented with affinitizing processes/threads, as well as interrupts from devices, to specific processors in an SMP system. However, general-purpose operating systems give minimal consideration to user-defined affinity in their schedulers. Our goal is to expose the full potential of affinity through in-depth characterization of the reasons behind the performance gains. We conducted an experimental study of TCP performance under various affinity modes on IA-based servers. Results showed that interrupt affinity alone provided a throughput gain of up to 25%, and combined thread/process and interrupt affinity can achieve gains of 30%. In particular, calling out the impact of affinity on machine clears (in addition to cache misses) is a characterization that has not been done before.
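A minimal sketch of what process and interrupt affinity look like on a modern Linux system is shown below. It uses the standard os.sched_setaffinity call and the /proc/irq interface, and is only meant to illustrate the mechanism, not to reproduce the authors' IA-server experiments; the IRQ number, CPU choice, and the helper name are hypothetical.

import os

# Pin the current process to CPU 2 (Linux-specific call).
os.sched_setaffinity(0, {2})
print("process affinity:", os.sched_getaffinity(0))

# Interrupt affinity is configured per IRQ through /proc; the IRQ number in
# the commented example is hypothetical and writing the file requires root.
def set_irq_affinity(irq, cpu):
    with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
        f.write(format(1 << cpu, "x"))  # hex bitmask of allowed CPUs

# set_irq_affinity(57, 2)  # e.g. steer a NIC queue's interrupts to the same CPU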


IEEE Transactions on Parallel and Distributed Systems | 1995

Using a multipath network for reducing the effects of hot spots

Mu-Cheng Wang; Howard Jay Siegel; Mark A. Nichols; Seth Abraham

One type of interconnection network for a medium to large-scale parallel processing system (i.e., a system with 2^6 to 2^16 processors) is a buffered packet-switched multistage interconnection network (MIN). It has been shown that the performance of these networks is satisfactory for uniform network traffic. More recently, several studies have indicated that the performance of MINs is degraded significantly when there is hot spot traffic, that is, a large fraction of the messages are routed to one particular destination. A multipath MIN is a MIN with two or more paths between all source and destination pairs. This research investigates how the Extra Stage Cube multipath MIN can reduce the detrimental effects of tree saturation caused by hot spots. Simulation is used to evaluate the performance of the proposed approaches. The objective of this evaluation is to show that, under certain conditions, the performance of the network with the usual routing scheme is severely degraded by the presence of hot spots. With the proposed approaches, although the delay time of hot spot traffic may be increased, the performance of the background traffic, which constitutes the majority of the network traffic, can be significantly improved.
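For intuition about why even a small hot spot fraction causes tree saturation, a standard back-of-the-envelope bound (not taken from this paper's simulation model) goes as follows: if each of N processors injects \(\lambda\) messages per cycle and a fraction h of all messages target a single hot destination, with the remainder spread uniformly over the N destinations, then the hot destination must absorb

\[
N\lambda\left(h + \frac{1-h}{N}\right) = \lambda\bigl(1 + h(N-1)\bigr) \le 1
\quad\Longrightarrow\quad
\lambda \le \frac{1}{1 + h(N-1)}
\]

messages per cycle. For N = 2^10 and h = 0.01 this caps each processor's sustainable injection rate at roughly 0.09 messages per cycle, which is why background traffic stalls once the tree of switches rooted at the hot destination fills up.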


Journal of Parallel and Distributed Computing | 1992

Report of the Purdue Workshop on Grand Challenges in Computer Architecture for the Support of High Performance Computing

Howard Jay Siegel; Seth Abraham; William L. Bain; Kenneth E. Batcher; Thomas L. Casavant; Doug DeGroot; Jack B. Dennis; David C. Douglas; Tse Yun Feng; James R. Goodman; Alan Huang; Harry F. Jordan; J. Robert Jump; Yale N. Patt; Alan Jay Smith; James E. Smith; Lawrence Snyder; Harold S. Stone; Russ Tuck; Benjamin W. Wah

The “Purdue Workshop on Grand Challenges in Computer Architecture for the Support of High Performance Computing” was sponsored by the National Science Foundation to identify critical research topics in computer architecture as they relate to high performance computing. Following a wide-ranging discussion of the computational characteristics and requirements of the grand challenge applications, the workshop identified four major computer architecture grand challenges as crucial to advancing the state of the art of high performance computation in the coming decade. These are: (1) idealized parallel computer models; (2) usable peta-ops (10^15 ops) performance; (3) computers in an era of HDTV, gigabyte networks, and visualization; and (4) infrastructure for prototyping architectures. This report overviews some of the demands of the grand challenge applications and presents the above four grand challenges for computer architecture.


IEEE Transactions on Parallel and Distributed Systems | 2000

Performance-based constraints for multidimensional networks

James R. Anderson; Seth Abraham

A stochastic analysis of multidimensional networks with unidirectional or bidirectional links between nodes is presented. The analysis allows the development of an accurate model for examining the performance and cost trade-offs of different network configurations. The model is validated through simulation and does not rely on the simplifying assumptions of previous models. In addition, the model is valid for the hypercube network. Two new performance-based design constraints are introduced: constant maximum throughput and constant unity queue. These new constraints are fundamentally different from previous constraints, which are based on some characterization of hardware implementation costs. Both of the new constraints allow performance and cost comparisons of different network configurations to be made on the basis of an equal ability to handle a range of traffic load. Results under the new constraints clearly show that a low dimensional network, while offering the lowest message latency, must be significantly more expensive than a comparable high dimensional network and, in some cases, may be impractical to implement. In addition, the constraints demonstrate that performance is highly dependent on offered load.
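As a reminder of why "an equal ability to handle a range of traffic load" matters, the sketch below uses a textbook M/M/1 queue for a single channel. This is not the paper's stochastic model; it only shows how latency blows up as offered load approaches channel capacity, which is the behavior the new constraints are designed to equalize across configurations.

# Textbook M/M/1 single-channel illustration (not the paper's model):
# mean time in system T = 1 / (mu - lam) for service rate mu and load lam < mu.

MU = 1.0  # illustrative channel service rate, messages per cycle

for load in (0.10, 0.50, 0.80, 0.90, 0.95, 0.99):
    lam = load * MU
    latency = 1.0 / (MU - lam)
    print(f"offered load {load:4.2f} -> mean latency {latency:7.2f} cycles")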


Applied Optics | 1993

Efficient storage, computation, and exposure of computer-generated holograms by electron-beam lithography

Daniel M. Newman; Robert W. Hawley; Dennis L. Goeckel; Richard D. Crawford; Seth Abraham; Neal C. Gallagher

An efficient storage format was developed for computer-generated holograms for use in electron-beam lithography. This method employs run-length encoding and Lempel-Ziv-Welch compression and succeeds in exposing holograms that were previously infeasible owing to the holograms' tremendous pattern-data file sizes. These holograms also require significant computation; thus the algorithm was implemented on a parallel computer, which improved performance by two orders of magnitude. The decompression algorithm was integrated into the Cambridge electron-beam machine's front-end processor. Although this provides much-needed capability, some hardware enhancements will be required in the future to overcome inadequacies in the current front-end processor that result in a lengthy exposure time.
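The storage format itself is not described in the abstract, but a minimal sketch of the run-length-encoding half of such a scheme is shown below. The LZW stage and the actual e-beam pattern format are omitted, and the function names are illustrative, not the authors'.

from itertools import groupby

def rle_encode(pixels):
    # run-length encode a row of 0/1 pixels as (value, run length) pairs
    return [(value, len(list(run))) for value, run in groupby(pixels)]

def rle_decode(pairs):
    return [value for value, length in pairs for _ in range(length)]

# Exposure patterns contain long runs of identical pixels, which is what makes
# run-length encoding (followed by LZW in the paper) effective.
row = [0] * 12 + [1] * 5 + [0] * 8
encoded = rle_encode(row)
assert rle_decode(encoded) == row
print(encoded)  # [(0, 12), (1, 5), (0, 8)]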


acm sigplan symposium on principles and practice of parallel programming | 2014

Vector seeker: a tool for finding vector potential

G. Carl Evans; Seth Abraham; Bob Kuhn; David A. Padua

The importance of vector instructions is growing in modern computers. Almost all architectures include some form of vector instructions and the tendency is for the size of the instructions to grow with newer designs. To take advantage of the performance that these systems offer, it is imperative that programs use these instructions, and yet they do not always do so. The tools to take advantage of these extensions require programmer assistance either by hand coding or providing hints to the compiler. We present Vector Seeker, a tool to help investigate vector parallelism in existing codes. Vector Seeker runs with the execution of a program to optimistically measure the vector parallelism that is present. Besides describing Vector Seeker, the paper also evaluates its effectiveness using two applications from Petascale Application Collaboration Teams (PACT) and eight applications from Media Bench II. These results are compared to known results from manual vectorization studies. Finally, we use the tool to automatically analyze codes from Numerical Recipes and TSVC and then compare the results with the automatic vectorization algorithms of Intel's ICC.
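To illustrate the kind of "vector potential" such a tool looks for, the generic example below (not Vector Seeker's actual instrumentation) contrasts a loop whose iterations are independent with one that carries a dependence and therefore cannot be expressed as a single element-wise vector operation.

import numpy as np

b = np.arange(8, dtype=np.float64)
c = np.ones(8)

# Independent iterations: a[i] = b[i] + c[i] maps onto a single vector add.
a = b + c

# Loop-carried dependence: each iteration needs the previous result, so the
# loop cannot be replaced by one element-wise vector operation (it is a prefix
# sum, which needs a different parallel algorithm such as np.cumsum).
s = np.empty(8)
s[0] = b[0]
for i in range(1, 8):
    s[i] = s[i - 1] + b[i]

assert np.allclose(s, np.cumsum(b))
assert np.allclose(a, b + 1.0)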


international conference on parallel processing | 1997

Multidimensional network performance with unidirectional links

James R. Anderson; Seth Abraham

A stochastic analysis of multidimensional networks with unidirectional links between nodes is presented, which is more accurate than previous models and valid for the hypercube. The results are reconciled with those of previous researchers who have reported conflicting conclusions. In addition to the classic constraints of constant link width, pin-out, and bisection width, a new constraint, constant maximum throughput, is introduced. This constraint dramatizes the performance and cost trade-offs between different network topologies.


international conference on parallel processing | 1993

Reducing the Effect of Hot Spots by Using a Multipath Network

Mu-Cheng Wang; Howard Jay Siegel; Mark A. Nichols; Seth Abraham

One type of interconnection network for a medium to large-scale parallel processing system (i.e., a system with 2^6 to 2^16 processors) is a buffered packet-switched multistage interconnection network (MIN). It has been shown that the performance of these networks is satisfactory for uniform network traffic. More recently, several studies have indicated that the performance of MINs is degraded significantly when there is hot spot traffic, that is, a large fraction of the messages are routed to one particular destination. A multipath MIN is a MIN with two or more paths between all source and destination pairs. This research investigates how the Extra Stage Cube multipath MIN can reduce the detrimental effects of tree saturation caused by hot spots. Simulation is used to evaluate the performance of the proposed approach. The objective of this evaluation is to show that, under certain conditions, the performance of the network with the usual routing scheme is severely degraded by the presence of hot spots. With the proposed approach, although the delay time of hot spot traffic may be increased, the performance of the background traffic, which constitutes the majority of the network traffic, can be significantly improved.

Collaboration


Dive into Seth Abraham's collaborations.
