
Publication


Featured research published by Jon A. Webb.


Conference on High Performance Computing (Supercomputing) | 1988

iWarp: an integrated solution to high-speed parallel computing

Shekhar Borkar; Robert Cohn; George W. Cox; Sha Gleason; Thomas Gross; H. T. Kung; Monica S. Lam; Brian E. Moore; Craig B. Peterson; John Samuel Pieper; Linda J. Rankin; P. S. Tseng; Jim Sutton; John Urbanski; Jon A. Webb

A description is given of the iWarp architecture and how it supports various communication models and system configurations. The heart of an iWarp system is the iWarp component: a single-chip processor that requires only the addition of memory chips to form a complete system building block, called the iWarp cell. Each iWarp component contains both a powerful computation engine that runs at 20 MFLOPS (million floating-point operations per second) and a high-throughput (320 Mb/s), low-latency (100–150 ns) communication engine for interfacing with other iWarp cells. Because of their strong computation and communication capabilities, the iWarp components provide a versatile building block for high-performance parallel systems ranging from special-purpose systolic arrays to general-purpose distributed-memory computers. They can support both fine-grain parallel and coarse-grain distributed computation models simultaneously in the same system. The initial iWarp demonstration system consists of an 8×8 torus of iWarp cells, delivering more than 1.2 GFLOPS (billion floating-point operations per second). It can be expanded to include up to 1024 cells.
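As a quick consistency check on the figures quoted above, the demonstration system's peak rate follows directly from the per-cell rate:

```python
# Back-of-the-envelope check of the iWarp demo system's peak throughput,
# using the numbers in the abstract (20 MFLOPS per cell, 8x8 torus).
cells = 8 * 8                      # cells in the demonstration torus
mflops_per_cell = 20               # peak rate of one iWarp computation engine
peak_gflops = cells * mflops_per_cell / 1000
print(peak_gflops)                 # 1.28, consistent with "more than 1.2 GFLOPS"
```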


IEEE Transactions on Computers | 1987

The Warp Computer: Architecture, Implementation, and Performance

Marco Annaratone; E. Arnould; Thomas R. Gross; H. T. Kung; Monica S. Lam; Onat Menzilcioglu; Jon A. Webb

The Warp machine is a systolic array computer of linearly connected cells, each of which is a programmable processor capable of performing 10 million floating-point operations per second (10 MFLOPS). A typical Warp array includes ten cells, thus having a peak computation rate of 100 MFLOPS. The Warp array can be extended to include more cells to accommodate applications capable of using the increased computational bandwidth. Warp is integrated as an attached processor into a Unix host system. Programs for Warp are written in a high-level language supported by an optimizing compiler. The first ten-cell prototype was completed in February 1986; delivery of production machines started in April 1987. Extensive experimentation with both the prototype and production machines has demonstrated that the Warp architecture is effective in the application domain of robot navigation as well as in other fields such as signal processing, scientific computation, and computer vision research. For these applications, Warp is typically several hundred times faster than a VAX 11/780 class computer. This paper describes the architecture, implementation, and performance of the Warp machine. Each major architectural decision is discussed and evaluated with system, software, and application considerations. The programming model and tools developed for the machine are also described. The paper concludes with performance data for a large number of applications.


Artificial Intelligence | 1982

Structure from motion of rigid and jointed objects

Jon A. Webb; Jake K. Aggarwal

A method for structure from motion is presented. The method makes a motion assumption about the objects being viewed: all motion consists of translations and rotations about a fixed axis. Parallel projection is also assumed. This makes it possible to interpret the motion of as few as two rigidly connected points. The method works for both rigid and jointed objects. Results of a test of this method on Johansson's data are presented.
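The geometric setting can be illustrated with a small sketch (ours, not the authors' reconstruction algorithm): under parallel projection, the image-plane separation of two rigidly connected points rotating about a fixed axis never exceeds their true 3-D separation, which is one of the constraints such methods exploit.

```python
import numpy as np

# Two rigidly connected points rotate about a fixed axis (here the y-axis)
# and are observed under parallel projection (drop the z coordinate).
# The point coordinates are arbitrary illustrative values.
p, q = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.5])
true_dist = np.linalg.norm(p - q)

def rotate_y(v, theta):
    # Rotation about the fixed y-axis by angle theta.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] + s * v[2], v[1], -s * v[0] + c * v[2]])

# Projected (image-plane) separation over one revolution. Dropping a
# coordinate can only shrink a Euclidean distance, so the projected
# separation is bounded by the true 3-D separation.
seps = [np.linalg.norm(rotate_y(p, t)[:2] - rotate_y(q, t)[:2])
        for t in np.linspace(0, 2 * np.pi, 720)]
print(max(seps) <= true_dist + 1e-9)   # True
```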


International Symposium on Computer Architecture | 1990

Supporting systolic and memory communication in iWarp

Shekhar Borkar; Robert Cohn; George W. Cox; Thomas R. Gross; H. T. Kung; Monica S. Lam; Margie Levine; Brian E. Moore; Wire Moore; Craig B. Peterson; Jim Susman; Jim Sutton; John Urbanski; Jon A. Webb

iWarp is a parallel architecture developed jointly by Carnegie Mellon University and Intel Corporation. The iWarp communication system supports two widely used interprocessor communication styles: memory communication and systolic communication. This paper describes the rationale, architecture, and implementation for the iWarp communication system. The sending or receiving processor of a message can perform either memory or systolic communication. In memory communication, the entire message is buffered in the local memory of the processor before it is transmitted or after it is received. Therefore communication begins or terminates at the local memory. For conventional message passing methods, both sending and receiving processors use memory communication. In systolic communication, individual data items are transferred as they are produced, or are used as they are received, by the program running at the processor. Memory communication is flexible and well suited for general computing; whereas systolic communication is efficient and well suited for speed critical applications. A major achievement of the iWarp effort is the derivation of a common design to satisfy the requirements of both systolic and memory communication styles. This is made possible by two important innovations in communication: (1) program access to communication and (2) logical channels. The former allows programs to access data as they are transmitted and to redirect portions of messages to different destinations efficiently. The latter increases the connectivity between the processors and guarantees communication bandwidth for classes of messages. These innovations have provided a focus for the iWarp architecture. The result is a communication system that provides a total bandwidth of 320 MBytes/sec and that is integrated on a single VLSI component with a 20 MFLOPS plus 20 MIPS long instruction word computation engine.
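The contrast between the two styles can be mimicked in plain Python (the function names below are ours and purely illustrative; iWarp supports these styles in hardware):

```python
from queue import Queue

# A channel between two "processors", modeled as a FIFO queue.
channel = Queue()

def send_memory(message):
    # Memory communication: the whole message is assembled in local
    # memory first, then handed over as one buffered unit.
    channel.put(list(message))

def produce_systolic(n):
    # Systolic communication: each data item enters the channel as soon
    # as it is produced, with no intermediate message buffer.
    for i in range(n):
        channel.put(i * i)

send_memory([1, 2, 3])
print(channel.get())                       # [1, 2, 3]  (one buffered message)
produce_systolic(3)
print([channel.get() for _ in range(3)])   # [0, 1, 4]  (item-by-item stream)
```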


International Conference on Computer Vision | 1995

A multibaseline stereo system with active illumination and real-time image acquisition

Sing Bing Kang; Jon A. Webb; Charles Lawrence Zitnick; Takeo Kanade

We describe our implementation of a parallel depth recovery scheme for a four-camera multibaseline stereo in a convergent configuration. Our system is capable of image capture at video rate. This is critical in applications that require three-dimensional tracking. We obtain dense stereo depth data by projecting a light pattern of frequency-modulated, sinusoidally varying intensity onto the scene, thus increasing the local discriminability at each pixel and facilitating matches. In addition, we make the most of the camera viewing areas by converging them on a volume of interest. Results show that we are able to extract stereo depth data that are, on average, less than 1 mm in error at distances between 1.5 and 3.5 m from the cameras.
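The kind of projected pattern described above can be sketched in a few lines; the constants below are illustrative, not taken from the paper:

```python
import numpy as np

# A 1-D slice of a frequency-modulated sinusoidal intensity pattern:
# the instantaneous spatial frequency drifts slowly across the image,
# so each local window has a near-unique intensity profile, which is
# what improves local discriminability for stereo matching.
width = 512
x = np.arange(width)
base_freq, mod_depth, mod_freq = 0.10, 0.04, 0.005   # cycles/pixel (illustrative)
inst_freq = base_freq + mod_depth * np.sin(2 * np.pi * mod_freq * x)
phase = 2 * np.pi * np.cumsum(inst_freq)             # integrate frequency -> phase
pattern = 0.5 + 0.5 * np.sin(phase)                  # intensities in [0, 1]
print(pattern.shape)                                  # (512,)
```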


International Conference on Robotics and Automation | 1986

Progress in robot road-following

Richard S. Wallace; K. Matsuzaki; Yoshimasa Goto; Jill D. Crisman; Jon A. Webb; Takeo Kanade

We report progress in visual road following by autonomous robot vehicles. We present results and work in progress in the areas of system architecture, image rectification and camera calibration, oriented edge tracking, color classification and road-region segmentation, extracting geometric structure, and the use of a map. In test runs of an outdoor robot vehicle, the Terregator, under control of the Warp computer, we have demonstrated continuous motion vision-guided road-following at speeds up to 1.08 km/hour with image processing and steering servo loop times of 3 sec.
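A quick check of the figures above: at 1.08 km/hour with a 3-second processing and steering loop, the vehicle advances 0.9 m between control updates.

```python
# Distance traveled per control cycle, from the speeds quoted above.
speed_m_per_s = 1.08 * 1000 / 3600   # 1.08 km/h -> 0.3 m/s
loop_s = 3.0                         # image processing + steering servo loop
print(round(speed_m_per_s * loop_s, 6))   # 0.9 m per cycle
```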


International Symposium on Computer Architecture | 1986

Warp architecture and implementation

Marco Annaratone; E. Arnould; Thomas R. Gross; H. T. Kung; Monica S. Lam; Onat Menzilcioglu; Ken Sarocky; Jon A. Webb

This paper describes the scan line array processor (SLAP), a new architecture designed for high-performance yet low-cost image computation. A SLAP is a SIMD linear array of processors, and hence is easy to build and scales well with VLSI technology; yet appropriate special features and programming techniques make it efficient for a surprisingly wide variety of low and medium level computer vision tasks. We describe the basic SLAP concept and some of its variants, discuss a particular planned implementation, and indicate its performance on computer vision and other applications.


IEEE Computer | 1981

Visually Interpreting the Motion of Objects in Space

Jon A. Webb; Jake K. Aggarwal

The human visual system's ability to extract three-dimensional structure from a two-dimensional source is the key to automatic interpretation of structure from motion.


IEEE Computer | 1992

Steps toward architecture-independent image processing

Jon A. Webb

Adapt, an architecture-independent language based on the split-and-merge model, is presented. It is specialized for efficient, parallel computation of image-processing algorithms. The split-and-merge programming model is first examined. The Adapt language and its implementation are then described, and its performance is considered.
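The split-and-merge model can be sketched in a few lines of Python (a toy rendering of the model, not Adapt itself): split the image into strips, apply an operation to each strip independently, then merge the results.

```python
import numpy as np

def split_and_merge(image, op, parts=4):
    # Split: divide the image into row strips, one per (virtual) processor.
    strips = np.array_split(image, parts, axis=0)
    # Process: each strip independently (sequential here; parallel on a
    # real machine, which is the point of the model).
    results = [op(s) for s in strips]
    # Merge: reassemble the processed strips into the output image.
    return np.concatenate(results, axis=0)

img = np.arange(16, dtype=float).reshape(4, 4)
out = split_and_merge(img, lambda s: s * 2)
print(np.array_equal(out, img * 2))   # True
```

Real split-and-merge languages also handle operations whose strips overlap or exchange border data; this sketch covers only the purely local case.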


International Conference on Robotics and Automation | 1985

Warp as a machine for low-level vision

Thomas R. Gross; H. T. Kung; Monica S. Lam; Jon A. Webb

Warp is a programmable systolic array processor. One of its objectives is to support computer vision research. This paper shows how the Warp architecture can be used to fulfill the computational needs of low-level vision. We study the characteristics of low-level vision algorithms and show how they lead to requirements for computer architecture. These requirements are met by Warp. We then describe how the Warp system can be used. Warp programs can be classified in two ways: chained versus severed, and heterogeneous versus homogeneous. Chained and severed characterize the degree of interprocessor dependency, while heterogeneous and homogeneous characterize the degree of similarity between programs on individual processors. Taken in combination, these classes give four user models. Sophisticated programming tools are needed to support these user models.

Collaboration


Jon A. Webb's top co-authors:

Takeo Kanade, Carnegie Mellon University
Marco Annaratone, Carnegie Mellon University
Thomas R. Gross, Carnegie Mellon University
Jake K. Aggarwal, University of Texas at Austin
E. Arnould, Carnegie Mellon University
Onat Menzilcioglu, Carnegie Mellon University