Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where John G. Eyles is active.

Publication


Featured research published by John G. Eyles.


International Conference on Computer Graphics and Interactive Techniques | 1989

Pixel-planes 5: a heterogeneous multiprocessor graphics system using processor-enhanced memories

Henry Fuchs; John W. Poulton; John G. Eyles; Trey Greer; Jack Goldfeather; David Ellsworth; Steven Molnar; Greg Turk; Brice Tebbs; Laura Israel

This paper introduces the architecture and initial algorithms for Pixel-Planes 5, a heterogeneous multi-computer designed both for high-speed polygon and sphere rendering (1M Phong-shaded triangles/second) and for supporting algorithm and application research in interactive 3D graphics. Techniques are described for volume rendering at multiple frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form-factors. The hardware consists of up to 32 math-oriented processors, up to 16 rendering units, and a conventional 1280 × 1024-pixel frame buffer, interconnected by a 5 gigabit ring network. Each rendering unit consists of a 128 × 128-pixel array of processors-with-memory with parallel quadratic expression evaluation for every pixel. Implemented on 1.6 micron CMOS chips designed to run at 40MHz, this array has 208 bits/pixel on-chip and is connected to a video RAM memory system that provides 4,096 bits of off-chip memory. Rendering units can be independently reassigned to any part of the screen or to non-screen-oriented computation. As of April 1989, both hardware and software are still under construction, with initial system operation scheduled for fall 1989.
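The abstract's key idea, evaluating a quadratic expression at every pixel in parallel, can be illustrated in software. The sketch below (a NumPy stand-in, not the Pixel-Planes 5 microcode) evaluates Ax² + Bxy + Cy² + Dx + Ey + F for all pixels at once and uses it to find a sphere's silhouette, which is exactly such a quadratic; the coefficients and image size are made up for illustration.

```python
import numpy as np

# Software analogue of per-pixel quadratic expression evaluation: every pixel
# (x, y) computes A*x^2 + B*x*y + C*y^2 + D*x + E*y + F simultaneously.
def eval_quadratic(width, height, A, B, C, D, E, F):
    y, x = np.mgrid[0:height, 0:width].astype(float)
    return A*x*x + B*x*y + C*y*y + D*x + E*y + F

# A circle of radius r at (cx, cy) is x^2 + y^2 - 2*cx*x - 2*cy*y
# + (cx^2 + cy^2 - r^2) <= 0, so one evaluation yields the silhouette.
cx, cy, r = 64.0, 64.0, 20.0
q = eval_quadratic(128, 128, 1.0, 0.0, 1.0, -2*cx, -2*cy, cx*cx + cy*cy - r*r)
inside = q <= 0.0          # per-pixel enable bit, as in the SIMD pixel array
print(inside.sum())        # number of pixels covered by the silhouette
```

In the hardware this evaluation is done by dedicated expression-evaluation trees rather than by looping over pixels, which is what makes sphere rendering fast.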


International Conference on Computer Graphics and Interactive Techniques | 1992

PixelFlow: high-speed rendering using image composition

Steven Molnar; John G. Eyles; John W. Poulton

We describe PixelFlow, an architecture for high-speed image generation that overcomes the transformation- and frame-buffer-access bottlenecks of conventional hardware rendering architectures. PixelFlow uses the technique of image composition: it distributes the rendering task over an array of identical renderers, each of which computes a full-screen image of a fraction of the primitives. A high-performance image-composition network composites these images in real time to produce an image of the entire scene. Image-composition architectures offer performance that scales linearly with the number of renderers; there is no fundamental limit to the maximum performance achievable using this approach. A single PixelFlow renderer rasterizes up to 1.4 million triangles per second, and an n-renderer system can rasterize at up to n times this basic rate. PixelFlow performs antialiasing by supersampling. It supports deferred shading with separate hardware shaders that operate on composite images containing intermediate pixel data. PixelFlow shaders compute complex shading algorithms and procedural and image-based textures in real time. The shading rate is independent of scene complexity. A PixelFlow system can be coupled to a parallel supercomputer to serve as an immediate-mode graphics server, or it can maintain a display list for retained-mode rendering. The PixelFlow design has been simulated extensively at a high level. Custom chip design is underway. We anticipate a working system by late 1993.
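The image-composition step the abstract describes can be sketched in a few lines. This is an assumed software model, not the PixelFlow composition network: each "renderer" emits a full-screen partial image of per-pixel color and depth, and compositing merges them pixel by pixel, keeping the nearest sample; because the merge is associative, renderers can be chained linearly.

```python
import numpy as np

def composite(a, b):
    """Merge two (color, depth) partial images, keeping the nearest sample."""
    color_a, z_a = a
    color_b, z_b = b
    nearer = z_a <= z_b
    return (np.where(nearer[..., None], color_a, color_b),
            np.where(nearer, z_a, z_b))

h, w = 4, 4
background = (np.zeros((h, w, 3)), np.full((h, w), np.inf))

# Two "renderers", each holding a fraction of the primitives.
r1_color, r1_z = np.zeros((h, w, 3)), np.full((h, w), np.inf)
r1_color[:, :2] = [1.0, 0.0, 0.0]; r1_z[:, :2] = 5.0    # red slab, depth 5
r2_color, r2_z = np.zeros((h, w, 3)), np.full((h, w), np.inf)
r2_color[:, 1:3] = [0.0, 1.0, 0.0]; r2_z[:, 1:3] = 2.0  # green slab, depth 2

final_color, final_z = composite(composite((r1_color, r1_z), (r2_color, r2_z)),
                                 background)
```

Where the slabs overlap, the nearer green sample wins, which is the per-pixel visibility decision the composition network makes in hardware.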


International Conference on Computer Graphics and Interactive Techniques | 1985

Fast spheres, shadows, textures, transparencies, and image enhancements in pixel-planes

Henry Fuchs; Jack Goldfeather; Jeff P. Hultquist; Susan Spach; John D. Austin; Frederick P. Brooks; John G. Eyles; John W. Poulton

Pixel-planes is a logic-enhanced memory system for raster graphics and imaging. Although each pixel-memory is enhanced with a one-bit ALU, the system's real power comes from a tree of one-bit adders that can evaluate linear expressions Ax+By+C for every pixel (x,y) simultaneously, as fast as the ALUs and the memory circuits can accept the results. We and others have begun to develop a variety of algorithms that exploit this fast linear expression evaluation capability. In this paper we report some of those results. Illustrated in this paper are a sample image from a small working prototype of the Pixel-planes hardware and a variety of images from simulations of a full-scale system. Timing estimates indicate that 30,000 smooth-shaded triangles can be generated per second, or 21,000 smooth-shaded and shadowed triangles can be generated per second, or over 25,000 shaded spheres can be generated per second. Image enhancement by adaptive histogram equalization can be performed within 4 seconds on a 512 × 512 image.
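The role of fast Ax+By+C evaluation in rasterization can be shown with a small sketch. Assuming nothing beyond the abstract, the code below (a NumPy illustration, not the Pixel-planes adder tree) rasterizes a triangle by evaluating one linear edge function per edge for every pixel at once; a pixel is covered when all three are non-negative.

```python
import numpy as np

def edge(p, q):
    """Coefficients (A, B, C) of the linear edge function through p and q."""
    (px, py), (qx, qy) = p, q
    return qy - py, px - qx, qx * py - px * qy

def rasterize(tri, width, height):
    y, x = np.mgrid[0:height, 0:width].astype(float)
    covered = np.ones((height, width), dtype=bool)
    for p, q in zip(tri, tri[1:] + tri[:1]):
        A, B, C = edge(p, q)
        covered &= (A * x + B * y + C >= 0)  # evaluated for every pixel at once
    return covered

# Triangle wound so all three edge functions are non-negative inside.
mask = rasterize([(2, 2), (2, 12), (12, 2)], 16, 16)
print(mask.sum())   # number of covered pixels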


International Conference on Computer Graphics and Interactive Techniques | 1997

PixelFlow: the realization

John G. Eyles; Steven Molnar; John W. Poulton; Trey Greer; Anselmo Lastra; Nick England; Lee Westover

PixelFlow is an architecture for high-speed, highly realistic image generation, based on the techniques of object-parallelism and image composition. Its initial architecture was described in [MOLN92]. After development by the original team of researchers at the University of North Carolina, and co-development with industry partners, Division Ltd. and Hewlett-Packard, PixelFlow is now a much more capable system than initially conceived, and its hardware and software systems have evolved considerably. This paper describes the final realization of PixelFlow, along with hardware and software enhancements heretofore unpublished. CR Categories and Subject Descriptors: C.5.4 [Computer System Implementation]: VLSI Systems; I.3.1 [Computer Graphics]: Hardware Architecture; I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.


IEEE Transactions on Biomedical Engineering | 1981

Estimating Respiratory Mechanical Parameters in Parallel Compartment Models

John G. Eyles; R. L. Pimmel

Four iterative parameter estimation algorithms were used to obtain estimates in three parallel compartment models of the respiratory system. The stability of the parameter estimates and the agreement between the forced random noise impedance data and the model's response were evaluated for each algorithm-model combination. The combination of a two-stage simplex algorithm with a five-element model provided the most stable parameter estimates and the second best fit to the data.
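The fitting task can be illustrated on a simpler model than the paper's. The paper uses five-element parallel-compartment models and a two-stage simplex search; as a minimal stand-in, the sketch below fits a single series resistance-inertance-compliance compartment, Z(ω) = R + j(ωI − 1/(ωC)), to synthetic impedance data. That model is linear in R, I, and 1/C, so it can be solved directly by least squares; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
w = 2 * np.pi * np.linspace(2, 32, 40)      # rad/s, forced-oscillation band
R_true, I_true, C_true = 2.5, 0.01, 0.02    # synthetic "true" parameters
Z = R_true + 1j * (w * I_true - 1.0 / (w * C_true))
Z = Z + rng.normal(scale=0.01, size=w.size)  # small measurement noise

# Stack real and imaginary parts into one real-valued linear system in
# the unknowns (R, I, 1/C): Re(Z) = R, Im(Z) = w*I - (1/C)/w.
ones, zeros = np.ones_like(w), np.zeros_like(w)
A = np.block([[ones[:, None], zeros[:, None], zeros[:, None]],
              [zeros[:, None], w[:, None], -(1.0 / w)[:, None]]])
b = np.concatenate([Z.real, Z.imag])
(R_hat, I_hat, invC_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
print(R_hat, I_hat, 1.0 / invC_hat)
```

The paper's five-element models are not linear in their parameters, which is why iterative searches such as the simplex algorithm were needed there.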


International Conference on Computer Graphics and Interactive Techniques | 2000

The WarpEngine: an architecture for the post-polygonal age

Voicu Popescu; John G. Eyles; Anselmo Lastra; Joshua Steinhurst; Nick England; Lars S. Nyland

We present the WarpEngine, an architecture designed for real-time image-based rendering of natural scenes from arbitrary viewpoints. The modeling primitives are real-world images with per-pixel depth. Currently they are acquired and stored off-line; in the near future real-time depth-image acquisition will be possible, and the WarpEngine is designed to render in immediate mode from such data sources. The depth-image resolution is locally adapted by interpolation to match the resolution of the output image. 3D warping can occur either before or after the interpolation; the resulting warped/interpolated samples are forward-mapped into a warp buffer, with the precise locations recorded using an offset. Warping processors are integrated on-chip with the warp buffer, allowing efficient, scalable implementation of very high performance systems. Each chip will be able to process 100 million samples per second and provide 4.8 gigabytes per second of bandwidth to the warp buffer. The WarpEngine is significantly less complex than our previous efforts, incorporating only a single ASIC design. Small configurations can be packaged as a PC add-in card, while larger deskside configurations will provide HDTV resolutions at 50 Hz, enabling radical new applications such as 3D television. WarpEngine will be highly programmable, facilitating use as a test-bed for experimental IBR algorithms.
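Forward mapping into a warp buffer can be sketched in miniature. This is an illustrative toy, not the WarpEngine pipeline: each source sample carries a per-pixel depth, moves by a depth-dependent disparity (here simplified to a horizontal shift), and is splatted into the warp buffer with a z-test resolving collisions; a real 3D warp uses the full planar-homography-plus-disparity mapping.

```python
import numpy as np

def forward_warp(color, depth, baseline):
    """Forward-map samples with per-pixel depth; nearest sample wins."""
    h, w = depth.shape
    warp_color = np.zeros_like(color)
    warp_depth = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            disparity = baseline / depth[y, x]   # nearer samples move more
            xw = int(round(x + disparity))
            if 0 <= xw < w and depth[y, x] < warp_depth[y, xw]:
                warp_depth[y, xw] = depth[y, x]  # z-test in the warp buffer
                warp_color[y, xw] = color[y, x]
    return warp_color, warp_depth

# One scanline: two near samples (depth 1) and two far samples (depth 2).
color = np.array([[10.0, 20.0, 30.0, 40.0]])
depth = np.array([[1.0, 1.0, 2.0, 2.0]])
warped, wdepth = forward_warp(color, depth, baseline=2.0)
print(warped)
```

Where a near and a far sample land on the same destination pixel, the depth test keeps the near one, which is the collision case the warp buffer hardware must handle at full rate.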


Helmet-Mounted Displays II | 1990

Tracking a head-mounted display in a room-sized environment with head-mounted cameras

Jih-fang Wang; Ronald Azuma; Gary Bishop; Vernon L. Chi; John G. Eyles; Henry Fuchs

This paper presents our efforts to accurately track a Head-Mounted Display (HMD) in a large environment. We review our current benchtop prototype (introduced in [WCF90]), then describe our plans for building the full-scale system. Both systems use an inside-out optical tracking scheme, where lateral-effect photodiodes mounted on the user's helmet view flashing infrared beacons placed in the environment. Church's method uses the measured 2D image positions and the known 3D beacon locations to recover the 3D position and orientation of the helmet in real time. We discuss the implementation and performance of the benchtop prototype. The full-scale system design includes ceiling panels that hold the infrared beacons and a new sensor arrangement of two photodiodes with holographic lenses. In the full-scale system, the user can walk almost anywhere under the grid of ceiling panels, making the working volume nearly as large as the room.
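The core geometric problem, recovering a camera pose from known 3D beacon positions and measured 2D image positions, can be sketched as follows. Church's method itself is an iterative photogrammetric resection; as a rough stand-in, this sketch uses a direct linear transform (DLT) to recover a 3×4 projection matrix from six beacon correspondences and then verifies the reprojection. The beacon coordinates and camera are synthetic.

```python
import numpy as np

def dlt(points3d, points2d):
    """Recover a 3x4 projection matrix from 3D-2D correspondences (DLT)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 4)   # null-space vector = projection matrix

# Synthetic beacons and a known camera, used only to generate measurements.
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 2.0]])
beacons = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3],
                    [1, 1, 1], [2, 1, 2], [1, 2, 3.0]])
proj = (P_true @ np.c_[beacons, np.ones(6)].T).T
image = proj[:, :2] / proj[:, 2:3]            # measured 2D positions

P = dlt(beacons, image)                        # recovered up to scale
reproj = (P @ np.c_[beacons, np.ones(6)].T).T
reproj = reproj[:, :2] / reproj[:, 2:3]
print(np.abs(reproj - image).max())            # reprojection error
```

In the real system the measurements are noisy and arrive per-beacon over time, so an iterative method that refines the previous pose is the better fit than a batch solve like this one.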


International Solid-State Circuits Conference | 2007

A 14mW 6.25Gb/s Transceiver in 90nm CMOS for Serial Chip-to-Chip Communications

Robert E. Palmer; John W. Poulton; William J. Dally; John G. Eyles; Andrew M. Fuller; Trey Greer; Mark Horowitz; Mark D. Kellam; F. Quan; F. Zarkeshvari

A power-efficient 6.25Gb/s transceiver in 90nm CMOS for chip-to-chip communication is presented. It dissipates 2.2mW/Gb/s operating at a BER of <10^-15 over a channel with -15dB attenuation at 3.125GHz. A shared LC-PLL, resonant clock distribution, a low-swing voltage-mode transmitter, a low-power phase rotator, a software-based CDR, and an adaptive equalizer are used to reduce power.
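The two headline numbers are consistent: energy per bit is total power divided by data rate, and mW/Gb/s is numerically the same as pJ/bit. A quick check:

```python
power_w = 14e-3      # 14 mW total transceiver power
rate_bps = 6.25e9    # 6.25 Gb/s line rate
pj_per_bit = power_w / rate_bps * 1e12
print(pj_per_bit)    # about 2.24 pJ/bit, i.e. the quoted 2.2 mW/Gb/s
```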


IEEE Computer Graphics and Applications | 1992

Breaking the frame-buffer bottleneck with logic-enhanced memories

John W. Poulton; John G. Eyles; Steven Molnar; Henry Fuchs

Logic-enhanced memory chips that can remove the rasterizer/frame buffer bottleneck which limits the performance of current image-generation architectures are discussed. Putting pixel memory on-chip with rasterizing processors provides the two to three orders of magnitude improvement in access rates needed to support realistic shading models and antialiasing in interactive systems. Current high-performance graphics systems and logic-enhanced memory architectural issues are reviewed. The design of the PixelFlow Enhanced Memory Chip (EMC), which exploits advances in semiconductor technology and circuit techniques to build compact, high-performance rasterizers, is described.


International Solid-State Circuits Conference | 2013

A 0.54 pJ/b 20 Gb/s Ground-Referenced Single-Ended Short-Reach Serial Link in 28 nm CMOS for Advanced Packaging Applications

John W. Poulton; William J. Dally; Xi Chen; John G. Eyles; Thomas Hastings Greer; Stephen G. Tell; John M. Wilson; C. Thomas Gray

Laminated packages, silicon interposer substrates, and special-purpose package-to-package interconnect [1,5], together with 3D stacking of silicon components [2], enable systems with greatly improved computational power, memory capacity, and bandwidth. These package options offer very high-bandwidth channels between chips on the same substrate. We employ ground-referenced single-ended signaling, a charge-pump transmitter, and a co-designed channel to provide communication between multiple chips on one package with high bandwidth per pin and low energy per bit.

Collaboration


Dive into John G. Eyles's collaborations.

Top Co-Authors

Henry Fuchs
University of North Carolina at Chapel Hill

Steven Molnar
University of North Carolina at Chapel Hill

John D. Austin
University of North Carolina at Chapel Hill

Susan Spach
University of North Carolina at Chapel Hill

Frederick P. Brooks
University of North Carolina at Chapel Hill

Jeff P. Hultquist
University of North Carolina at Chapel Hill

Anselmo Lastra
University of North Carolina at Chapel Hill