Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Henry Fuchs is active.

Publication


Featured research published by Henry Fuchs.


international conference on computer graphics and interactive techniques | 1998

The office of the future: a unified approach to image-based modeling and spatially immersive displays

Ramesh Raskar; Greg Welch; Matt Cutts; Adam Lake; Lev Stesin; Henry Fuchs

The paper presents a vision of an office in which the walls, desk, and other everyday surfaces double as capture and display surfaces: cameras extract the geometry of the visible surfaces, and projectors render head-tracked, geometrically corrected imagery onto them. The authors describe a unified approach in which the same underlying techniques support both image-based modeling of the local scene and spatially immersive display, enabling applications such as telecollaboration between distant offices.


IEEE Computer Graphics and Applications | 1994

A sorting classification of parallel rendering

Steven Molnar; Michael Cox; David Ellsworth; Henry Fuchs

We describe a classification scheme that we believe provides a more structured framework for reasoning about parallel rendering. The scheme is based on where the sort from object coordinates to screen coordinates occurs, which we believe is fundamental whenever both geometry processing and rasterization are performed in parallel. This classification scheme supports the analysis of computational and communication costs, and encompasses the bulk of current and proposed highly parallel renderers, both hardware and software. We begin by reviewing the standard feed-forward rendering pipeline, showing how different ways of parallelizing it lead to three classes of rendering algorithms. Next, we consider each of these classes in detail, analyzing their aggregate processing and communication costs, possible variations, and constraints they may impose on rendering applications. Finally, we use these analyses to compare the classes and identify when each is likely to be preferable.
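The three classes in the taxonomy (commonly called sort-first, sort-middle, and sort-last) differ in where redistribution happens. The sort-last case, where each renderer rasterizes an arbitrary subset of primitives and finished pixels are merged by depth comparison, can be illustrated with a toy sketch. Buffer sizes, the one-pixel "primitive" format, and all function names below are illustrative, not from the paper:

```python
import numpy as np

W, H = 4, 4

def make_buffers():
    color = np.zeros((H, W), dtype=int)   # toy "color" ids (0 = background)
    depth = np.full((H, W), np.inf)       # z-buffer, initialized to far plane
    return color, depth

def rasterize(primitives, color, depth):
    # Each toy primitive covers a single pixel: (x, y, z, color_id).
    for x, y, z, c in primitives:
        if z < depth[y, x]:
            depth[y, x] = z
            color[y, x] = c

def composite(buffers):
    # Sort-last merge: depth-compare the full-screen buffers produced
    # by each parallel renderer into one final image.
    out_c, out_z = make_buffers()
    for c, z in buffers:
        closer = z < out_z
        out_z[closer] = z[closer]
        out_c[closer] = c[closer]
    return out_c, out_z

# Two parallel renderers, each holding an arbitrary subset of primitives.
r1 = [(0, 0, 1.0, 1), (1, 1, 2.0, 1)]
r2 = [(0, 0, 0.5, 2), (2, 2, 3.0, 2)]
bufs = []
for prims in (r1, r2):
    c, z = make_buffers()
    rasterize(prims, c, z)
    bufs.append((c, z))

color, depth = composite(bufs)   # pixel (0, 0) resolves to renderer 2's fragment
```

The communication cost the paper analyzes for this class is visible in the sketch: every renderer ships a full-screen color/depth buffer to the compositing stage regardless of how few primitives it drew.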


international conference on computer graphics and interactive techniques | 1992

Merging virtual objects with the real world: seeing ultrasound imagery within the patient

Michael Bajura; Henry Fuchs; Ryutarou Ohbuchi

We describe initial results which show “live” ultrasound echography data visualized within a pregnant human subject. The visualization is achieved by using a small video camera mounted in front of a conventional head-mounted display worn by an observer. The camera’s video images are composited with computer-generated ones that contain one or more 2D ultrasound images properly transformed to the observer’s current viewing position. As the observer walks around the subject, the ultrasound images appear stationary in 3-space within the subject. This kind of enhancement of the observer’s vision may have many other applications, e.g., image-guided surgical procedures and on-location 3D interactive architecture preview. CR Categories: I.3.7 [Three-Dimensional Graphics and Realism]: Virtual Reality; I.3.1 [Hardware Architecture]: Three-dimensional displays; I.3.6 [Methodology and Techniques]: Interaction techniques; J.3 [Life and Medical Sciences]: Medical information systems.
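The video-see-through merge the abstract describes reduces, per frame, to compositing the rendered ultrasound imagery over the head-mounted camera's video wherever the synthetic frame has coverage. A minimal grayscale sketch; the frames, values, and coverage mask are all illustrative:

```python
import numpy as np

def composite(camera, synthetic, alpha):
    # alpha is per-pixel coverage of the synthetic imagery in [0, 1]:
    # 0 shows the camera video, 1 shows the rendered ultrasound slice.
    return (1 - alpha) * camera + alpha * synthetic

camera = np.full((4, 4), 100.0)       # toy grayscale video frame
synthetic = np.full((4, 4), 200.0)    # toy rendered ultrasound frame
alpha = np.zeros((4, 4))
alpha[1:3, 1:3] = 1.0                 # slice covers the center pixels
out = composite(camera, synthetic, alpha)
```

In the actual system the synthetic frame is re-rendered each frame from the tracked head pose, which is what makes the slice appear fixed in 3-space as the observer moves.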


international conference on computer graphics and interactive techniques | 1989

Pixel-planes 5: a heterogeneous multiprocessor graphics system using processor-enhanced memories

Henry Fuchs; John W. Poulton; John G. Eyles; Trey Greer; Jack Goldfeather; David Ellsworth; Steven Molnar; Greg Turk; Brice Tebbs; Laura Israel

This paper introduces the architecture and initial algorithms for Pixel-Planes 5, a heterogeneous multi-computer designed both for high-speed polygon and sphere rendering (1M Phong-shaded triangles/second) and for supporting algorithm and application research in interactive 3D graphics. Techniques are described for volume rendering at multiple frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form-factors. The hardware consists of up to 32 math-oriented processors, up to 16 rendering units, and a conventional 1280 × 1024-pixel frame buffer, interconnected by a 5 gigabit ring network. Each rendering unit consists of a 128 × 128-pixel array of processors-with-memory with parallel quadratic expression evaluation for every pixel. Implemented on 1.6 micron CMOS chips designed to run at 40MHz, this array has 208 bits/pixel on-chip and is connected to a video RAM memory system that provides 4,096 bits of off-chip memory. Rendering units can be independently reassigned to any part of the screen or to non-screen-oriented computation. As of April 1989, both hardware and software are still under construction, with initial system operation scheduled for fall 1989.
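The "parallel quadratic expression evaluation for every pixel" is the key primitive: each rendering unit evaluates Ax² + Bxy + Cy² + Dx + Ey + F at all 128 × 128 pixels simultaneously, which is what makes direct sphere (quadric) rendering fast. A NumPy sketch of the same operation; the 8 × 8 tile size and the circle test are illustrative:

```python
import numpy as np

def eval_quadratic(A, B, C, D, E, F, w, h):
    # Evaluate A*x^2 + B*x*y + C*y^2 + D*x + E*y + F at every pixel
    # (x, y) at once, as the Pixel-Planes 5 renderer arrays do in hardware.
    y, x = np.mgrid[0:h, 0:w]
    return A*x*x + B*x*y + C*y*y + D*x + E*y + F

# Example: inside/outside test for a circle of radius 3 centered at (4, 4):
# (x-4)^2 + (y-4)^2 - 9 < 0, i.e. x^2 + y^2 - 8x - 8y + 23 < 0.
q = eval_quadratic(1, 0, 1, -8, -8, 23, 8, 8)
inside = q < 0
```

With per-pixel quadratics in hand, a sphere's screen-space silhouette and depth can both be derived without tessellating it into triangles.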


ieee visualization | 1999

Multi-projector displays using camera-based registration

Ramesh Raskar; Michael S. Brown; Ruigang Yang; Wei-Chao Chen; Greg Welch; Herman Towles; B. Scales; Henry Fuchs

Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year-old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.
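Camera-based registration of casually arranged projectors amounts to recovering, for each projector, a mapping from its pixels into a shared display coordinate frame; for planar display surfaces that mapping is a 3 × 3 homography. A sketch of applying such a homography once recovered; the matrix and sample points are illustrative (the paper recovers the mappings from camera observations):

```python
import numpy as np

def apply_homography(H, pts):
    # Map 2-D projector pixel coordinates into the shared display frame
    # via a 3x3 planar homography H (homogeneous coordinates).
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T   # lift to homogeneous coords
    return homog[:, :2] / homog[:, 2:3]    # perspective divide

# A pure scale-and-shift homography as a sanity check.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 1.0]])
mapped = apply_homography(H, [(0, 0), (100, 50)])
```

Pre-warping each projector's imagery through the inverse of its homography is what lets overlapping, askew projectors form one seamless display.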


medical image computing and computer assisted intervention | 1998

Augmented Reality Visualization for Laparoscopic Surgery

Henry Fuchs; Mark A. Livingston; Ramesh Raskar; D'nardo Colucci; Kurtis Keller; Andrei State; Jessica R. Crawford; Paul Rademacher; Samuel Drake; Anthony A. Meyer

We present the design and a prototype implementation of a three-dimensional visualization system to assist with laparoscopic surgical procedures. The system uses 3D visualization, depth extraction from laparoscopic images, and six degree-of-freedom head and laparoscope tracking to display a merged real and synthetic image in the surgeon’s video-see-through head-mounted display. We also introduce a custom design for this display. A digital light projector, a camera, and a conventional laparoscope create a prototype 3D laparoscope that can extract depth and video imagery.


international conference on computer graphics and interactive techniques | 1996

Technologies for augmented reality systems: realizing ultrasound-guided needle biopsies

Andrei State; Mark A. Livingston; William F. Garrett; Gentaro Hirota; Etta D. Pisano; Henry Fuchs

We present a real-time stereoscopic video-see-through augmented reality (AR) system applied to the medical procedure known as ultrasound-guided needle biopsy of the breast. The AR system was used by a physician during procedures on breast models and during non-invasive examinations of human subjects. The system merges rendered live ultrasound data and geometric elements with stereo images of the patient acquired through head-mounted video cameras and presents these merged images to the physician in a head-mounted display. The physician sees a volume visualization of the ultrasound data directly under the ultrasound probe, properly registered within the patient and with the biopsy needle. Using this system, a physician successfully guided a needle into an artificial tumor within a training phantom of a human breast. We discuss the construction of the AR system and the issues and decisions which led to the system architecture and the design of the video see-through head-mounted display. We designed methods to properly resolve occlusion of the real and synthetic image elements. We developed techniques for real-time volume visualization of time- and position-varying ultrasound data. We devised a hybrid tracking system which achieves improved registration of synthetic and real imagery, and we improved on previous techniques for calibration of a magnetic tracker.


Presence: Teleoperators & Virtual Environments | 2000

Optical Versus Video See-Through Head-Mounted Displays in Medical Visualization

Jannick P. Rolland; Henry Fuchs

We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus plane capabilities, as well as hybrid optical/video technology.


international conference on computer graphics and interactive techniques | 1985

Fast spheres, shadows, textures, transparencies, and image enhancements in pixel-planes

Henry Fuchs; Jack Goldfeather; Jeff P. Hultquist; Susan Spach; John D. Austin; Frederick P. Brooks; John G. Eyles; John W. Poulton

Pixel-planes is a logic-enhanced memory system for raster graphics and imaging. Although each pixel-memory is enhanced with a one-bit ALU, the system's real power comes from a tree of one-bit adders that can evaluate linear expressions Ax+By+C for every pixel (x,y) simultaneously, as fast as the ALUs and the memory circuits can accept the results. We and others have begun to develop a variety of algorithms that exploit this fast linear expression evaluation capability. In this paper we report some of those results. Illustrated in this paper is a sample image from a small working prototype of the Pixel-planes hardware and a variety of images from simulations of a full-scale system. Timing estimates indicate that 30,000 smooth-shaded triangles can be generated per second, or 21,000 smooth-shaded and shadowed triangles can be generated per second, or over 25,000 shaded spheres can be generated per second. Image enhancement by adaptive histogram equalization can be performed within 4 seconds on a 512×512 image.
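The per-pixel linear expression Ax+By+C is exactly an edge (half-plane) function, so three simultaneous evaluations rasterize a triangle: a pixel is covered when all three edge functions are non-negative. A NumPy sketch of that use of the primitive; the framebuffer size and triangle are illustrative:

```python
import numpy as np

W = H = 16
y, x = np.mgrid[0:H, 0:W]   # pixel coordinate grids

def half_plane(A, B, C):
    # Evaluate A*x + B*y + C at every pixel at once, as the Pixel-planes
    # adder tree does, and keep the non-negative side.
    return A*x + B*y + C >= 0

def edge(p, q):
    # Line through p and q, oriented so a counter-clockwise triangle's
    # interior lies on the non-negative side.
    (px, py), (qx, qy) = p, q
    A = -(qy - py)
    B = qx - px
    C = (qy - py)*px - (qx - px)*py
    return A, B, C

tri = [(1, 1), (12, 2), (6, 12)]          # CCW vertices (x, y)
covered = np.ones((H, W), dtype=bool)
for i in range(3):
    covered &= half_plane(*edge(tri[i], tri[(i + 1) % 3]))
```

In the hardware the three evaluations are broadcast to all pixels at once, so triangle setup cost is independent of triangle area.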


international conference on computer graphics and interactive techniques | 1980

On visible surface generation by a priori tree structures

Henry Fuchs; Zvi M. Kedem; Bruce F. Naylor

This paper describes a new algorithm for solving the hidden surface (or line) problem, to more rapidly generate realistic images of 3-D scenes composed of polygons, and presents the development of theoretical foundations in the area as well as additional related algorithms. As in many applications the environment to be displayed consists of polygons many of whose relative geometric relations are static, we attempt to capitalize on this by preprocessing the environment's database so as to decrease the run-time computations required to generate a scene. This preprocessing is based on generating a “binary space partitioning” tree whose in-order traversal at run-time produces a linear visibility-priority order, dependent upon the viewing position, on (parts of) the polygons, which can then be used to easily solve the hidden surface problem. In the application where the entire environment is static with only the viewing position changing, as is common in simulation, the results presented will be sufficient to solve completely the hidden surface problem.
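The view-dependent in-order traversal can be shown in a few lines: at each node, recurse first into the half-space not containing the eye, emit the splitting polygon, then recurse into the eye's half-space, yielding a back-to-front (painter's) order. A minimal sketch using 1-D "polygons" (points on a line, split by x positions); all names and the tiny tree are illustrative:

```python
class BSPNode:
    def __init__(self, split, front=None, back=None):
        self.split = split   # partitioning "plane" (an x position)
        self.front = front   # subtree on the +x side
        self.back = back     # subtree on the -x side

def back_to_front(node, eye):
    # View-dependent in-order traversal of the precomputed BSP tree:
    # far half-space first, then the splitter, then the near half-space.
    if node is None:
        return []
    if eye >= node.split:    # eye is on the front side
        return (back_to_front(node.back, eye) + [node.split]
                + back_to_front(node.front, eye))
    else:                    # eye is on the back side
        return (back_to_front(node.front, eye) + [node.split]
                + back_to_front(node.back, eye))

tree = BSPNode(5, front=BSPNode(8), back=BSPNode(2))
print(back_to_front(tree, eye=10))   # farthest-first: [2, 5, 8]
print(back_to_front(tree, eye=0))    # farthest-first: [8, 5, 2]
```

The tree is built once during preprocessing; only this cheap traversal runs per frame, which is the paper's central trade of preprocessing for run-time speed.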

Collaboration


Dive into Henry Fuchs's collaborations.

Top Co-Authors

Andrei State – University of North Carolina at Chapel Hill
Stephen M. Pizer – University of North Carolina at Chapel Hill
Greg Welch – University of Central Florida
Herman Towles – University of North Carolina at Chapel Hill
Anselmo Lastra – University of North Carolina at Chapel Hill
Ramesh Raskar – Massachusetts Institute of Technology
Bruce A. Cairns – University of North Carolina at Chapel Hill
Kurtis Keller – University of North Carolina at Chapel Hill
John G. Eyles – University of North Carolina at Chapel Hill