Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where David T. Chen is active.

Publication


Featured research published by David T. Chen.


International Conference on Computer Graphics and Interactive Techniques | 1996

Superior augmented reality registration by integrating landmark tracking and magnetic tracking

Andrei State; Gentaro Hirota; David T. Chen; William F. Garrett; Mark A. Livingston

Accurate registration between real and virtual objects is crucial for augmented reality applications. Existing tracking methods are individually inadequate: magnetic trackers are inaccurate, mechanical trackers are cumbersome, and vision-based trackers are computationally problematic. We present a hybrid tracking method that combines the accuracy of vision-based tracking with the robustness of magnetic tracking without compromising real-time performance or usability. We demonstrate excellent registration in three sample applications.
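
The hybrid idea can be illustrated with a toy 1-D sketch (entirely illustrative; the paper fuses full 6-DOF camera poses, not scalar positions): vision-based landmark fixes re-register the system whenever a landmark is visible, and the last observed vision-vs-magnetic offset corrects the magnetic reading in between.

```python
# Toy 1-D sketch of hybrid tracking: the magnetic tracker is always
# available but biased; landmark (vision) fixes are accurate but
# intermittent. Use the vision fix when a landmark is seen, and carry
# the vision-vs-magnetic offset forward when it is not.

def hybrid_track(magnetic, vision):
    """magnetic: one reading per frame; vision: same length,
    None on frames where no landmark was detected."""
    offset = 0.0                       # last known magnetic-tracker error
    fused = []
    for m, v in zip(magnetic, vision):
        if v is not None:
            offset = v - m             # re-register against the landmark
            fused.append(v)
        else:
            fused.append(m + offset)   # corrected magnetic estimate
    return fused

# Magnetic readings drift by +0.5; landmarks seen only on frames 0 and 3.
print(hybrid_track([1.5, 2.5, 3.5, 4.5], [1.0, None, None, 4.0]))  # -> [1.0, 2.0, 3.0, 4.0]
```

The same carry-forward-the-correction structure is what lets the combined tracker stay robust when the vision component momentarily loses its landmarks.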


Visualization in Biomedical Computing '92 | 1992

Incremental Volume Reconstruction and Rendering for 3D Ultrasound Imaging

Ryutarou Ohbuchi; David T. Chen; Henry Fuchs

In this paper, we present approaches toward interactive visualization of real-time input, applied to 3-D visualization of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices whose location and orientation have 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line and visualized at an interactive rate using a parallel volume-rendering code running on the graphics multicomputer Pixel-Planes 5.
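
A minimal pure-Python sketch of the incremental idea (not the Pixel-Planes 5 implementation: slices are restricted to axis-aligned integer depths here, the paper handles general 3-DOF placement, and the per-ray resampling cache is omitted): each 2-D slice is written into a regular volume as it arrives, so rendering never waits for a complete scan, and each pixel's ray composites front to back with early termination.

```python
def make_volume(nx, ny, nz):
    """Regular volume grid, stored sparsely as voxel -> intensity."""
    return {"shape": (nx, ny, nz), "voxels": {}}

def insert_slice(vol, slice2d, z):
    """Incremental reconstruction: write one axis-aligned 2-D slice at
    depth z as soon as it arrives, instead of waiting for the full scan."""
    nx, ny, nz = vol["shape"]
    assert 0 <= z < nz
    for y, row in enumerate(slice2d):
        for x, value in enumerate(row):
            if value:                          # skip empty samples
                vol["voxels"][(x, y, z)] = value

def ray_cast(vol, x, y):
    """Image-order front-to-back compositing along +z for one pixel's ray."""
    nx, ny, nz = vol["shape"]
    color, alpha = 0.0, 0.0
    for z in range(nz):
        value = vol["voxels"].get((x, y, z))
        if value is None:
            continue
        a = min(1.0, value)                    # toy opacity transfer function
        color += (1.0 - alpha) * a * value
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                       # early ray termination
            break
    return color

vol = make_volume(4, 4, 4)
insert_slice(vol, [[0.5] * 4 for _ in range(4)], z=1)   # first slice arrives
insert_slice(vol, [[1.0] * 4 for _ in range(4)], z=2)   # next slice arrives
print(ray_cast(vol, 0, 0))   # -> 0.75
```

In the real system the expensive step is resampling oblique slices onto the regular grid; caching those resampled values along each ray is what makes the incremental update cheap.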


Interactive 3D Graphics and Games | 1995

Interactive shape metamorphosis

David T. Chen; Andrei State; David C. Banks

Image metamorphosis (morphing) is a powerful and easy-to-use tool for generating new 2D images from existing 2D images. In recent years morphing has become popular as an artistic tool and is used extensively in the entertainment industry. In this paper we describe a new technique for controlled, feature-based metamorphosis of certain types of surfaces in 3-space; it applies well-understood 2D methods to produce shape metamorphosis between 3D models in a 2D parametric space. We also describe an interactive implementation on a parallel graphics multicomputer, which allows the user to define, modify, and examine the 3D morphing process in real time.
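
The parametric-space trick can be sketched in a few lines (the two parameterizations below are invented for illustration; the paper supports more general surfaces sharing a 2-D domain): because both shapes are evaluated at the same (u, v), the hard 3-D correspondence problem reduces to interpolation in 2-D parameter space.

```python
import math

# Hypothetical parameterizations chosen for illustration only.
def cylinder(u, v):                  # (u, v) in [0, 1]^2
    a = 2 * math.pi * u
    return (math.cos(a), math.sin(a), v)

def cone(u, v):
    a = 2 * math.pi * u
    r = 1.0 - v                      # radius shrinks toward the apex
    return (r * math.cos(a), r * math.sin(a), v)

def morph(src, dst, u, v, t):
    """Blend two surfaces at the same (u, v). Sharing one 2-D parametric
    domain gives point-to-point correspondence for free, so a 3-D shape
    morph becomes simple interpolation, here linear in t."""
    return tuple((1 - t) * a + t * b for a, b in zip(src(u, v), dst(u, v)))

print(morph(cylinder, cone, 0.0, 0.5, 0.5))   # -> (0.75, 0.0, 0.5)
```

Feature-based control, as in 2D image morphing, then amounts to warping the shared (u, v) domain before blending.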


Interactive 3D Graphics and Games | 1995

Interactive volume visualization on a heterogeneous message-passing multicomputer

Andrei State; Jonathan McAllister; Ulrich Neumann; Hong Chen; Tim J. Cullip; David T. Chen; Henry Fuchs

This paper describes VOL2, an interactive general-purpose volume renderer based on ray casting and implemented on Pixel-Planes 5, a distributed-memory, message-passing multicomputer. VOL2 is a pipelined renderer using image-space task parallelism and object-space data partitioning. We describe the parallelization and load-balancing techniques used to achieve interactive response and near-real-time frame rates. We also present a number of applications for our system and derive some general conclusions about the operation of image-order rendering algorithms on message-passing multicomputers.
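
Image-space task parallelism with dynamic load balancing can be sketched as follows (a toy shared-memory analogue; Pixel-Planes 5 used message passing, and the per-tile work here is a stand-in for real ray casting): the frame is split into tiles, and renderers pull tiles from a shared queue, so faster workers automatically pick up more work.

```python
from queue import Queue, Empty
from threading import Thread

def render_tile(tile):
    """Stand-in for real per-tile ray-casting work: return the tile's area."""
    x0, y0, x1, y1 = tile
    return (x1 - x0) * (y1 - y0)

def render_frame(width, height, tile_size, workers):
    """Split the frame into tiles; workers drain a shared task queue."""
    tasks = Queue()
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tasks.put((x, y, min(x + tile_size, width),
                       min(y + tile_size, height)))
    results = []                       # list.append is atomic in CPython

    def worker():
        while True:
            try:
                tile = tasks.get_nowait()
            except Empty:              # queue drained: this worker is done
                return
            results.append(render_tile(tile))

    threads = [Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)                # total "work" equals the frame area

print(render_frame(64, 64, 16, workers=4))   # -> 4096
```

Pulling work on demand, rather than statically assigning tiles, is what keeps all renderers busy when some image regions are much more expensive than others.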


Medical Imaging 1994: Image Processing | 1994

Core-based boundary claiming

Stephen M. Pizer; Shobha Murthy; David T. Chen

The core (defined in the accompanying paper by Morse) provides a means for characterizing the middle/width behavior of a figure, that is, an object or component thereof, directly from the image intensities and in a way insensitive to detail. The figures in question are either complete objects, contained subobjects, object protrusions, or object intrusions. The core provides the ability to claim regions of the image as including either the boundary information of an object or its protrusion or intrusion cores. The angulation in scale space, spatial position, and scale of a figure's core allow one to move from the core to a boundary at the scale of the figure. The cores of protrusion and intrusion subfigures of the figure in question will intersect this boundary at the scale of the core. Moreover, if each point on the boundary at the scale of the core is blurred in proportion to the scale of the corresponding point on the core, a collar is formed within which the boundary of the figure can be found. We show how to find the collar and how to find the boundary stably, given the collar, even in noisy or blurred objects. This ability leads to an accurate, robust, automatic method of object area computation, and the generalization of this approach to 3D also provides the basis for efficient surface rendering and volume rendering.
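
A toy 1-D sketch of the collar construction (the blur constant and the interval representation are assumptions for illustration; the paper works in 2-D/3-D scale space): each boundary position implied by the core's center and half-width is blurred in proportion to the core's local scale, yielding a band inside which the true boundary is searched for.

```python
# 1-D sketch of the "collar": per core sample, the two implied boundary
# positions (center +/- half-width) each get a search interval whose
# width is proportional to the core's local scale.

def collar(core_points, k=1.0):
    """core_points: list of (center, half_width, scale) core samples.
    k is an assumed blur factor. Returns (inner, outer) search
    intervals around each implied boundary position."""
    bands = []
    for center, half_width, scale in core_points:
        for boundary in (center - half_width, center + half_width):
            bands.append((boundary - k * scale, boundary + k * scale))
    return bands

print(collar([(5.0, 2.0, 0.5)]))   # -> [(2.5, 3.5), (6.5, 7.5)]
```

The stable boundary-finding step then only has to examine intensities inside these narrow bands rather than the whole image.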


IEEE Computer Graphics and Applications | 2012

Visualizing Cells and Humans in 3D: Biomedical Image Analysis at Nanometer and Meter Scales

Terry S. Yoo; Donald Bliss; Bradley C. Lowekamp; David T. Chen; Gavin E. Murphy; Kedar Narayan; Lisa M. Hartnell; Thao Do; Sriram Subramaniam

Researchers analyzed and presented volume data from the Visible Human Project (VHP) and data from high-resolution 3D ion-abrasion scanning electron microscopy (IA-SEM). They acquired the VHP data using cryosectioning, a destructive approach to 3D human anatomical imaging resulting in whole-body images with a field of view approaching 2 meters and a minimum resolvable feature size of 300 microns. IA-SEM is a type of block-face imaging microscopy, a destructive approach to microscopic 3D imaging of cells. The field of view of IA-SEM data is on the order of 10 microns (whole cell) with a minimum resolvable feature size of 15 nanometers (single-slice thickness). Despite the difference in subject and scale, the analysis and modeling methods were remarkably similar. They are derived from image processing, computer vision, and computer graphics techniques. Moreover, together we are employing medical illustration, visualization, and rapid prototyping to inform and inspire biomedical science. By combining graphics and biology, we are imaging across nine orders of magnitude of space to better promote public health through research.


International Conference on Computer Graphics and Interactive Techniques | 2006

Animated embroidery: a teapot in modern blackwork

Terry Yoo; Penny Rheingans; David T. Chen; Marc Olano; Bradley C. Lowekamp

We set out to explore new domains of computer graphics through uncommon media, pushing the bounds of non-photorealistic rendering (NPR). We use computer graphics NPR methods to move beyond the common pen-and-ink or impressionist oil-painting styles, and we have implemented computer-generated blackwork, a decorative art of embroidery originating in Elizabethan times to add detail to clothing. The basic techniques of blackwork were later used as an illustrative form, revived in the mid-20th century. We adapt these methods to computer graphics and show how iconic objects such as the Utah teapot (and the less well known teacup) can be rendered anew using vintage media. We add a very uncommon twist, animating embroidery, a medium that does not lend itself to moving images, perhaps for the first time. Using PLAWARe, our layered software architecture for NPR, we implemented a system to place artistic embroidery primitives according to the principles of blackwork, an embroidery style popular in the sixteenth and seventeenth centuries. A 20th-century revival of blackwork generates variation in tones, contrasting lights and darks, through elaborate fill patterns; this more modern blackwork resembles pen-and-ink drawing. These techniques are designed specifically with embroidered primitives in mind. They are not simply rendered images converted to rasterized TIFF files and then digitized in the same fashion as trademarked logos for promotional garments. We instead derive the tone of each region of the scene and place individual stitches using directives for automated embroidery tools. In this direct process, we generate no output image files, but rather a control file for the stitching machine itself. Our modern interpretation renders polygonal objects using computer graphics techniques and, using computer-controlled embroidery machines, translates them to embroidered panels.
We choose the venerable Utah teapot as our subject for a still life, connecting a tradition in the field of computer graphics to this centuries-old art form. Beyond the casting of an old art form into computer graphics, we employ the power of computer control to animate what is traditionally a hand craft. Our exhibit is a proposed installation, a zoetrope, a simple mechanical animation tool in which we expect to mount twelve 9-inch embroidered panels around the inside of a slotted, rotating 3-foot-diameter cylinder, creating a moving animation loop of blackwork-embroidered renderings. The use of real panels, rather than photographs, is considered essential to convey the tangible aspects of this technique.
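
The tone-to-stitch pipeline described above can be sketched hypothetically (the pattern names and tone bands below are invented for illustration): darker regions receive denser blackwork fill patterns, and the output is a list of stitch directives for the embroidery machine rather than an image.

```python
# Hypothetical tone-driven fill selection: darker region -> denser
# blackwork pattern. Output is machine directives, not raster pixels.

PATTERNS = [                      # sparse -> dense fill, by tone band
    ("open lattice",   0.25),
    ("single diamond", 0.50),
    ("double diamond", 0.75),
    ("solid cross",    1.00),
]

def pattern_for_tone(tone):
    """Map a region's tone (0 = white, 1 = black) to a fill pattern."""
    for name, upper in PATTERNS:
        if tone <= upper:
            return name
    return PATTERNS[-1][0]

def stitch_directives(regions):
    """Emit one (region, pattern) directive per scene region."""
    return [(rid, pattern_for_tone(tone)) for rid, tone in regions]

print(stitch_directives([("lid", 0.2), ("body", 0.6), ("shadow", 0.9)]))
```

A real control file would additionally encode stitch positions and thread paths per pattern; the point of the sketch is that tone drives pattern choice directly, with no intermediate image file.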


IEEE Visualization | 1994

Case study: observing a volume rendered fetus within a pregnant patient

Andrei State; David T. Chen; Chris Tector; Andrew Brandt; Hong Chen; Ryutarou Ohbuchi; Michael Bajura; Henry Fuchs



Archive | 1999

M-Reps: A New Object Representation for Graphics

Stephen M. Pizer; Andrew Thall; David T. Chen

Collaboration


Dive into David T. Chen's collaboration.

Top Co-Authors

Andrei State
University of North Carolina at Chapel Hill

Henry Fuchs
University of North Carolina at Chapel Hill

Andrew Brandt
University of North Carolina at Chapel Hill

Chris Tector
University of North Carolina at Chapel Hill

Hong Chen
University of North Carolina at Chapel Hill

Michael Bajura
University of North Carolina at Chapel Hill

Stephen M. Pizer
University of North Carolina at Chapel Hill

Terry S. Yoo
National Institutes of Health