Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mark A. Livingston is active.

Publication


Featured research published by Mark A. Livingston.


international conference on computer graphics and interactive techniques | 1996

Superior augmented reality registration by integrating landmark tracking and magnetic tracking

Andrei State; Gentaro Hirota; David T. Chen; William F. Garrett; Mark A. Livingston

Accurate registration between real and virtual objects is crucial for augmented reality applications. Existing tracking methods are individually inadequate: magnetic trackers are inaccurate, mechanical trackers are cumbersome, and vision-based trackers are computationally problematic. We present a hybrid tracking method that combines the accuracy of vision-based tracking with the robustness of magnetic tracking without compromising real-time performance or usability. We demonstrate excellent registration in three sample applications.
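The hybrid scheme described above can be pictured as a rigid correction transform learned from the vision tracker and applied to the magnetic readings. The following is a minimal sketch of that idea, not the paper's actual algorithm; the 4x4 homogeneous pose matrices and function names are illustrative assumptions.

```python
import numpy as np

def fuse_pose(T_magnetic, T_vision, T_correction):
    """Fuse a fast-but-inaccurate magnetic pose with an accurate-but-
    fragile vision pose. When landmarks are visible, relearn the rigid
    correction that maps magnetic readings onto vision poses; when they
    are lost, keep applying the last correction to the magnetic stream."""
    if T_vision is not None:
        T_correction = T_vision @ np.linalg.inv(T_magnetic)
    return T_correction @ T_magnetic, T_correction
```

In this sketch the magnetic stream provides robustness (it never drops out), while the vision stream restores accuracy whenever landmarks are in view.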


medical image computing and computer assisted intervention | 1998

Augmented Reality Visualization for Laparoscopic Surgery

Henry Fuchs; Mark A. Livingston; Ramesh Raskar; D'nardo Colucci; Kurtis Keller; Andrei State; Jessica R. Crawford; Paul Rademacher; Samuel Drake; Anthony A. Meyer

We present the design and a prototype implementation of a three-dimensional visualization system to assist with laparoscopic surgical procedures. The system uses 3D visualization, depth extraction from laparoscopic images, and six-degree-of-freedom head and laparoscope tracking to display a merged real and synthetic image in the surgeon’s video-see-through head-mounted display. We also introduce a custom design for this display. A digital light projector, a camera, and a conventional laparoscope create a prototype 3D laparoscope that can extract depth and video imagery.


international conference on computer graphics and interactive techniques | 1996

Technologies for augmented reality systems: realizing ultrasound-guided needle biopsies

Andrei State; Mark A. Livingston; William F. Garrett; Gentaro Hirota; Etta D. Pisano; Henry Fuchs

We present a real-time stereoscopic video-see-through augmented reality (AR) system applied to the medical procedure known as ultrasound-guided needle biopsy of the breast. The AR system was used by a physician during procedures on breast models and during non-invasive examinations of human subjects. The system merges rendered live ultrasound data and geometric elements with stereo images of the patient acquired through head-mounted video cameras and presents these merged images to the physician in a head-mounted display. The physician sees a volume visualization of the ultrasound data directly under the ultrasound probe, properly registered within the patient and with the biopsy needle. Using this system, a physician successfully guided a needle into an artificial tumor within a training phantom of a human breast. We discuss the construction of the AR system and the issues and decisions which led to the system architecture and the design of the video see-through head-mounted display. We designed methods to properly resolve occlusion of the real and synthetic image elements. We developed techniques for real-time volume visualization of time- and position-varying ultrasound data. We devised a hybrid tracking system which achieves improved registration of synthetic and real imagery, and we improved on previous techniques for calibration of a magnetic tracker.


ieee virtual reality conference | 2006

A Survey of Large High-Resolution Display Technologies, Techniques, and Applications

Tao Ni; Greg S. Schmidt; Oliver G. Staadt; Mark A. Livingston; Robert Ball; Richard May

Continued advances in display hardware, computing power, networking, and rendering algorithms have all converged to dramatically improve large high-resolution display capabilities. We present a survey of prior research on large high-resolution displays. In the hardware configurations section we examine systems including multi-monitor workstations, reconfigurable projector arrays, and others. Rendering and the data pipeline are addressed with an overview of current technologies. We discuss many applications for large high-resolution displays such as automotive design, scientific visualization, control centers, and others. Quantifying the effects of large high-resolution displays on human performance and other aspects is important as we look toward future advances in display technology and how it is applied in different situations. Interacting with these displays brings a different set of challenges for HCI professionals, so an overview of some of this work is provided. Finally, we present our view of the top ten greatest challenges in large high-resolution displays.


ieee virtual reality conference | 2012

Performance measurements for the Microsoft Kinect skeleton

Mark A. Livingston; Jay Sebastian; Zhuming Ai; Jonathan W. Decker

The Microsoft Kinect for Xbox 360 (“Kinect”) provides a convenient and inexpensive depth sensor and, with the Microsoft software development kit, a skeleton tracker. These have great potential to be useful as virtual environment (VE) control interfaces for avatars or for viewpoint control. In order to determine its suitability for our applications, we devised and conducted tests to measure standard performance specifications for tracking systems. We evaluated the noise, accuracy, resolution, and latency of the skeleton tracking software. We also measured the range within which the person being tracked must stand in order to achieve these values.
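Two of the quantities named in the abstract, noise and accuracy, can be estimated from a static capture of a single tracked joint. The sketch below assumes that setup; the array shapes, names, and the surveyed ground-truth position are illustrative, not the paper's exact protocol.

```python
import numpy as np

def static_noise_and_accuracy(samples, ground_truth):
    """samples: (N, 3) tracked positions of one joint while the subject
    holds still; ground_truth: (3,) independently surveyed position.
    Noise is the RMS jitter about the sample mean; accuracy is the
    distance from the sample mean to the surveyed position."""
    mean = samples.mean(axis=0)
    noise = np.sqrt(((samples - mean) ** 2).sum(axis=1).mean())
    accuracy = np.linalg.norm(mean - ground_truth)
    return noise, accuracy
```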


Presence: Teleoperators & Virtual Environments | 1997

Magnetic tracker calibration for improved augmented reality registration

Mark A. Livingston; Andrei State

We apply a look-up table technique to calibrate both position and orientation readings from a magnetic tracker for use in virtual environments within a defined working volume. In a test volume of 2.4 cubic meters, the method reduced the tracker's average position error by 79% and its average orientation error by 40%. We test the correction table against the tracker's performance outdoors (a metal-poor environment) and show that readings taken in our lab and corrected by our method exhibit less error than uncorrected readings taken outdoors. We demonstrate that such reduction in position error visibly improves registration in an augmented reality system, whereas the (lesser) reduction in orientation error does not visibly improve registration. We show that the model we used for the orientation error function was incorrect, preventing our method from achieving better correction of orientation error. We discuss future directions for correction of orientation error.
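The look-up table idea can be sketched as a regular grid of measured error vectors that is interpolated at each raw reading. Trilinear interpolation is one standard choice; the grid layout and names below are assumptions for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def corrected_position(p_raw, grid_origin, spacing, table):
    """Apply a position correction interpolated from a calibration
    look-up table. table: (Nx, Ny, Nz, 3) error vectors measured at
    the points of a regular lattice covering the working volume."""
    p = np.asarray(p_raw, dtype=float)
    u = (p - grid_origin) / spacing          # continuous grid coordinates
    i = np.floor(u).astype(int)              # index of the enclosing cell
    f = u - i                                # fractional position in cell
    corr = np.zeros(3)
    for dx in (0, 1):                        # blend the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                corr += w * table[i[0] + dx, i[1] + dy, i[2] + dz]
    return p + corr                          # corrected tracker reading
```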


The Visual Computer | 1999

Interactive surface decomposition for polyhedral morphing

Arthur D. Gregory; Andrei State; Ming C. Lin; Dinesh Manocha; Mark A. Livingston

We present a new approach for establishing correspondence for morphing between two homeomorphic polyhedral models. The user can specify corresponding feature pairs on the polyhedra with a simple and intuitive interface. Based on these features, our algorithm decomposes the boundary of each polyhedron into the same number of morphing patches. A 2D mapping for each morphing patch is computed in order to merge the topologies of the polyhedra one patch at a time. We create a morph by defining morphing trajectories between the feature pairs and by interpolating them across the merged polyhedron. The user interface provides high-level control, as well as local refinement to improve the morph. The implementation has been applied to several polyhedra composed of thousands of polygons. The system can also handle homeomorphic non-simple polyhedra that are not genus zero (i.e., that have holes).
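Once correspondence is established, the morph itself reduces to interpolating corresponding vertex positions of the merged polyhedron. Below is a minimal sketch of the simplest, straight-line case; the paper's feature trajectories generalize this, and the names are illustrative.

```python
import numpy as np

def morph_vertices(V_source, V_target, t):
    """Blend corresponding vertices of the merged polyhedron at morph
    parameter t in [0, 1]; t = 0 reproduces the source shape and t = 1
    the target. V_source, V_target: (N, 3) arrays in correspondence."""
    return (1.0 - t) * V_source + t * V_target
```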


interactive 3d graphics and games | 1997

Managing latency in complex augmented reality systems

Marco C. Jacobs; Mark A. Livingston; Andrei State

Registration (or alignment) of the synthetic imagery with the real world is crucial in augmented reality (AR) systems. The data from user-input devices, tracking devices, and imaging devices need to be registered spatially and temporally with the user's view of the surroundings. Each device has an associated delay between its observation of the world and the moment when the AR display presented to the user appears to be affected by a change in the data. We call the differences in delay the relative latencies. Relative latency is a source of misregistration and should be reduced. We give general methods for handling multiple data streams with different latency values associated with them in a working AR system. We measure the latency differences (part of the system-dependent set of calibrations), time-stamp on the host, adjust the moment of sampling, and interpolate or extrapolate data streams. By using these schemes, a more accurate and consistent view is computed and presented to the user.
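The per-stream handling the abstract describes (measure each stream's latency, time-stamp on the host, then bring all streams to a common display time) can be sketched as follows. The sample format and the choice of linear extrapolation are illustrative assumptions, not the paper's exact method.

```python
def sample_at_display_time(stream, latency, t_display):
    """stream: list of (host_timestamp, value) pairs for one device.
    Shift each host timestamp back by the stream's measured latency to
    recover the observation time, then linearly extrapolate the two
    newest samples forward to the display time t_display."""
    (t0, v0), (t1, v1) = stream[-2], stream[-1]
    t0, t1 = t0 - latency, t1 - latency      # observation times
    alpha = (t_display - t0) / (t1 - t0)
    return v0 + alpha * (v1 - v0)
```

Evaluating every stream at the same t_display is what makes the merged view temporally consistent despite the streams' different relative latencies.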


IEEE Transactions on Visualization and Computer Graphics | 2007

Egocentric depth judgments in optical, see-through augmented reality

J. Edward Swan; Adam Thomas Jones; Eric Kolstad; Mark A. Livingston; Harvey S. Smallman

A fundamental problem in optical, see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need to both place graphics in arbitrary spatial relationships with real-world objects, and to know that users will perceive them in the same relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as x-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discusses the well-known problem of depth underestimation in virtual environments. It then describes two experiments that measured egocentric depth judgments in AR. Experiment I used a perceptual matching protocol to measure AR depth judgments at medium and far-field distances of 5 to 45 meters. The experiment studied the effects of upper versus lower visual field location, the x-ray vision condition, and practice on the task. The experimental findings include evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at ~ 23 meters, as well as a quantification of how much more difficult the x-ray vision condition makes the task. Experiment II used blind walking and verbal report protocols to measure AR depth judgments at distances of 3 to 7 meters. The experiment examined real-world objects, real-world objects seen through the AR display, virtual objects, and combined real and virtual objects. The results give evidence that the egocentric depth of AR objects is underestimated at these distances, but to a lesser degree than has previously been found for most virtual reality environments. The results are consistent with previous studies that have implicated a restricted field-of-view, combined with an inability for observers to scan the ground plane in a near-to-far direction, as explanations for the observed depth underestimation.


VBC '96 Proceedings of the 4th International Conference on Visualization in Biomedical Computing | 1996

Towards Performing Ultrasound-Guided Needle Biopsies from within a Head-Mounted Display

Henry Fuchs; Andrei State; Etta D. Pisano; William F. Garrett; Gentaro Hirota; Mark A. Livingston; Stephen M. Pizer

Augmented reality is applied to ultrasound-guided needle biopsy of the human breast. In a tracked stereoscopic head-mounted display, a physician sees the ultrasound imagery “emanating” from the transducer, properly registered with the patient and the biopsy needle. A physician has successfully used the system to guide a needle into a synthetic tumor within a breast phantom and examine a human patient in preparation for a cyst aspiration.

Collaboration


Dive into Mark A. Livingston's collaboration.

Top Co-Authors

Simon J. Julier
University College London

Jonathan W. Decker
United States Naval Research Laboratory

Dennis G. Brown
United States Naval Research Laboratory

Yohan Baillot
United States Naval Research Laboratory

Zhuming Ai
United States Naval Research Laboratory

Andrei State
University of North Carolina at Chapel Hill

J. Edward Swan
Mississippi State University

Lawrence J. Rosenblum
United States Naval Research Laboratory