
Publication


Featured research published by Zachary Wartell.


IEEE Virtual Reality Conference | 2000

The Perceptive Workbench: toward spontaneous and natural interaction in semi-immersive virtual environments

Bastian Leibe; Thad Starner; William Ribarsky; Zachary Wartell; David M. Krum; Bradley A. Singletary; Larry F. Hodges

The Perceptive Workbench enables a spontaneous, natural, and unimpeded interface between the physical and virtual worlds. It uses vision-based methods for interaction that eliminate the need for wired input devices and wired tracking. Objects are recognized and tracked when placed on the display surface. Through the use of multiple light sources, the object's 3D shape can be captured and inserted into the virtual interface. This ability permits spontaneity, since either preloaded objects or those selected on the spot by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3D hand position, pointing direction, and sweeping arm gestures. Such gestures can enhance selection, manipulation, and navigation tasks. In this paper, the Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.
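
As a rough illustration of the shadow-based shape capture described above, here is a minimal Python sketch that carves a voxel visual hull from silhouette masks. The silhouette masks, projection function, and grid resolution are all illustrative assumptions; the paper's actual vision pipeline is not shown here.

```python
# Hedged sketch of visual-hull carving in the spirit of the Perceptive
# Workbench's shape capture: each light source casts a silhouette of the
# object, and a voxel survives only if it projects inside every
# silhouette. All names and the projection model are illustrative.
import numpy as np

def carve_visual_hull(silhouettes, project, grid_shape=(64, 64, 64)):
    """silhouettes: list of 2D boolean masks, one per light source.
    project: function (light_index, Nx3 coords in [0,1)) -> (u, v) pixels.
    Returns a boolean voxel grid: True where the object may exist."""
    hull = np.ones(grid_shape, dtype=bool)
    coords = np.indices(grid_shape).reshape(3, -1).T / np.array(grid_shape)
    for i, sil in enumerate(silhouettes):
        u, v = project(i, coords)                 # pixel hit per voxel
        inside = np.zeros(len(coords), dtype=bool)
        ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        inside[ok] = sil[v[ok], u[ok]]            # voxel lies in the shadow?
        hull &= inside.reshape(grid_shape)        # carve away lit voxels
    return hull

# Toy usage: a single overhead light sees a square shadow.
sil = np.zeros((32, 32), dtype=bool)
sil[8:24, 8:24] = True
def top_down(i, xyz):                             # orthographic projection
    return (xyz[:, 0] * 32).astype(int), (xyz[:, 1] * 32).astype(int)
print(carve_visual_hull([sil], top_down).sum(), "voxels survive")
```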


IEEE Computer Graphics and Applications | 2000

Toward spontaneous interaction with the Perceptive Workbench

Bastian Leibe; Thad Starner; William Ribarsky; Zachary Wartell; David M. Krum; Justin Weeks; Bradley A. Singletary; Larry F. Hodges

Until now, we have interacted with computers mostly by using wire-based devices. Typically, the wires limit the distance of movement and inhibit freedom of orientation. In addition, most interactions are indirect: the user moves a device as an analog for the action created in the display space. We envision an untethered interface that accepts gestures directly and can accept any objects we choose as interactors. We discuss methods for producing more seamless interaction between the physical and virtual environments through the Perceptive Workbench. We applied the system to an augmented reality game and a terrain navigation system. The Perceptive Workbench can reconstruct 3D virtual representations of previously unseen real-world objects placed on its surface. In addition, it identifies and tracks such objects as they are manipulated on the desk's surface and allows the user to interact with the augmented environment through 2D and 3D gestures.


IEEE Visualization | 1998

Battlefield visualization on the responsive workbench

Jim Durbin; J. Edward Swan; Brad Colbert; John Crowe; Rob King; Tony King; Christopher Scannell; Zachary Wartell; Terry Welsh

In this paper we describe a battlefield visualization system, called Dragon, which we have implemented on a virtual reality responsive workbench. The Dragon system has been successfully deployed as part of two large military exercises: the Hunter Warrior advanced warfighting experiment, in March 1997, and the Joint Counter Mine advanced concept tactical demonstration, in August and September 1997. We describe battlefield visualization, the Dragon system, and the workbench, and we describe our experiences as part of these two real-world deployments, with an emphasis on lessons learned and needed future work.


International Conference on Computer Graphics and Interactive Techniques | 1999

Balancing fusion, image depth and distortion in stereoscopic head-tracked displays

Zachary Wartell; Larry F. Hodges; William Ribarsky

Stereoscopic display is a fundamental part of virtual reality HMD systems and HTD (head-tracked display) systems such as the virtual workbench and the CAVE. A common practice in stereoscopic systems is deliberate incorrect modeling of user eye separation. Underestimating eye separation is frequently necessary for the human visual system to fuse stereo image pairs into single 3D images, while overestimating eye separation enhances image depth. Unfortunately, false eye separation modeling also distorts the perceived 3D image in undesirable ways. This paper makes three fundamental contributions to understanding and controlling this stereo distortion. (1) We analyze the distortion using a new analytic description. This analysis shows that even with perfect head tracking, a user will perceive virtual objects to warp and shift as she moves her head. (2) We present a new technique for counteracting the shearing component of the distortion. (3) We present improved methods for managing image fusion problems for distant objects and for enhancing the depth of flat scenes.

CR Categories and Subject Descriptors: I.3.7 [Computer Graphics] Three-Dimensional Graphics and Realism – Virtual Reality; I.3.6 [Computer Graphics] Methodology and Techniques – Ergonomics; I.3.6 [Computer Graphics] Methodology and Techniques – Interaction Techniques; I.3.3 [Computer Graphics] Picture/Image Generation – Viewing Algorithms

Additional Keywords: virtual reality, stereoscopic display, head-tracking, image distortion
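
The effect of false eye-separation modeling is easy to see numerically. The sketch below is a simplified similar-triangles model, not the paper's analytic description: it renders a point with an underestimated eye separation, then recovers the depth the true eyes would perceive.

```python
# Simplified similar-triangles illustration (not the paper's analysis) of
# how underestimating eye separation compresses perceived depth.

def parallax(z, eye_sep, view_dist):
    """Screen parallax of a point z behind the projection plane for an
    observer view_dist in front of it."""
    return eye_sep * z / (view_dist + z)

def perceived_depth(p, true_eye_sep, view_dist):
    """Depth at which eyes with the true separation fuse parallax p."""
    return p * view_dist / (true_eye_sep - p)

D = 0.60          # metres from eyes to screen (assumed)
true_e = 0.065    # typical human interocular distance
model_e = 0.045   # deliberately underestimated eye separation

for z in (0.1, 0.5, 2.0):                   # virtual depths behind screen
    p = parallax(z, model_e, D)
    print(f"virtual {z:4.1f} m -> perceived {perceived_depth(p, true_e, D):5.2f} m")
```

Depth is compressed everywhere, but by a depth-dependent amount, which is why the paper treats the distortion analytically rather than as a simple rescaling.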


IEEE Transactions on Visualization and Computer Graphics | 2008

Multi-Focused Geospatial Analysis Using Probes

Thomas Butkiewicz; Wenwen Dou; Zachary Wartell; William Ribarsky; Remco Chang

Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions of interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.
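
A minimal sketch of the probe idea, using invented names and a toy record format rather than the authors' system: each probe owns a user-defined region of interest and derives a local summary from the shared data, so several probes can be placed and compared side by side.

```python
# Illustrative sketch (assumed names, not the authors' code) of a probe:
# it filters shared geospatial records to its own region of interest and
# summarizes them; a real probe would drive a coordinated visualization.
from dataclasses import dataclass

@dataclass
class Probe:
    name: str
    cx: float      # region-of-interest center (map units)
    cy: float
    radius: float

    def local_view(self, records):
        """Summarize (x, y, value) records inside this probe's region."""
        inside = [v for x, y, v in records
                  if (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.radius ** 2]
        return {"n": len(inside),
                "mean": sum(inside) / len(inside) if inside else None}

records = [(0.1, 0.2, 3.0), (0.15, 0.22, 5.0), (5.0, 5.0, 100.0)]
downtown = Probe("downtown", 0.1, 0.2, 0.5)
suburb = Probe("suburb", 5.0, 5.0, 0.5)
print(downtown.local_view(records), suburb.local_view(records))
```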


IEEE Virtual Reality Conference | 1999

Third-person navigation of whole-planet terrain in a head-tracked stereoscopic environment

Zachary Wartell; William Ribarsky; Larry F. Hodges

Navigation and interaction in virtual environments that use stereoscopic head-tracked displays and have very large data sets present several challenges beyond those encountered with smaller data sets and simpler displays. First, zooming by approaching or retreating from a target must be augmented by integrating scale as a seventh degree of freedom. Second, in order to maintain good stereoscopic imagery, the interface must: maintain stereo image pairs that the user perceives as a single 3D image, minimize loss of perceived depth since stereoscopic imagery cannot properly occlude the screen's frame, provide maximum depth information, and place objects at distances where they are best manipulated. Finally, the navigation interface must work when the environment is displayed at any scale. This paper addresses these problems for god's-eye-view, or third-person, navigation of a specific large-scale virtual environment: a high-resolution terrain database covering an entire planet.
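
The seventh-degree-of-freedom idea can be illustrated with a small transform sketch: zooming becomes a uniform scale of the world about a fixation point, which leaves the point of interest exactly where the user sees it. The function and matrix convention below are assumptions for illustration, not the paper's interface code.

```python
# Minimal sketch of scale as an extra navigation degree of freedom:
# compose a uniform scale about a fixation point onto the world transform
# so that point stays put while everything else zooms. Illustrative only.
import numpy as np

def scale_about_point(world_from_model, s, p):
    """Compose a uniform scale s about fixation point p (3-vector)
    onto a 4x4 world-from-model matrix."""
    T = np.eye(4);    T[:3, 3] = p        # move origin back to p
    S = np.diag([s, s, s, 1.0])           # uniform scale about origin
    Tinv = np.eye(4); Tinv[:3, 3] = -p    # move p to origin
    return T @ S @ Tinv @ world_from_model

M = np.eye(4)
p = np.array([10.0, 0.0, 0.0])            # point under the cursor
M2 = scale_about_point(M, 0.5, p)         # zoom out by 2x about p
print(M2 @ np.array([10.0, 0.0, 0.0, 1.0]))  # p is unchanged: [10 0 0 1]
```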


IEEE Computer Graphics and Applications | 2008

Legible Simplification of Textured Urban Models

Remco Chang; Thomas Butkiewicz; Caroline Ziemkiewicz; Zachary Wartell; William Ribarsky; Nancy S. Pollard

Most of the algorithms used for research in mesh simplification and discrete levels of detail (LOD) work well for simplifying single objects with a large number of polygons. For a city-sized collection of simple buildings, using these traditional algorithms could mean the disappearance of an entire residential area in which the buildings tend to be smaller than those in commercial regions. To solve this problem, we developed a mesh-simplification algorithm that incorporates concepts from architecture and city planning. Specifically, we rely on the concept of urban legibility, which segments a city into paths, edges, districts, nodes, and landmarks. If we preserve these elements of legibility during the simplification process, we can maintain the city's image and create urban models that users can understand more effectively. To accomplish this goal, we divide our algorithm into five steps. During preprocessing, it performs hierarchical clustering, cluster merging, model simplification, and hierarchical texturing; at runtime, it employs LOD to select the appropriate models for rendering.
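
The runtime step can be sketched as a recursive walk over the cluster hierarchy: draw a merged cluster model when it is small on screen, recurse to its children otherwise, so smaller residential districts stay legible instead of vanishing wholesale. The data structure, screen-size heuristic, and thresholds below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (assumed structures, not the paper's code) of runtime LOD
# selection over a cluster hierarchy of simplified, textured city models.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    model: str                 # simplified, textured stand-in geometry
    size: float                # world-space extent of the cluster
    children: list = field(default_factory=list)

def select_lod(node, dist_to_viewer, pixels_per_unit=800.0, threshold=40.0):
    """Return the list of models to render for this frame."""
    screen_size = node.size * pixels_per_unit / max(dist_to_viewer, 1e-6)
    if screen_size < threshold or not node.children:
        return [node.model]                      # merged model is enough
    out = []
    for child in node.children:
        out += select_lod(child, dist_to_viewer, pixels_per_unit, threshold)
    return out

city = Cluster("district", 500.0, [Cluster("block_a", 80.0),
                                   Cluster("block_b", 90.0)])
print(select_lod(city, 12000.0))   # far away -> ['district']
print(select_lod(city, 500.0))     # close    -> ['block_a', 'block_b']
```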


IEEE Transactions on Visualization and Computer Graphics | 2002

A geometric comparison of algorithms for fusion control in stereoscopic HTDs

Zachary Wartell; Larry F. Hodges; William Ribarsky

This paper concerns stereoscopic virtual reality displays in which the head is tracked and the display is stationary, attached to a desk, tabletop, or wall. These are called stereoscopic HTDs (head-tracked displays). Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally, the user's natural visual system combines the stereo image pair into a single, 3D perceived image. Unfortunately, users often have difficulty fusing the stereo image pair. Researchers use a number of software techniques to reduce fusion problems. This paper geometrically examines and compares a number of these techniques and reaches the following conclusions: In interactive stereoscopic applications, the combination of view placement, scale, and either false eye separation or α-false eye separation can provide fusion control that is geometrically similar to image shifting and image scaling. However, in stereo HTDs, image shifting and image scaling also generate additional geometric artifacts that are not generated by the other methods. We anecdotally link some of these artifacts to exceeding the perceptual limitations of human vision. While formal perceptual studies are still needed, geometric analysis suggests that image shifting and image scaling may be less appropriate than the other methods for interactive, stereo HTDs.
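
The geometric difference between these technique families can be seen with a few numbers: false eye separation scales each point's screen parallax by a constant factor, while image shifting subtracts a constant offset, so the two act differently on near versus distant points. The similar-triangles model and values below are my illustration, not the paper's derivation.

```python
# Small numeric sketch (illustrative, not the paper's analysis) contrasting
# two fusion-control techniques: false eye separation rescales parallax
# proportionally; image shifting moves it by a constant amount.

def parallax(z, e=0.065, D=0.6):
    """Screen parallax of a point z behind the screen plane, eye
    separation e, viewing distance D (similar-triangles model)."""
    return e * z / (D + z)

for z in (0.2, 1.0, 5.0):
    p = parallax(z)
    scaled = parallax(z, e=0.045)     # false eye separation: p * (45/65)
    shifted = p - 0.010               # image shifting by a constant 10 mm
    print(f"z={z:3.1f}  p={p*1000:5.1f}mm  false-sep={scaled*1000:5.1f}mm  "
          f"shifted={shifted*1000:5.1f}mm")
```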


Symposium on 3D User Interfaces | 2007

Two Handed Selection Techniques for Volumetric Data

Amy Catherine Ulinski; Catherine A. Zanbaka; Zachary Wartell; Paula Goolkasian; Larry F. Hodges

We developed three distinct two-handed selection techniques for volumetric data visualizations that use splat-based rendering. Two techniques are bimanual asymmetric, where each hand has a different task. One technique is bimanual symmetric, where each hand has the same task. These techniques were then evaluated based on accuracy, completion times, TLX workload assessment, overall comfort and fatigue, ease of use, and ease of learning. Our results suggest that the bimanual asymmetric selection techniques are best used when performing gross selection for potentially long periods of time and for cognitively demanding tasks. However, when optimum accuracy is needed, the bimanual symmetric technique was best for selection.
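
As an illustration of the bimanual symmetric case, where both hands perform the same task, the sketch below lets two tracked hand positions define opposite corners of an axis-aligned selection box over point samples. The API and the box-shaped selection volume are assumptions for illustration, not the study's implementation.

```python
# Illustrative sketch (assumed API, not the study's code) of bimanual
# symmetric selection: each hand grips one corner of an axis-aligned box.
import numpy as np

def symmetric_selection(left_hand, right_hand):
    """Two tracked hand positions define opposite corners of a box."""
    lo = np.minimum(left_hand, right_hand)
    hi = np.maximum(left_hand, right_hand)
    return lo, hi

def select_points(points, lo, hi):
    """Return the splat/point samples inside the box."""
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

points = np.random.default_rng(0).uniform(-1, 1, size=(1000, 3))
lo, hi = symmetric_selection(np.array([-0.2, -0.2, -0.2]),
                             np.array([0.3, 0.4, 0.1]))
print(len(select_points(points, lo, hi)), "points selected")
```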


IEEE VGTC Conference on Visualization | 2008

Visual analysis and semantic exploration of urban LIDAR change detection

Thomas Butkiewicz; Remco Chang; Zachary Wartell; William Ribarsky

Many previous approaches to detecting urban change from LIDAR point clouds interpolate the points into rasters, perform pixel-based image processing to detect changes, and produce 2D images as output. We present a method of LIDAR change detection that maintains accuracy by only using the raw, irregularly spaced LIDAR points, and extracts relevant changes as individual 3D models. We then utilize these models, alongside existing GIS data, within an interactive application that allows the chronological exploration of the changes to an urban environment. A three-tiered level-of-detail system maintains a scale-appropriate, legible visual representation across the entire range of view scales, from individual changes such as buildings and trees, to groups of changes such as new residential developments, deforestation, and construction sites, and finally to larger regions such as neighborhoods and districts of a city that are emerging or undergoing revitalization. Tools are provided to assist the visual analysis by urban planners and historians through semantic categorization and filtering of the changes presented.
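
A heavily simplified sketch of raster-free change detection on raw points: flag a new-survey point as changed when no old-survey point lies within a tolerance, avoiding the resampling loss of raster-based approaches. The tolerance, synthetic data, and nearest-neighbor formulation are illustrative assumptions; the paper's pipeline additionally clusters changes into individual 3D models.

```python
# Hedged sketch (my simplification, not the paper's pipeline) of change
# detection directly on raw, irregularly spaced LIDAR points.
import numpy as np
from scipy.spatial import cKDTree

def changed_points(old_pts, new_pts, tol=1.0):
    """Return new-survey points with no old-survey neighbor within tol
    (metres). Grouping these into 3D models would be a later stage."""
    dist, _ = cKDTree(old_pts).query(new_pts, k=1)
    return new_pts[dist > tol]

rng = np.random.default_rng(1)
old = rng.uniform(0, 100, size=(5000, 3))
new = np.vstack([old + rng.normal(0, 0.05, old.shape),        # unchanged ground
                 rng.uniform(0, 100, size=(50, 3)) + [0, 0, 200]])  # new structure
print(len(changed_points(old, new)), "changed points")        # ~50
```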

Collaboration


Dive into Zachary Wartell's collaborations.

Top Co-Authors

William Ribarsky
Georgia Institute of Technology

Larry F. Hodges
Georgia Institute of Technology

Isaac Cho
University of North Carolina at Charlotte

Thomas Butkiewicz
University of North Carolina at Charlotte

Evan A. Suma
University of Southern California

Jialei Li
University of North Carolina at Charlotte

Wenwen Dou
University of North Carolina at Charlotte

Xiaoyu Wang
University of North Carolina at Charlotte