Publication


Featured research published by Kevin Ponto.


Central European Journal of Engineering | 2011

The future of the CAVE

Thomas A. DeFanti; Daniel Acevedo; Richard A. Ainsworth; Maxine D. Brown; Steven Matthew Cutchin; Gregory Dawe; Kai Doerr; Andrew E. Johnson; Chris Knox; Robert Kooima; Falko Kuester; Jason Leigh; Lance Long; Peter Otto; Vid Petrovic; Kevin Ponto; Andrew Prudhomme; Ramesh R. Rao; Luc Renambot; Daniel J. Sandin; Jürgen P. Schulze; Larry Smarr; Madhu Srinivasan; Philip Weber; Gregory Wickham

The CAVE, a walk-in virtual reality environment typically consisting of four to six 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, run at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.


IEEE Transactions on Visualization and Computer Graphics | 2013

Perceptual Calibration for Immersive Display Environments

Kevin Ponto; Michael Gleicher; Robert G. Radwin; Hyun Joon Shin

The perception of objects, depth, and distance has been repeatedly shown to be divergent between virtual and physical environments. We hypothesize that many of these discrepancies stem from incorrect geometric viewing parameters, specifically that physical measurements of eye position are insufficiently precise to provide proper viewing parameters. In this paper, we introduce a perceptual calibration procedure derived from geometric models. While most research has used geometric models to predict perceptual errors, we instead use these models inversely to determine perceptually correct viewing parameters. We study the advantages of these new psychophysically determined viewing parameters compared to the commonly used measured viewing parameters in an experiment with 20 subjects. The perceptually calibrated viewing parameters for the subjects generally produced new virtual eye positions that were wider and deeper than standard practices would estimate. Our study shows that perceptually calibrated viewing parameters can significantly improve depth acuity, distance estimation, and the perception of shape.
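
The paper's inverse use of geometric models can be illustrated with a one-dimensional toy version (our own construction, not the published procedure): model the depth at which the viewer's actual eye rays converge for imagery rendered with an assumed eye separation, then search for the separation that best explains the viewer's depth matches.

```python
# Toy 1D inverse calibration; all names and numbers are illustrative.
import numpy as np

def perceived_depth(z_target, ipd_render, ipd_true, screen_d):
    """Depth where the true eyes' rays converge for a midline point rendered
    at z_target with eye separation ipd_render (eyes at z=0, screen at z=screen_d)."""
    # Screen x of the right-eye image of the point.
    x_screen = (ipd_render / 2.0) * (z_target - screen_d) / z_target
    # Ray from the *true* right eye through that screen point, crossed with x=0.
    return screen_d * (ipd_true / 2.0) / (ipd_true / 2.0 - x_screen)

screen_d = 2.0                                   # metres to the display surface
z_targets = np.array([1.0, 1.5, 2.5, 3.0, 4.0])  # intended depths
# Pretend these were the user's psychophysical matches: they behave as if the
# eyes were 6 mm wider apart than the tape measure said.
z_matched = perceived_depth(z_targets, ipd_render=0.060, ipd_true=0.066,
                            screen_d=screen_d)

# Inverse step: find the effective eye separation that explains the matches.
candidates = np.linspace(0.050, 0.080, 601)
sse = [np.sum((perceived_depth(z_targets, 0.060, ipd, screen_d) - z_matched) ** 2)
       for ipd in candidates]
print(f"calibrated separation ≈ {candidates[int(np.argmin(sse))]:.4f} m")  # ≈ 0.066
```

Rendering with the fitted separation instead of the measured one is the sense in which the calibrated eye positions come out wider (and, with an added depth parameter, deeper) than standard practice estimates.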


Future Generation Computer Systems | 2011

CGLXTouch: A multi-user multi-touch approach for ultra-high-resolution collaborative workspaces

Kevin Ponto; Kai Doerr; Tom Wypych; John Kooker; Falko Kuester

This paper presents an approach for empowering collaborative workspaces through ultra-high-resolution tiled display environments concurrently interfaced with multiple multi-touch devices. Multi-touch table devices are supported along with portable multi-touch tablet and phone devices, which can be added to and removed from the system on the fly. Events from these devices are tagged with a device identifier and are synchronized with the distributed display environment, enabling multi-user support. As many portable devices are not equipped to render content directly, a remotely rendered scene is streamed to them instead. The presented approach scales to large numbers of devices, providing access to a multitude of hands-on techniques for collaborative data analysis.
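
A minimal sketch of the device-tagged, synchronized event stream such a system needs; the wire format, addresses, and field names here are our assumptions, not the CGLX protocol.

```python
# Hypothetical event broadcast from one multi-touch device to display nodes.
import json
import socket
import time
import uuid

DEVICE_ID = uuid.uuid4().hex[:8]   # assigned when the device joins on the fly
CLUSTER = [("10.0.0.11", 9000), ("10.0.0.12", 9000)]  # assumed display nodes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_touch(touch_id, phase, x, y):
    """Broadcast one touch sample; x, y are normalised to the whole display
    wall so tables, tablets, and phones share a single coordinate space."""
    event = {
        "device": DEVICE_ID,   # distinguishes concurrent users' gestures
        "touch": touch_id,     # stable per finger while it stays down
        "phase": phase,        # "down" | "move" | "up"
        "x": x, "y": y,
        "t": time.time(),      # lets nodes apply events in a common order
    }
    payload = json.dumps(event).encode()
    for node in CLUSTER:
        sock.sendto(payload, node)

send_touch(touch_id=0, phase="down", x=0.42, y=0.77)
```

The device tag is what enables multi-user support; the timestamp gives the distributed display nodes a shared ordering when several devices stream at once.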


Future Generation Computer Systems | 2010

Giga-stack: A method for visualizing giga-pixel layered imagery on massively tiled displays

Kevin Ponto; Kai Doerr; Falko Kuester

In this paper, we present a technique for the interactive visualization and interrogation of multi-dimensional giga-pixel imagery. Co-registered image layers representing discrete spectral wavelengths or temporal information can be seamlessly displayed and fused. Users can freely pan and zoom, while swiftly transitioning through data layers, enabling intuitive analysis of massive multi-spectral or time-varying records. A data-resource-aware display paradigm is introduced which progressively and adaptively loads data from remote network-attached storage devices. The technique is specifically designed to work with scalable, high-resolution, massively tiled display environments. By displaying hundreds of mega-pixels worth of visual information all at once, several users can simultaneously compare and contrast complex data layers in a collaborative environment.
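
The level-of-detail and progressive-loading idea can be sketched as follows; the tile size, pyramid layout, and image dimensions are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical tile pyramid: level MAX_LEVEL is full resolution and each
# coarser level halves it.
import math

TILE = 256                            # tile edge in pixels (assumed)
FULL_W, FULL_H = 400_000, 300_000     # example giga-pixel layer dimensions
MAX_LEVEL = math.ceil(math.log2(max(FULL_W, FULL_H) / TILE))

def visible_tiles(cx, cy, zoom, view_w, view_h):
    """Tiles covering a viewport. cx, cy: centre in source pixels; zoom:
    screen pixels per source pixel; view_w, view_h: viewport in screen pixels."""
    # Pick the pyramid level whose resolution best matches the screen.
    level = max(0, min(MAX_LEVEL, MAX_LEVEL + math.floor(math.log2(zoom))))
    s = 2.0 ** (level - MAX_LEVEL)    # source pixels -> level pixels
    half_w, half_h = view_w / zoom / 2, view_h / zoom / 2
    cmax = math.ceil(FULL_W * s / TILE) - 1
    rmax = math.ceil(FULL_H * s / TILE) - 1
    c0 = max(math.floor((cx - half_w) * s / TILE), 0)
    c1 = min(math.floor((cx + half_w) * s / TILE), cmax)
    r0 = max(math.floor((cy - half_h) * s / TILE), 0)
    r1 = min(math.floor((cy + half_h) * s / TILE), rmax)
    return [(level, c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

def progressive_order(tiles):
    """Resource-aware ordering: request coarse ancestors before fine tiles so
    something is always on screen while detail streams from remote storage."""
    seen, order = set(), []
    for level, c, r in tiles:
        for lvl in range(level + 1):                      # coarse -> fine
            key = (lvl, c >> (level - lvl), r >> (level - lvl))
            if key not in seen:
                seen.add(key)
                order.append(key)
    return order

tiles = visible_tiles(cx=200_000, cy=150_000, zoom=0.01,
                      view_w=8192, view_h=4320)
print(progressive_order(tiles)[:6])
```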


Virtual Reality | 2006

Virtual Bounds: a teleoperated mixed reality

Kevin Ponto; Falko Kuester; Robert Nideffer; Simon Penny

This paper introduces a mixed reality workspace that allows users to combine physical and computer-generated artifacts, and to control and simulate them within one fused world. All interactions are captured, monitored, modeled and represented with pseudo-real world physics. The objective of the presented research is to create a novel system in which the virtual and physical world would have a symbiotic relationship. In this type of system, virtual objects can impose forces on the physical world and physical world objects can impose forces on the virtual world. Virtual Bounds is an exploratory study allowing a physical probe to navigate a virtual world while observing constraints, forces, and interactions from both worlds. This scenario provides the user with the ability to create a virtual environment and to learn to operate real-life probes through its virtual terrain.
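
A standard way to realize this kind of two-way coupling is a spring-damper "virtual coupling" between the tracked probe and its proxy in the simulation; the sketch below is that generic technique with illustrative constants, not the paper's implementation.

```python
# Generic spring-damper coupling between a physical probe and its virtual proxy.
import numpy as np

K, B, DT = 400.0, 5.0, 1.0 / 60.0      # spring, damper, timestep (assumed)
WALL_X = 1.0                            # a virtual boundary the proxy respects

proxy_pos = np.zeros(2)                 # the probe's proxy in the simulation
proxy_vel = np.zeros(2)

def step(probe_pos):
    """One frame: pull the proxy toward the measured probe pose, let the
    virtual world constrain the proxy, and return the reaction force that
    the probe's actuators should render back to the user."""
    global proxy_pos, proxy_vel
    force = K * (probe_pos - proxy_pos) - B * proxy_vel   # physical -> virtual
    proxy_vel = proxy_vel + force * DT                    # unit mass
    proxy_pos = proxy_pos + proxy_vel * DT
    if proxy_pos[0] > WALL_X:           # the virtual wall stops the proxy
        proxy_pos[0] = WALL_X
        proxy_vel[0] = 0.0
    return -force                       # virtual -> physical

probe = np.array([1.2, 0.0])            # probe held past the virtual wall
for _ in range(120):                     # two simulated seconds
    feedback = step(probe)
print("proxy:", proxy_pos, "feedback force:", feedback)  # proxy pinned at wall
```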


ieee virtual reality conference | 2017

Uni-CAVE: A Unity3D plugin for non-head mounted VR display systems

Ross Tredinnick; Brady Boettcher; Simon Smith; Sam Solovy; Kevin Ponto

Unity3D has become a popular, freely available 3D game engine for the design and construction of virtual environments. Unfortunately, the few options that currently exist for adapting Unity3D to distributed immersive tiled or projection-based VR display systems rely on closed commercial products. Uni-CAVE aims to solve this problem with a freely available, easy-to-use Unity3D extension package for cluster-based VR display systems. The extension provides support for head and device tracking, stereo rendering, and display synchronization. Furthermore, Uni-CAVE can be configured entirely within the Unity environment, enabling researchers to get up and running quickly.
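
The core per-display math in any such system is an off-axis projection from the tracked eye through a fixed physical screen, in the style of the generalized perspective projection; the sketch below shows that standard construction (in Python for brevity, since Uni-CAVE itself is a Unity package; variable names are ours, not Uni-CAVE's).

```python
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near, far):
    """Frustum for a tracked eye and a screen given by its lower-left (pa),
    lower-right (pb), and upper-left (pc) corners in tracker coordinates.
    (The accompanying screen-rotation and eye-translation matrices of the
    full generalized perspective projection are omitted for brevity.)"""
    vr = pb - pa; vr = vr / np.linalg.norm(vr)            # screen right
    vu = pc - pa; vu = vu / np.linalg.norm(vu)            # screen up
    vn = np.cross(vr, vu); vn = vn / np.linalg.norm(vn)   # toward the viewer
    va, vb, vc = pa - eye, pb - eye, pc - eye             # eye -> corners
    d = -np.dot(va, vn)                                   # eye-screen distance
    l = np.dot(vr, va) * near / d                         # near-plane extents
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])

# A 3 m x 3 m front wall with the tracked eye slightly left of centre:
pa, pb, pc = (np.array([-1.5, 0.0, -1.5]), np.array([1.5, 0.0, -1.5]),
              np.array([-1.5, 3.0, -1.5]))
print(off_axis_projection(np.array([-0.2, 1.7, 0.0]), pa, pb, pc, 0.1, 100.0))
```

Calling this once per tracked eye gives stereo rendering, and calling it once per physical screen on each cluster node, with buffer swaps in lockstep, is what display synchronization coordinates.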


Journal of Biomedical Informatics | 2015

Virtualizing living and working spaces

Patricia Flatley Brennan; Kevin Ponto; Gail R. Casper; Ross Tredinnick; Markus Broecker

The physical spaces within which the work of health occurs - the home, the intensive care unit, the emergency room, even the bedroom - influence the manner in which behaviors unfold, and may contribute to efficacy and effectiveness of health interventions. Yet the study of such complex workspaces is difficult. Health care environments are complex, chaotic workspaces that do not lend themselves to the typical assessment approaches used in other industrial settings. This paper provides two methodological advances for studying internal health care environments: a strategy to capture salient aspects of the physical environment and a suite of approaches to visualize and analyze that physical environment. We used a Faro™ laser scanner to obtain point cloud data sets of the internal aspects of home environments. The point cloud enables precise measurement, including the location of physical boundaries and object perimeters, color, and light, in an interior space that can be translated later for visualization on a variety of platforms. The work was motivated by vizHOME, a multi-year program to intensively examine the home context of personal health information management in a way that minimizes repeated, intrusive, and potentially disruptive in vivo assessments. Thus, we illustrate how to capture, process, display, and analyze point clouds using the home as a specific example of a health care environment. Our work presages a time when emerging technologies facilitate inexpensive capture and efficient management of point cloud data, thus enabling visual and analytical tools for enhanced discharge planning, new insights for designers of consumer-facing clinical informatics solutions, and a robust approach to context-based studies of health-related work environments.
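
A small sketch of why point clouds suit this purpose: once captured, they support both aggressive data reduction and precise measurement. The voxel-grid downsample below is a common generic technique shown on synthetic stand-in data, not the vizHOME pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a scanned 4 m x 5 m x 2.4 m room (coordinates in metres).
xyz = rng.uniform([0, 0, 0], [4.0, 5.0, 2.4], size=(200_000, 3))

def voxel_downsample(points, voxel=0.05):
    """Keep one representative point per 5 cm voxel: a simple way to make a
    dense laser scan manageable for interactive display."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

small = voxel_downsample(xyz)
print(len(xyz), "->", len(small), "points")

# The full-resolution cloud still supports precise measurement, e.g. extents
# of the captured space for discharge-planning layouts:
print("bounds (m):", xyz.min(axis=0), xyz.max(axis=0))
```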


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Influence of altered visual feedback on neck movement for a virtual reality rehabilitative system

Karen B. Chen; Kevin Ponto; Mary E. Sesto; Robert G. Radwin

This paper investigates altering visual feedback during neck movement through control-display (C-D) gain for a head-mounted display, with the goal of determining the just noticeable difference (JND) so that individuals with kinesiophobia (i.e., fear avoidance of movement due to chronic pain) can be encouraged to effectively perform therapeutic neck exercises. The JND was defined as a 0.25 probability of detecting a difference from unity C-D gain (gain = 1). A target-aiming task with two consecutive neck moves per trial was presented; one neck move had varying C-D gain and the other had unity gain. The VR system was able to influence neck movements without changing the locations of the target. Participants indicated whether the two neck movements were the same or different. Logistic regression revealed JND gains of 0.903 (lower bound) and 1.159 (upper bound): participants could not discriminate turns ranging from 49.7° to 63.7° from a 55° turn. This preliminary study shows that immersive VR with altered visual feedback influenced movement. The feasibility for rehabilitation of individuals with kinesiophobia will be assessed next.
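
How a JND falls out of such data can be sketched with synthetic same/different judgements and a plain logistic fit; the numbers and the gradient-ascent fit below are illustrative, not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
gains = rng.uniform(1.0, 1.4, n)              # tested C-D gains above unity
x = gains - 1.0                                # centred at unity gain
p_true = 1 / (1 + np.exp(-(18 * x - 4.0)))     # assumed psychometric curve
detected = rng.random(n) < p_true              # simulated "different" responses

# Fit p(detect) = sigmoid(a * x + b) by gradient ascent on the log-likelihood.
a, b = 0.0, 0.0
for _ in range(50_000):
    p = 1 / (1 + np.exp(-(a * x + b)))
    a += 3e-4 * np.sum((detected - p) * x)
    b += 3e-4 * np.sum(detected - p)

# JND: the gain detected with probability 0.25, per the paper's definition.
jnd = 1.0 + (np.log(0.25 / 0.75) - b) / a
print(f"upper JND gain ≈ {jnd:.3f} (e.g. 55° x {jnd:.3f} ≈ {55 * jnd:.1f}°)")
```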


Human Factors | 2014

Manually locating physical and virtual reality objects.

Karen B. Chen; Ryan A. Kimmel; Aaron Bartholomew; Kevin Ponto; Michael Gleicher; Robert G. Radwin

Objective: In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand, to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Background: Virtual reality (VR) offers the promise of flexibly simulating arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Method: Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object location. Results: Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Conclusion: Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting the possible influence of cues other than binocular vision. Application: Physical interaction with virtual objects in a CAVE for simulation, training, and prototyping, involving reaching for and manually handling those objects, is more accurate than predicted when locating farther objects.
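
The flavor of such a geometric prediction can be sketched directly: project a target for the modelled eye positions, then intersect the rays the true eyes cast through those on-screen points (toy parameters of our choosing, not the study's exact model).

```python
import numpy as np

def screen_point(eye, target, screen_z):
    """Where the line from eye to target pierces the screen plane z = screen_z."""
    t = (screen_z - eye[2]) / (target[2] - eye[2])
    return eye + t * (target - eye)

def perceived(target, eyes_render, eyes_true, screen_z):
    """Midpoint of closest approach of the true-eye rays through the screen
    points rendered for the modelled eyes (assumes the rays converge)."""
    sL = screen_point(eyes_render[0], target, screen_z)
    sR = screen_point(eyes_render[1], target, screen_z)
    p, u = eyes_true[0], sL - eyes_true[0]
    q, v = eyes_true[1], sR - eyes_true[1]
    w = p - q
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w, v @ w
    denom = a * c - b * b
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return ((p + s * u) + (q + t * v)) / 2

screen_z = -1.5                                        # CAVE wall 1.5 m ahead
target = np.array([0.1, 0.0, -1.0])                    # a virtual box corner
eyes_render = [np.array([-0.030, 0.0, 0.0]), np.array([0.030, 0.0, 0.0])]
eyes_true = [np.array([-0.033, 0.0, 0.0]), np.array([0.033, 0.0, 0.0])]
err = perceived(target, eyes_render, eyes_true, screen_z) - target
print("predicted targeting error (m):", err)           # deeper than intended
```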


IEEE Transactions on Visualization and Computer Graphics | 2012

Effective Replays and Summarization of Virtual Experiences

Kevin Ponto; Joe Kohlmann; Michael Gleicher

Direct replay of the experience of a user in a virtual environment is difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that can be used to identify similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that can encapsulate a series of views. These resulting encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the initial movement of the user back onto the scene can be used to convey the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally, we present performance analysis along with two forms of validation to test whether the extracted viewpoints are representative of the viewer's original observations and to test the overall effectiveness of the presented replay methods.
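
The idea of a content-dependent viewpoint metric can be sketched by comparing which scene samples two viewpoints actually see and grouping the path wherever overlap stays high; the Jaccard overlap and cone "frustum" below are a toy stand-in for the paper's metric.

```python
import numpy as np

rng = np.random.default_rng(2)
scene = rng.uniform(-10, 10, size=(2000, 3))   # samples of the scene geometry

def visible(pos, forward, fov_deg=50.0):
    """Boolean mask of scene samples inside a simple viewing cone."""
    d = scene - pos
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return d @ forward > np.cos(np.radians(fov_deg))

def similarity(view_a, view_b):
    """Jaccard overlap of the content the two viewpoints see."""
    va, vb = visible(*view_a), visible(*view_b)
    union = np.sum(va | vb)
    return np.sum(va & vb) / union if union else 0.0

# A toy egocentric path: looking along +x for a while, then turning to +y.
path = [(np.zeros(3), np.array([1.0, 0.0, 0.0]))] * 30 \
     + [(np.zeros(3), np.array([0.0, 1.0, 0.0]))] * 30

# Greedy grouping: start a new summary viewpoint when content overlap drops.
groups, start = [], 0
for i in range(1, len(path)):
    if similarity(path[start], path[i]) < 0.5:
        groups.append((start, i - 1))
        start = i
groups.append((start, len(path) - 1))
print("representative frames:", [(a + b) // 2 for a, b in groups])
```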

Collaboration


Dive into Kevin Ponto's collaborations.

Top Co-Authors

Falko Kuester, University of California
Ross Tredinnick, University of Wisconsin-Madison
Robert G. Radwin, University of Wisconsin-Madison
Karen B. Chen, University of Wisconsin-Madison
Alex Peer, University of Wisconsin-Madison
Gail R. Casper, University of Wisconsin-Madison
Joe Kohlmann, University of Wisconsin-Madison
Eric Hoyt, University of Wisconsin-Madison
Kai Doerr, University of California
Markus Broecker, University of Wisconsin-Madison