Publications


Featured research published by Ross Tredinnick.


IEEE Virtual Reality Conference | 2017

Uni-CAVE: A Unity3D plugin for non-head mounted VR display systems

Ross Tredinnick; Brady Boettcher; Simon Smith; Sam Solovy; Kevin Ponto

Unity3D has become a popular, freely available 3D game engine for the design and construction of virtual environments. Unfortunately, the few options that currently exist for adapting Unity3D to distributed immersive tiled or projection-based VR display systems rely on closed commercial products. Uni-CAVE aims to solve this problem with a freely available, easy-to-use Unity3D extension package for cluster-based VR display systems. The extension provides support for head and device tracking, stereo rendering, and display synchronization. Furthermore, Uni-CAVE can be configured entirely within the Unity environment, allowing researchers to get up and running quickly.
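
As an illustration of the projection math any such cluster-based system must perform, the sketch below derives the off-axis projection matrix for one display wall from its corner positions and the tracked head position, using Kooima's generalized perspective projection. This is a minimal Python/NumPy sketch under stated assumptions, not Uni-CAVE's actual C# source; all names are illustrative.

    import numpy as np

    def frustum(l, r, b, t, n, f):
        # OpenGL-style asymmetric perspective frustum matrix.
        return np.array([
            [2 * n / (r - l), 0.0, (r + l) / (r - l), 0.0],
            [0.0, 2 * n / (t - b), (t + b) / (t - b), 0.0],
            [0.0, 0.0, -(f + n) / (f - n), -2 * f * n / (f - n)],
            [0.0, 0.0, -1.0, 0.0],
        ])

    def wall_projection(pa, pb, pc, eye, near=0.1, far=100.0):
        # Off-axis projection for one wall with corners pa (lower-left),
        # pb (lower-right), pc (upper-left), viewed from the tracked eye
        # position; all positions in meters, world coordinates.
        vr = (pb - pa) / np.linalg.norm(pb - pa)   # wall "right" axis
        vu = (pc - pa) / np.linalg.norm(pc - pa)   # wall "up" axis
        vn = np.cross(vr, vu)                      # wall normal, toward viewer
        d = -np.dot(pa - eye, vn)                  # eye-to-wall distance
        l = np.dot(vr, pa - eye) * near / d
        r = np.dot(vr, pb - eye) * near / d
        b = np.dot(vu, pa - eye) * near / d
        t = np.dot(vu, pc - eye) * near / d
        M = np.eye(4)                              # rotate wall basis to axes
        M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
        T = np.eye(4)                              # move eye to the origin
        T[:3, 3] = -eye
        return frustum(l, r, b, t, near, far) @ M @ T

Stereo rendering then amounts to calling wall_projection twice, with the eye offset half the interpupillary distance to either side along the tracked head's lateral axis; each cluster node applies this to its own wall corners, which is what keeps the distributed images consistent.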


Journal of Biomedical Informatics | 2015

Virtualizing living and working spaces

Patricia Flatley Brennan; Kevin Ponto; Gail R. Casper; Ross Tredinnick; Markus Broecker

The physical spaces within which the work of health occurs - the home, the intensive care unit, the emergency room, even the bedroom - influence the manner in which behaviors unfold, and may contribute to efficacy and effectiveness of health interventions. Yet the study of such complex workspaces is difficult. Health care environments are complex, chaotic workspaces that do not lend themselves to the typical assessment approaches used in other industrial settings. This paper provides two methodological advances for studying internal health care environments: a strategy to capture salient aspects of the physical environment and a suite of approaches to visualize and analyze that physical environment. We used a Faro™ laser scanner to obtain point cloud data sets of the internal aspects of home environments. The point cloud enables precise measurement, including the location of physical boundaries and object perimeters, color, and light, in an interior space that can be translated later for visualization on a variety of platforms. The work was motivated by vizHOME, a multi-year program to intensively examine the home context of personal health information management in a way that minimizes repeated, intrusive, and potentially disruptive in vivo assessments. Thus, we illustrate how to capture, process, display, and analyze point clouds using the home as a specific example of a health care environment. Our work presages a time when emerging technologies facilitate inexpensive capture and efficient management of point cloud data, thus enabling visual and analytical tools for enhanced discharge planning, new insights for designers of consumer-facing clinical informatics solutions, and a robust approach to context-based studies of health-related work environments.
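
As a small illustration of the measurement such a capture enables, the sketch below assumes the scan has been exported to a plain-text x y z r g b file (the file name and point indices are hypothetical stand-ins for an interactive picking session) and computes room extents and a point-to-point distance.

    import numpy as np

    # Hypothetical export of one scanned room: columns x y z r g b.
    points = np.loadtxt("livingroom.xyz")
    xyz = points[:, :3]

    # Bounding extents give gross room dimensions straight from the scan.
    extents = xyz.max(axis=0) - xyz.min(axis=0)
    print("room extents (m):", extents)

    # Distance between two user-picked points, e.g. a doorway's width;
    # the indices stand in for an interactive picking step.
    a, b = xyz[1200], xyz[4711]
    print("picked distance (m):", np.linalg.norm(a - b))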


Virtual Reality | 2015

DSCVR: designing a commodity hybrid virtual reality system

Kevin Ponto; Joe Kohlmann; Ross Tredinnick

This paper presents the design considerations, specifications, and lessons learned while building DSCVR, a commodity hybrid reality environment. Consumer technology has reduced the cost of both 3D tracking and displays, enabling new means of creating immersive display environments. However, this technology also presents many challenges that need to be designed for and around. We compare the DSCVR system to other existing VR environments to analyze the trade-offs being made.


Symposium on 3D User Interfaces | 2013

SculptUp: A rapid, immersive 3D modeling environment

Kevin Ponto; Ross Tredinnick; Aaron Bartholomew; Carrie Roy; Daniel Szafir; Daniel Greenheck; Joe Kohlmann

The SculptUp system enables the rapid creation of 3D models. All of the models in Figure 3 were created in under five minutes. Scenes such as those in Figures 1 and 4 can be created easily in ways that would be extremely difficult in traditional modeling systems.


Human Factors | 2015

Virtual exertions: evoking the sense of exerting forces in virtual reality using gestures and muscle activity.

Karen B. Chen; Kevin Ponto; Ross Tredinnick; Robert G. Radwin

Objective: This study was a proof of concept for virtual exertions, a novel method that involves the use of body tracking and electromyography for grasping and moving projections of objects in virtual reality (VR). The user views objects in his or her hands during rehearsed co-contractions of the same agonist-antagonist muscles normally used for the desired activities to suggest exerting forces.

Background: Unlike physical objects, virtual objects are images and lack mass. There is currently no practical, physically demanding way to interact with virtual objects to simulate strenuous activities.

Method: Eleven participants grasped and lifted similar physical and virtual objects of various weights in an immersive 3-D Cave Automatic Virtual Environment. Muscle activity, localized muscle fatigue, ratings of perceived exertion, and NASA Task Load Index were measured. Additionally, the relationship between levels of immersion (2-D vs. 3-D) was studied.

Results: Although the overall magnitude of biceps activity and workload were greater in VR, muscle activity trends and fatigue patterns for varying weights within VR and physical conditions were the same. Perceived exertions for varying weights were not significantly different between VR and physical conditions.

Conclusions: Perceived exertion levels and muscle activity patterns corresponded to the assigned virtual loads, which supported the hypothesis that the method evoked the perception of physical exertions and showed that the method was promising.

Application: Ultimately, this approach may offer opportunities for research and for training individuals to perform strenuous activities under potentially safer conditions that mimic real situations while they see their own body and hands relative to the scene.
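
A minimal sketch of the gating idea follows, assuming illustrative sampling rate, window length, and calibration values rather than the study's actual pipeline: a virtual lift engages only while windowed EMG amplitude on the rehearsed agonist-antagonist pair reaches the level calibrated for the assigned load.

    import numpy as np

    FS = 1000                         # EMG sampling rate in Hz (assumed)
    WINDOW = int(0.25 * FS)           # 250 ms analysis window (assumed)

    def rms(emg):
        # Root-mean-square amplitude of the most recent window of raw EMG.
        w = np.asarray(emg[-WINDOW:], dtype=float)
        return float(np.sqrt(np.mean(w ** 2)))

    def holds_virtual_object(biceps, triceps, load_kg, calib):
        # The lift "engages" only while the rehearsed co-contraction reaches
        # the level calibrated for this virtual load; calib maps
        # load_kg -> (required biceps RMS, required triceps RMS).
        need_b, need_t = calib[load_kg]
        return rms(biceps) >= need_b and rms(triceps) >= need_t

    # Hypothetical calibration from a rehearsal phase (RMS levels in mV).
    calibration = {2.0: (0.12, 0.05), 4.0: (0.22, 0.09)}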


IEEE Virtual Reality Conference | 2016

Progressive feedback point cloud rendering for virtual reality display

Ross Tredinnick; Markus Broecker; Kevin Ponto

Previous approaches to rendering large point clouds on immersive displays have generally created a trade-off between interactivity and quality. While these approaches have been quite successful for desktop environments when interaction is limited, virtual reality systems are continuously interactive, which forces users to suffer through either low frame rates or low image quality. This paper presents a novel approach to this problem through a progressive feedback-driven rendering algorithm. This algorithm uses reprojections of past views to accelerate the reconstruction of the current view. The presented method is tested against previous methods, showing improvements in both rendering quality and interactivity.
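
The reprojection step can be sketched as follows: world-space points kept from the previous frame are projected through the current view-projection matrix and splatted into the new framebuffer, giving the progressive reconstruction a head start while fresh points stream in. This NumPy sketch is illustrative only; the paper's renderer does this on the GPU.

    import numpy as np

    def reproject(points, colors, view_proj, width, height):
        # Splat cached world-space points (from the previous frame) into a
        # new framebuffer under the current view-projection matrix.
        n = points.shape[0]
        clip = np.hstack([points, np.ones((n, 1))]) @ view_proj.T
        keep = clip[:, 3] > 1e-6                  # behind-camera cull
        ndc = clip[keep, :3] / clip[keep, 3:4]
        colors = colors[keep]
        inside = np.all(np.abs(ndc) <= 1, axis=1) # inside the view frustum
        ndc, colors = ndc[inside], colors[inside]
        px = ((ndc[:, 0] + 1) * 0.5 * (width - 1)).astype(int)
        py = ((1 - ndc[:, 1]) * 0.5 * (height - 1)).astype(int)
        frame = np.zeros((height, width, 3))
        order = np.argsort(-ndc[:, 2])            # far to near: near wins
        frame[py[order], px[order]] = colors[order]
        return frame      # holes are filled as newly streamed points arrive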


IEEE Virtual Reality Conference | 2015

Experiencing interior environments: New approaches for the immersive display of large-scale point cloud data

Ross Tredinnick; Markus Broecker; Kevin Ponto

This document introduces a new application for rendering massive LiDAR point cloud data sets of interior environments within high-resolution immersive VR display systems. The overall contributions are: to create an application that can visualize large-scale point clouds at interactive rates in immersive display environments, to develop a flexible pipeline for processing LiDAR data sets that allows display of both minimally processed and more rigorously processed point clouds, and to provide visualization mechanisms that produce accurate renderings of interior environments to better understand physical aspects of interior spaces. The work identifies three problems with producing accurate immersive rendering of LiDAR point cloud data sets of interiors and presents solutions to each. Rendering performance is compared between the developed application and a previous immersive LiDAR viewer.
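
One step such a processing pipeline typically includes is spatial subdivision, so the viewer can stream coarse data first and refine progressively. The sketch below buckets points into fixed-depth octree cells; the depth and key layout are illustrative assumptions, not the paper's actual data structures.

    import numpy as np

    def cell_keys(xyz, origin, size, depth):
        # Integer cell key per point at the given octree depth (cell
        # coordinates packed side by side, which suffices for grouping).
        cells = np.floor((xyz - origin) / size * (2 ** depth)).astype(np.int64)
        cells = np.clip(cells, 0, 2 ** depth - 1)
        return cells[:, 0] | (cells[:, 1] << depth) | (cells[:, 2] << (2 * depth))

    def bucket(xyz, depth=4):
        # Group point indices by octree cell; a viewer can then stream the
        # cells nearest the viewpoint first and refine progressively.
        origin = xyz.min(axis=0)
        size = (xyz.max(axis=0) - origin).max() + 1e-9
        keys = cell_keys(xyz, origin, size, depth)
        order = np.argsort(keys)
        starts = np.unique(keys[order], return_index=True)[1]
        return np.split(order, starts[1:])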


International Conference on Virtual Rehabilitation | 2017

Simulating the experience of home environments

Kevin Ponto; Ross Tredinnick; Gail R. Casper

Growing evidence indicates that transitioning patients are often unprepared for the self-management role they must assume when they return home. Over the past twenty-five years, LiDAR scanning has emerged as a technology that allows rapid acquisition of three-dimensional data of real-world environments, while new virtual reality (VR) technology allows users to experience simulated environments. However, combining these two technologies can be difficult, as previous approaches to interactively rendering large point clouds have generally traded off interactivity against quality. For instance, many techniques used in commercially available software sub-sample data during interaction and show a high-quality render only when the viewpoint is kept static. Unfortunately, for displays in which viewpoints are rarely static, such as virtual reality systems, these methods are not useful. This paper presents a novel approach to the quality-interactivity trade-off through a progressive feedback-driven rendering algorithm. The technique uses reprojections of past views to accelerate the reconstruction of the current view and can be used to extend existing point cloud viewing algorithms. The presented method is tested against previous methods, demonstrating marked improvements in both rendering quality and interactivity. This algorithm and rendering application could serve as a tool to enable virtual rehabilitation within a 3D model of one's own home from a remote location.


Symposium on 3D User Interfaces | 2013

Poster: Say it to see it: A speech-based immersive model retrieval system

Ross Tredinnick; Kevin Ponto

Virtual spaces have proven to be a valuable means to visualize and inspect 3D environments. Unfortunately, adding objects to 3D scenes while inside an immersive environment is often difficult, as the means used to acquire models from repositories are built for standard computer interfaces and are generally not available during a user's session. We develop a novel interface for the insertion of models into a virtual scene through the use of voice, 3D visuals, and a 3D input device. Our interface seamlessly communicates with an external model repository (Trimble 3D Warehouse), enabling models to be acquired and inserted into the scene during a user's virtual session. We see benefits of our pipeline and interface in the fields of design, architecture, and simulation.
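
The retrieval half of such an interface amounts to turning a recognized phrase into a repository query. The sketch below is hypothetical throughout: the endpoint URL and JSON fields are invented stand-ins, not the Trimble 3D Warehouse API the paper used.

    import json
    import urllib.parse
    import urllib.request

    def fetch_model(spoken_phrase):
        # Query a model repository with the recognized phrase and return the
        # URL of the top hit's mesh; endpoint and fields are hypothetical.
        q = urllib.parse.quote(spoken_phrase)
        url = "https://models.example.org/search?q=" + q + "&format=json"
        with urllib.request.urlopen(url) as resp:
            hits = json.load(resp).get("results", [])
        return hits[0]["mesh_url"] if hits else None

    # e.g. the user says "wooden chair"; the returned mesh is then loaded
    # and placed at the 3D cursor by the rendering side of the pipeline.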


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

Virtual Exertions: Physical Interactions in a Virtual Reality CAVE for Simulating Forceful Tasks

Robert G. Radwin; Karen B. Chen; Kevin Ponto; Ross Tredinnick

This paper introduces the concept of virtual exertions, which utilizes real-time feedback from electromyograms (EMG), combined with tracked body movements, to simulate forceful exertions (e.g., lifting, pushing, and pulling) against projections of virtual reality objects. The user acts as if a real object were present, moving and contracting the same muscles normally used for the desired activities to suggest exerting forces against the virtual objects viewed in his or her own hands as they are grasped and moved. To create virtual exertions, EMG muscle activity is monitored during rehearsed co-contractions of the agonist/antagonist muscles used for specific exertions, and contraction patterns and levels are combined with tracked motion of the user’s body and hands to identify when the participant is exerting sufficient force to displace the intended object. Continuous 3D visual feedback displays the participant’s mechanical work against virtual objects with simulated inertial properties. A pilot study, in which four participants performed both actual and virtual dumbbell lifting tasks, observed that ratings of perceived exertion (RPE), biceps EMG recruitment, and localized muscle fatigue (mean power frequency) were consistent with the actual task. Biceps and triceps EMG co-contractions were proportionally greater in the virtual case.

Collaboration


Dive into Ross Tredinnick's collaborations.

Top Co-Authors

Kevin Ponto, University of Wisconsin-Madison
Gail R. Casper, University of Wisconsin-Madison
Markus Broecker, University of Wisconsin-Madison
Karen B. Chen, University of Wisconsin-Madison
Robert G. Radwin, University of Wisconsin-Madison
Joe Kohlmann, University of Wisconsin-Madison
Patricia Flatley Brennan, University of Wisconsin-Madison
Aaron Bartholomew, University of Wisconsin-Madison
Alex Peer, University of Wisconsin-Madison
Brady Boettcher, University of Wisconsin-Madison