Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David M. Krum is active.

Publications


Featured research published by David M. Krum.


IEEE Virtual Reality Conference | 2012

A taxonomy for deploying redirection techniques in immersive virtual environments

Evan A. Suma; Gerd Bruder; Frank Steinicke; David M. Krum; Mark T. Bolas

Natural walking can provide a compelling experience in immersive virtual environments, but it remains an implementation challenge due to the physical space constraints imposed on the size of the virtual world. The use of redirection techniques is a promising approach that relaxes the space requirements of natural walking by manipulating the user's route in the virtual environment, causing the real-world path to remain within the boundaries of the physical workspace. In this paper, we present and apply a novel taxonomy that separates redirection techniques according to their geometric flexibility versus the likelihood that they will be noticed by users. Additionally, we conducted a user study of three reorientation techniques, which confirmed that participants were less likely to experience a break in presence when reoriented using the techniques classified as subtle in our taxonomy. Our results also suggest that reorientation with change blindness illusions may give the impression of exploring a more expansive environment than continuous rotation techniques, but at the cost of negatively impacting spatial knowledge acquisition.
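
To make the subtle/overt distinction concrete, a rotation gain is one classic subtle technique: the virtual scene turns slightly faster than the user's head, so the physical path curves back into the tracked space. The sketch below is illustrative only; the gain value and update loop are assumptions, not the paper's implementation.

    import math

    def redirect_rotation(real_yaw_delta_rad, gain=1.1):
        # Scale the user's real head rotation so the virtual scene turns
        # slightly faster than the head, imperceptibly steering the
        # physical walking path back toward the tracked space.
        return real_yaw_delta_rad * gain

    virtual_yaw = 0.0
    for real_delta in [0.02, 0.03, -0.01, 0.04]:  # per-frame yaw deltas (rad)
        virtual_yaw += redirect_rotation(real_delta)
    print(f"accumulated virtual yaw: {math.degrees(virtual_yaw):.2f} deg")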


IEEE Transactions on Visualization and Computer Graphics | 2012

Impossible Spaces: Maximizing Natural Walking in Virtual Environments with Self-Overlapping Architecture

Evan A. Suma; Zachary Lipps; Samantha L. Finkelstein; David M. Krum; Mark T. Bolas

Walking is only possible within immersive virtual environments that fit inside the boundaries of the user's physical workspace. To reduce the severity of the restrictions imposed by limited physical area, we introduce impossible spaces, a new design mechanic for virtual environments that wish to maximize the size of the virtual environment that can be explored with natural locomotion. Such environments make use of self-overlapping architectural layouts, effectively compressing comparatively large interior environments into smaller physical areas. We conducted two formal user studies to explore the perception and experience of impossible spaces. In the first experiment, we showed that reasonably small virtual rooms may overlap by as much as 56% before users begin to detect that they are in an impossible space, and that the larger virtual rooms that expanded to maximally fill our available 9.14m × 9.14m workspace may overlap by up to 31%. Our results also demonstrate that users perceive distances to objects in adjacent overlapping rooms as if the overall space were uncompressed, even at overlap levels that were overtly noticeable. In our second experiment, we combined several well-known redirection techniques to string together a chain of impossible spaces in an expansive outdoor scene. We then conducted an exploratory analysis of users' verbal feedback during exploration, which indicated that impossible spaces provide an even more powerful illusion when users are naive to the manipulation.
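
As a rough illustration of the overlap measurements above, the fraction by which two rectangular rooms overlap can be computed directly from their floor plans; the room dimensions and the doorway-based room switching in this sketch are assumptions for illustration, not the study's implementation.

    def overlap_fraction(room_a, room_b):
        # Rooms are axis-aligned (x_min, y_min, x_max, y_max) rectangles;
        # returns overlapping floor area as a fraction of room_a's area.
        ax0, ay0, ax1, ay1 = room_a
        bx0, by0, bx1, by1 = room_b
        w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
        d = max(0.0, min(ay1, by1) - max(ay0, by0))
        return (w * d) / ((ax1 - ax0) * (ay1 - ay0))

    # Two 6 m x 6 m rooms sharing a 2 m wide strip of floor space.
    room_a = (0.0, 0.0, 6.0, 6.0)
    room_b = (4.0, 0.0, 10.0, 6.0)
    print(f"overlap: {overlap_fraction(room_a, room_b):.0%}")  # 33%

    # In an impossible space, only the room the user last entered through
    # a doorway is instantiated, so the shared area is never seen twice.
    current_room = "A"  # would be updated on each doorway transition
    print(f"render geometry for room {current_room}")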


Computers & Graphics | 2013

Special Section on Touching the 3rd Dimension: Adapting user interfaces for gestural interaction with the flexible action and articulated skeleton toolkit

Evan A. Suma; David M. Krum; Belinda Lange; Sebastian Koenig; Albert A. Rizzo; Mark T. Bolas

We present the Flexible Action and Articulated Skeleton Toolkit (FAAST), a middleware software framework for integrating full-body interaction with virtual environments, video games, and other user interfaces. This toolkit provides a complete end-to-end solution that includes a graphical user interface for custom gesture creation, sensor configuration, skeletal tracking, action recognition, and a variety of output mechanisms to control third-party applications, allowing virtually any PC application to be repurposed for gestural control even if it does not explicitly support input from motion sensors. To facilitate intuitive and transparent gesture design, we define a syntax for representing human gestures using rule sets that correspond to the basic spatial and temporal components of an action. These individual rules form primitives that, although conceptually simple on their own, can be combined both simultaneously and in sequence to form sophisticated gestural interactions. In addition to presenting the system architecture and our approach for representing and designing gestural interactions, we also describe two case studies that evaluated the use of FAAST for controlling first-person video games and improving the accessibility of computing interfaces for individuals with motor impairments. Thus, this work represents an important step toward making gestural interaction more accessible for practitioners, researchers, and hobbyists alike.
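
The rule-set syntax lends itself to a small illustration: each spatial rule is a predicate over the tracked skeleton, and a gesture fires when its rules hold together for a minimum duration. The rule names, skeleton fields, and API below are invented for illustration and are not FAAST's actual interface.

    import time

    def left_hand_above_head(skel):
        return skel["left_hand_y"] > skel["head_y"]

    def lean_forward(skel, min_deg=15.0):
        return skel["torso_pitch_deg"] > min_deg

    class Gesture:
        # Fires when all spatial rules hold continuously for `duration` s.
        def __init__(self, rules, duration):
            self.rules, self.duration, self.since = rules, duration, None

        def update(self, skel, now):
            if all(rule(skel) for rule in self.rules):
                self.since = self.since or now
                return now - self.since >= self.duration
            self.since = None
            return False

    raise_and_lean = Gesture([left_hand_above_head, lean_forward], duration=0.3)
    skel = {"left_hand_y": 1.9, "head_y": 1.7, "torso_pitch_deg": 20.0}
    t0 = time.time()
    print(raise_and_lean.update(skel, t0))        # False: rules just began to hold
    print(raise_and_lean.update(skel, t0 + 0.4))  # True: held for over 0.3 s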


IEEE Virtual Reality Conference | 2012

Unobtrusive measurement of subtle nonverbal behaviors with the Microsoft Kinect

Nathan Burba; Mark T. Bolas; David M. Krum; Evan A. Suma

We describe two approaches for unobtrusively sensing subtle nonverbal behaviors using a consumer-level depth sensing camera. The first signal, respiratory rate, is estimated by measuring the visual expansion and contraction of the user's chest cavity during inhalation and exhalation. Additionally, we detect a specific type of fidgeting behavior, known as “leg jiggling,” by measuring high-frequency vertical oscillations of the user's knees. Both of these techniques rely on the combination of skeletal tracking information with raw depth readings from the sensor to identify the cyclical patterns in jittery, low-resolution data. Such subtle nonverbal signals may be useful for informing models of users' psychological states during communication with virtual human agents, thereby improving interactions that address important societal challenges in domains including education, training, and medicine.
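
For intuition, the respiratory-rate estimate amounts to finding the dominant low-frequency cycle in a noisy chest-depth signal. A minimal sketch with a synthetic signal follows; the sampling rate, breathing band, and signal model are assumptions, not the paper's pipeline.

    import numpy as np

    fs = 30.0                          # depth frames per second
    t = np.arange(0, 30, 1 / fs)       # 30 s of samples
    breaths_hz = 0.25                  # synthetic truth: 15 breaths/min
    chest_depth = 0.005 * np.sin(2 * np.pi * breaths_hz * t)
    chest_depth += 0.002 * np.random.randn(t.size)  # sensor jitter

    # FFT, then pick the dominant frequency in a plausible breathing band.
    spectrum = np.abs(np.fft.rfft(chest_depth - chest_depth.mean()))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    band = (freqs > 0.1) & (freqs < 0.7)       # roughly 6-42 breaths/min
    rate_hz = freqs[band][np.argmax(spectrum[band])]
    print(f"estimated rate: {rate_hz * 60:.1f} breaths/min")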


IEEE Virtual Reality Conference | 2012

Immersive training games for smartphone-based head mounted displays

Perry Hoberman; David M. Krum; Evan A. Suma; Mark T. Bolas

Thin computing clients, such as smartphones and tablets, have exhibited recent growth in display resolutions, processing power, and graphical rendering speeds. In this poster, we show how we leveraged these trends to create virtual reality (VR) training games which run entirely on a commodity mobile computing platform. This platform consists of a commercial off-the-shelf game engine, commodity smartphones, and mass produced optics. The games utilize the strengths of this platform to provide immersive features like 360 degree photo panoramas and interactive 3D virtual scenes. By sharing information about building such applications, we hope to enable others to develop new types of mobile VR applications. In particular, we feel this system is ideally suited for casual “pick up and use” VR applications for collaborative classroom learning, design reviews, and other multi-user immersive experiences.
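
A core ingredient of any phone-based HMD is rendering the scene twice, once per eye, into side-by-side viewports with a small horizontal offset. The sketch below shows only that idea; the IPD value, matrix layout, and render_scene() stub are assumptions, not the authors' engine code.

    import numpy as np

    IPD = 0.064  # interpupillary distance in meters (typical adult value)

    def eye_view_matrix(head_view, eye):
        # Offset the head-centered view matrix by half an IPD per eye.
        offset = np.eye(4)
        offset[0, 3] = (-IPD / 2.0) if eye == "left" else (IPD / 2.0)
        return offset @ head_view

    def render_scene(view, viewport):
        print(f"render {viewport} with eye x-offset {view[0, 3]:+.3f} m")

    head_view = np.eye(4)
    w, h = 1920, 1080  # landscape phone screen split into two viewports
    render_scene(eye_view_matrix(head_view, "left"), (0, 0, w // 2, h))
    render_scene(eye_view_matrix(head_view, "right"), (w // 2, 0, w // 2, h))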


ACM Symposium on Applied Perception | 2012

Comparability of narrow and wide field-of-view head-mounted displays for medium-field distance judgments

J. Adam Jones; Evan A. Suma; David M. Krum; Mark T. Bolas

As wider field-of-view displays become more common, the question arises as to whether or not data collected on these displays are comparable to those collected with smaller field-of-view displays. This document describes a pilot study that aimed to address these concerns by comparing medium-field distance judgments in a 60° FOV display, a 150° FOV display, and a simulated 60° FOV within the 150° FOV display. The results indicate that participants performed similarly in both the actual and simulated 60° FOV displays. On average, participants in the 150° FOV display improved distance judgments by 13% over the 60° FOV displays.
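
Simulating a 60° FOV inside a 150° display amounts to blanking the pixels whose viewing angle falls outside the narrower frustum. The sketch below illustrates that masking with a simplified angular pixel model; it is an assumption for illustration, not the study's apparatus code.

    import numpy as np

    def fov_mask(width, display_fov_deg, simulated_fov_deg):
        # 1 where a pixel column falls inside the simulated horizontal FOV.
        half_disp = np.radians(display_fov_deg / 2.0)
        half_sim = np.radians(simulated_fov_deg / 2.0)
        # pixel center angles across the display, assuming an angular layout
        angles = np.linspace(-half_disp, half_disp, width)
        return (np.abs(angles) <= half_sim).astype(np.uint8)

    mask = fov_mask(width=1000, display_fov_deg=150, simulated_fov_deg=60)
    print(f"{mask.mean():.0%} of columns visible")  # ~40% = 60/150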


ACM Transactions on Applied Perception | 2017

Vertical Field-of-View Extension and Walking Characteristics in Head-Worn Virtual Environments

J. Adam Jones; David M. Krum; Mark T. Bolas

In this article, we detail a series of experiments that examines the effect of vertical field-of-view extension and the addition of non-specific peripheral visual stimulation on gait characteristics and distance judgments in a head-worn virtual environment. Specifically, we examined four field-of-view configurations: a common 60° diagonal field of view (48° × 40°), a 60° diagonal field of view with the addition of a luminous white frame in the far periphery, a field of view with an extended upper edge, and a field of view with an extended lower edge. We found that extension of the field of view, either with spatially congruent or spatially non-informative visuals, resulted in improved distance judgments and changes in observed posture. However, these effects were not equal across all field-of-view configurations, suggesting that some configurations may be more appropriate than others when balancing performance, cost, and ergonomics.
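
As a quick check on the quoted geometry, the diagonal FOV of a rectilinear frustum follows from the tangents of the horizontal and vertical half-angles, and 48° × 40° does come out near 60°; the planar-projection assumption here is ours.

    import math

    def diagonal_fov(h_deg, v_deg):
        # Combine the half-angle tangents on a flat projection plane.
        th = math.tan(math.radians(h_deg) / 2.0)
        tv = math.tan(math.radians(v_deg) / 2.0)
        return math.degrees(2.0 * math.atan(math.hypot(th, tv)))

    print(f"{diagonal_fov(48, 40):.1f} deg diagonal")  # ~59.9 deg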


IEEE Virtual Reality Conference | 2017

NIVR: Neuro imaging in virtual reality

Tyler Ard; David M. Krum; Thai Phan; Dominique Duncan; Ryan Essex; Mark T. Bolas; Arthur W. Toga

Visualization is a critical component of neuroimaging, and how best to view data that is naturally three-dimensional is a long-standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that, with the recent commercialization and popularization of VR, it can offer the next step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR) is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis for how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization but also a tool to expose and educate the public regarding recent advancements in the field of neuroimaging. By providing an engaging experience with which to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest across a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of it in VR even more so. NIVR explores pathways that make this possible and offers preliminary stereo visualizations of these types of massive data.
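
Raymarching volume visualization, one of the established techniques NIVR builds on, steps a ray through a scalar field and composites samples front to back. The toy sketch below shows the compositing loop only; the density field, transfer function, and step parameters are illustrative assumptions.

    import numpy as np

    def density(p):
        # Hypothetical scalar field: a soft sphere of radius 0.5.
        return max(0.0, 0.5 - np.linalg.norm(p)) * 2.0

    def march(origin, direction, steps=128, step_len=0.02):
        # Front-to-back alpha compositing along one ray.
        color, alpha = 0.0, 0.0
        p = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        for _ in range(steps):
            a = min(1.0, density(p) * step_len * 10.0)  # transfer function
            color += (1.0 - alpha) * a * 1.0            # white emission
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                            # early ray exit
                break
            p = p + d * step_len
        return color

    print(f"ray through center: {march([0, 0, -1.5], [0, 0, 1]):.2f}")
    print(f"ray missing volume: {march([2, 0, -1.5], [0, 0, 1]):.2f}")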


IEEE Virtual Reality Conference | 2017

REINVENT: A low-cost, virtual reality brain-computer interface for severe stroke upper limb motor recovery

Ryan P. Spicer; Julia Anglin; David M. Krum

There are few effective treatments for rehabilitation of severe motor impairment after stroke. We developed a novel closed-loop neurofeedback system called REINVENT to promote motor recovery in this population. REINVENT (Rehabilitation Environment using the Integration of Neuromuscular-based Virtual Enhancements for Neural Training) harnesses recent advances in neuroscience, wearable sensors, and virtual technology, integrating low-cost electroencephalography (EEG) and electromyography (EMG) sensors with feedback in a head-mounted virtual reality (VR) display to provide neurofeedback when an individual's neuromuscular signals indicate a movement attempt, even in the absence of actual movement. Here we describe the REINVENT prototype and provide evidence of the feasibility and safety of using REINVENT with older adults.
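
The closed loop can be pictured as a simple threshold on decoded movement intent: when fused EEG/EMG features cross it, the virtual arm animates. The feature names, fusion rule, and threshold below are assumptions for illustration, not REINVENT's actual pipeline.

    import random

    MOVE_THRESHOLD = 0.6  # hypothetical normalized attempt score

    def movement_attempt_score(eeg_mu_suppression, emg_rms):
        # Fuse two simple, normalized features into one attempt score.
        return 0.5 * eeg_mu_suppression + 0.5 * emg_rms

    def feedback_step(eeg, emg):
        score = movement_attempt_score(eeg, emg)
        if score >= MOVE_THRESHOLD:
            return f"animate virtual arm (score={score:.2f})"
        return f"hold arm still (score={score:.2f})"

    random.seed(1)
    for _ in range(3):  # simulated sensor readings per feedback frame
        print(feedback_step(random.random(), random.random()))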


IEEE Virtual Reality Conference | 2014

Tablet-based interaction panels for immersive environments

David M. Krum; Thai Phan; Lauren Cairco Dukes; Peter Wang; Mark T. Bolas

With the current widespread interest in head-mounted displays, we perceived a need for devices that support expressive and adaptive interaction in a low-cost, eyes-free manner. Leveraging rapid prototyping techniques for fabrication, we have designed and manufactured a variety of panels that can be overlaid on multi-touch tablets and smartphones. The panels are coupled with an app running on the multi-touch device that exchanges commands and state information over a wireless network with the virtual reality application. Sculpted features of the panels provide tactile disambiguation of control widgets, and an onscreen heads-up display provides interaction state information. A variety of interaction mappings can be provided through software to support several classes of interaction techniques in virtual environments. We foresee additional uses for applications where eyes-free use and adaptable interaction interfaces can be beneficial.
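
The panel-to-VR link described above is essentially a small message protocol: the tablet app reports widget state over the network and the VR application reacts. The sketch below uses JSON over UDP as a stand-in; the schema, port, and loopback setup are assumptions, not the system's actual wire format.

    import json
    import socket

    PORT = 9000  # hypothetical port for the VR application's listener

    def send_widget_event(sock, widget_id, value):
        # Serialize one widget state change and send it to the VR app.
        msg = json.dumps({"widget": widget_id, "value": value})
        sock.sendto(msg.encode("utf-8"), ("127.0.0.1", PORT))

    listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listener.bind(("127.0.0.1", PORT))

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_widget_event(sender, "slider_speed", 0.75)  # e.g., a sculpted slider

    data, _ = listener.recvfrom(1024)
    print("VR app received:", json.loads(data.decode("utf-8")))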

Collaboration


Dive into David M. Krum's collaborations.

Top Co-Authors

Mark T. Bolas, University of Southern California
Sin-Hwa Kang, University of Southern California
Evan A. Suma, University of Southern California
J. Adam Jones, University of Southern California
Albert A. Rizzo, University of Southern California
Belinda Lange, University of Southern California
Chien-Yen Chang, University of Southern California
Larry F. Hodges, Georgia Institute of Technology