Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Evan Suma Rosenberg is active.

Publication


Featured research published by Evan Suma Rosenberg.


IEEE Virtual Reality Conference | 2017

An evaluation of strategies for two-user redirected walking in shared physical spaces

Mahdi Azmandian; Timofey Grechkin; Evan Suma Rosenberg

As the focus of virtual reality technology is shifting from single-person experiences to multi-user interactions, it becomes increasingly important to accommodate multiple co-located users within a shared real-world space. For locomotion and navigation, the introduction of multiple users moving both virtually and physically creates additional challenges related to potential user-to-user collisions. In this work, we focus on defining the extent of these challenges in order to apply redirected walking to two users immersed in virtual reality experiences within a shared physical tracked space. Using a computer simulation framework, we explore the costs and benefits of splitting available physical space between users versus attempting to algorithmically prevent user-to-user collisions. We also explore fundamental components of collision prevention such as steering the users away from each other, forced stopping, and user re-orientation. Each component was analyzed for the number of potential disruptions to the flow of the virtual experience. We also develop a novel collision prevention algorithm that reduces overall interruptions by 17.6% and collision prevention events by 58.3%. Our results show that sharing space using our collision prevention method is superior to subdividing the tracked space.
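
The abstract names three intervention components: steering users apart, forced stopping, and re-orientation. As a rough illustration only, the sketch below shows how such components might be escalated based on predicted user separation; all names, thresholds, and the prediction horizon are assumptions, not the authors' published algorithm.

```python
import math
from dataclasses import dataclass

STOP_DISTANCE = 1.0   # metres: below this, force a stop (assumed threshold)
STEER_DISTANCE = 2.5  # metres: below this, steer users apart (assumed threshold)

@dataclass
class User:
    x: float        # position in the tracked space (metres)
    y: float
    heading: float  # walking direction (radians)
    speed: float    # walking speed (metres/second)

def predicted_separation(a: User, b: User, horizon: float = 1.5) -> float:
    """Distance between the users after walking straight for `horizon` seconds."""
    ax = a.x + math.cos(a.heading) * a.speed * horizon
    ay = a.y + math.sin(a.heading) * a.speed * horizon
    bx = b.x + math.cos(b.heading) * b.speed * horizon
    by = b.y + math.sin(b.heading) * b.speed * horizon
    return math.hypot(ax - bx, ay - by)

def choose_intervention(a: User, b: User) -> str:
    """Escalate from no action to steering to a forced stop with re-orientation."""
    d = predicted_separation(a, b)
    if d < STOP_DISTANCE:
        return "stop_and_reorient"  # halt both users, then rotate them apart
    if d < STEER_DISTANCE:
        return "steer_apart"        # bias redirection gains away from the other user
    return "none"                   # no disruption to the virtual experience
```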


Computer Animation and Virtual Worlds | 2017

Just-in-time, viable, 3-D avatars from scans

Andrew W. Feng; Evan Suma Rosenberg; Ari Shapiro

We demonstrate a system that can generate a photorealistic, interactive 3-D character from a human subject, capable of movement, emotion, speech, and gesture, in less than 20 minutes through a near-automatic process that requires no 3-D artist intervention or specialized technical knowledge. Our method uses mostly commodity or off-the-shelf hardware. We demonstrate the just-in-time generation and use of such 3-D models for virtual and augmented reality, games, simulation, and communication. We anticipate that the inexpensive generation of such photorealistic models will be useful in many venues where just-in-time 3-D reconstruction of digital avatars that resemble particular human subjects is necessary.


IEEE Computer Graphics and Applications | 2018

15 Years of Research on Redirected Walking in Immersive Virtual Environments

Niels Christian Nilsson; Tabitha C. Peck; Gerd Bruder; Eric Hodgson; Stefania Serafin; Frank Steinicke; Evan Suma Rosenberg

Virtual reality users wearing head-mounted displays can experience the illusion of walking in any direction for an infinite distance while, in reality, they are walking a curvilinear path in physical space. This is accomplished by introducing unnoticeable rotations to the virtual environment, a technique called redirected walking. This paper gives an overview of the research that has been performed since redirected walking was first practically demonstrated 15 years ago.
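
In its simplest form, the "unnoticeable rotations" the abstract describes can be implemented as a gain applied to the user's tracked head rotation. The following is a minimal sketch under assumed names and an assumed gain value; the survey itself examines how large such gains can be before users detect them.

```python
ROTATION_GAIN = 1.15  # assumed value: virtual rotation per unit of real rotation

def update_virtual_yaw(virtual_yaw: float, real_yaw_delta: float) -> float:
    """Apply an amplified copy of the tracked head rotation to the virtual camera.

    With a gain above 1, the user turns less in the real room than in the
    virtual world, so a straight virtual path maps onto a curved physical one.
    """
    return virtual_yaw + ROTATION_GAIN * real_yaw_delta
```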


Proceedings of SPIE | 2017

The mixed reality of things: emerging challenges for human-information interaction

Ryan P. Spicer; Stephen Russell; Evan Suma Rosenberg

Virtual and mixed reality technology has advanced tremendously over the past several years. This nascent medium has the potential to transform how people communicate over distance, train for unfamiliar tasks, operate in challenging environments, and visualize, interact with, and make decisions based on complex data. At the same time, the marketplace has experienced a proliferation of network-connected devices and generalized sensors that are becoming increasingly accessible and ubiquitous. As the Internet of Things expands to encompass a predicted 50 billion connected devices by 2020, the volume and complexity of information generated in pervasive and virtualized environments will continue to grow exponentially. The convergence of these trends demands a theoretically grounded research agenda that can address emerging challenges for human-information interaction (HII). Virtual and mixed reality environments can provide controlled settings where HII phenomena can be observed and measured, new theories developed, and novel algorithms and interaction techniques evaluated. In this paper, we describe the intersection of pervasive computing with virtual and mixed reality, identify current research gaps and opportunities to advance the fundamental understanding of HII, and discuss implications for the design and development of cyber-human systems for both military and civilian use.


Next-Generation Analyst VI | 2018

Collaborative mixed reality (MxR) and networked decision making

Theron Trout; Stephen Russell; Andre Harrison; Ryan P. Spicer; Mark Dennison; Jerald Thomas; Evan Suma Rosenberg

Collaborative decision-making remains a significant research challenge that is made even more complicated in real-time or tactical problem contexts. Advances in technology have dramatically improved the ability of computers and networks to support the decision-making process (i.e., intelligence, design, and choice). In the intelligence phase of decision making, mixed reality (MxR) has shown a great deal of promise through simulation and training implementations. However, little research has focused on an implementation of MxR to support the entire scope of the decision cycle, let alone collaboratively and in a tactical context. This paper presents a description of the design and initial implementation of the Defense Integrated Collaborative Environment (DICE), an experimental framework for supporting theoretical and empirical research on MxR for tactical decision-making support.


IEEE Virtual Reality Conference | 2017

Rapid creation of photorealistic virtual reality content with consumer depth cameras

Chih-Fan Chen; Mark T. Bolas; Evan Suma Rosenberg

Virtual objects are essential for building environments in virtual reality (VR) applications. However, creating photorealistic 3D models is not easy, and handcrafting a detailed 3D model from a real object can be time- and labor-intensive. An alternative is to build a structured camera array, such as a light stage, to reconstruct the model from a real object. However, these technologies are very expensive and not practical for most users. In this work, we demonstrate a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera. The geometry model and the camera trajectories are automatically reconstructed from an RGB-D image sequence captured offline. Based on the HMD position, selected images are used for real-time model rendering. The result of this pipeline is a 3D mesh with view-dependent textures suitable for real-time rendering in virtual reality. Specular reflections and light-burst effects are especially noticeable when users view the objects from different perspectives in a head-tracked environment.
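
The pipeline's real-time stage selects, per frame, the offline-captured images that best match the current HMD viewpoint. The sketch below illustrates one plausible selection rule based on cosine similarity of view directions; all function and field names are assumptions, not the paper's actual implementation.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Capture:
    direction: Vec3  # viewing direction of an offline-captured camera pose
    image_id: int    # handle to the associated color image

def view_score(capture_dir: Vec3, hmd_dir: Vec3) -> float:
    """Cosine similarity between a captured view direction and the HMD's."""
    dot = sum(c * h for c, h in zip(capture_dir, hmd_dir))
    norm = (math.sqrt(sum(c * c for c in capture_dir))
            * math.sqrt(sum(h * h for h in hmd_dir)))
    return dot / norm

def select_views(captures: List[Capture], hmd_dir: Vec3, k: int = 3) -> List[Capture]:
    """Pick the k captured frames best aligned with the current HMD viewpoint;
    their color images would be blended as view-dependent textures each frame."""
    return sorted(captures, key=lambda c: view_score(c.direction, hmd_dir),
                  reverse=True)[:k]
```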


IEEE Virtual Reality Conference | 2018

Virtual Content Creation Using Dynamic Omnidirectional Texture Synthesis

Chih-Fan Chen; Evan Suma Rosenberg


IEEE Virtual Reality Conference | 2018

General Chair Message

Betty J. Mohler; Torsten W. Kuhlen; Matthias Bues; Evan Suma Rosenberg; Martin Göbel (Honorary)


IEEE Virtual Reality Conference | 2018

Redirected Walking in Irregularly Shaped Physical Environments with Dynamic Obstacles

Haiwei Chen; Samantha Chen; Evan Suma Rosenberg


International Conference on Image Processing | 2017

View-dependent virtual reality content from RGB-D images

Chih-Fan Chen; Mark T. Bolas; Evan Suma Rosenberg

Collaboration


Dive into Evan Suma Rosenberg's collaborations.

Top Co-Authors

Chih-Fan Chen, University of Southern California
Mark T. Bolas, University of Southern California
Ryan P. Spicer, University of Southern California
Andrew W. Feng, University of Southern California
Ari Shapiro, University of Southern California
Gerd Bruder, University of Central Florida
Mahdi Azmandian, University of Southern California