Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Andrew Prudhomme is active.

Publications


Featured research published by Andrew Prudhomme.


Central European Journal of Engineering | 2011

The future of the CAVE

Thomas A. DeFanti; Daniel Acevedo; Richard A. Ainsworth; Maxine D. Brown; Steven Matthew Cutchin; Gregory Dawe; Kai Doerr; Andrew E. Johnson; Chris Knox; Robert Kooima; Falko Kuester; Jason Leigh; Lance Long; Peter Otto; Vid Petrovic; Kevin Ponto; Andrew Prudhomme; Ramesh R. Rao; Luc Renambot; Daniel J. Sandin; Jürgen P. Schulze; Larry Smarr; Madhu Srinivasan; Philip Weber; Gregory Wickham

The CAVE, a walk-in virtual reality environment typically consisting of 4–6 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.


Proceedings of SPIE | 2013

CalVR: an advanced open source virtual reality software framework

Jürgen P. Schulze; Andrew Prudhomme; Philip Weber; Thomas A. DeFanti

We developed CalVR because none of the existing virtual reality software frameworks offered everything we needed, such as cluster-awareness, multi-GPU capability, Linux compatibility, multi-user support, collaborative session support, or custom menu widgets. CalVR combines features from multiple existing VR frameworks into an open-source system, which we use in our laboratory on a daily basis and for which dozens of VR applications have already been written, not only at UCSD but also at other research laboratories worldwide. In this paper, we describe the philosophy behind CalVR, its standard and unique features and functions, its programming interface, and its inner workings.


Proceedings of SPIE | 2011

Acquisition of stereo panoramas for display in VR environments

Richard A. Ainsworth; Daniel J. Sandin; Jürgen P. Schulze; Andrew Prudhomme; Thomas A. DeFanti; Madhusudhanan Srinivasan

Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.
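The viewpoint geometry described above can be sketched in a few lines (a hypothetical Python illustration, not the authors' code): for any horizontal viewing direction, the two eyes are offset perpendicular to that direction by half the interocular distance, which is why stereo separation stays correct wherever the viewer looks.

```python
import math

def eye_positions(azimuth_deg, interocular=0.065):
    """Offset the left and right eyes perpendicular to the viewing
    azimuth. Hypothetical helper: 0.065 m is a typical interocular
    distance, not a value taken from the paper."""
    az = math.radians(azimuth_deg)
    # Unit vector pointing to the viewer's right in the horizontal
    # plane, for a view direction of (sin az, cos az).
    rx, ry = math.cos(az), -math.sin(az)
    half = interocular / 2.0
    left = (-rx * half, -ry * half)
    right = (rx * half, ry * half)
    return left, right

# The eye separation is the same for every viewing direction.
l, r = eye_positions(135.0)
```

Because the offset rotates with the view direction, the displacement between the two spherical images is always purely horizontal from the viewer's perspective.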


international symposium on parallel and distributed processing and applications | 2013

Cultural heritage omni-stereo panoramas for immersive cultural analytics — From the Nile to the Hijaz

Neil Smith; Steve Cutchin; Robert Kooima; Richard A. Ainsworth; Daniel J. Sandin; Jürgen P. Schulze; Andrew Prudhomme; Falko Kuester; Thomas E. Levy; Thomas A. DeFanti

The digital imaging acquisition and visualization techniques described here provide a hyper-realistic stereoscopic spherical capture of cultural heritage sites. An automated dual-camera system is used to capture sufficient stereo digital images to cover a sphere or cylinder. The resulting stereo images are projected undistorted in VR systems, providing an immersive virtual environment in which researchers can collaboratively study the important textural details of an excavation or historical site. This imaging technique complements existing technologies such as LiDAR or SfM, providing more detailed textural information that can be used in conjunction with them for analysis and visualization. The advantages of this digital imaging technique for cultural heritage can be seen in its non-invasive and rapid capture of heritage sites for documentation, analysis, and immersive visualization. The technique is applied to several significant heritage sites in Luxor, Egypt, and in Saudi Arabia.


virtual reality software and technology | 2010

A multi-viewer tiled autostereoscopic virtual reality display

Robert Kooima; Andrew Prudhomme; Jürgen P. Schulze; Daniel J. Sandin; Thomas A. DeFanti

Recognizing the value of autostereoscopy for 3D displays in public contexts, we pursue the goal of large-scale, high-resolution, immersive virtual reality using lenticular displays. Our contributions include the scalable tiling of lenticular displays to large fields of view and the use of GPU image interleaving and application optimization for real-time performance. In this context, we examine several ways to improve group-viewing by combining user tracking with multi-view displays.
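The view-interleaving step mentioned above can be illustrated with a minimal NumPy sketch, under the simplifying assumption of vertical, unslanted lenses (the paper's GPU implementation operates per subpixel and accounts for lens pitch and slant; the function and test images here are illustrative only):

```python
import numpy as np

def interleave_views(views):
    """Interleave N rendered views column-by-column into one panel
    image: output column x is taken from view (x mod N). This shows
    only the basic idea behind lenticular image interleaving."""
    views = np.asarray(views)        # shape (N, H, W, C)
    n, h, w, c = views.shape
    out = np.empty((h, w, c), dtype=views.dtype)
    for i in range(n):
        out[:, i::n, :] = views[i, :, i::n, :]
    return out

# Three constant-valued "views" make the interleaving pattern visible.
views = np.stack([np.full((2, 6, 3), v, dtype=np.uint8) for v in (0, 1, 2)])
panel = interleave_views(views)
```

Each lenticule then refracts the columns beneath it toward different viewing angles, so each eye sees a different one of the N views.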


Advances in Computers | 2011

Advanced Applications of Virtual Reality

Jürgen P. Schulze; Han Suk Kim; Philip Weber; Andrew Prudhomme; Roger E. Bohn; Maurizio Seracini; Thomas A. DeFanti

In the first five years of virtual reality application research at the California Institute for Telecommunications and Information Technology (Calit2), we created numerous software applications for virtual environments. Calit2 has one of the most advanced virtual reality laboratories, with the five-walled StarCAVE and the world's first passive-stereo, LCD panel-based immersive virtual reality system, the NexCAVE. The combination of cutting-edge hardware, direct access to world-class researchers on the campus of UCSD, and Calit2's mission to bring the first two together to make new advances at the intersection of these disciplines enabled us to research the future of scientific virtual reality applications. This chapter reports on some of the most notable applications we developed.


WOOT'15 Proceedings of the 9th USENIX Conference on Offensive Technologies | 2015

Fast and vulnerable: a story of telematic failures

Ian D. Foster; Andrew Prudhomme; Karl Koscher; Stefan Savage


symposium on 3d user interfaces | 2012

Democratizing rendering for multiple viewers in surround VR systems

Jürgen P. Schulze; Daniel Acevedo; John Mangan; Andrew Prudhomme; Phi Nguyen; Philip Weber


Proceedings of SPIE | 2011

Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback

Todd Margolis; Thomas A. DeFanti; Gregory Dawe; Andrew Prudhomme; Jürgen P. Schulze; Steve Cutchin


Proceedings of SPIE | 2014

Scalable metadata environments (MDE): artistically impelled immersive environments for large-scale data exploration

Ruth West; Todd Margolis; Andrew Prudhomme; Jürgen P. Schulze; Iman Mostafavi; J. P. Lewis; Joachim Gossmann; Rajvikram Singh

Collaboration


Dive into Andrew Prudhomme's collaborations.

Top Co-Authors

Daniel J. Sandin (University of Illinois at Chicago)
Philip Weber (University of California)
Robert Kooima (Louisiana State University)
Steve Cutchin (King Abdullah University of Science and Technology)
Falko Kuester (University of California)
Gregory Dawe (University of California)
Todd Margolis (University of California)