Publication


Featured research published by Daniel J. Sandin.


International Conference on Computer Graphics and Interactive Techniques | 1997

The ImmersaDesk and Infinity Wall projection-based virtual reality displays

Marek Czernuszenko; Dave Pape; Daniel J. Sandin; Thomas A. DeFanti; Gregory Dawe; Maxine D. Brown

Virtual reality (VR) can be defined as interactive computer graphics that provides viewer-centered perspective, a large field of view, and stereo. Head-mounted displays (HMDs) and BOOMs™ achieve these features with small display screens which move with the viewer, close to the viewer's eyes. Projection-based displays [3, 7] supply these characteristics by placing large, fixed screens more distant from the viewer. The Electronic Visualization Laboratory (EVL) of the University of Illinois at Chicago has specialized in projection-based VR systems. EVL's projection-based VR display, the CAVE™ [2], premiered at the SIGGRAPH 92 conference. In this article we present two new, CAVE-derived, projection-based VR displays developed at EVL: the ImmersaDesk™ and the Infinity Wall™, a VR version of the PowerWall [9]. We describe the different requirements which led to their designs, and compare these systems to other VR devices.
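The viewer-centered perspective these displays require is typically obtained from an off-axis (asymmetric) frustum computed from the tracked head position. A minimal sketch of that geometry, assuming a flat screen in the z = 0 plane; function and parameter names are illustrative, not taken from the paper:

```python
def off_axis_frustum(eye, screen_lo, screen_hi, near):
    """Asymmetric frustum bounds for a tracked eye looking at an
    axis-aligned screen in the z = 0 plane.

    eye       -- (x, y, z) eye position, z > 0 in front of the screen
    screen_lo -- (x, y) lower-left corner of the screen
    screen_hi -- (x, y) upper-right corner of the screen
    Returns (left, right, bottom, top) at the near plane, as used by a
    glFrustum-style projection.
    """
    ex, ey, ez = eye
    # Similar triangles: scale screen extents from the screen plane
    # (distance ez from the eye) back to the near plane.
    s = near / ez
    left   = (screen_lo[0] - ex) * s
    right  = (screen_hi[0] - ex) * s
    bottom = (screen_lo[1] - ey) * s
    top    = (screen_hi[1] - ey) * s
    return left, right, bottom, top
```

When the eye moves off-center, the frustum becomes asymmetric while the screen rectangle stays fixed, which is what keeps the image correct from the tracked viewpoint.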


IEEE Transactions on Visualization and Computer Graphics | 2008

Advances in the Dynallax Solid-State Dynamic Parallax Barrier Autostereoscopic Visualization Display System

Tom Peterka; Robert Kooima; Daniel J. Sandin; Andrew E. Johnson; Jason Leigh; Thomas A. DeFanti

A solid-state dynamic parallax barrier autostereoscopic display mitigates some of the restrictions present in static barrier systems such as fixed view-distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. Moreover, Dynallax can output four independent eye channels when two viewers are present, and both head-tracked viewers receive an independent pair of left-eye and right-eye perspective views based on their position in 3D space. The display device is constructed by using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and a modulated virtual environment composed of two or four channels is rendered on the rear display. Dynallax was recently demonstrated in a small-scale head-tracked prototype system. This paper summarizes the concepts presented earlier, extends the discussion of various topics, and presents recent improvements to the system.
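The barrier parameters being varied are, at bottom, similar-triangle relations between the pixel plane, the barrier plane, and the viewer. A hedged sketch of the standard two-view relations (names and units are ours, not Dynallax's actual code):

```python
def barrier_period(pixel_pitch, gap, view_distance):
    """Two-view parallax barrier period for a viewer at view_distance.

    pixel_pitch   -- width of one display column (mm)
    gap           -- optical distance from barrier to pixel plane (mm)
    view_distance -- distance from viewer to the barrier (mm)

    By similar triangles, slits must repeat slightly more often than
    every two columns so that lines of sight from one eye through
    successive slits land on alternating columns.
    """
    return 2.0 * pixel_pitch * view_distance / (view_distance + gap)

def slit_offset(eye_x, gap, view_distance):
    """Lateral shift of the barrier pattern that steers the slits
    toward an eye at lateral position eye_x (head tracking)."""
    # A slit must lie on the line from the eye to the column it serves;
    # moving the eye shifts that crossing point on the barrier plane by
    # the fraction gap / (view_distance + gap) of the eye's motion.
    return eye_x * gap / (view_distance + gap)
```

A solid-state barrier can re-render this pattern every frame, which is what lets the period track a viewer who moves closer and the offset track lateral head motion.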


International Conference on Computer Graphics and Interactive Techniques | 1989

Ray tracing deterministic 3-D fractals

John C. Hart; Daniel J. Sandin; Louis H. Kauffman

As shown in 1982, Julia sets of quadratic functions as well as many other deterministic fractals exist in spaces of higher dimensionality than the complex plane. Originally a boundary-tracking algorithm was used to view these structures but required a large amount of storage space to operate. By ray tracing these objects, the storage facilities of a graphics workstation frame buffer are sufficient. A short discussion of a specific set of 3-D deterministic fractals precedes a full description of a ray-tracing algorithm applied to these objects. A comparison with the boundary-tracking method and applications to other 3-D deterministic fractals are also included.
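The ingredient that makes ray tracing such fractals tractable is a conservative distance bound of the form |z| log|z| / (2|z'|), which lets a ray step toward the set without overshooting it. A small illustrative sketch for quaternion Julia sets of z -> z^2 + c; this is our own simplification of the idea, not the paper's implementation:

```python
import math

def q_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_norm(a):
    return math.sqrt(sum(x * x for x in a))

def julia_distance(point, c, max_iter=64, bailout=1e8):
    """Lower bound on the distance from `point` to the Julia set of
    z -> z^2 + c, via the estimate |z| log|z| / (2 |z'|)."""
    z = point
    dz = (1.0, 0.0, 0.0, 0.0)          # running derivative z'
    for _ in range(max_iter):
        # z' -> 2 z z'  (scalar-magnitude approximation), z -> z^2 + c
        dz = tuple(2.0 * v for v in q_mul(z, dz))
        z = tuple(u + v for u, v in zip(q_mul(z, z), c))
        if q_norm(z) > bailout:
            break
    r = q_norm(z)
    # points that never escape are treated as on/inside the set
    return 0.5 * r * math.log(r) / q_norm(dz) if r > 1.0 else 0.0

def march(origin, direction, c, eps=1e-4, max_steps=200):
    """Sphere-trace a ray: step by the distance bound until it shrinks
    below eps (a hit) or the ray leaves the region of interest."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = julia_distance(p, c)
        if d < eps:
            return t
        t += d
        if t > 10.0:
            return None
    return t
```

Because the estimate is a lower bound, each step is guaranteed safe, so a frame buffer is the only storage the traversal needs.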


International Conference on Computer Graphics and Interactive Techniques | 2005

The Varrier™ autostereoscopic virtual reality display

Daniel J. Sandin; Todd Margolis; Jinghua Ge; Javier Girado; Tom Peterka; Thomas A. DeFanti

Virtual reality (VR) has long been hampered by the gear needed to make the experience possible; specifically, stereo glasses and tracking devices. Autostereoscopic display devices are gaining popularity by freeing the user from stereo glasses; however, few qualify as VR displays. The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) has designed and produced a large-scale, high-resolution, head-tracked, barrier-strip autostereoscopic display system that produces an immersive VR experience without requiring the user to wear any encumbrances. The resulting system, called Varrier, is a passive parallax barrier, 35-panel tiled display that produces a wide-field-of-view, head-tracked VR experience. This paper presents background material related to parallax barrier autostereoscopy, provides system configuration and construction details, examines the Varrier interleaving algorithms used to produce the stereo images, introduces calibration and testing, and discusses the camera-based tracking subsystem.


Proceedings of SPIE | 2013

CAVE2: a hybrid reality environment for immersive simulation and information analysis

Alessandro Febretti; Arthur Nishimoto; Terrance Thigpen; Jonas Talandis; Lance Long; Jd Pirtle; Tom Peterka; Alan Verlo; Maxine D. Brown; Dana Plepys; Daniel J. Sandin; Luc Renambot; Andrew E. Johnson; Jason Leigh

Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 megapixels (in stereoscopic 3D) or 74 megapixels in 2D, at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph, and VTK applications.


Central European Journal of Engineering | 2011

The future of the CAVE

Thomas A. DeFanti; Daniel Acevedo; Richard A. Ainsworth; Maxine D. Brown; Steven Matthew Cutchin; Gregory Dawe; Kai Doerr; Andrew E. Johnson; Chris Knox; Robert Kooima; Falko Kuester; Jason Leigh; Lance Long; Peter Otto; Vid Petrovic; Kevin Ponto; Andrew Prudhomme; Ramesh R. Rao; Luc Renambot; Daniel J. Sandin; Jürgen P. Schulze; Larry Smarr; Madhu Srinivasan; Philip Weber; Gregory Wickham

The CAVE, a walk-in virtual reality environment typically consisting of 4–6 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.


Presence: Teleoperators & Virtual Environments | 2007

Size-Constancy in the CAVE

Robert V. Kenyon; Daniel J. Sandin; Randall C. Smith; Richard R. Pawlicki; Thomas A. DeFanti

The use of virtual environments (VEs) for many research and commercial purposes relies on their ability to generate environments that faithfully reproduce the physical world. However, due to its limitations, a VE can have a number of flaws that adversely affect its use and believability. One of the more important aspects of this problem is whether the size of an object in the VE is perceived as it would be in the physical world. One of the fundamental phenomena of correct size perception is size-constancy: an object is perceived to be the same size regardless of its distance from the observer, in spite of the fact that the retinal size of the object shrinks with increasing distance. We examined size-constancy in the CAVE and found that size-constancy is a strong and dominant perception in our subject population when the test object is accompanied by surrounding environmental objects. Furthermore, size-constancy changes to visual-angle performance (i.e., perceived object size changes with distance from the subject) when these surrounding objects are removed from the scene. As previously described for the physical world, our results suggest that it is necessary to provide surrounding objects to aid in the determination of an object's depth and to elicit size-constancy in a VE. These results are discussed regarding their implications for viewing objects in projection-based VEs and the environments that play a role in the perception of object size in the CAVE.
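The size-constancy versus visual-angle distinction rests on the standard relation between physical size, viewing distance, and the angle the object subtends at the eye; a one-function sketch:

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle (degrees) subtended by an object of physical
    `size` viewed frontally at `distance` (same units)."""
    return math.degrees(2.0 * math.atan(size / (2.0 * distance)))
```

Under size-constancy a subject's size report stays fixed as the object recedes even though this angle shrinks; under visual-angle performance the report tracks the angle instead.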


IEEE Virtual Reality Conference | 2007

A GPU Sub-pixel Algorithm for Autostereoscopic Virtual Reality

Robert Kooima; Tom Peterka; Javier Girado; Jinghua Ge; Daniel J. Sandin; Thomas A. DeFanti

Autostereoscopic displays enable unencumbered immersive virtual reality, but at a significant computational expense. This expense impacts the feasibility of autostereo displays in high-performance, real-time interactive applications. A new autostereo rendering algorithm, named Autostereo Combiner, addresses this problem using the programmable vertex and fragment pipelines of modern graphics processing units (GPUs). The algorithm is applied to the Varrier, a large-scale, head-tracked, parallax barrier autostereo virtual reality platform. In this capacity, the Combiner algorithm has shown performance gains of 4x over traditional parallax barrier rendering algorithms. It has enabled high-performance rendering at sub-pixel scales, affording a 2x increase in resolution and a 1.4x improvement in visual acuity.
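Sub-pixel rendering here means the interleaving decision is made per R, G, or B channel rather than per whole pixel. A CPU-side sketch of that channel-level selection (the paper performs it in GPU fragment programs; names and parameters are ours):

```python
def interleave_row(left_row, right_row, period_subpixels, phase=0.0):
    """Combine one scanline of a two-view barrier display.

    left_row, right_row -- lists of (r, g, b) pixels, one per column
    period_subpixels    -- barrier period measured in sub-pixel widths
    phase               -- lateral barrier offset, in sub-pixel widths
    """
    out = []
    for x in range(len(left_row)):
        rgb = []
        for ch in range(3):                  # R, G, B sub-pixel index
            s = 3 * x + ch                   # absolute sub-pixel position
            # first half of each barrier period shows the left eye,
            # second half the right eye
            in_left_half = ((s + phase) % period_subpixels) < period_subpixels / 2.0
            src = left_row if in_left_half else right_row
            rgb.append(src[x][ch])
        out.append(tuple(rgb))
    return out
```

Deciding at the sub-pixel level triples the effective horizontal sampling of the barrier pattern relative to whole-pixel interleaving, which is where the resolution and acuity gains come from.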


Proceedings of the IEEE | 2013

Scalable Resolution Display Walls

Jason Leigh; Andrew E. Johnson; Luc Renambot; Tom Peterka; Byungil Jeong; Daniel J. Sandin; Jonas Talandis; Ratko Jagodic; Sungwon Nam; Hyejung Hur; Yiwen Sun

This article describes the progress since 2000 in research and development on 2-D and 3-D scalable-resolution display walls that are built by tiling individual lower-resolution flat-panel displays. It describes approaches and trends in display hardware construction, middleware architecture, and user-interaction design, and highlights examples of use cases and the benefits the technology has brought to their respective disciplines.


Proceedings of SPIE | 2001

Varrier autostereographic display

Daniel J. Sandin; Todd Margolis; Greg Dawe; Jason Leigh; Thomas A. DeFanti

The goal of this research is to develop a head-tracked, stereo virtual reality system utilizing plasma or LCD panels. This paper describes a head-tracked barrier auto-stereographic method that is optimized for real-time interactive virtual reality systems. In this method, a virtual barrier screen is created, simulating the physical barrier screen, and placed in the virtual world in front of the projection plane. An off-axis perspective projection of this barrier screen, combined with the rest of the virtual world, is projected from at least two viewpoints corresponding to the eye positions of the head-tracked viewer. During the rendering process, the simulated barrier screen effectively casts shadows on the projection plane. Since the different projection points cast shadows at different angles, the different viewpoints are spatially separated on the projection plane. These spatially separated images are projected into the viewer's space at different angles by the physical barrier screen. The flexibility of this computational process allows more complicated barrier screens than the parallel opaque lines typically used in barrier-strip auto-stereography. In addition, this method supports the focusing and steering of images for a user's given viewpoint, and allows for very wide angles of view. This method can produce an effective panel-based auto-stereo virtual reality system.
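The shadow-casting step reduces to projecting each virtual barrier edge from an eye point onto the projection plane. A hedged sketch of that projection, assuming the screen at z = 0 and the barrier a small gap in front of it (names are illustrative):

```python
def project_barrier_edge(eye_x, eye_dist, edge_x, gap):
    """Lateral screen coordinate of the 'shadow' a vertical barrier
    edge casts from a tracked eye.

    Screen plane at z = 0, barrier plane at z = gap, eye at z = eye_dist.
    """
    # line from (eye_x, eye_dist) through (edge_x, gap), evaluated at z = 0
    return eye_x + (edge_x - eye_x) * eye_dist / (eye_dist - gap)

def eye_shadow_intervals(eye_x, eye_dist, gap, pitch, duty, n):
    """Screen intervals lit for one eye through the first n barrier slits.

    pitch -- distance between successive slits on the barrier
    duty  -- open fraction of each period (slit width / pitch)
    """
    lit = []
    for k in range(n):
        a = project_barrier_edge(eye_x, eye_dist, k * pitch, gap)
        b = project_barrier_edge(eye_x, eye_dist, k * pitch + duty * pitch, gap)
        lit.append((a, b))
    return lit
```

Because the two eyes sit at different lateral positions, their lit intervals land on different screen columns, which is exactly the spatial separation the abstract describes.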

Collaboration


Dive into Daniel J. Sandin's collaborations.

Top Co-Authors

Andrew E. Johnson, University of Illinois at Chicago
Jason Leigh, University of Hawaii at Manoa
Maxine D. Brown, University of Illinois at Chicago
Louis H. Kauffman, University of Illinois at Chicago
Tom Peterka, Argonne National Laboratory
Robert Kooima, Louisiana State University
Javier Girado, University of Illinois at Chicago
Jinghua Ge, University of Illinois at Chicago
Dave Pape, University at Buffalo