
Publication


Featured research published by Robert Kooima.


IEEE Transactions on Visualization and Computer Graphics | 2008

Advances in the Dynallax Solid-State Dynamic Parallax Barrier Autostereoscopic Visualization Display System

Tom Peterka; Robert Kooima; Daniel J. Sandin; Andrew E. Johnson; Jason Leigh; Thomas A. DeFanti

A solid-state dynamic parallax barrier autostereoscopic display mitigates some of the restrictions present in static barrier systems such as fixed view-distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. Moreover, Dynallax can output four independent eye channels when two viewers are present, and both head-tracked viewers receive an independent pair of left-eye and right-eye perspective views based on their position in 3D space. The display device is constructed by using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and a modulated virtual environment composed of two or four channels is rendered on the rear display. Dynallax was recently demonstrated in a small-scale head-tracked prototype system. This paper summarizes the concepts presented earlier, extends the discussion of various topics, and presents recent improvements to the system.
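The real-time barrier variation described above rests on textbook parallax-barrier geometry: the barrier period and the barrier-to-display gap follow from similar triangles between the viewer's eyes and the pixel columns. A minimal sketch of that geometry is below; the function and parameter names are illustrative, not taken from the paper:

```python
def barrier_pitch(pixel_pitch, view_distance, gap):
    """Two-view parallax-barrier period from similar triangles.

    pixel_pitch   -- horizontal pixel pitch on the rear display (mm)
    view_distance -- viewer distance from the barrier plane (mm)
    gap           -- separation between barrier and display plane (mm)
    """
    # The barrier sits in front of the pixels, so its period must be
    # slightly smaller than two pixel pitches for alternating pixel
    # columns to converge at the two eyes.
    return 2.0 * pixel_pitch * view_distance / (view_distance + gap)

def optical_gap_for_eyes(pixel_pitch, view_distance, eye_separation):
    """Barrier-to-display gap so adjacent columns map to separate eyes."""
    # Magnification d/g projects one pixel pitch onto one eye separation.
    return pixel_pitch * view_distance / eye_separation
```

For example, with 0.3 mm pixels, a 600 mm view distance, and a 6 mm gap, the period comes out just under two pixel pitches, which is the usual two-view result; recomputing these values per frame as the tracked viewer moves is the essence of a dynamic barrier.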


Central European Journal of Engineering | 2011

The future of the CAVE

Thomas A. DeFanti; Daniel Acevedo; Richard A. Ainsworth; Maxine D. Brown; Steven Matthew Cutchin; Gregory Dawe; Kai Doerr; Andrew E. Johnson; Chris Knox; Robert Kooima; Falko Kuester; Jason Leigh; Lance Long; Peter Otto; Vid Petrovic; Kevin Ponto; Andrew Prudhomme; Ramesh R. Rao; Luc Renambot; Daniel J. Sandin; Jürgen P. Schulze; Larry Smarr; Madhu Srinivasan; Philip Weber; Gregory Wickham

The CAVE, a walk-in virtual reality environment typically consisting of 4–6 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.


IEEE Transactions on Visualization and Computer Graphics | 2009

Planetary-Scale Terrain Composition

Robert Kooima; Jason Leigh; Andrew E. Johnson; D. A. Roberts; Mark U. SubbaRao; Thomas A. DeFanti

Many interrelated planetary height map and surface image map data sets exist, and more data are collected each day. Broad communities of scientists require tools to compose these data interactively and explore them via real-time visualization. While related, these data sets are often unregistered with one another, having different projection, resolution, format, and type. We present a GPU-centric approach to the real-time composition and display of unregistered-but-related planetary-scale data. This approach employs a GPGPU process to tessellate spherical height fields. It uses a render-to-vertex-buffer technique to operate upon polygonal surface meshes in image space, allowing geometry processes to be expressed in terms of image processing. With height and surface map data processing unified in this fashion, a number of powerful composition operations may be uniformly applied to both. Examples include adaptation to nonuniform sampling due to projection, seamless blending of data of disparate resolution or transformation regardless of boundary, and the smooth interpolation of levels of detail in both geometry and imagery. Issues of scalability and precision are addressed, giving out-of-core access to giga-pixel data sources, and correct rendering at scales approaching one meter.
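The tessellation of spherical height fields can be pictured as sampling a height map over a parameterization of the sphere and displacing each vertex radially. Below is a CPU-side sketch of one common scheme, a normalized-cube parameterization; the paper's actual GPU render-to-vertex-buffer implementation and its parameterization may differ, and all names here are hypothetical:

```python
import math

def cube_to_sphere(face_point):
    """Project a point on the unit cube surface onto the unit sphere."""
    x, y, z = face_point
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def displace(unit_dir, radius, height):
    """Displace a unit sphere direction radially by a sampled height."""
    r = radius + height
    return tuple(c * r for c in unit_dir)

def tessellate_face(samples, radius, height_fn):
    """Generate a grid of displaced vertices for one cube face (+Z here).

    height_fn(u, v) stands in for a height-map lookup; on the GPU, this
    sampling and displacement would run in image space during a
    render-to-vertex-buffer pass.
    """
    verts = []
    for j in range(samples + 1):
        for i in range(samples + 1):
            u = 2.0 * i / samples - 1.0   # [-1, 1] across the face
            v = 2.0 * j / samples - 1.0
            d = cube_to_sphere((u, v, 1.0))
            verts.append(displace(d, radius, height_fn(u, v)))
    return verts
```

Expressing the displacement as a per-texel operation like this is what lets geometry processing be phrased as image processing, so the same composition operators apply to height and surface maps alike.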


IEEE Virtual Reality Conference | 2007

A GPU Sub-pixel Algorithm for Autostereoscopic Virtual Reality

Robert Kooima; Tom Peterka; Javier Girado; Jinghua Ge; Daniel J. Sandin; Thomas A. DeFanti

Autostereoscopic displays enable unencumbered immersive virtual reality, but at a significant computational expense. This expense impacts the feasibility of autostereo displays in high-performance real-time interactive applications. A new autostereo rendering algorithm named the Autostereo Combiner addresses this problem using the programmable vertex and fragment pipelines of modern graphics processing units (GPUs). This algorithm is applied to the Varrier, a large-scale, head-tracked, parallax barrier autostereo virtual reality platform. In this capacity, the Combiner algorithm has shown performance gains of 4x over traditional parallax barrier rendering algorithms. It has enabled high-performance rendering at sub-pixel scales, affording a 2x increase in resolution and showing a 1.4x improvement in visual acuity.
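Sub-pixel rendering here means assigning each R, G, and B sub-pixel to an eye channel independently, according to the barrier phase at that sub-pixel column. The following is a generic CPU sketch of sub-pixel barrier interleaving, not the paper's Combiner shader; the names and the simple half-period assignment rule are assumptions:

```python
def interleave_subpixels(left, right, period, offset):
    """Interleave two rendered views at sub-pixel granularity.

    left, right -- images as rows of (r, g, b) tuples, same dimensions
    period      -- barrier period, in sub-pixel columns
    offset      -- barrier phase, in sub-pixel columns

    Each sub-pixel column is assigned to whichever eye the barrier
    exposes there; a real implementation would evaluate this per
    fragment on the GPU rather than looping on the CPU.
    """
    out = []
    for lrow, rrow in zip(left, right):
        row = []
        for x, (lp, rp) in enumerate(zip(lrow, rrow)):
            pixel = []
            for c in range(3):                 # r, g, b sub-pixels
                col = 3 * x + c                # global sub-pixel column
                phase = (col + offset) % period
                # first half of the period shows the left eye's sub-pixel
                pixel.append(lp[c] if phase < period / 2 else rp[c])
            row.append(tuple(pixel))
        out.append(row)
    return out
```

Operating on sub-pixel columns rather than whole pixels is what yields the resolution gain the abstract cites, since each pixel contributes three independently steerable samples.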


IEEE Virtual Reality Conference | 2007

Dynallax: Solid State Dynamic Parallax Barrier Autostereoscopic VR Display

Tom Peterka; Robert Kooima; Javier Girado; Jinghua Ge; Daniel J. Sandin; Andrew E. Johnson; Jason Leigh; Jürgen P. Schulze; Thomas A. DeFanti

A novel barrier strip autostereoscopic (AS) display is demonstrated using a solid-state dynamic parallax barrier. A dynamic barrier mitigates restrictions inherent in static barrier systems such as fixed view distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system. Furthermore, users can switch between 3D and 2D modes by disabling the barrier. Dynallax is head-tracked, directing view channels to positions in space reported by a tracking system in real time. Such head-tracked parallax barrier systems have traditionally supported only a single viewer, but by varying the barrier period to eliminate conflicts between viewers, Dynallax presents four independent eye channels when two viewers are present. Each viewer receives an independent pair of left and right eye perspective views based on their position in 3D space. The display device is constructed using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and the rear display produces a modulated VR scene composed of two or four channels. A small-scale head-tracked prototype VR system is demonstrated.


Tangible and Embedded Interaction | 2010

Cartouche: conventions for tangibles bridging diverse interactive systems

Brygg Ullmer; Zachary Dever; Rajesh Sankaran; Cornelius Toole; Chase Freeman; Brooke Cassady; Cole Wiley; Mohamed Diabi; Alvin Wallace Jr.; Michael DeLatin; Blake Tregre; Kexi Liu; Srikanth Jandhyala; Robert Kooima; Chris Branton; Rod Parker

We describe an approach for a class of tangible interaction elements that are applicable across a broad variety of interactive systems. These tangibles share certain physical, visual, tagging, and software conventions, while fostering diversity in many aspects of design and function. Building on related techniques using paper and graspable artifacts as interactive embodiments of digital information, we propose several fixed and free parameters, present illustrative examples and applications, and discuss the resulting design space.


International Symposium on Parallel and Distributed Processing and Applications | 2013

Cultural heritage omni-stereo panoramas for immersive cultural analytics — From the Nile to the Hijaz

Neil Smith; Steve Cutchin; Robert Kooima; Richard A. Ainsworth; Daniel J. Sandin; Jürgen P. Schulze; Andrew Prudhomme; Falko Kuester; Thomas E. Levy; Thomas A. DeFanti

The digital imaging acquisition and visualization techniques described here provide a hyper-realistic stereoscopic spherical capture of cultural heritage sites. An automated dual-camera system is used to capture sufficient stereo digital images to cover a sphere or cylinder. The resulting stereo images are projected undistorted in VR systems, providing an immersive virtual environment in which researchers can collaboratively study the important textural details of an excavation or historical site. This imaging technique complements existing technologies such as LiDAR or SfM, providing more detailed textural information that can be used in conjunction with them for analysis and visualization. The advantages of this digital imaging technique for cultural heritage can be seen in its non-invasive and rapid capture of heritage sites for documentation, analysis, and immersive visualization. The technique is applied to several significant heritage sites in Luxor, Egypt, and in Saudi Arabia.


Virtual Reality Software and Technology | 2010

A multi-viewer tiled autostereoscopic virtual reality display

Robert Kooima; Andrew Prudhomme; Jürgen P. Schulze; Daniel J. Sandin; Thomas A. DeFanti

Recognizing the value of autostereoscopy for 3D displays in public contexts, we pursue the goal of large-scale, high-resolution, immersive virtual reality using lenticular displays. Our contributions include the scalable tiling of lenticular displays to large fields of view and the use of GPU image interleaving and application optimization for real-time performance. In this context, we examine several ways to improve group-viewing by combining user tracking with multi-view displays.


Future Generation Computer Systems | 2006

Personal Varrier: autostereoscopic virtual reality display for distributed scientific visualization

Tom Peterka; Daniel J. Sandin; Jinghua Ge; Javier Girado; Robert Kooima; Jason Leigh; Andrew E. Johnson; Marcus Thiebaux; Thomas A. DeFanti

As scientific data sets increase in size, dimensionality, and complexity, new high-resolution, interactive, collaborative networked display systems are required to view them in real time. Increasingly, the principles of virtual reality (VR) are being applied to modern scientific visualization. One of the tenets of VR is stereoscopic (stereo or 3D) display; however, the need to wear stereo glasses or other gear to experience the virtual world is encumbering and hinders other positive aspects of VR such as collaboration. Autostereoscopic (autostereo) displays present imagery in 3D without the need to wear glasses or other gear, but few qualify as VR displays. The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) has designed and built a single-screen version of its 35-panel tiled Varrier display, called Personal Varrier. Based on a static parallax barrier and the Varrier computational method, Personal Varrier provides a quality 3D autostereo experience in an economical, compact form factor. The system debuted at iGrid 2005 in San Diego, CA, accompanied by a suite of distributed and local scientific visualization and 3D teleconferencing applications. The CAVEwave National LambdaRail (NLR) network was vital to the success of the stereo teleconferencing.


Electronic Imaging | 2007

Evolution of the Varrier autostereoscopic VR display: 2001-2007

Tom Peterka; Robert Kooima; Javier Girado; Jinghua Ge; Daniel J. Sandin; Thomas A. DeFanti

Autostereoscopy (AS) is an increasingly valuable virtual reality (VR) display technology; indeed, the IS&T / SPIE Electronic Imaging Conference has seen rapid growth in the number and scope of AS papers in recent years. The first Varrier paper appeared at SPIE in 2001, and much has changed since then. What began as a single-panel prototype has grown to a full-scale VR autostereo display system, with a variety of form factors, features, and options. Varrier is a barrier strip AS display system that qualifies as a true VR display, offering a head-tracked ortho-stereo first-person interactive VR experience without the need for glasses or other gear to be worn by the user. Since Varrier's inception, new algorithmic and systemic developments have produced performance and quality improvements. Visual acuity has increased by a factor of 1.4X with new fine-resolution barrier strip linescreens and computational algorithms that support variable sub-pixel resolutions. Performance has improved by a factor of 3X using a new GPU shader-based sub-pixel algorithm that accomplishes in one pass what previously required three passes. The Varrier modulation algorithm that began as a computationally expensive task is now no more costly than conventional stereoscopic rendering. Interactive rendering rates of 60 Hz are now possible in Varrier for complex scene geometry on the order of 100K vertices, and performance is GPU bound, hence it is expected to continue improving with graphics card enhancements. Head tracking is accomplished with a neural network camera-based tracking system developed at EVL for Varrier. Multiple cameras capture subjects at 120 Hz and the neural network recognizes known faces from a database and tracks them in 3D space. New faces are trained and added to the database in a matter of minutes, and accuracy is comparable to commercially available tracking systems.
Varrier supports a variety of VR applications, including visualization of polygonal, ray traced, and volume rendered data. Both AS movie playback of pre-rendered stereo frames and interactive manipulation of 3D models are supported. Local as well as distributed computation is employed in various applications. Long-distance collaboration has been demonstrated with AS teleconferencing in Varrier. A variety of application domains such as art, medicine, and science have been exhibited, and Varrier exists in a variety of form factors from large tiled installations to smaller desktop forms to fit a variety of space and budget constraints. Newest developments include the use of a dynamic parallax barrier that affords features that were inconceivable with a static barrier.

Collaboration


Dive into Robert Kooima's collaboration.

Top Co-Authors

Andrew E. Johnson, University of Illinois at Chicago
Daniel J. Sandin, University of Illinois at Chicago
Jason Leigh, University of Hawaii at Manoa
Jinghua Ge, University of Illinois at Chicago
Tom Peterka, Argonne National Laboratory
Javier Girado, University of Illinois at Chicago
Luc Renambot, University of Illinois at Chicago