
Publication


Featured research published by Jinghua Ge.


International Conference on Computer Graphics and Interactive Techniques | 2005

The Varrier™ autostereoscopic virtual reality display

Daniel J. Sandin; Todd Margolis; Jinghua Ge; Javier Girado; Tom Peterka; Thomas A. DeFanti

Virtual reality (VR) has long been hampered by the gear needed to make the experience possible; specifically, stereo glasses and tracking devices. Autostereoscopic display devices are gaining popularity by freeing the user from stereo glasses; however, few qualify as VR displays. The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) has designed and produced a large-scale, high-resolution, head-tracked, barrier-strip autostereoscopic display system that produces a VR immersive experience without requiring the user to wear any encumbrances. The resulting system, called Varrier, is a passive parallax barrier 35-panel tiled display that produces a wide-field-of-view, head-tracked VR experience. This paper presents background material related to parallax barrier autostereoscopy, provides system configuration and construction details, examines the Varrier interleaving algorithms used to produce the stereo images, introduces calibration and testing, and discusses the camera-based tracking subsystem.
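
The interleaving step can be pictured as deciding, for each screen column, which eye's image is visible through the barrier. The sketch below is a minimal column-interleaving illustration in Python; the actual Varrier method renders a virtual barrier in scene space rather than applying a 2D mask, so the modulo rule, the 4-pixel period, and the function name here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def interleave_columns(left, right, period=4, duty=0.5):
    """Illustrative column interleaving for a parallax-barrier display.

    left, right : H x W x 3 arrays holding the two eye views.
    period      : barrier period in screen pixels (assumed value).
    duty        : fraction of each period assigned to the left-eye view.

    NOTE: a simplified stand-in, not the Varrier virtual-barrier algorithm.
    """
    assert left.shape == right.shape
    h, w, _ = left.shape
    out = np.empty_like(left)
    cols = np.arange(w)
    phase = (cols % period) / period          # position within one barrier period
    left_mask = phase < duty                  # columns visible to the left eye
    out[:, left_mask] = left[:, left_mask]
    out[:, ~left_mask] = right[:, ~left_mask]
    return out

# Usage: combine two rendered eye views into one interleaved frame.
left = np.zeros((4, 8, 3), dtype=np.uint8);  left[...] = (255, 0, 0)
right = np.zeros((4, 8, 3), dtype=np.uint8); right[...] = (0, 255, 0)
frame = interleave_columns(left, right, period=4)
```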


Eurographics | 2001

Adaptive networking for tele-immersion

Jason Leigh; Oliver Yu; Dan Schonfeld; Rashid Ansari; Eric He; A. M. Nayak; Jinghua Ge; Naveen K. Krishnaprasad; Kyoung Shin Park; Yongjoo Cho; Liujia Hu; Ray Fang; Alan Verlo; Linda Winkler; Thomas A. DeFanti

Tele-immersive applications possess an unusually broad range of networking requirements. As high-speed and Quality of Service-enabled networks emerge, it will become increasingly difficult for developers of tele-immersion applications, and networked applications in general, to take advantage of these enhanced services. This paper proposes an adaptive networking framework to ultimately allow applications to optimize their network utilization in step with advances in networking services. In working toward this goal, this paper presents a number of networking techniques for improving performance in tele-immersive applications and examines whether the Differentiated Services mechanism for network Quality of Service is suitable for tele-immersion.
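
One concrete piece of the Differentiated Services question is how an application marks its flows. The sketch below shows, in Python, a UDP socket whose outgoing packets carry a DSCP code point via the legacy IP_TOS option; the choice of Expedited Forwarding (46), the port, and the function name are assumptions for illustration, not the configuration evaluated in the paper.

```python
import socket

# Expedited Forwarding DSCP code point (46); the code point, the UDP socket,
# and the payload below are illustrative assumptions only.
DSCP_EF = 46

def open_marked_udp_socket(dscp=DSCP_EF):
    """Create a UDP socket whose outgoing packets carry the given DSCP value.

    The DSCP field occupies the upper six bits of the legacy IP TOS byte,
    so the value is shifted left by two before being written with IP_TOS.
    (IP_TOS marking is honored on Linux/macOS; other platforms may ignore it.)
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

# Usage: send tele-immersion state updates on a DiffServ-marked flow.
sock = open_marked_udp_socket()
sock.sendto(b"avatar-state-update", ("127.0.0.1", 9000))
```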


IEEE Virtual Reality Conference | 2007

A GPU Sub-pixel Algorithm for Autostereoscopic Virtual Reality

Robert Kooima; Tom Peterka; Javier Girado; Jinghua Ge; Daniel J. Sandin; Thomas A. DeFanti

Autostereoscopic displays enable unencumbered immersive virtual reality, but at a significant computational expense. This expense impacts the feasibility of autostereo displays in high-performance real-time interactive applications. A new autostereo rendering algorithm, named the Autostereo Combiner, addresses this problem using the programmable vertex and fragment pipelines of modern graphics processing units (GPUs). This algorithm is applied to the Varrier, a large-scale, head-tracked, parallax barrier autostereo virtual reality platform. In this capacity, the Combiner algorithm has shown performance gains of 4x over traditional parallax barrier rendering algorithms. It has enabled high-performance rendering at sub-pixel scales, affording a 2x increase in resolution and showing a 1.4x improvement in visual acuity.
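
To illustrate the sub-pixel idea: each R, G, and B sub-pixel sits at a slightly different horizontal position, so the left/right decision can be made three times per pixel instead of once. The numpy simulation below is a CPU stand-in for that selection, assuming sub-pixel offsets of 0, 1/3, and 2/3 of a pixel and a simple periodic barrier; it is not the GPU Combiner shader itself.

```python
import numpy as np

def subpixel_interleave(left, right, period_px=4.0, duty=0.5):
    """CPU simulation of per-sub-pixel view selection.

    The left/right choice is made per color channel at its own assumed
    horizontal offset, rather than once per whole pixel.  This illustrates
    the idea only; the actual Combiner runs in a GPU fragment shader.
    """
    h, w, _ = left.shape
    out = np.empty_like(left)
    for c, offset in enumerate((0.0, 1.0 / 3.0, 2.0 / 3.0)):  # R, G, B offsets
        x = np.arange(w) + offset                 # sub-pixel horizontal position
        phase = (x % period_px) / period_px       # position within barrier period
        take_left = phase < duty
        out[:, take_left, c] = left[:, take_left, c]
        out[:, ~take_left, c] = right[:, ~take_left, c]
    return out
```

A fragment shader can perform the same per-channel selection in a single pass over the interleaved framebuffer, which is broadly how a one-pass GPU implementation avoids the multiple passes of earlier approaches.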


IEEE Virtual Reality Conference | 2007

Dynallax: Solid State Dynamic Parallax Barrier Autostereoscopic VR Display

Tom Peterka; Robert Kooima; Javier Girado; Jinghua Ge; Daniel J. Sandin; Andrew E. Johnson; Jason Leigh; Jürgen P. Schulze; Thomas A. DeFanti

A novel barrier strip autostereoscopic (AS) display is demonstrated using a solid-state dynamic parallax barrier. A dynamic barrier mitigates restrictions inherent in static barrier systems such as fixed view distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system. Furthermore, users can switch between 3D and 2D modes by disabling the barrier. Dynallax is head-tracked, directing view channels to positions in space reported by a tracking system in real time. Such head-tracked parallax barrier systems have traditionally supported only a single viewer, but by varying the barrier period to eliminate conflicts between viewers, Dynallax presents four independent eye channels when two viewers are present. Each viewer receives an independent pair of left and right eye perspective views based on their position in 3D space. The display device is constructed using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and the rear display produces a modulated VR scene composed of two or four channels. A small-scale head-tracked prototype VR system is demonstrated.
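
The core of a dynamic barrier is that the barrier period is recomputed from the tracked head position instead of being fixed at manufacture. The sketch below applies the standard similar-triangles relation for a two-view parallax barrier; the pixel pitch and gap values are assumed for illustration, and the real Dynallax controller also varies duty cycle, barrier phase, and the multi-viewer period, none of which this sketch models.

```python
def barrier_period(view_dist_mm, pixel_pitch_mm=0.25, gap_mm=10.0):
    """Barrier period (mm) for a two-view parallax barrier at a given distance.

    Uses the common similar-triangles relation b = 2*p*z / (z + g), where
    p is the rear-display pixel pitch, z the tracked head distance from the
    barrier, and g the barrier-to-display gap.  The pitch and gap here are
    illustrative assumptions, not Dynallax's actual geometry.
    """
    z, p, g = view_dist_mm, pixel_pitch_mm, gap_mm
    return 2.0 * p * z / (z + g)

# As the tracked viewer approaches the display, the rendered barrier period
# is recomputed each frame instead of being fixed at manufacture time.
for z in (1500.0, 1000.0, 600.0):
    print(f"viewer at {z:.0f} mm -> barrier period {barrier_period(z):.4f} mm")
```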


Future Generation Computer Systems | 2006

Personal Varrier: autostereoscopic virtual reality display for distributed scientific visualization

Tom Peterka; Daniel J. Sandin; Jinghua Ge; Javier Girado; Robert Kooima; Jason Leigh; Andrew E. Johnson; Marcus Thiebaux; Thomas A. DeFanti

As scientific data sets increase in size, dimensionality, and complexity, new high-resolution, interactive, collaborative networked display systems are required to view them in real time. Increasingly, the principles of virtual reality (VR) are being applied to modern scientific visualization. One of the tenets of VR is stereoscopic (stereo or 3D) display; however, the need to wear stereo glasses or other gear to experience the virtual world is encumbering and hinders other positive aspects of VR such as collaboration. Autostereoscopic (autostereo) displays present imagery in 3D without the need to wear glasses or other gear, but few qualify as VR displays. The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) has designed and built a single-screen version of its 35-panel tiled Varrier display, called Personal Varrier. Based on a static parallax barrier and the Varrier computational method, Personal Varrier provides a quality 3D autostereo experience in an economical, compact form factor. The system debuted at iGrid 2005 in San Diego, CA, accompanied by a suite of distributed and local scientific visualization and 3D teleconferencing applications. The CAVEwave National LambdaRail (NLR) network was vital to the success of the stereo teleconferencing.


Computer Vision and Pattern Recognition | 2005

Camera-Based Automatic Calibration for the Varrier System

Jinghua Ge; Daniel J. Sandin; Tom Peterka; Todd Margolis; Thomas A. DeFanti

Varrier is a head-tracked, 35-panel tiled autostereoscopic display system produced by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC). Varrier produces autostereoscopic imagery through a combination of a physical parallax barrier and a virtual barrier, so that the stereoscopic images are directed correctly into the viewer's eyes. Since a small amount of rotation and translation between the physical and virtual barriers can cause large-scale effects, registration is critical for correct stereo viewing. The process is automated by examining image frames from two video cameras separated by the interocular distance as a simulation of human eyes. Three registration parameters for each panel are calibrated in the process. An arbitrary start condition is allowed, and a robust stopping criterion is used to end the process and report results. Instead of exhaustive three-dimensional searching, an efficient two-phase calibration method is introduced. The combination of a heuristic rough calibration and an adaptive fine calibration guarantees a fast search that converges on the best solution.
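
As a rough picture of the two-phase idea, the sketch below runs a coarse grid search over three registration parameters and then adaptively shrinks the search window around the best point until improvement falls below a tolerance. The score callback standing in for the camera measurement, the parameter ranges, and the stopping tolerance are hypothetical placeholders, not the metric or schedule used in the paper.

```python
import itertools

def two_phase_calibrate(score, coarse_ranges, steps=(5, 5, 5),
                        refine_iters=20, shrink=0.5, tol=1e-4):
    """Illustrative two-phase search over three registration parameters.

    score(params) is a hypothetical callback measuring, from the camera
    images, how cleanly the left/right channels are separated for a panel.
    Phase 1: coarse grid search over the given ranges.
    Phase 2: adaptive refinement that shrinks the window around the best
    point and stops when the improvement is negligible.
    """
    # Phase 1: heuristic rough calibration (coarse grid).
    axes = [[lo + i * (hi - lo) / (n - 1) for i in range(n)]
            for (lo, hi), n in zip(coarse_ranges, steps)]
    best = max(itertools.product(*axes), key=score)
    best_val = score(best)
    spans = [(hi - lo) / 2 for lo, hi in coarse_ranges]

    # Phase 2: adaptive fine calibration around the coarse optimum.
    for _ in range(refine_iters):
        candidates = [best]
        for axis in range(3):
            for sign in (-1, 1):
                p = list(best)
                p[axis] += sign * spans[axis] * shrink
                candidates.append(tuple(p))
        new_best = max(candidates, key=score)
        new_val = score(new_best)
        if new_val - best_val < tol:        # robust stopping criterion
            break
        best, best_val = new_best, new_val
        spans = [s * shrink for s in spans]
    return best, best_val
```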


Electronic Imaging | 2007

Evolution of the Varrier autostereoscopic VR display: 2001-2007

Tom Peterka; Robert Kooima; Javier Girado; Jinghua Ge; Daniel J. Sandin; Thomas A. DeFanti

Autostereoscopy (AS) is an increasingly valuable virtual reality (VR) display technology; indeed, the IS&T/SPIE Electronic Imaging Conference has seen rapid growth in the number and scope of AS papers in recent years. The first Varrier paper appeared at SPIE in 2001, and much has changed since then. What began as a single-panel prototype has grown to a full-scale VR autostereo display system, with a variety of form factors, features, and options. Varrier is a barrier strip AS display system that qualifies as a true VR display, offering a head-tracked ortho-stereo first-person interactive VR experience without the need for glasses or other gear to be worn by the user. Since Varrier's inception, new algorithmic and systemic developments have produced performance and quality improvements. Visual acuity has increased by a factor of 1.4X with new fine-resolution barrier strip linescreens and computational algorithms that support variable sub-pixel resolutions. Performance has improved by a factor of 3X using a new GPU shader-based sub-pixel algorithm that accomplishes in one pass what previously required three passes. The Varrier modulation algorithm that began as a computationally expensive task is now no more costly than conventional stereoscopic rendering. Interactive rendering rates of 60 Hz are now possible in Varrier for complex scene geometry on the order of 100K vertices, and performance is GPU bound, hence it is expected to continue improving with graphics card enhancements. Head tracking is accomplished with a neural network camera-based tracking system developed at EVL for Varrier. Multiple cameras capture subjects at 120 Hz, and the neural network recognizes known faces from a database and tracks them in 3D space. New faces are trained and added to the database in a matter of minutes, and accuracy is comparable to commercially available tracking systems. Varrier supports a variety of VR applications, including visualization of polygonal, ray traced, and volume rendered data. Both AS movie playback of pre-rendered stereo frames and interactive manipulation of 3D models are supported. Local as well as distributed computation is employed in various applications. Long-distance collaboration has been demonstrated with AS teleconferencing in Varrier. A variety of application domains such as art, medicine, and science have been exhibited, and Varrier exists in a variety of form factors from large tiled installations to smaller desktop forms to fit a variety of space and budget constraints. Newest developments include the use of a dynamic parallax barrier that affords features that were inconceivable with a static barrier.


International Conference on Virtual Reality | 2006

Point-based VR visualization for large-scale mesh datasets by real-time remote computation

Jinghua Ge; Daniel J. Sandin; Andrew E. Johnson; Tom Peterka; Robert Kooima; Javier Girado; Thomas A. DeFanti

High-speed interactive visualization of large-scale mesh datasets for desktop VR facilities is still a challenge because of the slow geometry setup and rasterization for huge numbers of small triangles. This paper presents a point-based virtual reality (VR) visualization pipeline for large-scale mesh datasets in a client-server architecture. Remote server computation, which samples the triangle mesh into discrete 2D grids, is steered by the client-end interactive frustum request. A point-based geometry is built up incrementally during run time on both server and client. By organizing the point model into a multi-resolution octree-based space partition hierarchy, the client-end visualization ensures fast view reconstruction by splatting the available points onto the screen with efficient occlusion culling and view-dependent level of detail (LOD) control. The combination of high-priority client-side local splatting and low-speed server-side view updating decreases the dependence on remote computation performance and network requirements for interactive VR visualization.
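
The view-dependent LOD control can be sketched as a traversal that stops descending once a node's projected size is small enough for the current viewpoint. The Python sketch below assumes a simple octree node layout and a size-over-distance error metric; the node fields, the threshold, and the frustum-test hook are illustrative assumptions rather than the paper's data structure.

```python
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    """One cell of the point hierarchy (fields are illustrative assumptions)."""
    center: tuple            # (x, y, z) cell center
    size: float              # cell edge length
    points: list = field(default_factory=list)   # representative point samples
    children: list = field(default_factory=list) # up to 8 child cells

def select_points(node, eye, lod_threshold=0.01, in_frustum=lambda n: True):
    """Collect point samples to splat for the current view.

    A node is drawn at its own (coarse) level when its projected size,
    approximated here as size / distance-to-eye, falls below the threshold;
    otherwise traversal descends to its children.  The frustum test is a
    caller-supplied predicate so culling stays outside this sketch.
    """
    if not in_frustum(node):
        return []                                 # frustum/occlusion culling hook
    dx = node.center[0] - eye[0]
    dy = node.center[1] - eye[1]
    dz = node.center[2] - eye[2]
    dist = max((dx * dx + dy * dy + dz * dz) ** 0.5, 1e-6)
    if not node.children or node.size / dist < lod_threshold:
        return node.points                        # coarse enough for this view
    out = []
    for child in node.children:
        out.extend(select_points(child, eye, lod_threshold, in_frustum))
    return out
```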


International Journal of Image and Graphics | 2008

A Point-Based Asynchronous Remote Visualization Framework for Real-Time Virtual Reality

Jinghua Ge; Daniel J. Sandin; Tom Peterka; Robert Kooima; Javier Girado; Andrew E. Johnson

High-speed interactive virtual reality (VR) exploration of scientific datasets is a challenge when the visualization is computationally expensive. This paper presents a point-based remote visualization pipeline for real-time VR with asynchronous client-server coupling. Steered by the client-end frustum request, the remote server samples the original dataset into 3D point samples and sends them back to the client for view updating. With every view-updating frame, the client incrementally builds up a point-based geometry under an octree-based space partition hierarchy. At every view-reconstruction frame, the client continuously splats the available points onto the screen with efficient occlusion culling and view-dependent level of detail (LOD) control. An experimental visualization framework with a server-end computer cluster and a client-end head-tracked autostereo VR desktop display is used to visualize large-scale mesh datasets and ray-traced 4D Julia set datasets. The overall VR view-reconstruction performance is about 15 fps, independent of the original dataset complexity.
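
The asynchronous coupling means the client never blocks on the server: one thread folds arriving point batches into the shared store while the render loop redraws from whatever is available at its own rate. The sketch below models that decoupling with a queue standing in for the network link; the thread structure, rates, and data are hypothetical, chosen only to illustrate the idea.

```python
import queue
import threading
import time

point_store = []                       # shared, incrementally growing point set
store_lock = threading.Lock()
incoming = queue.Queue()               # stands in for the server link

def ingest_loop():
    """View-updating side: fold server point batches into the shared store
    whenever they arrive, independent of the rendering rate."""
    while True:
        batch = incoming.get()
        if batch is None:
            break
        with store_lock:
            point_store.extend(batch)

def render_loop(frames=45, hz=15.0):
    """View-reconstruction side: redraw from whatever points are currently
    available, at a fixed rate, never blocking on the server."""
    for frame in range(frames):
        with store_lock:
            visible = len(point_store)   # stand-in for culling + splatting
        print(f"frame {frame:02d}: splatting {visible} points")
        time.sleep(1.0 / hz)

threading.Thread(target=ingest_loop, daemon=True).start()
incoming.put([(0.0, 0.0, 0.0)] * 1000)   # simulated server response
render_loop()
incoming.put(None)
```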


Archive | 2001

AGAVE: Access Grid Augmented Virtual Environment

Jason Leigh; Greg Dawe; Jonas Talandis; Eric He; Shalini Venkataraman; Jinghua Ge; Daniel J. Sandin; Thomas A. DeFanti

Collaboration


Dive into Jinghua Ge's collaborations.

Top Co-Authors

Daniel J. Sandin, University of Illinois at Chicago
Tom Peterka, Argonne National Laboratory
Javier Girado, University of Illinois at Chicago
Robert Kooima, Louisiana State University
Andrew E. Johnson, University of Illinois at Chicago
Jason Leigh, University of Hawaii at Manoa
Alan Verlo, University of Illinois at Chicago
Byungil Jeong, University of Illinois at Chicago
Eric He, University of Illinois at Chicago