Publications

Featured research published by Jürgen P. Schulze.


Central European Journal of Engineering | 2011

The future of the CAVE

Thomas A. DeFanti; Daniel Acevedo; Richard A. Ainsworth; Maxine D. Brown; Steven Matthew Cutchin; Gregory Dawe; Kai Doerr; Andrew E. Johnson; Chris Knox; Robert Kooima; Falko Kuester; Jason Leigh; Lance Long; Peter Otto; Vid Petrovic; Kevin Ponto; Andrew Prudhomme; Ramesh R. Rao; Luc Renambot; Daniel J. Sandin; Jürgen P. Schulze; Larry Smarr; Madhu Srinivasan; Philip Weber; Gregory Wickham

The CAVE, a walk-in virtual reality environment typically consisting of 4–6 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.


Proceedings of the National Academy of Sciences of the United States of America | 2008

High-precision radiocarbon dating and historical biblical archaeology in southern Jordan

Thomas E. Levy; Thomas Higham; Christopher Bronk Ramsey; Neil Smith; Erez Ben-Yosef; Mark D. Robinson; Stefan Münger; Kyle A. Knabb; Jürgen P. Schulze; Mohammad Najjar; Lisa Tauxe

Recent excavations and high-precision radiocarbon dating from the largest Iron Age (IA, ca. 1200–500 BCE) copper production center in the southern Levant demonstrate major smelting activities in the region of biblical Edom (southern Jordan) during the 10th and 9th centuries BCE. Stratified radiocarbon samples and artifacts were recorded with precise digital surveying tools linked to a geographic information system developed to control on-site spatial analyses of archaeological finds and model data with innovative visualization tools. The new radiocarbon dates push back by 2 centuries the accepted IA chronology of Edom. Data from Khirbat en-Nahas, and the nearby site of Rujm Hamra Ifdan, demonstrate the centrality of industrial-scale metal production during those centuries traditionally linked closely to political events in Edom's 10th century BCE neighbor, ancient Israel. Consequently, the rise of IA Edom is linked to the power vacuum created by the collapse of Late Bronze Age (LB, ca. 1300 BCE) civilizations and the disintegration of the LB Cypriot copper monopoly that dominated the eastern Mediterranean. The methodologies applied to the historical IA archaeology of the Levant have implications for other parts of the world where sacred and historical texts interface with the material record.


EGVE '02: Proceedings of the Workshop on Virtual Environments | 2002

Evaluation of a collaborative volume rendering application in a distributed virtual environment

Uwe Wössner; Jürgen P. Schulze; Steffen P. Walz; Ulrich Lang

In this paper, we present a collaborative volume rendering application which can be used in distributed virtual environments. The application allows the users to collaboratively view volumetric data and manipulate the transfer functions. Furthermore, 3D markers can be used to support communication. The collaborative setup includes a full duplex audio channel between the virtual environments. The developed software was evaluated with external users who were asked to solve tasks in two scenarios which resembled real-world situations from the medical field: a presentation and a time-constrained search task. For the evaluation, two 4-sided CAVE-like virtual environments were linked. The collaborative application was analyzed for both technical and social aspects.


Eurographics | 2003

Integrating pre-integration into the shear-warp algorithm

Jürgen P. Schulze; Martin Kraus; Ulrich Lang; Thomas Ertl

The shear-warp volume rendering algorithm is one of the fastest algorithms for volume rendering, but it achieves this rendering speed only by sacrificing interpolation between the slices of the volume data. Unfortunately, this restriction to bilinear interpolation within the slices severely compromises the resulting image quality. This paper presents the implementation of pre-integrated volume rendering in the shear-warp algorithm for parallel projection to overcome this drawback. A pre-integrated lookup table is used during compositing to perform a substantially improved interpolation between the voxels in two adjacent slices. We discuss the design and implementation of our extension of the shear-warp algorithm in detail. We also clarify the concept of opacity and color correction, and derive the required sampling rate of volume rendering with post-classification. Furthermore, the modified algorithm is compared to the traditional shear-warp rendering approach in terms of rendering speed and image quality.
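The pre-integrated lookup table mentioned in this abstract can be sketched numerically: for every pair of quantized scalar values (front, back), the opacity of a ray segment whose value varies linearly between them is tabulated once, then reused during compositing. The transfer function, table size, and segment sampling below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

N = 64                                  # quantized scalar values
scalars = np.linspace(0.0, 1.0, N)
tf_opacity = scalars ** 2               # hypothetical post-classification opacity

def preintegrate(d=1.0, steps=16):
    """Entry [sf, sb] holds the opacity of a ray segment of length d whose
    scalar value varies linearly from front value sf to back value sb."""
    lut = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            s = np.linspace(scalars[i], scalars[j], steps)
            # per-sample opacity, opacity-corrected for the sub-sample distance
            a = np.clip(np.interp(s, scalars, tf_opacity) * (d / steps), 0.0, 1.0)
            lut[i, j] = 1.0 - np.prod(1.0 - a)  # composite the segment
    return lut

lut = preintegrate()
# During compositing, each voxel pair in two adjacent slices is then
# resolved by a single table lookup instead of extra in-between sampling.
```

Because the segment samples are the same set regardless of traversal direction, the table is symmetric in this opacity-only sketch; a full version would also tabulate associated colors.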


IEEE Visualization | 2001

The perspective shear-warp algorithm in a virtual environment

Jürgen P. Schulze; Roland Niemeier; Ulrich Lang

Since the original paper of Lacroute and Levoy (1994), where the shear-warp factorization was also shown for perspective projections, a lot of work has been carried out using the shear-warp factorization with parallel projections. However, none of it has proved or improved the algorithm for the perspective projection. Also, the perspective shear-warp volume rendering algorithm is missing from Lacroute's VolPack library. This paper reports on an implementation of the perspective shear-warp algorithm, which includes enhancements for its application in immersive virtual environments. Furthermore, a mathematical proof for the correctness of the permutation of projection and warp is provided, which had so far been only a basic assumption of the perspective shear-warp factorization.
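The permutation of projection and warp that this abstract refers to is easiest to see in the parallel-projection case: shearing the volume so the viewing direction becomes axis-aligned, then projecting orthographically, lands each point at the same image position as projecting it directly along the view ray. A minimal numeric check (viewing direction and voxel position chosen arbitrarily, not from the paper):

```python
import numpy as np

v = np.array([0.3, -0.2, 1.0])          # arbitrary viewing direction, v[2] != 0

# Shear that maps the viewing direction onto the z axis.
S = np.array([[1.0, 0.0, -v[0] / v[2]],
              [0.0, 1.0, -v[1] / v[2]],
              [0.0, 0.0, 1.0]])
assert np.allclose(S @ v, [0.0, 0.0, v[2]])

p = np.array([1.0, 2.0, 3.0])           # an arbitrary voxel position
oblique = p - (p[2] / v[2]) * v         # project p along v onto the plane z = 0
# Orthographic projection of the sheared point gives the same (x, y):
assert np.allclose((S @ p)[:2], oblique[:2])
```

The remaining 2D warp then maps the sheared intermediate image to the final image; the paper's contribution is proving that this commutation also holds for the perspective case.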


IEEE Virtual Reality Conference | 2007

Dynallax: Solid State Dynamic Parallax Barrier Autostereoscopic VR Display

Tom Peterka; Robert Kooima; Javier Girado; Jinghua Ge; Daniel J. Sandin; Andrew E. Johnson; Jason Leigh; Jürgen P. Schulze; Thomas A. DeFanti

A novel barrier strip autostereoscopic (AS) display is demonstrated using a solid-state dynamic parallax barrier. A dynamic barrier mitigates restrictions inherent in static barrier systems such as fixed view distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system. Furthermore, users can switch between 3D and 2D modes by disabling the barrier. Dynallax is head-tracked, directing view channels to positions in space reported by a tracking system in real time. Such head-tracked parallax barrier systems have traditionally supported only a single viewer, but by varying the barrier period to eliminate conflicts between viewers, Dynallax presents four independent eye channels when two viewers are present. Each viewer receives an independent pair of left and right eye perspective views based on their position in 3D space. The display device is constructed using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and the rear display produces a modulated VR scene composed of two or four channels. A small-scale head-tracked prototype VR system is demonstrated.
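The "dynamically varying barrier parameters" in this abstract includes recomputing the barrier period as the tracked head moves. The standard similar-triangles geometry for a flat parallax barrier (eye plane, barrier plane, pixel plane) gives the sketch below; the function and its parameter names are illustrative textbook geometry, not Dynallax's actual code.

```python
def barrier_pitch(pixel_pitch, view_dist, gap, views=2):
    """Barrier period (same units as pixel_pitch) for a flat parallax
    barrier; `gap` is the barrier-to-pixel-plane distance. Derived from
    similar triangles between the eyes, the barrier, and the pixels."""
    return views * pixel_pitch * view_dist / (view_dist + gap)

# The period is slightly less than views * pixel_pitch, and it shrinks as
# the viewer moves closer -- which is why a static barrier fixes the view
# distance, and a dynamic barrier can re-render this period in real time.
near = barrier_pitch(0.25, 300.0, 5.0)   # mm; hypothetical panel numbers
far = barrier_pitch(0.25, 600.0, 5.0)
assert near < far < 2 * 0.25
```

Doubling `views` to four channels for two tracked viewers, as the abstract describes, changes the period in the same formula under this simplified model.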


Proceedings of SPIE | 2013

CalVR: an advanced open source virtual reality software framework

Jürgen P. Schulze; Andrew Prudhomme; Philip Weber; Thomas A. DeFanti

We developed CalVR because none of the existing virtual reality software frameworks offered everything we needed, such as cluster-awareness, multi-GPU capability, Linux compatibility, multi-user support, collaborative session support, and custom menu widgets. CalVR combines features from multiple existing VR frameworks into an open-source system, which we use in our laboratory on a daily basis, and for which dozens of VR applications have already been written at UCSD and at other research laboratories worldwide. In this paper, we describe the philosophy behind CalVR, its standard and unique features and functions, its programming interface, and its inner workings.


Human Factors in Computing Systems | 2009

Wetpaint: scraping through multi-layered images

Leonardo Bonanni; Xiao Xiao; Matthew Hockenberry; Praveen R. Subramani; Hiroshi Ishii; Maurizio Seracini; Jürgen P. Schulze

We introduce a technique for exploring multi-layered images by scraping arbitrary areas to determine meaningful relationships. Our system, called Wetpaint, uses perceptual depth cues to help users intuitively navigate between corresponding layers of an image, allowing a rapid assessment of changes and relationships between different views of the same area. Inspired by art diagnostic techniques, this tactile method could have distinct advantages in the general domain as shown by our user study. We propose that the physical metaphor of scraping facilitates the process of determining correlations between layers of an image because it compresses the process of planning, comparison and annotation into a single gesture. We discuss applications for geography, design, and medicine.


Eurographics | 2004

Medical applications of multi-field volume rendering and VR techniques

Joe Kniss; Jürgen P. Schulze; Uwe Wössner; Peter Winkler; Ulrich Lang; Charles D. Hansen

This paper reports on a new approach for visualizing multi-field MRI or CT datasets in an immersive environment with medical applications. Multi-field datasets combine multiple scanning modalities into a single 3D, multivalued dataset. In our approach, they are classified and rendered using real-time hardware-accelerated volume rendering, and displayed in a hybrid work environment consisting of a dual powerwall and a desktop PC. For practical reasons in this environment, the design and use of the transfer functions is subdivided into two steps, classification and exploration. The classification step is done at the desktop, taking advantage of the 2D mouse as a high-accuracy input device. The exploration process takes place on the powerwall. We present our new approach, describe the underlying implementation issues, report on our experiences with different immersive environments, and suggest ways it can be used for collaborative medical diagnosis and treatment planning.


International Conference on Conceptual Structures | 2015

Towards an Integrated Cyberinfrastructure for Scalable Data-driven Monitoring, Dynamic Prediction and Resilience of Wildfires

Ilkay Altintas; Jessica Block; Raymond A. de Callafon; Daniel Crawl; Charles Cowart; Amarnath Gupta; Mai H. Nguyen; Hans-Werner Braun; Jürgen P. Schulze; Michael J. Gollner; Arnaud Trouvé; Larry Smarr

Wildfires are critical for ecosystems in many geographical regions. However, our current urbanized existence in these environments is pushing the ecological balance into a different dynamic, leading to the biggest fires in history. Wildfire wind speeds and directions change in an instant, and first responders can only be effective if they take action as quickly as the conditions change. What is lacking in disaster management today is a system integration of real-time sensor networks, satellite imagery, near-real-time data management tools, wildfire simulation tools, and connectivity to emergency command centers before, during, and after a wildfire. As a first example of such an integrated system, the WIFIRE project is building an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction, and visualization of wildfire behavior. This paper summarizes the approach and early results of the WIFIRE project to integrate networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, with computational techniques in signal processing, visualization, modeling, and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns to predict a wildfire's rate of spread.

Collaboration

Top co-authors of Jürgen P. Schulze:

Philip Weber, University of California
Han Suk Kim, University of California
Daniel J. Sandin, University of Illinois at Chicago
Falko Kuester, University of California
Thomas E. Levy, University of California
Amit Chourasia, University of California