Dirk Reiners
University of Louisiana at Lafayette
Publications
Featured research published by Dirk Reiners.
Computers & Graphics | 2004
Marcus Roth; Gerrit Voss; Dirk Reiners
Support for multi-threaded applications in current scene graphs is very limited, if it exists at all. This work presents an approach for a very general multi-threading framework that allows complete separation of threads without complete replication of data. It also supports extension to clusters, for both sort-first and sort-last rendering configurations. The described concepts have been implemented in the OpenSG scene graph and are widely and successfully used.
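The core idea, separating threads without replicating the whole scene graph, can be illustrated with a small sketch. This is a hypothetical simplification, not OpenSG's actual API: each field keeps one copy per thread "aspect", and a dirty set records which copies need synchronizing, so only modified data is duplicated.

```cpp
// Minimal sketch (illustrative, not OpenSG's API): per-aspect field copies
// plus a change record, so syncing touches only fields that were written.
#include <array>
#include <cstddef>
#include <iostream>
#include <set>

constexpr std::size_t kNumAspects = 2; // one slot per concurrent thread

template <typename T>
class AspectField {
public:
    void set(std::size_t aspect, const T& value) {
        copies_[aspect] = value;
        dirty_.insert(aspect); // remember who wrote, for the later sync
    }
    const T& get(std::size_t aspect) const { return copies_[aspect]; }

    // Propagate the writer's copy to another aspect (the "sync" step).
    void sync(std::size_t from, std::size_t to) {
        if (dirty_.count(from)) copies_[to] = copies_[from];
    }
private:
    std::array<T, kNumAspects> copies_{};
    std::set<std::size_t> dirty_;
};

int main() {
    AspectField<float> scale;
    scale.set(0, 2.0f); // application thread writes aspect 0
    scale.sync(0, 1);   // render thread pulls changes into aspect 1
    std::cout << scale.get(1) << '\n'; // prints 2
}
```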
Virtual Reality | 2011
Steven A. White; Mores Prachyabrued; Terrence L. Chambers; Christoph W. Borst; Dirk Reiners
The simulated MIG lab (sMIG) is a training simulator for Metal Inert Gas (MIG) welding. It is based on commercial off-the-shelf (COTS) components and targeted at familiarizing beginning students with MIG equipment and the best practices to follow to become competent and effective MIG welders. To do this, it simulates the welding process as realistically as possible, using standard welding hardware components (helmet, gun) for input, and head-tracking, a 3D-capable low-cost monitor, and standard speakers for output. We developed a simulation that generates realistic audio and visuals based on numerical heat transfer methods and verified its accuracy against real welds. sMIG runs in real time, producing a realistic, interactive, and immersive welding experience while maintaining a low installation cost. In addition to being realistic, the system provides instant feedback beyond what is possible in a traditional lab. This helps students avoid learning (and having to unlearn) incorrect movement patterns.
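The paper's solver is not reproduced here, but real-time weld simulations of this kind typically build on grid-based heat diffusion. A minimal sketch, assuming an explicit finite-difference discretization (grid layout and parameter names are illustrative assumptions):

```cpp
// Hedged sketch: one explicit finite-difference step for the 2D heat
// equation dT/dt = alpha * laplacian(T). A real weld simulator would add a
// moving heat source under the arc and temperature-dependent material data.
#include <cstddef>
#include <vector>

void heatStep(std::vector<double>& T, std::size_t nx, std::size_t ny,
              double alpha, double dt, double dx) {
    std::vector<double> next(T);
    const double r = alpha * dt / (dx * dx); // stability requires r <= 0.25
    for (std::size_t y = 1; y + 1 < ny; ++y)
        for (std::size_t x = 1; x + 1 < nx; ++x) {
            const std::size_t i = y * nx + x;
            next[i] = T[i] + r * (T[i - 1] + T[i + 1] +
                                  T[i - nx] + T[i + nx] - 4.0 * T[i]);
        }
    T.swap(next); // boundary cells stay fixed (Dirichlet boundary)
}
```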
Proceedings of the workshop on Virtual environments 2003 | 2003
Wolfram Kresse; Dirk Reiners; Christian Knöpfle
Digital projectors have a significant advantage over CRTs for IPT setups: brightness. But they also have a number of disadvantages, one of which is color consistency. This problem is exacerbated when using the Infitec method for stereo separation, which in itself has some strong advantages for CAVE and tiled-wall setups. In this paper we describe a method for color and brightness correction of multi-projector display systems. The method is used in two new projection systems currently under construction at Fraunhofer-IGD: the HEyeWall and the Digital CAVE. The HEyeWall is the first stereo-capable tiled display worldwide. The Digital CAVE is the first CAVE with digital projectors and stereo separation based on Infitec™. In this paper we present these new IPTs in more detail and report our experience with digital projectors. To calibrate all the involved projectors, photometric measurements of the different projectors are used to calculate a common gamut in a linear colorspace. Input colors are mapped into this gamut and from there into each individual projector's colorspace. This method makes it possible to adjust the rendering output of two or more projectors with different color gamuts in such a way that the projected images are photometrically calibrated. Since the correction has to be done for each pixel, a straightforward implementation would be very slow and far from real time. Consequently, we outline a method to improve performance and overcome this limitation.
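The per-pixel correction reduces to linear algebra: a color passes through one matrix into the shared gamut and through another into the target projector's device space. A hedged sketch of that idea (matrix names and the clamping policy are assumptions, not the paper's code):

```cpp
// Illustrative per-pixel gamut correction: input RGB -> common linear gamut
// -> individual projector colorspace, each step a 3x3 matrix multiply.
#include <algorithm>
#include <array>

using Mat3 = std::array<std::array<float, 3>, 3>;
using Vec3 = std::array<float, 3>;

Vec3 mul(const Mat3& m, const Vec3& v) {
    return { m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
             m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
             m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2] };
}

// toCommon: input RGB -> shared linear gamut; toDevice: shared gamut ->
// this projector's RGB. Both would be derived from photometric measurements.
Vec3 correct(const Vec3& rgb, const Mat3& toCommon, const Mat3& toDevice) {
    Vec3 c = mul(toDevice, mul(toCommon, rgb));
    for (float& ch : c) ch = std::clamp(ch, 0.0f, 1.0f); // stay in gamut
    return c;
}
```

Running this in software for every pixel of every frame is exactly the naive implementation the paper calls too slow, hence the need for an accelerated variant.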
ieee virtual reality conference | 2009
Steven A. White; Mores Prachyabrued; Dhruva Baghi; Dirk Reiners; Christoph W. Borst; Terry Chambers; Amit Aglawe
The goal of this project is to develop a training system that simulates the welding process in real time and gives feedback that keeps beginning welders from learning wrong motion patterns, and that a teacher can use to analyze the process afterwards. The system is based mainly on COTS components: a standard PC with a dual-core CPU and a mid-range NVIDIA graphics card is sufficient. Input is done with a regular welding gun to allow realistic training. The gun is tracked by an OptiTrack system with three FLEX:V100 cameras. The same system is also used to track a regular welding helmet, which was chosen over glasses for robustness, to get accurate eye positions for display. The display itself is a Zalman Trimon stereo monitor laid out horizontally. The software is designed around a main simulation component that solves heat conduction on a grid of simulation points using local Gauss-Seidel elimination.
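The abstract names the numerical method but not its details. A minimal sketch of one Gauss-Seidel sweep for an implicit heat-conduction step on a 2D grid, with grid layout and boundary handling as illustrative assumptions:

```cpp
// Hedged sketch: Gauss-Seidel relaxation for the implicit (backward Euler)
// heat step (1 + 4r) T[i] - r * (neighbors) = Told[i]. Updating T in place
// reuses already-updated neighbors, which is what makes it Gauss-Seidel.
#include <cstddef>
#include <vector>

void gaussSeidelSweep(std::vector<double>& T, const std::vector<double>& Told,
                      std::size_t nx, std::size_t ny, double r) {
    // Repeated sweeps converge toward the solution of the implicit system.
    for (std::size_t y = 1; y + 1 < ny; ++y)
        for (std::size_t x = 1; x + 1 < nx; ++x) {
            const std::size_t i = y * nx + x;
            T[i] = (Told[i] + r * (T[i - 1] + T[i + 1] +
                                   T[i - nx] + T[i + nx])) / (1.0 + 4.0 * r);
        }
}
```

Unlike the explicit step, this implicit form stays stable for large time steps, which matters for holding real-time rates.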
ieee virtual reality conference | 2007
Jan P. Springer; Stephan Beck; Felix Weiszig; Dirk Reiners; Bernd Froehlich
We introduce a new concept for improved interaction with complex scenes: multi-frame rate rendering and display. Multi-frame rate rendering produces a multi-frame rate display by optically or digitally compositing the results of asynchronously running image generators. Interactive parts of a scene are rendered at the highest possible frame rates while the rest of the scene is rendered at regular frame rates. The composition of image components generated with different update rates may cause certain visual artifacts, which can be partially overcome with our rendering techniques. The results of a user study confirm that multi-frame rate rendering can significantly improve interaction performance, while the slight visual artifacts are either not recognized or gladly tolerated by users. Overall, digital composition shows the most promising results, since it introduces the fewest artifacts, although it requires transferring frame buffer content between image generators.
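Digital composition amounts to merging two asynchronously produced color-plus-depth image pairs. A hedged sketch of the per-pixel merge (structure names are illustrative; the actual system composites framebuffers on the GPU):

```cpp
// Illustrative depth-based composition of a fast (interactive) layer and a
// slow (static-scene) layer: whichever fragment is nearer wins the pixel.
#include <cstdint>
#include <vector>

struct Layer {
    std::vector<uint32_t> color; // packed RGBA per pixel
    std::vector<float>    depth; // per-pixel depth
};

void composite(const Layer& fast, const Layer& slow, Layer& out) {
    out.color.resize(fast.color.size());
    out.depth.resize(fast.depth.size());
    for (std::size_t i = 0; i < out.color.size(); ++i) {
        const bool fastWins = fast.depth[i] < slow.depth[i];
        out.color[i] = fastWins ? fast.color[i] : slow.color[i];
        out.depth[i] = fastWins ? fast.depth[i] : slow.depth[i];
    }
}
```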
IEEE Computer Graphics and Applications | 2009
Robert W. Lindeman; Dirk Reiners; Anthony Steed
This article reports on the experience of conducting the program committee (PC) meeting for the IEEE VR 2009 conference in Second Life. More than 50 PC members from around the globe met virtually over a two-day period to decide which papers to accept for presentation at the conference. Survey responses from 42 of the PC members indicate generally positive feelings toward this meeting format, although several alterations will help improve interaction between attendees in future meetings.
BMC Bioinformatics | 2010
Ming Jia; Suh-Yeon Choi; Dirk Reiners; Eve Syrkin Wurtele; Julie A. Dickerson
Background: Linking high-throughput experimental data with biological networks is a key step for understanding complex biological systems. Currently, visualization tools for large metabolic networks often result in a dense web of connections that is difficult to interpret biologically. The MetNetGE application organizes and visualizes biological networks in a meaningful way to improve performance and biological interpretability. Results: MetNetGE is an interactive visualization tool based on the Google Earth platform. MetNetGE features novel visualization techniques for displaying pathway and ontology information. Instead of simply showing hundreds of pathways in a complex graph, MetNetGE gives an overview of the network based on the hierarchical pathway ontology, using a novel layout called the Enhanced Radial Space-Filling (ERSF) approach that summarizes the network compactly. The non-tree edges in the pathway or gene ontology, which represent pathways or genes that belong to multiple categories, are linked using orbital connections in a third dimension. Biologists can easily identify highly activated pathways or gene ontology categories by mapping summary experiment statistics, such as coefficient of variation and overrepresentation values, onto the visualization. After identifying such pathways, biologists can focus on the corresponding region to explore detailed pathway structure and experimental data in an aligned 3D tiered layout. In this paper, the use of MetNetGE is illustrated with pathway diagrams and data from E. coli and Arabidopsis. Conclusions: MetNetGE is a visualization tool that organizes biological networks according to a hierarchical ontology structure. The ERSF technique assigns attributes in 3D space, such as color, height, and transparency, to any ontological structure. For hierarchical data, the novel ERSF layout enables the user to identify pathways or categories that are differentially regulated in particular experiments. MetNetGE also displays complex biological pathways in an aligned 3D tiered layout for exploration.
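The radial space-filling idea that ERSF extends can be sketched in a few lines: each ontology node receives an angular wedge proportional to the size of its subtree, children subdivide their parent's wedge, and depth determines radius. This is a hedged illustration of that general layout family; ERSF's 3D orbital edges for non-tree links are not shown.

```cpp
// Illustrative radial space-filling layout: wedge size ~ subtree leaf count,
// children split the parent's wedge, tree depth maps to ring radius.
#include <cstddef>
#include <vector>

struct Node {
    std::vector<Node> children;
    double angleStart = 0, angleEnd = 0, radius = 0; // computed layout
};

std::size_t leafCount(const Node& n) {
    if (n.children.empty()) return 1;
    std::size_t c = 0;
    for (const Node& ch : n.children) c += leafCount(ch);
    return c;
}

void layout(Node& n, double a0, double a1, double depth) {
    n.angleStart = a0; n.angleEnd = a1; n.radius = depth;
    const double total = static_cast<double>(leafCount(n));
    double a = a0;
    for (Node& ch : n.children) { // wedge proportional to subtree size
        const double span = (a1 - a0) * leafCount(ch) / total;
        layout(ch, a, a + span, depth + 1.0);
        a += span;
    }
}
```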
ieee aerospace conference | 2010
Dioselin Courter; Jan P. Springer; Carsten Neumann; Carolina Cruz-Neira; Dirk Reiners
There are a number of 3D applications that have shown the benefits of using visualization techniques for virtual prototyping and education in aerospace engineering. However, these applications typically run only on desktop computers, and user interaction is limited to mouse and keyboard. Virtual reality technologies provide a richer user experience by combining stereoscopic images with interactive, multi-sensory, and viewer-centered environments. We present an interactive and immersive walk-through application of a space station that can be configured and executed on multiple operating systems and platforms. The hardware setup may vary for each platform: some may be fully immersive environments with multiple projection screens that surround the user and provide spatial tracking, while others may only provide a single PC equipped with active stereo graphics output. Even a laptop using only monoscopic images is supported. The application has been tested in our omni-directional treadmill system, which includes spatial tracking of the user's head and runs on a graphics cluster that drives three projection screens around the treadmill. This kind of immersion, using stereo projection and correctly scaled structures, allows for a better sense of spatial relationships, because users receive bio-mechanical feedback while navigating by walking. For desktop setups, the software can be configured to run on a single screen with a game controller for navigation.
ieee virtual reality conference | 2008
Jan P. Springer; Christopher Lux; Dirk Reiners; Bernd Froehlich
Multi-frame rate rendering is a parallel rendering technique that renders interactive parts of the scene on one graphics card while the rest of the scene is rendered asynchronously on a second graphics card. The resulting color and depth images of both render processes are composited and displayed. This paper presents advanced multi-frame rate rendering techniques that remove limitations of the original approach and reduce artifacts. The interactive manipulation of light sources and their parameters affects the entire scene; our multi-GPU deferred shading splits the rendering task into a rasterization pass and a lighting pass and distributes the passes to the appropriate graphics cards, enabling light manipulation at high frame rates independent of the geometry complexity of the scene. We also developed a parallel volume rendering technique that allows the manipulation of objects inside a translucent volume at high frame rates. Due to the asynchronous nature of multi-frame rate rendering, artifacts may occur during the migration of objects from the slow to the fast graphics card, and vice versa. We show how proper state management can be used to avoid these artifacts almost completely. These techniques were developed in the context of a single-system multi-GPU setup, which considerably simplifies the implementation and increases performance.
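Why deferred shading decouples light edits from geometry complexity: the rasterization pass writes a fixed-size G-buffer once, and the lighting pass only reads that buffer. A hedged CPU-side sketch of the lighting pass (simple Lambertian shading; struct names are illustrative assumptions, and the paper's version runs on the GPU):

```cpp
// Illustrative deferred lighting pass: shade each G-buffer pixel from a
// point light. Cost depends on pixel count, not on scene triangle count.
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct GPixel { std::array<float,3> pos, normal, albedo; };
struct Light  { std::array<float,3> pos, color; };

std::array<float,3> shade(const GPixel& g, const Light& l) {
    std::array<float,3> d{ l.pos[0]-g.pos[0], l.pos[1]-g.pos[1], l.pos[2]-g.pos[2] };
    float len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]) + 1e-9f; // avoid /0
    for (float& c : d) c /= len;                                      // light direction
    float ndotl = std::max(0.0f, g.normal[0]*d[0] + g.normal[1]*d[1] + g.normal[2]*d[2]);
    return { g.albedo[0]*l.color[0]*ndotl,
             g.albedo[1]*l.color[1]*ndotl,
             g.albedo[2]*l.color[2]*ndotl };
}

// Re-running only this pass is what makes light edits cheap: the G-buffer
// produced by the (slower) rasterization pass is simply reused.
void lightingPass(const std::vector<GPixel>& gbuffer, const Light& l,
                  std::vector<std::array<float,3>>& out) {
    out.resize(gbuffer.size());
    for (std::size_t i = 0; i < gbuffer.size(); ++i) out[i] = shade(gbuffer[i], l);
}
```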
IEEE Transactions on Visualization and Computer Graphics | 2011
Malcolm Hutson; Dirk Reiners
Several critical limitations exist in the currently available tracking technologies for fully enclosed virtual reality (VR) systems. While several 6-DOF tracking projects such as Hedgehog have successfully demonstrated excellent accuracy, precision, and robustness within moderate budgets, these projects still include hardware elements that can interfere with the user's visual experience. The objective of this project is to design a tracking solution for fully enclosed VR displays that achieves performance comparable to available commercial solutions but without any artifacts that can obscure the user's view. JanusVF is a tracking solution based on cooperation between the hardware sensors and the software rendering system. A small, high-resolution camera is worn on the user's head, but faces backward (a 180-degree rotation about vertical from the user's perspective). After acquisition of the initial state, the VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene. These virtual markers are only drawn behind the user, in view of the camera. The fiducials are tracked by ARToolKitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter algorithm to update the head pose. Experiments analyzing accuracy, precision, and latency in a six-sided CAVE-like system show performance comparable to alternative commercial technologies.
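SCAAT's key move is to fold every individual fiducial observation into a Kalman filter immediately, rather than waiting to assemble a complete pose measurement. A hedged sketch reduced to a single scalar state dimension (the real filter tracks a full 6-DOF pose; the noise values here are made up for illustration):

```cpp
// Illustrative SCAAT-style update: each sighting of one fiducial triggers
// its own Kalman predict/update step on the (here 1D) state.
#include <iostream>

struct Scaat1D {
    double x = 0;    // state estimate (e.g. one pose component)
    double p = 1;    // estimate variance
    double q = 0.01; // process noise added per step (assumed)
    double r = 0.05; // measurement noise (assumed)

    void step(double z) {       // one constraint = one scalar measurement z
        p += q;                 // predict: uncertainty grows over time
        double k = p / (p + r); // Kalman gain
        x += k * (z - x);       // update toward the measurement
        p *= (1.0 - k);         // uncertainty shrinks after the update
    }
};

int main() {
    Scaat1D f;
    for (double z : {1.0, 1.1, 0.9, 1.0}) f.step(z); // four fiducial sightings
    std::cout << f.x << '\n'; // estimate converges toward ~1.0
}
```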