Publication


Featured research published by Reinhold Behringer.


IEEE Virtual Reality Conference | 1999

Registration for outdoor augmented reality applications using computer vision techniques and hybrid sensors

Reinhold Behringer

Registration for outdoor Augmented Reality (AR) systems cannot rely on the methods developed for indoor use (e.g., magnetic tracking, fiducial markers). Although GPS and the earth's magnetic field can be used to obtain a rough estimate of position and orientation, the precision of this registration method is not high enough for a satisfactory AR overlay. Computer vision methods can improve the registration precision by tracking visual cues whose real-world positions are known. We have developed a system that can exploit horizon silhouettes to improve the orientation precision of a camera aligned with the user's view. It has been shown that this approach can provide registration even as a stand-alone system, although the usual limitations of computer vision prohibit its use under unfavorable conditions. This paper describes the approach of registration using horizon silhouettes. Based on the known observer location (from GPS), the 360-degree silhouette is computed from a digital elevation map database. Registration is achieved when the extracted visual horizon silhouette segment is matched onto this predicted silhouette. Significant features (mountain peaks) are cues which provide hypotheses for the match. Several criteria are tested to find the best matching hypothesis. The system is implemented on a PC under Windows NT. Results are shown in this paper.
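
The core of the method is a one-dimensional matching problem: slide the observed horizon segment along the DEM-predicted 360-degree silhouette and keep the azimuth with the lowest shape error. The following is a minimal sketch of that step, not the authors' implementation; the one-sample-per-degree profiles and the mean-subtracted sum-of-squares score are assumptions for illustration.

```python
import numpy as np

def match_horizon(predicted: np.ndarray, observed: np.ndarray) -> int:
    """Azimuth (degrees) at which an observed horizon segment best
    matches the DEM-predicted 360-degree silhouette (elevation angle
    per degree of azimuth)."""
    n, m = len(predicted), len(observed)
    best_az, best_err = 0, np.inf
    for az in range(n):
        # Window wraps around the full 360-degree profile.
        window = predicted[np.arange(az, az + m) % n]
        # Compare shapes after mean removal to tolerate a pitch offset.
        err = np.sum(((window - window.mean()) - (observed - observed.mean())) ** 2)
        if err < best_err:
            best_az, best_err = az, err
    return best_az

# Synthetic example: a 'mountain peak' feature near azimuth 120 degrees.
predicted = np.zeros(360)
predicted[115:125] = np.hanning(10) * 5.0
observed = predicted[np.arange(110, 140) % 360]  # 30-degree camera view
print(match_horizon(predicted, observed))        # -> 110
```

In the paper's terms, the mountain-peak hypotheses would prune this exhaustive scan to a handful of candidate azimuths before scoring.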


International Symposium on Mixed and Augmented Reality | 2003

3D audio augmented reality: implementation and experiments

Venkataraman Sundareswaran; Kenneth Wang; Steven Chen; Reinhold Behringer; Joshua McGee; Clement Tam; Pavel Zahorik

Augmented reality (AR) presentations may be visual or auditory. Auditory presentation has the potential to provide hands-free and visually non-obstructing cues. Recently, we have developed a 3D audio wearable system that can provide alerts and informational cues to a mobile user in such a manner that they appear to emanate from specific locations in the user's environment. In order to study registration errors in 3D audio AR presentations, we conducted a perceptual training experiment in which visual and auditory cues were presented to observers. The results of this experiment suggest that perceived registration errors may be reduced through head movement and through training presentations that include both visual and auditory cues.
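
The abstract does not detail the rendering method, but a central cue any 3D audio system manipulates is the interaural time difference (ITD). As a hedged illustration, the classic Woodworth spherical-head approximation gives the ITD for a distant source at a given azimuth; the head radius and speed of sound below are assumed typical values.

```python
import math

HEAD_RADIUS = 0.0875    # metres, assumed average head radius
SPEED_OF_SOUND = 343.0  # m/s in air

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth interaural time difference for a distant source
    (0 deg = straight ahead, positive = toward the right ear)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A cue 45 degrees to the right reaches the right ear ~0.38 ms sooner.
print(f"{itd_seconds(45.0) * 1e3:.2f} ms")
```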


Journal of Field Robotics | 2006

SciAutonics-Auburn Engineering’s Low Cost High Speed ATV for the 2005 DARPA Grand Challenge

William Travis; Robert Daily; David M. Bevly; Kevin Knoedler; Reinhold Behringer; Hannes Hemetsberger; Jürgen Kogler; Wilfried Kubinger; Bram Alefs

This paper presents a summary of SciAutonics-Auburn Engineering’s efforts in the 2005 DARPA Grand Challenge. The areas discussed in detail include the team makeup and strategy, vehicle choice, software architecture, vehicle control, navigation, path planning, and obstacle detection. In particular, the advantages and complications involved in fielding a low budget all-terrain vehicle are presented. Emphasis is placed on detailing the methods used for high-speed control, customized navigation, and a novel stereo vision system. The platform chosen required a highly accurate model and a well-tuned navigation system in order to meet the demands of the Grand Challenge. Overall, the vehicle completed three out of four runs at the National Qualification Event and traveled 16 miles in the Grand Challenge before a hardware failure disabled operation. The performance in the events is described, along with a success and failure analysis.
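
The paper summarizes rather than lists its high-speed control law, so the sketch below shows a generic pure-pursuit steering rule of the kind commonly used for waypoint following on small ground vehicles; the wheelbase value and frame conventions are assumptions, not the team's implementation.

```python
import math

WHEELBASE = 1.25  # metres, assumed for an ATV-class platform

def pure_pursuit_steer(x, y, heading, wx, wy):
    """Steering angle (radians) arcing the vehicle at (x, y, heading)
    toward the waypoint (wx, wy), usually chosen one lookahead
    distance ahead on the planned path."""
    dx, dy = wx - x, wy - y
    # Waypoint expressed in the vehicle frame.
    lx = math.cos(-heading) * dx - math.sin(-heading) * dy
    ly = math.sin(-heading) * dx + math.cos(-heading) * dy
    ld = math.hypot(lx, ly)
    curvature = 2.0 * ly / (ld * ld)  # pure-pursuit arc curvature
    return math.atan(WHEELBASE * curvature)

# Vehicle at the origin heading along +x; waypoint 6 m ahead, 1 m left.
print(math.degrees(pure_pursuit_steer(0.0, 0.0, 0.0, 6.0, 1.0)))  # ~3.9 deg
```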


IEEE Intelligent Transportation Systems | 2005

RASCAL - an autonomous ground vehicle for desert driving in the DARPA Grand Challenge 2005

Reinhold Behringer; William Travis; Rob Daily; David M. Bevly; Wilfried Kubinger; W. Herzner; V. Fehlberg

The DARPA Grand Challenge is a competition of autonomous ground vehicles in the Mojave Desert, with a $2 million prize for the winner. The event was first organized in 2004 and is planned to be held annually, at the latest through 2007, until a team wins the prize. The teams come from various backgrounds, but the rule that no US government funding, or technology created with US government funding, may be used in this competition prevented some well-established players from participating. The team SciAutonics/Auburn-Engineering continues its effort to build a system for this challenge, based on the 2004 entry RASCAL. The main focus of the system design is on improving the 2004 design. Novel sensing modalities the team plans to use in 2005 are a stereo vision system and a radar system for obstacle detection. Offline simulation allows situations to be analyzed in the laboratory and sensor recordings to be replayed. The Grand Challenge 2005 takes place on October 8, and the SciAutonics/Auburn team intends to compete with the improved RASCAL system.
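
The abstract names a stereo vision system for obstacle detection without describing it; the geometry behind any rectified stereo rig, however, is fixed: depth is focal length times baseline over disparity. A minimal sketch, with assumed example values:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point from its pixel disparity in a
    rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Assumed rig: 800 px focal length, 30 cm baseline, 12 px disparity.
print(f"{stereo_depth(800.0, 0.30, 12.0):.1f} m")  # -> 20.0 m
```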


International Conference on Multimedia Computing and Systems | 1999

A novel interface for device diagnostics using speech recognition, augmented reality visualization, and 3D audio auralization

Reinhold Behringer; Steven Chen; Venkataraman Sundareswaran; Kenneth Wang; Marius S. Vassiliou

Routine maintenance and error diagnostics of technical devices can be greatly enhanced by applying multimedia technology. The Rockwell Science Center is developing a system which can display maintenance instructions or diagnostic results for a device directly in the user's view by utilizing augmented reality and multimedia techniques. The system can overlay 3D rendered objects, animations, and text annotations onto the live video image of a known object, captured by a movable camera. The status of device components can be queried by the user through a speech recognition system. The response is given as an animation of the relevant device module, overlaid onto the real object in the user's view, and/or as auditory cues using spatialized 3D audio. The position of the user/camera relative to the device is tracked by a computer-vision-based tracking system. The diagnostics system also allows the user to leave spoken annotations attached to device modules for other users to retrieve. The system is implemented on a distributed network of PCs, utilizing standard commercial off-the-shelf (COTS) components.
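
As a hypothetical sketch of the query-to-response flow (all names invented here, not taken from the Rockwell system): a recognized utterance is routed to the matching device module, and the answer is issued on both output channels, as an overlay annotation and as a spatialized audio cue anchored at the module's position.

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    status: str
    position: tuple  # 3D anchor for both the overlay and the audio cue

MODULES = {
    "power supply": Module("power supply", "OK", (0.1, 0.4, 0.0)),
    "fan": Module("fan", "FAULT: low RPM", (0.3, 0.1, 0.2)),
}

def handle_query(utterance: str) -> None:
    """Route a recognized spoken query to a module and answer on the
    visual and auditory channels (stubbed here as prints)."""
    for key, module in MODULES.items():
        if key in utterance.lower():
            print(f"[overlay @ {module.position}] {module.name}: {module.status}")
            print(f"[3D audio cue from {module.position}]")
            return
    print("[overlay] no matching module found")

handle_query("What is the status of the fan?")
```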


Computers & Graphics | 1999

A Distributed Device Diagnostics System Utilizing Augmented Reality and 3D Audio

Reinhold Behringer; Steven Chen; Venkataraman Sundareswaran; Kenneth Wang; Marius S. Vassiliou

Augmented reality (AR), combining virtual environments with the perception of the real world, can be used to provide instructions for routine maintenance and error diagnostics of technical devices. The Rockwell Science Center (RSC) is developing a system that utilizes AR techniques to provide “X-ray vision” into real objects. The system can overlay 3D rendered objects, animations, and text annotations onto the video image of a known object. An automated speech recognition system allows the user to query the status of device components. The response is given as an animated rendition of a CAD model and/or as auditory cues using 3D audio. This diagnostics system also allows the user to leave spoken annotations attached to device modules as ASCII text. The position of the user/camera relative to the device is tracked by a computer-vision-based tracking system using fiducial markers. The system is implemented on a distributed network of PCs, utilizing standard commercial off-the-shelf (COTS) components.


Enhanced and Synthetic Vision 2000 | 2000

System for synthetic vision and augmented reality in future flight decks

Reinhold Behringer; Clement Tam; Joshua McGee; Venkataraman Sundareswaran; Marius S. Vassiliou

Rockwell Science Center is investigating novel human-computer interface techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays that convey vital information and spatial awareness by augmenting the real world with an overlay of relevant information registered to it. Such Augmented Reality (AR) techniques can be employed in bad-weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions that would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUDs). The advantage of AR systems over purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view in which inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used to obtain correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual cues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc-second digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (at ground level), the system has been implemented on a wearable computer.
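
Registration here amounts to projecting known world points into the camera image using GPS position and sensed attitude. The toy pinhole-projection sketch below illustrates that step under simplified assumptions (camera looks along +z, no lens distortion, invented intrinsics); it is not the demonstrator's code.

```python
import numpy as np

def world_to_camera(yaw, pitch, roll):
    """World-to-camera rotation from heading (magnetometer) and
    pitch/roll (inclinometer), all in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return (Rz @ Ry @ Rx).T  # transpose inverts the camera-to-world rotation

def project(world_pt, cam_pos, R, f_px=800.0, cx=320.0, cy=240.0):
    """Pixel coordinates of a world point, or None if behind the camera."""
    p = R @ (np.asarray(world_pt) - np.asarray(cam_pos))
    if p[2] <= 0:
        return None
    return (cx + f_px * p[0] / p[2], cy + f_px * p[1] / p[2])

# Level camera at the origin; a point 500 m ahead lands at image centre.
R = world_to_camera(0.0, 0.0, 0.0)
print(project([0.0, 0.0, 500.0], [0.0, 0.0, 0.0], R))  # -> (320.0, 240.0)
```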


International Symposium on Mixed and Augmented Reality | 2004

Vision-based augmented reality for pilot guidance in airport runways and taxiways

Jose Molineros; Reinhold Behringer; Clement Tam

This paper describes our ongoing efforts to develop an augmented reality system for enhanced pilot situational awareness in airport runways and taxiways. The system consists of a sensing component based on computer vision and an information component based on high-fidelity graphic model databases. Vision algorithms are used for a variety of guidance and warning tasks. A key requirement is that the vision algorithms respond in real time.


Enhanced and Synthetic Vision 1999 | 1999

Registration for an augmented reality system enhancing the situational awareness in an outdoor scenario

Reinhold Behringer

Augmented reality techniques can significantly enhance the situational awareness of a user by providing 3D information registered to the user's view of the world. Critical for the usability of such a system is precise registration. Outdoor AR systems usually employ GPS for position and a hybrid combination of magnetometer and inclinometer for orientation estimation, which provide only limited precision. At the Rockwell Science Center (RSC), an approach is being developed which uses terrain horizon silhouettes as visual cues for improving the registration precision and for sensor calibration. Since the observer position is known from GPS data, the horizon silhouette can be predicted using a digital elevation model (DEM) database. The silhouette segment the user sees is extracted from a camera pointing in the same direction as the user. This segment is then matched against the complete 360-degree horizon silhouette from the DEM data. Once the optimal match has been determined, the camera azimuth, roll, and pitch angles can be computed. This registration approach is being implemented in an AR system, under development at RSC, for enhancing situational awareness in an outdoor scenario. Currently, the information augmentation is provided by a registered overlay onto a live video stream.


IFAC Proceedings Volumes | 2004

The DARPA grand challenge - autonomous ground vehicles in the desert

Reinhold Behringer

The DARPA Grand Challenge 2004, the first large-scale competition of autonomous ground vehicles, was organized by DARPA in March 2004 to demonstrate the capabilities of various concepts of autonomous driving under competitive and harsh, non-cooperative environmental conditions. This “realistic” environment was found in the Mojave Desert, and the route was set from Barstow to Primm (near Las Vegas), through dirt roads and partially off-road. Fifteen vehicles were admitted to this event after passing a qualification round set up at the California Speedway in Fontana. The actual competition course was 150 miles long, but none of the vehicles finished the complete course; the most successful vehicle drove only 7.4 miles. Despite this outcome, the event is still considered a success, demonstrating autonomous driving technology and showcasing a wide variety of approaches.
