Jürgen Kogler
Austrian Institute of Technology
Publications
Featured research published by Jürgen Kogler.
international conference on computer vision systems | 2009
Jürgen Kogler; Christoph Sulzbachner; Wilfried Kubinger
This paper presents a silicon retina-based stereo vision system used in a pre-crash warning application for side impacts. We use silicon retina imagers for this task because the advantages of this camera, which is derived from the human vision system, are a high temporal resolution of up to 1 ms and the ability to handle various lighting conditions thanks to a dynamic range of ~120 dB. A silicon retina delivers asynchronous data called address events (AE). Different stereo matching algorithms are available, but these algorithms normally work with full-frame images. In this paper we evaluate how the AE data from the silicon retina sensors must be adapted to work with full-frame area-based and feature-based stereo matching algorithms.
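As an illustration of how asynchronous AE data can be adapted to frame-based matchers, the following minimal Python sketch accumulates events from a fixed time window into a frame-like image. The event tuple layout (x, y, polarity, timestamp) is an assumption for illustration, not the paper's implementation.

import numpy as np

def events_to_frame(events, width, height, t_start, t_window):
    """Accumulate address events into a frame-like image.

    Each event is assumed to be a tuple (x, y, polarity, timestamp),
    with polarity +1 for on-events and -1 for off-events. Events
    outside the time window [t_start, t_start + t_window) are ignored.
    """
    frame = np.zeros((height, width), dtype=np.int8)
    for x, y, pol, ts in events:
        if t_start <= ts < t_start + t_window:
            frame[y, x] = pol
    return frame

Frames built this way from the left and right sensor can then be fed to a conventional area-based or feature-based matcher.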
international symposium on visual computing | 2011
Jürgen Kogler; Martin Humenberger; Christoph Sulzbachner
In this paper we present different approaches to 3D stereo matching for bio-inspired image sensors. In contrast to conventional digital cameras, this image sensor, called Silicon Retina, delivers asynchronous events instead of synchronous intensity or color images. The events represent either an increase (on-event) or a decrease (off-event) of a pixel's intensity. The sensor can provide events with a time resolution of up to 1 ms and operates in a dynamic range of up to 120 dB. In this work we use two silicon retina cameras as a stereo sensor setup for 3D reconstruction of the observed scene, as known from conventional cameras. The polarity, the timestamp, and a history of the events are used for stereo matching. Due to the different information content and data type of the events compared to conventional pixels, standard stereo matching approaches cannot be used directly. Thus, we developed an area-based, an event-image-based, and a time-based approach and evaluated them, achieving promising results for stereo matching based on events.
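A hedged sketch of what a time-based matching cost could look like follows; the exact cost function of the paper is not reproduced here, and the per-pixel array layout and the mismatch penalty are assumptions.

def time_based_cost(left_ts, left_pol, right_ts, right_pol,
                    y, x, d, win=2, penalty=1e6):
    """Illustrative time-based cost for one disparity hypothesis d.

    left_ts/right_ts are assumed to hold the timestamp of the most
    recent event per pixel, left_pol/right_pol its polarity (+1, -1,
    or 0 for no event). Matching events of equal polarity contribute
    their timestamp difference; mismatches contribute a fixed penalty.
    """
    h, w = left_ts.shape
    cost = 0.0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            yy, xl, xr = y + dy, x + dx, x + dx - d
            if 0 <= yy < h and 0 <= xl < w and 0 <= xr < w:
                if left_pol[yy, xl] != 0 and left_pol[yy, xl] == right_pol[yy, xr]:
                    cost += abs(float(left_ts[yy, xl]) - float(right_ts[yy, xr]))
                else:
                    cost += penalty
    return cost

The disparity with the lowest cost over all hypotheses would be selected per pixel.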
Journal of Field Robotics | 2006
William Travis; Robert Daily; David M. Bevly; Kevin Knoedler; Reinhold Behringer; Hannes Hemetsberger; Jürgen Kogler; Wilfried Kubinger; Bram Alefs
This paper presents a summary of SciAutonics-Auburn Engineering’s efforts in the 2005 DARPA Grand Challenge. The areas discussed in detail include the team makeup and strategy, vehicle choice, software architecture, vehicle control, navigation, path planning, and obstacle detection. In particular, the advantages and complications involved in fielding a low budget all-terrain vehicle are presented. Emphasis is placed on detailing the methods used for high-speed control, customized navigation, and a novel stereo vision system. The platform chosen required a highly accurate model and a well-tuned navigation system in order to meet the demands of the Grand Challenge. Overall, the vehicle completed three out of four runs at the National Qualification Event and traveled 16 miles in the Grand Challenge before a hardware failure disabled operation. The performance in the events is described, along with a success and failure analysis.
Journal of Electronic Imaging | 2014
Jürgen Kogler; Florian Eibensteiner; Martin Humenberger; Christoph Sulzbachner; Margrit Gelautz; Josef Scharinger
We present two improvement techniques for stereo matching algorithms using silicon retina sensors and verify the results with ground truth data. In contrast to conventional monochrome/color cameras, silicon retina sensors deliver an asynchronous flow of events instead of common framed and discrete intensity or color images. While using this kind of sensor in a stereo setup enables new fields of application, it also introduces new challenges for stereo image analysis. Stereo matching algorithms have to deal with sparse event data, and thus less information, which affects the quality of the achievable disparity results and makes improving the stereo matching algorithms necessary. For this reason, we introduce two techniques for increasing the accuracy of silicon retina stereo results, in the sense that the average distance error is reduced. The first is an adapted belief propagation approach that optimizes the initial matching cost volume, and the second is a two-stage postfilter for smoothing and outlier rejection. The evaluation shows that the proposed techniques increase the accuracy of the stereo matching and constitute a useful extension for using silicon retina sensors for depth estimation.
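The paper's postfilter is not reproduced here, but a minimal two-stage smoothing-and-outlier-rejection sketch along the same lines could look as follows; the use of a median filter and the deviation threshold are assumptions.

import numpy as np
from scipy.ndimage import median_filter

def postfilter_disparity(disp, invalid=0, smooth_size=3, max_dev=2.0):
    """Two-stage postfilter sketch: smoothing, then outlier rejection.

    Stage one median-smooths the sparse disparity map; stage two
    invalidates pixels that deviate strongly from their smoothed
    neighbourhood.
    """
    smoothed = median_filter(disp, size=smooth_size)
    out = disp.copy()
    outliers = np.abs(disp.astype(float) - smoothed) > max_dev
    out[outliers] = invalid  # reject outliers
    return out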
computer aided systems theory | 2011
Florian Eibensteiner; Jürgen Kogler; Christoph Sulzbachner; Josef Scharinger
In this paper, we propose a silicon-retina-based stereo vision system for pre-crash warning and side-impact detection applications in vehicles. The bio-inspired Silicon Retina sensor is a new kind of imager with a high temporal resolution of 1 ms and a dynamic range of approx. 120 dB. This type of imager delivers data asynchronously and only when the intensity of the ambient light changes; therefore, the amount of data that must be processed decreases significantly compared to standard CMOS or CCD imagers. The sensor uses an address-event representation (AER) protocol to transfer the event-triggered information. To handle these special output characteristics, a novel approach to acquisition, storage, and stereo matching of the data was implemented. The concept of the algorithm is specifically targeted at and optimized for an implementation in hardware, e.g. on a Field Programmable Gate Array (FPGA).
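For illustration, decoding an AER word into pixel address and polarity could look like the following sketch; the bit layout shown is hypothetical, as real AER devices use different encodings.

def decode_aer_word(word):
    """Decode one address-event word into (x, y, polarity).

    Hypothetical layout: bit 0 carries the polarity, bits 1-8 the x
    address, bits 9-16 the y address, matching a 256x256 pixel array.
    """
    polarity = word & 0x1
    x = (word >> 1) & 0xFF
    y = (word >> 9) & 0xFF
    return x, y, 1 if polarity else -1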
computer vision and pattern recognition | 2011
Christoph Sulzbachner; Christian Zinner; Jürgen Kogler
This paper presents an optimized implementation of a Silicon Retina-based stereo matching algorithm using time-space correlation. The algorithm combines an event-based time correlation approach with a census-transform-based matching method on grayscale images generated from the sensor output. The data processing part of the system is optimized for an Intel i7 mobile architecture and a C64x+ multi-core digital signal processor (DSP). Both platforms use an additional C64x+ single-core DSP system for acquisition and pre-processing of sensor data. We focus on the optimization techniques that had the greatest impact on run-time performance on both processor architectures.
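The census transform itself is a standard technique; a compact sketch for grayscale images follows. For brevity, borders wrap around here, which a production version would handle explicitly.

import numpy as np

def census_transform(img, win=1):
    """Census transform sketch for a grayscale image.

    Each pixel is replaced by a bit string encoding whether each
    neighbour in a (2*win+1)^2 window is darker than the centre;
    matching costs are then Hamming distances between these strings.
    """
    h, w = img.shape
    census = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            census = (census << 1) | (shifted < img)  # one bit per neighbour
    return census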
Archive | 2011
Jürgen Kogler; Christoph Sulzbachner; Martin Humenberger; Florian Eibensteiner
Several industrial, home, and automotive applications need 3D or at least range data of the observed environment to operate. Such applications are, e.g., driver assistance systems, home care systems, or 3D sensing and measurement for industrial production. State-of-the-art range sensors are laser range finders or laser scanners (LIDAR, light detection and ranging), time-of-flight (TOF) cameras, and ultrasonic sound sensors. All of them are embedded, which means that the sensors operate independently and have an integrated processing unit. This is advantageous because the processing power in the mentioned applications is limited and the sensors are computationally intensive anyway. Other benefits of embedded systems are low power consumption and a small form factor. Furthermore, embedded systems are fully customizable by the developer and can be adapted to the specific application in an optimal way.

A promising alternative to the mentioned sensors is stereo vision. Classic stereo vision uses a stereo camera setup built up of two cameras (the stereo camera head), mounted in parallel and separated by the baseline. It captures a synchronized stereo pair consisting of the left camera's image and the right camera's image. The main challenge of stereo vision is the reconstruction of 3D information of a scene captured from two different points of view. Each visible scene point is projected onto the image planes of the cameras. Pixels which represent the same scene point on different image planes correspond to each other. These correspondences can then be used to determine the three-dimensional position of the projected scene point in a defined coordinate system. In more detail, the horizontal displacement, called the disparity, is inversely proportional to the scene point's depth. With this information and the camera's intrinsic parameters (principal point and focal length), the 3D position can be reconstructed. Fig. 1 shows a typical stereo camera setup. The projections of scene point P are pl and pr. Once the correspondences are found, the disparity is calculated as d = xl - xr, the difference between the horizontal image coordinates of pl and pr.
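The reconstruction described above follows standard pinhole stereo geometry; a minimal sketch for a rectified, parallel rig is shown below, with illustrative variable names.

def reconstruct_point(xl, xr, y, f, b, cx, cy):
    """Triangulate a scene point from a stereo correspondence.

    Standard pinhole geometry for a rectified, parallel stereo rig with
    focal length f (in pixels), baseline b, and principal point (cx, cy).
    Assumes a valid correspondence, i.e. positive disparity.
    """
    d = xl - xr                  # disparity in pixels
    Z = f * b / d                # depth is inversely proportional to d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z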
ieee asme international conference on mechatronic and embedded systems and applications | 2010
Christoph Sulzbachner; Jürgen Kogler; Wilfried Kubinger
In this paper we present an embedded high-performance Serial RapidIO™ data acquisition interface for computer vision applications based on Silicon Retina technology. The Silicon Retina is a new kind of bio-inspired analogue sensor that provides only event-triggered information depending on variations of intensity in a scene; unaltered parts of a scene without intensity variations need neither be transmitted nor processed. Due to the asynchronous behavior of the imager, with varying data rates of up to a peak of 6M events per second (Meps) per channel and a time resolution of 10 ns, a distributed digital signal processing system using both a single-core and a multi-core fixed-point digital signal processor (DSP) is used. The single-core DSP pre-processes the compressed data streams and forwards them to the multi-core DSP, which processes the actual data. Pre-processing also includes distributing the data required for processing across the multi-core system using a data parallelism concept. We discuss both design considerations and implementation details of the interface and the pre-processing algorithm.
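The paper's exact data-parallelism scheme is not reproduced here; one plausible sketch distributes events across cores by horizontal band of the sensor array, so that each core processes a contiguous range of rows.

def partition_events(events, n_cores, height):
    """Distribute events across DSP cores by image row band (sketch).

    A hypothetical split, not the paper's scheme: core i receives all
    events whose y address falls in its horizontal band of the array.
    Events are assumed to be (x, y, polarity, timestamp) tuples.
    """
    band = max(1, height // n_cores)
    buckets = [[] for _ in range(n_cores)]
    for x, y, pol, ts in events:
        buckets[min(y // band, n_cores - 1)].append((x, y, pol, ts))
    return buckets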
computer vision and pattern recognition | 2017
Ewa Piatkowska; Jürgen Kogler; Nabil Belbachir; Margrit Gelautz
Event-based vision, as realized by bio-inspired Dynamic Vision Sensors (DVS), is gaining more and more popularity due to its combined advantages of high temporal resolution, wide dynamic range, and power efficiency. Potential applications include surveillance, robotics, and autonomous navigation under uncontrolled environment conditions. In this paper, we deal with event-based vision for 3D reconstruction of dynamic scene content using two stationary DVS in a stereo configuration. We focus on a cooperative stereo approach and suggest an improvement over a previously published algorithm that reduces the measured mean error by over 50 percent. An available ground truth data set for stereo event data is utilized to analyze the algorithm's sensitivity to parameter variation and for comparison with competing techniques.
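The cooperative approach itself is not detailed in the abstract; the following is a loose, Marr-Poggio-style illustration of cooperative disparity refinement, in which all parameters and the update rule are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def cooperative_update(match, iters=5, excite=3, inhibit=2.0):
    """One flavour of cooperative stereo iteration (illustrative only).

    match is an (H, W, D) volume of initial correspondence scores.
    Support is gathered from spatial neighbours at the same disparity
    (excitation) and suppressed by competing disparities at the same
    pixel (inhibition).
    """
    m = match.astype(float)
    for _ in range(iters):
        support = uniform_filter(m, size=(excite, excite, 1))
        competition = m.sum(axis=2, keepdims=True) - m
        m = np.clip(support - inhibit * competition / m.shape[2], 0.0, 1.0)
    return m.argmax(axis=2)  # winning disparity per pixel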
Archive | 2010
Jürgen Kogler; Christoph Sulzbachner; Erwin Schoitsch; Wilfried Kubinger; Martin Litzenberger
Reliable Advanced Driver Assistance Systems (ADAS) are intended to assist the driver under various traffic, weather, and other environment conditions. Growing traffic requires sensors and systems that handle difficult urban and non-urban scenarios. For such systems, new cost-efficient sensor technologies are developed and evaluated in the EU-FP7 project ADOSE (reliable Application-specific Detection of road users with vehicle On-board Sensors), providing the vehicle with a virtual safety belt by addressing complementary safety functions.