Christoph Sulzbachner
Austrian Institute of Technology
Publications
Featured research published by Christoph Sulzbachner.
international conference on computer vision systems | 2009
Jürgen Kogler; Christoph Sulzbachner; Wilfried Kubinger
This paper presents a silicon retina-based stereo vision system used in a pre-crash warning application for side impacts. We use silicon retina imagers for this task because of the advantages of this camera, which is derived from the human vision system: a high temporal resolution of up to 1 ms and the ability to handle various lighting conditions thanks to a dynamic range of ~120 dB. A silicon retina delivers asynchronous data called address events (AE). Many stereo matching algorithms are available, but they normally work on full-frame images. In this paper we evaluate how the AE data from the silicon retina sensors must be adapted to work with full-frame area-based and feature-based stereo matching algorithms.
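The adaptation described above can be illustrated with a short sketch: asynchronous address events are accumulated over a fixed time window into a 2D event image that a conventional frame-based matcher can consume. The event tuple layout (x, y, polarity, timestamp) and the window length are assumptions for illustration, not the paper's actual interface.

```python
import numpy as np

def events_to_frame(events, width=128, height=128, window_us=1000):
    """Collect all events within one time window into a 2D event image.

    events: list of (x, y, polarity, timestamp_us) tuples, time-ordered.
    Returns an int8 frame: +1 for on-events, -1 for off-events, 0 elsewhere.
    """
    frame = np.zeros((height, width), dtype=np.int8)
    if not events:
        return frame
    t0 = events[0][3]  # window starts at the first event's timestamp
    for x, y, pol, ts in events:
        if ts - t0 > window_us:
            break  # event falls outside the accumulation window
        frame[y, x] = 1 if pol > 0 else -1
    return frame
```

A frame built this way can be fed to an area-based or feature-based matcher exactly as a camera image would be, at the cost of quantizing the events' fine temporal resolution into the window length.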
international symposium on visual computing | 2011
Jürgen Kogler; Martin Humenberger; Christoph Sulzbachner
In this paper we present different approaches to 3D stereo matching for bio-inspired image sensors. In contrast to conventional digital cameras, this image sensor, called Silicon Retina, delivers asynchronous events instead of synchronous intensity or color images. The events represent either an increase (on-event) or a decrease (off-event) of a pixel's intensity. The sensor can provide events with a time resolution of up to 1 ms and operates over a dynamic range of up to 120 dB. In this work we use two silicon retina cameras as a stereo sensor setup for 3D reconstruction of the observed scene, as already known from conventional cameras. The polarity, the timestamp, and a history of the events are used for stereo matching. Due to the different information content and data type of the events compared to conventional pixels, standard stereo matching approaches cannot be used directly. We therefore developed an area-based, an event-image-based, and a time-based approach and evaluated them, achieving promising results for event-based stereo matching.
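The time-based approach mentioned above can be sketched as follows: events on the same scanline with the same polarity are paired by timestamp proximity, subject to a plausible disparity range. This is a simplified illustration under an assumed event tuple (x, y, polarity, timestamp); the paper's actual criteria are richer, since it also uses an event history.

```python
def match_events_by_time(left_events, right_events, max_dt_us=200, max_disp=30):
    """For each left event, find the right event on the same row and with
    the same polarity whose timestamp is closest, within max_dt_us, at a
    non-negative disparity no larger than max_disp.

    Events are (x, y, polarity, timestamp_us) tuples.
    Returns a list of (x_left, y, disparity) matches.
    """
    matches = []
    for lx, ly, lp, lt in left_events:
        best, best_dt = None, max_dt_us + 1
        for rx, ry, rp, rt in right_events:
            if ry != ly or rp != lp:
                continue  # same scanline and same polarity only
            d = lx - rx
            if not (0 <= d <= max_disp):
                continue  # disparity must be non-negative and bounded
            dt = abs(lt - rt)
            if dt < best_dt:
                best, best_dt = d, dt
        if best is not None:
            matches.append((lx, ly, best))
    return matches
```

The temporal resolution of the sensor is what makes this viable: two events caused by the same scene edge arrive at nearly the same time in both cameras.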
computer vision and pattern recognition | 2012
Martin Humenberger; Stephan Schraml; Christoph Sulzbachner; Ahmed Nabil Belbachir; Ágoston Srp; Ferenc Vajda
In this paper, we present a bio-inspired, purely passive, and embedded fall detection system for the safety of the elderly at home. Bio-inspired means the use of two optical detector chips with event-driven pixels that are sensitive only to relative changes in light intensity. The two chips are used in a stereo configuration, which enables a 3D representation of the observed area via a stereo matching technique. In contrast to conventional digital cameras, this image sensor delivers asynchronous events instead of synchronous intensity or color images; the privacy issue is thus systematically solved. Another advantage is that stationary fall detection systems enjoy better acceptance for independent living than permanently worn devices. The fall detection itself is done by a trained neural network: first, a meaningful feature vector is calculated from the point clouds, then the neural network classifies the current event as fall or non-fall. All processing is done on an embedded device consisting of an FPGA for stereo matching and a DSP for the neural network calculation, achieving several fall evaluations per second. The evaluation showed that our fall detection system achieves a fall detection rate of more than 96% with false positives below 5% on our prerecorded dataset of 679 fall scenarios.
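As a purely hypothetical illustration of the feature-vector step, a few simple geometric features can be derived from the 3D point cloud before classification; the actual features used in the paper are not specified in the abstract, so everything below is an assumption.

```python
import numpy as np

def fall_features(points):
    """points: (N, 3) array of 3D points (x, y, z), with z the height axis.

    Returns a small feature vector: mean height, vertical extent, and the
    ratio of vertical extent to horizontal spread (small for a lying pose,
    large for an upright one) -- illustrative features only.
    """
    z = points[:, 2]
    mean_h = float(z.mean())
    extent_v = float(z.max() - z.min())
    spread_h = max(float(points[:, 0].max() - points[:, 0].min()),
                   float(points[:, 1].max() - points[:, 1].min()),
                   1e-6)  # avoid division by zero for degenerate clouds
    return np.array([mean_h, extent_v, extent_v / spread_h])
```

A classifier (here, the trained neural network) would then separate fall from non-fall events in this feature space.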
Journal of Electronic Imaging | 2014
Jürgen Kogler; Florian Eibensteiner; Martin Humenberger; Christoph Sulzbachner; Margrit Gelautz; Josef Scharinger
We present two improvement techniques for stereo matching algorithms using silicon retina sensors and verify the results with ground-truth data. In contrast to conventional monochrome/color cameras, silicon retina sensors deliver an asynchronous flow of events instead of common framed, discrete intensity or color images. Using this kind of sensor in a stereo setup enables new fields of application, but it also introduces new challenges for stereo image analysis: stereo matching algorithms have to deal with sparse event data and thus with less information. This affects the quality of the achievable disparity results and makes improving the stereo matching algorithms a necessary task. For this reason, we introduce two techniques for increasing the accuracy of silicon retina stereo results, in the sense that the average distance error is reduced. The first is an adapted belief propagation approach that optimizes the initial matching cost volume, and the second is a two-stage postfilter for smoothing and outlier rejection. The evaluation shows that the proposed techniques increase the accuracy of the stereo matching and constitute a useful extension when using silicon retina sensors for depth estimation.
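In spirit, a two-stage postfilter of the kind mentioned above might smooth each disparity toward a local median and then reject strong deviations as outliers. The window size, threshold, and invalid-value convention below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def postfilter(disp, win=3, max_dev=2.0, invalid=-1):
    """Hypothetical two-stage postfilter on a sparse disparity map.

    Stage 1 smooths each valid pixel toward the median of its window;
    stage 2 invalidates pixels deviating from that median by more than
    max_dev (outlier rejection). Pixels equal to `invalid` are skipped.
    """
    h, w = disp.shape
    r = win // 2
    out = disp.astype(float).copy()
    for y in range(r, h - r):
        for x in range(r, w - r):
            if disp[y, x] == invalid:
                continue
            patch = disp[y - r:y + r + 1, x - r:x + r + 1]
            valid = patch[patch != invalid]
            med = float(np.median(valid))
            if abs(disp[y, x] - med) > max_dev:
                out[y, x] = invalid   # stage 2: reject outlier
            else:
                out[y, x] = med       # stage 1: smooth toward local median
    return out
```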
computer aided systems theory | 2011
Florian Eibensteiner; Jürgen Kogler; Christoph Sulzbachner; Josef Scharinger
In this paper, we propose a silicon-retina-based stereo vision system for pre-crash warning and side-impact detection applications in vehicles. The bio-inspired Silicon Retina sensor is a new kind of sensor with a high temporal resolution of 1 ms and a dynamic range of approx. 120 dB. This type of imager delivers data asynchronously and only when the intensity of the ambient light changes. Therefore, the amount of data that must be processed decreases significantly compared to standard CMOS or CCD imagers. The sensor uses an address-event representation (AER) protocol to transfer the event-triggered information. To handle these special output characteristics, a novel approach to acquisition, storage, and stereo matching of the data was implemented. The concept of the algorithm is specifically targeted and optimized for an implementation in hardware, e.g. on a Field Programmable Gate Array (FPGA).
computer vision and pattern recognition | 2011
Christoph Sulzbachner; Christian Zinner; Jürgen Kogler
This paper presents an optimized implementation of a Silicon Retina-based stereo matching algorithm using time-space correlation. The algorithm combines an event-based time correlation approach with a census-transform-based matching method on grayscale images generated from the sensor output. The data processing part of the system is optimized for an Intel i7 mobile architecture and a C64x+ multi-core digital signal processor (DSP). Both platforms use an additional C64x+ single-core DSP system for acquisition and pre-processing of sensor data. We focus on the performance optimization techniques that had a major impact on the run-time performance of both processor architectures.
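For reference, the census transform mentioned above replaces each pixel with a bit string recording comparisons against its neighbours, and matching costs are then Hamming distances between such signatures. A minimal, unoptimized 3x3 sketch (the paper's DSP implementation is heavily optimized and not shown here):

```python
import numpy as np

def census_transform(img, win=3):
    """3x3 census transform: each interior pixel becomes an 8-bit signature
    where each bit records whether a neighbour is brighter than the centre."""
    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        for x in range(r, w - r):
            c = img[y, x]
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue  # skip the centre pixel itself
                    bits = (bits << 1) | (1 if img[y + dy, x + dx] > c else 0)
            out[y, x] = bits
    return out

def hamming(a, b):
    """Matching cost between two census signatures: number of differing bits."""
    return bin(int(a) ^ int(b)).count("1")
```

Because the cost depends only on local intensity ordering, census matching is robust to the brightness differences between the two sensors.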
Archive | 2011
Jürgen Kogler; Christoph Sulzbachner; Martin Humenberger; Florian Eibensteiner
Several industrial, home, and automotive applications need 3D, or at least range, data of the observed environment to operate. Such applications are, e.g., driver assistance systems, home care systems, or 3D sensing and measurement for industrial production. State-of-the-art range sensors are laser range finders or laser scanners (LIDAR, light detection and ranging), time-of-flight (TOF) cameras, and ultrasonic sound sensors. All of them are embedded, which means that the sensors operate independently and have an integrated processing unit. This is advantageous because the processing power available in the mentioned applications is limited and the sensors are computationally intensive anyway. Further benefits of embedded systems are low power consumption and a small form factor. Furthermore, embedded systems are fully customizable by the developer and can be adapted to the specific application in an optimal way. A promising alternative to the mentioned sensors is stereo vision. Classic stereo vision uses a stereo camera setup built up of two cameras (the stereo camera head), mounted in parallel and separated by the baseline. It captures a synchronized stereo pair consisting of the left and right camera images. The main challenge of stereo vision is the reconstruction of 3D information of a scene captured from two different points of view. Each visible scene point is projected onto the image planes of the cameras. Pixels which represent the same scene point on different image planes correspond to each other. These correspondences can then be used to determine the three-dimensional position of the projected scene point in a defined coordinate system. In more detail, the horizontal displacement, called the disparity, is inversely proportional to the scene point's depth. With this information and the camera's intrinsic parameters (principal point and focal length), the 3D position can be reconstructed. Fig. 1 shows a typical stereo camera setup.
The projections of scene point P are pl and pr. Once the correspondences are found, the disparity is calculated as d = xl - xr, where xl and xr are the horizontal image coordinates of pl and pr.
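The inverse proportionality described above reduces to the standard triangulation relation Z = f * b / d. A minimal sketch, with the focal length f expressed in pixels and the baseline b in metres:

```python
def depth_from_disparity(d_px, focal_px, baseline_m):
    """Depth Z (metres) of a scene point from its disparity d (pixels),
    the focal length f (pixels), and the stereo baseline b (metres):
    Z = f * b / d. A zero disparity corresponds to a point at infinity."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / d_px
```

Halving the disparity doubles the computed depth, which is exactly the inverse proportionality stated in the text; it also means depth resolution degrades with distance.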
ieee asme international conference on mechatronic and embedded systems and applications | 2010
Christoph Sulzbachner; Jürgen Kogler; Wilfried Kubinger
In this paper we present an embedded high-performance Serial RapidIO™ data acquisition interface for computer vision applications based on Silicon Retina technology. The Silicon Retina is a new kind of bio-inspired analogue sensor that provides only event-triggered information depending on variations of intensity in a scene. Unaltered parts of a scene, without intensity variations, need neither be transmitted nor processed. Due to the asynchronous behavior of the imager, its varying data rates of up to a peak of 6M events per second (Meps) per channel, and its time resolution of 10 ns, a distributed digital signal processing system using both a single-core and a multi-core fixed-point digital signal processor (DSP) is used. The single-core DSP pre-processes the compressed data streams and forwards them to the multi-core DSP, which processes the actual data. Pre-processing also includes distributing the data required for processing on the multi-core system using a data-parallelism concept. We discuss both the design considerations and the implementation details of the interface and the pre-processing algorithm.
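One way to picture the data-parallelism concept is routing events to worker cores by image row, so each core receives complete scanlines. The abstract does not detail the actual partitioning scheme used on the multi-core DSP, so the split below is purely illustrative.

```python
def partition_events(events, n_cores, height=128):
    """Hypothetical data-parallel split of an event stream: each event
    (x, y, polarity, timestamp) is routed to a core responsible for a
    contiguous band of image rows, so cores see complete scanlines."""
    rows_per_core = (height + n_cores - 1) // n_cores  # ceil division
    buckets = [[] for _ in range(n_cores)]
    for ev in events:
        _, y, _, _ = ev
        buckets[min(y // rows_per_core, n_cores - 1)].append(ev)
    return buckets
```

A row-band split keeps all the data a scanline-oriented stereo matcher needs on one core, avoiding inter-core communication during matching.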
computer aided systems theory | 2007
Wilfried Kubinger; Franz Rinnerthaler; Christoph Sulzbachner; Josef Langer; Martin Humenberger
We present an embedded vision sensor for robot soccer in the MiroSot league of the Federation of International Robot-Soccer Association (FIRA). The vision sensor is based on a DSP/FPGA co-processor system and uses a FireWire camera. The vision algorithms are specifically designed to optimally utilize the resources of the embedded system. The resulting embedded vision sensor is able to work at the full camera frame rate (60 fps) and full image resolution (640×480 pixels) without requiring any resources of the host computer.
international conference on neural information processing | 2012
Christoph Sulzbachner; Martin Humenberger; Ágoston Srp; Ferenc Vajda
This paper presents an optimized implementation of a neural network for fall detection using a Silicon Retina stereo vision sensor. A Silicon Retina sensor is a bio-inspired optical sensor with special characteristics: it does not capture images, but only detects variations of intensity in a scene. The data processing unit consists of an event-based stereo matcher processed on a field programmable gate array (FPGA) and a neural network processed on a digital signal processor (DSP). The initial network used double-precision floating-point arithmetic; the optimized version uses fixed-point arithmetic, since it must run on a low-performance embedded system. We focus on the performance optimization techniques for the DSP that have a major impact on the run-time performance of the neural network. In summary, we achieved a speedup of 48 for multiplications, 39.5 for additions, and 194 for the transfer functions, and thus realized an embedded real-time fall detection system.
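The move from double-precision floating point to fixed point can be illustrated with the Q15 format common on fixed-point DSPs such as the C64x+. The abstract does not state which Q format the paper actually used, so Q15 here is an assumption.

```python
Q = 15  # Q15: 1 sign bit, 15 fractional bits, values in [-1, 1)

def to_q15(x):
    """Quantize a float in [-1, 1) to a Q15 integer."""
    return int(round(x * (1 << Q)))

def q15_mul(a, b):
    """Fixed-point multiply: the raw product has 30 fractional bits,
    so it is shifted right by 15 to return to Q15 -- the operation a
    fractional DSP multiplier performs in a single instruction."""
    return (a * b) >> Q

def from_q15(x):
    """Convert a Q15 integer back to a float."""
    return x / (1 << Q)
```

Quantization introduces a bounded error of at most half a least significant bit per value, which is why a network trained in floating point can usually be converted without retraining.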