Publications


Featured research published by Martin Litzenberger.


International Solid-State Circuits Conference | 2007

A Dual-Line Optical Transient Sensor with On-Chip Precision Time-Stamp Generation

Christoph Posch; Michael Hofstätter; Daniel Matolin; Guy Vanstraelen; Peter Schön; Nikolaus Donath; Martin Litzenberger

A 120 dB dynamic range 2×256 dual-line optical transient sensor uses pixels that respond asynchronously to relative intensity changes. A time stamp with variable resolution down to 100 ns is allocated to the events at the pixel level. The pixel address and time stamp are read out via a 3-stage pipelined synchronous arbiter. The chip is fabricated in 0.35 μm CMOS, runs at 40 MHz and consumes 250 mW at 3.3 V.
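The abstract does not specify the event word format; as a minimal sketch, the following assumes a purely hypothetical 32-bit packing of line select, pixel address, polarity and a 100 ns time-stamp field, just to illustrate how such dual-line events with on-chip time stamps might be decoded:

from dataclasses import dataclass

@dataclass
class Event:
    line: int      # which of the two pixel lines fired (0 or 1)
    address: int   # pixel position within the line (0..255)
    on: bool       # polarity: relative intensity increase (ON) or decrease (OFF)
    t_ns: int      # time stamp in nanoseconds (100 ns resolution)

def decode(word: int) -> Event:
    # Assumed layout (LSB first): bits 0-17 time stamp in 100 ns ticks,
    # bit 18 polarity, bits 19-26 pixel address, bit 27 line select.
    t_ns = (word & 0x3FFFF) * 100
    on = bool((word >> 18) & 1)
    address = (word >> 19) & 0xFF
    line = (word >> 27) & 1
    return Event(line, address, on, t_ns)

print(decode(0b1_00000011_1_000000000000001010))  # line 1, address 3, ON, t = 1000 ns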


EURASIP Journal on Embedded Systems | 2007

Embedded vehicle speed estimation system using an asynchronous temporal contrast vision sensor

Daniel Bauer; Ahmed Nabil Belbachir; Nikolaus Donath; Gerhard Gritsch; Bernhard Kohn; Martin Litzenberger; Christoph Posch; Peter Schön; Stephan Schraml

This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, and algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, a wide dynamic range of 120 dB of illumination, and zero-redundancy, asynchronous data output. For data collection, processing and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The speed estimation error of all algorithms has near-zero mean and a standard deviation better than 3% for both traffic flow directions. The results and the accuracy limitations as well as the combined use of the algorithms in the system are discussed.
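The paper's three estimation algorithms are not detailed in the abstract; the sketch below shows one plausible scheme, with assumed function names and thresholds: time the burst of event activity at two virtual detector rows a known distance apart on the road and divide distance by transit time.

def first_crossing(events, y_row, t_min=0.0, min_events=20, window=0.01):
    # Time at which event activity on pixel row y_row first exceeds
    # min_events within a window-second interval. events: (t, x, y) tuples.
    times = sorted(t for (t, x, y) in events if y == y_row and t >= t_min)
    for i in range(len(times) - min_events + 1):
        if times[i + min_events - 1] - times[i] <= window:
            return times[i]
    return None

def estimate_speed_kmh(events, y1, y2, row_distance_m):
    # Speed from the activity onset times at two detector rows that are
    # row_distance_m apart on the road surface.
    t1 = first_crossing(events, y1)
    t2 = first_crossing(events, y2, t_min=t1) if t1 is not None else None
    if t1 is None or t2 is None or t2 == t1:
        return None
    return row_distance_m / (t2 - t1) * 3.6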


International Symposium on Circuits and Systems | 2012

CARE: A dynamic stereo vision sensor system for fall detection

Ahmed Nabil Belbachir; Martin Litzenberger; Stephan Schraml; Michael Hofstätter; Daniel Bauer; Peter Schön; M. Humenberger; C. Sulzbachner; T. Lunden; M. Merne

This paper presents a recently developed dynamic stereo vision sensor system and its application to fall detection for the safety of the elderly at home. The system consists of (1) two optical detector chips with 304×240 event-driven pixels that are sensitive only to relative light intensity changes, (2) an FPGA for interfacing the detectors, early data processing, and stereo matching for depth map reconstruction, (3) a digital signal processor that interprets the sensor data in real time for fall recognition, and (4) a wireless communication module for instantly alerting caring institutions. The system was designed for incident detection in the private homes of the elderly to foster safety and security. From the application's point of view, its two main advantages over existing wearable systems are: (a) a stationary installation is better accepted for independent living than permanently worn devices, and (b) privacy is ensured by design, since the vision detector does not produce real images the way classic video sensors do. The system can process about 300 kevents per second. It was evaluated using 500 fall cases acquired with a stuntman, and more than 90% positive detections were reported. A live demonstration of the sensor system and its capabilities will be shown during ISCAS 2012.
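The abstract leaves the fall-recognition logic unspecified; as an illustrative sketch (not the CARE system's actual algorithm), one could flag a fall when the tracked person's height above the floor, taken from the stereo depth map, collapses within a short time span. All thresholds below are assumptions.

def detect_fall(height_samples, drop_ratio=0.5, max_duration_s=1.0):
    # height_samples: list of (time_s, height_m) for the tracked person.
    # Returns the time of a detected fall, or None.
    for i, (t0, h0) in enumerate(height_samples):
        for t1, h1 in height_samples[i + 1:]:
            if t1 - t0 > max_duration_s:
                break
            if h1 < h0 * drop_ratio:  # rapid collapse of body height
                return t1
    return None

print(detect_fall([(0.0, 1.7), (0.4, 1.6), (0.8, 0.6), (1.2, 0.5)]))  # 0.8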


International Conference on Intelligent Transportation Systems | 2007

Vehicle Counting with an Embedded Traffic Data System using an Optical Transient Sensor

Martin Litzenberger; Bernhard Kohn; Gerhard Gritsch; Nikolaus Donath; Christoph Posch; Ahmed Nabil Belbachir; Heinrich Garn

In this paper a sensor system for traffic data acquisition is presented. The embedded system, comprising a motion-sensitive optical sensor and a low-cost, low-power DSP, is capable of detecting, counting and measuring the velocity of passing vehicles. Detection is based on monitoring the optical sensor output within configurable regions of interest in the sensor's field of view. In this work we focus in particular on the evaluation of the applied vehicle counting algorithm. The acquired data are verified against 360 minutes of manually annotated traffic data containing a total of about 7000 vehicles. The counting error is determined for short (3-minute) and long (60-minute) time intervals: 99.2% of the short and 100% of the long time intervals analyzed remain within the commonly recognized error margins of 10% and 3%, respectively.
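The counting algorithm itself is not given in the abstract; a minimal sketch of one plausible ROI-based scheme, with assumed thresholds, counts a vehicle each time the event rate inside a region of interest rises above a high threshold and later falls below a low one (the hysteresis avoids double counts):

def count_vehicles(roi_event_rates, on_thresh=200.0, off_thresh=50.0):
    # roi_event_rates: successive event-rate samples (events/s) inside the ROI.
    count, occupied = 0, False
    for rate in roi_event_rates:
        if not occupied and rate > on_thresh:
            occupied = True
            count += 1        # rising edge: a vehicle entered the ROI
        elif occupied and rate < off_thresh:
            occupied = False  # falling edge: the vehicle left
    return count

print(count_vehicles([0, 10, 300, 400, 30, 5, 250, 220, 40]))  # 2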


International Conference on Distributed Smart Cameras | 2007

Embedded Smart Camera for High Speed Vision

Martin Litzenberger; Ahmed Nabil Belbachir; Peter Schön; Christoph Posch

The architecture and prototype applications of an embedded vision system containing a neuromorphic temporal contrast vision sensor and a DSP are presented. The asynchronous vision sensor completely suppresses image data redundancy and encodes visual information as sparse address-event representation (AER) data. Owing to this efficient data preprocessing on the focal plane, the sensor delivers high temporal resolution data at a low data rate, so a compact embedded vision system using a low-cost, low-power digital signal processor can be realized. The one-millisecond timestamp resolution of the AER data stream makes it possible to acquire and process the motion trajectories of fast-moving objects in the visual scene. Various post-processing algorithms, such as object tracking, vehicle speed measurement and object classification, have been implemented on the presented embedded platform. The system's low data-rate output, low-power operation and Ethernet connectivity make it ideal for use in distributed sensor networks. Results from traffic-monitoring and object-tracking applications are presented.
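As a hedged illustration of the event-driven tracking the abstract mentions (parameter names and values are assumptions, not the platform's implementation), each incoming address-event can nudge the nearest tracker centroid, so fast objects are followed without ever processing full frames:

def track(events, centroids, mix=0.05, max_dist=20.0):
    # events: iterable of (t, x, y); centroids: list of [cx, cy] trackers.
    # Each event moves the closest centroid slightly toward the event.
    for t, x, y in events:
        best = min(centroids, key=lambda c: (c[0] - x) ** 2 + (c[1] - y) ** 2)
        if (best[0] - x) ** 2 + (best[1] - y) ** 2 <= max_dist ** 2:
            best[0] += mix * (x - best[0])
            best[1] += mix * (y - best[1])
    return centroids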


International Conference on Intelligent Transportation Systems | 2009

Night-time vehicle classification with an embedded vision system

Gerhard Gritsch; Nikolaus Donath; Bernhard Kohn; Martin Litzenberger

The paper presents night-time vehicle classification using an embedded vision system based on an optical transient sensor. This neuromorphic sensor features an array of 128×128 pixels that respond to relative light intensity changes with low latency and high dynamic range. The proposed algorithm exploits the temporal resolution and sparse representation of the data, delivered by the sensor in the data-driven address-event representation (AER) format, to efficiently implement a robust classification of vehicles into two classes, car-like and truck-like, during night-time operation. The classification is based on extracting the positions of, and distance between, the vehicles' headlights to estimate vehicle width. We present the algorithm, test data and an evaluation of the classification accuracy by comparing the test data with ground truth from video annotation and with reference results from a state-of-the-art ultrasonic/radar combination detector. The results show that the difference in total truck counts with respect to the reference detector and to manually annotated video during night-time operation under dry and wet road conditions is typically below 6%.
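A minimal sketch of the width-from-headlights idea follows; the 2.1 m threshold and the pixel scale are assumptions for illustration, not the paper's calibration. Estimate vehicle width from the pixel distance between the two detected headlights and the known lane geometry, then threshold it.

def classify_vehicle(headlight_px_dist, metres_per_pixel, truck_width_m=2.1):
    # Estimated width = headlight separation in pixels times the metric
    # scale at the lane position; wide vehicles are classified truck-like.
    width_m = headlight_px_dist * metres_per_pixel
    return "truck-like" if width_m >= truck_width_m else "car-like"

print(classify_vehicle(headlight_px_dist=30, metres_per_pixel=0.08))  # truck-like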


IEEE Sensors Journal | 2009

A Microbolometer Asynchronous Dynamic Vision Sensor for LWIR

Christoph Posch; Daniel Matolin; Rainer Wohlgenannt; Thomas Maier; Martin Litzenberger

In this paper, a novel event-based dynamic IR vision sensor is presented. The device combines an uncooled microbolometer array with biology-inspired ("neuromorphic") readout circuitry to implement an asynchronous, "spiking" vision sensor for the 8-15 μm thermal infrared spectral range. The sensor's autonomous pixels independently respond to changes in thermal IR radiation and communicate detected variations in the form of asynchronous "address-events." The 64×64 pixel ROIC chip has been fabricated in a 0.35 μm 2P4M standard CMOS process, covers about 4×4 mm² of silicon area and consumes 8 mW of power. An amorphous silicon (a-Si) microbolometer array has been processed on top of the ROIC and contacted to the pixel circuits. We discuss the bolometer detector properties, describe the pixel circuits and the implemented sensor architecture, and show measurement results of the readout circuits. Subsequently, a DFT-based approach to the characterization of asynchronous, spiking sensor arrays is discussed and applied. Test results and analysis of the sensitivity, bandwidth, and noise of the fabricated IR sensor prototype are presented.


International Symposium on Circuits and Systems | 2007

Wide dynamic range, high-speed machine vision with a 2×256 pixel temporal contrast vision sensor

Christoph Posch; Michael Hofstätter; Martin Litzenberger; Daniel Matolin; Nikolaus Donath; Peter Schön; Heinrich Garn

This paper presents a 2×256 pixel dual-line temporal contrast vision sensor and the use of this sensor in exemplary high-speed machine vision applications over a wide range of target illumination. The sensor combines an asynchronous, data-driven pixel circuit with an on-chip precision time-stamp generator and a 3-stage pipelined synchronous bus arbiter. With a temporal resolution down to 100 ns, corresponding to a line rate of 10 MHz, the sensor is ideal for high-speed machine vision tasks that do not rely on conventional image data. The output data rate depends on the dynamic content of the target scene and is typically orders of magnitude lower than the equivalent data output produced by conventional clocked line sensors in this type of application. A dynamic range of 120 dB makes high-speed operation possible at low lighting levels or under uncontrolled lighting conditions. The sensor features two parallel pixel lines with a line separation of 250 μm and a pixel pitch of 15 μm. A prototype was fabricated in a standard 0.35 μm CMOS technology. Results on high-speed edge angle resolution and edge gradient extraction as well as wide dynamic range operation are presented.


Computer Vision and Pattern Recognition | 2012

Event-driven embodied system for feature extraction and object recognition in robotic applications

Georg Wiesmann; Stephan Schraml; Martin Litzenberger; Ahmed Nabil Belbachir; Michael Hofstätter; Chiara Bartolozzi

A major challenge in robotic applications is the interaction with a dynamic environment and with humans, which is typically constrained by the capabilities of visual sensors and the computational cost of signal processing algorithms. Addressing this problem, the paper presents an event-driven embodied system for feature extraction and object recognition as a novel, efficient sensory approach for robotic applications. The system is built for a mobile humanoid robot and provides the infrastructure for interfacing asynchronous vision sensors with the robot's processing unit. By applying event-feature "mapping", the address-event representation of the sensors is enhanced with additional information that can be used for object recognition. The system is presented in the context of an exemplary application in which the robot has to detect and grasp a ball in an arbitrary state of motion.
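The abstract does not define which features are mapped onto the events; the sketch below assumes, purely for illustration, a local event-density feature attached to each address-event, which a downstream recognizer could consume:

import numpy as np

def map_features(events, shape=(128, 128), tau=0.05):
    # events: list of (t, x, y). Returns (t, x, y, density) tuples, where
    # density counts temporally recent prior events in the 3x3 neighbourhood.
    last = np.full(shape, -np.inf)   # per-pixel time of the last event
    out = []
    for t, x, y in events:
        x0, x1 = max(x - 1, 0), min(x + 2, shape[1])
        y0, y1 = max(y - 1, 0), min(y + 2, shape[0])
        density = int(np.sum(last[y0:y1, x0:x1] > t - tau))
        out.append((t, x, y, density))
        last[y, x] = t
    return out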


International Symposium on Circuits and Systems | 2010

A load-balancing readout method for large event-based PWM imaging arrays

Daniel Matolin; Rainer Wohlgenannt; Martin Litzenberger; Christoph Posch

This paper describes the concept and implementation of an asynchronous, column-parallel readout method for event-based pulse-width-modulation (PWM) image sensors. These time-based imaging devices transmit exposure information in the form of asynchronous spike events (AER) via an arbitrated asynchronous data bus that is common to all pixels. Event collisions on the bus distort the time information and lead to errors in the instantaneous illumination measurement of the affected pixels. The effect becomes manifest when imaging uniform, homogeneous (parts of) scenes. One method to balance the load on the communication channel is to avoid a global reset signal. The proposed concept spreads the reset times according to the varying local illumination in the array after a global starting point, in effect realizing an asynchronous, column-parallel, light-dependent "rolling-shutter" mode. This novel concept has been realized in the design of a spiking asynchronous, time-based QVGA image sensor. We present theoretical considerations and preliminary measurement results from the chip, fabricated in a standard 0.18 μm CMOS process.
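A toy simulation of the reset-spreading idea (an illustrative reading of the abstract, not the chip's circuit): in a time-based PWM pixel the spike fires after an integration time inversely proportional to illumination, so with a global reset a uniform scene makes all pixels fire at once and collide on the shared bus, while staggered per-column resets spread the events out in time.

def spike_times(illum, reset_times):
    # Spike time of each pixel: reset time plus an integration time
    # inversely proportional to the local illumination.
    return [r + 1.0 / i for i, r in zip(illum, reset_times)]

illum = [100.0] * 8                        # a perfectly uniform scene
print(spike_times(illum, [0.0] * 8))       # global reset: all events collide
print(spike_times(illum, [0.001 * k for k in range(8)]))  # staggered resets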

Collaboration


Dive into Martin Litzenberger's collaborations.

Top Co-Authors

Christoph Posch, Austrian Institute of Technology
Peter Schön, Austrian Institute of Technology
Michael Hofstätter, Austrian Institute of Technology
Nikolaus Donath, Austrian Institute of Technology
Bernhard Kohn, Austrian Institute of Technology
Daniel Matolin, Austrian Institute of Technology
Ahmed Nabil Belbachir, Austrian Institute of Technology
Heinrich Garn, Austrian Institute of Technology
Robin North, Imperial College London
Gerhard Gritsch, Austrian Institute of Technology