Christoph Posch
Austrian Institute of Technology
Publications
Featured research published by Christoph Posch.
IEEE Journal of Solid-State Circuits | 2008
Patrick Lichtsteiner; Christoph Posch; Tobi Delbruck
This paper describes a 128 × 128 pixel CMOS vision sensor. Each pixel independently and in continuous time quantizes local relative intensity changes to generate spike events. These events appear at the output of the sensor as an asynchronous stream of digital pixel addresses. These address-events signify scene reflectance change and have sub-millisecond timing precision. The output data rate depends on the dynamic content of the scene and is typically orders of magnitude lower than that of conventional frame-based imagers. By combining an active continuous-time front-end logarithmic photoreceptor with a self-timed switched-capacitor differencing circuit, the sensor achieves an array mismatch of 2.1% in relative intensity event threshold and a pixel bandwidth of 3 kHz under 1 klux scene illumination. Dynamic range is >120 dB and chip power consumption is 23 mW. Event latency shows weak light dependency, with a minimum of 15 μs at >1 klux pixel illumination. The sensor is built in a 0.35 μm 4M2P process. It has 40 × 40 μm² pixels with 9.4% fill factor. By providing high pixel bandwidth, wide dynamic range, and precisely timed sparse digital output, this silicon retina offers an attractive combination of characteristics for low-latency dynamic vision under uncontrolled illumination with low post-processing requirements.
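To make the pixel's operating principle concrete, here is a minimal sketch of temporal-contrast event generation: each pixel remembers the log intensity at which it last fired and emits an ON or OFF address-event when the current value deviates by more than a contrast threshold. The function name, the densely sampled input, and the 15% threshold are illustrative assumptions, not details from the paper (which reports 2.1% array mismatch in the threshold, not the threshold value itself).

```python
import numpy as np

def dvs_events(log_frames, timestamps, theta=0.15):
    """Idealized temporal-contrast (DVS-style) event generation.

    log_frames: (T, H, W) array of log intensities sampled densely in time
    timestamps: (T,) sample times in seconds
    theta: contrast threshold in log-intensity units (illustrative value)
    Returns a list of (t, x, y, polarity) tuples, one event per
    threshold crossing detected at a sample instant.
    """
    ref = log_frames[0].copy()          # per-pixel memorized reference level
    events = []
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        diff = frame - ref
        fired = np.abs(diff) >= theta   # pixels whose change exceeds threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1   # ON / OFF event
            events.append((t, x, y, polarity))
        ref[fired] = frame[fired]       # reset reference where events fired
    return events
```

A real pixel operates asynchronously and in continuous time; the sampled loop above only approximates that behavior for illustration.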
IEEE Journal of Solid-State Circuits | 2011
Christoph Posch; Daniel Matolin; Rainer Wohlgenannt
The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304 × 240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field of view. Pixels do not rely on external timing signals; they independently and asynchronously request access to an (asynchronously arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, is communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range is achieved: intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution. A novel time-domain correlated double sampling (TCDS) method yields array FPN of <0.25% rms. SNR is >56 dB (9.3 bit) for >10 lx illuminance.
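Because grayscale is carried by the interval between the two threshold-crossing events of an exposure measurement, a receiver can decode brightness with a single division. A minimal sketch, assuming the two events of a pixel's measurement have already been paired; the scaling constant is an illustrative placeholder, not a value from the paper.

```python
def decode_grayscale(ev_start_t, ev_end_t, k=1.0):
    """Recover a relative grayscale value from one PWM exposure measurement.

    The pixel integrates photocurrent between its two threshold-crossing
    events, so the integration time is inversely proportional to brightness.

    ev_start_t, ev_end_t: timestamps (s) of the first and second
        threshold-crossing event of the same pixel.
    k: illustrative scaling constant (it would depend on integrator
        capacitance, threshold swing, and responsivity).
    """
    dt = ev_end_t - ev_start_t
    return k / dt   # brighter pixel -> shorter interval -> larger value
```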
International Solid-State Circuits Conference | 2006
Patrick Lichtsteiner; Christoph Posch; T. Delbruck
A vision sensor responds to temporal contrast with asynchronous output. Each pixel independently and continuously quantizes changes in log intensity. The 128 × 128-pixel chip has a 120 dB illumination operating range and consumes 30 mW. Pixels respond in <100 μs at 1 klux scene illumination with <10% contrast-threshold FPN.
Proceedings of the IEEE | 2014
Christoph Posch; Teresa Serrano-Gotarredona; Bernabé Linares-Barranco; Tobi Delbruck
State-of-the-art image sensors suffer from significant limitations imposed by their very principle of operation. These sensors acquire the visual information as a series of "snapshot" images, recorded at discrete points in time. Visual information gets time quantized at a predetermined frame rate that has no relation to the dynamics present in the scene. Furthermore, each recorded frame conveys the information from all pixels, regardless of whether this information, or a part of it, has changed since the last frame was acquired. This acquisition method limits the temporal resolution, potentially missing important information, and leads to redundancy in the recorded image data, unnecessarily inflating data rate and volume. Biology is leading the way to a more efficient style of image acquisition. Biological vision systems are driven by events happening within the scene in view, and not, like image sensors, by artificially created timing and control signals. Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer imposed externally on an array of pixels; instead, decision making is transferred to each individual pixel, which handles its own information. In this paper, recent developments in bioinspired, neuromorphic optical sensing and artificial vision are presented and discussed. It is suggested that bioinspired vision systems have the potential to outperform conventional, frame-based vision systems in many application fields and to establish new benchmarks in terms of redundancy suppression and data compression, dynamic range, temporal resolution, and power efficiency. Demanding vision tasks such as real-time 3-D mapping, complex multiobject tracking, or fast visual feedback loops for sensory-motor action, tasks that often pose severe, sometimes insurmountable, challenges to conventional artificial vision systems, are within reach using bioinspired vision sensing and processing techniques.
International Symposium on Circuits and Systems | 2008
Christoph Posch; Daniel Matolin; Rainer Wohlgenannt
In this paper we propose a fully asynchronous, time-based image sensor characterized by high temporal resolution, low data rate (near-complete temporal redundancy suppression), high dynamic range, and low power consumption. Autonomous pixels asynchronously communicate the detection of relative changes in light intensity, and the time from change detection to the threshold crossing of a photocurrent integrator, thereby encoding the instantaneous pixel illumination shortly after the time of a detected change. The chip is being implemented in a standard 0.18 μm CMOS process and measures less than 10 × 8 mm² at 304 × 240 pixel resolution.
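Complementing the decoding sketch above, the pixel-side timing can be modeled from first principles: after a detected change, the photocurrent charges an integration node across a fixed voltage swing, so the threshold-crossing time is inversely proportional to illumination. The capacitance and swing below are illustrative placeholders, not circuit values from the paper.

```python
def threshold_crossing_time(i_photo_amps, c_int_farads=50e-15, v_swing_volts=1.0):
    """Time from change detection to integrator threshold crossing.

    t = C * dV / I: a pixel seeing 10x more light reports its new
    grayscale value 10x sooner. c_int and v_swing are illustrative.
    """
    return c_int_farads * v_swing_volts / i_photo_amps

# Example: a 1 pA photocurrent crosses the threshold after 50 ms.
print(threshold_crossing_time(1e-12))   # -> 0.05
```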
International Solid-State Circuits Conference | 2010
Christoph Posch; Daniel Matolin; Rainer Wohlgenannt
Conventional image/video sensors acquire visual information from a scene in a time-quantized fashion at some predetermined frame rate. Each frame carries the information from all pixels, regardless of whether or not this information has changed since the last frame was acquired, usually only a short time earlier. Depending on the dynamic content of the scene, this method obviously leads to a considerable degree of redundancy in the image data. Acquisition and handling of these dispensable data consume valuable resources; sophisticated and resource-hungry video compression methods have been developed to deal with them.
IEEE Transactions on Biomedical Circuits and Systems | 2011
Denis Guangyin Chen; Daniel Matolin; Amine Bermak; Christoph Posch
In time-domain or pulse-modulation (PM) imaging, the incident light intensity is not encoded in amounts of charge, voltage, or current as it is in conventional image sensors. Instead, the image data are represented by the timing of pulses or pulse edges. This method of visual information encoding optimizes the phototransduction individually for each pixel by abstaining from imposing a fixed integration time on the entire array. Exceptionally high dynamic range (DR) and improved signal-to-noise ratio (SNR) are immediate benefits of this approach. In particular, DR is no longer limited by the power-supply rails as it is in conventional complementary metal-oxide-semiconductor (CMOS) active pixel sensors, thus providing relative immunity to the supply-voltage scaling of modern CMOS technologies. In addition, PM imaging naturally supports pixel-parallel analog-to-digital conversion, thereby enabling high temporal resolution/frame rates or an asynchronous event-based array readout. The applications of PM imaging in emerging areas, such as sensor networks, wireless endoscopy, retinal prostheses, polarization imaging, and energy harvesting, are surveyed to demonstrate the effectiveness of PM imaging in the low-power, high-performance machine vision and biomedical applications of the future. The evolving design innovations made in PM imaging, such as high-speed arbitration circuits and ultra-compact processing elements, are expected to have even wider impact in disciplines beyond CMOS image sensors. This paper thoroughly reviews and classifies all common PM image sensor architectures. Analytical models and a universal figure of merit, an image quality and dynamic range to energy complexity factor, are proposed to quantitatively assess different PM imagers across the entire spectrum of PM architectures.
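The claim that DR is set by the usable timing window rather than the voltage rails reduces to simple arithmetic: the ratio of the longest to the shortest measurable threshold-crossing time bounds the ratio of decodable intensities. A small sketch of that calculation; the example times are illustrative, not figures from the paper.

```python
import math

def dynamic_range_db(t_min, t_max):
    """Dynamic range of a time-domain (PM) pixel in dB.

    The threshold-crossing time is inversely proportional to
    photocurrent, so the encodable intensity ratio equals
    t_max / t_min; 20*log10 follows the usual image-sensor
    DR convention.
    """
    return 20.0 * math.log10(t_max / t_min)

# Example: a 10 us shortest and a 1 s longest exposure span
# five decades of intensity, i.e. 100 dB.
print(dynamic_range_db(10e-6, 1.0))   # -> 100.0
```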
International Solid-State Circuits Conference | 2007
Christoph Posch; Michael Hofstätter; Daniel Matolin; Guy Vanstraelen; Peter Schön; Nikolaus Donath; Martin Litzenberger
A 120 dB dynamic range 2 × 256 dual-line optical transient sensor uses pixels that respond asynchronously to relative intensity changes. A time stamp with variable resolution down to 100 ns is allocated to the events at the pixel level. The pixel address and time stamp are read out via a 3-stage pipelined synchronous arbiter. The chip is fabricated in 0.35 μm CMOS, runs at 40 MHz, and consumes 250 mW at 3.3 V.
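The paper's readout can be pictured as a stream of (address, timestamp) records at 100 ns tick resolution. The abstract does not specify the output word format, so the record layout below is a hypothetical illustration for a 2 × 256 dual-line array.

```python
from dataclasses import dataclass

TICK_NS = 100  # finest timestamp resolution reported for the chip

@dataclass(frozen=True)
class AddressEvent:
    """One arbitrated output record: which pixel fired and when.

    Field ranges are illustrative guesses, not the chip's actual format.
    """
    line: int       # 0 or 1 (dual-line sensor)
    column: int     # 0..255
    timestamp: int  # in 100 ns ticks

    def time_seconds(self) -> float:
        return self.timestamp * TICK_NS * 1e-9
```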
IEEE Transactions on Neural Networks | 2011
Ryad Benosman; Sio-Hoi Ieng; Paul Rogister; Christoph Posch
Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult: they can no longer be defined in the usual way because of the complexity of the sensor geometry. This paper shows that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based, rather than frame-based, vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a general new concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix applied, in a first stage, to classic perspective vision and then to more general cameras. Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships. Finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor.
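The estimation step the abstract mentions, recovering the fundamental matrix from matched events, can be carried out with the standard normalized 8-point algorithm once events have been paired (in the event-based setting, candidate pairs come from near-coincident timestamps). The sketch below covers only that step, assumes the temporal matching is already done, and uses the textbook method rather than the paper's exact formulation.

```python
import numpy as np

def fundamental_from_events(pts_left, pts_right):
    """Estimate the fundamental matrix from >= 8 event correspondences
    with the normalized 8-point algorithm.

    pts_left, pts_right: (N, 2) pixel coordinates of events matched
    across the two sensors (temporal matching assumed done).
    """
    def normalize(p):
        # Hartley normalization: center points, scale to mean distance sqrt(2).
        mean = p.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(p - mean, axis=1))
        T = np.array([[scale, 0.0, -scale * mean[0]],
                      [0.0, scale, -scale * mean[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([p, np.ones(len(p))]) @ T.T
        return ph, T

    xl, Tl = normalize(np.asarray(pts_left, float))
    xr, Tr = normalize(np.asarray(pts_right, float))
    # Each correspondence gives one row of A f = 0, from the epipolar
    # constraint x_r^T F x_l = 0 with F flattened row-major.
    A = np.column_stack([
        xr[:, 0] * xl[:, 0], xr[:, 0] * xl[:, 1], xr[:, 0],
        xr[:, 1] * xl[:, 0], xr[:, 1] * xl[:, 1], xr[:, 1],
        xl[:, 0], xl[:, 1], np.ones(len(xl)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a valid fundamental matrix is singular.
    U, S, Vt2 = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2
    return Tr.T @ F @ Tl   # undo the normalization
```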
EURASIP Journal on Embedded Systems | 2007
Daniel Bauer; Ahmed Nabil Belbachir; Nikolaus Donath; Gerhard Gritsch; Bernhard Kohn; Martin Litzenberger; Christoph Posch; Peter Schön; Stephan Schraml
This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, and algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, a wide dynamic range of 120 dB, and zero-redundancy, asynchronous data output. For data collection, processing, and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The speed estimation error of all algorithms has near-zero mean and a standard deviation better than 3% for both traffic flow directions. The results, the accuracy limitations, and the combined use of the algorithms in the system are discussed.
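One simple way to turn the sensor's precise event timing into a speed estimate is a two-gate scheme: record when a vehicle's event cluster crosses each of two virtual detection lines a known distance apart on the road, then divide. This generic sketch is not necessarily one of the paper's three algorithms; the names and values are illustrative.

```python
def vehicle_speed_kmh(t_enter, t_exit, gate_distance_m):
    """Estimate vehicle speed from event timestamps.

    t_enter, t_exit: times (s) at which the vehicle's event cluster
        crosses the first and second virtual detection line.
    gate_distance_m: distance between the two lines in metres.
    """
    return gate_distance_m / (t_exit - t_enter) * 3.6  # m/s -> km/h

# Example: crossing lines 10 m apart 0.30 s apart -> 120 km/h.
print(vehicle_speed_kmh(0.00, 0.30, 10.0))   # -> 120.0
```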