Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Patrick Lichtsteiner is active.

Publication


Featured research published by Patrick Lichtsteiner.


IEEE Journal of Solid-State Circuits | 2008

A 128 × 128 120 dB 15 µs Latency Asynchronous Temporal Contrast Vision Sensor

Patrick Lichtsteiner; Christoph Posch; Tobi Delbruck

This paper describes a 128 × 128 pixel CMOS vision sensor. Each pixel independently and in continuous time quantizes local relative intensity changes to generate spike events. These events appear at the output of the sensor as an asynchronous stream of digital pixel addresses. These address-events signify scene reflectance change and have sub-millisecond timing precision. The output data rate depends on the dynamic content of the scene and is typically orders of magnitude lower than those of conventional frame-based imagers. By combining an active continuous-time front-end logarithmic photoreceptor with a self-timed switched-capacitor differencing circuit, the sensor achieves an array mismatch of 2.1% in relative intensity event threshold and a pixel bandwidth of 3 kHz under 1 klux scene illumination. Dynamic range is > 120 dB and chip power consumption is 23 mW. Event latency shows weak light dependency with a minimum of 15 µs at > 1 klux pixel illumination. The sensor is built in a 0.35 µm 4M2P process. It has 40 × 40 µm² pixels with 9.4% fill factor. By providing high pixel bandwidth, wide dynamic range, and precisely timed sparse digital output, this silicon retina provides an attractive combination of characteristics for low-latency dynamic vision under uncontrolled illumination with low post-processing requirements.
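The change-detection principle described in this abstract can be sketched in a few lines: each pixel emits an ON or OFF event whenever log intensity has moved by more than a fixed threshold since the last event. This is a simplified behavioral model with illustrative names and parameters, not the chip's circuit (it ignores latency, noise, and refractory effects).

```python
import math

def dvs_events(samples, threshold=0.15):
    """Simplified DVS pixel model: emit ON (+1) / OFF (-1) events
    whenever log intensity changes by more than `threshold` since
    the last emitted event."""
    events = []
    ref = math.log(samples[0])  # log intensity at the last event
    for t, intensity in enumerate(samples[1:], start=1):
        logi = math.log(intensity)
        while logi - ref > threshold:   # brightening -> ON events
            ref += threshold
            events.append((t, +1))
        while ref - logi > threshold:   # darkening -> OFF events
            ref -= threshold
            events.append((t, -1))
    return events

# A step increase in brightness produces a burst of ON events;
# a constant signal produces no events at all.
print(dvs_events([1.0, 1.0, 2.0, 2.0]))
print(dvs_events([1.0] * 5))
```

The sparse, activity-driven output is why the data rate tracks scene dynamics rather than frame rate.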


IEEE Transactions on Neural Networks | 2009

CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory–Processing–Learning–Actuating System for High-Speed Visual Object Recognition and Tracking

Rafael Serrano-Gotarredona; Matthias Oster; Patrick Lichtsteiner; Alejandro Linares-Barranco; Rafael Paz-Vicente; Francisco Gomez-Rodriguez; Luis A. Camuñas-Mesa; Raphael Berner; Manuel Rivas-Perez; Tobi Delbruck; Shih-Chii Liu; Rodney J. Douglas; Philipp Häfliger; Gabriel Jiménez-Moreno; Anton Civit Ballcels; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Bernabé Linares-Barranco

This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45 k neurons (spiking cells), up to 5 M synapses, performs 12 G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.


International Solid-State Circuits Conference | 2006

A 128 × 128 120 dB 30 mW Asynchronous Vision Sensor that Responds to Relative Intensity Change

Patrick Lichtsteiner; Christoph Posch; T. Delbruck

A vision sensor responds to temporal contrast with asynchronous output. Each pixel independently and continuously quantizes changes in log intensity. The 128 × 128-pixel chip has a 120 dB illumination operating range and consumes 30 mW. Pixels respond in <100 µs at 1 klux scene illumination with <10% contrast-threshold FPN.


IEEE Transactions on Circuits and Systems I: Regular Papers | 2007

A Multichip Pulse-Based Neuromorphic Infrastructure and Its Application to a Model of Orientation Selectivity

Elisabetta Chicca; Adrian M. Whatley; Patrick Lichtsteiner; V. Dante; Tobias Delbrück; P. Del Giudice; Rodney J. Douglas; Giacomo Indiveri

The growing interest in pulse-mode processing by neural networks is encouraging the development of hardware implementations of massively parallel networks of integrate-and-fire neurons distributed over multiple chips. Address-event representation (AER) has long been considered a convenient transmission protocol for spike-based neuromorphic devices. One missing, long-needed feature of AER-based systems is the ability to acquire data from complex neuromorphic systems and to stimulate them using suitable data. We have implemented a general-purpose solution in the form of a peripheral component interconnect (PCI) board (the PCI-AER board) supported by software. We describe the main characteristics of the PCI-AER board and of the related supporting software. To show the functionality of the PCI-AER infrastructure, we demonstrate a reconfigurable multichip neuromorphic system for feature selectivity which models the orientation tuning properties of cortical neurons.


International Symposium on Circuits and Systems | 2007

Fast sensory motor control based on event-based hybrid neuromorphic-procedural system

Tobi Delbruck; Patrick Lichtsteiner

Fast sensory-motor processing is challenging when using traditional frame-based cameras and computers. Here the authors show how a hybrid neuromorphic-procedural system consisting of an address-event silicon retina, a computer, and a servo motor can be used to implement a fast sensory-motor reactive controller to track and block balls shot at a goal. The system consists of a 128 × 128 retina that asynchronously reports scene reflectance changes, a laptop PC, and a servo motor controller. Components are interconnected by USB. The retina looks down onto the field in front of the goal. Moving objects are tracked by an event-driven cluster tracker algorithm that detects the ball as the nearest object that is approaching the goal. The ball's position and velocity are used to control the servo motor. Running under Windows XP, the reaction latency is 2.8 ± 0.5 ms at a CPU load of <10%, with a minimum observed latency of 1.8 ms. A 2 GHz Pentium M laptop can process >1 million events per second (Meps), although fast balls only create ~30 keps. This system demonstrates the advantages of hybrid event-based sensory-motor processing.
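The event-driven cluster tracker mentioned above can be sketched as follows: each incoming address-event that falls near a cluster pulls the cluster's center toward the event by a fixed mixing factor. This is a simplified single-cluster version with illustrative parameters, not the authors' implementation.

```python
def track(events, start=(0.0, 0.0), mix=0.2, radius=10.0):
    """Single-cluster event-driven tracker: each (x, y) event
    within `radius` of the cluster centre pulls the centre toward
    the event by a fixed mixing factor (an exponential moving
    average driven by events rather than frames)."""
    cx, cy = start
    for x, y in events:
        if abs(x - cx) <= radius and abs(y - cy) <= radius:
            cx += mix * (x - cx)
            cy += mix * (y - cy)
    return cx, cy

# Events drifting rightward drag the cluster along with them;
# the cluster lags the newest event slightly, as an EMA does.
path = [(float(i), 0.0) for i in range(30)]
print(track(path))
```

Because the estimate updates per event, latency is bounded by event arrival rather than a frame period, which is what makes the millisecond reaction times possible.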


IEEE Transactions on Neural Networks | 2012

Asynchronous Event-Based Binocular Stereo Matching

Paul Rogister; Ryad Benosman; Sio-Hoi Ieng; Patrick Lichtsteiner; Tobi Delbruck

We present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas. Unlike conventional frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of the biological retina. Our algorithm uses the timing information carried by this representation in addressing the stereo-matching problem on moving objects. Using the high temporal resolution of the acquired data stream for the dynamic vision sensor, we show that matching on the timing of the visual events provides a new solution to the real-time computation of 3-D objects when combined with geometric constraints using the distance to the epipolar lines. The proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor. This brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events.
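The core idea of matching on event timing can be illustrated with a toy matcher: pair each left-camera event with the closest-in-time unused right-camera event on the same image row (the epipolar constraint for rectified cameras), accepting only pairs within a coincidence window. This is a minimal sketch under those assumptions, not the paper's algorithm; names and parameters are illustrative.

```python
def match_events(left, right, dt_max=1.0):
    """Toy event-based stereo matcher. Events are (t, x, y)
    tuples; left events are paired with the closest-in-time
    unused right event on the same row, within `dt_max`."""
    matches, used = [], set()
    for tl, xl, yl in left:
        best, best_dt = None, dt_max
        for j, (tr, xr, yr) in enumerate(right):
            if j in used or yr != yl:
                continue  # epipolar constraint: same row only
            if abs(tl - tr) <= best_dt:
                best, best_dt = j, abs(tl - tr)
        if best is not None:
            used.add(best)
            _, xr, _ = right[best]
            matches.append((xl, xr, xl - xr))  # disparity = xl - xr
    return matches

left = [(0.0, 12, 5), (2.0, 13, 5)]
right = [(0.1, 8, 5), (5.0, 9, 5)]
# Only the temporally coincident pair is matched; the second
# left event finds no right event inside the window.
print(match_events(left, right))
```

Disparity then maps to depth in the usual way; the temporal coincidence test is what replaces frame-based intensity correlation.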


International Symposium on Circuits and Systems | 2009

A pencil balancing robot using a pair of AER dynamic vision sensors

Jörg Conradt; Matthew Cook; Raphael Berner; Patrick Lichtsteiner; Rodney J. Douglas; Tobi Delbruck

Balancing a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds. This demonstration shows how a pair of spike-based silicon retina dynamic vision sensors (DVS) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil. Two DVSs view the pencil from right angles. Movements of the pencil cause spike address-events (AEs) to be emitted from the DVSs. These AEs are transmitted to a PC over USB interfaces and are processed procedurally in real time. The PC updates its estimate of the pencil's location and angle in 3D space upon each incoming AE, applying a novel tracking method based on spike-driven fitting to a model of the vertical shape of the pencil. A PD controller adjusts the X-Y position and velocity of the table to keep the pencil balanced upright. The controller also minimizes the deviation of the pencil's base from the center of the table. The actuated table is built using ordinary high-speed hobby servos which have been modified to obtain feedback from linear position encoders via a microcontroller. Our system can balance any small, thin object such as a pencil, pen, chopstick, or rod for many minutes. Balancing is only possible when incoming AEs are processed as they arrive from the sensors, typically at intervals below a millisecond. Controlling at normal image-sensor sample rates (e.g., 60 Hz) results in latencies too long for a stable control loop.
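The PD control step at the heart of such a balancer is simple: the corrective command is proportional to the measured tilt plus its rate of change. The sketch below shows one controller step; the gains and the single-axis simplification are illustrative, not the paper's values.

```python
def pd_step(angle, prev_angle, dt, kp=4.0, kd=0.5):
    """One PD-controller step for a single axis: the command is
    proportional to the pencil's tilt angle (P term) plus the
    tilt's rate of change (D term). Gains are illustrative."""
    derivative = (angle - prev_angle) / dt
    return kp * angle + kd * derivative

# A tilt that is growing produces a larger corrective command
# than the same tilt that is shrinking: the D term anticipates.
growing = pd_step(0.1, 0.05, 0.01)
shrinking = pd_step(0.1, 0.15, 0.01)
print(growing, shrinking)
```

Because each incoming address-event can refresh the tilt estimate, `dt` here is sub-millisecond, which is exactly why the event-driven loop stays stable where a 60 Hz loop does not.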


IEEE Transactions on Biomedical Circuits and Systems | 2008

An Address-Event Fall Detector for Assisted Living Applications

Zhengming Fu; Tobi Delbruck; Patrick Lichtsteiner; Eugenio Culurciello

In this paper, we describe an address-event vision system designed to detect accidental falls in elderly home care applications. The system raises an alarm when a fall hazard is detected. We use an asynchronous temporal contrast vision sensor which features sub-millisecond temporal resolution. The sensor reports a fall at ten times higher temporal resolution than a frame-based camera and shows 84% higher bandwidth efficiency as it transmits fall events. A lightweight algorithm computes an instantaneous motion vector and reports fall events. We are able to distinguish fall events from normal human behavior, such as walking, crouching down, and sitting down. Our system is robust to the monitored person's spatial position in the room and to the presence of pets.
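The "instantaneous motion vector" idea can be illustrated with a toy version: estimate the vertical speed of the event centroid over a short event packet and flag a fall when it exceeds a threshold. This is a stand-in sketch with an illustrative threshold, not the paper's algorithm.

```python
def motion_vector(events):
    """Estimate vertical speed from the first and last events of
    a short packet of (t, x, y) events, in pixels per second
    (+y pointing downward in image coordinates)."""
    (t0, _, y0), (t1, _, y1) = events[0], events[-1]
    return (y1 - y0) / (t1 - t0)

def is_fall(events, speed_threshold=200.0):
    """Flag a fall when downward speed exceeds the (illustrative)
    threshold; slow motions such as sitting down stay below it."""
    return motion_vector(events) > speed_threshold

# A centroid dropping 150 pixels in 0.3 s exceeds the threshold;
# a slow sit-down covering 80 pixels in 2 s does not.
fall = [(0.0, 64, 20), (0.3, 64, 170)]
sit = [(0.0, 64, 20), (2.0, 64, 100)]
print(is_fall(fall), is_fall(sit))
```

The sensor's sub-millisecond timestamps are what make such a speed estimate sharp enough to separate falls from ordinary movements.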


International Symposium on Circuits and Systems | 2008

Fall detection using an address-event temporal contrast vision sensor

Zhengming Fu; Eugenio Culurciello; Patrick Lichtsteiner; Tobi Delbruck

In this paper we describe an address-event vision system designed to detect accidental falls in elderly home care applications. The system raises an alarm when a fall hazard is detected. We use an asynchronous temporal contrast vision sensor which features sub-millisecond temporal resolution. A lightweight algorithm computes an instantaneous motion vector and reports fall events. We are able to distinguish fall events from normal human behavior, such as walking, crouching down, and sitting down. Our system is robust to the monitored person's spatial position in the room and to the presence of pets.


International Conference on Electronics, Circuits and Systems | 2004

Improved ON/OFF temporally differentiating address-event imager

Patrick Lichtsteiner; Tobi Delbruck; Jörg Kramer

We have fabricated an improved version of an imager reported earlier (Kramer, J., Int. Symp. Circuits and Systems, vol. II, p. 165-8, 2002), primarily by using a better pixel circuit and better layout principles. The new imager functions over 5 decades of background illumination and has much more symmetrical ON and OFF responses. This imager achieves massive redundancy reduction by temporally differentiating the image contrast. The ideal function of the pixels is to compute the rectified derivative d/dt(log I) = (dI/dt)/I, where I is the photocurrent in the pixel. The values of this derivative are rectified and output as rate-coded events on the AER (address-event representation) bus. The 2.2 mm × 2.2 mm chip was built in a 0.35 µm 4M2P CMOS process. The pixels are 40 µm × 40 µm and the array has 32 × 32 pixels, each with ON and OFF outputs. System current consumption is about 30 mA at 3.3 V.
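The pixel transfer function quoted in the abstract, d/dt(log I) = (dI/dt)/I, can be checked numerically: differentiating log intensity with a central difference reproduces the closed form. This is a numerical sanity check of the identity, not a model of the pixel circuit.

```python
import math

def dlog_numeric(I, dI_dt, h=1e-6):
    """Numerically differentiate log(I(t)) with a central
    difference, using I(t ± h) ≈ I ± h * dI/dt, and return
    d/dt(log I) for comparison with the closed form (dI/dt)/I."""
    return (math.log(I + h * dI_dt) - math.log(I - h * dI_dt)) / (2 * h)

I, dI_dt = 2.0, 3.0
# Both expressions give the same temporal-contrast value: the
# log derivative normalizes out the absolute intensity level.
print(dlog_numeric(I, dI_dt), dI_dt / I)
```

This normalization is why the pixel responds to *relative* intensity change, giving contrast sensitivity that is independent of absolute illumination over the sensor's wide operating range.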

Collaboration


Dive into Patrick Lichtsteiner's collaborations.

Top Co-Authors

Christoph Posch

Austrian Institute of Technology
