Publication


Featured research published by Raphael Berner.


IEEE Transactions on Neural Networks | 2009

CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory–Processing–Learning–Actuating System for High-Speed Visual Object Recognition and Tracking

Rafael Serrano-Gotarredona; Matthias Oster; Patrick Lichtsteiner; Alejandro Linares-Barranco; Rafael Paz-Vicente; Francisco Gomez-Rodriguez; Luis A. Camuñas-Mesa; Raphael Berner; Manuel Rivas-Perez; Tobi Delbruck; Shih-Chii Liu; Rodney J. Douglas; Philipp Häfliger; Gabriel Jiménez-Moreno; Anton Civit Ballcels; Teresa Serrano-Gotarredona; Antonio Acosta-Jimenez; Bernabé Linares-Barranco

This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union-funded project. It has four custom mixed-signal AER chips and five custom digital AER interface components, contains 45k neurons (spiking cells) and up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
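The address-event representation used by CAVIAR can be illustrated with a small behavioral sketch. The resolution, field names, and packing scheme below are illustrative assumptions for a generic AER link, not CAVIAR's actual chip protocol:

```python
from dataclasses import dataclass

# Hypothetical AER encoding sketch: each spike travels as an address word
# plus a timestamp, so only active neurons consume bus bandwidth.

WIDTH, HEIGHT = 128, 128  # assumed array resolution, for illustration only

@dataclass
class AddressEvent:
    x: int            # pixel/neuron column
    y: int            # pixel/neuron row
    timestamp_us: int # event time in microseconds

def encode(ev: AddressEvent) -> int:
    """Pack (x, y) into one address word: row in high bits, column in low bits."""
    return ev.y * WIDTH + ev.x

def decode(address: int, timestamp_us: int) -> AddressEvent:
    """Recover coordinates from an address word received on the bus."""
    return AddressEvent(x=address % WIDTH, y=address // WIDTH,
                        timestamp_us=timestamp_us)
```

A round trip recovers the original event, which is the key property any real AER address map must also have.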


IEEE Journal of Solid-state Circuits | 2014

A 240 × 180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor

Christian Brandli; Raphael Berner; Minhao Yang; Shih-Chii Liu; Tobi Delbruck

Event-based dynamic vision sensors (DVSs) asynchronously report log intensity changes. Their high dynamic range, sub-millisecond latency, and sparse output make them useful in applications such as robotics and real-time tracking. However, they discard the absolute intensity information that is useful for object recognition and classification. This paper presents a dynamic and active pixel vision sensor (DAVIS) which addresses this deficiency by concurrently outputting asynchronous DVS events and synchronous global shutter frames. The active pixel sensor (APS) circuits and the DVS circuits within a pixel share a single photodiode. Measurements from a 240 × 180 sensor array of 18.5 μm pixels fabricated in a 0.18 μm 6M1P CMOS image sensor (CIS) technology show a dynamic range of 130 dB with 11% contrast detection threshold, minimum 3 μs latency, and 3.5% contrast matching for the DVS pathway, and a 51 dB dynamic range with 0.5% FPN for the APS readout.
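The DVS event-generation principle (an event fires whenever the log intensity moves past a contrast threshold from the last event's reference level) can be sketched behaviorally. This is a minimal model, not the DAVIS circuit; the function name and threshold value are illustrative:

```python
import math

def dvs_events(intensities, threshold=0.1):
    """Yield (sample_index, polarity) events from a brightness sample stream.

    The reference level tracks the log intensity at the last emitted event;
    ON (+1) events fire on increases past the threshold, OFF (-1) on decreases.
    """
    ref = math.log(intensities[0])
    events = []
    for i, val in enumerate(intensities[1:], start=1):
        logv = math.log(val)
        while logv - ref > threshold:    # brightness increased: ON event
            ref += threshold
            events.append((i, +1))
        while ref - logv > threshold:    # brightness decreased: OFF event
            ref -= threshold
            events.append((i, -1))
    return events
```

Because the comparison is in the log domain, the same *relative* contrast change triggers an event regardless of absolute illumination, which is what gives DVS pixels their wide dynamic range.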


international symposium on circuits and systems | 2009

A pencil balancing robot using a pair of AER dynamic vision sensors

Jörg Conradt; Matthew Cook; Raphael Berner; Patrick Lichtsteiner; Rodney J. Douglas; Tobi Delbruck

Balancing a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds. This demonstration shows how a pair of spike-based silicon retina dynamic vision sensors (DVS) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil. Two DVSs view the pencil from right angles. Movements of the pencil cause spike address-events (AEs) to be emitted from the DVSs. These AEs are transmitted to a PC over USB interfaces and are processed procedurally in real time. The PC updates its estimate of the pencil's location and angle in 3D space upon each incoming AE, applying a novel tracking method based on spike-driven fitting to a model of the vertical shape of the pencil. A PD controller adjusts the X-Y position and velocity of the table to keep the pencil balanced upright. The controller also minimizes the deviation of the pencil's base from the center of the table. The actuated table is built using ordinary high-speed hobby servos which have been modified to obtain feedback from linear position encoders via a microcontroller. Our system can balance any small, thin object such as a pencil, pen, chopstick, or rod for many minutes. Balancing is only possible when incoming AEs are processed as they arrive from the sensors, typically at sub-millisecond intervals. Controlling at normal image sensor sample rates (e.g., 60 Hz) results in latencies too long for a stable control loop.
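The per-event tracking plus PD control loop described above can be sketched in simplified form. The gains, the 1-D line model, and both function names are made-up illustrations, not the authors' actual implementation:

```python
def update_line_estimate(base, angle, event_x, event_y, gain=0.1):
    """Nudge a line model (base position, angle) toward one address-event.

    The pencil is modeled as the line x = base + angle * y; the residual of
    the event against the model drives a small proportional correction, so
    the estimate is refreshed on every spike rather than once per frame.
    """
    predicted_x = base + angle * event_y
    residual = event_x - predicted_x
    return base + gain * residual, angle + gain * residual * event_y * 1e-3

def pd_control(base, prev_base, dt, kp=2.0, kd=0.5, setpoint=0.0):
    """PD command for the table: drive the pencil base toward the setpoint."""
    error = setpoint - base
    derivative = -(base - prev_base) / dt
    return kp * error + kd * derivative
```

The point of the event-driven update is that the state estimate stays current at microsecond granularity, so the PD controller always acts on fresh information instead of a stale 60 Hz frame.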


international symposium on circuits and systems | 2007

A 5 Meps $100 USB2.0 Address-Event Monitor-Sequencer Interface

Raphael Berner; Tobi Delbruck; A. Civit-Balcells; Alejandro Linares-Barranco

This paper describes a high-speed USB2.0 address-event representation (AER) interface that allows simultaneous monitoring and sequencing of precisely timed AER data. This low-cost (<$100), two-chip, bus-powered interface can achieve sustained AER event rates of 5 megaevents per second (Meps). Several boards can be electrically synchronized, allowing simultaneous synchronized capture from multiple devices. It has three parallel AER ports: one for sequencing, one for monitoring, and one for passing through the monitored events. This paper also describes the host software infrastructure that makes the board usable with a heterogeneous mixture of AER devices and that allows recording and playback of recorded data.


international conference on computer vision | 2009

An embedded AER dynamic vision sensor for low-latency pole balancing

Jörg Conradt; Raphael Berner; Matthew Cook; Tobi Delbruck

Balancing small objects such as a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds. Here we describe how a pair of spike-based silicon retina dynamic vision sensors (DVS) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil on its tip. Two DVSs view the pencil from right angles. Movements of the pencil cause spike address-events (AEs) to be emitted from the DVSs. These AEs are processed by a 32-bit fixed-point ARM7 microcontroller (64 MHz, 200 mW) on the back side of each embedded DVS board (eDVS). Each eDVS updates its estimate of the pencil's location and angle in 2D space for each received spike (typically at a rate of 100 kHz) by applying a continuous tracking method based on spike-driven fitting to a model of the vertical rod-like shape of the pencil. Every 2 ms, each eDVS sends the pencil's tracked position to a third ARM7-based controller, which computes the pencil's location in 3D space and runs a linear PD controller to adjust the X-Y position and velocity of the table to keep the pencil balanced upright. The actuated table is built using ordinary high-speed hobby servos. Our system can balance any small, thin object such as a pencil, pen, chopstick, or rod for minutes, in a wide range of lighting conditions.


international symposium on circuits and systems | 2010

32-bit configurable bias current generator with sub-off-current capability

Tobi Delbruck; Raphael Berner; Patrick Lichtsteiner; Carlos Dualibe

A fully configurable bias current reference is described. The output of the current reference is a gate voltage which produces a desired current. For each daisy-chained bias, 32 bits of configuration are divided into 22 bits of bias current, 6 bits of active-mirror buffer current, and 4 bits of other configuration. Configuration of each bias allows specifying the type of transistor (nfet or pfet), whether the bias is enabled or weakly pulled to the rail, whether the bias is for a cascode, and whether the bias transistor uses a shifted-source (SS) voltage for sub-off-current biasing. In addition, the current reference integrates a pair of voltage regulators that generate stable voltage sources near the rails, suitable for the SS current references. Measurements from fabricated current references built in 180 nm CMOS show that the reference achieves at least 110 dB (22-bit) dynamic range and reaches 160 dB when power-rail gate biasing is included. Generated bias currents reach at least 30× smaller current than the transistor off-current. Each current reference occupies an area of 620 × 50 µm². The design kit schematics and layout are open-sourced.


IEEE Transactions on Circuits and Systems | 2011

Event-Based Pixel Sensitive to Changes of Color and Brightness

Raphael Berner; Tobi Delbruck

Vision sensors whose pixels asynchronously generate informative output events are gathering increasing interest because they can reduce the data latency, rate, and redundancy, while also increasing dynamic range. This paper proposes such a dynamic vision sensor (DVS) pixel aimed at color vision (cDVS). The pixel combines subthreshold continuous-time analog circuits with event-driven switched-capacitor amplifiers and asynchronous digital outputs. The cDVS simultaneously detects separate log-intensity and wavelength change events using a single buried double junction (BDJ) photodiode. Chip measurements show that the cDVS color change pathway can detect light wavelength changes as small as 15 nm, while the cDVS relative intensity change pathway detects changes as small as 10% of intensity. The circuit is characterized and improvements are proposed.


Frontiers in Neuroscience | 2014

Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor

Christian Brandli; Thomas Mantel; Marco Hutter; Markus A. Höpflinger; Raphael Berner; Roland Siegwart; Tobi Delbruck

Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions up to pulsing frequencies of 500 Hz were achieved using a line laser of 3 mW at a distance of 45 cm, using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm.


international symposium on circuits and systems | 2015

Design of an RGBW color VGA rolling and global shutter dynamic and active-pixel vision sensor

Chenghan Li; Christian Brandli; Raphael Berner; Hongjie Liu; Minhao Yang; Shih-Chii Liu; Tobi Delbruck

This paper reports the design of a color dynamic and active-pixel vision sensor (C-DAVIS) for robotic vision applications. The C-DAVIS combines monochrome event-generating dynamic vision sensor pixels and 5-transistor active pixel sensor (APS) pixels patterned with an RGBW color filter array. The C-DAVIS concurrently outputs rolling- or global-shutter RGBW-coded VGA-resolution frames and asynchronous monochrome QVGA-resolution temporal contrast events. Hence the C-DAVIS is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast. The C-DAVIS chip is fabricated in TowerJazz 0.18 µm CMOS image sensor technology. An RGBW 2 × 2-pixel unit measures 20 µm × 20 µm. The chip die measures 8 mm × 6.2 mm.


international symposium on circuits and systems | 2010

Raphael Berner; Tobi Delbruck

Collaboration


Dive into Raphael Berner's collaboration.

Top Co-Authors


Bernabé Linares-Barranco

Spanish National Research Council
