
Publication


Featured research published by Christian Brandli.


IEEE Journal of Solid-State Circuits | 2014

A 240 × 180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor

Christian Brandli; Raphael Berner; Minhao Yang; Shih-Chii Liu; Tobi Delbruck

Event-based dynamic vision sensors (DVSs) asynchronously report log intensity changes. Their high dynamic range, sub-millisecond latency, and sparse output make them useful in applications such as robotics and real-time tracking. However, they discard absolute intensity information, which is useful for object recognition and classification. This paper presents a dynamic and active pixel vision sensor (DAVIS) which addresses this deficiency by concurrently outputting asynchronous DVS events and synchronous global shutter frames. The active pixel sensor (APS) circuits and the DVS circuits within a pixel share a single photodiode. Measurements from a 240×180 sensor array of 18.5 μm² pixels fabricated in a 0.18 μm 6M1P CMOS image sensor (CIS) technology show a dynamic range of 130 dB with an 11% contrast detection threshold, a minimum latency of 3 μs, and 3.5% contrast matching for the DVS pathway, and a 51 dB dynamic range with 0.5% FPN for the APS readout.
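
The DVS pathway described above can be captured in a few lines: a pixel emits an ON or OFF event whenever its log intensity has moved by more than the contrast threshold since the last event. Below is a minimal sketch of that pixel model; the function name, threshold value, and synthetic input are illustrative assumptions, not the chip's actual circuit behavior.

```python
import numpy as np

def dvs_events(log_I, threshold=np.log(1.11)):
    """Toy DVS pixel model: emit +1/-1 events whenever the log intensity
    trace moves by more than `threshold` relative to the reference level
    stored at the last event. `log_I` is a 1-D array of samples."""
    events = []
    ref = log_I[0]                      # reference level at last event
    for t, x in enumerate(log_I):
        while x - ref >= threshold:     # ON events (brightening)
            ref += threshold
            events.append((t, +1))
        while ref - x >= threshold:     # OFF events (darkening)
            ref -= threshold
            events.append((t, -1))
    return events

# Example: a pixel watching a brightening, then darkening stimulus.
I = np.concatenate([np.linspace(1, 4, 50), np.linspace(4, 2, 30)])
print(dvs_events(np.log(I))[:5])
```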


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2013

Low-latency localization by active LED markers tracking using a dynamic vision sensor

Andrea Censi; Jonas Strubel; Christian Brandli; Tobi Delbruck; Davide Scaramuzza

At the current state of the art, the agility of an autonomous flying robot is limited by its sensing pipeline, because the relatively high latency and low sampling frequency limit the aggressiveness of the control strategies that can be implemented. To obtain more agile robots, we need faster sensing pipelines. A Dynamic Vision Sensor (DVS) is a very different sensor from a normal CMOS camera: rather than providing discrete frames, its output is a sequence of asynchronous timestamped events, each describing a change in the perceived brightness at a single pixel. The latency of such sensors can be measured in microseconds, thus offering the theoretical possibility of creating a sensing pipeline whose latency is negligible compared to the dynamics of the platform. However, to use these sensors we must rethink the way we interpret visual data. This paper presents a method for low-latency pose tracking using a DVS and Active LED Markers (ALMs), which are LEDs blinking at high frequency (>1 kHz). The sensor's time resolution allows distinguishing the different frequencies, thus avoiding the need for data association. This approach is compared to traditional pose tracking based on a CMOS camera. The DVS performance is not affected by fast motion, unlike the CMOS camera, which suffers from motion blur.
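
Because each ALM blinks at its own frequency, a pixel observing a marker produces events whose inter-event intervals match that marker's period, so events can be assigned to markers without any explicit data association. A hedged sketch of that idea follows; the event format, marker frequencies, and tolerance are illustrative assumptions rather than the paper's exact method.

```python
def classify_events(events, marker_freqs_hz, tol=0.2):
    """Assign events to blinking-LED markers by inter-event interval.
    `events` is a list of (timestamp_s, x, y); an event is matched to
    the marker whose blink period best explains the time since the
    previous event at the same pixel."""
    last_seen = {}                      # (x, y) -> previous timestamp
    periods = [1.0 / f for f in marker_freqs_hz]
    labeled = []
    for t, x, y in events:
        if (x, y) in last_seen:
            dt = t - last_seen[(x, y)]
            # pick the marker period closest to the observed interval
            k = min(range(len(periods)), key=lambda i: abs(dt - periods[i]))
            if abs(dt - periods[k]) < tol * periods[k]:
                labeled.append((t, x, y, k))
        last_seen[(x, y)] = t
    return labeled

# Two markers at 1 kHz and 2 kHz: intervals of ~1 ms vs. ~0.5 ms.
evts = [(0.000, 5, 5), (0.001, 5, 5), (0.0, 9, 9), (0.0005, 9, 9)]
print(classify_events(evts, [1000.0, 2000.0]))
```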


Frontiers in Neuroscience | 2014

Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor

Christian Brandli; Thomas Mantel; Marco Hutter; Markus A. Höpflinger; Raphael Berner; Roland Siegwart; Tobi Delbruck

Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. Stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extraction at pulsing frequencies of up to 500 Hz was achieved using a 3 mW line laser at a distance of 45 cm, using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid-prototyped terrain samples were successfully reconstructed with an accuracy of 2 mm.
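
The stripe extraction rests on one observation: events caused by the pulsed laser arrive phase-locked to the pulse train, while other scene events do not. Below is a hedged sketch of such a temporal filter, scoring each pixel by how consistently its events fall at the same phase of the pulse period; the names, window width, and hit count are assumptions, and the paper's adaptive filter is more sophisticated.

```python
from collections import defaultdict

def stripe_pixels(events, pulse_period_s, window_s=1e-4, min_hits=3):
    """Keep pixels whose events are phase-locked to the laser pulses.
    `events` holds (timestamp_s, x, y); a pixel accumulates a hit when
    an event falls at (nearly) the same phase of the pulse period as
    its first event. Unsynchronized activity does not stay phase-locked."""
    hits = defaultdict(int)
    phase = {}                          # per-pixel phase of first event
    for t, x, y in events:
        p = t % pulse_period_s
        phase.setdefault((x, y), p)
        if abs(p - phase[(x, y)]) < window_s:
            hits[(x, y)] += 1
    return {px for px, n in hits.items() if n >= min_hits}

# 500 Hz laser: the stripe pixel fires ~1 us after every pulse.
T = 1 / 500.0
evts = [(k * T + 1e-6, 10, 20) for k in range(5)] + [(0.0013, 7, 7)]
print(stripe_pixels(evts, T))           # -> {(10, 20)}
```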


IEEE International Symposium on Circuits and Systems (ISCAS) | 2015

Design of an RGBW color VGA rolling and global shutter dynamic and active-pixel vision sensor

Chenghan Li; Christian Brandli; Raphael Berner; Hongjie Liu; Minhao Yang; Shih-Chii Liu; Tobi Delbruck

This paper reports the design of a color dynamic and active-pixel vision sensor (C-DAVIS) for robotic vision applications. The C-DAVIS combines monochrome event-generating dynamic vision sensor pixels and 5-transistor active pixel sensor (APS) pixels patterned with an RGBW color filter array. The C-DAVIS concurrently outputs rolling- or global-shutter RGBW-coded VGA-resolution frames and asynchronous monochrome QVGA-resolution temporal contrast events. Hence the C-DAVIS is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast. The C-DAVIS chip is fabricated in TowerJazz 0.18 μm CMOS image sensor technology. An RGBW 2×2-pixel unit measures 20 μm × 20 μm. The chip die measures 8 mm × 6.2 mm.
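
One consequence of this layout is that each monochrome QVGA event pixel overlaps one RGBW 2×2 unit of the VGA APS array, so an event address maps directly to the four color samples at the same location. The sketch below illustrates that address mapping only; the particular corner assignment of R, G, B, and W within the quad is a pure assumption, not taken from the paper.

```python
def event_to_rgbw_quad(ex, ey):
    """Map a QVGA (320x240) event address to the four VGA (640x480)
    APS addresses of the RGBW 2x2 unit it overlaps, assuming the event
    pixel pitch is exactly twice the APS pixel pitch. The assignment of
    colors to corners here is illustrative."""
    x0, y0 = 2 * ex, 2 * ey
    return {"R": (x0, y0), "G": (x0 + 1, y0),
            "B": (x0, y0 + 1), "W": (x0 + 1, y0 + 1)}

print(event_to_rgbw_quad(10, 7))
```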


IEEE International Symposium on Circuits and Systems (ISCAS) | 2014

Real-time, high-speed video decompression using a frame- and event-based DAVIS sensor

Christian Brandli; Lorenz K. Müller; Tobi Delbruck

Dynamic and active pixel vision sensors (DAVISs) are a new type of sensor that combines a frame-based intensity readout with an event-based temporal contrast readout. This paper demonstrates that these sensors inherently perform high-speed video compression in each pixel by describing the first decompression algorithm for this data. The algorithm performs an online optimization of the event decoding in real time. Example scenes were recorded by the 240×180-pixel sensor at sub-Hz frame rates and successfully decompressed, yielding an equivalent frame rate of 2 kHz. A quantitative analysis of the compression quality resulted in an average pixel error of 0.5 DN intensity resolution for non-saturating stimuli. The system exhibits an adaptive compression ratio which depends on the activity in a scene; for stationary scenes it can go up to 1862. The low data rate and power consumption of the proposed video compression system make it suitable for distributed sensor networks.
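
The decompression idea can be sketched compactly: starting from a low-rate APS frame, each DVS event nudges its pixel's log intensity by an estimated per-pixel contrast step, and the step estimate is refined once the next frame reveals the true log intensity change. The following is a minimal sketch under those assumptions; the online optimization in the paper is more elaborate than this one-shot re-estimation.

```python
import numpy as np

def decompress(frame0, frame1, events, step0=0.1):
    """Reconstruct intermediate frames from the events between two APS
    frames: each (t, x, y, polarity) event nudges its pixel's log
    intensity by a per-pixel step, and the step is re-estimated from how
    well the accumulated events explain the actual frame difference."""
    logI = np.log(frame0.astype(float) + 1.0)
    net = np.zeros_like(logI)               # net signed event count
    recon = []
    for t, x, y, pol in events:
        logI[y, x] += step0 * pol
        net[y, x] += pol
        recon.append((t, np.exp(logI) - 1.0))
    diff = np.log(frame1.astype(float) + 1.0) - np.log(frame0.astype(float) + 1.0)
    mask = net != 0
    step1 = np.full_like(logI, step0)       # refined steps for next interval
    step1[mask] = diff[mask] / net[mask]
    return recon, step1

# Toy example: one pixel brightens between frames, explained by 2 ON events.
f0 = np.zeros((4, 4)); f1 = np.zeros((4, 4)); f1[1, 2] = 3.0
recon, step = decompress(f0, f1, [(0.1, 2, 1, +1), (0.2, 2, 1, +1)])
print(step[1, 2])                           # -> log(4)/2 ~ 0.69
```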


International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP) | 2016

ELiSeD — An event-based line segment detector

Christian Brandli; Jonas Strubel; Susanne Keller; Davide Scaramuzza; Tobi Delbruck

Event-based temporal contrast vision sensors such as the Dynamic Vision Sensor (DVS) have advantages such as high dynamic range, low latency, and low power consumption. Instead of frames, these sensors produce a stream of events that encode discrete amounts of temporal contrast. Surfaces and objects with sufficient spatial contrast trigger events if they are moving relative to the sensor, which thus performs inherent edge detection. These sensors are well suited for motion capture, but so far suitable event-based, low-level features that allow assigning events to spatial structures have been lacking. A general solution of the so-called event correspondence problem, i.e., inferring which events are caused by the motion of the same spatial feature, would allow applying these sensors to a multitude of tasks such as visual odometry or structure from motion. The proposed Event-based Line Segment Detector (ELiSeD) is a step towards solving this problem by parameterizing the event stream as a set of line segments. The event stream which is used to update these low-level features is continuous in time and has a high temporal resolution; this allows capturing even fast motions without the requirement to solve the conventional frame-to-frame motion correspondence problem. The ELiSeD feature detector and tracker runs in real time on a laptop computer at image speeds of up to 1300 pix/s and can continuously track rotations of up to 720 deg/s. The algorithm is open-sourced in the jAER project.
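
A common way to build such low-level event features is to maintain a per-pixel surface of most recent timestamps and estimate the local edge orientation from its spatial gradient; events with compatible orientations can then be grouped into line segments. The sketch below shows only that first step and is not the actual ELiSeD implementation (which is available in jAER); the function name and window size are assumptions.

```python
import numpy as np

def event_orientation(ts, x, y, t):
    """Update the timestamp surface `ts` (most recent event time per
    pixel) with an event at (x, y, t) and estimate the local edge
    orientation from the spatial gradient of the surface: a moving edge
    sweeps the surface, so timestamps increase across the edge."""
    ts[y, x] = t
    patch = ts[y - 1:y + 2, x - 1:x + 2]    # 3x3 neighborhood
    gy, gx = np.gradient(patch)
    # the edge runs perpendicular to the timestamp gradient
    return np.arctan2(gy[1, 1], gx[1, 1]) + np.pi / 2

ts = np.zeros((180, 240))                   # height x width
print(event_orientation(ts, 50, 60, 1.0))
```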


IEEE International Symposium on Circuits and Systems (ISCAS) | 2015

Design of a spatiotemporal correlation filter for event-based sensors

Hongjie Liu; Christian Brandli; Chenghan Li; Shih-Chii Liu; Tobias Delbrück

This paper reports the design of a 1 mW, 10 ns-latency mixed-signal system in 0.18 μm CMOS which enables filtering out uncorrelated background activity in event-based neuromorphic sensors. Background activity (BA) in the output of dynamic vision sensors is caused by thermal noise and junction leakage current acting on switches connected to floating nodes in the pixels. The reported chip generates a pass flag for spatiotemporally correlated events for post-processing, to reduce the communication/computation load and improve the information rate. A chip with a 128×128 array of 20×20 μm² cells has been designed. Each filter cell combines programmable spatial subsampling with a temporal window based on current integration. Power gating is used to minimize the power consumption by only activating the threshold detection and communication circuits in the cell receiving an input event. This correlation filter chip targets embedded neuromorphic visual and auditory systems, where low average power consumption and low latency are critical.
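
The filtering rule itself is simple: an event passes only if some nearby pixel produced an event within the temporal window; isolated events are treated as background activity. Below is a software sketch of that rule under assumed window sizes; the chip implements it in mixed-signal circuits with programmable subsampling rather than this explicit neighborhood scan.

```python
import numpy as np

def correlation_filter(events, shape=(128, 128), dt=10e-3, radius=1):
    """Pass events that have a spatiotemporally correlated neighbor:
    another event within `radius` pixels during the last `dt` seconds.
    `events` is a time-ordered list of (timestamp_s, x, y)."""
    last = np.full(shape, -np.inf)      # most recent timestamp per pixel
    passed = []
    for t, x, y in events:
        y0, y1 = max(0, y - radius), min(shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(shape[1], x + radius + 1)
        if (t - last[y0:y1, x0:x1]).min() <= dt:
            passed.append((t, x, y))    # correlated -> pass flag
        last[y, x] = t
    return passed

# Only the second event has a recent neighbor, so only it passes.
print(correlation_filter([(0.000, 10, 10), (0.002, 11, 10), (0.5, 90, 3)]))
```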


IEEE Biomedical Circuits and Systems Conference (BioCAS) | 2015

Dynamically reconfigurable silicon array of generalized integrate-and-fire neurons

Vigil Varghese; Jamal Lottier Molin; Christian Brandli; Shoushun Chen; Ralph Etienne-Cummings

In this paper we present a highly scalable, dynamically reconfigurable, energy-efficient silicon neuron model for large-scale neural networks. This model is a simplification of the generalized linear integrate-and-fire neuron model. The presented model is capable of reproducing 9 of the 20 prominent biologically relevant neuron behaviors. The circuits are designed for a 0.5 μm process and occupy an area of 1029 μm², while consuming an average power of only 0.38 nW at 1 kHz.
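
For reference, the base dynamics being simplified here is leaky integration: the membrane potential decays toward rest, integrates input current, and resets after crossing a threshold. A discrete-time sketch follows; parameter values are illustrative and unrelated to the chip's circuit parameters.

```python
def lif(I, dt=1e-4, tau=20e-3, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron: returns the spike
    times for an input current sequence `I` (one sample per dt)."""
    v, spikes = v_rest, []
    for k, i_in in enumerate(I):
        v += dt / tau * (v_rest - v + i_in)    # leak toward rest + drive
        if v >= v_th:                          # threshold crossing
            spikes.append(k * dt)
            v = v_reset                        # reset after spike
    return spikes

print(lif([1.5] * 10000)[:3])                  # constant suprathreshold drive
```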


IEEE International Symposium on Circuits and Systems (ISCAS) | 2017

Low-power, low-mismatch, highly-dense array of VLSI Mihalas-Niebur neurons

Jamal Lottier Molin; Adebayo Eisape; Chetan Singh Thakur; Vigil Varghese; Christian Brandli; Ralph Etienne-Cummings

We present an array of Mihalas-Niebur neurons with dynamically reconfigurable synapses implemented in 0.5 μm CMOS technology, optimized for low power, low mismatch, and high density. This neural array has two modes of operation: in the first, each cell in the array operates as an independent leaky integrate-and-fire (I&F) neuron; in the second, two cells work together to model the Mihalas-Niebur neuron dynamics. Depending on the mode of operation, this implementation consists of 2040 Mihalas-Niebur neurons or 4080 I&F neurons within a 3 mm × 3 mm area. Each I&F neuron cell occupies an area of 1495 μm², and the neural array dissipates 360 pJ of energy per synaptic event measured at a 5.0 V power supply (~14 pJ at 1.0 V, estimated from SPICE simulation).
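
The Mihalas-Niebur model extends leaky integration with an adaptive threshold that is attracted to a resting value but pushed up by depolarization, which is what lets one model reproduce many firing patterns and is why two paired cells (one per state variable) can implement it. Below is a hedged discrete-time sketch of the two coupled state variables in a simplified form without the model's internal currents; all parameter values are illustrative.

```python
def mihalas_niebur(I, dt=1e-4, a=5.0, b=10.0, tau=20e-3,
                   v_rest=0.0, th_inf=1.0, v_reset=0.0, th_reset=1.2):
    """Simplified Mihalas-Niebur neuron: membrane potential v plus an
    adaptive threshold th, which decays toward th_inf but rises with
    depolarization, yielding spike-frequency adaptation."""
    v, th, spikes = v_rest, th_inf, []
    for k, i_in in enumerate(I):
        v += dt / tau * (v_rest - v + i_in)        # membrane dynamics
        th += dt * (a * (v - v_rest) - b * (th - th_inf))  # threshold dynamics
        if v >= th:
            spikes.append(k * dt)
            v, th = v_reset, max(th, th_reset)     # reset both states
    return spikes

print(len(mihalas_niebur([1.5] * 20000)))          # adapting spike train
```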


IEEE International Symposium on Circuits and Systems (ISCAS) | 2014

Live demonstration: The “DAVIS” Dynamic and Active-Pixel Vision Sensor

Christian Brandli; Raphael Berner; Minhao Yang; Shih-Chii Liu; V. Villeneuva; Tobi Delbruck

This demonstration will show the features of the Dynamic and Active-Pixel Vision Sensor (DAVIS) reported at the VLSI Symposium and the International Image Sensor Workshop in 2013. This sensor concurrently outputs conventional CMOS image sensor frames and sparse, low-latency dynamic vision sensor events from the same pixels, sharing the same photodiodes. The setup will allow visitors to explore the advantages of combining fast, computationally efficient neuromorphic event-driven vision with the existing body of methods for frame-based computer and machine vision.
