Minhao Yang
University of Zurich
Publication
Featured research published by Minhao Yang.
IEEE Journal of Solid-State Circuits | 2014
Christian Brandli; Raphael Berner; Minhao Yang; Shih-Chii Liu; Tobi Delbruck
Event-based dynamic vision sensors (DVSs) asynchronously report log intensity changes. Their high dynamic range, sub-ms latency and sparse output make them useful in applications such as robotics and real-time tracking. However, they discard the absolute intensity information that is useful for object recognition and classification. This paper presents a dynamic and active pixel vision sensor (DAVIS) which addresses this deficiency by concurrently outputting asynchronous DVS events and synchronous global shutter frames. The active pixel sensor (APS) circuits and the DVS circuits within a pixel share a single photodiode. Measurements from a 240×180 sensor array of 18.5 μm × 18.5 μm pixels fabricated in a 0.18 μm 6M1P CMOS image sensor (CIS) technology show a dynamic range of 130 dB with an 11% contrast detection threshold, a minimum latency of 3 μs, and 3.5% contrast matching for the DVS pathway; and a 51 dB dynamic range with 0.5% FPN for the APS readout.
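The DVS pathway described in this abstract can be approximated in simulation: each pixel tracks log intensity and emits an ON or OFF event when the change since its last event exceeds a contrast threshold. A minimal frame-based sketch (not the chip's asynchronous circuit; the 11% default threshold is taken from the measurements quoted above, and the reset-to-current-value behavior is a simplification):

```python
import numpy as np

def dvs_events(frames, theta=0.11, eps=1e-6):
    """Emit (t, x, y, polarity) events wherever per-pixel log intensity
    has changed by more than the contrast threshold theta since the
    pixel's last event."""
    ref = np.log(frames[0] + eps)  # per-pixel reference log intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        logi = np.log(frame + eps)
        diff = logi - ref
        for pol, mask in ((+1, diff >= theta), (-1, diff <= -theta)):
            ys, xs = np.nonzero(mask)
            events.extend((t, int(x), int(y), pol) for x, y in zip(xs, ys))
            ref[mask] = logi[mask]  # reset reference where events fired
    return events
```

Because the comparison is on log intensity, a pixel whose illumination doubles fires an ON event regardless of its absolute brightness, which is the source of the wide dynamic range.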
IEEE Journal of Solid-State Circuits | 2015
Minhao Yang; Shih-Chii Liu; Tobi Delbruck
A dynamic vision sensor (DVS) encodes temporal contrast (TC) of light intensity into address-events that are asynchronously transmitted for subsequent processing. This paper describes a DVS with improved TC sensitivity and event encoding. To enhance the TC sensitivity, each pixel employs a common-gate photoreceptor for low output noise and a capacitively-coupled programmable gain amplifier for continuous-time signal amplification without sacrificing the intra-scene dynamic range. A proposed in-pixel asynchronous delta modulator (ADM) better preserves signal integrity in event encoding than the self-timed reset (STR) used in previous DVSs. A 60 × 30 prototype sensor array with a 31.2 μm pixel pitch was fabricated in a 1P6M 0.18 μm CMOS technology. It consumes 720 μW at a 100k event/s output rate. Measurements show that a 1% TC sensitivity with a 35% relative standard deviation is achieved, and that the in-pixel ADM is up to 3.5 times less susceptible to signal loss than STR in terms of event number. These improvements can facilitate the application of DVSs in areas such as optical neuroimaging, which is demonstrated in a simulated experiment.
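The ADM's advantage over self-timed reset comes from its feedback: each event subtracts one threshold step from the tracked value rather than resetting it, so sub-threshold residue is never discarded. A behavioral sketch in Python (a discrete-time stand-in for the asynchronous circuit; `delta` and the signal values are illustrative):

```python
import numpy as np

def adm_encode(x, delta):
    """Asynchronous delta modulator: emit (sample_index, +1/-1) events
    whenever the input drifts delta away from the tracked value, and
    step the tracked value by delta per event (subtract, not reset)."""
    track, events = x[0], []
    for n, xn in enumerate(x[1:], start=1):
        while xn - track >= delta:
            events.append((n, +1))
            track += delta
        while track - xn >= delta:
            events.append((n, -1))
            track -= delta
    return events

def adm_decode(events, x0, n_samples, delta):
    """Reconstruct a staircase approximation by accumulating one delta
    step per event."""
    y = np.full(n_samples, x0, dtype=float)
    for n, pol in events:
        y[n:] += pol * delta
    return y
```

The decoded staircase always stays within one `delta` of the input at the event times, which is why the subtract-based feedback preserves signal integrity.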
International Symposium on Circuits and Systems | 2015
Chenghan Li; Christian Brandli; Raphael Berner; Hongjie Liu; Minhao Yang; Shih-Chii Liu; Tobi Delbruck
This paper reports the design of a color dynamic and active-pixel vision sensor (C-DAVIS) for robotic vision applications. The C-DAVIS combines monochrome event-generating dynamic vision sensor pixels and 5-transistor active pixel sensor (APS) pixels patterned with an RGBW color filter array. The C-DAVIS concurrently outputs rolling- or global-shutter RGBW-coded VGA-resolution frames and asynchronous monochrome QVGA-resolution temporal contrast events. Hence the C-DAVIS is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast. The C-DAVIS chip is fabricated in TowerJazz 0.18 μm CMOS image sensor technology. An RGBW 2×2-pixel unit measures 20 μm × 20 μm. The chip die measures 8 mm × 6.2 mm.
International Symposium on Circuits and Systems | 2014
Minhao Yang; Shih-Chii Liu; Tobi Delbruck
Two in-pixel encoding mechanisms that convert analog input to spike output for vision sensors are modeled and compared, taking feedback delay into account: one is feedback-and-reset (FAR), and the other is feedback-and-subtract (FAS). MATLAB simulations of linear signal reconstruction from spike trains generated by the two encoders show that FAR in general has a lower signal-to-distortion ratio (SDR) than FAS due to signal loss during the reset phase and hold period, and that the SDR advantage of FAS grows as the quantization bit number and input signal frequency increase. A 500 μm² in-pixel circuit implementation of FAS using asynchronous switched capacitors in a UMC 0.18 μm 1P6M process is described, and post-layout simulation results are given to verify the FAS encoding mechanism.
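The difference between the two mechanisms can be reproduced with a simple behavioral model: both integrate the input and spike at a threshold, but FAR zeroes the accumulator on each spike while FAS subtracts the threshold and so keeps the residual. A discrete-time sketch (a stand-in for the asynchronous circuits; the threshold `q` and test input are illustrative):

```python
import numpy as np

def spike_encode(x, q, mode):
    """Integrate-and-spike encoder with feedback-and-reset ("FAR") or
    feedback-and-subtract ("FAS") feedback. q is the spike threshold."""
    acc = 0.0
    spikes = np.zeros(len(x), dtype=int)
    for n, xn in enumerate(x):
        acc += xn
        while acc >= q:
            spikes[n] += 1
            if mode == "FAR":
                acc = 0.0   # reset discards the sub-threshold residual
            else:
                acc -= q    # subtract preserves it (FAS)
    return spikes
```

For a constant input of 0.75 with q = 1 over 8 samples (integral 6.0), FAS emits 6 spikes, exactly accounting for the integral, while FAR emits only 4 because each reset throws away 0.5 of accumulated signal; that discarded residual is the source of FAR's lower SDR.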
International Symposium on Circuits and Systems | 2012
Minhao Yang; Shih-Chii Liu; Chenghan Li; Tobi Delbruck
Configurable high-performance bias current reference circuits are useful in complex mixed-signal chips. This paper presents the design of a configurable current reference array with ultra-wide dynamic range (DR). A coarse-fine architecture using octal coarse current spacing and 8 bits of fine resolution increases the overall current DR with less area than prior work. Compact current multipliers and dividers also save chip area. Shifted-source current mirrors and an off-current suppression technique improve the accuracy of the generated low currents. A buffer with dual-threshold source followers generates the output biasing voltage over a wide DR of input current. Biases are individually addressable and configurable. Measurement results of this design in a UMC 0.18 μm 1P6M CMOS process show that over 170 dB DR is achieved at room temperature. Each additional bias occupies an incremental area of 360 μm × 22 μm, smaller by a factor of 4 than the previous design.
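The coarse-fine idea can be expressed numerically: each coarse step scales the full-scale current by a fixed large factor (octal spacing), and the 8-bit fine code subdivides that range linearly, so the reachable dynamic range grows multiplicatively while the fine DAC stays small. A behavioral sketch (the parameters and transfer function are illustrative, not the chip's exact design):

```python
def bias_current(master, coarse, fine, coarse_base=8.0, fine_bits=8):
    """Coarse-fine bias DAC model: the coarse code divides the master
    current by coarse_base per step; the fine code scales the resulting
    full-scale current linearly."""
    assert 0 <= fine < (1 << fine_bits)
    full_scale = master / coarse_base ** coarse
    return full_scale * fine / (1 << fine_bits)
```

Stacking, say, seven octal coarse steps alone spans a factor of 8⁷ (about 126 dB) before the fine code adds its further subdivision, which is how such an architecture reaches a very wide DR with modest area.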
IEEE Journal of Solid-State Circuits | 2016
Minhao Yang; Chen-Han Chien; Tobi Delbruck; Shih-Chii Liu
Event-driven DSPs have the advantage of activity-dependent power consumption [1], and event-driven neural networks have shown superior power efficiency in real-time recognition tasks [2]. A bio-inspired silicon cochlea [3] functionally transforms sound input into multi-frequency-channel asynchronous event output, and hence is a natural candidate for the audio sensing frontend of event-driven signal processing systems like [1] and [2]. High-quality event encoding can be implemented with level-crossing (LC) ADCs, but the circuits are area- and power-inefficient [1]. Asynchronous delta modulation, the original form of LC sampling, can on the other hand be realized compactly even in the small pixels of vision sensors [4]. Traditional audio processing employs digital FFTs and BPFs after signal acquisition by high-precision ADCs. However, it has been shown in [5] that for classification tasks like voice activity detection (VAD), good accuracy can still be attained when filtering is performed with low-power analog BPFs. This paper presents a 0.5 V 55 μW 64×2-channel binaural silicon cochlea aimed at ultra-low-power IoE applications such as event-driven VAD, sound source localization, speaker identification and primitive speech recognition. The source-follower-based BPF and the asynchronous delta modulator (ADM) with adaptive self-oscillating comparison for event encoding are highlighted as the main contributors to the system power efficiency.
International Solid-State Circuits Conference | 2016
Minhao Yang; Chen-Han Chien; Tobias Delbrück; Shih-Chii Liu
IEEE Transactions on Biomedical Circuits and Systems | 2015
Shih-Chii Liu; Minhao Yang; Andreas Steiner; Rico Moeckel; Tobi Delbruck
Optical flow sensors have been a long-running theme in neuromorphic vision sensors, which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed at miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels with local gain control that adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using fewer than 5 k instruction cycles (12 instructions per pixel) per frame. At a 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
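The core of such a global-motion estimator is compact enough to sketch: the displaced frame is modeled as a first-order expansion of the reference frame, and the single global (dx, dy) follows from a 2×2 least-squares solve over all pixels. A minimal gradient-based version of this idea (a first-order stand-in for I2A, valid for small shifts and not the paper's DSP implementation):

```python
import numpy as np

def global_translation(ref, cur):
    """Estimate one global 2-D shift by modeling
    cur ~ ref + dx * d(ref)/dx + dy * d(ref)/dy
    and solving the 2x2 normal equations over all pixels."""
    gy, gx = np.gradient(ref.astype(float))  # axis 0 is y, axis 1 is x
    d = cur.astype(float) - ref.astype(float)
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = np.array([np.sum(gx * d), np.sum(gy * d)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy
```

Note the sign convention: if the scene content moves by +s pixels in x, the model returns dx ≈ -s, since cur(x) ≈ ref(x - s). The per-frame cost is a handful of multiply-accumulates per pixel, consistent with the instruction budget quoted above.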
International Symposium on Circuits and Systems | 2014
Christian Brandli; Raphael Berner; Minhao Yang; Shih-Chii Liu; V. Villeneuva; Tobi Delbruck
This demonstration will show the features of the Dynamic and Active-Pixel Vision Sensor (DAVIS) reported at the VLSI Symposium and the International Image Sensor Workshop in 2013. This sensor concurrently outputs conventional CMOS image sensor frames and sparse, low-latency dynamic vision sensor events from the same pixels, sharing the same photodiodes. The setup will allow visitors to explore the advantages of combining fast and computationally-efficient neuromorphic event-driven vision with the existing body of methods for frame-based computer and machine vision.
Symposium on VLSI Circuits | 2013
Raphael Berner; Christian Brandli; Minhao Yang; Shih-Chii Liu; Tobi Delbruck