Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael Hofstätter is active.

Publication


Featured research published by Michael Hofstätter.


international solid-state circuits conference | 2007

A Dual-Line Optical Transient Sensor with On-Chip Precision Time-Stamp Generation

Christoph Posch; Michael Hofstätter; Daniel Matolin; Guy Vanstraelen; Peter Schön; Nikolaus Donath; Martin Litzenberger

A 120 dB dynamic range 2×256 dual-line optical transient sensor uses pixels that respond asynchronously to relative intensity changes. A time stamp with variable resolution down to 100 ns is allocated to the events at the pixel level. The pixel address and time stamp are read out via a 3-stage pipelined synchronous arbiter. The chip is fabricated in 0.35 µm CMOS, runs at 40 MHz and consumes 250 mW at 3.3 V.
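
As a rough illustration of the data such a sensor emits, the following C sketch models an address-event record carrying a pixel address and a variable-resolution timestamp. The field names and widths are assumptions for illustration, not the chip's actual format.

    /* Minimal sketch of the kind of event record an address-event (AER)
     * sensor with on-chip time-stamping might emit; field names and
     * widths are illustrative assumptions, not the chip's actual format. */
    #include <stdint.h>

    typedef struct {
        uint16_t pixel_addr;  /* which of the 2x256 pixels fired         */
        uint8_t  polarity;    /* direction of the relative change (+/-)  */
        uint32_t timestamp;   /* counter ticks; 1 tick = 100 ns at the
                                 finest configured resolution            */
    } aer_event_t;

    /* Convert a timestamp to nanoseconds for a given resolution setting. */
    static inline uint64_t ts_to_ns(uint32_t ts, uint32_t ns_per_tick) {
        return (uint64_t)ts * ns_per_tick;
    }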


international symposium on circuits and systems | 2012

CARE: A dynamic stereo vision sensor system for fall detection

Ahmed Nabil Belbachir; Martin Litzenberger; Stephan Schraml; Michael Hofstätter; Daniel Bauer; Peter Schön; M. Humenberger; C. Sulzbachner; T. Lunden; M. Merne

This paper presents a recently developed dynamic stereo vision sensor system and its application to fall detection for the safety of the elderly at home. The system consists of (1) two optical detector chips with 304×240 event-driven pixels that are sensitive only to relative light intensity changes, (2) an FPGA for interfacing the detectors, early data processing, and stereo matching for depth map reconstruction, (3) a digital signal processor for interpreting the sensor data in real time for fall recognition, and (4) a wireless communication module for instantly alerting caring institutions. The system was designed for incident detection in the private homes of the elderly to foster safety and security. From the application's point of view, it has two main advantages over existing wearable systems: (a) a stationary installation is better accepted for independent living than permanently worn devices, and (b) privacy is systematically ensured since the vision detector, unlike classic video sensors, does not produce real images. The system currently processes about 300 k events per second. It was evaluated using 500 fall cases acted by a stuntman, with more than 90% positive detections reported. A live demonstration of the sensor system and its capabilities will be shown during ISCAS 2012.
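
As a hedged aside on the depth-map reconstruction step: once events from the two sensors have been stereo-matched, depth follows from standard triangulation. The C sketch below illustrates this under assumed values for the focal length (f_px, in pixels) and baseline; it is not the CARE system's actual matching algorithm.

    /* Illustrative sketch (not the CARE system's actual algorithm): two
     * matched events on the same row give a disparity, and triangulation
     * converts disparity to depth. f_px and baseline_m are assumed. */
    #include <stdint.h>

    typedef struct { uint16_t x, y; uint32_t t_us; } event_t;

    /* depth [m] = focal length [px] * baseline [m] / disparity [px] */
    static double depth_from_pair(event_t left, event_t right,
                                  double f_px, double baseline_m) {
        int disparity = (int)left.x - (int)right.x;
        if (disparity <= 0) return -1.0; /* no valid match */
        return f_px * baseline_m / (double)disparity;
    }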


computer vision and pattern recognition | 2011

Embedded neuromorphic vision for humanoid robots

Chiara Bartolozzi; Francesco Rea; Charles Clercq; Daniel Bernhard Fasnacht; Giacomo Indiveri; Michael Hofstätter; Giorgio Metta

We are developing an embedded vision system for the humanoid robot iCub, inspired by the biology of the mammalian visual system, including concepts such as stimulus-driven, asynchronous signal sensing and processing. It comprises stimulus-driven sensors, a dedicated embedded processor and an event-based software infrastructure for processing visual stimuli. These components are integrated with the standard machine vision modules currently implemented on the robot, in a configuration that exploits the best features of both: the high-resolution, color, frame-based vision and the low-redundancy, wide-dynamic-range, high-temporal-resolution event-based neuromorphic sensors. This approach seeks to combine various styles of vision hardware with sensorimotor systems to complement and extend the current state of the art.


international conference on electronics, circuits, and systems | 2006

Multiple Input Digital Arbiter with Timestamp Assignment for Asynchronous Sensor Arrays

Michael Hofstätter; Ahmed Nabil Belbachir; Ernst Bodenstorfer; Peter Schön

This paper introduces a novel concept for arbitrating the access of multiple asynchronous data sources to a shared communication bus while adding a timestamp, representing high-precision temporal information, to the sensor information. The principle is based on an arbiter that serves uncorrelated inputs and generates a stream of data packets. Each data packet contains the address identifying the data source, data from that source, and a timestamp marking the occurrence of the bus request (event) generated by the corresponding data source. To enhance adaptability to particular applications, the time resolution can be varied. The proposed concept delivers a sorted output data stream in which nearly concurrent events are labeled with the same timestamp, which is very advantageous for subsequent data processing. Furthermore, the arbitration method is very efficient, as it enables utilization of the maximum output transfer rate for a given clock frequency. The concept is intended for use in asynchronous vision chips and is demonstrated using an asynchronous vision chip containing 512 autonomous optical sensor elements.
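
The following C sketch illustrates the arbitration idea in software: pending requests from uncorrelated sources are granted one at a time, and each grant emits a packet carrying the source address, its data, and the timestamp captured when the request occurred. The round-robin policy and packet layout are assumptions for illustration; the paper's arbiter is a hardware design.

    /* Software sketch of the arbitration concept; round-robin policy
     * and field widths are illustrative assumptions. */
    #include <stdint.h>
    #include <stdbool.h>

    #define N_SOURCES 512

    typedef struct {
        uint16_t addr;      /* identifies the requesting source */
        uint16_t data;      /* payload from that source         */
        uint32_t timestamp; /* time of the bus request (event)  */
    } packet_t;

    typedef struct {
        bool     pending;
        uint16_t data;
        uint32_t request_time;
    } request_t;

    /* Grant one pending request per call, starting after the last grant
     * so no source can starve the others; false if nothing is pending. */
    static bool arbitrate(request_t req[N_SOURCES], uint16_t *last,
                          packet_t *out) {
        for (int i = 1; i <= N_SOURCES; i++) {
            uint16_t s = (uint16_t)((*last + i) % N_SOURCES);
            if (req[s].pending) {
                out->addr      = s;
                out->data      = req[s].data;
                out->timestamp = req[s].request_time;
                req[s].pending = false;
                *last = s;
                return true;
            }
        }
        return false;
    }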


international symposium on circuits and systems | 2007

Wide dynamic range, high-speed machine vision with a 2×256 pixel temporal contrast vision sensor

Christoph Posch; Michael Hofstätter; Martin Litzenberger; Daniel Matolin; Nikolaus Donath; Peter Schön; Heinrich Garn

This paper presents a 2×256 pixel dual-line temporal contrast vision sensor and its use in exemplary high-speed machine vision applications over a wide range of target illumination. The sensor combines an asynchronous, data-driven pixel circuit with an on-chip precision time-stamp generator and a 3-stage pipelined synchronous bus arbiter. With a temporal resolution down to 100 ns, corresponding to a line rate of 10 MHz, the sensor is ideal for high-speed machine vision tasks that do not rely on conventional image data. The output data rate depends on the dynamic content of the target scene and is typically orders of magnitude lower than the equivalent data output of conventional clocked line sensors in this type of application. A 120 dB dynamic range makes high-speed operation possible at low lighting levels or under uncontrolled lighting conditions. The sensor features two parallel pixel lines with a line separation of 250 µm and a pixel pitch of 15 µm. A prototype was fabricated in a standard 0.35 µm CMOS technology. Results on high-speed edge angle resolution and edge gradient extraction as well as wide dynamic range operation are presented.
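
The quoted line rate follows directly from the temporal resolution: $f_{\mathrm{line}} = 1/t_{\mathrm{res}} = 1/(100\,\mathrm{ns}) = 10\,\mathrm{MHz}$.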


computer vision and pattern recognition | 2012

Event-driven embodied system for feature extraction and object recognition in robotic applications

Georg Wiesmann; Stephan Schraml; Martin Litzenberger; Ahmed Nabil Belbachir; Michael Hofstätter; Chiara Bartolozzi

A major challenge in robotic applications is the interaction with a dynamic environment and with humans, which is typically constrained by the capability of visual sensors and the computational cost of signal processing algorithms. Addressing this problem, the paper presents an event-driven embodied system for feature extraction and object recognition as a novel, efficient sensory approach in robotic applications. The system is established for a mobile humanoid robot and provides the infrastructure for interfacing asynchronous vision sensors with the processing unit of the robot. By applying event-feature "mapping", the address-event representation of the sensors is enhanced with additional information that can be used for object recognition. The system is presented in the context of an exemplary application in which the robot has to detect and grasp a ball in an arbitrary state of motion.
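
A minimal C sketch of the event-feature "mapping" idea, assuming a hypothetical edge-orientation label as the attached feature (the paper does not specify the feature encoding): each address event is enriched with information a downstream recognizer can use.

    /* Sketch only: the feature encoding below (an edge-orientation
     * label) is an assumed example, not the paper's own scheme. */
    #include <stdint.h>

    typedef struct { uint16_t x, y; uint32_t t_us; } event_t;

    typedef enum { EDGE_H, EDGE_V, EDGE_D1, EDGE_D2, EDGE_NONE } feature_t;

    typedef struct {
        event_t   ev;      /* original address event */
        feature_t feature; /* attached feature label */
    } labeled_event_t;

    /* Hypothetical labeling step: in a real system the label would be
     * computed from a local neighborhood of recent events (omitted). */
    static labeled_event_t label_event(event_t ev, feature_t f) {
        labeled_event_t le = { ev, f };
        return le;
    }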


biomedical circuits and systems conference | 2009

An integrated 20-bit 33/5M events/s AER sensor interface with 10ns time-stamping and hardware-accelerated event pre-processing

Michael Hofstätter; Peter Schön; Christoph Posch

This paper presents a custom data bridge that interfaces the continuous-time world of asynchronous address-events (AER) to the realm of conventional digital data processing. The main focus in the design of the interface was on precisely maintaining the inherent timing information of AER sensor data while providing robust peak-rate handling, DMA functionality and a novel event-rate dependent system control mechanism. The sensor interface can be integrated with standard CMOS logic in an AER processing system-on-chip and implements hardware-accelerated event pre-processing, including pre-FIFO high-resolution time-stamping, address masking for ROI and event-rate dependent IRQ generation, without loading a downstream processing device. The sensor interface has been implemented in a 0.18 µm CMOS process and achieves peak AER event rates of 33M events/s and sustained AER event rates of 5.125M events/s at 10 ns time-stamp resolution. We discuss design considerations and implementation details and show measurement results from the fabricated chip.
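
As a sketch of the address-masking idea for ROI filtering, the following C fragment keeps an event only if its masked address matches a configured pattern. The mask/match register semantics are assumptions, since the abstract does not give the actual register layout.

    /* ROI filtering by address masking; mask/match semantics assumed. */
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t mask;   /* which address bits matter    */
        uint32_t match;  /* required value of those bits */
    } roi_filter_t;

    static bool event_in_roi(uint32_t event_addr, roi_filter_t f) {
        return (event_addr & f.mask) == f.match;
    }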


international conference on electronics, circuits, and systems | 2011

Hardware-accelerated address-event processing for high-speed visual object recognition

Michael Hofstätter; Martin Litzenberger; Daniel Matolin; Christoph Posch

This paper presents a hardware implementation for high-speed, event-based data processing. A full-custom address-event (AER) processing system (GAEP) features a 10 ns-resolution, 33M/5.125M events·s⁻¹ peak/sustained event rate sensor data interface for precision time-stamping of asynchronous sensor data and implements hardware-accelerated event pre-processing, including rate-dependent IRQ generation and address masking for ROI/RONI. The pre-processing functions are implemented in dedicated hardware and operate without loading the actual processor device, a SPARC-compatible general-purpose microprocessor. The complete SoC is implemented in 0.18 µm standard CMOS technology. We present a camera system comprising the AER processor and a bio-inspired dynamic vision sensor in an exemplary high-speed vision application related to shape detection/object recognition. Relevant details of the system architecture and performance results characterizing the vision system in a real-world machine vision application are presented.
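
To illustrate rate-dependent IRQ generation, here is a minimal C sketch that counts events per fixed window and fires once when the rate crosses a threshold, so the processor is loaded only when activity warrants it. The window length and threshold are illustrative assumptions.

    /* Event-rate dependent IRQ sketch; window and threshold assumed. */
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t window_us;    /* measurement window length          */
        uint32_t threshold;    /* events per window that trigger IRQ */
        uint32_t count;        /* events seen in the current window  */
        uint32_t window_start;
    } rate_irq_t;

    /* Call per event; returns true exactly once per window, at the
     * moment the event count reaches the threshold. */
    static bool on_event(rate_irq_t *r, uint32_t t_us) {
        if (t_us - r->window_start >= r->window_us) {
            r->count = 0;          /* start a new window */
            r->window_start = t_us;
        }
        return ++r->count == r->threshold;
    }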


international symposium on circuits and systems | 2010

A SPARC-compatible general purpose address-event processor with 20-bit 10ns-resolution asynchronous sensor data interface in 0.18μm CMOS

Michael Hofstätter; Peter Schön; Christoph Posch

This paper presents a general purpose address-event (AER) processor based on a SPARC-compatible LEON3 core with a custom data interface for asynchronous sensor data. The main focus in the design of the sensor interface was on precisely maintaining the inherent timing information of AER sensor data while providing robust peak-rate handling, DMA functionality and a novel event-rate dependent system control mechanism. Hardware-accelerated event pre-processing includes pre-FIFO high-resolution time-stamping, address masking for ROI and event-rate dependent IRQ generation without loading the processor core. The System-on-Chip has been implemented in a 0.18 µm CMOS process and achieves peak AER input event rates of 33M AE/s and sustained event rates of 5.125M AE/s at 10 ns time-stamp resolution. The core processes AEs at a sustained rate of more than 1M AE/s. We discuss design considerations and implementation details and show measurement results from the fabricated chip.
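
The gap between the peak and sustained rates gives a feel for the buffering requirement. As a rough, illustrative estimate (the abstract does not state the FIFO depth), a FIFO holding $D$ events can absorb a full-rate burst for $t = D/(R_{\mathrm{peak}} - R_{\mathrm{sustained}})$; with an assumed $D = 1024$, $t = 1024/(33\,\mathrm{M/s} - 5.125\,\mathrm{M/s}) \approx 36.7\,\mu\mathrm{s}$.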


computer vision and pattern recognition | 2014

A Novel HDR Depth Camera for Real-Time 3D 360° Panoramic Vision

Ahmed Nabil Belbachir; Stephan Schraml; Manfred Mayerhofer; Michael Hofstätter

This paper presents a novel 360° high-dynamic-range (HDR) camera for real-time 3D 360° panoramic computer vision. The camera consists of (1) a pair of bio-inspired dynamic vision line sensors (1024 pixels each) that asynchronously generate events at high temporal resolution with on-chip time stamping (1 µs resolution), offering a high dynamic range and sparse visual coding of the information, (2) a high-speed mechanical device rotating at up to 10 revolutions per second (rps), on which the pair of sensors is mounted, and (3) a processing unit for configuring the detector chips and transmitting their data through a slip ring and a gigabit Ethernet link to the user. Within this work, we first present the new camera, its individual components and the resulting panoramic edge map. In a second step, we develop a method for reconstructing intensity images from the event data generated by the sensors; the algorithm maps the recorded panoramic views into gray-level images using a transform coefficient. In the last part of this work, anaglyph representations and 3D reconstruction results from the stereo images are shown. The experimental results demonstrate the camera's ability to generate ten 3D panoramic views per second in real time at an image resolution of 5000×1024 pixels with an intra-scene dynamic range of more than 120 dB under natural illumination. The camera's potential for 360° depth imaging and mobile computer vision is briefly highlighted.
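
A minimal C sketch of the reconstruction idea, in the spirit of (but not identical to) the paper's method: signed events are accumulated per pixel and scaled by a transform coefficient into a gray level. The coefficient value and clamping range are assumptions.

    /* Event-to-gray-level accumulation sketch; coefficient and
     * clamping range are illustrative assumptions. */
    #include <stdint.h>

    #define W 5000
    #define H 1024

    static float accum[H][W];  /* per-pixel intensity accumulator */

    static void add_event(uint16_t x, uint16_t y, int polarity, float coeff) {
        float v = accum[y][x] + coeff * (polarity ? 1.0f : -1.0f);
        if (v < 0.0f)   v = 0.0f;    /* clamp to displayable range */
        if (v > 255.0f) v = 255.0f;
        accum[y][x] = v;
    }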

Collaboration


Dive into Michael Hofstätter's collaborations.

Top Co-Authors (all at the Austrian Institute of Technology)

Peter Schön
Martin Litzenberger
Christoph Posch
Ahmed Nabil Belbachir
Nikolaus Donath
Daniel Matolin
Daniel Bauer
Nenad Milosevic
Stephan Schraml
Bernhard Kohn