Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ahmed Nabil Belbachir is active.

Publication


Featured research published by Ahmed Nabil Belbachir.


EURASIP Journal on Embedded Systems | 2007

Embedded vehicle speed estimation system using an asynchronous temporal contrast vision sensor

Daniel Bauer; Ahmed Nabil Belbachir; Nikolaus Donath; Gerhard Gritsch; Bernhard Kohn; Martin Litzenberger; Christoph Posch; Peter Schön; Stephan Schraml

This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, together with algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, a wide dynamic range of 120 dB of illumination, and zero-redundancy, asynchronous data output. For data collection, processing and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The speed estimation error of all algorithms has near-zero mean and a standard deviation better than 3% for both traffic flow directions. The results, the accuracy limitations, and the combined use of the algorithms in the system are discussed.
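To make the timing-based idea concrete, here is a minimal Python sketch of speed estimation from two detection lines with known spacing. The event format, row indices and distances are illustrative assumptions; this is not one of the paper's three algorithms.

```python
# Illustrative sketch (not the authors' algorithms): estimating vehicle speed
# from asynchronous temporal-contrast events. Assumes events are (row, t)
# tuples and that two sensor rows map to road positions a known distance apart.
import numpy as np

ROW_A, ROW_B = 10, 50          # hypothetical detection rows on the focal plane
ROW_DISTANCE_M = 6.0           # assumed real-world distance between the rows

def estimate_speed(events):
    """Estimate speed (m/s) from the median event times at the two rows."""
    t_a = np.median([t for row, t in events if row == ROW_A])
    t_b = np.median([t for row, t in events if row == ROW_B])
    dt = abs(t_b - t_a)
    return ROW_DISTANCE_M / dt if dt > 0 else float("inf")

# Synthetic burst of events for a vehicle crossing row A at t ~ 0.00 s
# and row B at t ~ 0.20 s, i.e. 6 m / 0.2 s = 30 m/s.
events = [(ROW_A, 0.00 + e) for e in np.random.uniform(0, 0.01, 50)]
events += [(ROW_B, 0.20 + e) for e in np.random.uniform(0, 0.01, 50)]
print(f"estimated speed: {estimate_speed(events):.1f} m/s")
```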


Computer Vision and Pattern Recognition | 2010

A spatio-temporal clustering method using real-time motion analysis on event-based 3D vision

Stephan Schraml; Ahmed Nabil Belbachir

This paper proposes a method for clustering the asynchronous events generated by scene activity in a dynamic 3D vision system. The inherent detection of moving objects offered by the dynamic stereo vision system, comprising a pair of dynamic vision sensors, allows event-based stereo vision in real time and a 3D representation of moving objects. The clustering method exploits the sparse spatio-temporal representation of the sensor's events for real-time detection and separation of moving objects. The method makes use of density and distance metrics for clustering asynchronous events generated by scene dynamics (changes in the scene). It has been evaluated on clustering the events of persons moving across the sensor's field of view. Tests on real scenarios with more than 100 persons show that the resulting asynchronous events can be successfully clustered and the persons can be detected.
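As a rough illustration of density- and distance-based event clustering (not the authors' implementation), the following sketch assigns each (x, y, t) event to the nearest recently active cluster or opens a new one. All thresholds are assumed values.

```python
# Minimal sketch of event clustering with distance and recency criteria,
# loosely in the spirit of the paper (thresholds and structure are assumptions).
import math

SPATIAL_RADIUS = 20.0   # px: max distance from a cluster centre (assumed)
TEMPORAL_WINDOW = 0.05  # s: a cluster must have fired recently (assumed)

def cluster_events(events):
    """events: iterable of (x, y, t), time-ordered. Returns list of clusters."""
    clusters = []  # each: {"cx", "cy", "last_t", "n"}
    for x, y, t in events:
        best = None
        for c in clusters:
            if t - c["last_t"] > TEMPORAL_WINDOW:
                continue  # cluster inactive: scene dynamics moved on
            if math.hypot(x - c["cx"], y - c["cy"]) <= SPATIAL_RADIUS:
                best = c
                break
        if best is None:
            clusters.append({"cx": x, "cy": y, "last_t": t, "n": 1})
        else:
            # running mean keeps the centre on the moving object
            best["n"] += 1
            best["cx"] += (x - best["cx"]) / best["n"]
            best["cy"] += (y - best["cy"]) / best["n"]
            best["last_t"] = t
    return clusters

events = [(10, 10, 0.001), (12, 11, 0.002), (100, 80, 0.003), (11, 12, 0.004)]
print(len(cluster_events(events)), "clusters")  # two moving objects -> 2
```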


International Symposium on Circuits and Systems | 2010

Dynamic stereo vision system for real-time tracking

Stephan Schraml; Ahmed Nabil Belbachir; Nenad Milosevic; Peter Schön

Biologically inspired dynamic vision sensors, introduced in 2002, asynchronously detect significant relative light-intensity changes in a scene and output them in the form of an Address-Event Representation (AER). These vision sensors capture dynamic discontinuities on-chip, yielding a reduced data volume compared to intensity images. They therefore support detection, segmentation and tracking of moving objects in the Address-Event space by exploiting the events generated in reaction to intensity changes resulting from the scene dynamics. Object tracking with monocular dynamic vision sensors has been demonstrated and reported in previous publications. This paper presents and demonstrates a tracking algorithm using 3D sensing technology based on a stereo dynamic vision sensor. The system is capable of detecting and tracking persons within a 4 m range at an effective depth-map refresh rate of up to 200 per second. The 3D system was evaluated for people tracking, and the tests showed that up to 60k Address-Events/s can be processed for real-time tracking.
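A generic flavour of the tracking step can be sketched as nearest-neighbour association of 3D detections across successive depth-map updates. This is a standard technique used as a stand-in here, not the paper's algorithm; the gating distance is an assumption.

```python
# Sketch of nearest-neighbour track association over successive 3D detections;
# the data association here is a generic technique, not the paper's algorithm.
import math

MAX_JUMP_M = 0.5  # assumed gating distance between updates

def update_tracks(tracks, detections):
    """tracks: {track_id: (x, y, z)}; detections: list of (x, y, z)."""
    next_id = max(tracks, default=0) + 1
    assigned = {}
    for det in detections:
        best_id, best_d = None, MAX_JUMP_M
        for tid, pos in tracks.items():
            d = math.dist(pos, det)
            if d < best_d and tid not in assigned:
                best_id, best_d = tid, d
        if best_id is None:          # no nearby track: start a new one
            best_id, next_id = next_id, next_id + 1
        assigned[best_id] = det
    return assigned

tracks = {1: (1.0, 0.0, 2.0)}
tracks = update_tracks(tracks, [(1.1, 0.0, 2.1), (3.0, 0.5, 1.0)])
print(tracks)  # track 1 moved slightly; a second person entered the scene
```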


International Symposium on Circuits and Systems | 2012

CARE: A dynamic stereo vision sensor system for fall detection

Ahmed Nabil Belbachir; Martin Litzenberger; Stephan Schraml; Michael Hofstätter; Daniel Bauer; Peter Schön; M. Humenberger; C. Sulzbachner; T. Lunden; M. Merne

This paper presents a recently developed dynamic stereo vision sensor system and its application to fall detection for the safety of the elderly at home. The system consists of (1) two optical detector chips with 304×240 event-driven pixels that are sensitive only to relative light-intensity changes, (2) an FPGA for interfacing the detectors, early data processing, and stereo matching for depth-map reconstruction, (3) a digital signal processor for interpreting the sensor data in real time for fall recognition, and (4) a wireless communication module for instantly alerting care institutions. The system was designed for incident detection in the private homes of elderly people to foster safety and security. From the application's point of view, the two main advantages over existing wearable systems are: (a) a stationary installation has better acceptance for independent living than permanently worn devices, and (b) privacy is systematically ensured, since the vision detector does not produce real images as classic video sensors do. The system currently processes about 300 kevents per second. It was evaluated using 500 fall cases acquired with a stuntman, with more than 90% positive detections reported. We will give a live demonstration of the sensor system and its capabilities during ISCAS 2012.
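A toy skeleton of the pipeline's final stages might look as follows; the drop-in-height rule and its thresholds are placeholders for illustration only, not the system's actual classifier.

```python
# Toy end-to-end skeleton of the described pipeline stages (detector -> stereo
# depth -> fall classifier -> alert). The drop-in-height rule is an assumed
# placeholder, not the system's actual fall-recognition method.
def fall_suspected(prev_height_m, curr_height_m, dt_s,
                   drop_speed_threshold=1.5):   # m/s, assumed value
    """Flag a fall when the tracked person's centroid height drops fast."""
    return (prev_height_m - curr_height_m) / dt_s > drop_speed_threshold

def alert(message):
    print("ALERT to care institution:", message)  # stands in for the radio link

prev_h = 1.4                      # standing centroid height (m)
curr_h = 0.4                      # centroid height after the event (m)
if fall_suspected(prev_h, curr_h, dt_s=0.5):
    alert("possible fall detected")
```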


IEEE Transactions on Industrial Electronics | 2011

High-Speed Embedded-Object Analysis Using a Dual-Line Timed-Address-Event Temporal-Contrast Vision Sensor

Ahmed Nabil Belbachir; Michael Hofstätter; Martin Litzenberger; Peter Schön

This paper presents a neuromorphic dual-line vision sensor and signal-processing concepts for object recognition and classification. The system performs ultrahigh-speed machine vision with a compact, low-cost embedded-processing architecture. The main innovations of this paper are efficient edge extraction of moving objects by the vision sensor at the pixel level and a novel concept for real-time embedded vision processing based on address-event data. The proposed system exploits the very high temporal resolution and the sparse visual-information representation of the event-based vision sensor. The 2 × 256-pixel dual-line temporal-contrast vision sensor asynchronously responds to relative illumination-intensity changes and consequently extracts the contours of moving objects. The paper shows that the data volume is independent of object velocity and evaluates the data quality for object velocities of up to 40 m/s (equivalent to up to 6.25 m/s on the sensor's focal plane). Subsequently, an embedded-processing concept is presented for real-time extraction of object contours and for object recognition. Finally, the influence of object velocity on high-performance embedded computer vision is discussed.
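The time-to-space reconstruction that such a line sensor enables can be sketched as follows: each event's timestamp, combined with the object's speed, indexes a column of a 2D silhouette. The event format and scale factors are illustrative assumptions.

```python
# Sketch of turning timed line-sensor events into a 2D silhouette: the second
# image axis is reconstructed from event timestamps and object speed. Event
# format and scale factors are assumptions for illustration.
import numpy as np

PIXEL_PITCH_M = 0.01  # assumed size of one reconstructed column in metres

def events_to_silhouette(events, speed_mps, n_pixels=256):
    """events: list of (pixel, t_seconds) from one 256-pixel line."""
    t0 = min(t for _, t in events)
    n_cols = 1 + max(int((t - t0) * speed_mps / PIXEL_PITCH_M)
                     for _, t in events)
    img = np.zeros((n_pixels, n_cols), dtype=np.uint8)
    for px, t in events:
        col = int((t - t0) * speed_mps / PIXEL_PITCH_M)
        img[px, col] = 1          # event fired: part of the object's contour
    return img

# A diagonal edge sweeping past the line at 10 m/s
events = [(p, 0.001 * p) for p in range(0, 256, 8)]
print(events_to_silhouette(events, speed_mps=10.0).shape)
```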


International Conference on Distributed Smart Cameras | 2007

Embedded Smart Camera for High Speed Vision

Martin Litzenberger; Ahmed Nabil Belbachir; Peter Schön; Christoph Posch

The architecture and prototype applications of an embedded vision system containing a neuromorphic temporal contrast vision sensor and a DSP are presented. The asynchronous vision sensor completely suppresses image-data redundancy and encodes visual information as sparse address-event representation (AER) data. Due to the efficient data preprocessing on the focal plane, the sensor delivers high temporal resolution data at a low data rate. Hence, a compact embedded vision system using a low-cost, low-power digital signal processor can be realized. The one-millisecond timestamp resolution of the AER data stream allows the acquisition and processing of motion trajectories of fast-moving objects in the visual scene. Various post-processing algorithms, such as object tracking, vehicle speed measurement and object classification, have been implemented on the presented embedded platform. The system's low data-rate output, low-power operation and Ethernet connectivity make it ideal for use in distributed sensor networks. Results from traffic-monitoring and object-tracking applications are presented.
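One common way to turn such timestamped AER data into motion trajectories is to compute an event centroid per time slice. The sketch below uses illustrative data and is not the platform's actual algorithm.

```python
# Sketch: recovering a motion trajectory from timestamped AER events by
# computing an event centroid per 1 ms time slice (a common technique with
# such data; details here are illustrative, not the paper's implementation).
import numpy as np

def centroid_trajectory(events, slice_ms=1.0):
    """events: array of (x, y, t_ms). Returns (t_slice, cx, cy) per slice."""
    events = np.asarray(events, dtype=float)
    slices = np.floor(events[:, 2] / slice_ms).astype(int)
    trajectory = []
    for s in np.unique(slices):
        pts = events[slices == s]
        trajectory.append((s * slice_ms, pts[:, 0].mean(), pts[:, 1].mean()))
    return trajectory

# Object moving right at ~1 px/ms, three events per millisecond
events = [(t + dx, 20, t) for t in range(5) for dx in (-1, 0, 1)]
for t, cx, cy in centroid_trajectory(events):
    print(f"t={t:.0f} ms  centroid=({cx:.1f}, {cy:.1f})")
```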


IEEE Industry Applications Society Annual Meeting | 2005

An automatic optical inspection system for the diagnosis of printed circuits based on neural networks

Ahmed Nabil Belbachir; Mario Lera; Alessandra Fanni; Augusto Montisci

The aim of this work is to define a procedure for developing diagnostic systems for printed circuit boards based on automated optical inspection, with low cost and easy adaptability to different features. A complete system to detect mounting defects in the circuits is presented. A low-cost, high-accuracy image acquisition system has been designed to fit this application. The resulting images are then processed using the wavelet transform and neural networks, achieving low computational cost with acceptable precision. The wavelet space provides a compact support for efficient feature extraction with the localization property. The proposed solution is demonstrated on several defects in different kinds of circuits.
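To illustrate the flavour of the feature-extraction stage, here is a single-level 2D Haar transform in plain NumPy followed by a nearest-centroid check; both are simplified stand-ins for the paper's wavelet and neural-network stages.

```python
# Illustrative feature-extraction step: a single-level 2D Haar transform as a
# compact descriptor, followed by a nearest-centroid check. Both steps are
# simplified stand-ins for the paper's wavelet/neural-network pipeline.
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform; image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # average adjacent rows
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse approximation
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # high-high: fine detail
    return ll, hh

def is_defective(img, good_centroid, threshold=0.5):
    ll, _ = haar2d(img)
    feature = ll.ravel()                      # compact wavelet-domain feature
    return np.linalg.norm(feature - good_centroid) > threshold  # assumed rule

good = np.ones((8, 8))
good_centroid = haar2d(good)[0].ravel()
bad = good.copy(); bad[2:4, 2:4] = 0.0        # simulated missing component
print(is_defective(good, good_centroid), is_defective(bad, good_centroid))
```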


International Conference on Electronics, Circuits, and Systems | 2006

Multiple Input Digital Arbiter with Timestamp Assignment for Asynchronous Sensor Arrays

Michael Hofstätter; Ahmed Nabil Belbachir; Ernst Bodenstorfer; Peter Schön

This paper introduces a novel concept for arbitrating the access of multiple asynchronous data sources to a shared communication bus while adding a timestamp, representing high-precision temporal information, to the sensor information. The principle is based upon an arbiter that serves uncorrelated inputs and generates a stream of data packets. Each data packet contains the address identifying the data source, the data from that source, and a timestamp value corresponding to the occurrence of the bus request (event) generated by the data source. To enhance adaptability to particular applications, the time resolution can be varied. The proposed concept has the advantage of delivering a sorted output data stream in which nearly concurrent events are labeled with the same timestamp, which is very advantageous for subsequent data processing. Furthermore, this arbitration method is very efficient, as it enables utilization of the maximum output transfer rate for a given clock frequency. The concept is intended for use in asynchronous vision chips and is demonstrated using an asynchronous vision chip containing 512 autonomous optical sensor elements.
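The arbitration behaviour described above can be simulated in a few lines: events from uncorrelated sources are merged into one time-sorted packet stream whose timestamps are quantised to a configurable resolution. The packet fields follow the description; the tick value and data are assumptions.

```python
# Simulation sketch of the arbitration idea: merge events from uncorrelated
# sources into one time-sorted packet stream, quantising timestamps to a
# configurable resolution. Packet fields mirror the description (address,
# data, timestamp); the concrete values are assumptions.
import heapq

TICK_US = 10  # configurable timestamp resolution (assumed value)

def arbitrate(sources):
    """sources: {address: [(t_us, data), ...]}. Yields time-sorted packets."""
    heap = [(t, addr, data) for addr, evs in sources.items()
            for t, data in evs]
    heapq.heapify(heap)
    while heap:
        t, addr, data = heapq.heappop(heap)
        yield {"address": addr, "data": data, "timestamp": t // TICK_US}

sources = {0: [(15, "a"), (120, "b")], 1: [(18, "c")], 2: [(119, "d")]}
for packet in arbitrate(sources):
    print(packet)
# The events at 15 us and 18 us share timestamp 1: nearly concurrent events
# end up labelled identically, as in the paper's sorted output stream.
```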


International Conference on Computer Vision | 2013

Asynchronous Stereo Vision for Event-Driven Dynamic Stereo Sensor Using an Adaptive Cooperative Approach

Ewa Piatkowska; Ahmed Nabil Belbachir; Margrit Gelautz

This paper presents an adaptive cooperative approach to 3D reconstruction tailored for a bio-inspired depth camera: the stereo dynamic vision sensor (DVS). The DVS consists of self-spiking pixels that asynchronously generate events upon relative light-intensity changes. These sensors have the advantage of simultaneously allowing high temporal resolution (better than 10 μs) and a wide dynamic range (>120 dB) with a sparse data representation, which is not possible with frame-based cameras. To exploit the potential of the DVS and benefit from its features, depth calculation should take into account the spatiotemporal and asynchronous nature of the data provided by the sensor. This work develops an appropriate approach for an asynchronous, event-driven stereo algorithm. We propose a modification of the cooperative network in which the history of recent activity in the scene is stored to serve as spatiotemporal context in the disparity calculation for each incoming event. The network evolves continuously in time as events are generated. In our work, not only is the spatiotemporal aspect of the data preserved, but the matching is also performed asynchronously. The experimental results show that the proposed approach is well suited to DVS data and can be successfully used for our efficient passive depth camera.
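A heavily reduced sketch of event-driven stereo matching is shown below: an incoming left event is matched against the most temporally coherent recent right event on the same row, with recent activity stored as context. The cooperative support computation of the paper is simplified away, and all constants are assumptions.

```python
# Very reduced sketch of event-driven stereo: an incoming left event is
# matched to the most temporally coherent recent right event on the same row,
# using stored recent activity as context. Not the paper's cooperative
# network; constants are assumptions.
import numpy as np

WIDTH, MAX_DISP, MAX_DT = 128, 32, 0.002  # px, px, s (assumed)

class EventStereo:
    def __init__(self, height):
        # timestamp of the most recent right-camera event per pixel
        self.right_last_t = np.full((height, WIDTH), -np.inf)

    def right_event(self, x, y, t):
        self.right_last_t[y, x] = t

    def left_event(self, x, y, t):
        """Return disparity for a left event, or None if no support."""
        best_d, best_dt = None, MAX_DT
        for d in range(0, min(MAX_DISP, x + 1)):
            dt = abs(t - self.right_last_t[y, x - d])
            if dt < best_dt:
                best_d, best_dt = d, dt
        return best_d

stereo = EventStereo(height=96)
stereo.right_event(x=40, y=10, t=1.0000)
print(stereo.left_event(x=48, y=10, t=1.0005))  # -> disparity 8
```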


Computer Vision and Pattern Recognition | 2012

Embedded fall detection with a neural network and bio-inspired stereo vision

Martin Humenberger; Stephan Schraml; Christoph Sulzbachner; Ahmed Nabil Belbachir; Ágoston Srp; Ferenc Vajda

In this paper, we present a bio-inspired, purely passive, embedded fall detection system for the safety of the elderly at home. Bio-inspired means the use of two optical detector chips with event-driven pixels that are sensitive only to relative light-intensity changes. The two chips are used in a stereo configuration, which enables a 3D representation of the observed area via a stereo matching technique. In contrast to conventional digital cameras, this image sensor delivers asynchronous events instead of synchronous intensity or color images; the privacy issue is thus systematically solved. Another advantage is that stationary installed fall detection systems have better acceptance for independent living than permanently worn devices. The fall detection is done by a trained neural network: first, a meaningful feature vector is calculated from the point clouds, then the neural network classifies the actual event as fall or non-fall. All processing is done on an embedded device consisting of an FPGA for stereo matching and a DSP for the neural network calculation, achieving several fall evaluations per second. The evaluation showed that our fall detection system achieves a fall detection rate of more than 96% with false positives below 5% on our prerecorded dataset of 679 fall scenarios.
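The two-stage classification idea (feature vector, then network) can be sketched as follows; the chosen features and the single hand-weighted logistic unit are illustrative assumptions, not the paper's trained network.

```python
# Sketch of the two-stage classification idea: condense a 3D point cloud into
# a small feature vector, then feed it to a neural network. The features and
# the hand-weighted logistic unit below are illustrative assumptions.
import numpy as np

def features(points, prev_centroid_z, dt):
    """points: (N, 3) array of x, y, z; returns a 3-element feature vector."""
    z = points[:, 2]
    centroid_z = z.mean()
    drop_speed = (prev_centroid_z - centroid_z) / dt   # m/s downward
    spread_z = z.std()                                 # lying flat -> small
    return np.array([centroid_z, drop_speed, spread_z])

def fall_score(f, w=np.array([-1.0, 2.0, -1.5]), b=0.5):
    return 1.0 / (1.0 + np.exp(-(f @ w + b)))          # logistic output

# Synthetic cloud of a person near the floor after a rapid centroid drop
cloud = np.random.normal([0, 0, 0.3], [0.4, 0.2, 0.1], size=(200, 3))
f = features(cloud, prev_centroid_z=1.3, dt=0.6)
print("fall" if fall_score(f) > 0.5 else "no fall")
```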

Collaboration


Dive into Ahmed Nabil Belbachir's collaborations.

Top Co-Authors

Stephan Schraml (Austrian Institute of Technology)
Peter Schön (Austrian Institute of Technology)
Michael Hofstätter (Austrian Institute of Technology)
Martin Litzenberger (Austrian Institute of Technology)
Peter Michael Goebel (Vienna University of Technology)
Bernhard Kohn (Austrian Institute of Technology)
Manfred Mayerhofer (Austrian Institute of Technology)