Publications


Featured research published by Ryad Benosman.


IEEE Transactions on Neural Networks | 2012

Asynchronous Event-Based Binocular Stereo Matching

Paul Rogister; Ryad Benosman; Sio-Hoi Ieng; Patrick Lichtsteiner; Tobi Delbruck

We present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas. Unlike conventional frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of the biological retina. Our algorithm uses the timing information carried by this representation to address the stereo-matching problem on moving objects. Using the high temporal resolution of the data stream acquired by the dynamic vision sensor, we show that matching on the timing of the visual events, combined with geometric constraints using the distance to the epipolar lines, provides a new solution to the real-time computation of 3-D objects. The proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor. This brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events.
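The core mechanism, pairing events across the two retinas by timestamp coincidence and pruning candidates by their distance to the epipolar line, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the event tuple layout (t, x, y, polarity), the precomputed fundamental matrix F, and the thresholds are all assumptions.

```python
import numpy as np

def match_events(left_events, right_events, F, dt_max=1e-3, epi_max=1.5):
    """Pair left/right retina events by timestamp coincidence, keeping
    only candidates close to the epipolar line of the left event.
    Events are (t, x, y, polarity) tuples; F is the fundamental matrix."""
    matches = []
    for tl, xl, yl, pl in left_events:
        line = F @ np.array([xl, yl, 1.0])   # epipolar line in right image
        best, best_d = None, np.inf
        for tr, xr, yr, pr in right_events:
            # temporal cue: same polarity and near-simultaneous
            if pr != pl or abs(tr - tl) > dt_max:
                continue
            # geometric cue: point-to-line distance in the right image
            d = abs(line @ np.array([xr, yr, 1.0])) / np.hypot(line[0], line[1])
            if d < epi_max and d < best_d:
                best, best_d = (xr, yr), d
        if best is not None:
            matches.append(((xl, yl), best))
    return matches
```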


IEEE Transactions on Robotics | 2012

Asynchronous Event-Based Visual Shape Tracking for Stable Haptic Feedback in Microrobotics

Zhenjiang Ni; Aude Bolopion; Joël Agnus; Ryad Benosman; Stéphane Régnier

Micromanipulation systems have recently been receiving increased attention. Teleoperated or automated micromanipulation is a challenging task due to the need for high-frequency position or force feedback to guarantee stability. In addition, the integration of sensors within micromanipulation platforms is complex. Vision is a commonly used solution for sensing; unfortunately, the update rate of the frame-based acquisition process of currently available cameras cannot, at reasonable cost, ensure stable automated or teleoperated control at the microscale, where low inertia produces highly dynamic phenomena. This paper presents a novel vision-based microrobotic system combining an asynchronous address-event-representation silicon retina and a conventional frame-based camera. Unlike frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of a biological retina, enabling high update rates. This paper introduces an event-based iterative closest point algorithm to track a microgripper's position at a frequency of 4 kHz. The temporal precision of the asynchronous silicon retina is used to provide haptic feedback that assists users during manipulation tasks, whereas the frame-based camera is used to retrieve the position of the object to be manipulated. This paper presents the results of an experiment in which a sphere approximately 50 μm in diameter is teleoperated with a piezoelectric gripper in a pick-and-place task.
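A rough sketch of the event-driven ICP idea: each small batch of events is matched to the nearest points of the gripper's shape model, a closed-form 2D rigid alignment is computed, and the pose is updated with damping. The batch formulation, all names, and the damping factor are illustrative assumptions, not the paper's code.

```python
import numpy as np

def icp_step(model_pts, events_xy, pose, damping=0.1):
    """One event-batch ICP step. model_pts: (N, 2) gripper contour model;
    events_xy: (M, 2) recent event coordinates; pose: (theta, tx, ty)."""
    theta, tx, ty = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    world = model_pts @ R.T + np.array([tx, ty])   # model under current pose
    # nearest-neighbour correspondences: each event picks a model point
    d2 = ((events_xy[:, None, :] - world[None, :, :]) ** 2).sum(-1)
    src = world[d2.argmin(axis=1)]
    # closed-form rigid alignment (Kabsch) of matched points onto events
    mu_s, mu_e = src.mean(0), events_xy.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (events_xy - mu_e))
    dR = Vt.T @ U.T
    if np.linalg.det(dR) < 0:                      # guard against reflections
        Vt[-1] *= -1
        dR = Vt.T @ U.T
    dtheta = np.arctan2(dR[1, 0], dR[0, 0])
    dt = mu_e - dR @ mu_s
    # damped update keeps the high-rate tracking loop stable
    return (theta + damping * dtheta, tx + damping * dt[0], ty + damping * dt[1])
```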


Neural Networks | 2012

Asynchronous frameless event-based optical flow

Ryad Benosman; Sio-Hoi Ieng; Charles Clercq; Chiara Bartolozzi; Mandyam V. Srinivasan

This paper introduces a process to compute optical flow using an asynchronous event-based retina at high speed and low computational load. A new generation of artificial vision sensors has now started to rely on biologically inspired designs for light acquisition. Biological retinas, and their artificial counterparts, are totally asynchronous and data driven, and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework for processing visual data using asynchronous event-based acquisition, providing a method for the evaluation of optical flow. The paper shows that current limitations of optical flow computation can be overcome by using event-based visual acquisition, where high data sparseness and high temporal resolution permit the computation of optical flow with microsecond accuracy and at very low computational cost.
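One common way to make event-based flow concrete is to fit a local plane to the timestamps of the most recent event at each pixel: the plane's slopes are inverse velocities. The sketch below is illustrative of this family of methods and not necessarily the paper's exact algorithm; ts_map (a per-pixel latest-timestamp surface), the window size, and the degenerate-slope guard are assumptions.

```python
import numpy as np

def event_flow(ts_map, x, y, win=3):
    """Estimate optical flow at integer pixel (x, y) by fitting a plane
    t = a*x + b*y + c to the latest-event timestamps in a local window.
    The slopes (a, b) are inverse velocities, so v = (1/a, 1/b),
    up to the usual aperture ambiguity."""
    ys, xs = np.mgrid[y - win:y + win + 1, x - win:x + win + 1]
    t = ts_map[ys, xs].ravel()
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(t.size)])
    (a, b, _), *_ = np.linalg.lstsq(A, t, rcond=None)
    vx = 1.0 / a if abs(a) > 1e-9 else 0.0   # guard flat (no-motion) fits
    vy = 1.0 / b if abs(b) > 1e-9 else 0.0
    return vx, vy
```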


Proceedings of the IEEE Workshop on Omnidirectional Vision 2002. Held in conjunction with ECCV'02 | 2002

Calibration of panoramic catadioptric sensors made easier

Jonathan Fabrizio; Jean-Philippe Tarel; Ryad Benosman

We present a new method to calibrate panoramic catadioptric sensors. While many methods exist for planar cameras, this is not the case for panoramic catadioptric sensors. The aim of the proposed calibration is not to estimate the mirror surface parameters, which can be known very accurately, but to estimate the intrinsic parameters of the CCD camera and the pose of the CCD camera with respect to the mirror. Unless a telecentric lens is used, this pose must be estimated, particularly for sensors that have a unique effective viewpoint. The developed method is based on the original and simple idea that the mirror's external and internal boundaries can be used as a 3D calibration pattern. The improvement introduced by our approach is demonstrated in synthetic experiments with incorrectly aligned sensors, and validation tests on real images are described. The proposed technique opens new ways toward better-designed catadioptric sensors in which self-calibration can be performed easily, in real time, and in a completely autonomous way. In particular, this should make it possible to avoid the vibration-induced errors one can observe when using catadioptric sensors in practical situations.
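The central trick, using the mirror's circular outer and inner boundaries (which project to conics in the image) as a calibration pattern, begins with a least-squares conic fit to detected boundary points. A minimal sketch of that first step is below; the subsequent recovery of intrinsics and mirror pose from the fitted conics is omitted, and the function is an illustration rather than the paper's procedure.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to 2D points (the image of a mirror boundary circle). Returns the six
    conic coefficients as the smallest right singular vector, which
    minimizes the algebraic fitting error under unit norm."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]
```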


American Control Conference | 2006

Stabilization and location of a four rotor helicopter applying vision

Hugo Romero; Ryad Benosman; Rogelio Lozano

In this paper, we deal with the problem of local positioning and orientation of a rotorcraft in indoor flight using a simple vision system. We apply two different approaches to obtain a navigation system for the flying machine: the first is based on the perspective of n points, and the second follows the plane-based pose technique. Our aim is to obtain good estimates of variables that are difficult to measure with conventional GPS and inertial sensors in urban or indoor environments. We propose a method to measure translational speed as well as position and orientation in a local frame.
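The perspective-of-n-points step can be reproduced with a standard PnP solver such as OpenCV's cv2.solvePnP; the marker layout, intrinsics, and detected pixel coordinates below are placeholder assumptions, not values from the paper.

```python
import numpy as np
import cv2

# Known 3D marker positions on the rotorcraft or landing pad (placeholders)
object_pts = np.array([[0, 0, 0], [0.3, 0, 0],
                       [0.3, 0.3, 0], [0, 0.3, 0]], dtype=np.float32)
# Their detected 2D projections in the camera image (placeholders)
image_pts = np.array([[320, 240], [400, 238],
                      [402, 318], [322, 320]], dtype=np.float32)
# Assumed pinhole intrinsics (focal length, principal point)
K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
# rvec/tvec give the camera orientation and position in the marker frame;
# differencing tvec across frames yields an estimate of translational speed.
```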


British Machine Vision Conference | 2014

Simultaneous mosaicing and tracking with an event camera

Hanme Kim; Ankur Handa; Ryad Benosman; Sio-Hoi Ieng; Andrew J. Davison

An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign, and precise timing, indicating when individual pixels record a threshold log-intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video at a vastly reduced bitrate, with the huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantage of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to accurately track camera rotation while building a persistent, high-quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering.
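A greatly simplified sketch of the per-event mosaic-update half of the pipeline is below. The rotation-tracking half is assumed given, and the paper's probabilistic gradient filtering is replaced by direct accumulation of contrast steps; all names, the ±1 polarity convention, and the contrast constant C are assumptions.

```python
import numpy as np

def update_mosaic(logI, R, K_inv, ev, C=0.2):
    """Per-event mosaic update: back-project the event pixel through the
    current rotation estimate R onto a spherical panorama and apply the
    signed log-intensity step the event reports. logI: (H, W) mosaic;
    ev: (t, x, y, pol) with pol in {-1, +1}; K_inv: inverse intrinsics."""
    t, x, y, pol = ev
    ray = R @ K_inv @ np.array([x, y, 1.0])      # viewing ray in world frame
    az = np.arctan2(ray[0], ray[2])              # panorama azimuth
    el = np.arcsin(ray[1] / np.linalg.norm(ray))  # panorama elevation
    H, W = logI.shape
    u = int((az / (2 * np.pi) + 0.5) * (W - 1))
    v = int((el / np.pi + 0.5) * (H - 1))
    logI[v, u] += pol * C    # each event signals a +/-C log-intensity change
    return logI
```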


Journal of Neural Engineering | 2011

Three-dimensional electrode arrays for retinal prostheses: modeling, geometry optimization and experimental validation

Milan Djilas; C. Olès; Henri Lorach; Amel Bendali; Julie Degardin; Elisabeth Dubus; G. Lissorgues-Bazin; Lionel Rousseau; Ryad Benosman; Sio-Hoi Ieng; Sébastien Joucla; B. Yvert; P. Bergonzo; José-Alain Sahel; Serge Picaud

Three-dimensional electrode geometries were proposed to increase the spatial resolution of retinal prostheses aiming at restoring vision in blind patients. We report here the results of a study in which finite-element modeling was used to design and optimize three-dimensional electrode geometries. The proposed implants exhibit an array of well-like shapes containing stimulating electrodes at their bottom, while the common return grid electrode surrounds each well on the implant's top surface. Extending the stimulating electrodes and/or the grid return electrode onto the walls of the cavities was also considered. The goal of the optimization was to find the model parameters that maximize the focalization of electrical stimulation, and therefore the spatial resolution of the electrode array. The results showed that electrode geometries with a well depth of 30 µm yield a tenfold increase in selectivity compared to planar structures of similar electrode dimensions. Electrode array prototypes were microfabricated and implanted in dystrophic rats to determine whether the tissue would behave as hypothesized in the model. Histological examination showed that retinal bipolar cells integrate into the electrode wells, creating isolated cell clusters. The modeling analysis showed that the stimulation current is confined within the electrode well, leading to selective electrical stimulation of the individual bipolar cell clusters and thereby to electrode arrays with higher spatial resolution.
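The optimization loop itself is conceptually simple: sweep the geometry parameter, solve the field for each candidate, and keep the value that maximizes a selectivity metric. The schematic sketch below uses a hypothetical solve_field routine standing in for the finite-element solver; the metric and cluster indices are likewise illustrative.

```python
import numpy as np

def selectivity(field, target_idx, others_idx):
    """Ratio of stimulation at the target cell cluster to the strongest
    off-target response: higher means better-focalized stimulation."""
    return field[target_idx] / max(field[others_idx].max(), 1e-12)

def optimize_well_depth(depths_um, solve_field):
    """Sweep candidate well depths; solve_field is a hypothetical
    stand-in for the FEM solver, returning the stimulation field
    sampled at each bipolar-cell cluster."""
    best_depth, best_s = None, -np.inf
    for d in depths_um:
        field = solve_field(well_depth_um=d)          # hypothetical FEM call
        s = selectivity(field, target_idx=0,
                        others_idx=np.arange(1, len(field)))
        if s > best_s:
            best_depth, best_s = d, s
    return best_depth, best_s
```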


IEEE Transactions on Neural Networks | 2011

Asynchronous Event-Based Hebbian Epipolar Geometry

Ryad Benosman; Sio-Hoi Ieng; Paul Rogister; Christoph Posch

Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult: they can no longer be defined because of the complexity of the sensor geometry. This paper shows that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based, rather than frame-based, vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a new, general concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix first for classic perspective vision and then for more general cameras. Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships. Finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor.
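Once pixels are paired across cameras by their temporal activation, the fundamental matrix follows from the classical normalized 8-point algorithm. The temporal pairing itself is assumed already done in the sketch below, which is standard textbook material rather than the paper's Hebbian, event-based derivation.

```python
import numpy as np

def eight_point(pts1, pts2):
    """Estimate F from >= 8 temporally matched point pairs (each Nx2)
    with the normalized 8-point algorithm, so that x2^T F x1 = 0."""
    def normalize(p):
        c = p.mean(0)
        s = np.sqrt(2) / np.linalg.norm(p - c, axis=1).mean()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        ph = np.column_stack([p, np.ones(len(p))]) @ T.T
        return ph, T
    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # each correspondence contributes one row of the system A f = 0
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                   # enforce rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1                          # undo the normalization
```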


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

HFirst: A Temporal Approach to Object Recognition

Garrick Orchard; Cedric Meyer; Ralph Etienne-Cummings; Christoph Posch; Nitish V. Thakor; Ryad Benosman

This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal winner-take-all rather than the more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase the effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) on a previously published four-class card pip recognition task, and an accuracy of 84.9% ± 1.9% on a new, more difficult 36-class character recognition task.
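The temporal winner-take-all at the heart of the model reduces classification to a single comparison: within a decision window, the first output neuron to spike wins. A minimal sketch, with the spike representation as an assumption:

```python
def temporal_wta(spikes):
    """Temporal winner-take-all over one decision window.
    spikes: list of (timestamp, class_id) pairs; the earliest spike
    selects the class, and all later spikes are ignored, which plays
    the role of lateral inhibition."""
    if not spikes:
        return None
    _, winner = min(spikes)          # earliest timestamp wins
    return winner

# Usage: temporal_wta([(120e-6, 'K'), (95e-6, 'A'), (140e-6, 'Q')]) -> 'A'
```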


International Conference on Pattern Recognition | 1996

Multidirectional stereovision sensor, calibration and scenes reconstruction

Ryad Benosman; Thierry Manière; Jean Devars

The observation of an entire 3D space and the reconstruction of an observed unknown scene are of great interest in the field of robot vision. This paper presents a new omnidirectional device built especially for binocular peripheral vision. The architecture of the sensor is designed to simplify the computation considerably for real-time applications: the device needs no calculation of epipolar lines. This paper describes a new method and presents reconstructions of unknown scenes based on a dynamic time warping algorithm. The image-matching approach exploits the benefits of the architecture by calculating, in real time, the depth of the image slits at each angular position. The system described allows omnidirectional robot vision to be considered from a new, realistic, and robust perspective.
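The slit-matching step can be illustrated with textbook dynamic time warping between the two 1-D intensity slits captured at one angular position; the absolute-difference local cost used below is an assumption, not necessarily the paper's.

```python
import numpy as np

def dtw_match(slit_a, slit_b):
    """Dynamic time warping between two 1-D image slits. Returns the
    accumulated alignment cost and the warping path that pairs pixels
    across the slits (the pairs can then be triangulated for depth)."""
    n, m = len(slit_a), len(slit_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(slit_a[i - 1] - slit_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the minimal-cost alignment path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda ij: D[ij])
    return D[n, m], path[::-1]
```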

Collaboration


Dive into Ryad Benosman's collaborations.

Top Co-Authors

Christoph Posch, Austrian Institute of Technology
Jean Devars, Pierre-and-Marie-Curie University
Garrick Orchard, National University of Singapore
Chiara Bartolozzi, Istituto Italiano di Tecnologia
P. Bergonzo, University College London