Publications


Featured research published by Benoît Deville.


EURASIP Journal on Image and Video Processing | 2007

Transforming 3D coloured pixels into musical instrument notes for vision substitution applications

Guido Bologna; Benoît Deville; Thierry Pun; Michel Vinckenbosch

The goal of the See ColOr project is a noninvasive mobility aid for blind users that uses the auditory pathway to represent frontal image scenes in real time. We present and discuss two image processing methods investigated in this work: image simplification by means of segmentation, and guiding the focus of attention through the computation of visual saliency. A mean shift segmentation technique gave the best results, but to meet real-time constraints we implemented a simpler image quantisation method based on the HSL colour system. More specifically, we developed two prototypes that transform HSL coloured pixels into spatialised classical instrument sounds lasting 300 ms. Hue is sonified by the timbre of a musical instrument, saturation is mapped to one of four possible notes, and luminosity is represented by a bass voice when the pixel is dark and by a singing voice when it is bright. The first prototype is devoted to static images on the computer screen, while the second is built on a stereoscopic camera that estimates depth by triangulation. In the audio encoding, distance to objects is quantised into four duration levels. Six participants, blindfolded with a dark cloth, were trained to associate colours with musical instruments and then asked to identify, in several pictures, objects with specific shapes and colours. To simplify the experimental protocol, we used a tactile tablet in place of the camera. Overall, colour was helpful for the interpretation of image scenes. Moreover, preliminary results with the second prototype, consisting of the recognition of coloured balloons, were very encouraging. Image processing techniques such as saliency computation could in the future accelerate the interpretation of sonified image scenes.
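
To make the colour-to-sound mapping concrete, here is a minimal Python sketch of the pixel sonification rule described above. The abstract fixes only the structure of the mapping (hue → timbre, saturation → one of four notes, luminosity → bass versus singing voice, distance → four duration levels up to 300 ms); the specific instruments, notes, and duration values below are assumptions for illustration.

```python
import colorsys

# Illustrative palette and quantisation levels; the paper does not
# specify these exact values, so they are assumptions.
INSTRUMENTS = ["oboe", "viola", "trumpet", "clarinet", "flute", "piano"]
SATURATION_NOTES = ["C", "E", "G", "B"]   # one of four possible notes
DEPTH_DURATIONS_MS = [90, 150, 210, 300]  # four duration levels, 300 ms max

def sonify_pixel(r, g, b, depth_level):
    """Map an RGB pixel (0-255 channels) and a quantised depth level (0-3)
    to sound parameters: hue -> instrument timbre, saturation -> note,
    luminosity -> bass vs. singing voice, distance -> duration."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    instrument = INSTRUMENTS[min(int(h * len(INSTRUMENTS)), len(INSTRUMENTS) - 1)]
    note = SATURATION_NOTES[min(int(s * len(SATURATION_NOTES)), 3)]
    voice = "bass" if l < 0.5 else "singing voice"
    duration_ms = DEPTH_DURATIONS_MS[max(0, min(depth_level, 3))]
    return instrument, note, voice, duration_ms

# Example: a bright, saturated red pixel at the nearest depth level.
print(sonify_pixel(230, 40, 40, 0))  # ('oboe', 'B', 'singing voice', 90)
```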


Neurocomputing | 2011

Toward local and global perception modules for vision substitution

Guido Bologna; Benoît Deville; Juan Diego Gomez; Thierry Pun

Although retinal neural implants have progressed considerably, they raise a number of questions concerning user acceptance, risk of rejection, and cost. For the time being we advocate a low-cost approach based on the transmission of limited visual information through the auditory channel. The See ColOr mobility aid for visually impaired individuals transforms a small portion of a coloured video image into sound sources represented by spatialised musical instruments. The conversion of colours into sounds is achieved by quantisation of the HSL colour system. Our purpose is to provide blind people with the capability to perceive their environment in real time. The novelty of this work is the simultaneous sonification of colour and depth, the latter being coded by sound rhythm. The main drawback of our approach is that sonifying only a limited portion of the captured image entails limited perception. We therefore propose to extend the local perception module with a new global perception module that aims to provide the user with a clear picture of the characteristics of the entire scene. Finally, we present several experiments illustrating the local perception module: (1) detecting an open door in order to leave the office; (2) walking in a hallway and looking for a blue cabinet; (3) walking in a hallway and looking for a red tee shirt; (4) avoiding two red obstacles; (5) moving outside and avoiding a parked car. Videos of the experiments are available at http://www.youtube.com/guidobologna.
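
The coding of depth by sound rhythm is stated here without a numeric mapping. One plausible sketch, with all ranges assumed, is to let nearer objects repeat their instrument sound at a faster rate:

```python
def depth_to_rhythm_ms(depth_m, max_depth_m=5.0,
                       min_period_ms=120, max_period_ms=600):
    """Encode depth as rhythm: the inter-onset period of the repeated
    instrument sound grows with distance, so nearby obstacles sound more
    urgent. All numeric ranges here are assumptions, not the paper's."""
    d = max(0.0, min(depth_m, max_depth_m)) / max_depth_m  # normalise to [0, 1]
    return min_period_ms + d * (max_period_ms - min_period_ms)

# An object at 1 m repeats every 216 ms; one at 5 m or beyond every 600 ms.
print(depth_to_rhythm_ms(1.0), depth_to_rhythm_ms(5.0))
```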


International Symposium on Neural Networks | 2008

A perceptual interface for vision substitution in a color matching experiment

Guido Bologna; Benoît Deville; Michel Vinckenbosch; Thierry Pun

Several systems have been introduced for vision substitution through the auditory channel. One such system, presented here, is See ColOr, a dedicated interface that is part of a mobility aid for visually impaired people. It transforms a small portion of a colored video image into spatialized instrument sounds. The purpose of this work is to verify the hypothesis that sounds from musical instruments provide an alternative to vision for obtaining color information from the environment. We introduce an experiment in which several participants try to match pairs of colored socks by pointing a head-mounted camera and listening to the generated sounds. Our experiments demonstrated that blindfolded individuals were able to accurately match pairs of colored socks. The advantage of the See ColOr interface is that it gives the user prompt auditory feedback about the environment and its colors. Our perceptual auditory coding of pixel values opens the opportunity for more complex experiments related to vision tasks, such as perceiving the environment by interpreting its colors.


Human Machine Interaction | 2009

See ColOr: Seeing Colours with an Orchestra

Benoît Deville; Guido Bologna; Michel Vinckenbosch; Thierry Pun

The See ColOr interface transforms a small portion of a coloured video image into sound sources represented by spatialised musical instruments. The conversion of colours into sounds is achieved by quantisation of the HSL (Hue, Saturation and Luminosity) colour system. Our purpose is to provide visually impaired individuals with a capability to perceive their environment in real time. In this work we present the system's design principles and several experiments carried out with blindfolded participants. The goal of the first experiment was to identify the colours of the main features in static pictures in order to interpret the image scenes; participants found that colours helped to narrow down the possible image interpretations. Afterwards, two experiments based on a head-mounted camera were performed. The first pertains to object manipulation and is based on the pairing of coloured socks, while the second is related to outdoor navigation with the goal of following a coloured sinuous path painted on the ground. The socks experiment demonstrated that blindfolded individuals were able to accurately match pairs of coloured socks. The same participants successfully followed a red serpentine path painted on the ground for more than 80 meters. Finally, we propose an original approach for a real-time alerting system based on the detection of visually salient parts in videos. The particularity of our approach lies in the use of a new feature map constructed from the depth gradient. From the computed feature maps we infer conspicuity maps that indicate areas appreciably different from their surroundings. We then describe a specific distance function that takes into account both stereoscopic camera limitations and the user's choices. We also report how we automatically estimate the relative contribution of each conspicuity map, which enables the unsupervised determination of the final saliency map indicating the visual salience of every point in the image. We demonstrate that this additional depth-based feature map allows the system to detect salient regions with good accuracy in most situations, even in the presence of noisy disparity maps.
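
The abstract names a depth-gradient feature map, conspicuity maps, and an automatically estimated weight per map, but gives no formulas. The NumPy sketch below realises that pipeline with one classical choice of unsupervised weight (how much a map's global maximum stands out from its mean, in the style of Itti et al.); the paper's own estimator and its camera-aware distance function are not reproduced here.

```python
import numpy as np

def depth_gradient_feature(depth):
    """Feature map built from the depth gradient: strong depth
    discontinuities (object boundaries, obstacles) score high."""
    gy, gx = np.gradient(depth.astype(np.float64))
    return np.hypot(gx, gy)

def combine_conspicuity(maps):
    """Combine conspicuity maps into a single saliency map with
    unsupervised weights; the weighting heuristic is an assumption."""
    normed, weights = [], []
    for c in maps:
        c = (c - c.min()) / (np.ptp(c) + 1e-9)  # rescale to [0, 1]
        normed.append(c)
        weights.append((c.max() - c.mean()) ** 2)
    total = sum(weights) + 1e-9
    return sum((w / total) * c for w, c in zip(weights, normed))

# Example: a synthetic depth map with a near object in front of a far wall.
depth = np.full((64, 64), 4.0)
depth[20:40, 20:40] = 1.5
saliency = combine_conspicuity([depth_gradient_feature(depth)])
print(saliency.shape, float(saliency.max()))
```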


Conference on Computers and Accessibility | 2010

Detecting objects and obstacles for visually impaired individuals using visual saliency

Benoît Deville; Guido Bologna; Thierry Pun

In this demo, we present the detection module of the See ColOr (Seeing Colors with an Orchestra) mobility aid for visually impaired persons. This module points out areas that present either particular interest or potential threat. To detect objects and obstacles, we propose a bottom-up approach based on visual saliency: objects that would attract the visual attention of a sighted individual are pointed out by the system as areas of interest for the user. The device consists of a stereoscopic camera, a laptop, and standard headphones. Given the type of scene and/or scenario, specific feature maps are computed to indicate areas of interest in real time. This demonstration shows that the module indicates objects and obstacles as accurately as a system using all available feature maps.
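
The demo computes different feature maps depending on the scene or scenario, but the abstract does not list them. A hypothetical routing table illustrates the idea; all map names and groupings below are assumptions:

```python
# Hypothetical scenario-to-feature-map routing; names are illustrative only.
SCENARIO_MAPS = {
    "indoor_navigation": ["depth_gradient", "intensity_contrast"],
    "outdoor_navigation": ["depth_gradient", "colour_contrast", "motion"],
}

def feature_maps_for(scenario):
    """Return the feature maps to compute for a scenario, falling back to
    every known map when the scenario is unrecognised."""
    default = sorted({m for maps in SCENARIO_MAPS.values() for m in maps})
    return SCENARIO_MAPS.get(scenario, default)

print(feature_maps_for("indoor_navigation"))
```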


Neurocomputing | 2009

On the use of the auditory pathway to represent image scenes in real-time

Guido Bologna; Benoît Deville; Thierry Pun


International Conference on Computer Vision Theory and Applications | 2008

Depth-based detection of salient moving objects in sonified videos for blind users

Benoît Deville; Guido Bologna; Michel Vinckenbosch; Thierry Pun


Archive | 2006

Multimodal tools and interfaces for the intercommunication between visually impaired and "deaf and mute" people

Konstantinos Moustakas; Georgios Nikolakis; Dimitrios Tzovaras; Benoît Deville; Guido Bologna; Ioannis Marras; Jakov Pavlek


Archive | 2008

Pairing colored socks and following a red serpentine with sounds of musical instruments

Guido Bologna; Benoît Deville; Thierry Pun


Information and Communication Technologies and Accessibility | 2009

The multi-touch See ColOr interface

Guido Bologna; Stéphane Malandain; Benoît Deville; Thierry Pun

Collaboration


Benoît Deville's top co-authors and their affiliations.

Georgios Nikolakis

Aristotle University of Thessaloniki

Ioannis Marras

Aristotle University of Thessaloniki

Dimitrios Tzovaras

Information Technology Institute
