Publication


Featured research published by Guido Bologna.


EURASIP Journal on Image and Video Processing | 2007

Transforming 3D coloured pixels into musical instrument notes for vision substitution applications

Guido Bologna; Benoît Deville; Thierry Pun; Michel Vinckenbosch

The goal of the See ColOr project is to develop a noninvasive mobility aid for blind users that exploits the auditory pathway to represent frontal image scenes in real time. We present and discuss two image processing methods investigated in this work: image simplification by means of segmentation, and guiding the focus of attention through the computation of visual saliency. A mean shift segmentation technique gave the best results, but to meet real-time constraints we implemented a simpler image quantisation method based on the HSL colour system. More specifically, we developed two prototypes that transform HSL coloured pixels into spatialised classical instrument sounds lasting 300 ms. Hue is sonified by the timbre of a musical instrument, saturation by one of four possible notes, and luminosity is represented by a bass voice when the pixel is rather dark and by a singing voice when it is relatively bright. The first prototype is devoted to static images on the computer screen, while the second is built on a stereoscopic camera that estimates depth by triangulation; in the audio encoding, distance to objects is quantised into four duration levels. Six blindfolded participants were trained to associate colours with musical instruments and then asked to locate, in several pictures, objects with specific shapes and colours. To simplify the experimental protocol, we used a tactile tablet in place of the camera. Overall, colour was helpful for the interpretation of image scenes. Moreover, preliminary results with the second prototype, consisting of the recognition of coloured balloons, were very encouraging. Image processing techniques such as saliency computation could accelerate the interpretation of sonified image scenes in the future.
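The colour-to-sound mapping described in the abstract can be sketched as a small lookup function. The instrument list, note choices, and thresholds below are illustrative assumptions, not the project's actual parameters; only the structure of the mapping (hue to timbre, saturation to one of four notes, luminosity to bass vs. singing voice, depth to four duration levels) follows the text.

```python
# Hypothetical sketch of a See ColOr-style HSL-to-sound mapping.
# All instrument names, note choices, and thresholds are invented
# for illustration; the paper does not specify these exact values.

INSTRUMENTS = ["oboe", "viola", "pizzicato", "flute", "trumpet", "piano", "saxophone"]
NOTES = ["C", "E", "G", "B"]          # four possible notes driven by saturation
DURATIONS_MS = [90, 160, 230, 300]    # four duration levels driven by distance

def sonify_pixel(h, s, l, depth_m, max_depth_m=8.0):
    """Map one HSL pixel (h in [0, 360), s and l in [0, 1]) plus depth to a sound."""
    # Hue selects the instrument timbre.
    instrument = INSTRUMENTS[int(h / 360.0 * len(INSTRUMENTS)) % len(INSTRUMENTS)]
    # Saturation selects one of four notes.
    note = NOTES[min(int(s * len(NOTES)), len(NOTES) - 1)]
    # Dark pixels are voiced by bass, bright pixels by a singing voice.
    voice = "bass" if l < 0.5 else "singing voice"
    # Distance is quantised into four duration levels.
    level = min(int(depth_m / max_depth_m * len(DURATIONS_MS)), len(DURATIONS_MS) - 1)
    return instrument, note, voice, DURATIONS_MS[level]
```

For example, a saturated dark-red pixel far from the camera would be rendered as a long, low sound, while a desaturated bright pixel close by yields a short, high one.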


EURASIP Journal on Image and Video Processing | 2007

Image and video processing for visually handicapped people

Thierry Pun; Patrick Roth; Guido Bologna; Konstantinos Moustakas; Dimitrios Tzovaras

This paper reviews the state of the art in assistive devices for sight-handicapped people. It concentrates in particular on systems that use image and video processing to convert visual data into an alternate rendering modality appropriate for a blind user. Such alternate modalities can be auditory, haptic, or a combination of both. There is thus a need for modality conversion, from the visual modality to another; this is where image and video processing plays a crucial role. The possible alternate sensory channels are examined with the purpose of using them to present visual information to totally blind persons. Aids that either already exist or are still under development are then presented, distinguished according to their final output channel. Haptic encoding is the most frequently used, by means of either tactile or combined tactile/kinesthetic encoding of the visual data. Auditory encoding may lead to low-cost devices, but the high information loss incurred when transforming visual data into auditory form must be handled. Despite a higher technical complexity, combined audio/haptic encoding has the advantage of making use of all the user's available sensory channels.


Soft Computing | 2017

Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning

Guido Bologna; Yoichi Hayashi

Rule extraction from neural networks is a fervent research topic. Over the last 20 years many authors have presented techniques for extracting symbolic rules from Multi Layer Perceptrons (MLPs). Nevertheless, very few of these address ensembles of neural networks, and even fewer address networks trained by deep learning. On several datasets we performed rule extraction from ensembles of Discretized Interpretable Multi Layer Perceptrons (DIMLPs), and from DIMLPs trained by deep learning. The results obtained on the Thyroid and Wisconsin Breast Cancer datasets show that the predictive accuracy of the extracted rules compares very favourably with state-of-the-art results. Finally, in the last classification problem, digit recognition on MNIST, the generated rules can be viewed as discriminatory features in particular digit areas. Qualitatively, with respect to rule complexity in terms of the number of generated rules and the number of antecedents per rule, deep DIMLPs and DIMLPs trained by arcing give similar results on a binary classification problem involving digits 5 and 8. On the whole MNIST problem we showed that it is possible to determine the feature detectors created by the networks and that the complexity of the extracted rulesets can be well balanced between accuracy and interpretability.
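The symbolic rules this line of work extracts are propositional: each rule is a conjunction of threshold tests on input attributes mapped to a class, and an ordered ruleset classifies an example by the first rule that fires. A minimal sketch of that representation follows; the attribute indices, thresholds, and class names are invented for illustration and are not rules from the paper.

```python
# Illustrative sketch of propositional rules of the kind extracted from
# DIMLP ensembles: each rule pairs a list of threshold antecedents
# (attribute index, comparison, threshold) with a predicted class.
# These example rules and thresholds are hypothetical.

rules = [
    ([(0, "<=", 0.061), (17, ">", 0.0045)], "hypothyroid"),
    ([(0, ">", 0.061)], "normal"),
]

def classify(x, rules, default="normal"):
    """Return the class of the first rule whose antecedents all hold for x."""
    for antecedents, cls in rules:
        if all((x[i] <= t) if op == "<=" else (x[i] > t)
               for i, op, t in antecedents):
            return cls
    return default
```

Ruleset complexity in the abstract's sense is then simply the number of rules plus the number of antecedents per rule, which is what trades off against accuracy.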


Soft Computing | 2016

Recursive-Rule Extraction Algorithm With J48graft And Applications To Generating Credit Scores

Yoichi Hayashi; Yuki Tanaka; Tomohiro Takagi; Takamichi Saito; Hideaki Iiduka; Hiroaki Kikuchi; Guido Bologna

The purpose of this study was to obtain more concise rule extraction from the Recursive-Rule Extraction (Re-RX) algorithm by replacing the C4.5 program currently employed in Re-RX with the J48graft algorithm. Experiments were conducted to determine rules for six different two-class mixed datasets having discrete and continuous attributes and to compare the resulting accuracy, comprehensibility, and conciseness. On the CARD1, CARD2, CARD3, German, Bene1 and Bene2 datasets, Re-RX with J48graft provided more concise rules than the original Re-RX algorithm. The use of Re-RX with J48graft resulted in 43.2%, 37% and 21% reductions in the number of rules for the German, Bene1 and Bene2 datasets, respectively, compared with Re-RX. Furthermore, Re-RX with J48graft showed 8.87% better accuracy than the Re-RX algorithm on the German dataset. These results confirm that applying Re-RX in conjunction with J48graft can facilitate migration from existing data systems toward new, concise analytic systems and Big Data applications.


international work-conference on the interplay between natural and artificial computation | 2009

Blind Navigation along a Sinuous Path by Means of the See ColOr Interface

Guido Bologna; Benoît Deville; Thierry Pun

The See ColOr interface transforms a small portion of a coloured video image into sound sources represented by spatialised musical instruments. The interface aims to provide visually impaired people with the capability of perceiving their environment. In this work, the purpose is to verify the hypothesis that sounds from musical instruments can be used to replace colour. Compared with state-of-the-art devices, a strength of the See ColOr interface is that it gives the user prompt auditory feedback about the environment and its colours. An experiment based on a head-mounted camera was performed, related to outdoor navigation with the goal of following a sinuous path. Our participants successfully followed a red serpentine path for more than 80 meters.


Neurocomputing | 2011

Toward local and global perception modules for vision substitution

Guido Bologna; Benoît Deville; Juan Diego Gomez; Thierry Pun

Although retinal neural implants have progressed considerably, they raise a number of questions concerning user acceptance, rejection risk, and cost. For the time being we support a low-cost approach based on the transmission of limited visual information through the auditory channel. The See ColOr mobility aid for visually impaired individuals transforms a small portion of a coloured video image into sound sources represented by spatialised musical instruments. Basically, the conversion of colours into sounds is achieved by quantisation of the HSL colour system. Our purpose is to provide blind people with the capability of perceiving the environment in real time. The novelty in this work is the simultaneous sonification of colour and depth, the latter being coded by sound rhythm. The main drawback of our approach is that sonifying only a limited portion of a captured image yields limited perception. As a consequence, we propose to extend the local perception module with a new global perception module aimed at providing the user with a clear picture of the characteristics of the entire scene. Finally, we present several experiments illustrating the local perception module: (1) detecting an open door in order to leave the office; (2) walking in a hallway and looking for a blue cabinet; (3) walking in a hallway and looking for a red tee shirt; (4) avoiding two red obstacles; (5) moving outside and avoiding a parked car. Videos of the experiments are available at http://www.youtube.com/guidobologna.
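Coding depth by sound rhythm, as described above, amounts to mapping distance to the interval between repetitions of a pixel's instrument sound, so that nearer objects pulse faster. A minimal sketch of such a mapping follows; the distance range and timing values are illustrative assumptions, not the system's actual parameters.

```python
# Hypothetical depth-to-rhythm mapping: nearer objects repeat their
# sound faster. Range limits and millisecond values are assumptions.

def depth_to_rhythm_ms(depth_m, near_m=0.5, far_m=6.0,
                       fast_ms=100, slow_ms=1000):
    """Linearly map depth to the gap (ms) between sound repetitions."""
    d = min(max(depth_m, near_m), far_m)          # clamp to the usable range
    frac = (d - near_m) / (far_m - near_m)        # 0 at near, 1 at far
    return int(fast_ms + frac * (slow_ms - fast_ms))
```

An obstacle half a meter away would thus pulse ten times per second, while a distant wall produces only one slow pulse per second, making approach speed audible.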


international symposium on neural networks | 2008

A perceptual interface for vision substitution in a color matching experiment

Guido Bologna; Benoît Deville; Michel Vinckenbosch; Thierry Pun

Several systems for vision substitution through the auditory channel have been introduced. One such system, presented here, is See ColOr, a dedicated interface that forms part of a mobility aid for visually impaired people. It transforms a small portion of a coloured video image into spatialised instrument sounds. The purpose of this work is to verify the hypothesis that sounds from musical instruments provide an alternative way of obtaining colour information from the environment. We introduce an experiment in which several participants try to match pairs of coloured socks by pointing a head-mounted camera and listening to the generated sounds. Our experiments demonstrated that blindfolded individuals were able to accurately match pairs of coloured socks. The advantage of the See ColOr interface is that it gives the user prompt auditory feedback about the environment and its colours. Our perceptual auditory coding of pixel values opens the opportunity for more complex experiments related to vision tasks, such as perceiving the environment by interpreting its colours.


international work-conference on the interplay between natural and artificial computation | 2005

Eye tracking in coloured image scenes represented by ambisonic fields of musical instrument sounds

Guido Bologna; Michel Vinckenbosch

We present our recent project on visual substitution by Ambisonic 3D-sound fields. Ideally, our system would be used by blind or visually impaired subjects who have previously had sight. The original idea behind our targeted prototype is the use of an eye tracker together with musical instrument sounds encoding coloured pixels. The role of the eye tracker is to activate the attentional processes inherent in vision and to restore, by simulation, the mechanisms of central and peripheral vision. Moreover, we advocate the view that cerebral areas devoted to the integration of information will play a role in rebuilding a global image of the environment. Finally, the role of colour itself is to help subjects distinguish coloured objects or perceive textures such as sky, walls, grass, and trees.


Human Machine Interaction | 2009

See ColOr: Seeing Colours with an Orchestra

Benoît Deville; Guido Bologna; Michel Vinckenbosch; Thierry Pun

The See ColOr interface transforms a small portion of a coloured video image into sound sources represented by spatialised musical instruments. Basically, the conversion of colours into sounds is achieved by quantisation of the HSL (Hue, Saturation and Luminosity) colour system. Our purpose is to provide visually impaired individuals with the capability of perceiving the environment in real time. In this work we present the system's design principles and several experiments carried out by blindfolded persons. The goal of the first experiment was to identify the colours of the main features in static pictures in order to interpret the image scenes. Participants found that colours were helpful in limiting the possible image interpretations. Afterwards, two experiments based on a head-mounted camera were performed. The first pertains to object manipulation and is based on the pairing of coloured socks, while the second is related to outdoor navigation with the goal of following a coloured sinuous path painted on the ground. The socks experiment demonstrated that blindfolded individuals were able to accurately match pairs of coloured socks. The same participants successfully followed a red serpentine path painted on the ground for more than 80 meters. Finally, we propose an original approach for a real-time alerting system, based on the detection of visually salient parts in videos. The particularity of our approach lies in the use of a new feature map constructed from the depth gradient. From the computed feature maps we infer conspicuity maps that indicate areas appreciably different from their surroundings. Then a specific distance function is described, which takes into account both stereoscopic camera limitations and the user's choices.
We also report how we automatically estimate the relative contribution of each conspicuity map, which enables the unsupervised determination of the final saliency map, indicating the visual salience of all points in the image. We demonstrate that this additional depth-based feature map allows the system to detect salient regions with good accuracy in most situations, even in the presence of noisy disparity maps.
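The fusion step described above, combining several conspicuity maps (including the depth-gradient one) into a single saliency map with automatically estimated contributions, can be sketched as a weighted sum of normalised maps. The weighting heuristic below (a map contributes more the more peaked it is) is a common unsupervised choice and stands in for the paper's actual estimation scheme, which is not detailed here.

```python
# Sketch of unsupervised conspicuity-map fusion into a saliency map.
# The peakedness weight (max - mean) is an assumed stand-in for the
# paper's automatic estimation of each map's relative contribution.
import numpy as np

def fuse_conspicuity_maps(maps):
    """Weight each normalised conspicuity map by its peakedness and sum."""
    saliency = np.zeros_like(maps[0], dtype=float)
    for m in maps:
        m = (m - m.min()) / (m.max() - m.min() + 1e-9)  # normalise to [0, 1]
        weight = m.max() - m.mean()  # peaked maps (few strong spots) count more
        saliency += weight * m
    return saliency / len(maps)
```

With this weighting, a flat map (e.g. a noisy disparity channel with no clear structure) contributes almost nothing, while a map with one strong spot dominates the fused result.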


international work-conference on the interplay between natural and artificial computation | 2007

Identifying Major Components of Pictures by Audio Encoding of Colours

Guido Bologna; Benoît Deville; Thierry Pun; Michel Vinckenbosch

The goal of the See ColOr project is to develop a non-invasive mobility aid for blind users that exploits the auditory pathway to represent frontal image scenes in real time. More particularly, we have developed a prototype which transforms HSL coloured pixels into spatialised classical instrument sounds lasting 300 ms. Hue is sonified by the timbre of a musical instrument, saturation by one of four possible notes, and luminosity is represented by a bass voice when the pixel is rather dark and by a singing voice when it is relatively bright. Our first experiments are devoted to static images on the computer screen. Six blindfolded participants were trained to associate colours with musical instruments and then asked to locate, in several pictures, objects with specific shapes and colours. To simplify the experimental protocol, we used a tactile tablet in place of the camera. Overall, participants found that colour was helpful for the interpretation of image scenes.
