Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luca Giulio Brayda is active.

Publications


Featured research published by Luca Giulio Brayda.


International Conference on Human Haptic Sensing and Touch Enabled Computer Applications | 2016

Localized Magnification in Vibrotactile HMDs for Accurate Spatial Awareness

Victor Adriel de Jesus Oliveira; Luciana Porcher Nedel; Anderson Maciel; Luca Giulio Brayda

Actuator density is an important parameter in the design of vibrotactile displays. When it comes to obstacle detection or navigation tasks, a high number of tactors may provide more information, but not necessarily better performance. Depending on the body site and vibration parameters adopted, high density can make it harder to detect tactors in an array. In this paper, we explore the trade-off between actuator density and precision by comparing three kinds of directional cues. After performing a within-subject naive search task using a head-mounted vibrotactile display, we found that increasing the density of the array locally provides higher performance in detecting directional cues.


IEEE Transactions on Visualization and Computer Graphics | 2017

Designing a Vibrotactile Head-Mounted Display for Spatial Awareness in 3D Spaces

Victor Adriel de Jesus Oliveira; Luca Giulio Brayda; Luciana Porcher Nedel; Anderson Maciel

Due to the perceptual characteristics of the head, vibrotactile head-mounted displays are built with low actuator density. Therefore, vibrotactile guidance is mostly assessed by pointing towards objects in the azimuthal plane. When it comes to multisensory interaction in 3D environments, it is also important to convey information about objects in the elevation plane. In this paper, we design and assess a haptic guidance technique for 3D environments. First, we explore the modulation of vibration frequency to indicate the position of objects in the elevation plane. Then, we assess a vibrotactile HMD made to render the position of objects in a 3D space around the subject by varying both stimulus loci and vibration frequency. Results show that frequencies modulated with a quadratic growth function allowed more accurate, precise, and faster target localization in an active head-pointing task. The technique presented high usability and a strong learning effect for a haptic search across different scenarios in an immersive VR setup.
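The quadratic growth mapping from elevation to vibration frequency described above could be sketched as follows. This is an illustrative sketch only: the elevation range and the 50-250 Hz frequency band are assumptions, not values from the paper.

```python
# Hypothetical sketch: map elevation angle to vibration frequency with a
# quadratic growth function, as the abstract describes. The elevation range
# (-90..90 degrees) and frequency band (50..250 Hz) are assumptions.

def elevation_to_frequency(elevation_deg, f_min=50.0, f_max=250.0):
    """Quadratically map an elevation in [-90, 90] deg to a frequency in Hz."""
    # Normalize elevation to [0, 1]: 0 at the lowest point, 1 at the top.
    t = (elevation_deg + 90.0) / 180.0
    # Quadratic growth: frequency rises slowly at low elevations and faster
    # near the top, spreading the upper frequencies over more of the band.
    return f_min + (f_max - f_min) * t ** 2

print(elevation_to_frequency(-90))  # 50.0 (lowest elevation)
print(elevation_to_frequency(0))    # 100.0 (horizon)
print(elevation_to_frequency(90))   # 250.0 (zenith)
```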


IEEE Transactions on Haptics | 2013

Predicting Successful Tactile Mapping of Virtual Objects

Luca Giulio Brayda; Claudio Campus; Monica Gori

Improving spatial ability of blind and visually impaired people is the main target of orientation and mobility (O&M) programs. In this study, we use a minimalistic mouse-shaped haptic device to show a new approach aimed at evaluating devices providing tactile representations of virtual objects. We consider psychophysical, behavioral, and subjective parameters to clarify under which circumstances mental representations of spaces (cognitive maps) can be efficiently constructed with touch by blindfolded sighted subjects. We study two complementary processes that determine map construction: low-level perception (in a passive stimulation task) and high-level information integration (in an active exploration task). We show that jointly considering a behavioral measure of information acquisition and a subjective measure of cognitive load can give an accurate prediction and a practical interpretation of mapping performance. Our simple TActile MOuse (TAMO) uses haptics to assess spatial ability: this may help individuals who are blind or visually impaired to be better evaluated by O&M practitioners or to evaluate their own performance.


IEEE Haptics Symposium | 2016

Spatial discrimination of vibrotactile stimuli around the head

Victor Adriel de Jesus Oliveira; Luciana Porcher Nedel; Anderson Maciel; Luca Giulio Brayda

Several studies have evaluated vibrotactile stimuli on the head to aid orientation and communication. However, the acuity of the head's skin for vibration still needs to be explored. In this paper, we report the assessment of the spatial resolution on the head. We performed a 2AFC psychophysical experiment, systematically varying the distance between pairs of stimuli in a standard-comparison approach. We took into consideration not only the perceptual thresholds but also the reaction times and subjective factors, such as workload and vibration pleasantness. Results show that the region around the forehead is not only the most sensitive, with thresholds under 5 mm, but also the region where spatial discrimination was felt to be easiest to perform. We also found that acuity on the head for vibrotactile stimuli can be described as a function of skin type (hairy or glabrous) and of the distance of the stimulated loci from the head midline.
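A discrimination threshold like the sub-5 mm forehead value can be estimated from 2AFC data as the separation where the proportion of correct responses crosses 75%, halfway between chance (50%) and perfect performance. A minimal sketch using linear interpolation follows; the data points are made up for illustration, not taken from the paper.

```python
# Hypothetical sketch: estimate a 2AFC discrimination threshold as the
# stimulus separation where proportion correct crosses 0.75, interpolating
# linearly between tested separations. The data below are illustrative.

def threshold_75(separations_mm, prop_correct):
    """Return the separation (mm) at which performance crosses 0.75."""
    points = list(zip(separations_mm, prop_correct))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= 0.75 <= p1:
            # Linear interpolation between the two bracketing points.
            return s0 + (s1 - s0) * (0.75 - p0) / (p1 - p0)
    raise ValueError("75% point not bracketed by the tested separations")

seps = [2, 4, 6, 8, 10]              # separations between stimuli, in mm
pc = [0.52, 0.70, 0.80, 0.92, 0.98]  # proportion correct at each separation

print(threshold_75(seps, pc))  # 5.0 (mm)
```

In practice a psychometric function (e.g. a cumulative Gaussian) would be fitted rather than interpolating, but the crossing-point idea is the same.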


IEEE Transactions on Haptics | 2015

The Importance of Visual Experience, Gender, and Emotion in the Assessment of an Assistive Tactile Mouse

Luca Giulio Brayda; Claudio Campus; Mariacarla Memeo; Laura Lucagrossi

Tactile maps are efficient tools to improve the spatial understanding and mobility skills of visually impaired people. Their limited adaptability can be compensated for with haptic devices that display graphical information, but such devices are frequently assessed with performance-based metrics only, which can hide potential spatial abilities in O&M protocols. We assess a low-tech tactile mouse able to deliver three-dimensional content, considering how performance, mental workload, behavior, and anxiety status vary with task difficulty and gender in congenitally blind, late blind, and sighted subjects. Results show that task difficulty coherently modulates the efficiency and difficulty of building mental maps, regardless of visual experience. Although exhibiting similar, gender-independent attitudes, female participants had lower performance and higher cognitive load, especially when congenitally blind. All groups showed a significant decrease in anxiety after using the device. Tactile graphics with our device therefore seem applicable across different visual experiences, with no negative emotional consequences of mentally demanding spatial tasks. Going beyond performance-based assessment, our methodology can help better target technological solutions in orientation and mobility protocols.


Human Factors in Computing Systems | 2011

An investigation of search behaviour in a tactile exploration task for sighted and non-sighted adults.

Luca Giulio Brayda; Claudio Campus; Ryad Chellali; Guido Rodriguez; Cristina Martinoli

In this work in progress, we propose a new method for objectively evaluating the process of performing a tactile exploration with a visuo-tactile sensory substitution system. Both behavioral and neurophysiological cues are considered to evaluate the identification process of virtual objects and surrounding environments. Our experiments suggest that both sighted and visually impaired users integrated spatial information and developed similar behavioral and neurophysiological patterns. The proposed method could also serve as a tool to evaluate touch-based interfaces for application in orientation and mobility programs.


Advances in Computer-Human Interaction | 2009

Virtual Environments and Scenario Languages for Advanced Teleoperation of Groups of Real Robots: Real Case Application

Nicolas Mollet; Luca Giulio Brayda; Ryad Chellali; Jean-Guy Fontaine

This paper deals with the use of Virtual Reality and Scenario Languages in the field of teleoperation: how to enable a group of teleoperators to control, in a collaborative way, groups of real robots, themselves collaborating with each other to achieve complex tasks; such tasks include inspecting a dangerous area or exploring a partially unknown environment. The main goal is to obtain efficient, natural, and innovative interactions in such a context. We first present the use of Collaborative Virtual Environments (CVE) to obtain a unified, simplified, virtual abstraction of distributed, complex, real robots. We show how this virtual environment offers a peculiar ability: to free teleoperators from space and time constraints. Then we present our original use of Scenario Languages to describe complex and collaborative tasks in a natural and flexible way. Finally, we validate the proposed framework through our teleoperation platform ViRAT.


PLOS ONE | 2016

Depth Echolocation Learnt by Novice Sighted People.

Alessia Tonelli; Luca Giulio Brayda; Monica Gori

Some blind people have developed a unique technique, called echolocation, to orient themselves in unknown environments. More specifically, by self-generating a clicking noise with the tongue, echolocators gain knowledge about the external environment by perceiving more detailed object features. It is not clear to date whether sighted individuals can also develop this extremely useful technique. To investigate this, here we test the ability of novice sighted participants to perform a depth echolocation task. Moreover, in order to evaluate whether the type of room (anechoic or reverberant) and the type of clicking sound (with the tongue or with the hands) influences the learning of this technique, we divided the entire sample into four groups. Half of the participants produced the clicking sound with their tongue, the other half with their hands. Half of the participants performed the task in an anechoic chamber, the other half in a reverberant room. Subjects stood in front of five bars, each of a different size, placed at five different distances from the subject. The dimensions of the bars ensured a constant subtended angle across the five distances considered. The task was to identify the correct distance of the bar. We found that, even by the second session, the participants were able to judge the correct depth of the bar at a rate greater than chance. Improvements in both precision and accuracy were observed in all experimental sessions. More interestingly, we found significantly better performance in the reverberant room than in the anechoic chamber. The type of clicking did not modulate our results. This suggests that the echolocation technique can also be learned by sighted individuals and that room reverberation can influence this learning process. More generally, this study shows that total loss of sight is not a prerequisite for echolocation skills; this suggests important potential implications for rehabilitation settings for persons with residual vision.
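The constant-subtended-angle constraint mentioned above is simple geometry: a bar at distance d subtends angle theta when its height is h = 2 * d * tan(theta / 2), so bar height must grow linearly with distance. A small sketch, where the 10-degree angle and the distances are assumptions chosen for illustration, not values from the paper:

```python
import math

# Hypothetical sketch of the constant-subtended-angle constraint: a bar at
# distance d must have height h = 2 * d * tan(theta / 2) to subtend the same
# angle theta at every distance. The 10-degree angle and the distances below
# are illustrative assumptions.

def bar_height(distance_m, angle_deg=10.0):
    """Height (m) of a bar subtending angle_deg at the given distance."""
    return 2.0 * distance_m * math.tan(math.radians(angle_deg) / 2.0)

for d in [1.0, 1.5, 2.0, 2.5, 3.0]:
    print(f"distance {d:.1f} m -> bar height {bar_height(d):.3f} m")
```

Doubling the distance doubles the required bar height, which removes retinal/angular size as a cue and forces participants to rely on the acoustic properties of the echo.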


Cyberworlds | 2008

Standardization and Integration in Robotics: Case of Virtual Reality Tools

Nicolas Mollet; Luca Giulio Brayda; Baizid Khelifa; Ryad Chellali

Robotics needs realistic simulators and environments to display not only physical interactions between robots and the environment, but also increasingly complex processes. Sensing procedures, communications, and other processes such as cognitive reasoning may first be run on adapted, competitive simulation platforms to achieve an optimal design. Virtual reality (VR) has moved beyond being a mere visual rendering tool to become a more complete means of handling basic simulation problems and of aggregating complex functions for complex interactions. VR and robotics met through teleoperation. Historically, the first teleoperation systems were used as interfaces to control robots remotely by providing input tools and sensory feedback. With technological improvements, VR shifted to being the main kernel, first, for studies on advanced robotic systems such as bio-inspired robot behavior and multi-robot systems, and second, for providing advanced multimodal interfaces for real-time interaction with a useful level of abstraction. We present in this paper some basic robotics problems and the way VR can solve them. The first part of this paper discusses the benefits of standardization and VR for complex robot systems. Then, we present some existing solutions and analyze their advantages and drawbacks. Finally, we introduce an application where we take advantage of those features, in terms of efficiency in the development cycle and of reusability.


Frontiers in Systems Neuroscience | 2015

Task-dependent calibration of auditory spatial perception through environmental visual observation

Alessia Tonelli; Luca Giulio Brayda; Monica Gori

Visual information is paramount to space perception, and vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test these issues, we investigate possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (a normal room and an anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task, but not in the MAA task, after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environmental observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration: echoes may mediate the transfer of information from the visual to the auditory system.

Collaboration


Dive into Luca Giulio Brayda's collaborations.

Top Co-Authors

Ryad Chellali (Istituto Italiano di Tecnologia)
Claudio Campus (Istituto Italiano di Tecnologia)
Monica Gori (Istituto Italiano di Tecnologia)
Alessia Tonelli (Istituto Italiano di Tecnologia)
Nicolas Mollet (Istituto Italiano di Tecnologia)
Mariacarla Memeo (Istituto Italiano di Tecnologia)
Anderson Maciel (Universidade Federal do Rio Grande do Sul)
Luciana Porcher Nedel (Universidade Federal do Rio Grande do Sul)
Victor Adriel de Jesus Oliveira (Universidade Federal do Rio Grande do Sul)
Fabrizio Leo (Istituto Italiano di Tecnologia)