Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Galit Buchs is active.

Publication


Featured research published by Galit Buchs.


PLOS ONE | 2016

Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution

Shachar Maidenbaum; Galit Buchs; Sami Abboud; Ori Lavi-Rotbain; Amir Amedi

Graphical virtual environments are currently far from accessible to blind users because their content is mostly visual. This is especially unfortunate, as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio Sensory Substitution Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and they do so without expensive dedicated peripherals such as electrode or vibrator arrays. Using SSDs in virtual environments draws on the same skills as using them in the real world, enabling both training on the device and virtual training on environments before real-world visits. This could enable more complex, standardized, and autonomous SSD training, as well as new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, can use this information to successfully perceive and interact within them was unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments, find doors, differentiate between them based on their features (Experiment 1, task 1) and surroundings (Experiment 1, task 2), and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, and so on. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like, noted their potential for complex training, and suggested many future environments they wished to experience.
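To make the sonification principle concrete, below is a minimal sketch of a left-to-right image-to-audio sweep: pixel row maps to pitch and brightness to loudness. This is a deliberate simplification, not the EyeMusic algorithm itself (which uses musical notes and conveys color as well); `sonify_image` and all its parameters are hypothetical.

```python
import numpy as np

def sonify_image(image, sweep_time=2.0, sample_rate=44100,
                 f_low=220.0, f_high=1760.0):
    """Toy left-to-right image sonification (hypothetical, not EyeMusic).

    image: 2D numpy array with values in [0, 1]; row 0 is the top of the scene.
    Returns a mono signal: each column becomes a short chord whose component
    pitches encode row position and whose amplitudes encode brightness.
    """
    rows, cols = image.shape
    samples_per_col = int(sweep_time * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    # Higher rows (top of image) -> higher frequencies, log-spaced like pitch.
    freqs = np.geomspace(f_high, f_low, rows)
    audio = []
    for c in range(cols):
        col = image[:, c]
        chord = np.zeros_like(t)
        # One sinusoid per bright-enough pixel, weighted by its brightness.
        for r in np.nonzero(col > 0.1)[0]:
            chord += col[r] * np.sin(2 * np.pi * freqs[r] * t)
        audio.append(chord)
    signal = np.concatenate(audio)
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal

# Example: a top-left-to-bottom-right diagonal is heard as a falling pitch.
img = np.eye(24)
wave = sonify_image(img)
```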


Mediterranean Conference on Control and Automation | 2014

Vision through other senses: Practical use of Sensory Substitution devices as assistive technology for visual rehabilitation

Shachar Maidenbaum; Roni Arbel; Galit Buchs; Shani Shapira; Amir Amedi

Visual-to-auditory Sensory Substitution Devices (SSDs) are non-invasive sensory aids that provide visual information to the blind via their functioning senses, such as audition. For years SSDs have been confined to laboratory settings, but we believe the time has come to use them for their original purpose: real-world practical visual rehabilitation. Here we demonstrate this potential by presenting, for the first time, new features of the EyeMusic SSD, which gives the user whole-scene shape, location, and color information. These features include higher resolution, and they address previous stumbling blocks by being freely available to download and run on a smartphone platform. Using the EyeMusic, we demonstrate the potential of SSDs in noisy real-world scenarios for tasks such as identifying and manipulating objects. We then discuss the neural basis of using SSDs, and conclude by discussing other steps in progress on the path to making their practical use more widespread.
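One natural way to add the color channel the abstract mentions is to give each quantized color its own timbre. The sketch below illustrates that idea under stated assumptions only; the `TIMBRES` table and `color_tone` helper are hypothetical and do not reproduce the EyeMusic's actual color-to-instrument assignment.

```python
import numpy as np

# Hypothetical color -> timbre table: each entry lists relative amplitudes
# of the first few harmonics, giving each color a distinguishable sound.
TIMBRES = {
    "red":   (1.0, 0.7, 0.3),   # brighter, brass-like mix (illustrative)
    "green": (1.0, 0.2, 0.6),
    "blue":  (1.0, 0.5, 0.0),   # purer tone
    "white": (1.0, 0.9, 0.8),   # rich, voice-like mix
}
PALETTE = {"red": (1, 0, 0), "green": (0, 1, 0),
           "blue": (0, 0, 1), "white": (1, 1, 1)}

def color_tone(rgb, freq=440.0, dur=0.3, sr=44100):
    """Return (color name, waveform): the nearest palette color rendered
    as a tone whose harmonic content encodes that color."""
    name = min(PALETTE, key=lambda k: sum((a - b) ** 2
               for a, b in zip(PALETTE[k], rgb)))
    t = np.arange(int(dur * sr)) / sr
    wave = sum(amp * np.sin(2 * np.pi * freq * (i + 1) * t)
               for i, amp in enumerate(TIMBRES[name]))
    return name, wave / np.abs(wave).max()

name, wave = color_tone((0.9, 0.1, 0.2))  # -> "red"
```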


International Conference on Human Haptic Sensing and Touch Enabled Computer Applications | 2014

Obstacle Identification and Avoidance Using the ‘EyeCane’: A Tactile Sensory Substitution Device for Blind Individuals

Galit Buchs; Shachar Maidenbaum; Amir Amedi

One of the main challenges facing blind and visually impaired people is independent mobility without being obtrusive to their environment. We developed a tactile, low-cost, finger-sized sensory substitution device, the EyeCane, to aid blind individuals in identifying and avoiding obstacles in an unobtrusive manner. A simplified version of the EyeCane was tested on six blindfolded sighted participants who were naive to the device. After a short (2–3 min) training period they were asked to identify and avoid knee-to-waist-high (Side) and sidewalk-height (Floor) obstacles using the EyeCane. Avoidance included walking around or stepping over the obstacles. We show that in the fifth trial, participants correctly identified 87 ± 13.6% (mean ± SD) and correctly avoided 63 ± 15% of the Side obstacles, compared to 14% in the control condition (p < 4E-10 and p < 1.1E-05, respectively). For Floor obstacles, participants correctly identified 79 ± 18.8% and correctly avoided 41 ± 37.6%, compared to the control's 10% (p < 0.002 and p < 0.06, respectively).
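A common convention for rendering a single range reading as a cue, assumed here purely for illustration (the abstract does not specify the encoding), is to pulse faster as the obstacle gets closer; `cue_interval` below is a hypothetical helper in that spirit.

```python
def cue_interval(distance_m, max_range_m=5.0,
                 min_interval_s=0.05, max_interval_s=1.0):
    """Map a distance reading to the pause between vibration pulses:
    nearer obstacles -> shorter pauses -> faster pulsing.
    Returns None when nothing is in range (no cue at all)."""
    if distance_m is None or distance_m >= max_range_m:
        return None
    frac = max(distance_m, 0.0) / max_range_m    # 0 (touching) .. 1 (far)
    return min_interval_s + frac * (max_interval_s - min_interval_s)

for d in (0.3, 1.5, 4.0):
    print(f"{d} m -> pulse every {cue_interval(d):.2f} s")
```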


IEEE Virtual Reality Conference | 2015

Blind in a virtual world: Using sensory substitution for generically increasing the accessibility of graphical virtual environments

Shachar Maidenbaum; Sami Abboud; Galit Buchs; Amir Amedi

Graphical virtual environments are currently far from accessible to the blind, as most of their content is visual. While several previous environment-specific tools have indeed increased accessibility to specific environments, they do not offer a generic solution. This is especially unfortunate, as such environments hold great potential for the blind, e.g., for safe orientation and learning. Visual-to-audio Sensory Substitution Devices (SSDs) can potentially increase their accessibility in a generic fashion by sonifying the on-screen content regardless of the specific environment. Using SSDs also taps into the skills gained from using the same SSDs for completely different tasks, including in the real world. However, whether congenitally blind users can use this information to perceive and interact successfully within virtual environments was unclear. We tested this using the EyeMusic SSD, which conveys shape and color information, to perform virtual tasks otherwise not possible without vision. We show that these tasks can be accomplished by the congenitally blind.


Restorative Neurology and Neuroscience | 2017

Waist-up protection for blind individuals using the EyeCane as a primary and secondary mobility aid

Galit Buchs; Noa Simon; Shachar Maidenbaum; Amir Amedi

Background: One of the most striking statistics on the mobility of blind individuals is the high rate of upper-body injuries, even when using the white cane. Objective: Here we addressed the rehabilitation-oriented challenge of providing blind people with a reliable tool for avoiding waist-up obstacles, one of the impediments to successful mobility with currently available methods (e.g., the white cane). Methods: We used the EyeCane, a device we developed that translates distances from several angles into haptic and auditory cues in an intuitive and unobtrusive manner, serving as both a primary and a secondary mobility aid. We investigated the rehabilitation potential of such a device in facilitating visionless waist-up body protection. Results: After ∼5 minutes of training with the EyeCane, blind participants were able to successfully detect and avoid obstacles waist-high and up. Their success rate was significantly higher than when using the white cane alone. As avoiding an obstacle required participants to perform an additional cognitive process after detecting it, the avoidance rate was significantly lower than the detection rate. Conclusion: Our work demonstrates that the EyeCane can extend the sensory world of blind individuals by expanding their currently accessible inputs, offering them a new practical rehabilitation tool.
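Since the abstract describes translating distances from several angles into cues, the sketch below illustrates one plausible reading of that idea: fuse a forward beam with an up-tilted beam so that waist-up obstacles get their own alert channel. The beam geometry, threshold, and `waist_up_alerts` helper are all assumptions, not the EyeCane's actual design.

```python
from dataclasses import dataclass

@dataclass
class BeamReading:
    angle_deg: float    # beam elevation relative to the forward axis
    distance_m: float   # measured range along that beam (inf = clear)

def waist_up_alerts(readings, threshold_m=2.0):
    """Toy fusion of multiple angled range beams (illustrative only):
    each beam that sees something within threshold produces its own
    alert, so an up-tilted beam flags waist-up obstacles that a
    ground-level sweep would miss."""
    alerts = []
    for r in readings:
        if r.distance_m < threshold_m:
            zone = "waist-up" if r.angle_deg > 0 else "ground-level"
            alerts.append((zone, r.angle_deg, r.distance_m))
    return alerts

readings = [BeamReading(0.0, float("inf")),   # forward beam: clear
            BeamReading(30.0, 1.2)]           # up-tilted beam: branch ahead
print(waist_up_alerts(readings))              # [('waist-up', 30.0, 1.2)]
```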


Restorative Neurology and Neuroscience | 2015

Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach

Galit Buchs; Shachar Maidenbaum; Shelly Levy-Tzedek; Amir Amedi

Purpose: To visually perceive our surroundings, we constantly move our eyes, focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive (e.g., bionic eyes) and non-invasive (e.g., Sensory Substitution Devices, SSDs), down-sample visual stimuli into low-resolution images. Zooming in on sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a ‘visual’ scene when offered this information via a different sensory modality, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience? Methods: We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components, recognized via the EyeMusic's zooming mechanism. Results: After specialized training of just 6–10 hours, blind participants successfully and actively integrated facial features into cartoon identities in 79 ± 18% of the trials, a highly significant result (chance level 10%; rank-sum p < 1.55E-04). Conclusions: These findings show that even users who lack any previous visual experience can indeed integrate visual information presented at increased resolution. This has potentially important practical implications for both invasive and non-invasive visual rehabilitation methods.
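The zooming-in approach can be pictured as cropping a window around a point of interest and resampling it to the SSD's fixed low output resolution, so the same number of "audio pixels" covers a smaller area and therefore carries finer detail. A minimal sketch, assuming simple nearest-neighbor resampling (`zoom_to_ssd` and its parameters, including the output resolution, are hypothetical, not the EyeMusic implementation):

```python
import numpy as np

def zoom_to_ssd(image, center, zoom, out_shape=(24, 40)):
    """Crop a window around `center` whose size shrinks as `zoom` grows,
    then resample it to the SSD's fixed output resolution. At zoom=1 the
    whole image is down-sampled; at zoom=4 the same 24x40 output spans a
    quarter-size window, i.e. 4x finer detail over that region."""
    h, w = image.shape
    win_h, win_w = max(1, int(h / zoom)), max(1, int(w / zoom))
    cy, cx = center
    y0 = min(max(cy - win_h // 2, 0), h - win_h)   # clamp window to image
    x0 = min(max(cx - win_w // 2, 0), w - win_w)
    crop = image[y0:y0 + win_h, x0:x0 + win_w]
    # Nearest-neighbor resample of the crop onto the fixed output grid.
    ys = np.arange(out_shape[0]) * win_h // out_shape[0]
    xs = np.arange(out_shape[1]) * win_w // out_shape[1]
    return crop[np.ix_(ys, xs)]

scene = np.random.rand(480, 640)
overview = zoom_to_ssd(scene, center=(240, 320), zoom=1)  # whole scene
detail = zoom_to_ssd(scene, center=(120, 200), zoom=4)    # 4x finer patch
```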


Augmented Human International Conference | 2016

Social Sensing: A Wi-Fi-Based Social Sense for Perceiving the Surrounding People

Yoni Halperin; Galit Buchs; Shachar Maidenbaum; Maya Amenou; Amir Amedi

People who are blind or have social disabilities can encounter difficulties in properly sensing and interacting with surrounding people. We suggest a sensory augmentation approach that offers the user perceptual input via properly functioning sensory channels (e.g., visual, tactile) for this purpose. Specifically, we created a Wi-Fi-signal-based system that helps the user determine the presence of one or more people in a room. The signal strength indicates the distance of each person in near proximity; these distances are sonified and played sequentially. The Wi-Fi signals come from common smartphones, so the system can be adapted for everyday use in a simple manner. We demonstrate the use of this system by showing its effectiveness in determining the presence of others. Specifically, we show that it allows the user to determine the location (i.e., close by, inside, or outside the room) and the number of people at each distance. The system could be further adapted for purposes such as locating one's group in a crowd, following a group in a new location, enhancing identification for people with prosopagnosia, raising awareness of the presence of others as part of a rehabilitation behavioral program for people with ASD, or real-life social networking.
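A standard way to turn Wi-Fi signal strength into a distance estimate, assumed here since the abstract gives no calibration details, is the log-distance path-loss model; each estimated distance can then be assigned a pitch and queued for sequential playback. Both helpers below are hypothetical sketches, not the paper's implementation.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: distance in meters from one RSSI
    reading. rssi_at_1m and the exponent are device/environment
    calibration constants (the values here are illustrative)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def sonification_plan(rssi_readings, f_near=880.0, f_far=220.0, max_d=10.0):
    """Order detected people nearest-first and assign each a pitch:
    closer -> higher. Returns (distance_m, frequency_hz) pairs that a
    synthesizer would play one after another."""
    plan = []
    for d in sorted(rssi_to_distance(r) for r in rssi_readings):
        frac = min(d, max_d) / max_d
        plan.append((round(d, 1), f_near + frac * (f_far - f_near)))
    return plan

# Three phones in the room; the strongest signal is the nearest person.
print(sonification_plan([-45, -60, -75]))
```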


Augmented Human International Conference | 2015

Augmented non-visual distance sensing with the EyeCane

Galit Buchs; Shachar Maidenbaum; Amir Amedi

How can we sense distant objects without vision? Vision is the main distal sense used by humans, so distance and spatial perception are impaired for sighted individuals in the dark and for people with visual impairments. We suggest augmenting distance perception via other senses, using auditory or haptic cues, and have created the EyeCane for this purpose. The EyeCane is a minimal Sensory Substitution Device that enables users to perform tasks such as distance estimation and obstacle detection and avoidance, non-visually, up to 5 m away. In the demonstration, visitors receive brief training with the device and then use it to detect objects and estimate distances while blindfolded.


Restorative Neurology and Neuroscience | 2014

The "EyeCane", a new electronic travel aid for the blind: Technology, behavior & swift learning

Shachar Maidenbaum; Shlomi Hanassy; Sami Abboud; Galit Buchs; Daniel-Robert Chebat; Shelly Levy-Tzedek; Amir Amedi


Investigative Ophthalmology & Visual Science | 2014

Returning Sensory Substitution to practical visual rehabilitation

Amir Amedi; Daniel-Robert Chebat; Shelly Levy-Tzedek; Galit Buchs; Shachar Maidenbaum

Collaboration


Dive into Galit Buchs's collaborations.

Top Co-Authors

Amir Amedi, Hebrew University of Jerusalem
Shachar Maidenbaum, Hebrew University of Jerusalem
Shelly Levy-Tzedek, Ben-Gurion University of the Negev
Sami Abboud, Hebrew University of Jerusalem
Maya Amenou, Hebrew University of Jerusalem
Roni Arbel, Hebrew University of Jerusalem
Shani Shapira, Hebrew University of Jerusalem
Shlomi Hanassy, Hebrew University of Jerusalem
Yoni Halperin, Hebrew University of Jerusalem