Brian F. G. Katz
University of Paris-Sud
Publications
Featured research published by Brian F. G. Katz.
Virtual Reality | 2012
Brian F. G. Katz; Slim Kammoun; Gaëtan Parseihian; Olivier Gutierrez; Adrien Brilhault; Malika Auvray; Philippe Truillet; Michel Denis; Simon J. Thorpe; Christophe Jouffrais
Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.
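The guidance cue described above is maintained in a head-centred reference frame. As a rough illustration only, and not the NAVIG implementation, the following Python sketch shows how a geolocated target could be converted into the head-centred azimuth and distance used to drive spatialised audio; the flat local east/north frame, the function name, and the sign conventions are all assumptions.

import math

def head_centered_cue(user_xy, head_yaw_deg, target_xy):
    """Return (azimuth_deg, distance_m) of a target relative to the user's head.

    user_xy, target_xy: (east, north) coordinates in metres.
    head_yaw_deg: head orientation, 0 = north, positive clockwise.
    Azimuth is in [-180, 180), 0 = straight ahead, positive to the right.
    """
    dx = target_xy[0] - user_xy[0]              # east offset to the target
    dy = target_xy[1] - user_xy[1]              # north offset to the target
    bearing = math.degrees(math.atan2(dx, dy))  # compass bearing to the target
    azimuth = (bearing - head_yaw_deg + 180.0) % 360.0 - 180.0
    distance = math.hypot(dx, dy)
    return azimuth, distance

# Example: target 10 m to the north-east, head turned 30 degrees to the right.
print(head_centered_cue((0.0, 0.0), 30.0, (7.07, 7.07)))  # ~ (15 deg, 10 m)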
Frontiers in Neuroscience | 2014
Gaëtan Parseihian; Christophe Jouffrais; Brian F. G. Katz
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy, examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in the reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions consistently overestimated to more lateral positions, while no significant difference in distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli. Various potential reasons for this discrepancy are discussed, with several proposals for improving distance perception in peripersonal virtual environments.
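The distance compression reported above can be quantified with a simple linear fit of reported against actual distance, where a slope below 1 indicates responses pulled toward the centre of the reporting area. The sketch below is illustrative only; the data values are invented and are not from the study.

import numpy as np

actual   = np.array([0.2, 0.3, 0.4, 0.5, 0.6])       # source distances (m)
reported = np.array([0.26, 0.33, 0.40, 0.46, 0.52])  # hypothetical responses (m)

slope, intercept = np.polyfit(actual, reported, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f} m")
# slope < 1: far targets under-reported, near targets over-reported,
# i.e. responses compressed toward the middle of the reporting range.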
Tests and Proofs | 2012
Marc Rébillat; Xavier Boutillon; Etienne Corteel; Brian F. G. Katz
We present a study on audio, visual, and audio-visual egocentric distance perception by moving subjects in virtual environments. Audio-visual rendering is provided using tracked passive visual stereoscopy and acoustic wave field synthesis (WFS). Distances are estimated using indirect blind-walking (triangulation) under each rendering condition. Experimental results show that distances perceived in the virtual environment are systematically overestimated for rendered distances closer than the position of the audio-visual rendering system and underestimated for farther distances. Interestingly, subjects perceived each virtual object at a modality-independent distance when using the audio modality, the visual modality, or the combination of both. WFS was able to synthesise perceptually meaningful sound fields. Dynamic audio-visual cues were used by subjects when estimating the distances in the virtual world. Moving may have provided subjects with a better visual distance perception of close distances than if they were static. No correlation between the feeling of presence and the visual distance underestimation has been found. To explain the observed perceptual distance compression, it is proposed that, due to conflicting distance cues, the audio-visual rendering system physically anchors the virtual world to the real world. Virtual objects are thus attracted by the physical audio-visual rendering system.
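Indirect blind-walking (triangulation) infers the perceived distance from the direction the subject indicates after a blind walk. The sketch below shows one common form of this geometry under simplifying assumptions (2D, target initially straight ahead of the observer); it is not the study's exact protocol, and all names and conventions are illustrative.

import math

def triangulated_distance(walk_point, pointing_azimuth_deg):
    """Perceived target distance from a triangulated pointing response.

    The observer starts at the origin facing +y with the target on the +y axis.
    After a blind walk to walk_point = (x, y) in metres, the observer points
    toward the remembered target with azimuth pointing_azimuth_deg
    (0 = +y, positive clockwise, must be non-zero). The perceived target is the
    intersection of that pointing ray with the original line of sight (x = 0).
    """
    x, y = walk_point
    az = math.radians(pointing_azimuth_deg)
    # Pointing ray: (x + t*sin(az), y + t*cos(az)); intersect with x = 0.
    t = -x / math.sin(az)
    return y + t * math.cos(az)

# Example: after walking 2 m right and 1 m forward, pointing at -45 degrees
# (back toward the left) triangulates a perceived distance of 3.0 m.
print(triangulated_distance((2.0, 1.0), -45.0))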
International Conference on Haptic and Audio Interaction Design | 2012
Lorenzo Picinali; Christopher Feakes; Davide A. Mauro; Brian F. G. Katz
Individuals with normal hearing are generally able to discriminate auditory stimuli that have the same fundamental frequency but different spectral content. This study concerns to what extent it is possible to perform the same differentiation with vibratory tactile stimuli. Three perceptual experiments were carried out to compare discrimination thresholds, in terms of spectral differences, between auditory and vibratory tactile stimulation. The first test assesses the subjects' ability to discriminate between three signals with distinct spectral content. The second test measures the discrimination threshold between a pure tone signal and a signal composed of two pure tones, varying the amplitude and frequency of the second tone. Finally, the third test measures the discrimination threshold between a tone with even harmonic components and a tone with odd ones. The results show that it is indeed possible to discriminate between haptic signals having the same fundamental frequency but different spectral content; however, the sensitivity of such detection is markedly lower than for audio stimuli.
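Stimuli of the kind described in the third test can be built as equal-amplitude harmonic complexes sharing a fundamental. The sketch below is only an assumption about how such stimuli might be generated; the sample rate, duration, harmonic sets, and fundamental are arbitrary choices, not the paper's parameters.

import numpy as np

def harmonic_complex(f0, harmonics, dur=1.0, fs=44100):
    """Sum of equal-amplitude sinusoids at f0 * n for each n in `harmonics`."""
    t = np.arange(int(dur * fs)) / fs
    sig = sum(np.sin(2 * np.pi * f0 * n * t) for n in harmonics)
    return sig / np.max(np.abs(sig))  # normalise peak amplitude to 1

f0 = 50.0  # fundamental chosen in the tactile-sensitive range (Hz)
even_tone = harmonic_complex(f0, [1, 2, 4, 6])  # fundamental + even harmonics
odd_tone  = harmonic_complex(f0, [1, 3, 5, 7])  # fundamental + odd harmonics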
IEEE International Workshop on Haptic Audio Visual Environments and Games | 2012
Lorenzo Picinali; Christopher Feakes; Davide A. Mauro; Brian F. G. Katz
To investigate the capability of human beings to differentiate between tactile-vibratory stimuli with the same fundamental frequency but different spectral content, this study examines discrimination tasks comparing audio and haptic performance. Using an up-down 1 dB step adaptive procedure, the experimental protocol consists of measuring the discrimination threshold between a pure tone signal and a stimulus composed of two concurrent pure tones, changing the amplitude and frequency of the second tone. The task is performed with exactly the same experimental apparatus (computer, AD-DA converters, amplifiers, and drivers) for both the audio and tactile modalities. The results show that it is indeed possible to discriminate between signals having the same fundamental frequency but different spectral content for both the haptic and audio modalities, the latter being notably more sensitive. Furthermore, particular correlations were found between the frequency of the second tone and the discrimination threshold values for both the audio and tactile modalities.
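The up-down 1 dB step adaptive procedure can be sketched as a staircase that lowers the second tone's level after correct responses and raises it after errors. The rule below (two-down/one-up, threshold taken from the last reversals) is an assumption for illustration; the abstract does not specify the exact rule, and trial() stands in for the actual stimulus presentation and response collection.

def staircase(trial, start_level_db=-10.0, step_db=1.0, max_reversals=8):
    """Track the second-tone level (dB re: the first tone) with an up-down rule."""
    level = start_level_db
    correct_in_row = 0
    direction = None                  # current staircase direction: 'up' or 'down'
    reversals = []
    while len(reversals) < max_reversals:
        if trial(level):              # True = two-tone stimulus correctly identified
            correct_in_row += 1
            if correct_in_row == 2:   # two correct in a row -> make the task harder
                correct_in_row = 0
                if direction == 'up':
                    reversals.append(level)
                direction = 'down'
                level -= step_db
        else:                         # one error -> make the task easier
            correct_in_row = 0
            if direction == 'down':
                reversals.append(level)
            direction = 'up'
            level += step_db
    # Threshold estimate: mean level over the last reversal points.
    last = reversals[-6:]
    return sum(last) / len(last)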
International Journal of Human-Computer Interaction | 2014
Mehdi Ammi; Brian F. G. Katz
Applications requiring the processing of complex environments present real challenges for the simultaneous management of numerous constraints (multiple degrees of freedom, search criteria, etc.). Target searching is probably among the most critical tasks, consisting of finding configurations corresponding to various criteria (e.g., maximum, minimum, reference). During the search, users need to be aware of proximal results in order to compare values and make decisions. A new audio-haptic coupling strategy is proposed to improve the search for targets in complex environments, enabling the simultaneous use of both the audio and haptic channels for value comparisons at different spatial configurations. This is accomplished through the use of tempo in both sensory signals, creating a connection between the two channels that enables an intuitive and efficient comparison. Including spatialized audio improves the user's situation awareness. The benefit of this intermodal metaphor is evaluated for a 2D nonvisual and abstract environment. Results show improvements relative to simple haptic exploration.
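One way to read the tempo-based coupling is as a single mapping from the value under the probe to a pulse rate that drives both the audio clicks and the haptic pulses, so the two channels carry the same comparable cue. The sketch below is a hypothetical mapping; the function names, the linear law, and the BPM range are assumptions, not the paper's parameters.

def value_to_tempo(value, v_min, v_max, bpm_min=40.0, bpm_max=240.0):
    """Linearly map a field value onto a pulse tempo in beats per minute."""
    v = min(max(value, v_min), v_max)            # clamp to the valid value range
    frac = (v - v_min) / (v_max - v_min)
    return bpm_min + frac * (bpm_max - bpm_min)

def pulse_period_s(value, v_min, v_max):
    """Period (seconds) between audio clicks / haptic pulses for this value."""
    return 60.0 / value_to_tempo(value, v_min, v_max)

# Example: a value halfway between the extrema -> 140 BPM, one pulse every ~0.43 s.
print(pulse_period_s(0.5, 0.0, 1.0))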
International Conference on Ultra Modern Telecommunications | 2012
David Poirier-Quinot; P. Duvaut; L. Girardeau; Brian F. G. Katz
This paper introduces a method to optimize the configuration of a 3D helmet-mounted antenna array carried by emergency rescuers who are expected to locate natural disaster survivors. Configuration optimization is based on metrics extracted from the 3D Fisher Information Matrix (FIM) relative to the considered array architecture for single-source Direction Of Arrival (DOA) estimation. Comprehensive simulations illustrate the behaviour of the optimization metrics with respect to the survivor emitter's 3D DOA and the array configuration. Design constraints related to the helmet-mounted antenna requirements reduce the usually exhaustive FIM-based design optimization to a two-degrees-of-freedom exploration of the array's DOA estimation performance.
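As a generic illustration of an FIM-based design metric, and not the paper's exact formulation, the sketch below computes the 2x2 Fisher Information Matrix for single-source (azimuth, elevation) estimation with an arbitrary 3D array. It assumes a narrowband plane wave, a known deterministic signal, and white complex Gaussian noise, in which case FIM = 2*(Es/sigma^2) * Re{D^H D}, with D the Jacobian of the steering vector; the example array geometry, wavelength, and SNR term are arbitrary.

import numpy as np

def steering_vector(positions, az, el, wavelength):
    """Plane-wave steering vector for element positions (N x 3, metres)."""
    u = np.array([np.cos(el) * np.cos(az),     # unit propagation direction
                  np.cos(el) * np.sin(az),
                  np.sin(el)])
    return np.exp(2j * np.pi * positions @ u / wavelength)

def doa_fim(positions, az, el, wavelength, snr_energy=1.0, eps=1e-6):
    """2x2 FIM for (az, el), via numerical differentiation of the steering vector."""
    da = (steering_vector(positions, az + eps, el, wavelength)
          - steering_vector(positions, az - eps, el, wavelength)) / (2 * eps)
    de = (steering_vector(positions, az, el + eps, wavelength)
          - steering_vector(positions, az, el - eps, wavelength)) / (2 * eps)
    D = np.column_stack([da, de])
    return 2.0 * snr_energy * np.real(D.conj().T @ D)

# Example: four elements on a 30 cm ring, source at az = 40 deg, el = 20 deg.
ring = 0.15 * np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], float)
fim = doa_fim(ring, np.radians(40), np.radians(20), wavelength=0.7)
print(np.linalg.det(fim))   # larger determinant -> better joint DOA accuracy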
Seeing and Perceiving | 2012
Tifanie Bouchara; Brian F. G. Katz
This study concerns stimuli-driven perceptual processes involved in target search among concurrent distractors, with a focus on comparing auditory, visual, and audio–visual search tasks. Previous work on unimodal search tasks has highlighted different preattentive features that can enhance target saliency, making it ‘pop out’, e.g., a visually sharp target among blurred distractors. A cue from another modality can also help direct attention towards the target. Our study investigates a new kind of search task in which stimuli consist of audio–visual objects presented through both the audio and visual modalities simultaneously. Redundancy effects are evaluated, first from the combination of the audio and visual modalities, and second from the combination of each unimodal cue in such a bimodal search task. A perceptual experiment was performed in which the task was to identify an audio–visual object from a set of six competing stimuli. We employed static visual blur and developed an auditory blur analogue to cue the search. Results show that both visual and auditory blurs render distractors less prominent and automatically attract attention toward a sharp target. The combination of both unimodal blurs, i.e., audio–visual blur, also proved to be an efficient cue for facilitating the bimodal search task. Results also showed that search tasks were performed faster in redundant bimodal conditions than in unimodal ones. This gain was due to a redundant target effect only, without any redundancy gain from the cue combination: cueing the visual component alone was sufficient, and no improvement was found from adding the redundant audio cue in bimodal search tasks.
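The "auditory blur" analogue can plausibly be realised by low-pass filtering distractor sounds so that the full-bandwidth target stands out, mirroring visual blur. The sketch below is an assumption about such a realisation, not necessarily the stimuli actually used; the cutoff frequency, filter order, and sample rate are arbitrary.

import numpy as np
from scipy.signal import butter, lfilter

def auditory_blur(signal, cutoff_hz, fs=44100, order=4):
    """Low-pass filter a mono signal to make it perceptually 'blurred'."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype='low')
    return lfilter(b, a, signal)

fs = 44100
distractor = np.random.randn(fs)                            # 1 s broadband distractor
blurred = auditory_blur(distractor, cutoff_hz=800, fs=fs)   # dulled competitor sound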
IRBM | 2012
Slim Kammoun; Gaëtan Parseihian; Olivier Gutierrez; Adrien Brilhault; Antonio Serpa; Mathieu Raynal; Bernard Oriola; Marc J.-M. Macé; Malika Auvray; Michel Denis; Simon J. Thorpe; Philippe Truillet; Brian F. G. Katz; Christophe Jouffrais
Archive | 2005
Amandine Afonso; Alan Blum; Christian Jacquemin; Michel Denis; Brian F. G. Katz