Publication


Featured research published by Fakheredine Keyrouz.


International Symposium on Signal Processing and Information Technology | 2006

An Enhanced Binaural 3D Sound Localization Algorithm

Fakheredine Keyrouz; Klaus Diepold

For sound localization methods to be useful in real-time scenarios, their processing power requirements must be low enough to allow real-time processing of the audio inputs. We propose a new binaural sound source localization technique that uses only two microphones placed inside the ear canals of a robot dummy head. The head is equipped with artificial ears and is mounted on a torso. In contrast to existing 3D sound source localization methods that use microphone arrays, our novel method employs two microphones and is based on a simple correlation approach using a generic set of head-related transfer functions (HRTFs). The proposed method is demonstrated through simulation and is further tested in a household environment. This setup proves to be very noise-tolerant and is able to localize sound sources in free space with high precision and low computational complexity.
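
As a rough illustration of such a correlation-based matcher (not the authors' implementation, and with a hypothetical data layout), one could search a generic HRTF set as follows: because the left ear signal convolved with the right-ear HRTF and the right ear signal convolved with the left-ear HRTF coincide only for the correct direction, the direction whose HRTF pair maximizes their normalized correlation is selected.

```python
# Minimal sketch of a correlation-based binaural localizer over a generic
# HRTF set. Assumed (not from the paper): hrtf_set maps a direction
# (azimuth, elevation) to a pair of equal-length head-related impulse
# responses, and both ear signals have the same length.
import numpy as np

def localize(left, right, hrtf_set):
    """Return the direction whose HRTF pair best explains the two ear signals."""
    best_dir, best_score = None, -np.inf
    for direction, (h_left, h_right) in hrtf_set.items():
        a = np.convolve(left, h_right)   # approximately s * h_L * h_R
        b = np.convolve(right, h_left)   # approximately s * h_R * h_L
        score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best_dir, best_score = direction, score
    return best_dir
```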


International Conference on Acoustics, Speech, and Signal Processing | 2006

A New Method for Binaural 3-D Localization Based on HRTFs

Fakheredine Keyrouz; Youssef Naous; Klaus Diepold

A modern technique for robotic sound source detection using a dataset of head-related transfer functions (HRTFs) is presented. To ensure fast detection, the HRTF dataset is reduced using three different techniques, namely diffuse-field equalization, balanced model truncation, and principal component analysis. A new criterion is introduced that must be satisfied by the output signals from the microphones of a dummy robot head. This criterion is then used to find the sound source location on the basis of the reduced HRTF datasets. The suggested method is verified through simulated examples and further tested in a household environment. This novel technique provides estimates of azimuth and elevation angles in free space by using only two microphones, and its algorithm is simpler than the more complicated algorithms used in similar localization processes.
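
Of the three reduction techniques named above, principal component analysis is the most straightforward to outline. The sketch below keeps only the leading principal components of the centered data; the matrix layout (one head-related impulse response per row) is an assumption for illustration, not the paper's data format.

```python
# Sketch of PCA-based reduction of an HRTF dataset. The matrix layout
# (one head-related impulse response per row) is assumed for illustration.
import numpy as np

def reduce_hrtfs(hrir_matrix, n_components=20):
    """Project each HRIR onto its first n_components principal components."""
    mean = hrir_matrix.mean(axis=0)
    centered = hrir_matrix - mean
    # Rows of vt are the principal directions of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]          # shape: (n_components, n_taps)
    weights = centered @ basis.T       # compact per-direction representation
    return weights, basis, mean

def reconstruct(weights, basis, mean):
    """Approximate the original impulse responses from the reduced form."""
    return weights @ basis + mean
```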


Presence: Teleoperators & Virtual Environments | 2007

Binaural Source Localization and Spatial Audio Reproduction for Telepresence Applications

Fakheredine Keyrouz; Klaus Diepold

Telepresence is generally described as the feeling of being immersed in a remote environment, be it virtual or real. A multimodal telepresence environment, equipped with modalities such as vision, audition, and haptics, improves immersion and augments the overall perceptual presence. The present work focuses on acoustic telepresence at both the teleoperator and operator sites. On the teleoperator side, we build a novel binaural sound source localizer using generic head-related transfer functions (HRTFs). This new localizer provides estimates for the direction of a single sound source, given in terms of azimuth and elevation angles in free space, by using only two microphones. It also uses an algorithm that is efficient compared to the currently known algorithms used in similar localization processes. On the operator side, the paper addresses the problem of spatially interpolating HRTFs to obtain a densely sampled set for high-fidelity 3D sound synthesis. In our telepresence application scenario, the synthesized 3D sound is presented to the operator over headphones and is intended to achieve high-fidelity acoustic immersion. Using measured HRTF data, we create interpolated HRTFs between the existing functions using a matrix-valued interpolation function. The comparison with existing interpolation methods reveals that our new method offers superior performance and is capable of achieving high-fidelity reconstructions of HRTFs.
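
The matrix-valued interpolation itself is not reproduced here. As a point of reference, the simplest spatial interpolator one could compare against is a bilinear blend of the four measured HRIRs surrounding the requested direction; the regular measurement grid and variable names in the sketch below are illustrative assumptions.

```python
# Baseline bilinear interpolation of an HRIR at an unmeasured direction,
# for comparison purposes only (the paper's matrix-valued method differs).
# A regular 5-degree measurement grid is an illustrative assumption.
import numpy as np

def bilinear_hrir(az, el, grid, az_step=5.0, el_step=5.0):
    """grid maps (azimuth_deg, elevation_deg) on the measurement grid to HRIRs."""
    i, j = int(np.floor(az / az_step)), int(np.floor(el / el_step))
    az0, el0 = i * az_step, j * el_step
    ta = (az - az0) / az_step            # fractional position in azimuth
    te = (el - el0) / el_step            # fractional position in elevation
    h00 = grid[(az0, el0)]
    h10 = grid[(az0 + az_step, el0)]
    h01 = grid[(az0, el0 + el_step)]
    h11 = grid[(az0 + az_step, el0 + el_step)]
    return ((1 - ta) * (1 - te) * h00 + ta * (1 - te) * h10
            + (1 - ta) * te * h01 + ta * te * h11)
```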


2007 IEEE Symposium on Computational Intelligence in Image and Signal Processing | 2007

Robotic Localization and Separation of Concurrent Sound Sources using Self-Splitting Competitive Learning

Fakheredine Keyrouz; Werner Maier; Klaus Diepold

We combine binaural sound-source localization and separation techniques for an effective deployment in humanoid-like robotic hearing systems. Relying on the concept of binaural hearing, where human auditory 3D percepts are predominantly formed on the basis of the sound-pressure signals at the two eardrums, our robotic 3D localization system uses only two microphones placed inside the ear canals of a robot head equipped with artificial ears and mounted on a torso. The proposed localization algorithm exploits all the binaural cues encapsulated within the so-called head-related transfer functions (HRTFs). Taking advantage of sparse representations of the ear input signals, the 3D positions of two concurrent sound sources are extracted by identifying, with a well-known self-splitting competitive learning clustering algorithm, which HRTFs the sources have been filtered with. Once the locations of the sources are identified, the sources are separated using a generic HRTF dataset. Simulation results demonstrated highly accurate 3D localization of the two concurrent sound sources and a very high signal-to-interference ratio (SIR) for the separated sound signals.
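
The self-splitting competitive learning step is only named above. A much-simplified illustration of the idea, online competitive learning that splits its most distorted prototype until two clusters exist, is sketched below; the feature vectors, learning rate, splitting rule, and thresholds are all illustrative assumptions rather than the algorithm used in the paper.

```python
# Greatly simplified competitive learning with prototype splitting, in the
# spirit of (not identical to) self-splitting competitive learning.
# Learning rate, pass count, and splitting rule are illustrative assumptions.
import numpy as np

def cluster(features, n_clusters=2, lr=0.05, passes=10, seed=0):
    """features: array of shape (n_samples, dim). Returns prototype vectors."""
    rng = np.random.default_rng(seed)
    prototypes = [features[0].astype(float).copy()]
    for _ in range(passes):
        distortion = np.zeros(len(prototypes))
        counts = np.zeros(len(prototypes))
        for x in features:
            d = np.array([np.linalg.norm(x - p) for p in prototypes])
            w = int(np.argmin(d))                    # winning prototype
            prototypes[w] += lr * (x - prototypes[w])
            distortion[w] += d[w]
            counts[w] += 1
        if len(prototypes) < n_clusters:
            # split the prototype with the highest average distortion
            worst = int(np.argmax(distortion / np.maximum(counts, 1)))
            prototypes.append(prototypes[worst]
                              + 1e-3 * rng.standard_normal(features.shape[1]))
    return prototypes
```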


International Conference on Digital Signal Processing | 2006

A Rational HRTF Interpolation Approach for Fast Synthesis of Moving Sound

Fakheredine Keyrouz; Klaus Diepold

For the purpose of a realistic rendering of moving sound with no noticeable discontinuity, binaural synthesis of the time-varying sound field is performed by updating a dense grid of head-related transfer functions (HRTFs). Unless the differences between HRTFs are sufficiently small, switching directly between them causes an audible artifact that is heard as a click. To avoid this problem and ensure the availability of enough HRTFs, we first reduce the HRTFs using PCA and then propose a binaural impulse-response interpolation algorithm based on the solution of the rational minimal state-space interpolation problem. Compared with existing interpolation techniques, this method allowed very precise reconstruction of HRTFs in the horizontal plane and proved to have superior performance for a wide range of azimuths.
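
Independently of the interpolation scheme, the click artifact mentioned above is commonly hidden by crossfading between the signal filtered with the old and the new impulse response over one audio block; the block handling and fade shape in the sketch below are arbitrary assumptions, not values from the paper.

```python
# Crossfading between two HRIR-filtered versions of an audio block to avoid
# the click caused by a hard filter switch. Linear fade is an assumption.
import numpy as np

def crossfade_block(block, hrir_old, hrir_new):
    """Render one block while transitioning from hrir_old to hrir_new."""
    n = len(block)
    fade_out = np.linspace(1.0, 0.0, n)
    fade_in = 1.0 - fade_out
    old = np.convolve(block, hrir_old)[:n]   # truncated to block length
    new = np.convolve(block, hrir_new)[:n]
    return fade_out * old + fade_in * new
```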


Advanced Video and Signal Based Surveillance | 2007

High performance 3D sound localization for surveillance applications

Fakheredine Keyrouz; Klaus Diepold; Shady Keyrouz

One of the key features of the human auditory system is its nearly constant omni-directional sensitivity: the system reacts to alerting signals coming from directions outside the focus of visual attention. In many surveillance situations, visual attention fails completely because the robot's cameras have no direct line of sight to the sound sources, and the ability to estimate the direction of a source of danger from sound alone becomes extremely important. We present in this paper a novel method for sound localization in azimuth and elevation based on a humanoid head. The method was tested in simulations as well as in a real reverberant environment. Compared to state-of-the-art localization techniques, the method is able to localize 3D sound sources with high accuracy even in the presence of reflections and strong distortion.


IEEE-RAS International Conference on Humanoid Robots | 2006

A Novel Humanoid Binaural 3D Sound Localization and Separation Algorithm

Fakheredine Keyrouz; Werner Maier; Klaus Diepold

In this paper, we combine blind sound separation (BSS) and binaural localization for fast tracking of two concurrent sound sources. Using only two microphones placed inside the ear canals of a robot dummy head mounted on a torso and equipped with artificial ears, a generic set of head-related transfer functions (HRTFs) is used for accurate 3D source localization. This novel approach of combining the BSS and system-identification processes and executing them simultaneously is optimized for moving sound sources. Simulation results demonstrated precise localization of both the elevation and azimuth angles of the two concurrent sound sources. The proposed method relies purely on auditive cues and uses a simple algorithm which, compared to current localization algorithms, is easy to implement on robotic platforms and allows accurate as well as fast localization of concurrent sources.


International Workshop on Robot Motion and Control | 2007

Humanoid Binaural Sound Tracking Using Kalman Filtering and HRTFs

Fakheredine Keyrouz; Klaus Diepold; Shady Keyrouz

Audio and visual perception are two crucial elements in the design of mobile humanoids. In unknown and uncontrolled environments, humanoids are expected to navigate safely and to explore their surroundings autonomously. While robotic visual perception, e.g. stereo vision, has evolved significantly during the last few years, robotic audio perception, especially binaural audition, is still in its early stages. However, non-binaural sound source localization methods based on multiple microphone arrays, such as multiple signal classification and the estimation of time delays of arrival (TDOAs) between all microphone pairs, have been explored very actively [1], [2].
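
For readers unfamiliar with the TDOA-based array methods mentioned above, the generalized cross-correlation with phase transform (GCC-PHAT) is the standard estimator of the delay between a microphone pair; the sketch below shows that background technique only and is not the binaural approach pursued in the paper.

```python
# GCC-PHAT time-delay estimation between two microphone signals, shown as
# background for the array-based methods cited in the abstract.
import numpy as np

def gcc_phat_delay(x, y, fs, max_tau=None):
    """Estimate the delay (in seconds) of y relative to x."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12            # phase transform weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```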


International Conference on Signal Processing | 2008

Real time humanoid sound source localization and tracking in a highly reverberant environment

Muhammad Usman; Fakheredine Keyrouz; Klaus Diepold

An algorithm for real-time humanoid sound localization and tracking in a highly reverberant environment using only two microphones is proposed. Several recently developed 3D humanoid sound localization algorithms require the environment to be anechoic. In addition, resolving the front-back ambiguity during sound localization requires knowledge of the reference signals. Using head-related transfer function (HRTF) based sound localization together with extended Kalman filtering, we are able to accurately track moving sound sources in real time in a highly reverberant environment. The algorithm uses only two microphones and requires no prior knowledge of the reference signals.
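
As an illustration of how Kalman filtering can smooth and track the output of an HRTF-based localizer, the sketch below runs a linear constant-velocity filter on a stream of azimuth estimates. The paper uses an extended Kalman filter; the linear single-angle version here, with assumed noise parameters, is only meant to convey the idea.

```python
# Linear Kalman filter tracking a source azimuth fed by per-frame estimates
# from an HRTF-based localizer. Noise levels and the constant-velocity model
# are illustrative assumptions (the paper uses an extended Kalman filter).
import numpy as np

def track_azimuth(measurements, dt=0.1, q=0.5, r=4.0):
    """Smooth noisy azimuth measurements (degrees); returns filtered angles."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [azimuth, angular rate]
    H = np.array([[1.0, 0.0]])                  # only the azimuth is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([measurements[0], 0.0])
    P = np.eye(2) * 10.0
    filtered = []
    for z in measurements:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)     # update with new measurement
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0]))
    return filtered
```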


Intelligent Robots and Systems | 2008

Multi-modal multi-user telepresence and teleaction system

Martin Buss; Angelika Peer; Thomas Schauss; Nikolay Stefanov; Ulrich Unterhinninghofen; Stephan Behrendt; Georg Färber; Jan Leupold; Klaus Diepold; Fakheredine Keyrouz; Michel Sarkis; Peter Hinterseer; Eckehard G. Steinbach; Berthold Färber; Helena Pongrac

The video shows a rich multi-modal multi-user telepresence system, which was developed within the SFB453 funded by the German Research Foundation (www.sfb453.de). As a complex application scenario, the remote repair of a broken pipe is presented in this paper. The system basically consists of two operator-teleoperator pairs. While one of the operators interacts with a stationary human-system interface, the other operator uses a mobile one. Both systems provide visual, auditory, and haptic feedback and enable control of the motion of the head, arms, and grippers as well as the locomotion of the corresponding teleoperator.

Collaboration


Dive into Fakheredine Keyrouz's collaboration.

Top Co-Authors


Shady Keyrouz

University of Notre Dame


Wen Xu

Infineon Technologies


Patrick Dewilde

Delft University of Technology
