Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Kevin W. Wilson is active.

Publication


Featured research published by Kevin W. Wilson.


international conference on acoustics, speech, and signal processing | 2004

Multiple person and speaker activity tracking with a particle filter

Neal Checka; Kevin W. Wilson; Michael R. Siracusa; Trevor Darrell

In this paper, we present a system that combines sound and vision to track multiple people. In a cluttered or noisy scene, multi-person tracking estimates have a distinctly non-Gaussian distribution. We apply a particle filter with audio and video state components, and derive observation likelihood methods based on both audio and video measurements. Our state includes the number of people present, their positions, and whether each person is talking. We show experiments in an environment with sparse microphones and monocular cameras. Our results show that our system can accurately track the locations and speech activity of a varying number of people.
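As a rough illustration of this approach, below is a minimal sketch of a particle filter that reweights position hypotheses with separate audio and video likelihoods. The Gaussian likelihood models, room geometry, and measurement values are hypothetical placeholders; the paper's actual state additionally includes the number of people present and per-person speech activity.

```python
# Minimal sketch of a particle filter fusing audio and video observations.
# Hypothetical likelihood models (isotropic Gaussians); the paper's state also
# includes the number of people and per-person speech activity.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                        # number of particles
particles = rng.uniform(0, 5, size=(N, 2))     # 2-D positions in a 5 m x 5 m room
weights = np.full(N, 1.0 / N)

def video_loglik(particles, video_pos, sigma=0.2):
    # Likelihood of a visual foreground detection at video_pos (metres).
    d2 = np.sum((particles - video_pos) ** 2, axis=1)
    return -d2 / (2 * sigma ** 2)

def audio_loglik(particles, mic_pos, doa, sigma=np.deg2rad(10)):
    # Likelihood of a direction-of-arrival estimate from a microphone pair.
    angles = np.arctan2(particles[:, 1] - mic_pos[1], particles[:, 0] - mic_pos[0])
    err = np.angle(np.exp(1j * (angles - doa)))     # wrapped angular error
    return -err ** 2 / (2 * sigma ** 2)

def step(particles, weights, video_pos, mic_pos, doa):
    # Propagate with a random-walk motion model, then reweight with both modalities.
    particles = particles + rng.normal(0, 0.1, particles.shape)
    logw = np.log(weights) + video_loglik(particles, video_pos) \
                           + audio_loglik(particles, mic_pos, doa)
    weights = np.exp(logw - logw.max())
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

particles, weights = step(particles, weights,
                          video_pos=np.array([2.0, 3.0]),
                          mic_pos=np.array([0.0, 0.0]),
                          doa=np.arctan2(3.0, 2.0))
print("posterior mean:", np.average(particles, axis=0, weights=weights))
```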


international conference on computer vision | 2005

Visual speech recognition with loosely synchronized feature streams

Kate Saenko; Karen Livescu; Michael R. Siracusa; Kevin W. Wilson; James R. Glass; Trevor Darrell

We present an approach to detecting and recognizing spoken isolated phrases based solely on visual input. We adopt an architecture that first employs discriminative detection of visual speech and articulatory features, and then performs recognition using a model that accounts for the loose synchronization of the feature streams. Discriminative classifiers detect the subclass of lip appearance corresponding to the presence of speech, and further decompose it into features corresponding to the physical components of articulatory production. These components often evolve in a semi-independent fashion, and conventional viseme-based approaches to recognition fail to capture the resulting co-articulation effects. We present a novel dynamic Bayesian network with a multi-stream structure and observations consisting of articulatory feature classifier scores, which can model varying degrees of co-articulation in a principled way. We evaluate our visual-only recognition system on a command utterance task. We show comparative results on lip detection and speech/non-speech classification, as well as recognition performance against several baseline systems.
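As an illustration of the first, discriminative stage only, here is a minimal sketch of a frame-level speech/non-speech classifier over lip-appearance features. The features, data, and classifier choice (an off-the-shelf SVM) are placeholders; the paper's system further decomposes speech frames into articulatory feature streams decoded by a multi-stream dynamic Bayesian network.

```python
# Sketch of a discriminative speech/non-speech detection stage, assuming
# precomputed lip-region appearance features (placeholders here).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Placeholder features: rows are video frames, columns are appearance coefficients.
X_train = rng.normal(size=(200, 32))
y_train = rng.integers(0, 2, size=200)        # 1 = speaking, 0 = not speaking

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

X_test = rng.normal(size=(10, 32))
speech_scores = clf.predict_proba(X_test)[:, 1]   # per-frame speech evidence
print(speech_scores)
```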


international conference on acoustics, speech, and signal processing | 2002

Audio-video array source localization for intelligent environments

Kevin W. Wilson; Trevor Darrell

Steerable microphone arrays provide a flexible infrastructure for audio source separation. In order for them to be used effectively in intelligent environments, there must be a mechanism in place for steering the focus of the array to the sound source. Audio-only steering techniques often perform poorly in the presence of multiple sound sources or strong reverberation. Video-only techniques can achieve high spatial precision but require that the audio and video subsystems be accurately calibrated to preserve this precision. We present an audio-video localization technique that combines the benefits of the two modalities. We implement our technique in a test environment containing multiple stereo cameras and a room-sized microphone array. Our technique achieves an 8.9 dB improvement over a single far-field microphone, a 6.7 dB improvement over source separation based on video-only localization, and a 0.3 dB improvement over separation based on audio-only localization.
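To illustrate how a localization estimate steers the array, here is a minimal delay-and-sum beamformer sketch, not the authors' implementation. The microphone geometry, sample rate, and signals are placeholders, and fractional delays are rounded to whole samples for simplicity.

```python
# Minimal delay-and-sum beamformer steered to a known source position.
import numpy as np

fs = 16000          # sample rate (Hz)
c = 343.0           # speed of sound (m/s)

def delay_and_sum(signals, mic_positions, source_pos, fs=fs, c=c):
    """signals: (num_mics, num_samples); mic_positions, source_pos in metres."""
    dists = np.linalg.norm(mic_positions - source_pos, axis=1)
    delays = (dists - dists.min()) / c                 # relative propagation delays
    shifts = np.round(delays * fs).astype(int)
    out = np.zeros(signals.shape[1])
    for sig, shift in zip(signals, shifts):
        # Advance each channel so wavefronts from source_pos line up, then average.
        out[:signals.shape[1] - shift] += sig[shift:]
    return out / signals.shape[0]

# Example with a 4-microphone linear array and white-noise placeholder signals.
mics = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0]])
source = np.array([2.0, 1.0, 0.0])
rng = np.random.default_rng(2)
x = rng.normal(size=(4, fs))
y = delay_and_sum(x, mics, source)
```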


computer vision and pattern recognition | 2003

A Probabilistic Framework for Multi-modal Multi-Person Tracking

Neal Checka; Kevin W. Wilson; Vibhav Rangarajan; Trevor Darrell

In this paper, we present a probabilistic tracking framework that combines sound and vision to achieve more robust and accurate tracking of multiple objects. In a cluttered or noisy scene, our measurements have a non-Gaussian, multi-modal distribution. We apply a particle filter to track multiple people using combined audio and video observations. We have applied our algorithm to the domain of tracking people with a stereo-based visual foreground detection algorithm and audio localization using a beamforming technique. Our model also accurately reflects the number of people present. We test the efficacy of our system on a sequence of multiple people moving and speaking in an indoor environment.


IEEE Transactions on Audio, Speech, and Language Processing | 2006

Learning a Precedence Effect-Like Weighting Function for the Generalized Cross-Correlation Framework

Kevin W. Wilson; Trevor Darrell

Speech source localization in reverberant environments has proved difficult for automated microphone array systems. Because of the nonstationary nature of speech, certain features observable in the reverberant signal, such as sudden increases in audio energy, provide cues to indicate time-frequency regions that are particularly useful for audio localization. We exploit these cues by learning a mapping from reverberated signal spectrograms to localization precision using ridge regression. Using the learned mappings in the generalized cross-correlation framework, we demonstrate improved localization performance. Additionally, the resulting mappings exhibit behavior consistent with the well-known precedence effect from psychoacoustic studies.
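A minimal sketch of the generalized cross-correlation step is shown below, assuming a two-channel recording and using PHAT weighting as the baseline; the learned, precedence-effect-like weighting described in the paper would replace the placeholder weights with values predicted from the spectrogram.

```python
# Sketch of generalized cross-correlation (GCC) with a per-frequency weighting.
import numpy as np

def gcc_tdoa(x1, x2, fs, weights=None):
    """Estimate the time difference of arrival (seconds) between two channels."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    if weights is None:                       # PHAT: whiten the cross-spectrum
        weights = 1.0 / np.maximum(np.abs(cross), 1e-12)
    cc = np.fft.irfft(cross * weights, n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2]))   # centre zero lag
    lag = np.argmax(cc) - n // 2
    return lag / fs

# A learned weighting would emphasise time-frequency regions (e.g. energy
# onsets) predicted to localize reliably, instead of whitening uniformly.
fs = 16000
rng = np.random.default_rng(3)
s = rng.normal(size=fs)
delay = 12                                    # true offset in samples
x1, x2 = s[delay:], s[:-delay]
print(gcc_tdoa(x1, x2, fs))                   # approximately -delay / fs (x1 leads x2)
```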


international conference on multimodal interfaces | 2003

A multi-modal approach for determining speaker location and focus

Michael R. Siracusa; Louis-Philippe Morency; Kevin W. Wilson; John W. Fisher; Trevor Darrell

This paper presents a multi-modal approach to locating a speaker in a scene and determining to whom he or she is speaking. We present a simple probabilistic framework that combines multiple cues derived from both audio and video information. A purely visual cue is obtained using a head tracker to identify possible speakers in a scene and provide both their 3-D positions and orientations. In addition, estimates of the audio signal's direction of arrival are obtained with the help of a two-element microphone array. A third cue measures the association between the audio and the tracked regions in the video. Integrating these cues provides a more robust solution than using any single cue alone. The usefulness of our approach is shown in our results for video sequences with two or more people in a prototype interactive kiosk environment.
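As a simplified illustration of the cue combination, the sketch below fuses the head tracker's candidate positions with an audio bearing estimate to form a posterior over who is speaking. The noise model, geometry, and numbers are assumptions; the paper's framework additionally uses head orientation and an audio-visual association cue.

```python
# Minimal sketch of fusing a visual head-tracker cue with an audio
# direction-of-arrival cue to pick the active speaker.
import numpy as np

def speaker_posterior(head_positions, mic_center, measured_bearing, sigma=np.deg2rad(8)):
    """head_positions: (num_people, 2) metres, from the visual tracker.
    measured_bearing: source bearing (radians) estimated by the microphone pair."""
    bearings = np.arctan2(head_positions[:, 1] - mic_center[1],
                          head_positions[:, 0] - mic_center[0])
    err = np.angle(np.exp(1j * (bearings - measured_bearing)))   # wrapped error
    loglik = -err ** 2 / (2 * sigma ** 2)
    post = np.exp(loglik - loglik.max())
    return post / post.sum()

heads = np.array([[1.0, 2.0], [-1.5, 2.0]])      # two tracked people
posterior = speaker_posterior(heads, mic_center=np.array([0.0, 0.0]),
                              measured_bearing=np.deg2rad(63))
print(posterior)    # probability that each tracked person is the speaker
```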


workshop on perceptive user interfaces | 2001

Audio-video array source separation for perceptual user interfaces

Kevin W. Wilson; Neal Checka; David Demirdjian; Trevor Darrell

Steerable microphone arrays provide a flexible infrastructure for audio source separation. In order for them to be used effectively in perceptual user interfaces, there must be a mechanism in place for steering the focus of the array to the sound source. Audio-only steering techniques often perform poorly in the presence of multiple sound sources or strong reverberation. Video-only techniques can achieve high spatial precision but require that the audio and video subsystems be accurately calibrated to preserve this precision. We present an audio-video localization technique that combines the benefits of the two modalities. We implement our technique in a test environment containing multiple stereo cameras and a room-sized microphone array. Our technique achieves an 8.9 dB improvement over a single far-field microphone and a 6.7 dB improvement over source separation based on video-only localization.


international conference on acoustics, speech, and signal processing | 2005

Improving audio source localization by learning the precedence effect

Kevin W. Wilson; Trevor Darrell

Audio source localization in reverberant environments is difficult for automated microphone array systems. Certain features observable in the audio signal, such as sudden increases in audio energy, provide cues to indicate time-frequency regions that are particularly useful for audio localization, but previous approaches have not systematically exploited these cues. We learn a mapping from reverberated signal spectrograms to localization precision using ridge regression. The resulting mappings exhibit behavior consistent with the well-known precedence effect from psychoacoustic studies. Using the learned mappings, we demonstrate improved localization performance.
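The learning step can be sketched as ordinary ridge regression with the closed-form solution w = (X'X + lambda*I)^-1 X'y. The features and precision targets below are synthetic placeholders; the paper learns the mapping from reverberated signal spectrograms.

```python
# Sketch of the ridge-regression step: map local spectrogram features to a
# predicted localization precision for each time-frequency cell.
import numpy as np

rng = np.random.default_rng(4)
num_cells, num_feats = 1000, 9        # e.g. a 3x3 spectrogram patch per cell
X = rng.normal(size=(num_cells, num_feats))
true_w = rng.normal(size=num_feats)
y = X @ true_w + 0.1 * rng.normal(size=num_cells)   # stand-in precision targets

lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(num_feats), X.T @ y)

# At test time the predicted precision can act as the GCC weighting per cell.
predicted_precision = X[:5] @ w
print(predicted_precision)
```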


international conference on multimodal interfaces | 2002

Audiovisual arrays for untethered spoken interfaces

Kevin W. Wilson; Vibhav Rangarajan; Neal Checka; Trevor Darrell

When faced with a distant speaker at a known location in a noisy environment, a microphone array can provide a significantly improved audio signal for speech recognition. Estimating the location of a speaker in a reverberant environment from audio information alone can be quite difficult, so we use an array of video cameras to aid localization. Stereo processing techniques are used on pairs of cameras, and foreground 3-D points are grouped to estimate the trajectory of people as they move in an environment. These trajectories are used to guide a microphone array beamformer. Initial results using this system for speech recognition demonstrate increased recognition rates compared to non-array processing techniques.
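As a rough sketch of the grouping step, the snippet below clusters stereo foreground 3-D points into person position estimates that could steer the beamformer. The points are synthetic and the simple k-means grouping is a stand-in, not the paper's trajectory estimator.

```python
# Sketch of grouping foreground 3-D points from stereo processing into person
# position estimates for steering a microphone array beamformer.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(5)
# Placeholder foreground points around two people standing near (1,1) and (3,2).
points = np.vstack([rng.normal([1.0, 1.0, 1.0], 0.15, size=(300, 3)),
                    rng.normal([3.0, 2.0, 1.0], 0.15, size=(300, 3))])

centroids, labels = kmeans2(points, 2, minit="points")
print(centroids)    # estimated person locations, passed to the beamformer each frame
```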


international conference on multimodal interfaces | 2004

Real-time audio-visual tracking for meeting analysis

David Demirdjian; Kevin W. Wilson; Michael R. Siracusa; Trevor Darrell

We demonstrate an audio-visual tracking system for meeting analysis. A stereo camera and a microphone array are used to track multiple people and their speech activity in real-time. Our system can estimate the location of multiple people, detect the current speaker and build a model of interaction between people in a meeting.

Collaboration


Dive into Kevin W. Wilson's collaboration.

Top Co-Authors

Trevor Darrell, University of California

Michael R. Siracusa, Massachusetts Institute of Technology

Neal Checka, Massachusetts Institute of Technology

David Demirdjian, Massachusetts Institute of Technology

John W. Fisher, Massachusetts Institute of Technology

Michael H. Coen, Massachusetts Institute of Technology

Vibhav Rangarajan, Massachusetts Institute of Technology

Howard E. Shrobe, Massachusetts Institute of Technology

James R. Glass, Massachusetts Institute of Technology

Karen Livescu, Toyota Technological Institute at Chicago