
Publication


Featured research published by Françoise Beaufays.


IEEE Transactions on Signal Processing | 1995

Transform-domain adaptive filters: an analytical approach

Françoise Beaufays

Transform-domain adaptive filters refer to LMS filters whose inputs are preprocessed with a unitary, data-independent transformation followed by a power-normalization stage. The transformation is typically chosen to be the discrete Fourier transform (DFT), although other transformations, such as the discrete cosine transform (DCT), the discrete Hartley transform (DHT), or the Walsh-Hadamard transform, have also been proposed in the literature. The resulting algorithms are generally called DFT-LMS, DCT-LMS, etc. This preprocessing improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter and, as a consequence, improves its convergence speed. In this paper, we start with a brief intuitive explanation of transform-domain algorithms. We then analyze the effects of the preprocessing performed in DFT-LMS and DCT-LMS for first-order Markov inputs. In particular, we show that for Markov-1 inputs with correlation parameter ρ ∈ [0,1], the eigenvalue spread after DFT and power normalization tends to (1+ρ)/(1−ρ) as the size of the filter gets large, whereas after DCT and power normalization it reduces to (1+ρ). For comparison, the eigenvalue spread before transformation is asymptotically equal to (1+ρ)²/(1−ρ)². The analytical method used in the paper provides additional insight into how the algorithms work and is expected to extend to other input signal classes and other transformations.
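The three stages described above are easy to sketch. The following minimal numpy implementation of the DFT-LMS idea is my own illustration, not the paper's exact formulation: the function name, smoothing constant, and step size are all assumptions.

```python
import numpy as np

def dft_lms(x, d, n_taps=16, mu=0.05, beta=0.99, eps=1e-8):
    """Sketch of DFT-LMS: unitary DFT, per-bin power normalization, LMS update.

    x : input signal (1-D array);  d : desired signal (same length).
    Returns the error signal e[n].
    """
    F = np.fft.fft(np.eye(n_taps)) / np.sqrt(n_taps)  # unitary DFT matrix
    w = np.zeros(n_taps, dtype=complex)               # adaptive weights
    p = np.full(n_taps, eps)                          # running per-bin power estimates
    buf = np.zeros(n_taps)                            # tap-delay line
    e = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        u = F @ buf                                   # transform-domain input
        p = beta * p + (1 - beta) * np.abs(u) ** 2    # power normalization stage
        y = np.real(np.conj(w) @ u)                   # filter output
        e[n] = d[n] - y
        w += mu * e[n] * u / (p + eps)                # normalized LMS update
    return e
```

Replacing F with a DCT matrix gives DCT-LMS; dropping the transform and the normalization recovers plain LMS, which makes the convergence-speed effect of the preprocessing easy to compare empirically.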


Neural Networks | 1994

Application of neural networks to load-frequency control in power systems

Françoise Beaufays; Youssef Lotfy Abdel-Magid; Bernard Widrow

This paper describes an application of layered neural networks to nonlinear power-system control. A single generator unit feeds a power line to various users whose power demand can vary over time. As a consequence of load variations, the frequency of the generator changes over time. A feedforward neural network is trained to control the steam-admission valve of the turbine that drives the generator, thereby restoring the frequency to its nominal value. Frequency transients are minimized and zero steady-state error is obtained. The same technique is then applied to control a system composed of two such units tied together through a power line, where load variations can occur independently in each unit. Both neural controllers are trained with the backpropagation-through-time algorithm. Use of a neural network to model the dynamic system is avoided by introducing the Jacobian matrices of the system into the backpropagation chain used in controller training.
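The key trick, splicing the plant's analytic Jacobians into the backpropagation chain instead of modeling the plant with a second network, can be sketched on a toy problem. Everything below (the scalar plant f, its Jacobians, the tiny controller) is a hypothetical stand-in, not the power-system model from the paper.

```python
import numpy as np

# Toy scalar plant x' = f(x, u) with known analytic Jacobians (hypothetical
# stand-in for the generator/turbine dynamics in the paper).
def f(x, u):    return 0.9 * x + 0.2 * np.tanh(u)
def dfdx(x, u): return 0.9
def dfdu(x, u): return 0.2 * (1.0 - np.tanh(u) ** 2)

rng = np.random.default_rng(0)
w1 = rng.normal(size=4)  # controller input layer
w2 = rng.normal(size=4)  # controller output layer: u = w2 . tanh(w1 * x)

def rollout_and_grads(x0, T=20, x_ref=0.0):
    """Forward rollout, then backpropagation-through-time.

    The plant is never modeled by a network: its analytic Jacobians
    dfdx and dfdu are inserted directly into the backward chain.
    """
    xs, hs, us = [x0], [], []
    for _ in range(T):
        h = np.tanh(w1 * xs[-1])
        u = w2 @ h
        hs.append(h); us.append(u)
        xs.append(f(xs[-1], u))
    g1, g2, lam = np.zeros(4), np.zeros(4), 0.0
    for k in reversed(range(T)):
        lam += 2.0 * (xs[k + 1] - x_ref)          # cost gradient at state x_{k+1}
        du = lam * dfdu(xs[k], us[k])             # through the plant, wrt control
        g2 += du * hs[k]
        dh = du * w2 * (1.0 - hs[k] ** 2)         # through the controller
        g1 += dh * xs[k]
        lam = lam * dfdx(xs[k], us[k]) + dh @ w1  # pass dJ/dx back one step
    return g1, g2
```

A training loop would repeatedly call rollout_and_grads from varying initial states and step w1 and w2 down the returned gradients.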


Archive | 2010

“Your Word is my Command”: Google Search by Voice: A Case Study

Johan Schalkwyk; Doug Beeferman; Françoise Beaufays; Bill Byrne; Ciprian Chelba; Mike Cohen; Maryam Kamvar; Brian Strope

An important goal at Google is to make spoken access ubiquitously available. Achieving ubiquity requires two things: availability (i.e., built into every possible interaction where speech input or output can make sense) and performance (i.e., works so well that the modality adds no friction to the interaction).


Neural Computation | 1996

Diagrammatic derivation of gradient algorithms for neural networks

Eric A. Wan; Françoise Beaufays

Deriving gradient algorithms for time-dependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to derive such algorithms via a set of simple block diagram manipulation rules. The approach provides a common framework to derive popular algorithms including backpropagation and backpropagation-through-time without a single chain rule expansion. Additional examples are provided for a variety of complicated architectures to illustrate both the generality and the simplicity of the approach.


Neural Computation | 1994

Relating real-time backpropagation and backpropagation-through-time: an application of flow graph interreciprocity

Françoise Beaufays; Eric A. Wan

We show that signal flow graph theory provides a simple way to relate two popular algorithms used for adapting dynamic neural networks, real-time backpropagation and backpropagation-through-time. Starting with the flow graph for real-time backpropagation, we use a simple transposition to produce a second graph. The new graph is shown to be interreciprocal with the original and to correspond to the backpropagation-through-time algorithm. Interreciprocity provides a theoretical argument to verify that both flow graphs implement the same overall weight update.
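The equivalence is easy to check numerically on a small example. The sketch below (a scalar recurrent unit with hypothetical data, not the networks treated in the paper) computes the same weight gradient both ways: forward in time with sensitivities, as in real-time backpropagation, and backward over the transposed graph, as in backpropagation-through-time.

```python
import numpy as np

# Tiny recurrent system: s[k+1] = tanh(w*s[k] + v*x[k]), loss = sum (s[k+1]-d[k])^2.
rng = np.random.default_rng(1)
x, d = rng.normal(size=10), rng.normal(size=10)
w, v = 0.5, 0.8

s = np.zeros(11)
for k in range(10):
    s[k + 1] = np.tanh(w * s[k] + v * x[k])

# Real-time backpropagation: carry the sensitivity ds/dw forward in time.
grad_rtrl, sens = 0.0, 0.0
for k in range(10):
    sens = (1 - s[k + 1] ** 2) * (s[k] + w * sens)   # d s[k+1] / d w
    grad_rtrl += 2 * (s[k + 1] - d[k]) * sens

# Backpropagation-through-time: run the transposed flow graph backward.
grad_bptt, lam = 0.0, 0.0
for k in reversed(range(10)):
    lam += 2 * (s[k + 1] - d[k])            # inject the error at time k+1
    dpre = lam * (1 - s[k + 1] ** 2)        # back through the tanh
    grad_bptt += dpre * s[k]
    lam = dpre * w                          # back through the recurrence

print(grad_rtrl, grad_bptt)  # agree to rounding error
```

The two loops traverse the same graph in opposite directions, which is exactly the transposition relation the paper formalizes.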


IEEE Transactions on Circuits and Systems I: Regular Papers | 1995

On the advantages of the LMS spectrum analyzer over nonadaptive implementations of the sliding-DFT

Françoise Beaufays; Bernard Widrow

Based on the least mean squares (LMS) algorithm, the LMS spectrum analyzer can be used to recursively calculate the discrete Fourier transform (DFT) of a sliding window of data. In this paper, we compare the LMS spectrum analyzer with the straightforward nonadaptive implementation of the recursive DFT. In particular, we demonstrate the robustness of the LMS spectrum analyzer to the propagation of roundoff errors, a property that is not shared by other recursive DFT algorithms.
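A small sketch makes the comparison concrete. Assuming the usual complex-LMS formulation with complex-exponential reference inputs and a step size of 1/N (my choice of normalization, which makes the weights track a scaled sliding DFT; the paper's exact constants may differ), the nonadaptive recursion has its pole on the unit circle and so never damps accumulated roundoff, while the LMS analyzer rewrites its weights from the data at every step.

```python
import numpy as np

N = 8                                   # analysis window / DFT size
rng = np.random.default_rng(2)
x = rng.normal(size=4096)

k = np.arange(N)
w = np.fft.fft(x[:N]) / N               # LMS weights, initialized to DFT/N of first window
X = np.fft.fft(x[:N])                   # nonadaptive recursive sliding DFT
for n in range(N, len(x)):
    u = np.exp(2j * np.pi * k * n / N)  # complex-exponential reference inputs
    e = x[n] - w @ u                    # LMS error; x[n] - x[n-N] in exact arithmetic
    w = w + e * np.conj(u) / N          # step size 1/N keeps w an exact sliding DFT/N
    # Nonadaptive update: pole on the unit circle, so roundoff is never damped.
    X = np.exp(2j * np.pi * k / N) * (X + x[n] - x[n - N])

exact = np.abs(np.fft.fft(x[len(x) - N:]))
print(np.max(np.abs(np.abs(N * w) - exact)))   # LMS analyzer vs direct DFT
print(np.max(np.abs(np.abs(X) - exact)))       # recursive sliding DFT vs direct DFT
```

In double precision over a few thousand samples both deviations are tiny; the robustness difference the paper analyzes emerges as the window slides over many more samples or in lower-precision arithmetic.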


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 1997

Model transformation for robust speaker recognition from telephone data

Françoise Beaufays; Mitch Weintraub

In the context of automatic speaker recognition, we propose a model transformation technique that renders speaker models more robust to acoustic mismatches and to data scarcity by appropriately increasing their variances. We use a stereo database containing speech recorded simultaneously under different acoustic conditions to derive a synthetic variance distribution. This distribution is then used to modify the variances of speaker models trained on other telephone databases. The technique is illustrated with experiments conducted on a locally collected database and on the NIST 1995 and 1996 subsets of the Switchboard corpus.
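In spirit, the transformation amounts to inflating model variances by factors derived from the stereo data. A heavily simplified sketch follows; the per-dimension ratio used here is a hypothetical stand-in for the paper's synthetic variance distribution.

```python
import numpy as np

def inflate_variances(model_vars, matched_vars, mismatched_vars):
    """Hypothetical sketch of the variance-transformation idea.

    matched_vars / mismatched_vars: per-dimension variances estimated from
    the stereo database under the two recording conditions. Their ratio is
    applied as an inflation factor to another speaker model's variances.
    """
    factor = mismatched_vars / matched_vars   # synthetic variance ratio
    return model_vars * factor                # broadcasts over mixture components

# Example: a hypothetical 32-component, 13-dimensional speaker GMM.
gmm_vars = np.ones((32, 13))
new_vars = inflate_variances(gmm_vars, np.full(13, 1.0), np.full(13, 1.5))
```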


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2009

Revisiting graphemes with increasing amounts of data

Yun-hsuan Sung; Thad Hughes; Françoise Beaufays; Brian Strope

Letter units, or graphemes, have been reported in the literature as a surprisingly effective substitute for the more traditional phoneme units, at least in languages that enjoy a strong correspondence between pronunciation and orthography. For English, however, where letter symbols have less acoustic consistency, previously reported results fell short of systems using highly tuned pronunciation lexicons. Grapheme units simplify system design, but since graphemes map to a wider set of acoustic realizations than phonemes, we should expect grapheme-based acoustic models to require more training data to capture these variations. In this paper, we compare the rate of improvement of grapheme and phoneme systems trained with datasets ranging from 450 to 1200 hours of speech. We consider various grapheme unit configurations, including letter-specific onset and coda units. We show that the grapheme systems improve faster and, depending on the lexicon, reach or surpass the phoneme baselines with the largest training set.
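Part of the appeal is that a grapheme lexicon is generated mechanically from spelling, unlike a hand-tuned phoneme lexicon. A toy sketch of the kind of unit inventories being compared; the actual tagging scheme in the paper may differ.

```python
def grapheme_pronunciation(word, mark_positions=True):
    """Spell a word as grapheme units, optionally tagging onset/coda letters."""
    letters = list(word.lower())
    if not mark_positions or len(letters) == 1:
        return letters
    return [letters[0] + "_onset"] + letters[1:-1] + [letters[-1] + "_coda"]

print(grapheme_pronunciation("speech"))
# ['s_onset', 'p', 'e', 'e', 'c', 'h_coda']
```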


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 1999

Discriminative mixture weight estimation for large Gaussian mixture models

Françoise Beaufays; Mitchel Weintraub; Yochai Konig

This paper describes a new approach to acoustic modeling for large vocabulary continuous speech recognition (LVCSR) systems. Each phone is modeled with a large Gaussian mixture model (GMM) whose context-dependent mixture weights are estimated with a sentence-level discriminative training criterion. The estimation problem is cast in a neural network framework, which enables the incorporation of the appropriate constraints on the mixture-weight vectors and allows a straightforward training procedure based on steepest descent. Experiments conducted on the CallHome English and Switchboard databases show a significant improvement in acoustic-model performance, and a somewhat smaller improvement with the combined acoustic and language models.
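The constraint that mixture weights stay positive and sum to one is handled naturally by a softmax parameterization, which is one way to read the paper's neural-network framing. The sketch below uses a simplified per-frame MMI-style criterion as a stand-in for the sentence-level criterion; all data shapes and values are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical setup: lik[g, t] holds Gaussian likelihoods of mixture
# component g on frame t, for one phone's GMM.
rng = np.random.default_rng(3)
lik_correct = rng.random((8, 40))    # frames aligned to the correct phone
lik_competing = rng.random((8, 40))  # frames from a competing hypothesis
z = np.zeros(8)                      # softmax logits parameterize the weights

for _ in range(100):
    w = softmax(z)                   # valid mixture weights by construction
    p_c = w @ lik_correct            # per-frame mixture likelihoods
    p_x = w @ lik_competing
    # Gradient of sum(log p_c - log p_x) wrt w, chained through the softmax.
    g_w = lik_correct @ (1 / p_c) - lik_competing @ (1 / p_x)
    g_z = w * (g_w - w @ g_w)        # softmax Jacobian
    z += 0.01 * g_z                  # steepest ascent on the criterion
```

The softmax keeps the constraints implicit, so plain steepest descent/ascent can be used without any projection step.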


International Symposium on Neural Networks | 1993

Learning algorithms for adaptive processing and control

Bernard Widrow; Michael A. Lehr; Françoise Beaufays; Eric A. Wan; M. Bilello

Linear and nonlinear adaptive filtering algorithms are described, along with applications to signal processing and control problems. Specific topics addressed include adaptive least mean square (LMS) filtering, adaptive filtering with discrete cosine transform LMS (DCT/LMS), adaptive noise cancelling, fetal electrocardiography, adaptive echo cancelling, inverse plant modeling, adaptive inverse control, adaptive equalization, adaptive linear prediction, and nonlinear filtering and prediction.
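Most of the applications listed share the same core recursion. A minimal LMS noise-cancelling loop illustrates it; the two-sensor setup (primary = signal plus filtered noise, reference = noise-only pickup) is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))
noise = rng.normal(size=n)
reference = noise                                        # noise-only reference sensor
primary = signal + np.convolve(noise, [0.8, -0.4], mode="same")

taps, mu = 8, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    u = reference[i - taps:i][::-1]                      # tap-delay line
    e = primary[i] - w @ u                               # error = cleaned-signal estimate
    w += 2 * mu * e * u                                  # LMS weight update
    out[i] = e

print(np.mean((out[taps:] - signal[taps:]) ** 2))        # residual error after adaptation
```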
