Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jake V. Bouvrie is active.

Publication


Featured research published by Jake V. Bouvrie.


Foundations of Computational Mathematics | 2010

Mathematics of the Neural Response

Steve Smale; Lorenzo Rosasco; Jake V. Bouvrie; Andrea Caponnetto; Tomaso Poggio

We propose a natural image representation, the neural response, motivated by the neuroscience of the visual cortex. The inner product defined by the neural response leads to a similarity measure between functions which we call the derived kernel. Based on a hierarchical architecture, we give a recursive definition of the neural response and associated derived kernel. The derived kernel can be used in a variety of application domains such as classification of images, strings of text and genomics data.
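The recursive construction can be illustrated in a simplified one-layer form. The sketch below is an assumption-laden toy version, not the paper's full hierarchy: signals are 1D arrays, the base kernel is a normalized inner product on patches, the neural response max-pools that kernel over translations against a fixed template set, and the derived kernel is the normalized inner product of neural responses. All sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_kernel(u, v):
    # Normalized inner product between two patches.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def neural_response(signal, templates, patch_len):
    # For each template, take the max base-kernel similarity over all
    # patch positions (max-pooling over "translations").
    resp = []
    for t in templates:
        sims = [base_kernel(signal[i:i + patch_len], t)
                for i in range(len(signal) - patch_len + 1)]
        resp.append(max(sims))
    return np.array(resp)

def derived_kernel(x, y, templates, patch_len):
    # Derived kernel: normalized inner product of the two neural responses.
    nx = neural_response(x, templates, patch_len)
    ny = neural_response(y, templates, patch_len)
    return base_kernel(nx, ny)

patch_len = 4
templates = [rng.standard_normal(patch_len) for _ in range(8)]
x, y = rng.standard_normal(32), rng.standard_normal(32)
k = derived_kernel(x, y, templates, patch_len)  # similarity in [-1, 1]
```

Stacking this construction, with the derived kernel of one layer serving as the base kernel of the next, gives the hierarchical architecture the abstract refers to.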


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2008

Localized spectro-temporal cepstral analysis of speech

Jake V. Bouvrie; Tony Ezzat; Tomaso Poggio

Drawing on recent progress in auditory neuroscience, we present a novel speech feature analysis technique based on localized spectro-temporal cepstral analysis of speech. We proceed by extracting localized 2D patches from the spectrogram and projecting onto a 2D discrete cosine transform (2D-DCT) basis. For each time frame, a speech feature vector is then formed by concatenating low-order 2D-DCT coefficients from the set of corresponding patches. We argue that our framework has significant advantages over standard one-dimensional MFCC features. In particular, we find that our features are more robust to noise, and better capture temporal modulations important for recognizing plosive sounds. We evaluate the performance of the proposed features on a TIMIT classification task in clean, pink, and babble noise conditions, and show that our feature analysis outperforms traditional features based on MFCCs.
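The patch-and-DCT step can be sketched as follows. This is a simplified whole-spectrogram version (the paper concatenates coefficients per time frame); patch sizes, the number of retained coefficients, and the synthetic input are all assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn

def patch_dct_features(spectrogram, patch_h=8, patch_w=8, n_coeffs=3):
    """Slide non-overlapping 2D patches over a spectrogram, apply a 2D DCT
    to each, and keep only the low-order (low-frequency) coefficients."""
    F, T = spectrogram.shape
    features = []
    for f0 in range(0, F - patch_h + 1, patch_h):
        for t0 in range(0, T - patch_w + 1, patch_w):
            patch = spectrogram[f0:f0 + patch_h, t0:t0 + patch_w]
            coeffs = dctn(patch, norm="ortho")
            # Low-order coefficients: the top-left n_coeffs x n_coeffs block.
            features.append(coeffs[:n_coeffs, :n_coeffs].ravel())
    return np.concatenate(features)

spec = np.abs(np.random.default_rng(1).standard_normal((32, 64)))
fv = patch_dct_features(spec)  # 32 patches x 9 coefficients = 288 features
```

Keeping only low-order 2D-DCT coefficients is what makes the representation a localized, two-dimensional analogue of cepstral smoothing.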


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2007

Noise Robust Phonetic Classification with Linear Regularized Least Squares and Second-Order Features

Ryan Rifkin; Ken Schutte; Michelle Saad; Jake V. Bouvrie; James R. Glass

We perform phonetic classification with an architecture whose elements are binary classifiers trained via linear regularized least squares (RLS). RLS is a simple yet powerful regularization algorithm with the desirable property that a good value of the regularization parameter can be found efficiently by minimizing leave-one-out error on the training set. Our system achieves state-of-the-art single-classifier performance on the TIMIT phonetic classification task, (slightly) beating other recent systems. We also show that in the presence of additive noise, our model is much more robust than a well-trained Gaussian mixture model.
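The efficient leave-one-out property mentioned above follows from a standard closed-form identity for ridge/RLS regression: the LOO residual for point i is the training residual divided by 1 - H_ii, where H is the smoother ("hat") matrix. A minimal sketch (data, grid, and function names are illustrative, not from the paper):

```python
import numpy as np

def rls_loo_errors(X, y, lam):
    """Linear RLS (ridge) fit plus closed-form leave-one-out residuals:
    loo_residual_i = (y_i - yhat_i) / (1 - H_ii)."""
    n, d = X.shape
    A = X.T @ X + lam * np.eye(d)
    w = np.linalg.solve(A, X.T @ y)
    H = X @ np.linalg.solve(A, X.T)            # smoother (hat) matrix
    residuals = (y - X @ w) / (1.0 - np.diag(H))
    return w, residuals

def pick_lambda(X, y, grid):
    # One fit per grid point suffices: no explicit n-fold refitting needed.
    return min(grid, key=lambda lam: np.mean(rls_loo_errors(X, y, lam)[1] ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(100)
lam = pick_lambda(X, y, [1e-3, 1e-2, 1e-1, 1.0, 10.0])
```

Each candidate regularization parameter thus costs one matrix solve rather than n refits, which is what makes LOO-based model selection practical here.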


Conference on Decision and Control (CDC) | 2012

Continuous-time stochastic Mirror Descent on a network: Variance reduction, consensus, convergence

Maxim Raginsky; Jake V. Bouvrie

The method of Mirror Descent (MD), originally proposed by Nemirovski and Yudin in the late 1970s, has recently seen a major resurgence in the fields of large-scale optimization and machine learning. In a nutshell, MD is a primal-dual method that can be adapted to the geometry of the optimization problem at hand through the choice of a suitable strongly convex potential function. We study a stochastic, continuous-time variant of MD performed by a network of coupled noisy agents (processors). The overall dynamics is described by a system of stochastic differential equations, coupled linearly through the network Laplacian. We address the impact of the network topology (encoded in the spectrum of the Laplacian) on the speed of convergence of the “mean-field” component to the optimum. We show that this convergence is particularly rapid whenever the potential function can be chosen in such a way that the resulting mean-field dynamics in the dual space follows an Ornstein-Uhlenbeck process.
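A toy simulation conveys the setup. The sketch below assumes the simplest case, a quadratic potential, for which the mirror map is the identity and the dynamics reduce to coupled noisy gradient flow; the network, step sizes, and objective are all illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplacian_cycle(n):
    # Graph Laplacian of an n-cycle.
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] -= 1.0
        L[i, (i - 1) % n] -= 1.0
    return L

def simulate(n_agents=10, T=20.0, dt=1e-3, sigma=0.5,
             grad=lambda x: x - 3.0):
    """Euler-Maruyama discretization of
        dx_i = (-grad f(x_i) - sum_j L_ij x_j) dt + sigma dW_i,
    i.e. noisy agents coupled linearly through the network Laplacian."""
    L = laplacian_cycle(n_agents)
    x = rng.standard_normal(n_agents)
    for _ in range(int(T / dt)):
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n_agents)
        x += dt * (-grad(x) - L @ x) + noise
    return x

x = simulate()  # agents cluster around the minimizer of f (here 3.0)
```

Because the Laplacian annihilates the consensus direction, the coupling leaves the common optimum untouched while suppressing disagreement among agents, which is the mechanism behind the variance-reduction result in the abstract.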


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2007

AM-FM Demodulation of Spectrograms using Localized 2D Max-Gabor Analysis

Tony Ezzat; Jake V. Bouvrie; Tomaso Poggio

We present a method that demodulates a narrowband magnitude spectrogram S(f, t) into a frequency modulation term cos(Φ(f, t)), which represents the underlying harmonic carrier, and an amplitude modulation term A(f, t), which represents the spectral envelope. Our method operates by performing a two-dimensional local patch analysis of the spectrogram, in which each patch is factored into a local carrier term and a local amplitude envelope term using a Max-Gabor analysis. We demonstrate the technique over a wide variety of speakers, and show how the spectrograms in each case may be adequately reconstructed as S(f, t) = A(f, t)cos(Φ(f, t)).
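The underlying AM-FM idea can be illustrated in one dimension with standard Hilbert-transform demodulation. To be clear, this is a 1D analogue for intuition only, not the paper's 2D Max-Gabor patch factorization; the envelope, carrier, and sampling choices are made up.

```python
import numpy as np
from scipy.signal import hilbert

# A spectral slice modeled as s(f) = A(f) cos(phi(f)):
# smooth envelope times fast harmonic carrier.
f = np.linspace(0.0, 1.0, 512)
A = np.exp(-((f - 0.5) ** 2) / 0.05)     # amplitude modulation (envelope)
phi = 2.0 * np.pi * 40.0 * f             # carrier phase (40 "harmonics")
s = A * np.cos(phi)

analytic = hilbert(s)                     # analytic signal s + i*H(s)
A_est = np.abs(analytic)                  # estimated envelope A(f)
phi_est = np.unwrap(np.angle(analytic))   # estimated carrier phase
recon = A_est * np.cos(phi_est)           # reconstruction A(f) cos(phi(f))
```

As in the paper's 2D setting, the factorization separates a slowly varying envelope from a rapidly oscillating carrier, and the product of the two reconstructs the original slice.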


WImBI '06: Proceedings of the 1st WICI International Conference on Web Intelligence Meets Brain Informatics | 2006

Biophysical models of neural computation: max and tuning circuits

Ulf Knoblich; Jake V. Bouvrie; Tomaso Poggio

Pooling under a softmax operation and Gaussian-like tuning in the form of a normalized dot-product were proposed as the key operations in a recent model of object recognition in the ventral stream of visual cortex. We investigate how these two operations might be implemented by plausible circuits of a few hundred neurons in cortex. We consider two different sets of circuits whose different properties may correspond to the conditions in visual and barrel cortices, respectively. They constitute a plausibility proof that stringent timing and accuracy constraints imposed by the neuroscience of object recognition can be satisfied with standard spiking and synaptic mechanisms. We provide simulations illustrating the performance of the circuits, and discuss the relevance of our work to neurophysiology as well as what bearing it may have on the search for maximum and tuning circuits in cortex.
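The two operations in question are easy to state numerically. A minimal sketch (parameter values are illustrative; the paper's contribution is the spiking-circuit implementation, not these formulas themselves):

```python
import numpy as np

def softmax_pool(x, q=4.0):
    """Softmax pooling: y = sum_i x_i exp(q x_i) / sum_i exp(q x_i).
    Interpolates between the mean (q -> 0) and the max (q -> inf)."""
    w = np.exp(q * (x - x.max()))          # shift for numerical stability
    return float((x * w).sum() / w.sum())

def normalized_dot_tuning(x, w):
    """Gaussian-like tuning via a normalized dot product: response peaks
    when the input pattern x matches the stored template w up to scale."""
    return float(x @ w / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-12))

x = np.array([0.1, 0.9, 0.3])
pooled = softmax_pool(x, q=50.0)       # approaches max(x) = 0.9
tuned = normalized_dot_tuning(x, x)    # approaches 1.0 at a perfect match
```

The circuits in the paper aim to realize exactly this pair of input-output behaviors with a few hundred spiking neurons under cortical timing constraints.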


Neural Computation | 2011

Synchronization and redundancy: Implications for robustness of neural learning and decision making

Jake V. Bouvrie; Jean-Jacques E. Slotine

Learning and decision making in the brain are key processes critical to survival, and yet are processes implemented by nonideal biological building blocks that can impose significant error. We explore quantitatively how the brain might cope with this inherent source of error by taking advantage of two ubiquitous mechanisms, redundancy and synchronization. In particular we consider a neural process whose goal is to learn a decision function by implementing a nonlinear gradient dynamics. The dynamics, however, are assumed to be corrupted by perturbations modeling the error, which might be incurred due to limitations of the biology, intrinsic neuronal noise, and imperfect measurements. We show that error, and the associated uncertainty surrounding a learned solution, can be controlled in large part by trading off synchronization strength among multiple redundant neural systems against the noise amplitude. The impact of the coupling between such redundant systems is quantified by the spectrum of the network Laplacian, and we discuss the role of network topology in synchronization and in reducing the effect of noise. We discuss a range of situations in which the mechanisms we model arise in brain science and draw attention to experimental evidence suggesting that cortical circuits capable of implementing the computations of interest here can be found on several scales. Finally, simulations comparing theoretical bounds to the relevant empirical quantities show that the theoretical estimates we derive can be tight.
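The redundancy-plus-synchronization tradeoff can be seen in a toy simulation: identical noisy gradient systems, diffusively coupled, stay closer together as the coupling strength grows. This is a hedged illustration of the mechanism only, assuming a scalar quadratic objective and an all-to-all network rather than the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient_replicas(n=20, k=5.0, sigma=1.0, T=10.0, dt=1e-3):
    """n redundant copies of the noisy gradient flow
        dx_i = -(x_i - 1) dt + sigma dW_i,
    diffusively coupled with strength k through an all-to-all network."""
    x = rng.standard_normal(n)
    for _ in range(int(T / dt)):
        consensus = k * (x.mean() - x)     # all-to-all Laplacian coupling
        drift = -(x - 1.0) + consensus
        x += dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return x

weak = noisy_gradient_replicas(k=0.0)     # uncoupled replicas drift apart
strong = noisy_gradient_replicas(k=20.0)  # coupling shrinks the spread
```

The spread among replicas contracts at a rate set by the coupling (through the Laplacian spectrum), while the noise on the averaged trajectory is reduced by redundancy, which is the quantitative tradeoff the abstract describes.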


Allerton Conference on Communication, Control, and Computing | 2010

Balanced reduction of nonlinear control systems in reproducing kernel Hilbert space

Jake V. Bouvrie; Boumediene Hamzi

We introduce a novel data-driven order reduction method for nonlinear control systems, drawing on recent progress in machine learning and statistical dimensionality reduction. The method rests on the assumption that the nonlinear system behaves linearly when lifted into a high (or infinite) dimensional feature space where balanced truncation may be carried out implicitly. This leads to a nonlinear reduction map which can be combined with a representation of the system belonging to a reproducing kernel Hilbert space to give a closed, reduced order dynamical system which captures the essential input-output characteristics of the original model. Empirical simulations illustrating the approach are also provided.
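The linear building block that the kernel method applies implicitly in feature space is classical balanced truncation. A sketch of that linear step only (square-root algorithm; the test system and reduction order are arbitrary illustrations, and none of this is the paper's kernel-space machinery):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Balanced truncation of a stable linear system dx = Ax + Bu, y = Cx,
    via the square-root algorithm: balance the controllability and
    observability Gramians, then keep the r largest Hankel singular values."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)     # A Wc + Wc A' = -B B'
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # A' Wo + Wo A = -C' C
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                       # s: Hankel singular values
    T = Lc @ Vt[:r].T / np.sqrt(s[:r])              # balancing transform
    Tinv = (U[:, :r] / np.sqrt(s[:r])).T @ Lo.T
    return Tinv @ A @ T, Tinv @ B, C @ T, s

# Stable 4-state system with two fast, weakly coupled modes; reduce to 2.
A = np.diag([-1.0, -2.0, -50.0, -60.0])
B = np.array([[1.0], [1.0], [0.1], [0.1]])
C = np.array([[1.0, 1.0, 0.1, 0.1]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

The paper's point is that for nonlinear systems this procedure can be carried out implicitly after lifting into a reproducing kernel Hilbert space, yielding a nonlinear reduction map.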


SIAM Journal on Control and Optimization | 2017

Kernel Methods for the Approximation of Nonlinear Systems

Jake V. Bouvrie; Boumediene Hamzi

We introduce a data-driven order reduction method for nonlinear control systems, drawing on recent progress in machine learning and statistical dimensionality reduction. The method rests on the assumption that the nonlinear system behaves linearly when lifted into a high (or infinite) dimensional feature space where balanced truncation may be carried out implicitly. This leads to a nonlinear reduction map which can be combined with a representation of the system belonging to a reproducing kernel Hilbert space to give a closed, reduced order dynamical system which captures the essential input-output characteristics of the original model. Empirical simulations illustrating the approach are also provided.


Conference on Decision and Control (CDC) | 2012

Geometric multiscale reduction for autonomous and controlled nonlinear systems

Jake V. Bouvrie; Mauro Maggioni

Most generic approaches to empirical reduction of dynamical systems, controlled or otherwise, are global in nature. Yet interesting systems often exhibit multiscale structure in time or in space, suggesting that localized reduction techniques which take advantage of this multiscale structure might provide better approximations with lower complexity. We introduce a snapshot-based framework for localized analysis and reduction of nonlinear systems, based on a systematic multiscale decomposition of the state space induced by the geometry of empirical trajectories. A given system is approximated by a piecewise collection of low-dimensional systems at different scales, each of which is suited to and responsible for a particular region of the state space. Within this framework, we describe localized, multiscale variants of the proper orthogonal decomposition (POD) and empirical balanced truncation methods for model order reduction of nonlinear systems. The inherent locality of the treatment further motivates control strategies involving collections of simple, local controllers and raises decentralized control possibilities. We illustrate the localized POD approach in the context of a high-dimensional fluid mechanics problem involving incompressible flow over a bluff body.
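The snapshot-based POD ingredient is simple to sketch on its own (global version, without the paper's multiscale localization; the synthetic data and energy threshold are assumptions for illustration):

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper orthogonal decomposition of a snapshot matrix (states as
    columns): SVD, then keep enough left singular vectors to capture the
    requested fraction of snapshot 'energy' (squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

rng = np.random.default_rng(0)
# Synthetic trajectory living (mostly) in a 3-dimensional subspace of R^50.
modes = np.linalg.qr(rng.standard_normal((50, 3)))[0]
snapshots = modes @ rng.standard_normal((3, 200)) \
    + 1e-6 * rng.standard_normal((50, 200))
Ur = pod_basis(snapshots)  # recovers the 3 dominant modes
```

The multiscale variant in the paper applies this kind of reduction per region of the state space, so each local model only needs the few modes active in its own region.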

Collaboration


Dive into Jake V. Bouvrie's collaborations.

Top Co-Authors

Tomaso Poggio, Massachusetts Institute of Technology

Tony Ezzat, Massachusetts Institute of Technology

Lorenzo Rosasco, Massachusetts Institute of Technology

Jean-Jacques E. Slotine, Massachusetts Institute of Technology

Steve Smale, Toyota Technological Institute at Chicago

Andre Wibisono, Massachusetts Institute of Technology

Andrea Caponnetto, Massachusetts Institute of Technology