
Publication


Featured research published by Bradley M. Bell.


IEEE Transactions on Automatic Control | 1993

The iterated Kalman filter update as a Gauss-Newton method

Bradley M. Bell; F.W. Cathey

It is shown that the iterated Kalman filter (IKF) update is an application of the Gauss-Newton method for approximating a maximum likelihood estimate. An example is presented in which the iterated Kalman filter update and maximum likelihood estimate show correct convergence behavior as the observation becomes more accurate, whereas the extended Kalman filter update does not.
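
As a concrete illustration of this equivalence, the following sketch (a made-up scalar measurement model and made-up numbers, not code from the paper) runs the IKF measurement update and a Gauss-Newton iteration on the corresponding MAP objective; the two produce identical iterates.

    x0, p, r, y = 1.0, 4.0, 0.01, 9.0      # prior x ~ N(x0, p); observation y = h(x) + v, v ~ N(0, r)
    h  = lambda x: x ** 2                  # illustrative nonlinear measurement model
    dh = lambda x: 2.0 * x                 # its derivative

    def ikf_update(x, n_iter=15):
        # Iterated Kalman filter: relinearize h about the current iterate x.
        for _ in range(n_iter):
            H = dh(x)
            K = p * H / (H * p * H + r)              # Kalman gain at the linearization point
            x = x0 + K * (y - h(x) - H * (x0 - x))   # IKF measurement update
        return x

    def gauss_newton(x, n_iter=15):
        # Gauss-Newton on the MAP objective 0.5*(y - h(x))**2/r + 0.5*(x - x0)**2/p.
        for _ in range(n_iter):
            H = dh(x)
            grad = -H * (y - h(x)) / r + (x - x0) / p
            hess = H * H / r + 1.0 / p               # Gauss-Newton Hessian approximation
            x = x - grad / hess
        return x

    print(ikf_update(x0), gauss_newton(x0))          # the two sequences of iterates coincide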


Computational Statistics & Data Analysis | 1996

A relative weighting method for estimating parameters and variances in multiple data sets

Bradley M. Bell; James V. Burke; Alan Schumitzky

We are given multiple data sets and a nonlinear model function for each data value. Each data value is the sum of its error and its model function evaluated at an unknown parameter vector. The data errors are mean zero, finite variance, independent, are not necessarily normal, and are identically distributed within each data set. We consider the problem of estimating the data variance as well as the parameter vector via an extended least-squares technique motivated by maximum likelihood estimation. We prove convergence of an algorithm that generalizes a standard successive approximation algorithm from nonlinear programming. This generalization reduces the estimation problem to a sequence of linear least-squares problems. It is shown that the parameter and variance estimators converge to their true values as the number of data values goes to infinity. Moreover, if the constraints are not active, the parameter estimates converge in distribution. This convergence does not depend on the data errors being normally distributed.
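
A minimal sketch of the idea, assuming a simple exponential model and two synthetic data sets (this is not the authors' code, and their algorithm is a specific successive-approximation scheme rather than the generic alternation shown here): the parameter vector is fit by weighted least squares under the current variance estimates, and each data set's variance is then re-estimated from its own residuals.

    import numpy as np
    from scipy.optimize import least_squares

    # Two synthetic data sets sharing one parameter vector theta (illustrative only).
    rng = np.random.default_rng(0)
    t1, t2 = np.linspace(0, 4, 50), np.linspace(0, 4, 30)
    theta_true = np.array([2.0, 0.7])
    f = lambda theta, t: theta[0] * np.exp(-theta[1] * t)     # model function
    y1 = f(theta_true, t1) + rng.normal(0, 0.05, t1.size)      # small-variance set
    y2 = f(theta_true, t2) + rng.normal(0, 0.50, t2.size)      # large-variance set

    theta = np.array([1.0, 1.0])
    v1, v2 = 1.0, 1.0                                          # per-set variance estimates
    for _ in range(10):
        # Weighted least squares with the current relative weights 1/sqrt(v_j).
        resid = lambda th: np.concatenate([(y1 - f(th, t1)) / np.sqrt(v1),
                                           (y2 - f(th, t2)) / np.sqrt(v2)])
        theta = least_squares(resid, theta).x
        # Re-estimate each data set's variance from its own residuals.
        v1 = np.mean((y1 - f(theta, t1)) ** 2)
        v2 = np.mean((y2 - f(theta, t2)) ** 2)
    print(theta, v1, v2)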


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1986

Separating multipaths by global optimization of a multidimensional matched filter

Bradley M. Bell; Terry E. Ewart

A transmitted signal can arrive at a receiver via several refracted Fermat paths. If the paths are independent in the Fresnel sense, then the received signal can be modeled as the sum of amplitude scaled and time shifted copies of a predetermined replica plus white noise. We present an algorithm that uses the replica to determine the time shifts and amplitudes for each path. It is referred to as an n-dimensional matched filter algorithm by analogy with the well-known matched filter algorithm. The cross correlation between the received signal and the replica oscillates near the center frequency of the transmitted signal. This causes the n-dimensional matched filter output to have many local maxima that are not globally optimal. The time shifts and amplitude scalings for the Fermat paths are determined by maximizing the output of the n-dimensional matched filter. The algorithm is more robust and efficient than others currently available. Simulated realizations of received signals were generated with multipath and noise characteristics similar to an ocean acoustic transmission case. These realizations were then separated into arrival times and corresponding amplitudes by the algorithm. The results of these tests and the general limitations of the algorithm are discussed.
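
The sketch below illustrates the structure of the n-dimensional matched filter on simulated data (all signals and numbers are invented, and the paper's global optimization strategy is replaced here by a crude grid search): for fixed candidate time shifts, the best amplitudes follow from a small linear least-squares problem, and the filter output being maximized is the resulting reduction in residual energy.

    import numpy as np

    # Replica and a simulated two-path arrival (illustrative numbers only).
    rng = np.random.default_rng(1)
    n = 400
    t = np.arange(n)
    replica = np.exp(-0.5 * ((t - 30) / 8.0) ** 2) * np.cos(0.6 * (t - 30))

    def shifted(tau):
        # Replica delayed by an integer number of samples.
        out = np.zeros(n)
        out[tau:] = replica[:n - tau]
        return out

    true_tau, true_amp = (60, 90), (1.0, 0.6)
    received = sum(a * shifted(tau) for a, tau in zip(true_amp, true_tau))
    received += rng.normal(0, 0.05, n)

    def matched_filter_output(taus):
        # For fixed shifts, the optimal amplitudes solve a small linear least-squares
        # problem; the output is the reduction in residual energy that they achieve.
        A = np.column_stack([shifted(tau) for tau in taus])
        amps = np.linalg.lstsq(A, received, rcond=None)[0]
        return np.dot(received, A @ amps), amps

    # Crude global search over a grid of shift pairs (many local maxima exist because
    # the cross correlation oscillates near the carrier frequency).
    grid = range(40, 120, 2)
    best = max((matched_filter_output((i, j))[0], (i, j))
               for i in grid for j in grid if i < j)
    print(best[1], matched_filter_output(best[1])[1])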


Automatica | 2009

An inequality constrained nonlinear Kalman-Bucy smoother by interior point likelihood maximization

Bradley M. Bell; James V. Burke; Gianluigi Pillonetto

Kalman-Bucy smoothers are often used to estimate the state variables as a function of time in a system with stochastic dynamics and measurement noise. This is accomplished using an algorithm for which the number of numerical operations grows linearly with the number of time points. All of the randomness in the model is assumed to be Gaussian. Including other available information, for example a bound on one of the state variables, is non-trivial because it does not fit into the standard Kalman-Bucy smoother algorithm. In this paper we present an interior point method that maximizes the likelihood with respect to the sequence of state vectors satisfying inequality constraints. The method obtains the same decomposition that is normally obtained for the unconstrained Kalman-Bucy smoother, hence the resulting number of operations grows linearly with the number of time points. We present two algorithms: the first is for the affine case and the second is for the nonlinear case. Neither algorithm requires the optimization to start at a feasible sequence of state vector values. Both the unconstrained affine and unconstrained nonlinear Kalman-Bucy smoothers are special cases of the class of problems that can be handled by these algorithms.
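
For orientation, the affine case can be written (in notation chosen here for illustration, not taken from the paper) as the constrained maximum likelihood problem

    \min_{x_1,\dots,x_N} \; \sum_{k=1}^{N}
      \tfrac12 (x_k - G_k x_{k-1} - g_k)^{\mathsf T} Q_k^{-1} (x_k - G_k x_{k-1} - g_k)
      + \tfrac12 (z_k - H_k x_k - h_k)^{\mathsf T} R_k^{-1} (z_k - H_k x_k - h_k)
    \quad \text{subject to} \quad B_k x_k \le b_k , \; k = 1, \dots, N ,

with x_0 fixed. An interior point method replaces the constraints by a log-barrier term and drives the barrier parameter to zero; because each term couples only the consecutive states x_{k-1} and x_k, the Hessian of every barrier subproblem is block tridiagonal, which is what keeps the operation count linear in the number of time points N.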


IEEE Transactions on Automatic Control | 2011

An ℓ1-Laplace Robust Kalman Smoother

Aleksandr Y. Aravkin; Bradley M. Bell; James V. Burke; Gianluigi Pillonetto

Robustness is a major problem in Kalman filtering and smoothing that can be solved using heavy-tailed distributions; e.g., ℓ1-Laplace. This paper describes an algorithm for finding the maximum a posteriori (MAP) estimate of the Kalman smoother for a nonlinear model with Gaussian process noise and ℓ1-Laplace observation noise. The algorithm uses the convex composite extension of the Gauss-Newton method. This yields convex programming subproblems to which an interior point path-following method is applied. The number of arithmetic operations required by the algorithm grows linearly with the number of time points because the algorithm preserves the underlying block tridiagonal structure of the Kalman smoother problem. Excellent fits are obtained with and without outliers, even though the outliers are simulated from distributions that are not ℓ1-Laplace. The smoother is also tested on actual data with a nonlinear measurement model for an underwater tracking experiment. The ℓ1-Laplace smoother is able to construct a smoothed fit, without data removal, from data with very large outliers.
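
Up to constants, and with notation chosen here rather than taken from the paper, the MAP problem being solved is

    \min_{x_1,\dots,x_N} \; \sum_{k=1}^{N}
      \tfrac12 \bigl(x_k - g_k(x_{k-1})\bigr)^{\mathsf T} Q_k^{-1} \bigl(x_k - g_k(x_{k-1})\bigr)
      + \sqrt{2}\, \bigl\| R_k^{-1/2} \bigl(z_k - h_k(x_k)\bigr) \bigr\|_1 ,

where the first term comes from the Gaussian process noise and the second from the ℓ1-Laplace observation noise. A Gauss-Newton step linearizes g_k and h_k, leaving a convex subproblem whose block tridiagonal structure the interior point method preserves.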


IEEE Transactions on Signal Processing | 2000

The marginal likelihood for parameters in a discrete Gauss-Markov process

Bradley M. Bell

We use Laplace's method to approximate the marginal likelihood for parameters in a Gauss-Markov process. This approximation requires the determinant of a matrix whose dimensions are equal to the number of state variables times the number of time points. We reduce this to sequential evaluation of determinants and inverses of smaller matrices, and we show that this is a numerically stable method.
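
A small numerical check of this reduction (a sketch, with randomly generated blocks standing in for the ones that arise from a Gauss-Markov model): the log-determinant of the block tridiagonal matrix is accumulated from the determinants of successive small Schur complements and compared against a dense evaluation.

    import numpy as np

    rng = np.random.default_rng(2)
    n_state, n_time = 3, 6      # small sizes so the dense check below is cheap

    # Symmetric diagonal blocks A_k and off-diagonal blocks B_k (illustrative stand-ins).
    A = [np.eye(n_state) * 5 + rng.normal(size=(n_state, n_state)) for _ in range(n_time)]
    A = [0.5 * (a + a.T) for a in A]
    B = [rng.normal(size=(n_state, n_state)) for _ in range(n_time - 1)]

    # Sequential evaluation: det of the block tridiagonal matrix is the product of the
    # determinants of the Schur complements D_k = A_k - B_{k-1} D_{k-1}^{-1} B_{k-1}^T.
    logdet = 0.0
    for k in range(n_time):
        D = A[0] if k == 0 else A[k] - B[k - 1] @ np.linalg.solve(D, B[k - 1].T)
        logdet += np.linalg.slogdet(D)[1]

    # Dense check on the assembled matrix.
    M = np.zeros((n_state * n_time, n_state * n_time))
    for k in range(n_time):
        M[k*n_state:(k+1)*n_state, k*n_state:(k+1)*n_state] = A[k]
        if k > 0:
            M[k*n_state:(k+1)*n_state, (k-1)*n_state:k*n_state] = B[k - 1]
            M[(k-1)*n_state:k*n_state, k*n_state:(k+1)*n_state] = B[k - 1].T
    print(logdet, np.linalg.slogdet(M)[1])   # the two values agree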


Archive | 2008

Algorithmic Differentiation of Implicit Functions and Optimal Values

Bradley M. Bell; James V. Burke

In applied optimization, an understanding of the sensitivity of the optimal value to changes in structural parameters is often essential. Applications include parametric optimization, saddle point problems, Benders decompositions, and multilevel optimization. In this paper we adapt a known automatic differentiation (AD) technique for obtaining derivatives of implicitly defined functions for application to optimal value functions. The formulation we develop is well suited to the evaluation of first and second derivatives of optimal values. The result is a method that yields large savings in time and memory. The savings are demonstrated by a Benders decomposition example using both the ADOL-C and CppAD packages. Some of the source code for these comparisons is included to aid testing with other hardware and compilers, other AD packages, as well as future versions of ADOL-C and CppAD. The source code also serves as an aid in the implementation of the method for actual applications. In addition, it demonstrates how multiple C++ operator overloading AD packages can be used with the same source code. This provides motivation for coding numerical routines in which the floating-point type is a C++ template parameter.
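
The identities being exploited are standard; stated in generic notation chosen here (not the paper's), for y(x) defined implicitly by F(x, y(x)) = 0 and for an optimal value V(x) = f(x, y(x)) with y(x) a stationary point of f(x, \cdot):

    y'(x) = -\,F_y\bigl(x, y(x)\bigr)^{-1} F_x\bigl(x, y(x)\bigr),
    \qquad
    V'(x) = f_x\bigl(x, y(x)\bigr),
    \qquad
    V''(x) = f_{xx} - f_{xy}\, f_{yy}^{-1}\, f_{yx},

with all partial derivatives evaluated at (x, y(x)). First and second derivatives of the optimal value therefore require only first and second derivatives of f at the solution, which is the source of the time and memory savings reported above.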


The Journal of Physiology | 2005

Contributions of the input signal and prior activation history to the discharge behaviour of rat motoneurones

R. K. Powers; Yue Dai; Bradley M. Bell; Donald B. Percival; M. D. Binder

The principal computational operation of neurones is the transformation of synaptic inputs into spike train outputs. The probability of spike occurrence in neurones is determined by the time course and magnitude of the total current reaching the spike initiation zone. The features of this current that are most effective in evoking spikes can be determined by injecting a Gaussian current waveform into a neurone and using spike‐triggered reverse correlation to calculate the average current trajectory (ACT) preceding spikes. The time course of this ACT (and the related first‐order Wiener kernel) provides a general description of a neurone's response to dynamic stimuli. In many different neurones, the ACT is characterized by a shallow hyperpolarizing trough followed by a more rapid depolarizing peak immediately preceding the spike. The hyperpolarizing phase is thought to reflect an enhancement of excitability by partial removal of sodium inactivation. Alternatively, this feature could simply reflect the fact that interspike intervals that are longer than average can only occur when the current is lower than average toward the end of the interspike interval. Thus, the ACT calculated for the entire spike train displays an attenuated version of the hyperpolarizing trough associated with the long interspike intervals. This alternative explanation for the characteristic shape of the ACT implies that it depends upon the time since the previous spike, i.e. the ACT reflects both previous stimulus history and previous discharge history. The present study presents results based on recordings of noise‐driven discharge in rat hypoglossal motoneurones that support this alternative explanation. First, we show that the hyperpolarizing trough is larger in ACTs calculated from spikes preceded by long interspike intervals, and minimal or absent in those based on short interspike intervals. Second, we show that the trough is present for ACTs calculated from the discharge of a threshold‐crossing neurone model with a postspike afterhyperpolarization (AHP), but absent from those calculated from the discharge of a model without an AHP. We show that it is possible to represent noise‐driven discharge using a two‐component linear model that predicts discharge probability based on the sum of a feedback kernel and a stimulus kernel. The feedback kernel reflects the influence of prior discharge mediated by the AHP, and it increases in amplitude when AHP amplitude is increased by pharmacological manipulations. Finally, we show that the predictions of this model are virtually identical to those based on the first‐order Wiener kernel. This suggests that the Wiener kernels derived from standard white‐noise analysis of noise‐driven discharge in neurones actually reflect the effects of both stimulus and discharge history.
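
A minimal simulation in the spirit of the second result above (the model, constants, and noise level are invented here, not taken from the paper): a threshold-crossing neurone model with an optional AHP is driven by a Gaussian noise current, and the ACT is computed by spike-triggered averaging.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, n_steps, pre = 1.0, 100_000, 50            # ms, samples, ACT window length
    current = rng.normal(0.8, 1.0, n_steps)        # injected Gaussian current waveform

    def simulate(ahp_size):
        # Leaky threshold-crossing model; ahp_size = 0 removes the afterhyperpolarization.
        v, ahp, spikes = 0.0, 0.0, []
        for i in range(n_steps):
            ahp *= np.exp(-dt / 20.0)              # AHP decays with a 20 ms time constant
            v = 0.9 * v + 0.1 * current[i] - ahp   # leaky integration of the input current
            if v > 1.0:                            # threshold crossing produces a spike
                spikes.append(i)
                v = 0.0
                ahp += ahp_size
        return spikes

    def act(spikes):
        # Spike-triggered reverse correlation: average current trajectory before a spike.
        segs = [current[i - pre:i] for i in spikes if i >= pre]
        return np.mean(segs, axis=0)

    act_with_ahp = act(simulate(ahp_size=0.5))
    act_no_ahp   = act(simulate(ahp_size=0.0))
    # The abstract's second result predicts an early hyperpolarizing trough in
    # act_with_ahp that is essentially absent in act_no_ahp.
    print(act_with_ahp.min(), act_no_ahp.min())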


Inverse Problems | 2004

Estimating parameters and stochastic functions of one variable using nonlinear measurement models

Bradley M. Bell; Gianluigi Pillonetto

Estimating an unknown function of one variable from a finite set of measurements is an ill-posed inverse problem. Placing a Bayesian prior on a function space is one way to make this problem well-posed. This problem can turn out well-posed even if the relationship between the unknown function and the measurements, as well as the function space prior, contains unknown parameters. We present a method for estimating the unknown parameters by maximizing an approximation of the marginal likelihood where the unknown function has been integrated out. This is an extension of marginal likelihood estimators for the regularization parameter because we allow for a nonlinear relationship between the unknown function and the measurements. The estimate of the function is then obtained by maximizing its a posteriori probability density function given the parameters and the data. We present a computational method that uses eigenfunctions to represent the function space. The continuity properties of the function estimate are characterized. Proofs of the convergence of the method are included. The importance of allowing for a nonlinear transformation is demonstrated by a stochastic sum of exponentials example.
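
In generic notation chosen here for illustration, with measurements z = G(f) + e, a Gaussian prior on the function f whose covariance depends on hyper-parameters θ, and Gaussian noise e, the two-stage estimator described above is

    \hat{\theta} = \arg\max_{\theta} \int p(z \mid f, \theta)\, p(f \mid \theta)\, df ,
    \qquad
    \hat{f} = \arg\max_{f} \; p(f \mid z, \hat{\theta}) ,

where the integral is approximated rather than computed in closed form because G may be nonlinear. Representing f by a truncated expansion f(t) \approx \sum_{j=1}^{m} c_j \varphi_j(t) in eigenfunctions of the prior covariance reduces both maximizations to finite-dimensional problems in the coefficients c.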


Applied Mathematics and Computation | 2001

Approximating the marginal likelihood estimate for models with random parameters

Bradley M. Bell

Often a model for the mean and variance of a measurement set is naturally expressed in terms of both deterministic and random parameters. Each of the deterministic parameters has one fixed value while the random parameters come from a distribution of values. We restrict our attention to the case where the random parameters and the measurement error have a Gaussian distribution. In this case, the joint likelihood of the data and random parameters is an extended least squares function. The likelihood of the data alone is the integral of this extended least squares function with respect to the random parameters. This is the likelihood that we would like to optimize, but we do not have a closed form expression for the integral. We use Laplace's method to obtain an approximation for the likelihood of the data alone. Maximizing this approximation is less computationally demanding than maximizing the integral expression, but this yields a different estimator. In addition, evaluation of the approximation requires second derivatives of the original model functions. If we were to use this approximation as our objective function, evaluation of the derivative of the objective would require third derivatives of the original model functions. We present modified approximations that are expressed using only values of the original model functions. Evaluation of the derivative of the modified approximations only requires first derivatives of the original model functions. We use Monte Carlo techniques to approximate the difference between an arbitrary estimator and the estimator that maximizes the likelihood of the data alone. In addition, we approximate the information matrix corresponding to the estimator that maximizes the likelihood of the data alone.
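
The sketch below illustrates Laplace's method on a toy one-random-parameter example (the model and numbers are invented, and it does not include the paper's modified approximations that avoid second derivatives): the integral over the random parameter is replaced by a Gaussian integral around the minimizer of the joint negative log-likelihood.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    # One random effect b ~ N(0, s2_b); measurements y_i = f(b) + e_i, e_i ~ N(0, s2_e).
    # Everything below is an illustrative stand-in for the paper's general setup.
    theta, s2_b, s2_e = 1.0, 0.5, 0.04
    f = lambda b: np.exp(-(theta + b))              # model is nonlinear in the random effect
    y = np.array([0.30, 0.35, 0.33])

    def neg_joint(b):
        # -log of (data likelihood given b) * (density of b), up to additive constants.
        return 0.5 * np.sum((y - f(b)) ** 2) / s2_e + 0.5 * b ** 2 / s2_b

    # Marginal likelihood of the data: the integral we would like to maximize over theta.
    exact = quad(lambda b: np.exp(-neg_joint(b)), -10, 10)[0]

    # Laplace's method: expand neg_joint to second order about its minimizer b_hat.
    b_hat = minimize_scalar(neg_joint).x
    h = 1e-4
    curv = (neg_joint(b_hat + h) - 2 * neg_joint(b_hat) + neg_joint(b_hat - h)) / h ** 2
    laplace = np.exp(-neg_joint(b_hat)) * np.sqrt(2 * np.pi / curv)

    print(exact, laplace)    # the approximation is close for this example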

Collaboration


Dive into Bradley M. Bell's collaborations.

Top Co-Authors

James V. Burke (University of Washington)
Paolo Vicini (University of Washington)
Alan Schumitzky (University of Southern California)
David Aldrich (University of Washington)