Publication


Featured research published by Victor Solo.


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1989

The limiting behavior of LMS

Victor Solo

A realization-oriented analysis is given of the gradient noise misadjustment and lag misadjustment performance of the LMS (least-mean-square) algorithm. New formulas are given for both of these components of excess mean-square error. It is shown that the traditional formula for lag misadjustment needs to be modified by adding further terms due to gradient noise and noise variance. To perform the analysis, it is necessary to study the convergence (with probability one) of the noise-free, fixed-parameter LMS algorithm. Convergence is found under simple conditions that improve on those previously obtained.
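
The LMS recursion itself is standard, so a minimal sketch of the update analysed above may help; the step size, dimensions, and noise level are illustrative values, not taken from the paper.

```python
# Minimal LMS sketch: adapt a weight vector w so that x_k^T w tracks a desired
# signal d_k. The "excess mean-square error" discussed in the abstract is the
# extra steady-state error this adaptation leaves on top of the optimal weights.
# Step size mu, dimension p and noise level are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
p, n, mu = 4, 5000, 0.01
w_true = rng.standard_normal(p)                    # fixed "true" weights
w = np.zeros(p)                                    # LMS estimate

for k in range(n):
    x = rng.standard_normal(p)                     # regressor vector
    d = x @ w_true + 0.1 * rng.standard_normal()   # noisy desired signal
    e = d - x @ w                                  # a priori error
    w = w + mu * e * x                             # LMS update

print("final weight error norm:", np.linalg.norm(w - w_true))
```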


IEEE Transactions on Signal Processing | 2008

Dimension Estimation in Noisy PCA With SURE and Random Matrix Theory

Magnus O. Ulfarsson; Victor Solo

Principal component analysis (PCA) is one of the best known methods for dimensionality reduction. Perhaps the most important problem in using PCA is to determine the number of principal components (PCs) or, equivalently, choose the rank of the loading matrix. Many methods have been proposed to deal with this problem, but almost all of them fail in the important practical case when the number of observations is comparable to the number of variables, i.e., the realm of random matrix theory (RMT). In this paper, we propose to use Stein's unbiased risk estimator (SURE) to estimate, with some assistance from RMT, the number of principal components. The method is applied to both simulated and real functional magnetic resonance imaging (fMRI) data, and compared to BIC and the Laplace method.
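
As a rough illustration of the random-matrix-theory ingredient mentioned above (this is not the paper's SURE estimator), one can count the sample-covariance eigenvalues exceeding the Marchenko-Pastur noise edge; the noise level sigma is assumed known here.

```python
# Hedged sketch of RMT-flavoured rank selection: eigenvalues of the sample
# covariance that exceed the Marchenko-Pastur bulk edge sigma^2*(1+sqrt(p/n))^2
# are counted as signal components. Dimensions and signal strength are made up.
import numpy as np

def rmt_rank(X, sigma):
    n, p = X.shape                                  # samples x variables
    evals = np.linalg.eigvalsh(X.T @ X / n)         # sample covariance spectrum
    edge = sigma**2 * (1.0 + np.sqrt(p / n))**2     # noise bulk edge
    return int(np.sum(evals > edge))

rng = np.random.default_rng(1)
n, p, r, sigma = 200, 150, 5, 1.0
signal = rng.standard_normal((n, r)) @ (3.0 * rng.standard_normal((r, p)))
X = signal + sigma * rng.standard_normal((n, p))
print("estimated number of PCs:", rmt_rank(X, sigma))
```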


IEEE Transactions on Signal Processing | 2008

Sparse Variable PCA Using Geodesic Steepest Descent

Magnus O. Ulfarsson; Victor Solo

Principal component analysis (PCA) is a dimensionality reduction technique used in most fields of science and engineering. It aims to find linear combinations of the input variables that maximize variance. A problem with PCA is that it typically assigns nonzero loadings to all the variables, which in high dimensional problems can require a very large number of coefficients. But in many applications, the aim is to obtain a massive reduction in the number of coefficients. There are two very different types of sparse PCA problems: sparse loadings PCA (slPCA) which zeros out loadings (while generally keeping all of the variables) and sparse variable PCA which zeros out whole variables (typically leaving less than half of them). In this paper, we propose a new svPCA, which we call sparse variable noisy PCA (svnPCA). It is based on a statistical model, and this gives access to a range of modeling and inferential tools. Estimation is based on optimizing a novel penalized log-likelihood able to zero out whole variables rather than just some loadings. The estimation algorithm is based on the geodesic steepest descent algorithm. Finally, we develop a novel form of Bayesian information criterion (BIC) for tuning parameter selection. The svnPCA algorithm is applied to both simulated data and real functional magnetic resonance imaging (fMRI) data.
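
The difference between the two kinds of sparsity can be seen directly on a loading matrix. The sketch below is only a thresholding caricature with an arbitrary cut-off, not svnPCA itself, which instead optimizes a penalized log-likelihood by geodesic steepest descent.

```python
# Illustrative contrast: sparse-loadings PCA zeros individual entries of the
# loading matrix, while sparse-variable PCA zeros whole rows, dropping entire
# variables. The threshold 0.5 is an arbitrary illustrative value.
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((10, 3))                    # loadings: 10 variables x 3 PCs
W[6:] *= 0.05                                       # last 4 variables carry little signal

W_sl = np.where(np.abs(W) > 0.5, W, 0.0)            # slPCA style: entrywise zeroing
row_norm = np.linalg.norm(W, axis=1)
W_sv = np.where(row_norm[:, None] > 0.5, W, 0.0)    # svPCA style: whole rows zeroed

print("variables kept (svPCA style):", np.flatnonzero(row_norm > 0.5))
```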


IEEE Transactions on Automatic Control | 1990

Stochastic adaptive control and Martingale limit theory

Victor Solo

Recently, S.P. Meyn and P.E. Caines (ibid., vol.AC-32, p.220-6, 1987) have used ergodic theory for Markov processes to give the first asymptotic stability analysis of a nontrivial stochastic adaptive control problem. By nontrivial is meant a stochastic adaptive control problem whose parameter variation has finite nonzero power. They correctly observed that the stochastic Lyapunov function methods fail here, because there is no almost sure parameter convergence. It is shown here how Martingale asymptotics can be used to produce many results close to those of Meyn and Caines, as well as to supply some new observations. Strengths and weaknesses of both approaches are discussed.


IEEE Transactions on Signal Processing | 1992

The error variance of LMS with time-varying weights

Victor Solo

A calculation of weight error variance of the LMS (least mean square) algorithm is made in the presence of time-varying true weights. With time-varying weights, the LMS error system in general has several time scales operating at once. This causes difficulties in the variance calculation which seem hitherto to have passed unnoticed. To handle this problem, a sort of perturbation expansion is developed based on weak convergence methods or stochastic averaging. The main concern in carrying out the error variance calculation is to study the effect on LMS performance of adaptation speed as it relates to the true speed of parameter change. Three cases are covered, comparing adaptation speed with the true speed of parameter change: first, where adaptation is too slow; second, where it is matched; and third, where it is faster.
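
A small simulation shows the tracking trade-off qualitatively; the random-walk drift model and the step sizes below are illustrative assumptions, not taken from the paper.

```python
# LMS tracking a drifting weight vector: the residual weight error depends on
# how the step size mu compares with the speed of parameter change.
import numpy as np

rng = np.random.default_rng(3)
p, n, drift = 2, 20000, 1e-3

for mu in (0.002, 0.02, 0.2):                       # slow / moderate / fast adaptation
    w_true = np.zeros(p)
    w = np.zeros(p)
    err = 0.0
    for k in range(n):
        w_true = w_true + drift * rng.standard_normal(p)   # time-varying truth
        x = rng.standard_normal(p)
        d = x @ w_true + 0.1 * rng.standard_normal()
        w = w + mu * (d - x @ w) * x
        err += np.sum((w - w_true) ** 2)
    print(f"mu={mu:5.3f}  mean squared weight error={err / n:.4f}")
```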


IEEE Transactions on Image Processing | 2001

Errors-in-variables modeling in optical flow estimation

Lydia Ng; Victor Solo

Gradient-based optical flow estimation methods typically do not take into account errors in the spatial derivative estimates. The presence of these errors causes an errors-in-variables (EIV) problem. Moreover, the use of finite difference methods to calculate these derivatives ensures that the errors are strongly correlated between pixels. Total least squares (TLS) has often been used to address this EIV problem. However, its application in this context is flawed, as TLS implicitly assumes that the errors between neighborhood pixels are independent. In this paper, a new optical flow estimation method (EIVM) is formulated to properly treat the EIV problem in optical flow. EIVM is based on Sprent's (1966) procedure, which allows the incorporation of a general EIV model in the estimation process. In EIVM, the neighborhood size acts as a smoothing parameter. Due to the weights in the EIVM objective function, the effect of changing the neighborhood size is more complex than in other local model methods such as Lucas and Kanade (1981). These weights, which are functions of the flow estimate, can alter the effective size and orientation of the neighborhood. In this paper, we also present a data-driven method for choosing the neighborhood size based on Stein's unbiased risk estimator (SURE).
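
For context, here is a minimal sketch of the classical local gradient-based estimators the paper improves on: ordinary least squares in the Lucas-Kanade style and TLS on the stacked gradients. This is not the proposed EIVM estimator, and the gradient arrays are assumed to be precomputed by finite differences.

```python
# Within a neighbourhood, gradient-based flow solves Ix*u + Iy*v + It ~ 0 for (u, v).
import numpy as np

def flow_ls(Ix, Iy, It):
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    return np.linalg.lstsq(A, b, rcond=None)[0]     # (u, v) by ordinary least squares

def flow_tls(Ix, Iy, It):
    M = np.column_stack([Ix.ravel(), Iy.ravel(), It.ravel()])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    v = Vt[-1]                                      # smallest right singular vector
    return v[:2] / v[2]                             # scale so the It coefficient is 1

rng = np.random.default_rng(4)
Ix, Iy = rng.standard_normal((2, 7, 7))             # synthetic gradients for a 7x7 patch
u_true = np.array([0.5, -0.3])
It = -(Ix * u_true[0] + Iy * u_true[1]) + 0.01 * rng.standard_normal((7, 7))
print(flow_ls(Ix, Iy, It), flow_tls(Ix, Iy, It))
```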


International Conference on Image Processing | 2004

fMRI signal modeling using Laguerre polynomials

Victor Solo; Christopher J. Long; Emery N. Brown; Elissa Aminoff; Moshe Bar; Supratim Saha

In order to construct spatial activation plots from functional magnetic resonance imaging (fMRI) data, a complex spatio-temporal modeling problem must be solved. A crucial part of this process is the estimation of the hemodynamic response (HR) function, an impulse response relating the stimulus signal to the measured noisy response. The estimation of the HR is complicated by the presence of low frequency colored noise. The standard approach to modeling the HR is to use simple parametric models, although FIR models have also been used. We pursue a nonparametric approach using orthonormal causal Laguerre polynomials, which have become popular in the system identification literature. It also happens that the shape of the basis elements is similar to that of a typical HR. We thus expect to achieve a compact, and hence bias-reduced and low-noise, representation of the HR. This is not the case in FIR modeling, because a low FIR order is unable to cover the whole length of the HR over its region of support, while a high FIR order results in overestimation of signal and underestimation of noise, leading to misleading interpretations.
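
A minimal sketch of the modelling idea follows, assuming a discrete Laguerre basis with an arbitrarily chosen pole and a plain least-squares fit, rather than the paper's full spatio-temporal estimator with colored noise.

```python
# Build orthonormal discrete Laguerre basis functions and fit the hemodynamic
# response (HR) by regressing the time series on the stimulus convolved with
# each basis function. Pole a, basis size, and data sizes are illustrative.
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(n_taps, n_funcs, a=0.7):
    """Discrete Laguerre functions with pole a, as columns of an (n_taps, n_funcs) array."""
    impulse = np.zeros(n_taps)
    impulse[0] = 1.0
    b0 = lfilter([np.sqrt(1 - a**2)], [1.0, -a], impulse)       # first-order section
    basis = [b0]
    for _ in range(n_funcs - 1):
        basis.append(lfilter([-a, 1.0], [1.0, -a], basis[-1]))  # all-pass cascade
    return np.column_stack(basis)

rng = np.random.default_rng(5)
T, L = 200, 32
s = (rng.random(T) < 0.1).astype(float)              # sparse event stimulus
B = laguerre_basis(L, 4)                              # 4 Laguerre basis functions
X = np.column_stack([np.convolve(s, B[:, j])[:T] for j in range(B.shape[1])])
h_true = B @ np.array([1.0, -0.5, 0.2, 0.0])          # a "true" HR expressed in the basis
y = np.convolve(s, h_true)[:T] + 0.05 * rng.standard_normal(T)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
h_hat = B @ coef                                      # estimated HR over its support
```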


IEEE Transactions on Automatic Control | 1989

Adaptive spectral factorization

Victor Solo

An on-line spectral factorization algorithm is used to devise a globally convergent self-tuning identifier that does not suffer from restrictions that amount to knowledge of the true system (e.g. the positive real condition). The method developed uses two ideas. One idea, an old one which might be called the method of split recursions, is used to estimate the parameters in blocks. Thus, one block might get the transfer function parameters while the other gets the noise parameters. The other idea is to use spectral factorization to estimate moving average parameters. The algorithm does have its own weaknesses (e.g. transient behavior may not be good, and it relies on a condition that is only generically true), but it does not need a positive real condition to be satisfied for global convergence.
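
The spectral-factorization step can be illustrated offline; the paper's algorithm is adaptive, so this batch root-flipping sketch is only meant to show how moving-average parameters follow from autocovariances.

```python
# Given the autocovariances of an MA(q) process, recover the minimum-phase
# moving-average polynomial by root flipping of the covariance polynomial.
import numpy as np

def ma_from_covariances(c):
    """c = [c_0, c_1, ..., c_q]; returns (theta, sigma2) with theta[0] = 1."""
    full = np.concatenate([c[::-1], c[1:]])          # [c_q, ..., c_0, ..., c_q]
    roots = np.roots(full)                           # roots come in reciprocal pairs
    inside = roots[np.abs(roots) < 1.0]              # keep the minimum-phase set
    theta = np.real(np.poly(inside))                 # 1 + theta_1 z^-1 + ... + theta_q z^-q
    sigma2 = c[0] / np.sum(theta**2)
    return theta, sigma2

# Check on a known MA(2): theta(z) = 1 + 0.5 z^-1 - 0.3 z^-2, sigma2 = 2.
theta_true, s2_true = np.array([1.0, 0.5, -0.3]), 2.0
c_full = s2_true * np.correlate(theta_true, theta_true, mode="full")
c = c_full[len(theta_true) - 1:]                     # [c_0, c_1, c_2]
print(ma_from_covariances(c))
```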


International Journal of Control | 1996

Deterministic adaptive control with slowly varying parameters: an averaging analysis

Victor Solo

In this work, averaging methods are used to analyse the behaviour of an adaptive controller with slowly varying parameters. The incorporation of time-varying parameters into a stability analysis of an adaptive controller leads to certain perturbation terms in the error system. The type of averaging results so far applied to adaptive control cannot deal with these perturbations, so some new averaging theorems are needed. These results are known in the averaging literature; however, a very simple proof technique has been found here which gives finite- and infinite-interval averaging results under weaker conditions. The averaging analysis shows that, with regard to parameter tracking, the adaptive algorithm behaves like a first-order filter. It is also clear that successful tracking then requires knowledge of the true speed of parameter change.


International Conference on Acoustics, Speech, and Signal Processing | 1998

Errors-in-variables modelling in optical flow problems

Lydia Ng; Victor Solo

Although still used in practice, total least squares (TLS) is unreliable for optical flow estimation. TLS implicitly assumes that the error terms affecting the partial derivatives of the image intensities are independent, whereas the usual methods for estimating these derivatives ensure that the errors are strongly correlated. Due to this correlation, an alternative method is required to treat the resulting errors-in-variables (EIV) problem. We propose a new method for estimating optical flow based on Sprent's (1966, 1969) procedure. This method incorporates a general EIV model and provides a far simpler computational procedure than those found in previous solutions.

Collaboration


Dive into Victor Solo's collaborations.

Top Co-Authors

Marc J. Piggott, University of New South Wales
Christopher J. Long, Massachusetts Institute of Technology
Elissa Aminoff, Carnegie Mellon University
Emery N. Brown, Massachusetts Institute of Technology
X. Kong, Johns Hopkins University