
Publication


Featured research published by Aaron D'Souza.


Neural Computation | 2005

Incremental Online Learning in High Dimensions

Sethu Vijayakumar; Aaron D'Souza; Stefan Schaal

Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross-validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of possibly redundant inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
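
The locally linear models at LWPR's core can be illustrated with plain locally weighted regression. The sketch below is a simplification, not the paper's algorithm: it fits one weighted least-squares model per query in batch, whereas LWPR learns many local models incrementally via partial-least-squares-style univariate regressions. The function name, the Gaussian kernel, and the `width` parameter are illustrative assumptions.

```python
import numpy as np

def lwr_predict(X, y, x_query, width=0.3):
    """Locally weighted linear regression at a single query point."""
    # Gaussian kernel: training points near the query dominate the fit.
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * width ** 2))
    # Weighted least squares with a bias column.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    A = Xb.T @ (w[:, None] * Xb)
    b = Xb.T @ (w * y)
    beta = np.linalg.solve(A, b)
    return np.append(x_query, 1.0) @ beta

# A noisy sine: the local linear fit tracks the function at the query.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
pred = lwr_predict(X, y, np.array([1.0]))
```

LWPR additionally adapts the kernel width from data and replaces the batch solve with incremental updates, which is what makes it viable in the high-dimensional, streaming setting the abstract describes.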


Intelligent Robots and Systems | 2001

Learning inverse kinematics

Aaron D'Souza; Sethu Vijayakumar; Stefan Schaal

Real-time control of the end-effector of a humanoid robot in external coordinates requires computationally efficient solutions of the inverse kinematics problem. In this context, this paper investigates inverse kinematics learning for resolved motion rate control (RMRC) employing an optimization criterion to resolve kinematic redundancies. Our learning approach is based on the key observation that learning an inverse of a non-uniquely invertible function can be accomplished by augmenting the input representation of the inverse model and by using a spatially localized learning approach. We apply this strategy to inverse kinematics learning and demonstrate how a recently developed statistical learning algorithm, locally weighted projection regression, allows efficient learning of inverse kinematic mappings in an incremental fashion even when input spaces become rather high dimensional. Our results are illustrated with a 30-DOF humanoid robot.
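
The RMRC setting the paper targets can be sketched analytically for a planar 3-link arm: joint velocities come from the Jacobian pseudoinverse, with the redundancy resolved by a null-space term pulling toward a rest posture. This is the classical analytical baseline, not the paper's learned inverse model; the function names, link lengths, and rest-posture criterion are illustrative assumptions.

```python
import numpy as np

def jacobian(q, lengths):
    """Position Jacobian of a planar 3-link arm (relative joint angles)."""
    cum = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        # Column i: effect of joint i on end-effector (x, y).
        J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
    return J

def fk(q, lengths):
    """Forward kinematics: end-effector (x, y) position."""
    cum = np.cumsum(q)
    return np.array([np.sum(lengths * np.cos(cum)),
                     np.sum(lengths * np.sin(cum))])

def rmrc_step(q, xdot, lengths, q_rest):
    """One RMRC step: pseudoinverse solution for the task, plus a
    null-space term resolving the redundancy toward a rest posture."""
    J = jacobian(q, lengths)
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(3) - J_pinv @ J
    return J_pinv @ xdot + null_proj @ (q_rest - q)

# Servo the end-effector toward a reachable target with Euler steps.
lengths = np.array([1.0, 1.0, 1.0])
q = np.array([0.3, 0.3, 0.3])
target = np.array([1.5, 1.5])
for _ in range(200):
    q = q + 0.1 * rmrc_step(q, target - fk(q, lengths), lengths, np.zeros(3))
err_final = np.linalg.norm(target - fk(q, lengths))
```

The paper's point is that the mapping computed here analytically can instead be learned from data with LWPR, which avoids the explicit Jacobian inversion and scales to many more degrees of freedom.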


Autonomous Robots | 2002

Statistical Learning for Humanoid Robots

Sethu Vijayakumar; Aaron D'Souza; Tomohiro Shibata; Jörg Conradt; Stefan Schaal

The complexity of the kinematic and dynamic structure of humanoid robots makes conventional analytical approaches to control increasingly unsuitable for such systems. Learning techniques offer a possible way to aid controller design if insufficient analytical knowledge is available, and learning approaches seem mandatory when humanoid systems are supposed to become completely autonomous. While recent research in neural networks and statistical learning has focused mostly on learning from finite data sets without stringent constraints on computational efficiency, learning for humanoid robots requires a different setting, characterized by the need for real-time learning performance from an essentially infinite stream of incrementally arriving data. This paper demonstrates how even high-dimensional learning problems of this kind can successfully be dealt with by techniques from nonparametric regression and locally weighted learning. As an example, we describe the application of one of the most advanced of such algorithms, Locally Weighted Projection Regression (LWPR), to the on-line learning of three problems in humanoid motor control: the learning of inverse dynamics models for model-based control, the learning of inverse kinematics of redundant manipulators, and the learning of oculomotor reflexes. All these examples demonstrate fast, i.e., within seconds or minutes, learning convergence with highly accurate final performance. We conclude that real-time learning for complex motor systems like humanoid robots is possible with appropriately tailored algorithms, such that increasingly autonomous robots with massive learning abilities should be achievable in the near future.
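
As a minimal illustration of the streaming setting described above, recursive least squares updates a single linear model one sample at a time in O(d^2), storing no training data. This is not LWPR (which maintains many local models with PLS projections); it is only a sketch of real-time incremental learning, and the class name and `lam` prior are assumptions.

```python
import numpy as np

class RecursiveLeastSquares:
    """Incremental linear regression: each incoming sample updates the
    model immediately, with no stored data set -- the real-time,
    infinite-stream regime the paper targets."""

    def __init__(self, dim, lam=1e-2):
        self.beta = np.zeros(dim)
        self.P = np.eye(dim) / lam   # inverse-covariance estimate

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)      # gain vector (Sherman-Morrison)
        self.beta = self.beta + k * (y - x @ self.beta)
        self.P = self.P - np.outer(k, Px)

# Stream samples from y = 2*x0 - x1; coefficients are recovered online.
rng = np.random.default_rng(1)
model = RecursiveLeastSquares(dim=2)
for _ in range(500):
    x = rng.normal(size=2)
    model.update(x, 2.0 * x[0] - 1.0 * x[1])
```

A global linear model like this fails on the nonlinear inverse-dynamics and inverse-kinematics mappings of the paper; LWPR's answer is to run many such cheap incremental fits locally and blend them.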


International Conference on Machine Learning | 2004

The Bayesian backfitting relevance vector machine

Aaron D'Souza; Sethu Vijayakumar; Stefan Schaal

Traditional non-parametric statistical learning techniques are often computationally attractive, but lack the same generalization and model selection abilities as state-of-the-art Bayesian algorithms which, however, are usually computationally prohibitive. This paper makes several important contributions that allow Bayesian learning to scale to more complex, real-world learning scenarios. First, we show that backfitting, a traditional non-parametric yet highly efficient regression tool, can be derived in a novel formulation within an expectation maximization (EM) framework and thus can finally be given a probabilistic interpretation. Second, we show that the general framework of sparse Bayesian learning, and in particular the relevance vector machine (RVM), can be derived as a highly efficient algorithm using a Bayesian version of backfitting at its core. As we demonstrate on several regression and classification benchmarks, Bayesian backfitting offers a compelling alternative to current regression methods, especially when the size and dimensionality of the data challenge computational resources.
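
Classical backfitting, the paper's starting point, cycles through an additive model's components and refits each one against the partial residuals of the others. The sketch below uses the simplest possible smoothers (univariate linear fits), which reduces the loop to coordinate descent on least squares; the paper's EM formulation and the RVM extension are not shown, and the function name is an assumption.

```python
import numpy as np

def backfit_linear(X, y, n_iter=50):
    """Backfitting with univariate linear smoothers: each coordinate
    is repeatedly refit to the partial residuals of the others."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        for m in range(d):
            # Partial residual with coordinate m's contribution removed.
            r = y - X @ beta + X[:, m] * beta[m]
            # Univariate least-squares fit of coordinate m to r.
            beta[m] = (X[:, m] @ r) / (X[:, m] @ X[:, m])
    return beta

# The loop converges to the ordinary least-squares solution.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
beta = backfit_linear(X, y)
```

Each sweep costs only O(nd), with no d-by-d matrix inversion; the paper's contribution is to recover this loop as the E- and M-steps of a probabilistic model, which then admits sparsity priors as in the RVM.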


Archive | 2005

LWPR: A Scalable Method for Incremental Online Learning in High Dimensions

Sethu Vijayakumar; Aaron D'Souza; Stefan Schaal


International Journal of Artificial Intelligence in Education | 2001

An Automated Lab Instructor for Simulated Science Experiments

Aaron D'Souza; Jeff Rickel; Bruno Herreros; W. Lewis Johnson


Neural Information Processing Systems | 2005

Predicting EMG Data from M1 Neurons with Variational Bayesian Least Squares

Jo-Anne Ting; Aaron D'Souza; Kenji Yamamoto; Toshinori Yoshioka; Donna S. Hoffman; Shinji Kakei; Lauren E. Sergio; John F. Kalaska; Mitsuo Kawato


The Society for Neuroscience Abstracts | 2001

Are internal models of the entire body learnable?

Aaron D'Souza; Sethu Vijayakumar; Stefan Schaal


Archive | 2004

Towards tractable parameter-free statistical learning

Stefan Schaal; Aaron D'Souza


Archive | 2003

Bayesian Backfitting for High Dimensional Regression

Aaron D'Souza; Sethu Vijayakumar; Stefan Schaal

Collaboration


Aaron D'Souza's frequent co-authors and their affiliations.


Bruno Herreros

Information Sciences Institute


Jeff Rickel

Information Sciences Institute


Jo-Anne Ting

University of Southern California


Kenji Yamamoto

University of Pittsburgh


W. Lewis Johnson

University of Southern California
