Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Matthias W. Seeger is active.

Publications


Featured research published by Matthias W. Seeger.


Advanced Robotics | 2009

Model Learning with Local Gaussian Process Regression

Duy Nguyen-Tuong; Matthias W. Seeger; Jan Peters

Precise models of robot inverse dynamics allow the design of significantly more accurate, energy-efficient and compliant robot control. However, in some cases the accuracy of rigid-body models does not suffice for sound control performance due to unmodeled nonlinearities arising from hydraulic cable dynamics, complex friction or actuator dynamics. In such cases, estimating the inverse dynamics model from measured data poses an interesting alternative. Nonparametric regression methods, such as Gaussian process regression (GPR) or locally weighted projection regression (LWPR), are not as restrictive as parametric models and thus offer a more flexible framework for approximating unknown nonlinearities. In this paper, we propose a local approximation to standard GPR, called local GPR (LGP), for real-time online model learning that combines the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of LWPR. The approach is shown to have competitive learning performance for high-dimensional data while being sufficiently fast for real-time learning. The effectiveness of LGP is exhibited by a comparison with state-of-the-art regression techniques such as GPR, LWPR and ν-support vector regression. The applicability of the proposed LGP method is demonstrated by real-time online learning of the inverse dynamics model for model-based control of a Barrett WAM robot arm.
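
The abstract's combination of GPR accuracy with LWPR-style locality can be pictured with a small sketch. The code below is an illustration under simplifying assumptions, not the authors' implementation: it uses a fixed RBF kernel, assigns each incoming sample to the nearest local model (opening a new one when none is close enough), re-inverts each local Gram matrix from scratch (the paper updates local models incrementally), and predicts with a distance-weighted combination of local predictions. All names, thresholds and hyperparameters here are made up for the example.

```python
import numpy as np

def rbf(X, Z, ell=1.0):
    """Squared-exponential kernel between row-wise input sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

class LocalGP:
    """One local model: its own center, data, and regularized kernel inverse."""
    def __init__(self, center, noise=1e-2):
        self.center, self.noise = center, noise
        self.X, self.y = [], []

    def add(self, x, y):
        self.X.append(x); self.y.append(y)
        K = rbf(np.array(self.X), np.array(self.X))
        self.K_inv = np.linalg.inv(K + self.noise * np.eye(len(self.X)))

    def predict(self, x):
        k = rbf(x[None, :], np.array(self.X))[0]
        return k @ self.K_inv @ np.array(self.y)

class LGP:
    """Insert samples into the nearest local model (or open a new one);
    predict with a distance-weighted combination of local predictions."""
    def __init__(self, radius=1.0):
        self.models, self.radius = [], radius

    def update(self, x, y):
        if self.models:
            dists = [np.linalg.norm(x - m.center) for m in self.models]
            i = int(np.argmin(dists))
            if dists[i] < self.radius:
                self.models[i].add(x, y)
                return
        m = LocalGP(center=x)
        m.add(x, y)
        self.models.append(m)

    def predict(self, x):
        w = np.array([np.exp(-0.5 * np.linalg.norm(x - m.center) ** 2)
                      for m in self.models])
        preds = np.array([m.predict(x) for m in self.models])
        return float(w @ preds / w.sum())

# Example: learn a toy 1-D map online, then query it.
lgp = LGP(radius=0.3)
for t in np.linspace(0.0, 1.0, 50):
    lgp.update(np.array([t]), float(np.sin(2 * np.pi * t)))
print(lgp.predict(np.array([0.3])))
```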


American Control Conference | 2008

Computed torque control with nonparametric regression models

Duy Nguyen-Tuong; Matthias W. Seeger; Jan Peters

Computed torque control allows the design of considerably more precise, energy-efficient and compliant controls for robots. However, the major obstacle is the requirement of an accurate model for torque generation, which in some cases cannot be obtained using rigid-body formulations due to unmodeled nonlinearities such as complex friction or actuator dynamics. In such cases, models approximated from robot data present an appealing alternative. In this paper, we compare two nonparametric regression methods for model approximation, i.e., locally weighted projection regression (LWPR) and Gaussian process regression (GPR). While locally weighted regression has been employed for real-time model estimation in learning adaptive control, Gaussian process regression has not been used in control to date due to its high computational requirements. The comparison includes an assessment of model approximation for both regression methods using data originating from a SARCOS robot arm, as well as an evaluation of robot tracking performance in computed torque control employing the approximated models. Our results show that GPR can be applied for real-time control and achieves higher accuracy. However, for online learning LWPR is superior due to its lower computational requirements.
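
To make the control structure concrete: computed torque control augments the desired acceleration with PD feedback on the tracking error and passes the result through an inverse-dynamics model, which may be analytic or, as compared in this paper, a learned regression model such as LWPR or GPR. The sketch below only illustrates that generic structure; the gains, the 7-DoF dimension and the unit-mass placeholder model are assumptions made for the example, not values from the paper.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, inverse_dynamics,
                    Kp=100.0, Kd=20.0):
    """Computed torque control with a (possibly learned) inverse-dynamics model."""
    # PD feedback on the tracking error defines a reference acceleration ...
    e, ed = q_des - q, qd_des - qd
    qdd_ref = qdd_des + Kp * e + Kd * ed
    # ... which the inverse-dynamics model maps to feedforward joint torques.
    return inverse_dynamics(q, qd, qdd_ref)

# Placeholder model: a unit-mass stand-in where torque equals acceleration.
u = computed_torque(q=np.zeros(7), qd=np.zeros(7),
                    q_des=0.1 * np.ones(7), qd_des=np.zeros(7),
                    qdd_des=np.zeros(7),
                    inverse_dynamics=lambda q, qd, qdd: qdd)
```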


Magnetic Resonance in Medicine | 2009

Optimization of k‐space trajectories for compressed sensing by Bayesian experimental design

Matthias W. Seeger; Hannes Nickisch; R Pohmann; Bernhard Schölkopf

The optimization of k‐space sampling for nonlinear sparse MRI reconstruction is phrased as a Bayesian experimental design problem. Bayesian inference is approximated by a novel relaxation to standard signal processing primitives, resulting in an efficient optimization algorithm for Cartesian and spiral trajectories. On clinical-resolution brain image data from a Siemens 3T scanner, automatically optimized trajectories lead to significantly improved images compared to standard low‐pass, equispaced, or variable-density randomized designs. Insights into the nonlinear design optimization problem for MRI are given.
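
The experimental design criterion behind such an approach can be stated generically. For a Gaussian approximation Q(u | y) to the image posterior and a linear k-space measurement with noise variance σ², the entropy reduction obtained by appending candidate rows X_* is the familiar information-gain score below; how Q is computed and which candidates are scored is specific to the paper, so this display is only the generic criterion, not a restatement of the algorithm.

```latex
\Delta(X_*) \;=\; \mathrm{H}\!\left[Q(u \mid y)\right]
  \;-\; \mathbb{E}_{y_*}\!\left[\mathrm{H}\!\left[Q(u \mid y, y_*)\right]\right]
  \;=\; \tfrac{1}{2}\,\log\det\!\left(I + \sigma^{-2}\, X_* \,\mathrm{Cov}_Q[u]\, X_*^{\top}\right).
```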


SIAM Journal on Imaging Sciences | 2011

Large Scale Bayesian Inference and Experimental Design for Sparse Linear Models

Matthias W. Seeger; Hannes Nickisch

Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or superresolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex if and only if posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images. Parts of this work have been presented at conferences [M. Seeger, H. Nickisch, R. Pohmann, and B. Schölkopf, in Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, eds., Curran Associates, Red Hook, NY, 2009, pp. 1441-1448; H. Nickisch and M. Seeger, in Proceedings of the 26th International Conference on Machine Learning, L. Bottou and M. Littman, eds., Omni Press, Madison, WI, 2009, pp. 761-768].
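
One standard identity that makes variational relaxations of this kind possible (written here in generic notation, not necessarily the paper's): Laplace potentials are super-Gaussian, i.e., each can be represented as a maximum over Gaussian-form terms with a width parameter γ, so the non-Gaussian SLM posterior can be bounded by a Gaussian family that is then optimized.

```latex
e^{-\tau |s|} \;=\; \max_{\gamma > 0}\, \exp\!\left(-\frac{s^{2}}{2\gamma} - \frac{\tau^{2}\gamma}{2}\right),
\qquad \text{with the maximum attained at } \gamma = |s| / \tau .
```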


BMC Systems Biology | 2007

Experimental design for efficient identification of gene regulatory networks using sparse Bayesian models

Florian Steinke; Matthias W. Seeger; Koji Tsuda

Background: Identifying large gene regulatory networks is an important task, while the acquisition of data through perturbation experiments (e.g., gene switches, RNAi, heterozygotes) is expensive. It is thus desirable to use an identification method that effectively incorporates available prior knowledge, such as sparse connectivity, and that allows experiments to be designed such that maximal information is gained from each one.

Results: Our main contributions are twofold: a method for consistent inference of network structure is provided, incorporating prior knowledge about sparse connectivity. The algorithm is time efficient and robust to violations of model assumptions. Moreover, we show how to use it for optimal experimental design, reducing the number of required experiments substantially. We employ sparse linear models and show how to perform full Bayesian inference for these. We not only estimate a single maximum likelihood network but compute a posterior distribution over networks, using a novel variant of the expectation propagation method. The representation of uncertainty enables us to do effective experimental design in a standard statistical setting: experiments are selected such that they are maximally informative.

Conclusion: Few methods have addressed the design issue so far. Compared to the most well-known one, our method is more transparent and is shown to perform qualitatively better. In the former, hard and unrealistic constraints have to be placed on the network structure for mere computational tractability, while no such constraints are required in our method. We demonstrate reconstruction and optimal experimental design capabilities on tasks generated from realistic nonlinear network simulators. The methods described in the paper are available as a Matlab package at http://www.kyb.tuebingen.mpg.de/sparselinearmodel.
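
As a toy illustration of the design principle only (the paper uses sparse priors and expectation propagation, not the plain Gaussian posterior assumed here), the sketch below scores candidate perturbation experiments by the information they would add to a Gaussian approximation N(m, S) over the regression weights of one network node, and updates that approximation once the measurement is made. Function names and the noise level are placeholders.

```python
import numpy as np

def pick_next_experiment(S, candidates, sigma2=0.1):
    """Greedy design: score each candidate design vector x by the information
    gain 0.5 * log(1 + x^T S x / sigma2) and return the index of the best."""
    gains = [0.5 * np.log1p(x @ S @ x / sigma2) for x in candidates]
    return int(np.argmax(gains))

def posterior_update(m, S, x, y, sigma2=0.1):
    """Rank-one Gaussian update after observing y = x^T w + noise."""
    Sx = S @ x
    denom = sigma2 + x @ Sx
    m = m + Sx * (y - x @ m) / denom
    S = S - np.outer(Sx, Sx) / denom
    return m, S

# Tiny usage: choose among three candidate perturbation directions.
d = 4
m, S = np.zeros(d), np.eye(d)
candidates = [np.eye(d)[0], np.eye(d)[1], 0.5 * np.ones(d)]
best = pick_next_experiment(S, candidates)
m, S = posterior_update(m, S, candidates[best], y=0.7)
```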


IEEE Signal Processing Magazine | 2010

Variational Bayesian Inference Techniques

Matthias W. Seeger; David P. Wipf

Milestones in sparse signal reconstruction and compressive sensing can be understood in a probabilistic Bayesian context, fusing underdetermined measurements with knowledge about low-level signal properties in the posterior distribution, which is maximized for point estimation. We review recent progress to advance beyond this setting. If the posterior is used as a distribution to be integrated over instead of merely an optimization criterion, sparse estimators with better properties may be obtained, and applications beyond point reconstruction from fixed data can be served. We describe novel variational relaxations of Bayesian integration, characterized as well as posterior maximization, which can be solved robustly for very large models by algorithms unifying convex reconstruction and Bayesian graphical model technology. They excel in difficult real-world imaging problems where posterior maximization performance is often unsatisfactory.


IEEE Transactions on Information Theory | 2008

Information Consistency of Nonparametric Gaussian Process Methods

Matthias W. Seeger; Sham M. Kakade; Dean P. Foster

Bayesian nonparametric models are widely and successfully used for statistical prediction. While posterior consistency properties are well studied in quite general settings, results have been proved using abstract concepts such as metric entropy, and they come with subtle conditions which are hard to validate and not intuitive when applied to concrete models. Furthermore, convergence rates are difficult to obtain. By focussing on the concept of information consistency for Bayesian Gaussian process (GP) models, consistency results and convergence rates are obtained via a regret bound on cumulative log loss. These results depend strongly on the covariance function of the prior process, thereby giving a novel interpretation to penalization with reproducing kernel Hilbert space norms and to commonly used covariance function classes and their parameters. The proof of the main result employs elementary convexity arguments only. A theorem of Widom is used in order to obtain precise convergence rates for several covariance functions widely used in practice.
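
In rough form (constants and regularity conditions are as stated in the paper), a regret bound of the kind referred to here compares the cumulative log loss of the GP Bayesian predictor with that of any fixed function f in the reproducing kernel Hilbert space of the covariance function; for Gaussian noise with variance σ² and kernel matrix K_n at the first n inputs it reads approximately as below, and consistency and rates follow when the right-hand side grows sublinearly in n, which is where Widom-type eigenvalue asymptotics for specific covariance functions enter.

```latex
\sum_{t=1}^{n} -\log P_{\mathrm{Bayes}}\!\left(y_t \mid y_{1:t-1}\right)
  \;-\; \sum_{t=1}^{n} -\log P\!\left(y_t \mid f(x_t)\right)
  \;\le\; \tfrac{1}{2}\,\|f\|_{k}^{2} \;+\; \tfrac{1}{2}\,\log\det\!\left(I + \sigma^{-2} K_n\right).
```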


European Conference on Machine Learning | 2007

Bayesian Inference for Sparse Generalized Linear Models

Matthias W. Seeger; Sebastian Gerwinn; Matthias Bethge

We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.
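
In generic notation (not necessarily the paper's), the sparse Bayesian GLM posterior in question has the form below; because the Laplace prior is log-concave, the posterior is log-concave, and hence unimodal, whenever each likelihood term is log-concave in w, which is the property the abstract ties to the stability of EP.

```latex
P(w \mid \mathcal{D}) \;\propto\; \prod_{i=1}^{n} P\!\left(y_i \mid x_i^{\top} w\right)
  \prod_{j=1}^{d} \frac{\tau}{2}\, e^{-\tau |w_j|}.
```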


International Conference on Machine Learning | 2009

Convex variational Bayesian inference for large scale generalized linear models

Hannes Nickisch; Matthias W. Seeger

We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary super-Gaussian potentials. By iteratively decoupling the criterion, most of the work can be done by solving large linear systems, rendering our algorithm orders of magnitude faster than previously proposed solvers for the same problem. We evaluate our method on problems of Bayesian active learning for large binary classification models, and show how to address settings with many candidates and sequential inclusion steps.
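
The decoupling idea can be pictured on a small dense problem. The sketch below is a simplified illustration under stated assumptions, not the authors' large-scale solver: the model is y = X u + noise with Laplace potentials of scale τ, the marginal variances z are computed by exact matrix inversion (at scale they must be estimated), and the inner loop is reduced to alternating a closed-form update of the variational widths γ with a linear solve for u.

```python
import numpy as np

def double_loop_vb(X, y, tau=1.0, sigma2=0.1, outer=10, inner=25):
    """Toy double-loop variational scheme for a sparse linear model."""
    n, d = X.shape
    gamma = np.ones(d)   # variational width parameters
    u = np.zeros(d)
    for _ in range(outer):
        # Outer loop: with gamma fixed, form the precision A and refresh the
        # marginal variances z = diag(A^{-1}) used in the inner criterion.
        A = X.T @ X / sigma2 + np.diag(1.0 / gamma)
        z = np.diag(np.linalg.inv(A))
        for _ in range(inner):
            # Inner loop: closed-form gamma update, then a penalized
            # least-squares solve for u.
            gamma = np.sqrt(z + u ** 2) / tau
            A = X.T @ X / sigma2 + np.diag(1.0 / gamma)
            u = np.linalg.solve(A, X.T @ y / sigma2)
    return u, gamma

# Tiny usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100))
u_true = np.zeros(100)
u_true[:5] = 3.0
y = X @ u_true + 0.1 * rng.standard_normal(40)
u_hat, _ = double_loop_vb(X, y, tau=2.0)
```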


From Motor Learning to Interaction Learning in Robots | 2010

Real-Time Local GP Model Learning

Duy Nguyen-Tuong; Matthias W. Seeger; Jan Peters

For many applications in robotics, accurate dynamics models are essential. However, in some applications, e.g., in model-based tracking control, precise dynamics models cannot be obtained analytically for sufficiently complex robot systems. In such cases, machine learning offers a promising alternative for approximating the robot dynamics using measured data. However, standard regression methods such as Gaussian process regression (GPR) suffer from high computational complexity, which has so far prevented their use for large numbers of samples or for online learning. In this paper, we propose an approximation to standard GPR using local Gaussian process models inspired by [Vijayakumar et al. (2005); Snelson and Ghahramani (2007)]. Due to the reduced computational cost, local Gaussian processes (LGP) can be applied to larger sample sizes and to online learning. Comparisons with other nonparametric regression methods, e.g., standard GPR, support vector regression (SVR) and locally weighted projection regression (LWPR), show that LGP achieves high approximation accuracy while being sufficiently fast for real-time online learning.

Collaboration


Dive into Matthias W. Seeger's collaborations.
