Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sebastian Bitzer is active.

Publication


Featured research published by Sebastian Bitzer.


International Conference on Robotics and Automation | 2006

Learning EMG control of a robotic hand: towards active prostheses

Sebastian Bitzer; P. van der Smagt

We introduce a method based on support vector machines which can detect opening and closing actions of the human thumb, index finger, and other fingers, recorded via surface EMG only. The method is shown to be robust across sessions and can be used independently of the position of the arm. With these stability criteria, the method is ideally suited for the control of active prostheses with a high number of active degrees of freedom. The method is successfully demonstrated on a robotic four-finger hand and can be used to grasp objects.
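The classification setup can be caricatured in a few lines. The sketch below substitutes a simple perceptron for the paper's support vector machine and uses synthetic two-channel "EMG amplitude" features; all data, names, and parameter values are illustrative assumptions, not the authors' pipeline.

```python
import random

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a linear classifier sign(w.x + b) on feature vectors; labels in {-1, +1}."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # update only on misclassified samples
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Synthetic 2-D "EMG amplitude" features: opening vs. closing actions.
rng = random.Random(1)
open_trials = [[rng.gauss(1.0, 0.2), rng.gauss(0.2, 0.2)] for _ in range(50)]
close_trials = [[rng.gauss(0.2, 0.2), rng.gauss(1.0, 0.2)] for _ in range(50)]
X = open_trials + close_trials
y = [1] * 50 + [-1] * 50
w, b = train_perceptron(X, y)
accuracy = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(X)
```

On well-separated feature clusters like these, a linear decision boundary already suffices; the SVM in the paper additionally provides margin maximization and robustness across recording sessions.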


Frontiers in Human Neuroscience | 2014

Perceptual decision making: drift-diffusion model is equivalent to a Bayesian model.

Sebastian Bitzer; Hame Park; Felix Blankenburg; Stefan J. Kiebel

Behavioral data obtained with perceptual decision making experiments are typically analyzed with the drift-diffusion model. This parsimonious model accumulates noisy pieces of evidence toward a decision bound to explain the accuracy and reaction times of subjects. Recently, Bayesian models have been proposed to explain how the brain extracts information from noisy input as typically presented in perceptual decision making tasks. It has long been known that the drift-diffusion model is tightly linked with such functional Bayesian models but the precise relationship of the two mechanisms was never made explicit. Using a Bayesian model, we derived the equations which relate parameter values between these models. In practice we show that this equivalence is useful when fitting multi-subject data. We further show that the Bayesian model suggests different decision variables which all predict equal responses and discuss how these may be discriminated based on neural correlates of accumulated evidence. In addition, we discuss extensions to the Bayesian model which would be difficult to derive for the drift-diffusion model. We suggest that these and other extensions may be highly useful for deriving new experiments which test novel hypotheses.
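The drift-diffusion side of this equivalence is easy to simulate. Below is a minimal pure-Python sketch of a two-boundary drift-diffusion trial; the drift, bound, and noise values are illustrative choices, not parameters fitted in the paper.

```python
import random

def simulate_ddm(drift, bound, dt=0.001, noise=1.0, max_t=10.0, rng=None):
    """Simulate one drift-diffusion trial; return (choice, reaction_time).

    choice is 1 for the upper bound (correct with positive drift), 0 for the lower.
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while t < max_t:
        # Euler-Maruyama step: deterministic drift plus Gaussian diffusion noise.
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= bound:
            return (1, t)
        if x <= -bound:
            return (0, t)
    return (1 if x > 0 else 0, t)  # guard against non-terminating trials

rng = random.Random(0)
trials = [simulate_ddm(drift=1.5, bound=1.0, rng=rng) for _ in range(500)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

The noisy accumulation toward a bound produces both choices and reaction times from the same process, which is exactly the behaviour the Bayesian reformulation in the paper accounts for.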


IEEE-RAS International Conference on Humanoid Robots | 2009

Latent spaces for dynamic movement primitives

Sebastian Bitzer; Sethu Vijayakumar

Dynamic movement primitives (DMPs) have been proposed as a powerful, robust and adaptive tool for planning robot trajectories based on demonstrated example movements. Adaptation of DMPs to new task requirements becomes difficult when demonstrated trajectories are only available in joint space, because their parameters do not in general correspond to variables meaningful for the task. This problem becomes more severe with increasing number of degrees of freedom and hence is particularly an issue for humanoid movements. It has been shown that DMP parameters can directly relate to task variables, when DMPs are learned in latent spaces resulting from dimensionality reduction of demonstrated trajectories. As we show here, however, standard dimensionality reduction techniques do not in general provide adequate latent spaces which need to be highly regular. In this work we concentrate on learning discrete (point-to-point) movements and propose a modification of a powerful nonlinear dimensionality reduction technique (Gaussian Process Latent Variable Model). Our modification makes the GPLVM more suitable for the use of DMPs by favouring latent spaces with highly regular structure. Even though in this case the user has to provide a structure hypothesis we show that its precise choice is not important in order to achieve good results. Additionally, we can overcome one of the main disadvantages of the GPLVM with this modification: its dependence on the initialisation of the latent space. We motivate our approach on data from a 7-DoF robotic arm and demonstrate its feasibility on a high-dimensional human motion capture data set.
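The DMP building block of this pipeline can be sketched compactly. Below is a minimal 1-D discrete (point-to-point) DMP with the conventional spring-damper transformation system and exponentially decaying canonical system; the GPLVM latent-space learning that is the paper's actual contribution is not shown, and all constants are standard illustrative choices.

```python
def dmp_rollout(x0, g, tau=1.0, alpha=25.0, beta=6.25, alpha_s=4.0,
                dt=0.001, T=2.0, forcing=lambda s: 0.0):
    """Integrate a 1-D discrete DMP with Euler steps; returns the position path."""
    x, v, s = x0, 0.0, 1.0
    path = [x]
    for _ in range(int(T / dt)):
        f = forcing(s)                                   # learned forcing term (zero here)
        dv = (alpha * (beta * (g - x) - v) + f) / tau    # critically damped spring-damper
        dx = v / tau
        ds = -alpha_s * s / tau                          # canonical phase decays to 0
        v += dv * dt
        x += dx * dt
        s += ds * dt
        path.append(x)
    return path

# With zero forcing the DMP converges smoothly to the goal.
path = dmp_rollout(x0=0.0, g=1.0)
final = path[-1]
```

A demonstrated movement would be encoded by fitting the forcing term; the paper's point is that fitting DMPs in a regular latent space, rather than raw joint space, makes these parameters meaningful for the task.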


Intelligent Robots and Systems | 2010

Using dimensionality reduction to exploit constraints in reinforcement learning

Sebastian Bitzer; Matthew Howard; Sethu Vijayakumar

Reinforcement learning in the high-dimensional, continuous spaces typical in robotics remains a challenging problem. To overcome this challenge, a popular approach has been to use demonstrations to find an appropriate initialisation of the policy in an attempt to reduce the number of iterations needed to find a solution. Here, we present an alternative way to incorporate prior knowledge from demonstrations of individual postures into learning, by extracting the inherent problem structure to find an efficient state representation. In particular, we use probabilistic, nonlinear dimensionality reduction to capture latent constraints present in the data. By learning policies in the learnt latent space, we are able to solve the planning problem in a reduced space that automatically satisfies task constraints. As shown in our experiments, this reduces the exploration needed and greatly accelerates the learning. We demonstrate our approach for learning a bimanual reaching task on the 19-DOF KHR-1HV humanoid.
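As an illustration of extracting a latent constraint from demonstrated postures, the sketch below uses plain linear PCA (via power iteration) in place of the probabilistic, nonlinear reduction used in the paper; the two-joint coupling is synthetic and purely illustrative.

```python
import random

def first_pc(data, iters=200):
    """Leading principal component via power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]   # normalise; converges to the dominant eigenvector
    return v

# Demonstrated postures: two joints coupled by a task constraint q2 ~ 2*q1.
rng = random.Random(2)
postures = [[q, 2.0 * q + rng.gauss(0.0, 0.01)]
            for q in [rng.uniform(-1.0, 1.0) for _ in range(200)]]
v = first_pc(postures)
ratio = v[1] / v[0]   # recovered coupling between the two joints
```

The recovered direction encodes the joint coupling, so a policy searched along this 1-D latent coordinate automatically respects the demonstrated constraint, which is the mechanism the abstract describes.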


NeuroImage | 2015

Comparison of variants of canonical correlation analysis and partial least squares for combined analysis of MRI and genetic data

Claudia Grellmann; Sebastian Bitzer; Jane Neumann; Lars T. Westlye; Ole A. Andreassen; Arno Villringer; Annette Horstmann

The standard analysis approach in neuroimaging genetics studies is the mass-univariate linear modeling (MULM) approach. From a statistical view, however, this approach is disadvantageous, as it is computationally intensive, cannot account for complex multivariate relationships, and has to be corrected for multiple testing. In contrast, multivariate methods offer the opportunity to include combined information from multiple variants to discover meaningful associations between genetic and brain imaging data. We assessed three multivariate techniques, partial least squares correlation (PLSC), sparse canonical correlation analysis (sparse CCA) and Bayesian inter-battery factor analysis (Bayesian IBFA), with respect to their ability to detect multivariate genotype-phenotype associations. Our goal was to systematically compare these three approaches with respect to their performance and to assess their suitability for high-dimensional and multi-collinearly dependent data, as is the case in neuroimaging genetics studies. In a series of simulations using both linearly independent and multi-collinear data, we show that sparse CCA and PLSC are suitable even for very high-dimensional collinear imaging data sets. Among those two, the predictive power was higher for sparse CCA when voxel numbers were below 400 times sample size and candidate SNPs were considered. Accordingly, we recommend sparse CCA for candidate-phenotype, candidate-SNP studies. When voxel numbers exceeded 500 times sample size, the predictive power was highest for PLSC. Therefore, PLSC can be considered a promising technique for multivariate modeling of high-dimensional brain-SNP associations. In contrast, Bayesian IBFA cannot be recommended, since additional post-processing steps were necessary to detect causal relations. To verify the applicability of sparse CCA and PLSC, we applied them to an experimental imaging genetics data set provided to us. Most importantly, application of both methods replicated the findings of this data set.
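As a toy illustration of the PLS side of this comparison, the sketch below extracts the first PLS weight vector as the leading singular direction of the cross-covariance matrix, on tiny synthetic "SNP" and "imaging" data; the dimensions and the power-iteration shortcut are illustrative simplifications, not the study's pipeline.

```python
import random

def centre(M):
    """Column-centre a matrix stored as a list of rows."""
    n, d = len(M), len(M[0])
    mu = [sum(r[j] for r in M) / n for j in range(d)]
    return [[r[j] - mu[j] for j in range(d)] for r in M]

def first_pls_weights(X, Y, iters=100):
    """First PLS X-weight vector: leading left singular vector of C = Xc^T Yc."""
    Xc, Yc = centre(X), centre(Y)
    p, q = len(Xc[0]), len(Yc[0])
    C = [[sum(Xc[n][i] * Yc[n][j] for n in range(len(Xc))) for j in range(q)]
         for i in range(p)]
    w = [1.0] * p
    for _ in range(iters):
        t = [sum(C[i][j] * w[i] for i in range(p)) for j in range(q)]   # C^T w
        w = [sum(C[i][j] * t[j] for j in range(q)) for i in range(p)]   # C (C^T w)
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return w

rng = random.Random(3)
# Synthetic "genotype" X: 3 SNPs; "imaging" Y driven only by SNP 0.
X = [[rng.gauss(0.0, 1.0) for _ in range(3)] for _ in range(300)]
Y = [[x[0] + rng.gauss(0.0, 0.1), -x[0] + rng.gauss(0.0, 0.1)] for x in X]
w = first_pls_weights(X, Y)
```

The weight vector loads almost entirely on the one variant that actually drives the phenotype, which is the kind of multivariate association the MULM approach has to recover one test at a time.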


Biological Cybernetics | 2012

Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

Sebastian Bitzer; Stefan J. Kiebel

Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a ‘recognizing RNN’ (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
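The prediction/prediction-error message passing can be caricatured in one dimension. The sketch below uses a scalar tanh recurrence as the generative "RNN" and a fixed, hand-chosen gain in place of the paper's derived Bayesian update equations; everything here is an illustrative assumption.

```python
import math
import random

def rrnn_decode(obs, a=1.2, k=0.5, x0=0.0):
    """Recognition pass: propagate a prediction, correct with a weighted prediction error."""
    xhat, est = x0, []
    for y in obs:
        xpred = math.tanh(a * xhat)   # prediction message from the generative dynamics
        err = y - xpred               # prediction error message
        xhat = xpred + k * err        # posterior-style correction with fixed gain k
        est.append(xhat)
    return est

# Generative dynamics (deterministic here), observed through Gaussian noise.
rng = random.Random(4)
true_x, x = [], 0.5
for _ in range(300):
    x = math.tanh(1.2 * x)
    true_x.append(x)
obs = [xt + rng.gauss(0.0, 0.3) for xt in true_x]

est = rrnn_decode(obs)
mse_obs = sum((o - t) ** 2 for o, t in zip(obs, true_x)) / len(obs)
mse_est = sum((e - t) ** 2 for e, t in zip(est, true_x)) / len(est)
```

Because each estimate blends the model's prediction with the noisy observation, the decoded trajectory is substantially closer to the true latent state than the raw observations, illustrating the noise robustness the abstract claims for the rRNN.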


Cognition & Emotion | 2015

Human preferences are biased towards associative information.

Sabrina Trapp; Amitai Shenhav; Sebastian Bitzer; Moshe Bar

There is ample evidence that the brain generates predictions that help interpret sensory input. To build such predictions the brain capitalizes upon learned statistical regularities and associations (e.g., “A” is followed by “B”; “C” appears together with “D”). The centrality of predictions to mental activities gave rise to the hypothesis that associative information with predictive value is perceived as intrinsically valuable. Such value would ensure that this information is proactively searched for, thereby promoting certainty and stability in our environment. We therefore tested here whether, all else being equal, participants would prefer stimuli that contained more rather than less associative information. In Experiments 1 and 2 we used novel, meaningless visual shapes and showed that participants preferred associative shapes over shapes that had not been associated with other shapes during training. In Experiment 3 we used pictures of real-world objects and again demonstrated a preference for stimuli that elicit stronger associations. These results support our proposal that predictive information is affectively tagged, and enhance our understanding of the formation of everyday preferences.


PLOS Computational Biology | 2015

A Bayesian attractor model for perceptual decision making

Sebastian Bitzer; Jelle Bruineberg; Stefan J. Kiebel

Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel account of how perceptual decisions are made, combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks.
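A bare-bones caricature of noisy evidence driving a bistable attractor is sketched below. It omits the paper's probabilistic (Bayesian inference) machinery and its confidence measure; the double-well dynamics and all parameter values are illustrative assumptions.

```python
import random

def attractor_decision(evidence_mean, T=2000, dt=0.005, gain=2.0,
                       noise=0.2, rng=None):
    """Noisy evidence pushes a bistable attractor (dz = z - z^3) toward one of +/-1."""
    rng = rng or random.Random()
    z = 0.0
    for _ in range(T):
        e = evidence_mean + rng.gauss(0.0, 1.0)       # noisy momentary evidence
        dz = (z - z ** 3) + gain * e                  # double-well force + evidence drive
        z += dz * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
    return 1 if z > 0 else -1                         # decision = which attractor was reached

rng = random.Random(5)
choices = [attractor_decision(0.5, rng=rng) for _ in range(100)]
accuracy = sum(c == 1 for c in choices) / len(choices)
```

Because the attractor state is continuously driven by the incoming evidence, reversing the stimulus mid-trial can pull the state into the other basin, which is the decision-updating behaviour the full model captures.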


PLOS ONE | 2014

Hysteresis as an implicit prior in tactile spatial decision making

Sabrina D. Thiel; Sebastian Bitzer; Till Nierhaus; Christian Kalberlah; Sven Preusser; Jane Neumann; Vadim V. Nikulin; Elke van der Meer; Arno Villringer; Burkhard Pleger

Perceptual decisions not only depend on the incoming information from sensory systems but constitute a combination of current sensory evidence and internally accumulated information from past encounters. Although recent evidence emphasizes the fundamental role of prior knowledge for perceptual decision making, only a few studies have quantified the relevance of such priors on perceptual decisions and examined their interplay with other decision-relevant factors, such as the stimulus properties. In the present study we asked whether hysteresis, describing the stability of a percept despite a change in stimulus property and known to occur at perceptual thresholds, also acts as a form of an implicit prior in tactile spatial decision making, supporting the stability of a decision across successively presented random stimuli (i.e., decision hysteresis). We applied a variant of the classical 2-point discrimination task and found that hysteresis influenced perceptual decision making: Participants were more likely to decide ‘same’ rather than ‘different’ on successively presented pin distances. In a direct comparison between the influence of applied pin distances (explicit stimulus property) and hysteresis, we found that on average, stimulus property explained significantly more variance of participants’ decisions than hysteresis. However, when focusing on pin distances at threshold, we found a trend for hysteresis to explain more variance. Furthermore, the less variance was explained by the pin distance on a given decision, the more variance was explained by hysteresis, and vice versa. Our findings suggest that hysteresis acts as an implicit prior in tactile spatial decision making that becomes increasingly important when explicit stimulus properties provide decreasing evidence.
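Decision hysteresis of this kind is straightforward to simulate. The sketch below is a hypothetical observer model (logistic choice on stimulus evidence plus a bias toward the previous response), not the paper's analysis; the threshold, slope, and hysteresis strength are invented for illustration.

```python
import math
import random

def simulate_observer(distances, threshold=1.0, slope=3.0, h=0.8, rng=None):
    """Binary 'two points?' decisions with a bias h toward the previous response."""
    rng = rng or random.Random()
    prev, responses = 0.0, []
    for d in distances:
        drive = slope * (d - threshold) + h * prev   # stimulus evidence + hysteresis
        p_two = 1.0 / (1.0 + math.exp(-drive))
        r = 1.0 if rng.random() < p_two else -1.0
        responses.append(r)
        prev = r
    return responses

rng = random.Random(6)
# Random pin distances straddling the 2-point threshold.
distances = [rng.uniform(0.5, 1.5) for _ in range(2000)]
resp = simulate_observer(distances, rng=rng)
resp0 = simulate_observer(distances, h=0.0, rng=random.Random(7))

repeats = sum(a == b for a, b in zip(resp, resp[1:])) / (len(resp) - 1)
repeats0 = sum(a == b for a, b in zip(resp0, resp0[1:])) / (len(resp0) - 1)
```

With hysteresis switched on, successive responses repeat well above the chance level of the hysteresis-free observer, mirroring the ‘same’-response bias reported in the study.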


Scientific Reports | 2016

Spatiotemporal dynamics of random stimuli account for trial-to-trial variability in perceptual decision making.

Hame Park; Jan-Matthis Lueckmann; Katharina von Kriegstein; Sebastian Bitzer; Stefan J. Kiebel

Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling the stimuli precisely at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models.

Collaboration


An overview of Sebastian Bitzer's collaborations.

Top Co-Authors


Stefan J. Kiebel

Dresden University of Technology

Arno Villringer

Humboldt State University
