Sebastian Urban
Technische Universität München
Publication
Featured research published by Sebastian Urban.
international conference on neural information processing | 2013
Christian Osendorfer; Justin Bayer; Sebastian Urban; Patrick van der Smagt
We investigate whether a deep Convolutional Neural Network can learn representations of local image patches that are usable in the important task of keypoint matching. We examine several possible loss functions for this correspondence task and show empirically that a newly suggested loss formulation allows a Convolutional Neural Network to find compact local image descriptors that perform comparably to state-of-the-art approaches.
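The abstract compares loss functions for learning patch descriptors. One standard formulation for such correspondence tasks is a contrastive loss over descriptor pairs; the sketch below is illustrative (the margin value, pairing scheme, and function name are assumptions, not taken from the paper):

```python
import numpy as np

def contrastive_loss(d1, d2, same, margin=1.0):
    """Contrastive loss over pairs of patch descriptors.

    d1, d2 : (n, k) arrays of descriptors for the two patches in each pair
    same   : (n,) boolean, True if the pair shows the same keypoint
    """
    dist = np.linalg.norm(d1 - d2, axis=1)
    pos = same * dist**2                                  # pull matches together
    neg = (~same) * np.maximum(0.0, margin - dist)**2     # push non-matches apart
    return np.mean(pos + neg)
```

Training a network under such a loss drives descriptors of matching patches close in Euclidean space while separating non-matches by at least the margin.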
international conference on robotics and automation | 2014
Nutan Chen; Sebastian Urban; Christian Osendorfer; Justin Bayer; Patrick van der Smagt
Estimating human fingertip forces is required to understand force distribution in grasping and manipulation. Human grasping behavior can then be used to develop force- and impedance-based grasping and manipulation strategies for robotic hands. However, estimating human grip force naturally is only possible with instrumented objects or unnatural gloves, thus greatly limiting the types of objects used. In this paper we describe an approach which uses images of the human fingertip to reconstruct grip force and torque at the finger. Our approach does not use finger-mounted equipment, but instead a steady camera observing the fingers of the hand from a distance. This allows for finger force estimation without any physical interference with the hand or object itself, and is therefore universally applicable. We construct a 3-dimensional finger model from 2D images. Convolutional Neural Networks (CNNs) are used to predict the transformation matrix from the 2D image to the 3D model. Two CNN variants are designed, with separate and combined outputs for orientation and position. After learning, our system shows an alignment accuracy of over 98% on unknown data. In the final step, a Gaussian process estimates finger force and torque from the aligned images based on color changes and deformations of the nail and its surrounding skin. Experimental results show an accuracy of about 95% for force estimation and 90% for torque estimation.
Neural Computation | 2013
Jan-moritz Peter Franosch; Sebastian Urban; J. Leo van Hemmen
How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as “supervisor.” Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
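As a rough illustration of the mechanism, spike-timing-dependent plasticity updates a synapse according to a learning window over the time difference between a presynaptic spike and a postsynaptic spike, which here is driven by the supervising modality. The exponential window shape and all constants below are generic textbook choices, not the paper's:

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Exponential STDP learning window W(dt), with dt = t_post - t_pre in ms:
    potentiation when the presynaptic spike precedes the postsynaptic one,
    depression otherwise."""
    return np.where(dt >= 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

def supervised_stdp_update(w, pre_spikes, sup_spikes, lr=0.01):
    """Accumulate weight changes over all (presynaptic, supervisor) spike pairs.

    w          : (n,) synaptic weights, updated in place
    pre_spikes : list of spike-time lists, one per presynaptic neuron
    sup_spikes : spike times of the supervising ("teacher") signal
    """
    for i, spikes in enumerate(pre_spikes):
        for t_pre in spikes:
            for t_sup in sup_spikes:
                w[i] += lr * stdp_window(t_sup - t_pre)
    return w
```

Repeated application of such pair-based updates is the kind of rule whose convergence to a stable weight configuration the paper analyzes.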
ieee-ras international conference on humanoid robots | 2015
Nutan Chen; Justin Bayer; Sebastian Urban; Patrick van der Smagt
Predictive modeling of human or humanoid movement becomes increasingly complex as the dimensionality of those movements grows. Dynamic Movement Primitives (DMP) have been shown to be a powerful method of representing such movements, but do not generalize well when used in configuration or task space. To solve this problem we propose a model called autoencoded dynamic movement primitive (AE-DMP), which uses deep autoencoders to find a representation of movement in a latent feature space in which DMP can optimally generalize. The architecture embeds DMP into such an autoencoder and allows the whole to be trained as a unit. To further improve the model for multiple movements, sparsity is added to the feature-layer neurons, so that the various movements can be observed clearly in the feature space. After training, the sparsity allows the model to identify a single hidden neuron that can efficiently generate new movements. Our experiments clearly demonstrate the efficiency of missing-data imputation using 50-dimensional human movement data.
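For context, a plain discrete Dynamic Movement Primitive integrates a critically damped attractor toward a goal, modulated by a learned forcing term on a decaying phase variable. The minimal sketch below shows one such rollout; the gains, basis functions, and Euler integration scheme are illustrative assumptions, not the paper's AE-DMP:

```python
import numpy as np

def dmp_rollout(g, y0, weights, centers, widths,
                tau=1.0, dt=0.001, alpha=25.0, alpha_x=8.0):
    """Integrate a single discrete DMP with Euler steps over one period.

    Transformation system: tau*z' = alpha*(beta*(g - y) - z) + f(x), tau*y' = z
    Canonical system:      tau*x' = -alpha_x * x
    Forcing term f: normalized RBF mixture, scaled by the phase x.
    """
    beta = alpha / 4.0                       # critical damping
    y, z, x = float(y0), 0.0, 1.0
    traj = []
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        f = x * (g - y0) * psi.dot(weights) / (psi.sum() + 1e-10)
        z += dt * (alpha * (beta * (g - y) - z) + f) / tau
        y += dt * z / tau
        x += dt * (-alpha_x * x) / tau
        traj.append(y)
    return traj
```

With zero forcing weights the trajectory simply converges to the goal; the learned weights shape it into the demonstrated movement. AE-DMP runs this dynamics in the autoencoder's latent space rather than in joint space.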
international conference on neural information processing | 2013
Justin Bayer; Christian Osendorfer; Sebastian Urban; Patrick van der Smagt
We present a novel method to train predictive Gaussian distributions p(z|x) for regression problems with neural networks. While most approaches either ignore the variance or explicitly model it as another response variable, it is trained implicitly in our case. Establishing stochasticity by injecting noise into the input and hidden units, we approximate the outputs with a Gaussian distribution using the forward-propagation method introduced for fast dropout [1]. We have designed our method to respect this probabilistic interpretation of the output units in the loss function. The method is evaluated on a synthetic task and an inverse robot dynamics task, yielding superior performance to plain neural networks, Gaussian processes, and LWPR in terms of likelihood.
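As a simplified illustration of the idea, noise injected at the inputs can be propagated as a mean and a variance through a linear layer (under an independence assumption across units), and the resulting Gaussian output scored with its negative log-likelihood. This sketch omits nonlinearities and the full fast-dropout machinery; all names and constants are assumptions:

```python
import numpy as np

def gaussian_linear_forward(mean, var, W, b, noise_var=0.1):
    """Propagate an independent Gaussian over the inputs through a linear
    layer, after injecting additive input noise of variance noise_var."""
    var = var + noise_var            # noise injection at the input
    out_mean = W @ mean + b
    out_var = (W ** 2) @ var         # independence assumption across inputs
    return out_mean, out_var

def gaussian_nll(y, mean, var):
    """Per-sample negative log-likelihood of y under N(mean, var) --
    the training loss that makes the implicit variance meaningful."""
    return 0.5 * np.log(2 * np.pi * var) + (y - mean) ** 2 / (2 * var)
```

Training against the Gaussian negative log-likelihood (rather than squared error) is what lets the propagated variance act as a calibrated predictive uncertainty.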
intelligent robots and systems | 2013
Sebastian Urban; Justin Bayer; Christian Osendorfer; G. Westling; Benoni B. Edin; Patrick van der Smagt
We demonstrate a simple approach with which finger force can be measured from nail coloration. By automatically extracting features from nail images of a finger-mounted CCD camera, we can directly relate these images to the force measured by a force-torque sensor. The method automatically corrects orientation and illumination differences. Using Gaussian processes, we can relate preprocessed images of the fingernail to the measured force and torque of the finger, allowing us to predict the finger force at 95%-98% accuracy at force ranges up to 10 N, and torque at around 90% accuracy, based on training data gathered in 90 s.
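The Gaussian-process regression step can be sketched generically: an RBF kernel relates image-feature vectors, and the predictive mean and variance follow from the standard GP equations. The hyperparameters and the feature representation below are placeholders, not the paper's:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """GP regression with an RBF kernel: predictive mean and variance.

    X_train : (n, d) training inputs (e.g. preprocessed nail-image features)
    y_train : (n,) targets (e.g. measured force)
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

    K = k(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = sigma_f**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

In practice the kernel hyperparameters would be fit by maximizing the marginal likelihood on the 90 s of training data.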
intelligent robots and systems | 2015
Nutan Chen; Sebastian Urban; Justin Bayer; Patrick van der Smagt
Robust fingertip force detection from fingernail images is a capability applicable in many areas. However, prior research fixed many variables that influence the finger color change. This paper analyzes the effect of the finger joints on force detection in order to relax the constrained finger-position setting. A force estimator is designed: a model is trained to predict the fingertip force from the finger joint angles, measured with 2D cameras and three rectangular markers, in combination with the fingernail images. The error caused by the color changes of joint bending can then be avoided. This is a significant step forward from finger force estimators that require a tedious fixed finger-joint setting. The approach is evaluated experimentally. The results show that it increases the force-estimation accuracy by over 10% when the finger joints move freely. The estimator is used to demonstrate lifting and replacing objects of various weights.
IEEE Sensors Journal | 2015
Sebastian Urban; Marvin Ludersdorfer; Patrick van der Smagt
We deal with the problem of estimating the true measured scalar quantity from the output signal of a sensor that is afflicted with hysteresis and noise. We use a probabilistic, nonparametric sensor model based on heteroscedastic Gaussian processes (GPs), which is trained using a data set of sensor output and ground truth pairs. The inference problem is formulated as state estimation in a dynamical system. We exploit the low dimensionality of the latent state space of the sensor to perform exact probabilistic inference of the measured quantity from a time series of the sensor's output. Compared with the state-of-the-art assumed density filtering algorithm for GPs, which analytically approximates the posterior by a normal distribution during inference, our method reduces the prediction error by 33% on a data set obtained from a novel flexible tactile sensor based on carbon-black-filled elastomers. The proposed model can be applied, but is not limited, to any sensor for which the Preisach model of hysteresis holds. The use of probabilistic modeling and inference provides not only a most likely estimate of the measured quantity but also the corresponding confidence interval.
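The Preisach model mentioned in the abstract represents hysteresis as a weighted superposition of elementary relay operators (hysterons), each switching up and down at its own pair of thresholds. A minimal discrete sketch, with illustrative thresholds and weights:

```python
import numpy as np

def preisach_output(input_series, thresholds, weights):
    """Discrete Preisach hysteresis model.

    input_series : sequence of scalar inputs over time
    thresholds   : list of (alpha, beta) relay thresholds, alpha < beta
    weights      : (n,) weight per relay
    Each relay switches to +1 when the input rises past beta, to -1 when it
    falls past alpha, and keeps its previous state in between -- this memory
    is what produces the hysteresis loop.
    """
    state = -np.ones(len(thresholds))      # each relay starts 'down'
    out = []
    for u in input_series:
        for i, (alpha, beta) in enumerate(thresholds):
            if u >= beta:
                state[i] = 1.0
            elif u <= alpha:
                state[i] = -1.0
        out.append(float(np.dot(weights, state)))
    return out
```

The relay states form the low-dimensional latent state that the paper's GP-based filter tracks: the same input value maps to different outputs depending on the input history.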
Archive | 2014
Andreas N. Vollmayr; Stefan Sosnowski; Sebastian Urban; Sandra Hirche; J. Leo van Hemmen
In this work we present Snookie, an autonomous underwater vehicle with an artificial lateral-line system. Integration of the artificial lateral-line system with other sensory modalities is intended to enable the robot to perform behaviors as observed in fish, such as obstacle detection and geometrical-shape reconstruction by means of hydrodynamic images. The present chapter consists of three sections devoted to the design of the robot, its lateral-line system, and the processing of the ensuing flow-sensory data. The artificial lateral-line system of Snookie is presented in detail, together with a simple version of a flow reconstruction algorithm applicable to both the artificial lateral-line system and, e.g., the blind Mexican cave fish. In particular, the first section deals with the development of the autonomous underwater vehicle Snookie, which provides the functionality and is tailored to the requirements of the artificial lateral-line system. The second section is devoted to the implementation of the artificial lateral-line system, which consists of an array of hot thermistor anemometers to be integrated in the nozzle. In the final section, the information processing ensuing from the flow sensors and leading to conclusions about the environment is presented. The measurement of the tangential velocities at the artificial lateral-line system, together with the no-penetration condition, provides the robot with Cauchy boundary conditions, so that the hydrodynamic mapping of potential flow onto the lateral line can be inverted. Through this inversion, information about objects in the neighbourhood, which alter the flow field, becomes accessible from the flow around the artificial lateral line.
international conference on learning representations | 2014
Justin Bayer; Christian Osendorfer; Daniela Korhammer; Nutan Chen; Sebastian Urban; Patrick van der Smagt