Matej Hoffmann
Istituto Italiano di Tecnologia
Publications
Featured research published by Matej Hoffmann.
IEEE Transactions on Autonomous Mental Development | 2010
Matej Hoffmann; Hugo Gravato Marques; Alejandro Hernandez Arieta; Hidenobu Sumioka; Max Lungarella; Rolf Pfeifer
How is our body imprinted in our brain? This seemingly simple question is the subject of investigation in diverse disciplines: psychology and philosophy originally, complemented more recently by the neurosciences. Despite substantial efforts, the mysteries of body representations are far from uncovered. The most widely used notions, body image and body schema, are still waiting to be clearly defined. The mechanisms that underlie body representations are co-responsible for the admirable capabilities that humans and many mammals display: combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth and failures, and using tools. These features are also desirable in robots. This paper surveys body representations in biology from a functional, or computational, perspective to lay the ground for a review of the concept of body schema in robotics. First, we examine application-oriented research: how a robot can improve its capabilities by being able to automatically synthesize, extend, or adapt a model of its body. Second, we summarize the research area in which robots are used as tools to verify hypotheses on the mechanisms underlying biological body representations. We identify trends in these research areas and propose future research directions.
International Conference on Robotics and Automation | 2011
Michal Reinstein; Matej Hoffmann
The ability to estimate its posture and to gauge the distance it has traveled is important for any mobile robot. This information can be obtained from various sources. In this work, we address this problem in a dynamic quadruped robot. We designed and implemented a navigation algorithm for full body state (position, velocity, and attitude) estimation that does not use any external reference (such as GPS or visual landmarks). An extended Kalman filter was used to provide error estimation and data fusion from two independent sources of information: an inertial navigation system mechanization algorithm processing raw inertial data, and legged odometry, which provided velocity aiding. We present a novel data-driven architecture for legged odometry that relies on a combination of joint sensor signals and pressure sensors. Our navigation system ensures precise tracking of a running robot's posture (roll and pitch) and satisfactory tracking of its position over medium time intervals. We show that our method works for two different dynamic turning gaits and on two terrains with significantly different friction, and we successfully demonstrate how it generalizes to different velocities.
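For readers unfamiliar with this kind of sensor fusion, a minimal sketch of a Kalman-filter velocity-aiding loop follows. It is illustrative only: the state, the noise parameters, and the `ins_velocity`/`odometry_velocity` inputs are hypothetical stand-ins, and the filter in the paper estimates the full position/velocity/attitude error state rather than a velocity error alone.

```python
import numpy as np

# Illustrative Kalman update fusing an INS-predicted velocity with a
# legged-odometry velocity measurement (hypothetical simplification).
dt = 0.01                   # IMU sample period [s]
x = np.zeros(3)             # state: 3-D velocity error estimate [m/s]
P = np.eye(3) * 0.1         # state covariance
Q = np.eye(3) * 1e-4        # process noise: inertial integration drift
R = np.eye(3) * 1e-2        # measurement noise: odometry uncertainty

def step(x, P, ins_velocity, odometry_velocity):
    # Predict: the velocity error random-walks as the INS drifts.
    P = P + Q * dt
    # Update: the observed error is INS velocity minus odometry velocity.
    z = ins_velocity - odometry_velocity
    K = P @ np.linalg.inv(P + R)        # Kalman gain (H = I here)
    x = x + K @ (z - x)
    P = (np.eye(3) - K) @ P
    return x, P

# The corrected velocity estimate would then be ins_velocity - x.
```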
IEEE Transactions on Robotics | 2013
Michal Reinstein; Matej Hoffmann
The ability to estimate its posture and to gauge the distance it has traveled is important for any mobile robot. In this paper, we address this problem in a dynamic quadruped robot by combining traditional state estimation methods with machine learning. We designed and implemented a navigation algorithm for full body state (position, velocity, and attitude) estimation that uses no external reference and relies on multimodal proprioceptive sensory information only. An extended Kalman filter (EKF) was used to provide error estimation and data fusion from two independent sources of information: 1) a strapdown mechanization algorithm processing raw inertial data and 2) legged odometry. We devised a novel legged odometer that combines information from a multimodal combination of sensors (joint and pressure). We show that our method works for a dynamic turning gait, and we successfully demonstrate how it generalizes to different velocities and terrains. Furthermore, our solution proved to be immune to substantial slippage of the robot's feet.
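A data-driven legged odometer lends itself to a simple regression formulation: learn a mapping from joint and pressure signals to velocity, then use the prediction as the EKF aiding measurement. A minimal sketch under that assumption, using ordinary least squares on synthetic data; the paper's actual learning method and feature design may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: per-stride features derived from joint angles
# and pressure sensors (here random), with ground-truth velocity labels
# that would come from an external reference during training only.
features = rng.normal(size=(1000, 8))      # joint + pressure features
true_map = rng.normal(size=8)
velocity = features @ true_map + 0.1 * rng.normal(size=1000)

# Fit a linear odometer by least squares.
w, *_ = np.linalg.lstsq(features, velocity, rcond=None)

def odometry_velocity(feature_vector):
    """Velocity estimate used as the EKF aiding measurement."""
    return feature_vector @ w
```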
Advances in Complex Systems | 2013
Nico M. Schmidt; Matej Hoffmann; Kohei Nakajima; Rolf Pfeifer
Animals and humans engage in an enormous variety of behaviors which are orchestrated through a complex interaction of physical and informational processes: the physical interaction of the body with the environment is intimately coupled with informational processes in the animal's brain. A crucial step toward the mastery of all these behaviors seems to be to understand the flows of information in the sensorimotor networks. In this study, we performed a quantitative analysis in an artificial agent (a running quadruped robot with multiple sensory modalities) using tools from information theory (transfer entropy). Starting from very little prior knowledge, through systematic variation of control signals and environment, we show how the agent can discover the structure of its sensorimotor space, identify proprioceptive and exteroceptive sensory modalities, and acquire a primitive body schema. In summary, we show how the analysis of directed information flows in an agent's sensorimotor networks can be used to bootstrap its perception and development.
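Transfer entropy quantifies directed information flow from one time series to another. A minimal histogram-based estimator with history length 1 is sketched below; the study's actual estimator, binning, and embedding parameters may differ.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=8):
    """Histogram estimate of TE(X -> Y) in bits, history length 1."""
    # Discretize both signals into equal-width bins.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))  # (y_t+1, y_t, x_t)
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))          # (y_t+1, y_t)
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))         # (y_t, x_t)
    singles = Counter(yd[:-1])                        # (y_t)
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

# Example: information flows from a motor command to a driven sensor.
rng = np.random.default_rng(0)
motor = rng.normal(size=5000)
sensor = np.roll(motor, 1) + 0.5 * rng.normal(size=5000)
print(transfer_entropy(motor, sensor), transfer_entropy(sensor, motor))
```

Applied pairwise between motor commands and sensor channels, the resulting matrix of TE values indicates which channels are driven by the agent's own actions, which is the kind of structure the study extracts.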
Artificial Life | 2017
Vincent C. Müller; Matej Hoffmann
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “offloading computation from the brain to the body,” where the body is said to perform “morphological computation.” Our investigation of four characteristic cases of morphological computation in animals and robots shows that the “offloading” perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as reservoir computing, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: the question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control, and how it contributes to the overall orchestration of intelligent behavior.
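To make category (3) concrete: in reservoir computing, a fixed dynamical system transforms inputs into a rich state from which only a simple linear readout is trained. A minimal echo state network sketch, purely illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                    # reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, 1))      # input weights (fixed)
W = rng.uniform(-0.5, 0.5, (N, N))         # recurrent weights (fixed)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collect states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.array([u_t]))
        states.append(x.copy())
    return np.array(states)

# Only a linear readout is trained, e.g. to reproduce a delayed input.
u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)
target = np.roll(u, 5)                     # 5-step-delayed copy of input
W_out = np.linalg.lstsq(X[50:], target[50:], rcond=None)[0]
```

In the cases the paper calls morphological computation proper, a physical body plays the role of the fixed reservoir; only then is "computation" by the body a literal description.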
International Conference on Robotics and Automation | 2011
Marc Ziegler; Matej Hoffmann; Juan Pablo Carbajal; Rolf Pfeifer
Fish excel in their swimming capabilities. These result from a dynamic interplay of actuation, the passive properties of the fish body, and interaction with the surrounding fluid. In particular, fish are able to exploit wakes that are generated by objects in flowing water. A powerful demonstration that this is largely due to passive body properties is provided by studies on dead trout. Inspired by this, we developed a multi-joint swimming platform that explores the potential of a passive dynamic mechanism. The platform has only one actuated joint, followed by three passive joints whose stiffness can be changed online and individually, and can be set to an almost arbitrary nonlinear stiffness profile. In a set of experiments using online optimization, we investigated how the platform can discover an optimal stiffness distribution along its body in response to different frequencies and amplitudes of actuation. We show that a heterogeneous stiffness distribution (each joint having a different value) outperforms a homogeneous one in producing thrust. Furthermore, different gaits emerged for different settings of the actuated joint. This work illustrates the potential of online adaptation of passive body properties, leading to optimized swimming, especially in an unsteady environment.
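The online optimization can be pictured as a derivative-free search over per-joint stiffness values scored by measured thrust. A sketch using a simple (1+1) evolution strategy with a synthetic surrogate for the thrust measurement; both the optimizer and the surrogate are assumptions for illustration, not the platform's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_thrust(stiffness):
    # Placeholder for a thrust measurement on the physical platform;
    # here a synthetic surrogate with a heterogeneous optimum.
    optimum = np.array([0.8, 0.5, 0.2])
    return -np.sum((stiffness - optimum) ** 2)

# (1+1) evolution strategy over the three passive-joint stiffnesses.
stiffness = np.full(3, 0.5)
best = measure_thrust(stiffness)
for trial in range(200):
    candidate = np.clip(stiffness + rng.normal(0, 0.05, 3), 0.0, 1.0)
    score = measure_thrust(candidate)
    if score > best:
        stiffness, best = candidate, score
print(stiffness)  # drifts toward a heterogeneous stiffness distribution
```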
Simulation of Adaptive Behavior | 2012
Matej Hoffmann; Nico M. Schmidt; Rolf Pfeifer; Andreas K. Engel; Alexander Maye
In conventional “sense-think-act” control architectures, perception is reduced to a passive collection of sensory information, followed by a mapping onto a prestructured internal world model. For biological agents, sensorimotor contingency theory (SMCT) posits that perception is not an isolated processing step but is constituted by knowing and exercising the law-like relations between actions and the resulting changes in sensory stimulation. We present a computational model of SMCT for controlling the behavior of a quadruped robot running on different terrains. Our experimental study demonstrates that: (i) sensorimotor contingencies (SMCs) enable better discrimination of environmental properties than conventional recognition from sensory signals alone; (ii) discrimination is further improved by considering the action context on a longer time scale; and (iii) the robot can use this knowledge to adapt its behavior to maximize its stability.
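The gist of point (i) can be illustrated by building features that couple motor commands with their sensory consequences, rather than describing the sensory stream alone. A hypothetical sketch; the feature set and the nearest-centroid classifier are assumptions, not the paper's model.

```python
import numpy as np

def smc_features(motor, sensor):
    """Joint action-sensation statistics over a time window: unlike
    sensor-only features, these capture how sensation depends on action."""
    return np.array([
        sensor.mean(), sensor.std(),           # sensor-only statistics
        np.corrcoef(motor, sensor)[0, 1],      # action-sensation coupling
        np.mean(np.abs(np.gradient(sensor))),  # response dynamics
    ])

def classify_terrain(window_features, centroids):
    """Nearest-centroid terrain label; centroids learned per terrain."""
    dists = {name: np.linalg.norm(window_features - c)
             for name, c in centroids.items()}
    return min(dists, key=dists.get)
```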
PLOS ONE | 2016
Alessandro Roncone; Matej Hoffmann; Ugo Pattacini; Luciano Fadiga; Giorgio Metta
This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real time via simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to understanding the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes a hypothesis about the learning mechanism for peripersonal space; in relation to ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance of or reaching for an incoming stimulus; and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
Intelligent Robots and Systems | 2015
Alessandro Roncone; Matej Hoffmann; Ugo Pattacini; Giorgio Metta
With robots leaving factory environments and entering less controlled domains, possibly sharing living space with humans, safety needs to be guaranteed. To this end, some form of awareness of the robot's body surface and the space surrounding it is desirable. In this work, we present a unique method that lets a robot learn a distributed representation of the space around its body (its peripersonal space) by exploiting a whole-body artificial skin and physical contact with the environment. Every taxel (tactile element) has a visual receptive field anchored to it. Starting from an initially blank state, the distance of every object entering this receptive field is visually perceived and recorded, together with information about whether the object eventually contacted the particular skin area. This gives rise to a set of probabilities, updated incrementally, that carry information about the likelihood of particular events in the environment contacting a particular set of taxels. The learned representation naturally serves the purpose of predicting contacts with the whole body of the robot, which is of clear behavioral relevance. Furthermore, we devised a simple avoidance controller that is triggered by this representation, thus endowing the robot with a “margin of safety” around its body. Finally, simply reversing the sign in the controller gives rise to simple “reaching” for objects in the robot's vicinity, which automatically proceeds with the most activated (closest) body part.
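The incremental probability update can be captured in a few lines: per taxel, bin the perceived distance of an approaching object and count how often objects seen in each bin went on to touch that taxel. A simplified sketch of that bookkeeping; the bin count, range, and class interface are assumptions.

```python
import numpy as np

class TaxelRF:
    """Discretized receptive field of one taxel: estimates P(contact |
    object observed at distance bin d) by incremental frequency counts.
    A simplification of the representation described above."""
    def __init__(self, max_dist=0.2, bins=10):
        self.edges = np.linspace(0.0, max_dist, bins + 1)
        self.seen = np.zeros(bins)   # object observed in this bin
        self.hits = np.zeros(bins)   # ...and later contacted the taxel

    def _bin(self, distance):
        return int(np.clip(np.digitize(distance, self.edges) - 1,
                           0, len(self.seen) - 1))

    def update(self, distance, contacted):
        b = self._bin(distance)
        self.seen[b] += 1
        if contacted:
            self.hits[b] += 1

    def p_contact(self, distance):
        b = self._bin(distance)
        return self.hits[b] / self.seen[b] if self.seen[b] else 0.0
```

An avoidance controller can then trigger whenever `p_contact` exceeds a threshold for some taxel; reversing the sign of the commanded motion, as the abstract notes, turns the same machinery into reaching with the most activated body part.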
IEEE Transactions on Cognitive and Developmental Systems | 2018
Matej Hoffmann; Zdenek Straka; Igor Farkaš; Michal Vavrečka; Giorgio Metta
Using the iCub humanoid robot with an artificial pressure-sensitive skin, we investigate how representations of the whole skin surface, resembling those found in the primate primary somatosensory cortex, can be formed from local tactile stimulations traversing the body of the physical robot. We employ the well-known self-organizing map algorithm and introduce a modification that makes it possible to restrict the maximum receptive field (MRF) size of neuron groups in the output layer. This is motivated by findings from biology, where the basic somatotopy of the cortical sheet seems to be prescribed genetically and connections are localized to particular regions. We explore different settings of the MRF and the effect of activity-independent (input-output connection constraints implemented by the MRF) and activity-dependent (learning from skin stimulations) mechanisms on the formation of the tactile map. The framework conveniently allows one to specify prior knowledge regarding the skin topology and thus to effectively seed a particular representation that training then shapes further. Furthermore, we show that the MRF modification facilitates learning in situations when concurrent stimulation at nonadjacent places occurs (“multitouch”). The procedure proved sufficiently robust, is not demanding in terms of data collection, and can be applied to any robot for which a representation of its “skin” is desirable.
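A minimal sketch of a self-organizing map whose output neurons are constrained by a maximum-receptive-field mask, so that each half of the map can only win for, and be updated by, stimulations from its assigned half of the skin. The 8x8 map, the left/right split, and all parameters are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_out, dim = 64, 2                  # 8x8 output map, 2-D skin coordinates
grid = np.array([(i, j) for i in range(8) for j in range(8)])
weights = rng.uniform(0, 1, (n_out, dim))

# MRF mask: neurons in the left half of the map may only respond to
# stimulations from the left half of the skin, and vice versa.
mrf = grid[:, 1] < 4

def train_step(x, lr=0.1, sigma=1.5):
    allowed = mrf if x[0] < 0.5 else ~mrf       # restrict winner search
    dists = np.linalg.norm(weights - x, axis=1)
    dists[~allowed] = np.inf
    win = np.argmin(dists)
    # Gaussian neighborhood on the map grid, zeroed outside the MRF so
    # that updates cannot leak across the prescribed boundary.
    h = np.exp(-np.sum((grid - grid[win]) ** 2, axis=1) / (2 * sigma ** 2))
    h[~allowed] = 0.0
    weights[:] += lr * h[:, None] * (x - weights)

for _ in range(2000):
    train_step(rng.uniform(0, 1, 2))            # random skin stimulations
```

The mask implements the activity-independent constraint; the weight updates implement the activity-dependent learning from stimulations.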