Alexander V. Terekhov
University of Paris
Publications
Featured research published by Alexander V. Terekhov.
Neuron | 2014
Henrik Jörntell; Fredrik Bengtsson; Pontus Geborek; Anton Spanne; Alexander V. Terekhov; Vincent Hayward
Our tactile perception of external objects depends on skin-object interactions. The mechanics of contact dictates the existence of fundamental spatiotemporal input features—contact initiation and cessation, slip, and rolling contact—that originate from the fact that solid objects do not interpenetrate. However, it is unknown whether these features are represented within the brain. We used a novel haptic interface to deliver such inputs to the glabrous skin of finger/digit pads and recorded from neurons of the cuneate nucleus (the brain’s first level of tactile processing) in the cat. Surprisingly, despite having similar receptive fields and response properties, each cuneate neuron responded to a unique combination of these inputs. Hence, distinct haptic input features are encoded already at subcortical processing stages. This organization maps skin-object interactions into rich representations provided to higher cortical levels and may call for a re-evaluation of our current understanding of the brain’s somatosensory systems.
Journal of Mathematical Biology | 2010
Alexander V. Terekhov; Yakov Pesin; Xun Niu; Mark L. Latash; Vladimir M. Zatsiorsky
We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This problem is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how well it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science. The problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms.
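The forward problem that this abstract's method inverts can be illustrated with a minimal sketch: an additive quadratic objective with non-zero linear terms, minimized under a single linear constraint on the total force. This is a hedged toy example, not the paper's estimation procedure; the function `share_forces` and all coefficient values are hypothetical.

```python
import numpy as np

def share_forces(total_force, k, w):
    """Minimize sum_i (k_i*f_i**2 + w_i*f_i) subject to sum_i f_i = total_force.

    Setting the gradient of the Lagrangian to zero gives
    f_i = (lam - w_i) / (2*k_i), with lam fixed by the constraint.
    """
    k, w = np.asarray(k, float), np.asarray(w, float)
    inv2k = 1.0 / (2.0 * k)
    lam = (total_force + np.sum(w * inv2k)) / np.sum(inv2k)
    return (lam - w) * inv2k

# Hypothetical coefficients for four fingers; stiffer (larger k) or more
# penalized (larger w) fingers receive a smaller share of the total force.
f = share_forces(20.0, k=[1.0, 1.0, 2.0, 4.0], w=[0.0, 1.0, 0.0, 0.0])
```

The inverse problem treated in the paper runs the other way: given observed force shares `f` over many task conditions, recover the coefficients `k` and `w`.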
Journal of the Royal Society Interface | 2014
Vincent Hayward; Alexander V. Terekhov; Sheng-Chao Wong; Pontus Geborek; Fredrik Bengtsson; Henrik Jörntell
A common method to explore the somatosensory function of the brain is to relate skin stimuli to neurophysiological recordings. However, interaction with the skin involves complex mechanical effects. Variability in mechanically induced spike responses is likely to be due in part to mechanical variability of the transformation of stimuli into spiking patterns in the primary sensors located in the skin. This source of variability greatly hampers detailed investigations of the response of the brain to different types of mechanical stimuli. A novel stimulation technique designed to minimize the uncertainty in the strain distributions induced in the skin was applied to evoke responses in single neurons in the cat. We show that exposure to specific spatio-temporal stimuli induced highly reproducible spike responses in the cells of the cuneate nucleus, which represents the first stage of integration of peripheral inputs to the brain. Using precisely controlled spatio-temporal stimuli, we also show that cuneate neurons, as a whole, were selectively sensitive to the spatial and to the temporal aspects of the stimuli. We conclude that the present skin stimulation technique based on localized differential tractions greatly reduces response variability that is exogenous to the information processing of the brain and hence paves the way for substantially more detailed investigations of the brain's somatosensory system.
Conference on Biomimetic and Biohybrid Systems | 2015
Alexander V. Terekhov; Guglielmo Montone; J. Kevin O'Regan
Although deep neural networks (DNNs) have demonstrated impressive results during the last decade, they remain highly specialized tools, which are trained, often from scratch, to solve each particular task. The human brain, in contrast, significantly re-uses existing capacities when learning to solve new tasks. In the current study we explore a block-modular architecture for DNNs, which allows parts of the existing network to be re-used to solve a new task without a decrease in performance when solving the original task. We show that networks with such architectures can outperform networks trained from scratch, or perform comparably, while having to learn nearly 10 times fewer weights than the networks trained from scratch.
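A minimal sketch of the block-modular idea, assuming a simple dense architecture (all layer sizes and names are hypothetical, and no training loop is shown): blocks learned on the original task are frozen and re-used, and only a small task-specific head would be trained for the new task, which is where the reduction in learned weights comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for "blocks" trained on the original task: two dense layers,
# now frozen and shared with the new task.
W1 = rng.normal(size=(784, 256))     # frozen block 1
W2 = rng.normal(size=(256, 128))     # frozen block 2

# New task: only a small head on top of the frozen blocks is trained.
W_head = rng.normal(size=(128, 10))  # trainable

def forward(x):
    h = np.tanh(x @ W1)   # re-used block 1 (weights untouched)
    h = np.tanh(h @ W2)   # re-used block 2
    return h @ W_head     # task-specific head

trainable = W_head.size
from_scratch = W1.size + W2.size + W_head.size
# With these (invented) sizes the head holds a small fraction of the
# weights a full network trained from scratch would have to learn.
```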
Intelligent Robots and Systems | 2013
Alban Laflaquière; Alexander V. Terekhov; Bruno Gas; J. Kevin O'Regan
Current machine learning techniques proposed to automatically discover a robot's kinematics usually rely on a priori information about the robot's structure, sensor properties or end-effector position. This paper proposes a method to estimate a certain aspect of the forward kinematics model with no such information. An internal representation of the end-effector configuration is generated from unstructured proprioceptive and exteroceptive data flow under very limited assumptions. A mapping from the proprioceptive space to this representational space can then be used to control the robot.
Journal of Motor Behavior | 2013
Joel R. Martin; Alexander V. Terekhov; Mark L. Latash; Vladimir M. Zatsiorsky
The neural control of movement has been described using different sets of elemental variables. Two possible sets of elemental variables have been suggested for finger pressing tasks: the forces of individual fingers and the finger commands (also called finger modes or central commands). The authors analyzed which of the two sets of elemental variables is more likely used in the optimization of finger force sharing and which set is used for the stabilization of performance. They used two recently developed techniques—the analytical inverse optimization (ANIO) and the uncontrolled manifold (UCM) analysis—to evaluate each set of elemental variables with respect to both aspects of performance. The results of the UCM analysis favored the finger commands as the elemental variables used for performance stabilization, while ANIO worked equally well on both sets of elemental variables. A simple scheme is suggested as to how the CNS could optimize a cost function dependent on the finger forces, but, to facilitate feed-forward control, substitute the original cost function with one that is convenient to optimize in the space of finger commands.
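The UCM variance decomposition mentioned above can be sketched numerically: trial-to-trial deviations of the elemental variables are projected onto the null space of the Jacobian of the performance variable (where performance is unaffected) and onto its orthogonal complement. This is an illustrative toy under invented assumptions, not the authors' analysis pipeline; the Jacobian and noise magnitudes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Performance variable: total force F = J @ c, with J the 1x4 Jacobian
# mapping four finger commands c to F (unit gains, for simplicity).
J = np.ones((1, 4))

# Simulated trials: commands scattered mostly along the "uncontrolled
# manifold" (null space of J, where F is unchanged).
null_basis = np.linalg.svd(J)[2][1:].T   # 4x3 orthonormal null-space basis
range_basis = J.T / np.linalg.norm(J)    # 4x1 direction that changes F
trials = (null_basis @ rng.normal(scale=1.0, size=(3, 200))
          + range_basis @ rng.normal(scale=0.2, size=(1, 200)))

dev = trials - trials.mean(axis=1, keepdims=True)
v_ucm = np.mean((null_basis.T @ dev) ** 2)   # variance per dim within the UCM
v_ort = np.mean((range_basis.T @ dev) ** 2)  # variance per dim orthogonal to it
# v_ucm > v_ort is the UCM signature of stabilization of the
# performance variable.
```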
Robotics and Autonomous Systems | 2015
Alban Laflaquière; J. Kevin O'Regan; Sylvain Argentieri; Bruno Gas; Alexander V. Terekhov
The design of robotic systems is largely dictated by our purely human intuition about how we perceive the world. This intuition has been proven incorrect with regard to a number of critical issues, such as visual change blindness. In order to develop truly autonomous robots, we must step away from this intuition and let robotic agents develop their own way of perceiving. The robot should start from scratch and gradually develop perceptual notions, under no prior assumptions, exclusively by looking into its sensorimotor experience and identifying repetitive patterns and invariants. One of the most fundamental perceptual notions, space, cannot be an exception to this requirement. In this paper we look into the prerequisites for the emergence of simplified spatial notions on the basis of a robot's sensorimotor flow. We show that the notion of space as environment-independent cannot be deduced solely from exteroceptive information, which is highly variable and is mainly determined by the contents of the environment. The environment-independent definition of space can be approached by looking into the functions that link the motor commands to changes in exteroceptive inputs. In a sufficiently rich environment, the kernels of these functions correspond uniquely to the spatial configuration of the agent's exteroceptors. We simulate a redundant robotic arm with a retina installed at its end-point and show how this agent can learn the configuration space of its retina. The resulting manifold has the topology of the Cartesian product of a plane and a circle, and corresponds to the planar position and orientation of the retina. Highlights: Autonomous robots should develop perceptual notions from raw sensorimotor data. Environment-dependency of visual inputs complicates the acquisition of spatial notions. An agent can learn its spatial configuration through invariants in sensorimotor laws. The approach is illustrated on a simulated planar multijoint agent with a mobile retina.
Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2014
Alexander V. Terekhov; J. Kevin O'Regan
Humans are extremely swift learners. We are able to grasp highly abstract notions, whether they come from art perception or pure mathematics. Current machine learning techniques demonstrate astonishing results in extracting patterns in information. Yet the abstract notions we possess are more than just statistical patterns in the incoming information. Sensorimotor theory suggests that they represent functions, laws, describing how the information can be transformed, or, in other words, they represent the statistics of sensorimotor changes rather than sensory inputs themselves. The aim of our work is to suggest a way for machine learning and sensorimotor theory to benefit from each other so as to pave the way toward new horizons in learning. We show in this study that a highly abstract notion, that of space, can be seen as a collection of laws of transformations of sensory information and that these laws could in theory be learned by a naive agent. As an illustration we do a one-dimensional simulation in which an agent extracts spatial knowledge in the form of internalized (“sensible”) rigid displacements. The agent uses them to encode its own displacements in a way which is isometrically related to external space. Though the algorithm allowing acquisition of rigid displacements is designed ad hoc, we believe it can stimulate the development of unsupervised learning techniques leading to similar results.
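A toy version of the one-dimensional setting can be sketched as follows: a naive agent compares sensory snapshots taken before and after a motor act and recovers the rigid shift that best aligns them, an internalized displacement that is isometric to its movement in external space. This is a hedged illustration, not the paper's ad hoc algorithm; `sense` and `infer_displacement` are hypothetical.

```python
import numpy as np

# A 1-D world: smooth random "texture" sampled by a row of receptors.
rng = np.random.default_rng(2)
world = np.convolve(rng.normal(size=500), np.ones(15) / 15, mode="same")

def sense(position, n_receptors=50):
    """Sensory snapshot: receptor array reading at the given position."""
    return world[position:position + n_receptors]

def infer_displacement(s_before, s_after, max_shift=10):
    """Find the rigid shift of s_before that best matches s_after."""
    errs = [np.sum((s_before[max_shift + d:len(s_before) - max_shift + d]
                    - s_after[max_shift:-max_shift]) ** 2)
            for d in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errs)) - max_shift

s0 = sense(100)
s1 = sense(104)                      # the agent moved 4 receptor widths
d_hat = infer_displacement(s0, s1)   # recovered displacement
```

The recovered `d_hat` depends only on how the sensory array transformed, not on the texture itself, which is the sense in which such laws of transformation encode space.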
Proceedings of the Royal Society B: Biological Sciences | 2015
Alexander V. Terekhov; Vincent Hayward
A fundamental problem faced by the brain is to estimate whether a touched object is rigidly attached to a ground reference or is movable. A simple solution to this problem would be for the brain to test whether pushing on the object with a limb is accompanied by limb displacement. The mere act of pushing excites large populations of mechanoreceptors, generating a sensory response that is only weakly sensitive to limb displacement if the movements are small, and thus can hardly be used to determine the mobility of the object. In the mechanical world, displacement or deformation of objects frequently co-occurs with microscopic fluctuations associated with the frictional sliding of surfaces in contact or with micro-failures inside an object. In this study, we provide compelling evidence that the brain relies on these microscopic mechanical events to estimate the displacement of the limb in contact with an object, and hence the mobility of the touched object. We show that when pressing with a finger on a stiff surface, fluctuations that resemble the mechanical response of granular solids provoke a sensation of limb displacement. Our findings suggest that when acting on an external object, prior knowledge about the sensory consequences of interacting with the object contributes to proprioception.
International Conference on Human Haptic Sensing and Touch Enabled Computer Applications | 2014
Alessandro Moscatelli; Matteo Bianchi; Alessandro Serio; Omar Al Atassi; Simone Fani; Alexander V. Terekhov; Vincent Hayward; Marc O. Ernst; Antonio Bicchi
Imagine you are pushing your finger against a deformable, compliant object. The change in the area of contact can provide an estimate of the relative displacement of the finger: the larger the area of contact, the larger the displacement. Does the human haptic system use this as a cue for estimating the displacement of the finger with respect to the external object? Here we conducted a psychophysical experiment to test this hypothesis. Participants compared the passive displacement of the index finger between a reference and a comparison stimulus. The compliance of the contacted object changed between the two stimuli, thus producing a different area-displacement relationship. In accordance with the hypothesis, the modulation of the area-displacement relationship produced a bias in the perceived displacement of the finger.