Michele Tagliabue
Paris Descartes University
Publications
Featured research published by Michele Tagliabue.
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2010
Nathanaël Jarrassé; Michele Tagliabue; Johanna Robertson; Amina Maiza; Vincent Crocher; Agnès Roby-Brami; Guillaume Morel
While a large number of robotic exoskeletons have been designed by research teams for rehabilitation, it remains rather difficult to analyse their ability to interact finely with a human limb: no performance indicators or general methodology to characterize this capacity exist. This is particularly regrettable at a time when robotics is becoming a recognized rehabilitation method and when complex problems such as 3-D movement rehabilitation and joint rotation coordination are being addressed. The aim of this paper is to propose a general methodology for evaluating, through a reduced set of simple indicators, the ability of an exoskeleton to interact finely and in a controlled way with a human. The method involves measuring and recording positions and forces during 3-D point-to-point tasks. It is applied, by way of example, to a four-degree-of-freedom limb exoskeleton.
The Journal of Neuroscience | 2011
Michele Tagliabue; Joseph McIntyre
When aligning the hand to grasp an object, the CNS combines multiple sensory inputs encoded in multiple reference frames. Previous studies suggest that when a direct comparison of target and hand is possible via a single sensory modality, the CNS avoids performing unnecessary coordinate transformations that add noise. But when target and hand do not share a common sensory modality (e.g., aligning the unseen hand to a visual target), at least one coordinate transformation is required. Similarly, body movements may occur between target acquisition and manual response, requiring that egocentric target information be updated or transformed to external reference frames to compensate. Here, we asked subjects to align the hand to an external target, where the target could be presented visually or kinesthetically and feedback about the hand was visual, kinesthetic, or both. We used a novel technique of imposing conflict between external visual and gravito-kinesthetic reference frames when subjects tilted the head during an instructed memory delay. By comparing experimental results to analytical models based on principles of maximum likelihood, we showed that multiple transformations above the strict minimum may be performed, but only if the task precludes a unimodal comparison of egocentric target and hand information. Thus, for cross-modal tasks, or when head movements are involved, the CNS creates and uses both kinesthetic and visual representations. We conclude that the necessity of producing at least one coordinate transformation activates multiple, concurrent internal representations, the functionality of which depends on the alignment of the head with respect to gravity.
Frontiers in Computational Neuroscience | 2014
Michele Tagliabue; Joseph McIntyre
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimate of the target's position to be compared with a single, optimal estimate of the hand. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine, at a computational level, two formulations of concurrent models for sensory integration and compare them to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we present evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance from one sensory modality toward a greater reliance on another, and one can investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals covary with their original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame.
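The maximum-likelihood cue combination invoked in this abstract can be sketched numerically. The snippet below is a generic inverse-variance fusion of two Gaussian estimates, with a hypothetical extra variance term standing in for the noise added by a coordinate transformation; the specific variance values are illustrative, not estimates from the study.

```python
import numpy as np

def ml_combine(mu_a, var_a, mu_b, var_b):
    """Maximum-likelihood (minimum-variance) fusion of two independent
    Gaussian estimates: each cue is weighted by its inverse variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return mu, var

# A cross-modal comparison requires a coordinate transformation, which
# adds noise: the transformed cue is less reliable than the original.
var_visual = 1.0        # hypothetical visual variance
var_kinesthetic = 2.0   # hypothetical kinesthetic variance
var_transform = 0.5     # hypothetical transformation noise

mu, var = ml_combine(10.0, var_visual,
                     10.5, var_kinesthetic + var_transform)

# The fused estimate is still more reliable than either cue alone.
assert var < var_visual and var < var_kinesthetic + var_transform
```

The key qualitative point matches the abstract: transformation noise inflates the variance of the recoded cue, shifting the optimal weighting toward the modality that needs fewer transformations.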
Experimental Brain Research | 2012
Claudia Casellato; Michele Tagliabue; Alessandra Pedrocchi; Charalambos Papaxanthis; Giancarlo Ferrigno; Thierry Pozzo
Many studies have shown that both arm movements and postural control are characterized by strong invariants. Moreover, when a movement requires simultaneous control of the hand trajectory and balance maintenance, these two movement components are highly coordinated. While the focal and postural invariants are each known to be tightly linked to gravity, much less is known about the role of gravity in their coordination. In particular, it is not clear whether the effect of gravity on the different movement components is such as to preserve a strong movement–posture coordination even in different gravitational conditions, or whether gravitational information is necessary for maintaining the motor synergism. We thus set out to analyze the movements of eleven standing subjects reaching for a target in front of them, beyond arm's length, in normal conditions and in microgravity. The results showed that subjects quickly adapted to microgravity and were able to accomplish the task successfully. In contrast to the hand trajectory, the postural strategy was strongly affected by microgravity, so much so that it became incompatible with normo-gravity balance constraints. The distinct effects of gravity on the focal and postural components produced a significant decrease in their reciprocal coordination. This finding suggests that movement–posture coupling is affected by gravity and thus does not represent a unique, hardwired, invariant mode of control. Additional kinematic and dynamic analyses suggest that the new motor strategy corresponds to a global oversimplification of movement control, fulfilling the mechanical and sensory constraints of the microgravity environment.
Frontiers in Human Neuroscience | 2015
Michele Tagliabue; Anna Lisa Ciancio; Thomas Brochier; Selim Eskiizmirliler; Marc A. Maier
The large number of mechanical degrees of freedom of the hand is not fully exploited during actual movements such as grasping. Usually, angular movements in the various joints tend to be coupled, and EMG activities in different hand muscles tend to be correlated. The occurrence of covariation in the former has been termed kinematic synergies; in the latter, muscle synergies. This study addresses two questions: (i) whether kinematic and muscle synergies can simultaneously accommodate kinematic and kinetic constraints, and (ii) if so, whether there is an interrelation between kinematic and muscle synergies. We used a reach-grasp-and-pull paradigm and recorded hand kinematics as well as eight surface EMGs. Subjects had to perform either a precision grip or a side grip and had to modify their grip force in order to displace an object against a low or high load. The analysis was subdivided into three epochs: reach, grasp-and-pull, and static hold. Principal component analysis (PCA, temporal or static) was performed separately for all three epochs, in the kinematic and in the EMG domain. PCA revealed that (i) kinematic and muscle synergies can simultaneously accommodate kinematic (grip type) and kinetic (load condition) task constraints, and (ii) the upcoming grip and load conditions of the grasp are represented in the kinematic and muscle synergies already during reach. Phase-plane plots of the principal muscle synergy against the principal kinematic synergy revealed (iii) that the muscle synergy is linked (correlated, and in phase advance) to the kinematic synergy during reach and during grasp-and-pull. Furthermore, (iv) pair-wise correlations of EMGs during hold suggest that muscle synergies are (in part) implemented by coactivation of muscles through common input. Together, these results suggest that kinematic synergies have their origin (at least in part) not just in muscular activation, but in synergistic muscle activation. In short: kinematic synergies may result from muscle synergies.
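The synergy-extraction step described in this abstract can be illustrated with a minimal PCA sketch. The data below are simulated (the joint count, sample count, and two-synergy latent structure are hypothetical choices for the example, not the study's recordings); the point is only how covarying joint angles collapse onto a few principal components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 time samples x 15 joint angles, where the joints
# covary along 2 underlying "synergies" plus a small amount of noise.
latent = rng.standard_normal((200, 2))        # 2 latent synergy activations
mixing = rng.standard_normal((2, 15))         # how synergies map to joints
angles = latent @ mixing + 0.05 * rng.standard_normal((200, 15))

# PCA via eigen-decomposition of the joint-angle covariance matrix.
centered = angles - angles.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # sort descending
explained = eigvals[order] / eigvals.sum()    # variance explained per PC

# With 2 latent synergies, the first 2 components should capture
# nearly all of the variance.
print(explained[:2].sum())
```

The eigenvectors associated with the dominant components play the role of the kinematic synergies; the same decomposition applied to EMG envelopes yields muscle synergies.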
PLOS ONE | 2013
Michele Tagliabue; Joseph McIntyre
Several experimental studies in the literature have shown that even when performing purely kinesthetic tasks, such as reaching for a kinesthetically felt target with a hidden hand, the brain reconstructs a visual representation of the movement. In our previous studies, however, we did not observe any role of a visual representation of the movement in a purely kinesthetic task. This apparent contradiction could be related to a fundamental difference between the studied tasks. In our study, subjects used the same hand both to feel the target and to perform the movement, whereas in most other studies, pointing to a kinesthetic target consisted of pointing with one hand to the finger of the other, or to some other body part. We hypothesize, therefore, that it is the necessity of performing inter-limb transformations that induces a visual representation of purely kinesthetic tasks. To test this hypothesis we asked subjects to perform the same purely kinesthetic task in two conditions: INTRA and INTER. In the former, subjects used the right hand both to perceive the target and to reproduce its orientation. In the latter, they perceived the target with the left hand and responded with the right. To quantify the use of a visual representation of the movement, we measured the deviations induced by an imperceptible conflict generated between the visual and kinesthetic reference frames. Our hypothesis was confirmed by the observed deviations of responses due to the conflict in the INTER, but not in the INTRA, condition. To reconcile these observations with recent theories of sensori-motor integration based on maximum likelihood estimation, we propose here a new model formulation that explicitly considers the effects of covariance between sensory signals that are directly available and internal representations that are ‘reconstructed’ from those inputs through sensori-motor transformations.
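A generic way to see why this covariance term matters (a sketch of the principle, not the paper's exact model): suppose a kinesthetic estimate $x_k$ with variance $\sigma_k^2$ is recoded into a visual representation $x_v = x_k + \varepsilon_t$, where the transformation adds independent noise of variance $\sigma_t^2$. The two signals then covary, with $\operatorname{Cov}(x_k, x_v) = \sigma_k^2$, and the variance of any weighted combination $w\,x_k + (1-w)\,x_v$ is

```latex
\sigma^2_{\mathrm{comb}}
  = w^2\sigma_k^2 + (1-w)^2\left(\sigma_k^2 + \sigma_t^2\right)
    + 2w(1-w)\operatorname{Cov}(x_k, x_v)
  = \sigma_k^2 + (1-w)^2\sigma_t^2 .
```

Since $\sigma^2_{\mathrm{comb}} \ge \sigma_k^2$ for every $w$, combining a reconstructed representation with its own source can never outperform the source alone: a reconstructed signal only improves precision when it carries information that is not fully correlated with the signal it was derived from.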
BMC Neuroscience | 2013
Selim Eskiizmirliler; Olivier Bertrand; Michele Tagliabue; Marc A. Maier
Recent brain–machine interface (BMI) applications in humans [1] have shown the particular benefits of decoding neural signals and of using them for motion control of artificial arms and hands for paralyzed people. Despite the spectacular advances of the last decade in the kinematic control of reach and grasp movements, the dynamic control of these movements, together with the choice of powerful decoding algorithms, remains a major unresolved problem. This work reports results on the asynchronous decoding of both thumb and index finger kinematics and of their EMG, obtained by applying artificial neural networks (ANN) [2]. Neural data (spike trains and EMG) were recorded in the monkey. This work aims at providing a complete BMI framework for reproducing precision-grip-like hand movements with our two-finger artificial hand (see Figure 1A and B). The database was composed of up to six simultaneously recorded CM cell activities, up to nine EMGs from different forearm muscles, and the two fingertip positions, recorded in two monkeys while they performed a precision grip task (see Figure 1A for the robot reproduction). The CM cell activities were used as inputs to train the time-delayed multi-layer perceptron (TDMLP) associated with each recording session, in order to estimate both the EMG and the fingertip position signals. Each training epoch was performed with five different sliding-window lengths, which also determine the length of the input vector (i.e., the number of spikes considered at each instant), in the interval [25 ms, 400 ms]. We trained the networks following three different paradigms: 1) training the ANNs with only one spike train (from one CM cell); 2) training the networks with all simultaneously recorded spike trains; 3) for the EMG estimation only: training the networks with identified and non-identified CM cell spike trains.
The identity of a cell as a CM cell was defined by the presence of a post-spike facilitation (PSF) obtained by spike-triggered averaging of the EMG. We then statistically analyzed the effects of each parameter on the estimation performance of the ANNs. Finally, we used the fingertip position signals estimated by the corresponding trained ANN to drive the index finger (4 DoF) and thumb (5 DoF) of our artificial hand (Shadow Robot Company). We then compared the reproduction performance of the hand to the recorded and estimated position signals (Figure 1B). Figure 1: A. Artificial forearm with a two-finger hand of the precision grip setup, actuated by pneumatic muscles. B. Reproduction of thumb and index finger position.
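The sliding-window input construction used for the time-delayed networks can be sketched as follows. The bin size, window length, and cell count below are plausible values within the ranges given in the abstract (six cells, windows between 25 ms and 400 ms); the spike data are simulated, and the function itself is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def time_delayed_inputs(spike_counts, window_bins):
    """Build time-delayed input vectors from binned spike trains.

    spike_counts: (n_bins, n_cells) array of per-bin spike counts.
    Returns an array of shape (n_bins - window_bins + 1,
    n_cells * window_bins): at each instant, the decoder sees the
    counts of every cell over the last `window_bins` bins.
    """
    n_bins, n_cells = spike_counts.shape
    rows = []
    for t in range(window_bins - 1, n_bins):
        # concatenate the trailing window across all cells
        rows.append(spike_counts[t - window_bins + 1 : t + 1].ravel())
    return np.asarray(rows)

rng = np.random.default_rng(1)
spikes = rng.poisson(0.2, size=(100, 6))        # 6 cells, 100 bins of 25 ms
X = time_delayed_inputs(spikes, window_bins=8)  # 8 bins = 200 ms window
print(X.shape)                                  # (93, 48)
```

Each row of `X` would then be fed to the perceptron's input layer, with the fingertip position or rectified EMG at the corresponding instant as the training target.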
Frontiers in Neurology | 2018
Erwin Idoux; Michele Tagliabue; Mathieu Beraneck
Motion sickness occurs when the vestibular system is subjected to conflicting sensory information or to overstimulation. Despite the lack of knowledge about the actual underlying mechanisms, several drugs, among them scopolamine, are known to prevent or alleviate the symptoms. Here, we aim at better understanding how motion sickness affects the vestibular system, as well as how scopolamine prevents motion sickness, at the behavioral and cellular levels. We induced motion sickness in adult mice and tested the vestibulo-ocular responses to specific stimulations of the semicircular canals and of the otoliths, with or without scopolamine, as well as the effects of scopolamine and muscarine on central vestibular neurons recorded in brainstem slices. We found that both motion sickness and scopolamine decrease the efficacy of the vestibulo-ocular reflexes, and we propose that this decrease in efficacy might be a protective mechanism preventing later occurrences of motion sickness. To test this hypothesis, we used a behavioral paradigm based on visuo-vestibular interactions that reduces the efficacy of the vestibulo-ocular reflexes. This paradigm also offers protection against motion sickness, without requiring any drug. At the cellular level, we find that, depending on the neuron, scopolamine can have opposite effects on the polarization level and firing frequency, indicating the presence of at least two types of muscarinic receptors in the medial vestibular nucleus. The present results set the basis for future studies of motion-sickness countermeasures in the mouse model and offer translational perspectives for improving the treatment of affected patients.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Patrice Senot; Loïc Damm; Michele Tagliabue; Joseph McIntyre
Smooth physical interaction with our environment, such as when working with tools, requires adaptability to unpredictable perturbations, which can be achieved through impedance control of multi-joint limbs. Modulation of arm stiffness can be achieved either by increasing the co-contraction of antagonistic muscles or by increasing the gain of spinal reflex loops. According to the “automatic gain scaling” principle, the spinal reflex gain, as measured via the H-reflex, scales with muscle activation. A previous experiment from our labs suggested, however, that reflex gains might instead be scaled to the force exerted by the limb, perhaps as a means of counteracting destabilizing external forces. The goal of our experiment was to test whether force output, rather than muscular activity per se, could be the critical factor determining reflex gain. Five subjects generated different levels of force at the wrist, with or without assistance, in order to dissociate the applied force from agonist muscular activity. We recorded contact force, EMG, and the H-reflex response from a wrist flexor. We did not find a strict relationship between reflex gain and contact force, but nor did we observe consistent modulation of reflex gain simply as a function of agonist muscle activity. These results are discussed in relation to the stability constraints of the task.
International Conference on Advanced Robotics | 2015
Michele Tagliabue; Nadine Francis; Yaoyao Hao; Margaux Duret; Thomas Brochier; Alexa Riehle; Marc A. Maier; Selim Eskiizmirliler
This study focuses on the estimation of kinematic and kinetic information during two-digit grasping, using frequency decoding of motor cortex spike trains for brain–machine interface applications. Neural data were recorded by a 100-microelectrode array implanted in the motor cortex of one monkey performing instructed reach-grasp-and-pull movements. Decoding of the neural data was performed by two different algorithms: i) an artificial neural network (ANN) consisting of a multi-layer perceptron (MLP), and ii) a support vector machine (SVM) with a linear kernel function. Decoding aimed at classifying the upcoming grip type (precision grip vs. side grip) as well as the required grip force (low vs. high). We then used the decoded information to reproduce the monkey's movement on a robotic platform consisting of a two-finger, eleven-degree-of-freedom (DoF) robotic hand carried by a six-DoF robotic arm. The results show that 1) in terms of performance there was no significant difference between the ANN and SVM predictions; both algorithms can be used for frequency decoding of multiple motor cortex spike trains, with good performance for grip type prediction and lower performance for grip force. 2) For both algorithms, the prediction error depended significantly on the position of the input time window associated with the different stages of the instructed grasp movement. 3) The lower performance of grasp force prediction was improved by optimizing the size of the neuronal population presented to the ANN input layer on the basis of information redundancy.
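The grip-type classification task can be illustrated with a toy sketch. Everything below is simulated and simplified: the firing rates are fabricated Poisson counts, and the classifier is a nearest-class-mean linear rule rather than the MLP and SVM decoders used in the study; it only shows how a linear boundary over population rate vectors can separate two grip types.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 20  # hypothetical population size

# Simulated tuning: the two grip types evoke slightly different mean
# firing rates across the recorded population.
mu_precision = rng.uniform(5.0, 15.0, n_cells)
mu_side = mu_precision + rng.uniform(-4.0, 4.0, n_cells)

def sample_trials(mu, n_trials):
    """Poisson spike counts around the mean rate, one row per trial."""
    return rng.poisson(mu, size=(n_trials, mu.size)).astype(float)

X = np.vstack([sample_trials(mu_precision, 100), sample_trials(mu_side, 100)])
y = np.array([0] * 100 + [1] * 100)   # 0 = precision grip, 1 = side grip

# Nearest-class-mean rule: classify by which class mean is closer,
# which reduces to a linear decision boundary w.x + b > 0.
m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
w = m1 - m0
b = -(w @ (m0 + m1)) / 2.0
acc = np.mean(((X @ w + b) > 0).astype(int) == y)
print(f"training accuracy: {acc:.2f}")
```

With realistic trial-to-trial variability, the separability of such rate vectors depends strongly on which time window the rates are taken from, which is consistent with the abstract's second finding.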