Selim Eskiizmirliler
Paris Descartes University
Publications
Featured research published by Selim Eskiizmirliler.
PLOS ONE | 2009
Rodolphe J. Gentili; Charalambos Papaxanthis; Mehdi Ebadzadeh; Selim Eskiizmirliler; Sofiane Ouanezar; Christian Darlot
Background: Several authors have suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor prediction of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model), whereas others suggested that it computes sensorimotor predictions (internal forward model).
Methodology/Principal Findings: This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the inverse dynamic computation of the effect of gravitational torques from its sensorimotor predictions, without calculating an explicit inverse. Using supervised learning, the model learns to control an anthropomorphic robot arm actuated by two antagonist McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movement, so as to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements.
Conclusions/Significance: This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of the gravitational torques acting on upper-limb movements performed in the gravitational field.
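The core idea of the abstract, deriving an approximate inverse by placing a learned forward (predictive) model inside a feedback loop, can be illustrated with a minimal sketch. The single-joint arm, the network size and the gain below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed setup): learn a forward model of a one-joint arm under
# gravity, then use it in a closed loop so the loop output approximates the
# inverse dynamics without an explicit inverse computation.
import numpy as np
from sklearn.neural_network import MLPRegressor

g, m, L = 9.81, 1.0, 0.3                      # gravity, mass (kg), arm length (m): assumed values

def true_dynamics(theta, torque):
    """Angular acceleration of a single joint subject to a gravitational torque."""
    return (torque - m * g * L * np.cos(theta)) / (m * L**2)

# 1) Learn the forward (direct) model: (angle, torque) -> predicted acceleration.
rng = np.random.default_rng(0)
thetas = rng.uniform(-np.pi / 2, np.pi / 2, 2000)
torques = rng.uniform(-5.0, 5.0, 2000)
X = np.column_stack([thetas, torques])
y = true_dynamics(thetas, torques)
forward_model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000).fit(X, y)

# 2) Place the forward model in an internal feedback loop: iteratively adjust the
#    motor command until the *predicted* acceleration matches the desired one.
def approximate_inverse(theta, desired_acc, gain=0.5, n_iter=50):
    torque = 0.0
    for _ in range(n_iter):
        predicted = forward_model.predict([[theta, torque]])[0]
        torque += gain * (desired_acc - predicted)   # feedback on the prediction error
    return torque

print(approximate_inverse(theta=0.3, desired_acc=2.0))
```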
Frontiers in Human Neuroscience | 2015
Michele Tagliabue; Anna Lisa Ciancio; Thomas Brochier; Selim Eskiizmirliler; Marc A. Maier
The large number of mechanical degrees of freedom of the hand is not fully exploited during actual movements such as grasping. Usually, angular movements in various joints tend to be coupled, and EMG activities in different hand muscles tend to be correlated. The occurrence of covariation in the former was termed kinematic synergies, in the latter muscle synergies. This study addresses two questions: (i) whether kinematic and muscle synergies can simultaneously accommodate kinematic and kinetic constraints; (ii) if so, whether there is an interrelation between kinematic and muscle synergies. We used a reach-grasp-and-pull paradigm and recorded the hand kinematics as well as eight surface EMGs. Subjects had to either perform a precision grip or a side grip and had to modify their grip force in order to displace an object against a low or high load. The analysis was subdivided into three epochs: reach, grasp-and-pull, and static hold. Principal component analysis (PCA, temporal or static) was performed separately for all three epochs, in the kinematic and in the EMG domain. PCA revealed that (i) kinematic and muscle synergies can simultaneously accommodate kinematic (grip type) and kinetic task constraints (load condition); (ii) upcoming grip and load conditions of the grasp are represented in kinematic and muscle synergies already during reach. Phase plane plots of the principal muscle synergy against the principal kinematic synergy revealed (iii) that the muscle synergy is linked (correlated, and in phase advance) to the kinematic synergy during reach and during grasp-and-pull. Furthermore, (iv) pair-wise correlations of EMGs during hold suggest that muscle synergies are (in part) implemented by coactivation of muscles through common input. Together, these results suggest that kinematic synergies have (at least in part) their origin not just in muscular activation, but in synergistic muscle activation. In short: kinematic synergies may result from muscle synergies.
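The synergy analysis described above amounts to running PCA separately on the kinematic and EMG data and then relating the leading components. The sketch below uses synthetic stand-in matrices; the dimensions, component counts and lag range are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of the PCA-based synergy extraction, on synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_samples, n_joints, n_muscles = 500, 15, 8
kinematics = rng.normal(size=(n_samples, n_joints))   # stand-in for hand joint angles
emg = rng.normal(size=(n_samples, n_muscles))         # stand-in for rectified, filtered EMG

# Extract kinematic and muscle synergies as principal components.
pca_kin = PCA(n_components=3).fit(kinematics)
pca_emg = PCA(n_components=3).fit(emg)
kin_scores = pca_kin.transform(kinematics)
emg_scores = pca_emg.transform(emg)

# Relate the principal muscle synergy to the principal kinematic synergy,
# e.g. by cross-correlation, to check for a phase advance of the EMG component.
lags = np.arange(-50, 51)
xcorr = [np.corrcoef(np.roll(emg_scores[:, 0], lag), kin_scores[:, 0])[0, 1] for lag in lags]
print("lag of maximal correlation (samples):", lags[int(np.argmax(xcorr))])
```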
Robotics and Autonomous Systems | 2012
François Touvet; N. Daoud; Jean-Pierre Gazeau; Said Zeghloul; Marc A. Maier; Selim Eskiizmirliler
Reach and grasp are the two key functions of human prehension. The Central Nervous System controls these two functions in a separate but interdependent way. The choice between different solutions for reaching and grasping an object, provided by multiple and redundant degrees of freedom (dof), depends both on the properties and on the use (affordance) of the object to be manipulated. This same control paradigm, i.e. the subdivision of prehension into reach and grasp as well as the corresponding multimodal (sensory/motor) information fusion schemes, can also be applied to a mechanical hand carried by a robotic arm. The robotic arm is then responsible for positioning the hand with respect to the object, and the hand then grasps and manipulates the object. In this article, we present a biomimetic sensory-motor control scheme with the aim of providing an object-dependent and intelligent reach and grasp ability to such systems. The proposed model is based on a multi-network architecture which incorporates multiple Matching Units trained by a statistical learning algorithm (LWPR). Matching Units perform multimodal signal integration by correlating sensory and motor information, analogous to that observed in cerebral neuronal networks. The simulated network of multiple Matching Units provided estimations of object-dependent 5-finger grasp configurations with endpoint positional errors of the order of a few millimeters. For validation, these estimations were then applied to the control of movement kinematics on an experimental robot composed of a 6 dof robot arm carrying a 16 dof mechanical 4-finger hand. Precision of the kinematic control was such that successful reach, grasp and lift were obtained in all the tests.
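A Matching Unit of the kind described above maps object properties onto a grasp configuration through local regression. The paper uses LWPR; the sketch below substitutes a plain Gaussian-kernel locally weighted regression as a stand-in, so it illustrates the idea rather than the authors' algorithm, and the input/output dimensions are assumptions.

```python
# Minimal sketch of a "Matching Unit"-style mapping from object properties to a
# grasp configuration, using locally weighted regression as a stand-in for LWPR.
import numpy as np

def locally_weighted_predict(X_train, Y_train, x_query, bandwidth=0.2):
    """Predict a grasp configuration for x_query by weighting stored samples
    with a Gaussian kernel on the distance in object-property space."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    w /= w.sum()
    return w @ Y_train                      # weighted average of stored configurations

rng = np.random.default_rng(2)
X_train = rng.uniform(size=(200, 4))        # e.g. object position (x, y, z) and size (assumed features)
Y_train = rng.uniform(size=(200, 16))       # e.g. 16 joint angles of the mechanical hand
x_query = np.array([0.5, 0.2, 0.3, 0.6])
print(locally_weighted_predict(X_train, Y_train, x_query))
```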
International Conference on Robotics and Automation | 2011
Sofiane Ouanezar; Frédéric Jean; Bertrand Tondu; Marc A. Maier; Christian Darlot; Selim Eskiizmirliler
This study focuses on biomimetic sensory-motor control of a robotic arm. We have developed a command circuit that was deduced from physical and mathematical constraints describing the function of cerebellar pathways. The control circuit contains an internal predictive model of the direct biomechanical function of the limb, placed in a closed loop, so that the circuit computes an approximate inverse function. The structure of the model resembles the anatomical connectivity of the cerebellar pathways. In this paper, we present an application of this model to the control of a 2-link robotic arm actuated by four single-joint McKibben muscles, and report the results obtained by simulation and by real-time learning of two-degree-of-freedom pointing movements.
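To give a concrete sense of the actuator being controlled, the sketch below uses a standard static McKibben muscle model (in the style of Tondu and Lopez) and the net torque of an antagonist pair on a pulley. The parameter values are assumed, and this is not necessarily the exact muscle model used by the authors.

```python
# Minimal sketch of a textbook static McKibben muscle model and of the joint
# torque produced by an antagonist muscle pair acting on a pulley.
import numpy as np

def mckibben_force(pressure, contraction, D0=0.04, alpha0=np.radians(20)):
    """Static pulling force of a McKibben muscle (pressure in Pa,
    contraction ratio epsilon roughly in [0, 0.3])."""
    a = 3.0 / np.tan(alpha0) ** 2
    b = 1.0 / np.sin(alpha0) ** 2
    return (np.pi * D0 ** 2 / 4.0) * pressure * (a * (1 - contraction) ** 2 - b)

def joint_torque(p_agonist, p_antagonist, theta, radius=0.02, L0=0.3, eps0=0.15):
    """Net torque of an antagonist pair on a pulley of given radius; joint
    rotation changes the two contraction ratios in opposite directions."""
    eps_ag = eps0 + radius * theta / L0
    eps_ant = eps0 - radius * theta / L0
    return radius * (mckibben_force(p_agonist, eps_ag) -
                     mckibben_force(p_antagonist, eps_ant))

print(joint_torque(p_agonist=3e5, p_antagonist=1e5, theta=0.2))
```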
BMC Neuroscience | 2013
Selim Eskiizmirliler; Olivier Bertrand; Michele Tagliabue; Marc A. Maier
Recent brain-machine interface (BMI) applications in humans [1] have shown the particular benefits of decoding neural signals and of using them for motion control of artificial arms and hands for paralyzed people. Despite the spectacular advances of the last decade in the kinematic control of reach and grasp movements, the dynamic control of these movements, together with the choice of powerful decoding algorithms, remains a major unresolved problem. This work reports results on asynchronous decoding of both thumb and index finger kinematics and of their EMG, obtained with artificial neural networks (ANN) [2]. Neural data (spike trains and EMG) were recorded in the monkey. This work aims at providing a complete BMI framework to reproduce precision-grip-like hand movements with our 2-finger artificial hand (see Figure 1A and B). The database was composed of the activities of up to six simultaneously recorded CM cells, of up to nine EMGs from different forearm muscles, and of the two fingertip positions recorded in two monkeys while they performed a precision grip task (see Figure 1A for the robot reproduction). The CM cell activities were used as inputs to train the time-delayed multi-layer perceptron (TDMLP) associated with each recording session, in order to estimate both the EMG and the fingertip position signals. Each training epoch was performed with five different sliding-window lengths, which also determine the length of the input vector (i.e. the number of spikes considered at each instant), within the interval [25 ms, 400 ms]. We trained the networks following three different paradigms: 1) training the ANNs with only one spike train (from one CM cell); 2) training the networks with all simultaneously recorded spike trains; 3) only for the EMG estimation: training the networks with identified and non-identified CM cell spike trains. The identity of a cell as a CM cell was defined by the presence of a post-spike facilitation (PSF) obtained by spike-triggered averaging of the EMG. We then analyzed statistically the effects of each parameter on the estimation performance of the ANNs. Finally, we used the fingertip position signals estimated by the corresponding trained ANN to drive the index (4 DoF) and thumb (5 DoF) fingers of our artificial hand (Shadow Robot Company ©). We then compared the reproduction performance of the hand to the recorded and estimated position signals (Figure 1B). Figure 1: A. Artificial forearm with a 2-finger hand of the precision grip setup actuated by pneumatic muscles; B. Reproduction of thumb and index finger position.
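The decoding pipeline described above can be sketched as: bin the spike trains, build a time-delayed input vector over a sliding window, and train a multi-layer perceptron to estimate the fingertip position. The data below are synthetic, and the window length, bin size and network size are illustrative assumptions rather than the paper's values.

```python
# Minimal sketch of time-delayed MLP decoding of fingertip position from binned
# spike trains, on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_cells, n_bins, bin_ms = 6, 2000, 25
spike_counts = rng.poisson(0.5, size=(n_bins, n_cells))        # stand-in for CM cell spike counts
# Fake fingertip position: a smoothed function of the population activity.
position = np.convolve(spike_counts.sum(axis=1), np.ones(8) / 8, mode="same")

window = 8                                                     # 8 bins x 25 ms = 200 ms of spike history
X = np.array([spike_counts[t - window:t].ravel() for t in range(window, n_bins)])
y = position[window:n_bins]

tdmlp = MLPRegressor(hidden_layer_sizes=(50,), max_iter=1000).fit(X[:1500], y[:1500])
print("test R^2:", tdmlp.score(X[1500:], y[1500:]))
```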
International Conference on Advanced Robotics | 2015
Michele Tagliabue; Nadine Francis; Yaoyao Hao; Margaux Duret; Thomas Brochier; Alexa Riehle; Marc A. Maier; Selim Eskiizmirliler
This study focuses on the estimation of kinematic and kinetic information during two-digit grasping, using frequency decoding of motor cortex spike trains for brain-machine interface applications. Neural data were recorded by a 100-microelectrode array implanted in the motor cortex of one monkey performing instructed reach-grasp-and-pull movements. Decoding of the neural data was performed by two different algorithms: i) an Artificial Neural Network (ANN) consisting of a multi-layer perceptron (MLP), and ii) a Support Vector Machine (SVM) with a linear kernel function. Decoding aimed at classifying the upcoming grip type (precision grip vs. side grip) as well as the required grip force (low vs. high). We then used the decoded information to reproduce the monkey's movement on a robotic platform consisting of a two-finger, eleven degrees-of-freedom (DoF) robotic hand carried by a six DoF robotic arm. The results show that 1) in terms of performance there was no significant difference between the ANN and SVM predictions; both algorithms can be used for frequency decoding of multiple motor cortex spike trains: good performance was found for grip type prediction, less so for grip force. 2) For both algorithms the prediction error depended significantly on the position of the input time window associated with different stages of the instructed grasp movement. 3) The lower performance of grasp force prediction was improved by optimizing the size of the neuronal population presented to the ANN input layer on the basis of information redundancy.
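The classification step described above reduces to feeding per-trial firing-rate vectors from a chosen time window into an MLP classifier and a linear-kernel SVM and comparing their accuracies. The labels and rates below are synthetic, and the population size and network size are assumptions.

```python
# Minimal sketch of grip-type classification from firing-rate vectors with an
# MLP and a linear SVM, on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_units = 200, 96                       # e.g. trials x units of a 100-electrode array (assumed)
rates = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
grip_type = rng.integers(0, 2, n_trials)          # 0 = precision grip, 1 = side grip (synthetic labels)
rates[grip_type == 1, :10] += 3.0                 # inject a separable pattern so the sketch has signal

mlp = MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000)
svm = SVC(kernel="linear")
print("MLP accuracy:", cross_val_score(mlp, rates, grip_type, cv=5).mean())
print("SVM accuracy:", cross_val_score(svm, rates, grip_type, cv=5).mean())
```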
BMC Neuroscience | 2011
Octave Boussaton; Laurent Bougrain; Thierry Viéville; Selim Eskiizmirliler
As part of a brain-machine interface, we are currently defining a model for learning and forecasting muscular activity, given sparse brain activity in the form of action-potential signals (spike trains). We have been working on the flexion of a finger during a trained precision grip performed by a monkey (Macaca nemestrina), as she clasps a metal gauge with her finger and thumb. Experimentally, the activity of about a hundred neurons in the motor cortex can be recorded simultaneously with the help of a multielectrode array; see Figure 1A and [1] for more details about retrieving and filtering the data. Our method is based on a system of equations involving the firing rate of each recorded neuron, a set of thresholds, and Euclidean distances between the averaged and the current state at each time step. The firing rates are computed over given time windows, between 25 and 100 milliseconds. The thresholds used depend on these firing rates. The learning is done on a subset of the experiments and then evaluated on the remainder. A raw estimation serves as a reference for assessing how much each part of the learning formula contributes to the final result. The complete improvement formula is divided into three stages and can be expressed as follows: p(t+1) = p(t) + A(t) + B(t) + C(t), where p(t) is the force exerted on the gauge at time t. The A term is the learning reference base of the method, in which a direct matching is made between each neuronal code and the derivative of the observed finger force at each time step. What we call a neuronal code is the vector of all the firing-rate values at a given time; during the training stage, each neuronal code is associated with the average value of all the recorded force derivatives. The B term is a correction based on the distance between the current activity of each neuron and its average activity over a preceding time window of the same length as the window used to bin the spike trains. Finally, the C term is a system of equations in which we assume that every neuron is correlated with every other neuron, in a weighted way that we optimize during the learning process. The purpose of this study is manifold: we want to estimate (i) the influence of the neurons on each other qualitatively, (ii) the efficiency of various tractable improvement techniques, and (iii) the importance of thresholding the firing rates. We are developing a systematic approach to spike-train analysis and its relation to the execution of a movement, which allows us to better estimate the influence of several factors without initially separating neurons into different groups (as in [3], for example), but rather considering the information as a whole. The pre-treatment phase ensures that any measurement (corresponding to what we earlier called a neuronal code) is as relevant as any other. The results are quite satisfying and encouraging, given the very restricted complexity of the method; see Figure 1A. Figure 1: On the left is depicted the experiment. On the right is an example of what can be obtained through our method; the black curve is the observed trajectory that the gray one is supposed to approximate. In this case we used the information of four neurons ...
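The additive update p(t+1) = p(t) + A(t) + B(t) + C(t) described above can be sketched as follows, with only the A term (matching a neuronal code to the average force derivative seen during training) made concrete through a nearest-neighbour lookup; the B and C terms are left as placeholders. Everything here is a simplified, hypothetical rendering of the abstract, not the authors' actual formulas.

```python
# Minimal sketch of force forecasting via the additive update, with the A term
# implemented as a nearest-neighbour lookup over training neuronal codes.
import numpy as np

rng = np.random.default_rng(5)
n_train, n_neurons = 1000, 100
codes_train = rng.poisson(3.0, size=(n_train, n_neurons)).astype(float)  # training firing-rate vectors
dforce_train = rng.normal(0.0, 0.1, n_train)                             # observed force derivatives

def A_term(code):
    """Average force derivative associated with the closest training neuronal code."""
    nearest = np.argmin(np.linalg.norm(codes_train - code, axis=1))
    return dforce_train[nearest]

def predict_force(codes_test, p0=0.0):
    p = p0
    trajectory = []
    for code in codes_test:
        p = p + A_term(code)        # + B(t) + C(t) in the full method
        trajectory.append(p)
    return np.array(trajectory)

print(predict_force(rng.poisson(3.0, size=(50, n_neurons)).astype(float))[:5])
```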
Experimental Brain Research | 2014
François Touvet; Agnès Roby-Brami; Marc A. Maier; Selim Eskiizmirliler
Kinésithérapie, la Revue | 2013
I. Bonan; Annie Marquer; Selim Eskiizmirliler; A. Yelnik; Pierre-Paul Vidal
Communications and Networking Symposium | 2011
Octave Boussaton; Laurent Bougrain; Thierry Viéville; Selim Eskiizmirliler