Michael A. Arbib
University of Southern California
Publications
Featured research published by Michael A. Arbib.
Experimental Brain Research | 1996
Scott T. Grafton; Michael A. Arbib; Luciano Fadiga; Giacomo Rizzolatti
Positron emission tomography imaging of cerebral blood flow was used to localize brain areas involved in the representation of hand grasping movements. Seven normal subjects were scanned under three conditions. In the first, they observed precision grasping of common objects performed by the examiner. In the second, they imagined themselves grasping the objects without actually moving the hand. These two tasks were compared with a control task of object viewing. Grasp observation activated the left rostral superior temporal sulcus, left inferior frontal cortex (area 45), left rostral inferior parietal cortex (area 40), the rostral part of left supplementary motor area (SMA-proper), and the right dorsal premotor cortex. Imagined grasping activated the left inferior frontal (area 44) and middle frontal cortex, left caudal inferior parietal cortex (area 40), a more extensive response in left rostral SMA-proper, and left dorsal premotor cortex. The two conditions activated different areas of the right posterior cerebellar cortex. We propose that the areas active during grasping observation may form a circuit for recognition of hand-object interactions, whereas the areas active during imagined grasping may be a putative human homologue of a circuit for hand grasping movements recently defined in nonhuman primates. The location of responses in SMA-proper confirms the rostrocaudal segregation of this area for imagined and real movement. A similar segregation is also present in the cerebellum, with imagined and observed grasping movements activating different parts of the posterior lobe and real movements activating the anterior lobe.
NeuroImage | 1997
Scott T. Grafton; Luciano Fadiga; Michael A. Arbib; Giacomo Rizzolatti
Positron emission tomography was used to investigate whether observation of real objects (tools of common use) activates premotor areas in the absence of any overt motor demand. Silent naming of the presented tools and silent naming of their use were also studied. Right-handed normal subjects were employed. Tool observation strongly activated the left dorsal premotor cortex. In contrast, silent tool naming activated Broca's area without additional activity in the dorsal premotor cortex. Silent tool-use naming, in addition to activating Broca's area, increased the activity in the left dorsal premotor cortex and recruited the left ventral premotor cortex and the left supplementary motor area. These data indicate that, even in the absence of any subsequent movement, the left premotor cortex processes objects that, like tools, have a motor valence. This dorsal premotor activation, which is further augmented when the subject names the tool's use, should reflect the neural activity related to motor schemata for object use. The presence of an activation of both dorsal premotor cortex and ventral premotor cortex during tool-use naming suggests a role for these two areas in understanding object semantics.
Systems Neuroscience | 1982
Shun-ichi Amari; Michael A. Arbib
ABSTRACT Cooperative and competitive computations seem to play a greater role in decision making in neural circuitry than do executive controls. A number of neural models have been proposed which include cooperative and competitive computations, e.g., the model of the role of the reticular formation in deciding the overall mode of behavior, the model of how the frog's tectum decides the snapping position, and the model of the use of stereopsis to recognize depth in space. The aim of this paper is to develop a mathematical study of these models (especially the latter two) as a theory of neural fields. Special attention is given to the analysis of equilibrium states of the system and their stability, and to the interaction of different stimuli separated in space and time.
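Neural-field models of this kind are usually written as an integro-differential equation whose kernel captures both cooperation (local excitation) and competition (lateral inhibition). A representative Amari-type form, given here for orientation only and not necessarily in the paper's own notation, is:

```latex
% Representative neural-field dynamics (Amari-type); notation is illustrative.
\tau \frac{\partial u(x,t)}{\partial t}
  = -u(x,t) + \int w(x - x')\, f\!\bigl(u(x',t)\bigr)\, dx' + h + s(x,t)
```

Here u(x,t) is the excitation level of the field at position x, w is a "Mexican hat" kernel (excitatory at short range, inhibitory at longer range), f is a firing-rate nonlinearity, h is a resting level, and s is the external stimulus. Equilibrium states are solutions with the time derivative set to zero, and their stability is studied by perturbing the field about such solutions.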
Biological Cybernetics | 2002
Erhan Oztop; Michael A. Arbib
Abstract. Mirror neurons within a monkey's premotor area F5 fire not only when the monkey performs a certain class of actions but also when the monkey observes another monkey (or the experimenter) perform a similar action. It has thus been argued that these neurons are crucial for the understanding of actions by others. We offer the hand-state hypothesis as a new explanation of the evolution of this capability: the basic functionality of the F5 mirror system is to elaborate the appropriate feedback – what we call the "hand state" – for opposition-space-based control of manual grasping of an object. Given this functionality, the social role of the F5 mirror system in understanding the actions of others may be seen as an exaptation gained by generalizing from one's own hand to another's hand. In other words, mirror neurons first evolved to augment the "canonical" F5 neurons (active during self-movement based on observation of an object) by providing visual feedback on "hand state," relating the shape of the hand to the shape of the object. We then introduce the MNS1 (mirror neuron system 1) model of F5 and related brain regions. The existing Fagg–Arbib–Rizzolatti–Sakata model represents circuitry for visually guided grasping of objects, linking the anterior intraparietal area (AIP) with F5 canonical neurons. The MNS1 model extends the AIP visual pathway by also modeling pathways, directed toward F5 mirror neurons, which match arm–hand trajectories to the affordances and location of a potential target object. We present the basic schemas for the MNS1 model, then aggregate them into three "grand schemas" – visual analysis of hand state, reach and grasp, and the core mirror circuit – for each of which we present a useful implementation (a non-neural visual processing system, a multijoint 3-D kinematics simulator, and a learning neural network, respectively). With this implementation we show how the mirror system may learn to recognize actions already in the repertoire of the F5 canonical neurons. We show that the connectivity pattern of mirror neuron circuitry can be established through training, and that the resultant network can exhibit a range of novel, physiologically interesting behaviors during the process of action recognition. We train the system on the basis of final grasp but then observe the whole time course of mirror neuron activity, yielding predictions for neurophysiological experiments under conditions of spatial perturbation, altered kinematics, and ambiguous grasp execution which highlight the importance of the timing of mirror neuron activity.
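The learning component described above – a network trained on the final grasp and then probed over the whole time course of an observed movement – can be illustrated with a minimal sketch. The code below is not the authors' implementation; the 4-component hand state, the toy trajectory generator, and the single logistic unit are simplifying assumptions made purely for illustration.

```python
# Minimal sketch (not the MNS1 code): a logistic "mirror" unit is trained to
# classify grasp type from prefixes of a hand-state trajectory, always labelled
# by the final grasp, and then probed over time during an observed movement.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def synth_trajectory(grasp, T=20):
    """Toy hand-state trajectory: the hand approaches the object while the
    aperture opens and then closes to a grasp-dependent final size."""
    t = np.linspace(0.0, 1.0, T)
    dist = 1.0 - t                                   # hand-to-target distance
    speed = np.gradient(-dist, t)                    # approach speed
    final_ap = 0.9 if grasp == "power" else 0.3      # wide vs. pincer aperture
    aperture = final_ap * (np.sin(np.pi * t) + t)    # open, then close on object
    ap_vel = np.gradient(aperture, t)
    x = np.stack([dist, speed, aperture, ap_vel], axis=1)
    return x + 0.02 * rng.normal(size=x.shape)       # sensory noise

def make_dataset(n=200, T=20):
    X, y = [], []
    for _ in range(n):
        grasp = rng.choice(["power", "precision"])
        X.append(synth_trajectory(grasp, T))
        y.append(1.0 if grasp == "power" else 0.0)
    return np.array(X), np.array(y)

# Train on random prefixes, labelled by the FINAL grasp ("training on the basis
# of final grasp").
X, y = make_dataset()
n, T, D = X.shape
W, b, lr = np.zeros(D), 0.0, 0.5
for epoch in range(200):
    for i in rng.permutation(n):
        t_cut = rng.integers(1, T + 1)               # random prefix length
        feat = X[i, :t_cut].mean(axis=0)             # crude summary of the prefix
        g = sigmoid(feat @ W + b) - y[i]             # logistic-loss gradient
        W -= lr * g * feat
        b -= lr * g

# Probe: how the unit's activity builds up while observing a power grasp.
traj = synth_trajectory("power")
for t_cut in range(1, T + 1):
    p = sigmoid(traj[:t_cut].mean(axis=0) @ W + b)
    print(f"t = {t_cut:2d}/{T}   P(power grasp) = {p:.2f}")
```

Probing the trained unit on successive prefixes of an observed trajectory gives a recognition signal that builds up as the movement unfolds, which is the flavor of the timing predictions mentioned in the abstract.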
Journal of Motor Behavior | 1993
Bruce Hoff; Michael A. Arbib
Our goal was to create a principled account of a body of behavioral kinematic data on reaching and grasping. We show how to transform an optimality principle for overall hand transport into a feedback control law and then incorporate look-ahead modules in the controller to compensate for delays in sensory feedback. This model describes the kinematics of hand transport under a variety of circumstances, including target perturbations. We then develop a model for the temporal coordination of reach and grasp. We provide an optimization principle for hand preshaping that trades off the cost of maintaining the hand in an open position against the cost of accelerating the change in grip size. This yields a control system for preshaping. We then show that a model that uses only expected duration for coordination, rather than kinematic or dynamic variables, can describe the kinematics of interaction of hand transport and preshape under a variety of circumstances, including perturbations of object position and object size.
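The idea of turning an optimality principle for hand transport into a feedback control law can be sketched numerically. The snippet below uses the commonly quoted minimum-jerk feedback form, in which the commanded jerk depends on position error, velocity, acceleration, and the time remaining; the movement duration, the mid-movement target jump, and the integration scheme are illustrative assumptions, not the paper's parameters.

```python
# Sketch (not the authors' code) of a minimum-jerk feedback law for hand
# transport, integrated with a simple Euler scheme and subjected to a
# mid-movement target perturbation.
dt = 0.001                     # integration step (s); illustrative value
D = 0.5                        # planned movement duration (s); illustrative
x, v, a = 0.0, 0.0, 0.0        # hand position (m), velocity, acceleration
target = 0.30                  # initial target position (m)

for step in range(int(D / dt)):
    t = step * dt
    if abs(t - 0.2) < dt / 2:  # target jumps mid-movement (a perturbation trial)
        target = 0.40
    tau = max(D - t, 0.01)     # time remaining, floored for numerical safety
    # Feedback law: commanded jerk from error, velocity, acceleration, time-to-go.
    jerk = (60.0 * (target - x) / tau**3
            - 36.0 * v / tau**2
            - 9.0 * a / tau)
    a += jerk * dt
    v += a * dt
    x += v * dt
    if step % 100 == 0:
        print(f"t = {t:.2f} s   x = {x:.3f} m   v = {v:.3f} m/s")

print(f"final position: {x:.3f} m (target {target} m)")
```

Because the law is expressed in feedback form, a change of target mid-movement simply changes the error term, and the trajectory is smoothly redirected toward the new goal, which is the kind of perturbation behavior the model is meant to capture.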
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1994
Toshio Uchiyama; Michael A. Arbib
This paper presents a color image segmentation method that divides the color space into clusters. Competitive learning is used as a tool for clustering the color space based on the least sum-of-squares criterion. We show, both theoretically and experimentally, that competitive learning converges to an approximation of the optimal solution under this criterion. We apply the method to various color scenes and show its efficiency as a color image segmentation method. We also show, with experimental results, the effects of clustering in different color coordinate systems.
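The clustering step can be illustrated with a short sketch. The code below is not the authors' implementation; the number of clusters, the learning-rate schedule, and the toy input image are illustrative assumptions. It shows plain winner-take-all competitive learning over pixel colors, followed by nearest-centre assignment, which is the least-sum-of-squares style of segmentation the abstract refers to.

```python
# Minimal sketch of competitive learning for color clustering: the winning
# cluster centre moves toward each presented pixel, and the learned centres
# then induce a segmentation of the image.
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning_segment(image, K=8, epochs=5, lr0=0.05):
    """image: (H, W, 3) float array in [0, 1]; returns (labels, centres)."""
    pixels = image.reshape(-1, 3)
    # Initialise centres on randomly chosen pixels.
    centres = pixels[rng.choice(len(pixels), size=K, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 / (1 + epoch)                            # decaying step size
        for x in pixels[rng.permutation(len(pixels))]:
            winner = np.argmin(((centres - x) ** 2).sum(axis=1))
            centres[winner] += lr * (x - centres[winner])  # move winner toward x
    # Assign every pixel to its nearest centre (sum-of-squares segmentation).
    d = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1).reshape(image.shape[:2])
    return labels, centres

# Toy usage on a random "image"; replace with a real RGB image scaled to [0, 1].
img = rng.random((64, 64, 3))
labels, centres = competitive_learning_segment(img, K=4)
print("cluster sizes:", np.bincount(labels.ravel(), minlength=4))
```

Running the same procedure on pixels expressed in different color coordinates (e.g., RGB versus a luminance-chrominance space) changes the cluster geometry, which is the comparison of color coordinates the abstract mentions.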
Journal of Neurophysiology | 1998
Scott T. Grafton; Andrew H. Fagg; Michael A. Arbib
Positron emission tomography (PET) brain mapping was used to investigate whether or not human dorsal premotor cortex is involved in selecting motor acts based on arbitrary visual stimuli. Normal subjects performed four movement selection tasks. A manipulandum with three graspable stations was used. An imperative visual cue (LEDs illuminated in random order) indicated which station to grasp next, with no instructional delay period. In a power task, a large-aperture power grip was used for all trials, irrespective of the LED color. In a precision task, a pincer grasp of thumb and index finger was used. In a conditional task, the type of grasp (power or precision) was randomly determined by LED color. Comparison of the conditional selection task versus the average of the power and precision tasks revealed increased blood flow in left dorsal premotor cortex and superior parietal lobule. The average rate of producing the different grasp types and transport to the manipulandum stations was equivalent across this comparison, minimizing the contribution of movement attributes such as planning the individual movements (as distinct from planning associated with use of instructional stimuli), kinematics, or direction of target or limb movement. A comparison of all three movement tasks versus a rest task identified movement-related activity involving a large area of central, precentral, and postcentral cortex. In the region of the precentral sulcus, movement-related activity was located immediately caudal to the area activated during selection. The results establish a role for human dorsal premotor cortex and superior parietal cortex in selecting stimulus-guided movements and suggest functional segregation within dorsal premotor cortex.
European Journal of Neuroscience | 1998
Nicolas Schweighofer; Michael A. Arbib; Mitsuo Kawato
This study focuses on the role of the motor cortex, the spinal cord and the cerebellum in the dynamics stage of the control of arm movement. Currently, two classes of models have been proposed for the neural control of movements: the virtual trajectory control hypothesis and the hypothesis that internal models of the motor apparatus are acquired. In the present study, we expand the virtual trajectory model to whole-arm reaching movements. This expanded model accurately reproduced slow movements, but faster reaching movements deviated significantly from the planned trajectories, indicating that for fast movements this model was not sufficient. These results led us to propose a new distributed functional model, consistent with behavioural, anatomical and neurophysiological data, which takes into account arm muscles, spinal cord, motor cortex and cerebellum and is consistent with the view that the central nervous system acquires a distributed inverse dynamics model of the arm. Previous studies indicated that the cerebellum compensates for the interaction forces that arise during reaching movements. We show here how the cerebellum may increase the accuracy of reaching movements by compensating for the interaction torques, learning a portion of an inverse dynamics model that refines a basic inverse model in the motor cortex and spinal cord.
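The interaction torques at issue here are the joint-coupling terms of standard rigid-body arm dynamics. For orientation (this is the generic robotics form, not the paper's notation):

```latex
% Generic multi-joint arm dynamics; the joint-coupling terms are the
% "interaction torques" the cerebellum is proposed to compensate.
\tau \;=\; \underbrace{M(q)\,\ddot{q}}_{\text{inertial}}
      \;+\; \underbrace{C(q,\dot{q})\,\dot{q}}_{\text{Coriolis / centripetal}}
      \;+\; \underbrace{G(q)}_{\text{gravity}}
```

The off-diagonal entries of the inertia matrix M(q) and the velocity-dependent terms in C couple the joints, and they grow with movement speed. This is consistent with the finding reported above that a virtual-trajectory controller suffices for slow movements but deviates for fast ones, and with the proposal that the cerebellar part of the model learns precisely these speed-dependent coupling terms.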
Trends in Cognitive Sciences | 2004
Michael A. Arbib; Jean Marc Fellous
Some robots have been given emotional expressions in an attempt to improve human-computer interaction. In this article we analyze what it would mean for a robot to have emotion, distinguishing emotional expression for communication from emotion as a mechanism for the organization of behavior. Research on the neurobiology of emotion yields a deepening understanding of interacting brain structures and neural mechanisms rooted in neuromodulation that underlie emotions in humans and other animals. However, the chemical basis of animal function differs greatly from the mechanics and computations of current machines. We therefore abstract from biology a functional characterization of emotion that does not depend on physical substrate or evolutionary history, and is broad enough to encompass the possible emotions of robots.
Current Anthropology | 2008
Michael A. Arbib; Katja Liebal; Simone Pika
The performance of language is multimodal, not confined to speech. A review of monkey and ape communication demonstrates greater flexibility in the use of hands and body than in vocalization. Nonetheless, the gestural repertoire of any group of nonhuman primates is small compared with the vocabulary of any human language and thus, presumably, of the transitional form called protolanguage. We argue that it was the coupling of gestural communication with enhanced capacities for imitation that made possible the emergence of protosign to provide essential scaffolding for protospeech in the evolution of protolanguage. Similarly, we argue against a direct evolutionary path from nonhuman primate vocalization to human speech. The analysis refines aspects of the mirror system hypothesis on the role of the primate brain's mirror system for manual action in the evolution of the human language-ready brain.