Publication


Featured research published by Burcu A. Urgen.


Frontiers in Neurorobotics | 2013

EEG theta and Mu oscillations during perception of human and robot actions

Burcu A. Urgen; Markus Plank; Hiroshi Ishiguro; Howard Poizner; Ayse Pinar Saygin

The perception of others’ actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8–13 Hz) and frontal theta (4–8 Hz) activity exhibited selectivity for biological entities, in particular whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting that agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore the neural basis of action processing on the one hand, and inform the design of social robots on the other.
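
Below is a minimal, illustrative sketch of how mu-band suppression of the kind reported above could be quantified, assuming epoched EEG arrays over sensorimotor channels are already available; the sampling rate, array shapes, and variable names are assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: quantifying sensorimotor mu (8-13 Hz) suppression
# during action observation relative to a pre-stimulus baseline.
import numpy as np
from scipy.signal import welch

FS = 512  # sampling rate in Hz (assumed)

def band_power(epochs, fs, fmin, fmax):
    """Mean power in [fmin, fmax] Hz per epoch, averaged over channels.

    epochs: array of shape (n_epochs, n_channels, n_samples)
    """
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[..., band].mean(axis=(-1, -2))  # -> (n_epochs,)

# Placeholder data standing in for observation and baseline epochs.
observation_epochs = np.random.randn(40, 8, 2 * FS)
baseline_epochs = np.random.randn(40, 8, 2 * FS)

mu_obs = band_power(observation_epochs, FS, 8, 13)
mu_base = band_power(baseline_epochs, FS, 8, 13)

# Log-ratio < 0 indicates mu power attenuation (suppression) during observation.
mu_suppression = np.log(mu_obs / mu_base)
print("mean mu suppression:", mu_suppression.mean())
```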


Frontiers in Human Neuroscience | 2015

Observation and imitation of actions performed by humans, androids, and robots: an EMG study

Galit Hofree; Burcu A. Urgen; Piotr Winkielman; Ayse Pinar Saygin

Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by using well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG), to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations.
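
A minimal sketch of a standard EMG amplitude measure (rectification followed by low-pass filtering to obtain an envelope), compared across observed agents; the sampling rate, cutoff frequency, and data layout below are assumptions rather than the published analysis.

```python
# Hypothetical sketch: rectify EMG, low-pass filter to get the envelope,
# and compare mean amplitude across agent conditions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # EMG sampling rate in Hz (assumed)

def emg_envelope(signal, fs, cutoff_hz=10.0):
    """Rectify the EMG trace and low-pass filter it to get its envelope."""
    rectified = np.abs(signal - signal.mean())
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)

# trials[condition] -> array (n_trials, n_samples) of right-arm EMG (placeholders)
rng = np.random.default_rng(0)
trials = {c: rng.standard_normal((30, 3 * FS)) for c in ("human", "android", "robot")}

mean_amplitude = {
    cond: np.mean([emg_envelope(t, FS).mean() for t in data])
    for cond, data in trials.items()
}
print(mean_amplitude)  # e.g., compare imitation of the human vs. mechanical agents
```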


The Journal of Neuroscience | 2015

Towards an Empirically Grounded Predictive Coding Account of Action Understanding

Burcu A. Urgen; Luke E. Miller

Recent work in cognitive and systems neuroscience suggests that the brain is a prediction machine (Clark, 2013), continually attempting to predict the external causes of sensory information. This idea is formulated in the predictive coding framework, a modern theory of brain function (Friston, …).


International Workshop on Pattern Recognition in Neuroimaging | 2016

Representational similarity of actions in the human brain

Burcu A. Urgen; Selen Pehlivan; Ayse Pinar Saygin

Visual processing of actions is supported by a network of brain regions in occipito-temporal, parietal, and premotor cortex in the primate brain, known as the Action Observation Network (AON). The representational properties of each node of this network, however, remain unclear. In this study, we investigated the representational content of brain areas in the AON using fMRI, representational similarity analysis (RSA), and modeling. Subjects were shown video clips of three agents performing eight different actions during fMRI scanning. We then computed the representational dissimilarity matrices (RDMs) for each brain region and compared them with those of two sets of model representations constructed from computer vision and semantic attributes. Our findings reveal that different nodes of the AON have different representational properties. pSTS, the visual area of the AON, represents high-level visual features such as movement kinematics. As one goes higher in the AON hierarchy, representations become more abstract and semantic: parietal cortex represents several aspects of actions, such as action category, intention of the action, and target of the action. These results suggest that during visual processing of actions, pSTS pools information from visual cortex to compute movement kinematics, and passes that information to higher levels of the AON that code the semantics of actions, such as action category, intention, and target, consistent with computational models of visual action recognition.
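
A minimal sketch of the RSA logic described above, assuming ROI response patterns have already been extracted: build an RDM as 1 minus the Pearson correlation between condition patterns, then compare its upper triangle with a model RDM using Spearman correlation. ROI names, shapes, and model RDMs here are placeholders.

```python
# Hypothetical sketch of representational similarity analysis (RSA).
import numpy as np
from scipy.stats import spearmanr

def compute_rdm(patterns):
    """patterns: (n_conditions, n_voxels) -> (n_conditions, n_conditions) RDM."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation of the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

# 3 agents x 8 actions = 24 conditions; voxel counts are placeholders.
rng = np.random.default_rng(1)
roi_patterns = {"pSTS": rng.standard_normal((24, 300)),
                "parietal": rng.standard_normal((24, 500))}
model_rdms = {"kinematics": rng.random((24, 24)),
              "action_category": rng.random((24, 24))}

for roi, patterns in roi_patterns.items():
    rdm = compute_rdm(patterns)
    for model, model_rdm in model_rdms.items():
        # Symmetrize the placeholder model RDMs so the triangles are well defined.
        m = (model_rdm + model_rdm.T) / 2
        print(roi, model, round(compare_rdms(rdm, m), 3))
```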


Journal of Vision | 2015

Representational similarity analysis of fMRI responses in brain areas involved in visual action processing

Burcu A. Urgen; Ayse Pinar Saygin

Over the last two decades, neurophysiological and neuroimaging studies have identified a network of brain regions in occipito-temporal, parietal, and frontal cortex that are involved in visual processing of actions. The neural computations and representational properties of each area, however, remain unclear. In this study, we investigated the representational content of human brain areas in the action observation network using fMRI and representational similarity analysis. Observers were shown video clips of 8 different actions performed by 3 different agents (actors) during fMRI scanning. We then derived two indices from the representational similarity matrices for each region of interest (ROI): an agent decoding index and an action decoding index, which reflect the presence of significant agent and action information, respectively. We found significant agent decoding in early visual areas and category-sensitive cortical regions, including FFA and EBA, as well as in the action observation network. However, the agent decoding index varied across ROIs: it was strongest in the right posterior superior temporal sulcus (pSTS), where it was significantly greater than in parietal and frontal ROIs in the right hemisphere. On the other hand, although we found significant action decoding in all visual areas as well as the action observation network, the strength of action decoding was similar across ROIs. However, the representational structure of action types varied across ROIs, as revealed by hierarchical clustering, indicating that action-related information changes along the levels of the cortical hierarchy. These results suggest that during visual action processing, pSTS pools information from the early visual areas to compute the identity of the agent, and passes that information to regions in parietal and frontal cortex that code higher-level aspects of actions, consistent with computational models of visual action recognition. Meeting abstract presented at VSS 2015.
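
One plausible way such agent and action decoding indices could be computed (the abstract does not spell out the exact formula) is the difference between mean within-category and mean between-category similarity in a condition-by-condition similarity matrix; the sketch below assumes that definition and uses placeholder data.

```python
# Hypothetical sketch: within-minus-between similarity as a decoding index.
import numpy as np

def decoding_index(similarity, labels):
    """Within-label minus between-label mean similarity (off-diagonal only)."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = similarity[same & off_diag].mean()
    between = similarity[~same].mean()
    return within - between

# 3 agents x 8 actions = 24 conditions (assumed agent-major ordering).
agents = np.repeat(np.arange(3), 8)
actions = np.tile(np.arange(8), 3)

rng = np.random.default_rng(2)
similarity = np.corrcoef(rng.standard_normal((24, 200)))  # placeholder ROI data

print("agent decoding index:", decoding_index(similarity, agents))
print("action decoding index:", decoding_index(similarity, actions))
```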


Journal of Vision | 2015

Influence of Form and Motion on Biological Motion Prediction

Wednesday Bushong; Burcu A. Urgen; Luke E. Miller; Ayse Pinar Saygin

In natural vision, although moving objects are often partially or fully occluded, we are able to maintain coherent representations of them and their locations. The form of an object can influence judgments regarding its motion path, especially in the case of biological motion (Shiffrar and Freyd, 1990). Moreover, these effects can depend on temporal factors such as exposure duration. Here, we used an occlusion paradigm to investigate how the amount of motion information affects predictions of object movements. We were further interested in whether these predictions would also be affected by the biologicalness of the object. The object presented was either biological (a hand) or non-biological (an oval). The pre-occlusion exposure time (prime duration) was either 100, 500, or 1000 ms, followed by a 500 ms occlusion period. When the object reappeared, the motion continued at an earlier frame (-350, -100, -20 ms), at the correct frame, or at a later frame (+20, +100, +350 ms). Participants were asked to judge whether or not the continuation after occlusion was too late. For both object types, there was a significant difference between the psychophysical curves for the 100 and 1000 ms prime durations: when very little motion information was available before occlusion (100 ms), it was harder to make predictions about the movement of the object. For the hand (biological) object only, prediction performance for biological motion trajectories was also significantly different between the 500 and 1000 ms durations. These data suggest that, given sufficient time and hence more information, the visual system can be influenced by high-level constraints, such as knowledge of how biological objects move, in making predictions about object movement. These data are consistent with Bayesian models of cue integration in perception. Meeting abstract presented at VSS 2015.
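
A minimal sketch of the kind of psychometric analysis described above: fitting a cumulative-Gaussian curve to the proportion of "too late" judgments as a function of reappearance offset, separately for each prime duration. The response proportions below are invented placeholders.

```python
# Hypothetical sketch: psychometric function fit to timing judgments.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(offset_ms, mu, sigma):
    """Probability of a 'too late' response as a cumulative Gaussian."""
    return norm.cdf(offset_ms, loc=mu, scale=sigma)

offsets = np.array([-350, -100, -20, 0, 20, 100, 350])  # reappearance offsets (ms)

# Placeholder response proportions for two prime durations.
p_too_late = {
    100: np.array([0.10, 0.25, 0.40, 0.50, 0.55, 0.70, 0.85]),
    1000: np.array([0.02, 0.10, 0.30, 0.50, 0.65, 0.90, 0.99]),
}

for prime_ms, proportions in p_too_late.items():
    (mu, sigma), _ = curve_fit(psychometric, offsets, proportions, p0=(0.0, 100.0))
    # mu ~ point of subjective simultaneity; sigma ~ temporal sensitivity.
    print(f"prime {prime_ms} ms: PSE = {mu:.1f} ms, slope sigma = {sigma:.1f} ms")
```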


Cognitive Science | 2012

Temporal Dynamics of Action Perception: The Role of Biological Appearance and Motion Kinematics

Burcu A. Urgen; Markus Plank; Hiroshi Ishiguro; Howard Poizner; Ayse Pinar Saygin


Neuropsychologia | 2018

Uncanny valley as a window into predictive processing in the social brain

Burcu A. Urgen; Marta Kutas; Ayse Pinar Saygin


Journal of Vision | 2017

Representational Similarity of Actions in the Human Brain

Ayse Pinar Saygin; Burcu A. Urgen; Selen Pehlivan


Journal of Vision | 2014

Visual evoked potentials in response to biological and non-biological agents

Burcu A. Urgen; Wayne Khoe; Alvin X. Li; Ayse Pinar Saygin

Collaboration


Dive into Burcu A. Urgen's collaborations.

Top Co-Authors

Howard Poizner (University of California)
Markus Plank (University of California)
Luke E. Miller (University of California)
Marta Kutas (University of California)
Jon Driver (University College London)
Alvin X. Li (University of California)
Galit Hofree (University of California)