Kristen L. Macuga
University of California, Santa Barbara
Publication
Featured research published by Kristen L. Macuga.
Cognitive Systems Research | 2002
Dario D. Salvucci; Kristen L. Macuga
This paper describes how cognitive modeling can be used to predict the impact of cell-phone dialing on driver behavior. Models were developed of four methods of cell-phone dialing and then integrated with an existing driver model of steering and speed control.
NeuroImage | 2012
Kristen L. Macuga; Scott H. Frey
The fact that action observation, motor imagery and execution are associated with partially overlapping increases in parieto-frontal areas has been interpreted as evidence for reliance of these behaviors on a common system of motor representations. However, studies that include all three conditions within a single paradigm are rare, and consequently, there is a dearth of knowledge concerning the distinct mechanisms involved in these functions. Here we report key differences in neural representations subserving observation, imagery, and synchronous imitation of a repetitive bimanual finger-tapping task using fMRI under conditions in which visual stimulation is carefully controlled. Relative to rest, observation, imagery, and synchronous imitation are all associated with widespread increases in cortical activity. Importantly, when effects of visual stimulation are properly controlled, each of these conditions is found to have its own unique neural signature. Relative to observation or imagery, synchronous imitation shows increased bilateral activity along the central sulcus (extending into precentral and postcentral gyri), in the cerebellum, supplementary motor area (SMA), parietal operculum, and several motor-related subcortical areas. No areas show greater increases for imagery than for synchronous imitation; however, observation is associated with greater increases in caudal SMA activity than synchronous imitation. Compared to observation, imagery increases activation in pre-SMA and left inferior frontal cortex, while no areas show the inverse effect. Region-of-interest (ROI) analyses reveal that areas involved in bimanual open-loop movements respond most to synchronous imitation (primary sensorimotor, classic SMA, and cerebellum), and less vigorously to imagery and observation. The differential activity between conditions suggests an alternative hierarchical model in which these behaviors all rely on partially independent mechanisms.
Vision Research | 2004
Rob Gray; Kristen L. Macuga; D. Regan
Self-motion through a three-dimensional array of objects creates a radial flow pattern on the retina. We superimposed a simulated object moving in depth on such a flow pattern to investigate the effect of the flow pattern on judgments of both the time to collision (TTC) with an approaching object and the trajectory of that object. Our procedure allowed us to decouple the direction and speed of simulated self-motion in depth (MID) from the direction and speed of simulated object MID. In Experiment 1 we found that objects with the same closing speed were perceived to have a higher closing speed when self-motion and object-motion were in the same direction and a lower closing speed when they were in the opposite direction. This effect saturated rapidly as the ratio between the speeds of self-motion and object-motion was increased. In Experiment 2 we found that the perceived direction of object MID was shifted towards the focus of expansion of the flow pattern. In Experiments 3 and 4 we found that the biases in perceived speed and direction produced by simulated self-motion were significantly reduced when binocular information about MID was added. These findings suggest that the large body of research that has studied motion perception using stationary observers has limited applicability to situations in which both the observer and the object are moving.
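The TTC judgments studied here rest on the optical variable tau: an approaching object's angular size divided by its rate of expansion approximates time to collision without requiring knowledge of distance or speed. A minimal sketch of that geometry (function and parameter names are illustrative, not from the paper):

```python
import math

def time_to_collision(distance, closing_speed):
    """Ground-truth TTC: time until the object reaches the observer."""
    return distance / closing_speed

def tau(object_size, distance, closing_speed):
    """Optical TTC estimate: angular size over its rate of expansion.

    theta = 2 * atan(size / (2 * distance)); differentiating with
    respect to time (distance shrinks at closing_speed) gives
    theta_dot = size * closing_speed / (distance**2 + (size / 2)**2).
    For small angles, tau = theta / theta_dot ~= distance / speed.
    """
    theta = 2 * math.atan2(object_size / 2, distance)
    theta_dot = (object_size * closing_speed) / (
        distance ** 2 + (object_size / 2) ** 2
    )
    return theta / theta_dot
```

For a small, distant object the optical estimate converges on the true TTC, which is why expansion rate alone can support collision judgments.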
Experimental Brain Research | 2007
Kristen L. Macuga; Andrew C. Beall; Jonathan W. Kelly; Roy Smith; Jack M. Loomis
Can driver steering behaviors, such as a lane change, be executed without visual feedback? In a recent study with a fixed-base driving simulator, drivers failed to execute the return phase of a lane change when steering without vision, resulting in systematic final heading errors biased in the direction of the lane change. Here we challenge the generality of that finding. Suppose that, when asked to perform a lane (position) change, drivers fail to recognize that a heading change is required to make a lateral position change. However, given an explicit path, the necessary heading changes become apparent. Here we demonstrate that when heading requirements are made explicit, drivers appropriately implement the return phase. More importantly, by using an electric vehicle outfitted with a portable virtual reality system, we also show that valid inertial information (i.e., vestibular and somatosensory cues) enables accurate steering behavior when vision is absent. Thus, the failure to properly execute a lane change in a driving simulator without a moving base does not present a fundamental problem for feed-forward driving behavior.
Neuropsychologia | 2011
Kristen L. Macuga; Scott H. Frey
The fact that inferior frontal (IFg) and supramarginal (SMg) gyri respond to both self-generated and observed actions has been interpreted as evidence for a perception-action linking mechanism (mirroring). Yet, the brain readily distinguishes between percepts generated by one's own movements vs. those of another. Do IFg and/or SMg respond differentially to these visual stimuli even when carefully matched? We used BOLD fMRI to address this question as participants made repetitive bimanual hand movements while viewing either live visual feedback or perceptually similar, pre-recorded video of an actor. As expected, bilateral IFg and SMg increased activity during both conditions. However, right SMg and IFg responded differentially during live visual feedback vs. matched recordings. These mirror system areas may distinguish self-generated percepts by detecting subtle spatio-temporal differences between predicted and actual sensory feedback and/or visual and somatosensory signals.
Experimental Brain Research | 2012
Kristen L. Macuga; Athan P. Papailiou; Scott H. Frey
A Fitts’ task was used to investigate how tools are incorporated into the internal representations that underlie pointing movements, and whether such knowledge can be generalized across tasks. We measured the speed-accuracy trade-offs that occurred as target width was varied for both real and imagined movements. The dynamics of the pointing tool used in the task were manipulated—regular pen, top-heavy tool, and bottom-heavy tool—to test the fidelity of internal representations of movements involving the use of novel tools. To test if such representations can be generalized, the orientation of the pointing task was also manipulated (horizontal vs. vertical). In all conditions, both real and imagined performances conformed to the speed-accuracy relationship described by Fitts’ law. We found significant differences in imagined movement times (MTs) for the two weighted tools compared to the regular pen, but not between the weighted tools. By contrast, real movement durations differed between all tools. These results indicate that even relatively brief experience using novel tools is sufficient to influence the internal representation of the dynamics of the tool-limb system. However, in the absence of feedback, these representations do not capture the differences in performance that result from the unique dynamics of these weighted tools.
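Fitts' law, which both real and imagined movements obeyed here, predicts movement time from a task's index of difficulty: MT = a + b · log2(2D/W), where D is movement distance and W is target width. A small sketch (the coefficients a and b below are illustrative placeholders, not values fitted to this study):

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits: ID = log2(2D / W).

    Halving target width W (or doubling distance D) adds one bit.
    """
    return math.log2(2 * distance / width)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law: MT = a + b * ID, with a the intercept (seconds)
    and b the time cost per bit. Values of a and b are illustrative.
    """
    return a + b * index_of_difficulty(distance, width)
```

In this framework, the tool manipulations would show up as changes in the fitted a and b coefficients rather than in the form of the law itself.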
Attention Perception & Psychophysics | 2006
Kristen L. Macuga; Jack M. Loomis; Andrew C. Beall; Jonathan W. Kelly
How do we determine where we are heading during visually controlled locomotion? Psychophysical research has shown that humans are quite good at judging their travel direction, or heading, from retinal optic flow. Here we show that retinal optic flow is sufficient, but not necessary, for determining heading. By using a purely cyclopean stimulus (random dot cinematogram), we demonstrate heading perception without retinal optic flow. We also show that heading judgments are equally accurate for the cyclopean stimulus and a conventional optic flow stimulus, when the two are matched for motion visibility. The human visual system thus demonstrates flexible, robust use of available visual cues for perceiving heading direction.
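Heading from optic flow is conventionally recovered from the focus of expansion: under pure observer translation, all flow vectors radiate from a single image point that marks the direction of travel. A minimal least-squares sketch (function names and the synthetic flow field are illustrative, not the study's method):

```python
def focus_of_expansion(points, flows):
    """Least-squares estimate of the focus of expansion (FoE).

    Under pure translation, every flow vector lies on a line through
    the FoE. Accumulate normal equations for the point minimizing
    squared perpendicular distance to all flow lines, then solve the
    resulting 2x2 linear system.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (vx, vy) in zip(points, flows):
        norm = (vx * vx + vy * vy) ** 0.5
        nx, ny = -vy / norm, vx / norm   # unit normal to flow direction
        d = nx * px + ny * py            # line offset: n . p
        a11 += nx * nx
        a12 += nx * ny
        a22 += ny * ny
        b1 += nx * d
        b2 += ny * d
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Synthetic radial flow expanding from a heading point at (0.2, -0.1):
foe_true = (0.2, -0.1)
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0), (0.5, 0.7)]
flw = [(px - foe_true[0], py - foe_true[1]) for px, py in pts]
foe_est = focus_of_expansion(pts, flw)
```

The paper's point is that this geometric computation need not operate on first-order retinal motion: the same flow field defined purely cyclopeanly supports equally accurate heading judgments.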
Psychological Science | 2006
Jack M. Loomis; Andrew C. Beall; Kristen L. Macuga; Jonathan W. Kelly; Roy S. Smith
In everyday life, the optic flow associated with the performance of complex actions, like walking through a field of obstacles and catching a ball, entails retinal flow with motion energy (first-order motion). We report the results of four complex action tasks performed in virtual environments without any retinal motion energy. Specifically, we used dynamic random-dot stereograms with single-frame lifetimes (cyclopean stimuli) such that in neither eye was there retinal motion energy or other monocular information about the actions being performed. Performance on the four tasks with the cyclopean stimuli was comparable to performance with luminance stimuli, which do provide retinal optic flow. The near equivalence of the two types of stimuli indicates that if optic flow is involved in the control of action, it is not tied to first-order retinal motion.
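The cyclopean stimuli described above can be sketched as follows; this is an illustrative reconstruction of the stimulus idea, not the authors' actual code. Because every frame is fresh random noise, neither eye's image contains motion energy across frames; only the binocular disparity of a patch defines the target.

```python
import random

def drds_frame(width, height, patch, disparity, rng):
    """One frame of a dynamic random-dot stereogram with single-frame
    dot lifetimes.

    patch = (y0, y1, x0, x1) in pixels; the patch is shifted leftward
    by `disparity` pixels in the right eye's image, so it stands out
    in depth binocularly while each monocular frame is pure noise.
    """
    left = [[rng.randint(0, 1) for _ in range(width)] for _ in range(height)]
    right = [row[:] for row in left]
    y0, y1, x0, x1 = patch
    for y in range(y0, y1):
        for x in range(x0, x1):
            right[y][x - disparity] = left[y][x]
    return left, right

# Regenerating all dots every frame (single-frame lifetime) removes
# monocular motion energy while preserving the disparity-defined form.
rng = random.Random(0)
left, right = drds_frame(40, 30, (10, 20, 15, 25), 2, rng)
```

Animating the disparity map over such frames yields motion that is visible only to binocular (cyclopean) mechanisms.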
ACM Transactions on Applied Perception | 2006
Jonathan W. Kelly; Andrew C. Beall; Jack M. Loomis; Roy S. Smith; Kristen L. Macuga
The ability to judge heading (direction of travel) has been the focus of much research, but a role for perceived heading in steering has not been firmly established. Subjects steered down a road consisting of straight and curved segments and made heading judgments along the way. Heading judgments while traversing curved road segments were biased in the direction of the curve by up to 5° and position errors on the same curved roads were highly correlated with heading biases. This correlation was revealed by the simultaneous measurement of steering performance and perceived heading.
Journal of Experimental Psychology: Applied | 2018
Jasper LaFortune; Kristen L. Macuga
Motor learning is an essential task, but little is known about how it might be facilitated via instructional presentation, particularly with respect to recent technological advancements. We examined the effects of spatial orientation (0° vs. 180°) and immersion (immersive virtual reality vs. nonimmersive video) on the ability to reproduce complex, dynamic movement sequences. We also evaluated whether these effects were modulated by experience. Experienced dancers and novices practiced dances by imitating a virtual instructor and then, following a delay, had to perform them from memory. In line with theoretical models of motor learning, video-coded accuracy scores increased with successive trials in accordance with the power law of practice. Participants were more accurate after viewing the instructor in a 0° orientation. However, their performance was not improved by immersive virtual reality instruction. Experienced dancers were more accurate than novices, but experience did not interact with orientation or immersion. These results suggest that, when observing complex, dynamic movement sequences, individuals across experience levels can perform and learn these actions better via a 0° orientation, and that virtual instruction does not require immersion to be effective.
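The power law of practice invoked above states that performance improves as a power function of the number of practice trials, which is linear in log-log coordinates. A hypothetical sketch, phrased in terms of completion time rather than this study's accuracy scores (all names and values are illustrative):

```python
import math

def power_law_time(n, t1, alpha):
    """Power law of practice: time on trial n is T(n) = T1 * n**(-alpha),
    where T1 is first-trial time and alpha is the learning rate."""
    return t1 * n ** (-alpha)

def fit_power_law(times):
    """Recover T1 and alpha by least squares in log-log space, where
    the power law becomes linear: log T = log T1 - alpha * log n."""
    xs = [math.log(n + 1) for n in range(len(times))]
    ys = [math.log(t) for t in times]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return math.exp(intercept), -slope
```

The log-log linearization is the standard diagnostic: trial-by-trial scores that fall on a straight line in log-log coordinates are consistent with power-law learning.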