Network


Latest external collaborations at the country level. Click a dot to view details.

Hotspot


Dive into the research topics where Jonathan W. Kelly is active.

Publication


Featured research published by Jonathan W. Kelly.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2007

Sensorimotor Alignment Effects in the Learning Environment and in Novel Environments

Jonathan W. Kelly; Marios N. Avraamides; Jack M. Loomis

Four experiments investigated the conditions contributing to sensorimotor alignment effects (i.e., the advantage for spatial judgments from imagined perspectives aligned with the body). Using virtual reality technology, participants learned object locations around a room (learning room) and made spatial judgments from imagined perspectives aligned or misaligned with their actual facing direction. Sensorimotor alignment effects were found when testing occurred in the learning room but not after walking 3 m into a neighboring (novel) room. Sensorimotor alignment effects reemerged when participants returned to the learning room or when egocentric imagery instructions were provided in the novel room. Additionally, visual and spatial similarities between the test and learning environments were independently sufficient to cause sensorimotor alignment effects. Memory alignment effects, independent from sensorimotor alignment effects, occurred in all testing conditions. Results are interpreted in the context of two-system spatial memory theories positing separate representations to account for sensorimotor and memory alignment effects.


Cognitive Processing | 2008

Multiple systems of spatial memory and action

Marios N. Avraamides; Jonathan W. Kelly

Recent findings from spatial cognition and cognitive neuroscience suggest that different types of mental representations could mediate the off-line retrieval of spatial relations from memory and the on-line guidance of motor actions in space. As a result, a number of models proposing multiple systems of spatial memory have been recently formulated. In the present article we review these models and we evaluate their postulates based on available experimental evidence. Furthermore, we discuss how a multiple-system model can apply to situations in which people reason about their immediate surroundings or non-immediate environments by incorporating a model of sensorimotor facilitation/interference. This model draws heavily on previous accounts of sensorimotor interference and takes into account findings from the stimulus–response compatibility literature.


Cognition | 2008

The Shape of Human Navigation: How Environmental Geometry Is Used in Maintenance of Spatial Orientation

Jonathan W. Kelly; Timothy P. McNamara; Bobby Bodenheimer; Thomas H. Carr; John J. Rieser

The role of environmental geometry in maintaining spatial orientation was measured in immersive virtual reality using a spatial updating task (requiring maintenance of orientation during locomotion) within rooms varying in rotational symmetry (the number of room orientations providing the same perspective). Spatial updating was equally good in trapezoidal, rectangular and square rooms (one-fold, two-fold and four-fold rotationally symmetric, respectively) but worse in a circular room (infinity-fold rotationally symmetric). This contrasts with reorientation performance, which was incrementally impaired by increasing rotational symmetry. Spatial updating performance in a shape-changing room (containing visible corners and flat surfaces, but changing its shape over time) was no better than performance in a circular room, indicating that superior spatial updating performance in angular environments was due to remembered room shape, rather than improved self-motion perception in the presence of visible corners and flat surfaces.


Psychonomic Bulletin & Review | 2008

Spatial memories of virtual environments: How egocentric experience, intrinsic structure, and extrinsic structure interact

Jonathan W. Kelly; Timothy P. McNamara

Previous research has uncovered three primary cues that influence spatial memory organization: egocentric experience, intrinsic structure (object defined), and extrinsic structure (environment defined). In the present experiments, we assessed the relative importance of these cues when all three were available during learning. Participants learned layouts from two perspectives in immersive virtual reality. In Experiment 1, axes defined by intrinsic and extrinsic structures were in conflict, and learning occurred from two perspectives, each aligned with either the intrinsic or the extrinsic structure. Spatial memories were organized around a reference direction selected from the first perspective, regardless of its alignment with intrinsic or extrinsic structures. In Experiment 2, axes defined by intrinsic and extrinsic structures were congruent, and spatial memories were organized around reference axes defined by those congruent structures, rather than by the initially experienced view. The findings are discussed in the context of spatial memory theory as it relates to real and virtual environments.


Psychonomic Bulletin & Review | 2012

Tests enhance retention and transfer of spatial learning

Shana K. Carpenter; Jonathan W. Kelly

Many studies have reported that tests are beneficial for learning (e.g., Roediger & Karpicke, 2006a). However, the majority of studies on the testing effect have been limited to a combination of relatively simple verbal tasks and final tests that assessed memory for the same material that had originally been tested. The present study explored whether testing is beneficial for complex spatial memory and whether these benefits hold for both retention and transfer. After encoding a three-dimensional layout of objects presented in a virtual environment, participants completed a judgment-of-relative-direction (JRD) task in which they imagined standing at one object, facing a second object, and pointed to a third object from the imagined perspective. Some participants completed this task by relying on memory for the previously encoded layout (i.e., the test conditions), whereas for others the location of the third object was identified ahead of time, so that retrieval was not required (i.e., the study condition). On a final test assessing their JRD performance, participants who learned through testing outperformed those who learned through study. This was true even when corrective feedback was not provided on the initial JRD task and when the final test assessed memory from vantage points that had never been practiced during the initial JRD.


Cognition | 2010

Reference frames during the acquisition and development of spatial memories

Jonathan W. Kelly; Timothy P. McNamara

Four experiments investigated the role of reference frames during the acquisition and development of spatial knowledge, when learning occurs incrementally across views. In two experiments, participants learned overlapping spatial layouts. Layout 1 was first studied in isolation, and Layout 2 was later studied in the presence of Layout 1. The Layout 1 learning view was manipulated, whereas the Layout 2 view was held constant. Manipulation of the Layout 1 view influenced the reference frame used to organize Layout 2, indicating that reference frames established during early environmental exposure provided a framework for organizing locations learned later. Further experiments demonstrated that reference frames established after learning served to reorganize an existing spatial memory. These results indicate that existing reference frames can structure the acquisition of new spatial memories and that new reference frames can reorganize existing spatial memories.


Perception | 2004

Judgments of exocentric direction in large-scale space

Jonathan W. Kelly; Jack M. Loomis; Andrew C. Beall

Judgments of exocentric direction are quite common, especially when judging where others are looking or pointing. To investigate these judgments in large-scale space, observers were shown two targets in a large open field and were asked to judge the exocentric direction specified by the targets. The targets ranged in egocentric distance from 5 to 20 m with target-to-target angular separations of 45°, 90°, and 135°. Observers judged exocentric direction using two methods: (i) by judging which point on a distant fence appeared collinear with the two targets, and (ii) by orienting their body in a direction parallel with the perceived line segment. In the collinearity task, observers had to imagine the line connecting the targets and then extrapolate this imagined line out to the fence. Observers indicated the perceived point of collinearity on a handheld 360° panoramic cylinder representing their vista. The two judgment methods gave similar results except for a constant bias associated with the body-pointing response. Aside from this bias, the results of these two methods agree with other existing research indicating an effect of relative egocentric distance to the targets on judgment error—line segments are perceived as being rotated in depth. Additionally, verbal estimates of egocentric and exocentric distance suggest that perceived distance is not the cause for the systematic errors in judging exocentric direction.


Perception | 2008

Psychophysics of perceiving eye-gaze and head direction with peripheral vision: Implications for the dynamics of eye-gaze behavior

Jack M. Loomis; Jonathan W. Kelly; Jeremy N. Bailenson; Andrew C. Beall

Two psychophysical experiments are reported, one dealing with the visual perception of the head orientation of another person (the ‘looker’) and the other dealing with the perception of the looker’s direction of eye gaze. The participant viewed the looker with different retinal eccentricities, ranging from foveal to far-peripheral viewing. On average, judgments of head orientation were reliable even out to the extremes of peripheral vision (90° eccentricity), with better performance at the extremes when the participant was able to view the looker changing head orientation from one trial to the next. In sharp contrast, judgments of eye-gaze direction were reliable only out to 4° eccentricity, signifying that the eye-gaze social signal is available to people only when they fixate near the looker’s eyes. While not unexpected, this vast difference in availability of information about head direction and eye direction, both of which can serve as indicators of the looker’s focus of attention, is important for understanding the dynamics of eye-gaze behavior.


Psychonomic Bulletin & Review | 2009

Individual differences in using geometric and featural cues to maintain spatial orientation: Cue quantity and cue ambiguity are more important than cue type

Jonathan W. Kelly; Timothy P. McNamara; Bobby Bodenheimer; Thomas H. Carr; John J. Rieser

Two experiments explored the role of environmental cues in maintaining spatial orientation (sense of self-location and direction) during locomotion. Of particular interest was the importance of geometric cues (provided by environmental surfaces) and featural cues (nongeometric properties provided by striped walls) in maintaining spatial orientation. Participants performed a spatial updating task within virtual environments containing geometric or featural cues that were ambiguous or unambiguous indicators of self-location and direction. Cue type (geometric or featural) did not affect performance, but the number and ambiguity of environmental cues did. Gender differences, interpreted as a proxy for individual differences in spatial ability and/or experience, highlight the interaction between cue quantity and ambiguity. When environmental cues were ambiguous, men stayed oriented with either one or two cues, whereas women stayed oriented only with two. When environmental cues were unambiguous, women stayed oriented with one cue.


Experimental Brain Research | 2007

Changing lanes: inertial cues and explicit path information facilitate steering performance when visual feedback is removed

Kristen L. Macuga; Andrew C. Beall; Jonathan W. Kelly; Roy Smith; Jack M. Loomis

Can driver steering behaviors, such as a lane change, be executed without visual feedback? In a recent study with a fixed-base driving simulator, drivers failed to execute the return phase of a lane change when steering without vision, resulting in systematic final heading errors biased in the direction of the lane change. Here we challenge the generality of that finding. Suppose that, when asked to perform a lane (position) change, drivers fail to recognize that a heading change is required to make a lateral position change. However, given an explicit path, the necessary heading changes become apparent. Here we demonstrate that when heading requirements are made explicit, drivers appropriately implement the return phase. More importantly, by using an electric vehicle outfitted with a portable virtual reality system, we also show that valid inertial information (i.e., vestibular and somatosensory cues) enables accurate steering behavior when vision is absent. Thus, the failure to properly execute a lane change in a driving simulator without a moving base does not present a fundamental problem for feed-forward driving behavior.

Collaboration


Dive into Jonathan W. Kelly's collaborations.

Top Co-Authors

Jack M. Loomis

University of California
