Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jennifer L. Campos is active.

Publication


Featured research published by Jennifer L. Campos.


Journal of Vision | 2010

Bayesian integration of visual and vestibular signals for heading

John S. Butler; Stuart T. Smith; Jennifer L. Campos; Heinrich H. Bülthoff

Self-motion through an environment involves a composite of signals such as visual and vestibular cues. Building upon previous results showing that visual and vestibular signals combine in a statistically optimal fashion, we investigated the relative weights of visual and vestibular cues during self-motion. The experiment comprised three conditions: vestibular alone, visual alone (with four different standard heading values), and visual-vestibular combined. In the combined cue condition, inter-sensory conflicts were introduced (Δ = ±6° or ±10°). Participants performed a two-interval forced-choice task in all conditions and were asked to judge in which of the two intervals they moved more to the right. The cue-conflict condition revealed the relative weights associated with each modality. We found that even when there was a relatively large conflict between the visual and vestibular cues, participants exhibited a statistically optimal reduction in variance. On the other hand, the pattern of results in the unimodal conditions did not predict the weights in the combined cue condition. Specifically, visual-vestibular cue combination was not predicted solely by the reliability of each cue; rather, more weight was given to the vestibular cue.
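For context, the "statistically optimal" combination referred to in this and several of the abstracts below is the standard maximum-likelihood (inverse-variance) weighting scheme; the notation here is generic and is not reproduced from the paper:

    \hat{S}_{comb} = w_{vis}\,\hat{S}_{vis} + w_{vest}\,\hat{S}_{vest}, \qquad w_{vis} = \frac{1/\sigma_{vis}^{2}}{1/\sigma_{vis}^{2} + 1/\sigma_{vest}^{2}}, \quad w_{vest} = 1 - w_{vis}

    \sigma_{comb}^{2} = \frac{\sigma_{vis}^{2}\,\sigma_{vest}^{2}}{\sigma_{vis}^{2} + \sigma_{vest}^{2}} \le \min\!\left(\sigma_{vis}^{2},\, \sigma_{vest}^{2}\right)

Under this scheme, the finding above means the combined-cue variance showed the predicted reduction, while the empirical vestibular weight exceeded the value predicted from the unimodal reliabilities alone.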


Experimental Brain Research | 2004

Multisensory integration in the estimation of relative path length

Hong-Jin Sun; Jennifer L. Campos; George S. W. Chan

One of the fundamental requirements for successful navigation through an environment is the continuous monitoring of distance travelled. To do so, humans normally use one or a combination of visual, proprioceptive/efferent, vestibular, and temporal cues. In the real world, information from one sensory modality is normally congruent with information from other modalities; hence, studying the nature of sensory interactions is often difficult. In order to decouple the natural covariation between different sensory cues, we used virtual reality technology to vary the relation between the information generated from visual sources and the information generated from proprioceptive/efferent sources. When we manipulated the stimuli such that the visual information was coupled in various ways to the proprioceptive/efferent information, human subjects predominantly used visual information to estimate the ratio of two traversed path lengths. Although proprioceptive/efferent information was not used directly, the mere availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues, even though the proprioceptive/efferent information was inconsistent with the visual information. These results convincingly demonstrated that active movement (locomotion) facilitates visual perception of path length travelled.


Perception | 2004

The contributions of static visual cues, nonvisual cues, and optic flow in distance estimation.

Hong-Jin Sun; Jennifer L. Campos; Meredith Young; George S. W. Chan; Colin G. Ellard

By systematically varying cue availability in the stimulus and response phases of a series of same-modality and cross-modality distance-matching tasks, we examined the contributions of static visual information, idiothetic information, and optic flow information. The experiment was conducted in a large-scale, open, outdoor environment. Subjects were presented with information about a distance and were then required to turn 180° before producing a distance estimate. Distance encoding and responding occurred via (i) visually perceived target distance or (ii) distance traversed during either blindfolded or sighted locomotion. The results demonstrated that subjects performed with similar accuracy across all conditions. In conditions in which the stimulus and the response were delivered in the same mode, constant error was minimal when visual information was absent, whereas overestimation was observed when visual information was present. In conditions in which the stimulus and response modes differed, a consistent error pattern was observed. By systematically comparing complementary conditions, we found that the availability of visual information during locomotion (particularly optic flow) led to an ‘under-perception’ of movement relative to conditions in which visual information was absent during locomotion.


Archive | 2013

Human Walking in Virtual Environments: Perception, Technology, and Applications

Frank Steinicke; Yon Visell; Jennifer L. Campos; Anatole Lécuyer

This book presents a survey of past and recent developments on human walking in virtual environments, with an emphasis on human self-motion perception, the multisensory nature of experiences of walking, conceptual design approaches, current technologies, and applications. The use of Virtual Reality and movement simulation systems is becoming increasingly popular and more accessible to a wide variety of research fields and applications. While simulation technologies have historically focused on developing realistic, interactive visual environments, it is becoming increasingly obvious that our everyday interactions are highly multisensory. Therefore, investigators are beginning to understand the critical importance of developing and validating locomotor interfaces that allow for realistic, natural behaviours. The book aims to present an overview of what is currently understood about human perception and performance when moving in virtual environments, and to situate it relative to the broader scientific and engineering literature on human locomotion and locomotion interfaces. The contents include scientific background and recent empirical findings related to biomechanics, self-motion perception, and physical interactions. The book also discusses conceptual approaches to multimodal sensing, display systems, and interaction for walking in real and virtual environments. Finally, it presents current and emerging applications in areas such as gait and posture rehabilitation, gaming, sports, and architectural design.


Frontiers in Psychology | 2015

Vection and visually induced motion sickness: how are they related?

Behrang Keshavarz; Bernhard E. Riecke; Lawrence J. Hettinger; Jennifer L. Campos

The occurrence of visually induced motion sickness has frequently been linked to the sensation of illusory self-motion (vection); however, the precise nature of this relationship is still not fully understood. It remains a matter of debate whether vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be VIMS without any sensation of self-motion? In this paper, we describe the possible nature of this relationship, review the literature that addresses it (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in the future.


Experimental Brain Research | 2012

Multisensory integration in the estimation of walked distances.

Jennifer L. Campos; John S. Butler; Heinrich H. Bülthoff

When walking through space, both dynamic visual information (optic flow) and body-based information (proprioceptive and vestibular) jointly specify the magnitude of distance travelled. While recent evidence has demonstrated the extent to which each of these cues can be used independently, less is known about how they are integrated when simultaneously present. Many studies have shown that sensory information is integrated using a weighted linear sum, yet little is known about whether this holds true for the integration of visual and body-based cues for travelled distance perception. In this study, using Virtual Reality technologies, participants first travelled a predefined distance and subsequently matched this distance by adjusting an egocentric, in-depth target. The visual stimulus consisted of a long hallway and was presented in stereo via a head-mounted display. Body-based cues were provided either by walking in a fully tracked free-walking space (Exp. 1) or by being passively moved in a wheelchair (Exp. 2). Travelled distances were provided either through optic flow alone, body-based cues alone, or through both cues combined. In the combined condition, visually specified distances were either congruent (1.0×) or incongruent (0.7× or 1.4×) with distances specified by body-based cues. Responses reflected a consistent combined effect of both visual and body-based information, with an overall higher influence of body-based cues when walking and a higher influence of visual cues during passive movement. Comparing the results of Experiments 1 and 2 makes clear that both proprioceptive and vestibular cues contribute to travelled distance estimates during walking. These results were effectively described using a basic linear weighting model.
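The "basic linear weighting model" mentioned above amounts, in generic form, to a weighted sum of the two distance estimates (the symbols are illustrative, not taken from the paper):

    \hat{d} = w_{vis}\,\hat{d}_{vis} + w_{body}\,\hat{d}_{body}, \qquad w_{vis} + w_{body} = 1

The reported pattern corresponds to w_{body} > w_{vis} when participants walked (Exp. 1) and w_{vis} > w_{body} when they were moved passively (Exp. 2).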


Experimental Brain Research | 2011

Integration of vestibular and proprioceptive signals for spatial updating

Ilja Frissen; Jennifer L. Campos; Jan L. Souman; Marc O. Ernst

Spatial updating during self-motion typically involves the appropriate integration of both visual and non-visual cues, including vestibular and proprioceptive information. Here, we investigated how human observers combine these two non-visual cues during full-stride curvilinear walking. To obtain a continuous, real-time estimate of perceived position, observers were asked to continuously point toward a previously viewed target in the absence of vision. They did so while moving on a large circular treadmill under various movement conditions. Two conditions were designed to evaluate spatial updating when information was largely limited to either proprioceptive information (walking in place) or vestibular information (passive movement). A third condition evaluated updating when both sources of information were available (walking through space) and were either congruent or in conflict. During both the passive movement condition and while walking through space, the pattern of pointing behavior demonstrated evidence of accurate egocentric updating. In contrast, when walking in place, perceived self-motion was underestimated and participants always adjusted the pointer at a constant rate, irrespective of changes in the rate at which the participant moved relative to the target. The results are discussed in relation to the maximum likelihood estimation model of sensory integration. They show that when the two cues were congruent, estimates were combined, such that the variance of the adjustments was generally reduced. Results also suggest that when conflicts were introduced between the vestibular and proprioceptive cues, spatial updating was based on a weighted average of the two inputs.


Memory & Cognition | 2004

Active navigation and orientation-free spatial representations.

Hong-Jin Sun; George S. W. Chan; Jennifer L. Campos

In this study, we examined the orientation dependency of spatial representations following various learning conditions. We assessed the spatial representations of human participants after they had learned a complex spatial layout via map learning, via navigating within a real environment, or via navigating through a virtual simulation of that environment. Performances were compared between conditions involving (1) multiple- versus single-body orientation, (2) active versus passive learning, and (3) high versus low levels of proprioceptive information. Following learning, the participants were required to produce directional judgments to target landmarks. Results showed that the participants developed orientation-specific spatial representations following map learning and passive learning, as indicated by better performance when tested from the initial learning orientation. These results suggest that neither the number of vantage points nor the level of proprioceptive information experienced is the determining factor; rather, it is the active aspect of direct navigation that leads to the development of orientation-free representations.


Seeing and Perceiving | 2011

The Role of Stereo Vision in Visual-Vestibular Integration

John S. Butler; Jennifer L. Campos; Heinrich H. Bülthoff; Stuart T. Smith

Self-motion through an environment stimulates several sensory systems, including the visual system and the vestibular system. Recent work in heading estimation has demonstrated that visual and vestibular cues are typically integrated in a statistically optimal manner, consistent with Maximum Likelihood Estimation predictions. However, there has been some indication that cue integration may be affected by characteristics of the visual stimulus. Therefore, the current experiment evaluated whether presenting optic flow stimuli stereoscopically, or presenting both eyes with the same image (binocularly) affects combined visual-vestibular heading estimates. Participants performed a two-interval forced-choice task in which they were asked which of two presented movements was more rightward. They were presented with either visual cues alone, vestibular cues alone or both cues combined. Measures of reliability were obtained for both binocular and stereoscopic conditions. Group level analyses demonstrated that when stereoscopic information was available there was clear evidence of optimal integration, yet when only binocular information was available weaker evidence of cue integration was observed. Exploratory individual analyses demonstrated that for the stereoscopic condition 90% of participants exhibited optimal integration, whereas for the binocular condition only 60% of participants exhibited results consistent with optimal integration. Overall, these findings suggest that stereo vision may be important for self-motion perception, particularly under combined visual-vestibular conditions.
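As a point of reference, "optimal integration" in this two-interval paradigm is typically assessed by comparing the empirically measured combined-cue threshold (JND) against the maximum-likelihood prediction derived from the single-cue thresholds; in generic notation (assumed here, not quoted from the paper):

    JND_{pred} = \sqrt{\frac{JND_{vis}^{2}\,JND_{vest}^{2}}{JND_{vis}^{2} + JND_{vest}^{2}}} \le \min\!\left(JND_{vis},\, JND_{vest}\right)

A participant is consistent with optimal integration when the measured combined-cue JND matches this prediction; a measured JND closer to the better single cue indicates weaker integration, as observed for some participants in the binocular condition above.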


Neuroscience Letters | 2009

The N2pc component in ERP and the lateralization effect of language on color perception.

Qiang Liu; Hong Li; Jennifer L. Campos; Qi Wang; Ye Zhang; Jiang Qiu; Qinglin Zhang; Hong-Jin Sun

This study examined the electrophysiological bases of the effect of language on color perception. In a visual search task, a target was presented to the left or right visual field. The target color was either from the same category as a set of distractors (within-category condition) or from a different category (between-category condition). For both category conditions, the targets elicited a clear N2pc (N2-posterior-contralateral) component in the event-related potential (ERP) in the contralateral hemisphere. In the left hemisphere only, the N2pc amplitude for the between-category condition was larger than that for the within-category condition. These results indicate that the N2pc could be used as an index to describe the lateralization effect of language on color perception.

Collaboration


Dive into Jennifer L. Campos's collaborations.

Top Co-Authors

Behrang Keshavarz, Toronto Rehabilitation Institute
Jack M. Loomis, University of California
Joshua H. Siegle, Massachusetts Institute of Technology