Publication


Featured research published by Robert S. Allison.


IEEE Virtual Reality Conference | 2001

Tolerance of temporal delay in virtual environments

Robert S. Allison; Laurence R. Harris; Michael Jenkin; Urszula Jasiobedzka; James E. Zacher

To enhance presence, facilitate sensory motor performance, and avoid disorientation or nausea, virtual-reality applications require the perception of a stable environment. End-to-end tracking latency (display lag) degrades this illusion of stability and has been identified as a major fault of existing virtual-environment systems. Oscillopsia refers to the perception that the visual world appears to swim about or oscillate in space and is a manifestation of this loss of perceptual stability of the environment. The effects of end-to-end latency and head velocity on perceptual stability in a virtual environment were investigated psychophysically. Subjects became significantly more likely to report oscillopsia during head movements when end-to-end latency or head velocity was increased. It is concluded that perceptual instability of the world arises with increased head motion and increased display lag. Oscillopsia is expected to be more apparent in tasks requiring real locomotion or rapid head movement.


IEEE Transactions on Biomedical Engineering | 1996

Combined head and eye tracking system for dynamic testing of the vestibular system

Robert S. Allison; Moshe Eizenman; Bob S. K. Cheung

The authors present a combined head-eye tracking system suitable for use with free head movement during natural activities. This system provides an integrated head and eye position measurement while allowing for a large range of head movement (approx 1.8 m of head translation is tolerated). Six degrees of freedom of head motion and two degrees of freedom of eye motion are measured by the system. The system was designed to be useful for the evaluation of the vestibulo-ocular reflex (VOR). The VOR generates compensatory eye movements in order to stabilize gaze during linear or rotational motion of the head. Current clinical and basic research evaluation of the VOR has used a restricted range of head motion, mainly low-frequency, yaw rotation. An integrated eye-head tracking system such as the one presented here allows the VOR response to linear and angular head motion to be studied in a more physiologically relevant manner. Two examples of the utility of the integrated head and eye tracking system in evaluating the vestibular response to linear and angular motion are presented.


Perception | 1999

Effect of field size, head motion, and rotational velocity on roll vection and illusory self-tilt in a tumbling room

Robert S. Allison; Ian P. Howard; James E. Zacher

The effect of field size, velocity, and visual fixation upon the perception of self-body rotation and tilt was examined in a rotating furnished room. Subjects sat in a stationary chair in the furnished room, which could be rotated about the body roll axis. For full-field conditions, complete 360° body rotation (tumbling) was the most common sensation (felt by 80% of subjects). Constant tilt or partial tumbling (less than 360° rotation) occurred more frequently with a small field of view (20°). The number of subjects who experienced complete tumbling increased with increases in field of view and room velocity (for velocities between 15 and 30° s⁻¹). The speed of perceived self-rotation relative to room rotation also increased with increasing field of view.


Journal of Experimental Psychology: Human Perception and Performance | 2008

Active gaze, visual look-ahead, and locomotor control

Richard M. Wilkie; John P. Wann; Robert S. Allison

The authors examined observers steering through a series of obstacles to determine the role of active gaze in shaping locomotor trajectories. Participants sat on a bicycle trainer integrated with a large field-of-view simulator and steered through a series of slalom gates. Steering behavior was determined by examining the passing distance through gates and the smoothness of trajectory. Gaze monitoring revealed which slalom targets were fixated and for how long. Participants tended to track the most immediate gate until it was about 1.5 s away, at which point gaze switched to the next slalom gate. To probe this gaze pattern, the authors then introduced a number of experimental conditions that placed spatial or temporal constraints on where participants could look and when. These manipulations resulted in systematic steering errors when observers were forced to use unnatural looking patterns, but errors were reduced when peripheral monitoring of obstacles was allowed. A steering model based on active gaze sampling is proposed, informed by the experimental conditions and consistent with observations in free-gaze experiments and with recommendations from real-world high-speed steering.


Journal of Vision | 2009

A reevaluation of the tolerance to vertical misalignment in stereopsis

Kazuho Fukuda; Laurie M. Wilcox; Robert S. Allison; Ian P. Howard

The stereoscopic system tolerates some vertical misalignment of the images in the eyes. However, the reported tolerance for an isolated line stimulus (approximately 4 degrees) is greater than for a random-dot stereogram (RDS, approximately 45 arcmin). We hypothesized that the greater tolerance can be attributed to monoptic depth signals (E. Hering, 1861; M. Kaye, 1978; L. M. Wilcox, J. M. Harris, & S. P. McKee, 2007). We manipulated the vertical misalignment of a pair of isolated stereoscopic dots to assess the contribution of each depth signal separately. For the monoptic stimuli, where only one half-image was present, equivalent horizontal and vertical offsets were imposed instead of disparity. Judgments of apparent depth were well above chance, though there was no conventional disparity signal. For the stereoscopic stimuli, one element was positioned at the midline where monoptic depth perception falls to chance but conventional disparity remains. Subjects lost the depth percept at a vertical misalignment of between 44 and 88 arcmin, which is much smaller than the limit found when both signals were provided. This tolerance for isolated stimuli is comparable to the reported tolerance for RDS. We conclude that previous reports of the greater tolerance to vertical misalignment for isolated stimuli arose from the use of monoptic depth signals.


IEEE Transactions on Broadcasting | 2011

The Effect of Crosstalk on the Perceived Depth From Disparity and Monocular Occlusions

Inna Tsirlin; Laurie M. Wilcox; Robert S. Allison

Crosstalk in stereoscopic displays is defined as the leakage of one eye's image into the image of the other eye. All popular commercial stereoscopic systems suffer from crosstalk to some extent. Studies show that crosstalk causes distortions, reduces image quality and visual comfort, and increases perceived workload. Moreover, there is evidence that crosstalk affects depth perception from disparity. In the present paper we present two experiments. The first addresses the effect of crosstalk on the perceived magnitude of depth from disparity. The second examines the effect of crosstalk on the magnitude of depth perceived from monocular occlusions. Our data show that crosstalk has a detrimental effect on depth perceived from both cues, but it has a stronger effect on depth from monocular occlusions. Taken together with previous results, our findings suggest that crosstalk, even in modest amounts, noticeably degrades the quality of stereoscopic images.
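The leakage described above is often modeled as a simple linear mixture of the two half-images. A minimal sketch of that idea, assuming an additive model with a single leakage fraction `c` (the function name and linear form are illustrative assumptions, not the model used in the paper):

```python
import numpy as np

def apply_crosstalk(left, right, c):
    """Blend stereo half-images: a fraction c of the unintended
    eye's image leaks into each eye's view (c = 0 means no crosstalk)."""
    left_seen = (1 - c) * left + c * right
    right_seen = (1 - c) * right + c * left
    return left_seen, right_seen

# Example: two-pixel images with 10% leakage.
left = np.array([1.0, 0.0])
right = np.array([0.0, 1.0])
l_seen, r_seen = apply_crosstalk(left, right, 0.1)
```

In this sketch each eye still receives mostly its intended image, but a faint "ghost" of the other eye's image is superimposed, which is the percept that degrades disparity and occlusion cues.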


Journal of Vision | 2009

Binocular depth discrimination and estimation beyond interaction space

Robert S. Allison; Barbara Gillam; Elia Vecellio

The benefits of binocular vision have been debated throughout the history of vision science, yet few studies have considered its contribution beyond a viewing distance of a few meters. In the first set of experiments, we compared monocular and binocular performance on depth interval estimation and discrimination tasks at 4.5, 9.0, or 18.0 m. Under monocular conditions, perceived depth was significantly compressed. Binocular depth estimates were much nearer to veridical, although also compressed. Regression-based measures showed binocular estimates to be much more precise than monocular ones (ratios between 2.1 and 48). We confirm that stereopsis supports reliable depth discriminations beyond typical laboratory distances. Furthermore, binocular vision can significantly improve both the accuracy and precision of depth estimation to at least 18 m. In another experiment, we used a novel paradigm that allowed the presentation of real binocular disparity stimuli in the presence of rich environmental cues to distance but not interstimulus depth. We found that the presence of environmental cues to distance greatly enhanced stereoscopic depth constancy at distances of 4.5 and 9.0 m. We conclude that stereopsis is an effective cue for depth discrimination and estimation for distances beyond those traditionally assumed. In normal environments, distance information from other sources such as perspective can be effective in scaling depth from disparity.
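The "scaling depth from disparity" mentioned above follows standard small-angle stereo geometry, not a formula specific to this paper: a depth interval d at viewing distance D produces a relative disparity of roughly I·d/D², where I is the interocular separation. A minimal sketch of the inverse relation (the function name and the 0.065 m default interocular distance are illustrative assumptions):

```python
def depth_from_disparity(delta_rad, distance_m, interocular_m=0.065):
    """Depth interval (m) predicted from a relative disparity (radians)
    at a given viewing distance, via the small-angle approximation
    delta ~= I * d / D**2, inverted to d = delta * D**2 / I."""
    return delta_rad * distance_m ** 2 / interocular_m

# The D**2 term is why constancy requires distance information:
# the same retinal disparity corresponds to a 4x larger depth at 9 m
# than at 4.5 m.
d_far = depth_from_disparity(1e-4, 9.0)
d_near = depth_from_disparity(1e-4, 4.5)
```

This quadratic dependence on viewing distance is what makes the distance cues in the experiment matter: without a good distance estimate, the visual system cannot convert a fixed disparity into the correct depth interval.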


Vision Research | 2009

Coarse-fine dichotomies in human stereopsis

Laurie M. Wilcox; Robert S. Allison

There is a long history of research into depth percepts from very large disparities, beyond the fusion limit. Such diplopic stimuli have repeatedly been shown to provide reliable depth percepts. A number of researchers have pointed to differences between the processing of small and large disparities, arguing that they are subserved by distinct neural mechanisms. Other studies have pointed to a dichotomy between the processing of 1st- and 2nd-order stimuli. Here we review literature on the full range of disparity processing to determine how well different proposed dichotomies map onto one another, and to identify unresolved issues.


Virtual Reality | 2002

Simulating Self-Motion I: Cues for the Perception of Motion

Laurence R. Harris; Michael Jenkin; Daniel C. Zikovitz; Fara Redlick; Philip Jaekl; Urszula Jasiobedzka; Heather L. Jenkin; Robert S. Allison

When people move, there are many visual and non-visual cues that can inform them about their movement. Simulating self-motion in a virtual reality environment thus needs to take these non-visual cues into account in addition to the normal high-quality visual display. Here we examine the contribution of visual and non-visual cues to our perception of self-motion. The perceived distance of self-motion can be estimated from the visual flow field, physical forces, or the act of moving. On its own, passive visual motion is a very effective cue to self-motion, and evokes a perception of self-motion that is related to the actual motion in a way that varies with acceleration. Passive physical motion turns out to be a particularly potent self-motion cue: not only does it evoke an exaggerated sensation of motion, but it also tends to dominate other cues.


Seeing and Perceiving | 2011

Simulated viewpoint jitter shakes sensory conflict accounts of vection

Stephen Palmisano; Robert S. Allison; Juno Kim; Frederick Bonato

Sensory conflict has been used to explain the way we perceive and control our self-motion, as well as the aetiology of motion sickness. However, recent research on simulated viewpoint jitter provides a strong challenge to one core prediction of these theories: that increasing sensory conflict should always impair visually induced illusions of self-motion (known as vection). These studies show that jittering self-motion displays (thought to generate significant and sustained visual-vestibular conflict) actually induce superior vection to comparable non-jittering displays (thought to generate only minimal or transient sensory conflict). Here we review viewpoint jitter effects on vection, postural sway, eye movements, and motion sickness, and relate them to recent behavioural and neurophysiological findings. It is shown that jitter research provides important insights into the role that sensory interaction plays in self-motion perception.

Collaboration

Top Co-Authors

Sion Jennings
National Research Council

Todd Macuda
National Research Council