Publication


Featured research published by Philip Jaekl.


Virtual Reality | 2002

Simulating Self-Motion I: Cues for the Perception of Motion

Laurence R. Harris; Michael Jenkin; Daniel C. Zikovitz; Fara Redlick; Philip Jaekl; Urszula Jasiobedzka; Heather L. Jenkin; Robert S. Allison

When people move there are many visual and non-visual cues that can inform them about their movement. Simulating self-motion in a virtual reality environment thus needs to take these non-visual cues into account in addition to the normal high-quality visual display. Here we examine the contribution of visual and non-visual cues to our perception of self-motion. The perceived distance of self-motion can be estimated from the visual flow field, physical forces or the act of moving. On its own, passive visual motion is a very effective cue to self-motion, and evokes a perception of self-motion that is related to the actual motion in a way that varies with acceleration. Passive physical motion turns out to be a particularly potent self-motion cue: not only does it evoke an exaggerated sensation of motion, but it also tends to dominate other cues.


IEEE Virtual Reality Conference | 2002

Perceptual stability during head movement in virtual reality

Philip Jaekl; Robert S. Allison; Laurence R. Harris; Urszula Jasiobedzka; Heather L. Jenkin; Michael Jenkin; James E. Zacher; Daniel C. Zikovitz

Virtual reality displays introduce spatial distortions that are very hard to correct because of the difficulty of precisely rendering the display from the nodal point of each eye. How significant are these distortions for spatial perception in virtual reality? In this study, we used a helmet-mounted display and a mechanical head tracker to investigate the tolerance to errors between head motions and the resulting visual display. The relationship between the head movement and the associated updating of the visual display was adjusted by subjects until the image was judged as stable relative to the world. Both rotational and translational movements were tested, and the relationship between the movements and the direction of gravity was varied systematically. Typically, for the display to be judged as stable, subjects needed the visual world to be moved in the opposite direction to the head movement by an amount greater than the head movement itself, during both rotational and translational head movements, although a large range of movement was tolerated and judged as appearing stable. These results suggest that it is not necessary to model the visual geometry accurately, and they suggest circumstances in which tracker drift can be corrected by jumps in the display that will pass unnoticed by the user.
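
For readers unfamiliar with the gain manipulation described above, the sketch below (Python, illustrative only; the function and parameter names are hypothetical and this is not the authors' apparatus) shows how a single gain value can link a change in head yaw to a compensatory rotation of the rendered scene. A gain above 1.0 moves the world by more than the head movement itself, the condition subjects typically judged as stable.

```python
# Minimal sketch, assuming a simple gain between head rotation and scene rotation.
# gain = 1.0 reproduces a geometrically stable world; the study found that gains
# somewhat above 1.0 were typically judged as stable.

def scene_yaw_update(head_yaw_delta_deg: float, gain: float = 1.0) -> float:
    """Return how far the rendered world should rotate (degrees) to compensate
    for a change in head yaw. The world moves opposite to the head."""
    return -gain * head_yaw_delta_deg

# Example: the head turns 10 degrees to the right.
for gain in (0.8, 1.0, 1.2):
    print(f"gain {gain}: scene rotates {scene_yaw_update(10.0, gain):+.1f} deg")
# A gain of 1.2 moves the world 12 degrees against a 10-degree head turn,
# i.e. by more than the head movement itself.
```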


Brain Research | 2010

Audiovisual contrast enhancement is articulated primarily via the M-pathway

Philip Jaekl; Salvador Soto-Faraco

Although it has been previously reported that audiovisual integration can modulate performance on some visual tasks, multisensory interactions have not been explicitly assessed in the context of different visual processing pathways. In the present study, we test auditory influences on visual processing employing a psychophysical paradigm that reveals distinct spatial contrast signatures of magnocellular and parvocellular visual pathways. We found that contrast thresholds are reduced when noninformative sounds are presented with transient, low-frequency Gabor patch stimuli and thus favor the M-system. In contrast, visual thresholds are unaffected by concurrent sounds when detection is primarily attributed to P-pathway processing. These results demonstrate that the visual detection enhancement resulting from multisensory integration is mainly articulated by the magnocellular system, which is most sensitive at low spatial frequencies. Such enhancement may subserve stimulus-driven processes including the orientation of spatial attention and fast, automatic ocular and motor responses. This dissociation helps explain discrepancies between the results of previous studies investigating visual enhancement by sounds.
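
The Gabor patch stimuli mentioned above are Gaussian-windowed sinusoidal gratings. As a rough illustration (not the stimulus code from the study; all parameter values here are assumptions), a patch at a chosen spatial frequency can be generated as follows:

```python
import numpy as np

def gabor_patch(size_px=256, spatial_freq_cpd=0.5, deg_per_image=8.0,
                sigma_deg=1.5, phase=0.0, orientation_rad=0.0):
    """Gaussian-windowed sinusoidal grating (a Gabor patch).
    spatial_freq_cpd is in cycles per degree: low values favour magnocellular
    processing, high values favour parvocellular processing.
    All parameter values are illustrative only."""
    half = deg_per_image / 2.0
    xs = np.linspace(-half, half, size_px)
    x, y = np.meshgrid(xs, xs)
    # Rotate coordinates so the grating can take any orientation.
    xr = x * np.cos(orientation_rad) + y * np.sin(orientation_rad)
    grating = np.cos(2 * np.pi * spatial_freq_cpd * xr + phase)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_deg**2))
    return grating * envelope  # values roughly in [-1, 1]; mid-grey = 0

low_sf = gabor_patch(spatial_freq_cpd=0.5)   # low-frequency, M-biased stimulus
high_sf = gabor_patch(spatial_freq_cpd=6.0)  # high-frequency, P-biased stimulus
```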


Archive | 2010

Space and Time in Perception and Action: Mechanisms of simultaneity constancy

Laurence R. Harris; Vanessa Harrar; Philip Jaekl; Agnieszka Kopinska

There is a delay before sensory information arising from a given event reaches the central nervous system. This delay may be different for information carried by different senses. It will also vary depending on how far the event is from the observer and on stimulus properties such as intensity. However, it seems that at least some of these processing time differences can be compensated for by a mechanism that resynchronizes asynchronous signals and enables us to perceive simultaneity correctly. This chapter explores how effectively simultaneity constancy can be achieved, both intramodally within the visual and tactile systems and cross-modally between combinations of auditory, visual, and tactile stimuli. We propose and provide support for a three-stage model of simultaneity constancy in which (1) signals within temporal and spatial windows are identified as corresponding to a single event, (2) a crude resynchronization is applied based on simple rules corresponding to the average processing speed differences between the individual sensory systems, and (3) fine-tuning adjustments are applied based on previous experience with particular combinations of stimuli.


Neuroscience Letters | 2007

Auditory–visual temporal integration measured by shifts in perceived temporal location

Philip Jaekl; Laurence R. Harris

The perceived time of occurrence of a visual stimulus may be shifted towards the onset of an auditory stimulus occurring a short time later. The effect has been attributed to auditory-visual temporal integration, although an unknown portion of the shift may be explained by the different processing times of visual and auditory stimuli. Here, perceived onset time is measured in a novel way that separates and compares the magnitude of these effects. Participants observed either a sequence consisting of a visual stimulus followed by an auditory stimulus and then another visual stimulus, or the reverse. The temporal location of the second stimulus was varied systematically between the onsets of the first and third stimuli, which were separated by a fixed duration. Two timescales were used: a short timescale that allowed auditory-visual temporal integration to occur, and a long timescale that did not. For each timescale, a psychometric curve was fitted to the percentage of trials on which the first interval was perceived as shorter, as a function of first-interval duration. For the long timescale condition the point of subjective equality (PSE) of the two interval lengths was consistent with the different processing latencies. When visual and auditory stimuli occurred within 125 ms, significant additional shifting of the PSE occurred. These results indicate that temporal integration shifts the perceived timing of a visual stimulus by an amount much larger than can be explained by differential processing latencies.
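
As a rough sketch of the analysis described above (the logistic form, data values, and parameter names are illustrative assumptions, not taken from the paper), a psychometric curve can be fitted to the proportion of "first interval shorter" responses and the PSE read off as the 50% point:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Proportion of 'first interval shorter' responses as a function of
    first-interval duration. The PSE is the duration at which this is 0.5."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Illustrative data only: first-interval durations (ms) and the proportion of
# trials on which the first interval was judged shorter.
durations = np.array([100, 150, 200, 250, 300, 350, 400], dtype=float)
p_first_shorter = np.array([0.95, 0.90, 0.70, 0.45, 0.25, 0.10, 0.05])

# The proportion falls as the first interval lengthens, so the fitted slope is negative.
(pse, slope), _ = curve_fit(logistic, durations, p_first_shorter, p0=(250.0, -50.0))
print(f"PSE ≈ {pse:.0f} ms")
```

Comparing the PSE across the short and long timescale conditions then isolates the shift attributable to temporal integration from the shift attributable to processing-latency differences.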


Visual Neuroscience | 2009

Sounds can affect visual perception mediated primarily by the parvocellular pathway.

Philip Jaekl; Laurence R. Harris

We investigated the effect of auditory-visual sensory integration on visual tasks that were predominantly dependent on parvocellular processing. These tasks were (i) detecting metacontrast-masked targets and (ii) discriminating orientation differences between high spatial frequency Gabor patch stimuli. Sounds that contained no information relevant to either task were presented before, synchronized with, or after the visual targets, and the results were compared to conditions with no sound. Both tasks used a two-alternative forced choice technique. For detecting metacontrast-masked targets, one interval contained the visual target and both (or neither) intervals contained a sound. Sound-target synchrony within 50 ms lowered luminance thresholds for detecting the presence of a target compared to when no sound occurred or when sound onset preceded target onset. Threshold angles for discriminating the orientation of a Gabor patch consistently increased in the presence of a sound. These results are compatible with sound-induced activity in the parvocellular visual pathway increasing the visibility of flashed targets and hindering orientation discrimination.


Experimental Brain Research | 2014

On the ‘visual’ in ‘audio-visual integration’: a hypothesis concerning visual pathways

Philip Jaekl; Alexis Pérez-Bellido; Salvador Soto-Faraco

Crossmodal interaction conferring enhancement in sensory processing is nowadays widely accepted. Such benefit is often exemplified by neural response amplification reported in physiological studies conducted with animals, which parallel behavioural demonstrations of sound-driven improvement in visual tasks in humans. Yet, a good deal of controversy still surrounds the nature and interpretation of these human psychophysical studies. Here, we consider the interpretation of crossmodal enhancement findings in light of the functional as well as anatomical specialization of the magno- and parvocellular visual pathways, whose paramount relevance has been well established in visual research but often overlooked in crossmodal research. We contend that a more explicit consideration of this important visual division may resolve some current controversies and help optimize the design of future crossmodal research.


Neuropsychologia | 2015

The contribution of dynamic visual cues to audiovisual speech perception

Philip Jaekl; Ana Pesquita; Agnès Alsius; Kevin G. Munhall; Salvador Soto-Faraco

Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or could have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.


Seeing and Perceiving | 2010

Space constancy vs. shape constancy.

Philip Jaekl; Laurence R. Harris

The perceived distance between objects has been found to decrease over time in memory, demonstrating a partial failure of space constancy. Such mislocalization has been attributed to a generalized compression effect in memory. We confirmed this drift with a pair of remembered dot positions but did not find a compression of perceived distance when the space between the dots was filled with a connecting line. When the dot pairs were viewed eccentrically the compression in memory was substantially less. These results are in line with a combination of factors previously demonstrated to cause distortion in spatial memory, namely foveal bias and memory averaging, rather than a general compression of remembered visual space. Our findings indicate that object shape is not vulnerable to the failures of space constancy observed with remembered positions.


PLOS ONE | 2015

Audiovisual Delay as a Novel Cue to Visual Distance.

Philip Jaekl; Jakob Seidlitz; Laurence R. Harris; Duje Tadin

For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.
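
The physical relationship underlying this cue is simple arithmetic: sound travels at roughly 343 m/s in air, so its lag behind the light from the same event grows by about 3 ms per metre of distance. A minimal worked example follows (illustrative only; these are not stimulus values from the study):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate, in air at room temperature
# Light's travel time over everyday distances is negligible (well under a
# microsecond), so the audiovisual delay is effectively the sound's travel time.

def audiovisual_delay_ms(distance_m: float) -> float:
    """Approximate lag of a sound behind the light from the same event."""
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

for d in (1.0, 10.0, 34.3, 100.0):
    print(f"{d:6.1f} m -> {audiovisual_delay_ms(d):6.1f} ms")
# e.g. an event about 34 m away produces a sound delay of roughly 100 ms.
```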

Collaboration


Dive into Philip Jaekl's collaborations.

Top Co-Authors

Duje Tadin (University of Rochester)