
Publication


Featured research published by Jane E. Aspell.


Frontiers in Psychology | 2013

Visual capture and the experience of having two bodies – Evidence from two different virtual reality techniques

Lukas Heydrich; Trevor J. Dodds; Jane E. Aspell; Bruno Herbelin; Heinrich H. Bülthoff; Betty J. Mohler; Olaf Blanke

In neurology and psychiatry, the detailed study of illusory own-body perceptions has suggested close links between bodily processing and self-consciousness. One such illusory own-body perception is heautoscopy, in which patients have the sensation of being reduplicated and of existing at two or more locations. In previous experiments using a video head-mounted display, self-location and self-identification were manipulated by applying conflicting visuo-tactile information. Yet the experienced singularity of the self was not affected, i.e., participants did not experience having multiple bodies or selves. In the two experiments presented in this paper, we investigated self-location and self-identification while participants saw two virtual bodies (video-generated in study 1 and 3D computer-generated in study 2) that were stroked either synchronously or asynchronously with their own body. In both experiments, self-identification with the two virtual bodies was stronger during synchronous stroking. Furthermore, in the video-generated setup with synchronous stroking, participants reported a greater feeling of having multiple bodies than in the control conditions. In study 1, but not in study 2, self-location (measured by anterior-posterior drift) was significantly shifted towards the two bodies in the synchronous condition only. Self-identification with two bodies, the sensation of having multiple bodies, and the changes in self-location show that the experienced singularity of the self can be studied experimentally. We discuss our data with respect to ownership of supernumerary hands and heautoscopy. Finally, we compare the effects of the video and 3D computer-generated head-mounted display techniques and discuss the possible benefits of each for inducing changes in illusory self-identification with a virtual body.


PLOS ONE | 2009

Keeping in Touch with One's Self: Multisensory Mechanisms of Self-Consciousness

Jane E. Aspell; Bigna Lenggenhager; Olaf Blanke

Background: The spatial unity between self and body can be disrupted by employing conflicting visual-somatosensory bodily input, thereby bringing neurological observations on bodily self-consciousness under scientific scrutiny. Here we designed a novel paradigm linking the study of bodily self-consciousness to the spatial representation of visuo-tactile stimuli by measuring crossmodal congruency effects (CCEs) for the full body.

Methodology/Principal Findings: We measured full-body CCEs by attaching four vibrator-light pairs to the trunks (backs) of subjects who viewed their bodies from behind via a camera and a head-mounted display (HMD). Subjects made speeded elevation (up/down) judgments of the tactile stimuli while ignoring the light stimuli. To modulate self-identification with the seen body, subjects were stroked on their backs with a stick, and the felt stroking was either synchronous or asynchronous with the stroking seen via the HMD. We found that (1) tactile stimuli were mislocalized towards the seen body; (2) CCEs were modulated systematically during visual-somatosensory conflict when subjects viewed their body but not when they viewed a body-sized object, i.e. CCEs were larger during synchronous than during asynchronous stroking of the body; and (3) these changes in the mapping of tactile stimuli were induced in the same experimental condition in which predictable changes in bodily self-consciousness occurred.

Conclusions/Significance: These data reveal that systematic alterations in the mapping of tactile stimuli occur in a full-body illusion and thus establish CCE magnitude as an online performance proxy for subjective changes in global bodily self-consciousness.
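To make the CCE measure concrete, here is a minimal Python sketch of how such an effect can be computed; the trial data and layout are hypothetical illustrations, not the authors' analysis code. The CCE is the mean reaction-time cost of incongruent visual distractors relative to congruent ones on a speeded tactile elevation judgment:

    # Minimal sketch of a crossmodal congruency effect (CCE) computation.
    # Hypothetical trial data: each trial pairs a tactile target with a
    # visual distractor that is congruent or incongruent in elevation, and
    # records the reaction time (ms) of a correct response.
    from statistics import mean

    trials = [
        ("congruent", 540.0), ("congruent", 555.0), ("congruent", 560.0),
        ("incongruent", 610.0), ("incongruent", 625.0), ("incongruent", 590.0),
    ]

    congruent_rts = [rt for kind, rt in trials if kind == "congruent"]
    incongruent_rts = [rt for kind, rt in trials if kind == "incongruent"]

    # The CCE is the reaction-time cost of incongruent distractors; a larger
    # CCE indicates stronger visual interference with tactile localization.
    cce_ms = mean(incongruent_rts) - mean(congruent_rts)
    print(f"CCE = {cce_ms:.1f} ms")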


Psychological Science | 2013

Turning Body and Self Inside Out: Visualized Heartbeats Alter Bodily Self-Consciousness and Tactile Perception

Jane E. Aspell; Lukas Heydrich; Guillaume Marillier; Tom Lavanchy; Bruno Herbelin; Olaf Blanke

Prominent theories highlight the importance of bodily perception for self-consciousness, but it is currently not known whether bodily perception is based on interoceptive or exteroceptive signals or on integrated signals from these anatomically distinct systems. In the research reported here, we combined both types of signals by surreptitiously providing participants with visual exteroceptive information about their heartbeat: A real-time video image of a periodically illuminated silhouette outlined participants’ (projected, “virtual”) bodies and flashed in synchrony with their heartbeats. We investigated whether these “cardio-visual” signals could modulate bodily self-consciousness and tactile perception. We report two main findings. First, synchronous cardio-visual signals increased self-identification with and self-location toward the virtual body, and second, they altered the perception of tactile stimuli applied to participants’ backs so that touch was mislocalized toward the virtual body. We argue that the integration of signals from the inside and the outside of the human body is a fundamental neurobiological process underlying self-consciousness.


European Journal of Neuroscience | 2005

Neuromagnetic correlates of visual motion coherence

Jane E. Aspell; Topi Tanskanen; Anya Hurlbert

In order to characterize cortical responses to coherent motion, we used magnetoencephalography (MEG) to measure human brain activity that is modulated by the degree of global coherence in a visual motion stimulus. Five subjects passively viewed two-phase motion sequences of sparse random-dot fields. In the first (incoherent) phase the dots moved in random directions; in the second (coherent) phase a variable percentage of dots moved uniformly in one direction while the others moved randomly. We show that: (i) visual-motion-evoked magnetic fields, measured with a whole-scalp neuromagnetometer, reveal two transient events, within which we identify two significant peaks, the ‘ON-M220’ peak approximately 220 ms after the onset of incoherent motion and the ‘TR-M230’ peak approximately 230 ms after the transition from incoherent to coherent motion; (ii) in lateral occipital channels, the TR-M230 peak amplitude varies with the percentage of motion coherence; (iii) two main sources are active in response to the transition from incoherent to coherent motion, the human middle temporal area complex/V3 accessory area (hMT+/V3A) and the superior temporal sulcus (STS); and (iv) these distinct areas show a similar, significant dependence of response strength and latency on motion coherence.
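As a concrete illustration of this stimulus design, here is a minimal Python sketch that assigns per-dot motion directions for a given coherence level; the function name and all parameters are hypothetical and are not taken from the actual experiment:

    # Minimal sketch of a random-dot motion-coherence stimulus of the kind
    # described above. At coherence c, a fraction c of the dots share one
    # "signal" direction; the remaining dots move in random directions.
    import numpy as np

    def dot_directions(n_dots: int, coherence: float, signal_dir_rad: float,
                       rng: np.random.Generator) -> np.ndarray:
        """Return one motion direction (radians) per dot."""
        n_signal = int(round(coherence * n_dots))
        # Default: every dot moves in a random direction (incoherent phase).
        directions = rng.uniform(0.0, 2.0 * np.pi, size=n_dots)
        # A random subset of dots is assigned the common signal direction.
        signal_idx = rng.choice(n_dots, size=n_signal, replace=False)
        directions[signal_idx] = signal_dir_rad
        return directions

    rng = np.random.default_rng(0)
    dirs = dot_directions(n_dots=200, coherence=0.5, signal_dir_rad=0.0, rng=rng)
    print(f"{np.mean(np.isclose(dirs, 0.0)):.0%} of dots move rightward")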


PLOS ONE | 2012

Extending the Body to Virtual Tools Using a Robotic Surgical Interface: Evidence from the Crossmodal Congruency Task

Ali Sengül; Michiel van Elk; Giulio Rognini; Jane E. Aspell; Hannes Bleuler; Olaf Blanke

The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was observed not only when the virtual-robotic tools were actively used (Experiment 1) but also when they were held passively (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. We discuss our data with respect to learning and human factors in the field of surgical robotics, and consider the use of new technologies in the field of cognitive neuroscience.


Journal of Neurophysiology | 2011

Leg muscle vibration modulates bodily self-consciousness: integration of proprioceptive, visual, and tactile signals

Estelle Palluel; Jane E. Aspell; Olaf Blanke

Behavioral studies have used visuo-tactile conflicts between a participant's body and a visually presented fake or virtual body to investigate the importance of bodily perception for self-consciousness (bodily self-consciousness). Illusory self-identification with a fake body and changes in tactile processing (modulation of visuo-tactile cross-modal congruency effects, CCEs) were reported in previous studies. Although proprioceptive signals are deemed important for bodily self-consciousness, their contribution to the representation of the full body has not been studied. Here we investigated whether and how self-identification and tactile processing (CCE magnitude) could be modified by altering proprioceptive signals with 80-Hz vibrations at the legs. Participants made elevation judgments of tactile cues (while ignoring nearby lights) during synchronous and asynchronous stroking of a seen fake body. We found that proprioceptive signals during vibrations altered the magnitude of self-identification and the mislocalization of touch (CCE) in a synchrony-dependent fashion: we observed an increase in self-identification and CCE magnitude during asynchronous stroking. In a second, control experiment we studied whether proprioceptive signals per se, or those from the lower limbs in particular, were essential for these changes. We applied vibrations at the upper limbs (which provide no information about the position of the participant's body in space) and in this case observed no modulation of bodily self-consciousness or tactile perception. These data link proprioceptive signals from the legs, conveyed through the dorsal column-medial lemniscal pathway, to bodily self-consciousness. We discuss their integration with bodily signals from vision and touch for full-body representations.


European Journal of Neuroscience | 2010

Seeing the body modulates audiotactile integration

Jane E. Aspell; Tom Lavanchy; Bigna Lenggenhager; Olaf Blanke

Audiotactile integration has been studied using various experimental setups, but so far crossmodal congruency effects (CCEs) have not been found for tactile targets paired with auditory distractors. In the present study we investigated whether audiotactile CCEs exist and, if so, whether these CCEs have similar characteristics to those found by previous authors with visual distractors. We measured audiotactile CCEs by attaching four vibrators to the backs of participants and presented auditory stimuli from four loudspeakers placed, in separate blocks, at different distances in front of or behind the participant's body. Participants discriminated the elevation of tactile stimuli while ignoring the auditory distractors. CCEs were found only when participants were provided with noninformative vision of their own body, as seen from behind via a camera and head-mounted display; they were absent when participants did not view their body. Furthermore, in contrast to visuotactile CCEs, audiotactile CCEs did not depend on whether the distractors were presented on the same or a different side as the tactile targets. The present study provides the first demonstration of an audiotactile CCE: incongruent auditory distractors impaired performance on a tactile elevation discrimination task relative to performance with congruent distractors. We show that audiotactile CCEs differ from visuotactile CCEs in that they do not appear to be as sensitive to the spatial relations between the distractors and the tactile stimuli. We also show that these CCEs are modulated by vision of the body.


European Journal of Neuroscience | 2013

Visuo-tactile integration and body ownership during self-generated action

Giulio Rognini; Ali Sengül; Jane E. Aspell; Roy Salomon; Hannes Bleuler; Olaf Blanke

Although there is increasing knowledge about how visual and tactile cues from the hands are integrated, little is known about how self‐generated hand movements affect such multisensory integration. Visuo‐tactile integration often occurs under highly dynamic conditions requiring sensorimotor updating. Here, we quantified visuo‐tactile integration by measuring cross‐modal congruency effects (CCEs) in different bimanual hand movement conditions with the use of a robotic platform. We found that classical CCEs also occurred during bimanual self‐generated hand movements, and that such movements lowered the magnitude of visuo‐tactile CCEs as compared to static conditions. Visuo‐tactile integration, body ownership and the sense of agency were decreased by adding a temporal visuo‐motor delay between hand movements and visual feedback. These data show that visual stimuli interfere less with the perception of tactile stimuli during movement than during static conditions, especially when decoupled from predictive motor information. The results suggest that current models of visuo‐tactile integration need to be extended to account for multisensory integration in dynamic conditions.


Experimental Brain Research | 2010

Differential human brain activation by vertical and horizontal global visual textures

Jane E. Aspell; John Wattam-Bell; Janette Atkinson; Oliver Braddick

Mid-level visual processes that integrate local orientation information for the detection of global structure can be investigated using global form stimuli of varying complexity. Several lines of evidence suggest that the identification of concentric and parallel organisations relies on different underlying neural substrates. The current study measured brain activation by concentric, horizontal-parallel, and vertical-parallel arrays of short line segments, compared to arrays of randomly oriented segments. Six subjects were scanned in a blocked-design functional magnetic resonance imaging experiment. We compared the percentage BOLD signal change during the concentric, horizontal, and vertical blocks within early retinotopic areas, the fusiform face area, and the lateral occipital complex. Unexpectedly, we found that vertical and horizontal parallel forms differentially activated visual cortical areas beyond V1, whereas in general activations to concentric and parallel forms did not differ. Vertical patterns produced the highest percentage signal change overall, and only area V3A showed a significant difference between concentric and parallel (horizontal) stimuli, with the former better activating this area. These data suggest that the difference in brain activation to vertical and horizontal forms arises at intermediate or global levels of visual representation, since the differential activity was found in mid-level retinotopic areas V2 and V3 but not in V1. This may explain why earlier studies, using methods that emphasised responses to local orientation, did not discover this vertical-horizontal anisotropy.
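The percentage BOLD signal change measure referred to above can be illustrated with a minimal Python sketch; the region-of-interest values below are hypothetical numbers chosen for illustration, not data from the study:

    # Minimal sketch of a percentage BOLD signal change computation.
    # Signal change is expressed relative to a baseline condition, here
    # the randomly oriented blocks (all values are hypothetical).
    import numpy as np

    baseline = np.array([100.2, 99.8, 100.1, 100.0])   # mean ROI signal, random blocks
    vertical = np.array([101.4, 101.1, 101.6, 101.2])  # mean ROI signal, vertical-parallel blocks

    pct_change = 100.0 * (vertical.mean() - baseline.mean()) / baseline.mean()
    print(f"Vertical vs. random: {pct_change:.2f}% signal change")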


Vision Research | 2006

Interaction of spatial and temporal integration in global form processing

Jane E. Aspell; John Wattam-Bell; Oliver Braddick

The mechanisms by which global structure is extracted from local orientation information are not well understood. Sensitivity to global structure can be investigated using coherence thresholds for detection of global forms of varying complexity, such as parallel and concentric arrays of oriented line elements. In this study, we investigated temporal integration in the detection of these forms and its interaction with spatial integration. We find that for concentric patterns, integration times drop as region size increases from 3 degrees to 10.9 degrees, while for parallel patterns, the reverse is true. The same spatiotemporal relationship was found for Glass patterns as for line element arrays. The two types of organization therefore show quite different spatiotemporal relations, supporting previous arguments that different types of neural mechanism underlie their detection.

Collaboration


Dive into Jane E. Aspell's collaborations.

Top co-authors:

Olaf Blanke, École Polytechnique Fédérale de Lausanne

Estelle Palluel, École Polytechnique Fédérale de Lausanne

Tom Lavanchy, École Polytechnique Fédérale de Lausanne

Bruno Herbelin, École Polytechnique Fédérale de Lausanne

Lukas Heydrich, École Polytechnique Fédérale de Lausanne

Ali Sengül, École Polytechnique Fédérale de Lausanne