Publication


Featured research published by Hb Helbig.


Experimental Brain Research | 2007

Optimal integration of shape information from vision and touch

Hb Helbig; Marc O. Ernst

Many tasks can be carried out by using several sources of information. For example, an object's size and shape can be judged based on visual as well as haptic cues. It has been shown recently that human observers integrate visual and haptic size information in a statistically optimal fashion, in the sense that the integrated estimate is most reliable (Ernst and Banks in Nature 415:429–433, 2002). In the present study, we tested whether this also holds for visual and haptic shape information. Previous studies tested for optimality in integration using virtual stimuli. Virtual displays may, however, contain additional inappropriate cues that provide conflicting information and thus affect cue integration. Therefore, we studied optimal integration using real objects. Furthermore, we presented visual information via mirrors to create a spatial separation between visual and haptic cues, while observers saw their hand touching the object and thus knew that they were seeing and feeling the same object. Does this knowledge promote integration even though the signals are spatially discrepant, a condition that has been shown to lead to a breakdown of integration (Gepshtein et al. in J Vis 5:1013–1023, 2005)? Consistent with the model predictions, observers weighted visual and haptic cues to shape according to their reliability: progressively more weight was given to haptics as visual information became less reliable. Moreover, the integrated visual-haptic estimate was more reliable than either unimodal estimate. These findings suggest that observers integrate visual and haptic shape information of real 3D objects, and that knowledge that multisensory signals arise from the same object promotes integration.
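The phrase "statistically optimal" above refers to the maximum-likelihood cue-combination model of Ernst and Banks (2002). For reference, a standard textbook formulation (our notation, not quoted from the paper): the integrated estimate is a reliability-weighted average of the unimodal estimates, and its variance is at most that of the more reliable cue.

$$\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \qquad w_H = 1 - w_V$$

$$\sigma_{VH}^2 = \frac{\sigma_V^2 \sigma_H^2}{\sigma_V^2 + \sigma_H^2} \leq \min(\sigma_V^2, \sigma_H^2)$$

These two equations correspond to the two findings reported: weight shifts toward haptics as the visual noise $\sigma_V$ grows, and the bimodal estimate is more reliable than either unimodal estimate alone.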


Journal of Vision | 2008

Visual-haptic cue weighting is independent of modality-specific attention

Hb Helbig; Marc O. Ernst

Some object properties (e.g., size, shape, and depth information) are perceived through multiple sensory modalities. Such redundant sensory information is integrated into a unified percept. The integrated estimate is a weighted average of the sensory estimates, where higher weight is attributed to the more reliable sensory signal. Here we examine whether modality-specific attention can affect multisensory integration. Selectively reducing attention in one sensory channel can reduce the relative reliability of the estimate derived from this channel and might thus alter the weighting of the sensory estimates. In the present study, observers performed unimodal (visual and haptic) and bimodal (visual-haptic) size discrimination tasks. They either performed the primary task alone or performed a secondary task simultaneously (dual task). The secondary task consisted of a same/different judgment of rapidly presented visual letter sequences, and so might be expected to withdraw attention predominantly from the visual rather than the haptic channel. Comparing size discrimination performance in single- and dual-task conditions, we found that vision-based estimates were more affected by the secondary task than haptics-based estimates, indicating that attention was indeed withdrawn more from vision than from haptics. This attentional manipulation, however, did not affect the cue weighting in the bimodal task. Bimodal discrimination performance was better than unimodal performance in both single- and dual-task conditions, indicating that observers still integrate visual and haptic size information in the dual-task condition, when attention is withdrawn from vision. These findings indicate that visual-haptic cue weighting is independent of modality-specific attention.
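To make the benchmark concrete, the sketch below computes the weights and bimodal variance that the maximum-likelihood model predicts from unimodal noise levels. The noise values are hypothetical, chosen only to illustrate the logic of the single- vs. dual-task comparison; they are not data from the study.

```python
import math

def mle_weights(sigma_v, sigma_h):
    """Maximum-likelihood cue weights; reliability = 1 / variance."""
    r_v, r_h = 1.0 / sigma_v**2, 1.0 / sigma_h**2
    w_v = r_v / (r_v + r_h)
    return w_v, 1.0 - w_v

def bimodal_sigma(sigma_v, sigma_h):
    """Predicted standard deviation of the integrated (bimodal) estimate."""
    return math.sqrt(sigma_v**2 * sigma_h**2 / (sigma_v**2 + sigma_h**2))

# Hypothetical unimodal noise levels (arbitrary units); the haptic noise is
# assumed unchanged by the visual secondary task.
sigma_h = 1.2
for label, sigma_v in [("single task", 1.0), ("dual task", 1.5)]:
    w_v, w_h = mle_weights(sigma_v, sigma_h)
    # If the attentional manipulation lowered the effective visual reliability
    # in the bimodal task, the model would predict a weight shift toward
    # haptics; the study found that the empirical weighting did not change.
    print(f"{label}: w_v = {w_v:.2f}, w_h = {w_h:.2f}, "
          f"predicted bimodal sigma = {bimodal_sigma(sigma_v, sigma_h):.2f}")
```

Note that the predicted bimodal sigma is below both unimodal sigmas regardless of the weighting, which is consistent with bimodal discrimination being better than unimodal in both the single- and dual-task conditions.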


Perception | 2007

Knowledge about a common source can promote visual-haptic integration

Hb Helbig; Marc O. Ernst

The brain integrates object information from multiple sensory systems to form a unique representation of our environment. Temporal synchrony and spatial coincidence are important factors for multisensory integration, indicating that the multisensory signals come from a common source. Spatial separations can lead to a decline of visual-haptic integration (Gepshtein et al., 2005, Journal of Vision 5:1013–1023). Here we tested whether prior knowledge that two signals arise from the same object can promote integration even when the signals are spatially discrepant. In one condition, participants had a direct view of the object they touched. In a second condition, mirrors were used to create a spatial separation between the seen and the felt object. Participants saw the mirror and their hand in the mirror exploring the object, and thus knew that they were seeing and touching the same object. To determine the visual-haptic interaction, we created a conflict between the seen and the felt shape using an optically distorting lens that made a rectangle look like a square. Participants judged the shape of the probe by selecting a comparison object matching in shape. We found a mutual biasing effect of shape information from vision and touch, independent of whether participants directly looked at the object they touched or whether the seen and the felt object information was spatially separated with the aid of a mirror. This finding suggests that prior knowledge about object identity can promote integration, even when information from vision and touch is provided at spatially discrepant locations.


Experimental Brain Research | 2010

Action observation can prime visual object recognition

Hb Helbig; Jasmin Steinwender; Markus Graf; Markus Kiefer

Observing an action activates action representations in the motor system. Moreover, the representations of manipulable objects are closely linked to the motor system at a functional and neuroanatomical level. Here, we investigated whether action observation can facilitate object recognition using an action priming paradigm. As prime stimuli we presented short video movies showing hands performing an action in interaction with an object (the object itself was always removed from the video). The prime movie was followed by a briefly presented target object affording motor interactions that were either similar (congruent condition) or dissimilar (incongruent condition) to the prime action. Participants had to decide whether an object name shown after the target picture corresponded to the picture (picture–word matching task). We found superior accuracy for prime–target pairs with congruent as compared to incongruent actions across two experiments. Thus, action observation can facilitate recognition of a manipulable object that typically involves a similar action. This action priming effect supports the notion that action representations play a functional role in object recognition.


Journal of Cognitive Neuroscience | 2011

Tracking the time course of action priming on object recognition: Evidence for fast and slow influences of action on perception

Markus Kiefer; Eun-Jin Sim; Hb Helbig; Markus Graf

Perception and action are classically thought to be supported by functionally and neuroanatomically distinct mechanisms. However, recent behavioral studies using an action priming paradigm challenged this view and showed that action representations can facilitate object recognition. This study determined whether action representations influence object recognition during early visual processing stages, that is, within the first 150 msec. To this end, the time course of brain activation underlying such action priming effects was examined by recording ERPs. Subjects were sequentially presented with two manipulable objects (e.g., tools), which had to be named. In the congruent condition, both objects afforded similar actions, whereas dissimilar actions were afforded in the incongruent condition. To test the influence of the prime modality on action priming, the first object (prime) was presented either as a picture or as a word. We found an ERP effect of action priming over the central scalp as early as 100 msec after target onset for pictorial, but not for verbal, primes. A later action priming effect on the N400 ERP component, known to index semantic integration processes, was obtained for both picture and word primes. The early effect was generated in a fronto-parietal motor network, whereas the late effect reflected activity in anterior temporal areas. The present results indicate that action priming influences object recognition through both fast and slow pathways: action priming affects rapid visuomotor processes only when elicited by pictorial prime stimuli, but it also modulates comparatively slow conceptual integration processes independent of the prime modality.


NeuroImage | 2012

The neural mechanisms of reliability-weighted integration of shape information from vision and touch

Hb Helbig; Marc O. Ernst; Emiliano Ricciardi; Pietro Pietrini; Axel Thielscher; Katja M. Mayer; J Schultz; Uta Noppeney

Behaviourally, humans have been shown to integrate multisensory information in a statistically optimal fashion by averaging the individual unisensory estimates according to their relative reliabilities. This form of integration is optimal in that it yields the most reliable (i.e. least variable) multisensory percept. The present study investigates the neural mechanisms underlying integration of visual and tactile shape information at the macroscopic scale of the regional BOLD response. Observers discriminated the shapes of ellipses that were presented bimodally (visual-tactile) or visually alone. A 2 × 5 factorial design manipulated (i) the presence vs. absence of tactile shape information and (ii) the reliability of the visual shape information (five levels). We then investigated whether regional activations underlying tactile shape discrimination depended on the reliability of visual shape. Indeed, in primary somatosensory cortices (bilateral BA2) and the superior parietal lobe, the responses to tactile shape input were increased when the reliability of visual shape information was reduced. Conversely, tactile inputs suppressed visual activations in the right posterior fusiform gyrus when the visual signal was blurred and unreliable. Somatosensory and visual cortices may sustain integration of visual and tactile shape information either via direct connections from visual areas or via top-down effects from higher order parietal areas.


Cerebral Cortex | 2015

When Action Observation Facilitates Visual Perception: Activation in Visuo-Motor Areas Contributes to Object Recognition

Eun-Jin Sim; Hb Helbig; Markus Graf; Markus Kiefer

Recent evidence suggests an interaction between the ventral visual-perceptual and dorsal visuo-motor brain systems during the course of object recognition. However, the precise function of the dorsal stream for perception remains to be determined. The present study specified the functional contribution of the visuo-motor system to visual object recognition using functional magnetic resonance imaging and event-related potentials (ERPs) during action priming. Primes were movies showing hands performing an action with an object (the object itself was erased), followed by a manipulable target object, which afforded either a similar or a dissimilar action (congruent vs. incongruent condition). Participants had to recognize the target object within a picture-word matching task. Priming-related reductions of brain activity were found in frontal and parietal visuo-motor areas as well as in ventral regions including inferior and anterior temporal areas. Effective connectivity analyses suggested functional influences of parietal areas on anterior temporal areas. ERPs revealed priming-related source activity in visuo-motor regions at about 120 ms and later activity in the ventral stream at about 380 ms. Hence, rapidly initiated visuo-motor processes within the dorsal stream functionally contribute to visual object recognition in interaction with ventral stream processes dedicated to visual analysis and semantic integration.


Human Haptic Perception: Basics and Applications | 2008

Haptic perception in interaction with other senses

Hb Helbig; Marc O. Ernst

Human perception is inherently multisensory: we perceive the world simultaneously with multiple senses. While strolling through the farmers' market, for example, we might become aware of the presence of a delicious fruit by its characteristic smell. We might use our senses of vision and touch to identify the fruit by its typical size and shape, and touch it to select only the one with the distinctive soft texture that signals 'ripe'. When we take a bite of the fruit, we taste its characteristic flavour and hear a slight smacking sound, which confirms that the fruit we perceive with our senses of vision, touch, audition, smell and taste is a ripe, delicious peach. That is, in the natural environment the information delivered by our sense of touch is combined with information gathered by each of the other senses to create a robust percept. Combining information from multiple systems is essential because no information-processing system, whether technical or biological, is powerful enough to provide a precise and accurate sensory estimate under all conditions.


Journal of Vision | 2005

Looking in the mirror does not prevent multimodal integration

Hb Helbig; Marc O. Ernst

Question: Does integration break down when there is a spatial discrepancy between the visual and the haptic stimulus and there is no reason to assume that both signals belong to the same object? Conclusion: In the absence of facilitating cognitive factors, subjects do not integrate information that emanates from spatially discrepant sensory sources; the reported shape percept is determined by the task (e.g., visual matching).


Journal of Vision | 2010

Integration of shape information from vision and touch: Optimal perception and neural correlates

Hb Helbig; Emiliano Ricciardi; Pietro Pietrini; Marc O. Ernst

Collaboration


Dive into Hb Helbig's collaborations.

Top Co-Authors

Uta Noppeney

University of Birmingham

Emiliano Ricciardi

National Institutes of Health
