Network


External collaborations at the country level.

Hotspot


Research topics in which Basil Wahn is active.

Publications


Featured research published by Basil Wahn.


PLOS ONE | 2016

Learning New Sensorimotor Contingencies: Effects of Long-Term Use of Sensory Augmentation on the Brain and Conscious Perception

Sabine U. König; Frank Schumann; Johannes Keyser; Caspar Goeke; Carina Krause; Susan Wache; Aleksey Lytochkin; Manuel Ebert; Vincent Brunsch; Basil Wahn; Kai Kaspar; Saskia K. Nagel; T. Meilinger; H. H. Bülthoff; Thomas Wolbers; Christian Büchel; Peter König

Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in their natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.
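The abstract does not describe how the belt translates a compass reading into vibration, so the following Python sketch is only a plausible illustration of such a mapping; the 16-motor layout and the function name are assumptions, not details of the feelSpace hardware.

```python
def active_motor(heading_deg: float, n_motors: int = 16) -> int:
    """Return the index of the belt motor assumed to point toward magnetic
    north, given the wearer's compass heading in degrees.

    Motor 0 is taken to sit at the front of the waist, with indices increasing
    clockwise; the motor count and layout are illustrative assumptions.
    """
    # North lies (360 - heading) degrees clockwise from the wearer's front.
    north_relative = (360.0 - heading_deg) % 360.0
    # Snap that angle to the nearest of n_motors evenly spaced actuators.
    return round(north_relative / (360.0 / n_motors)) % n_motors

# Example: facing east (heading 90 deg), north is directly to the wearer's
# left, i.e. motor 12 on a 16-motor belt.
print(active_motor(90.0))  # 12
```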


Frontiers in Psychology | 2017

Joint Action: Mental Representations, Shared Information and General Mechanisms for Coordinating with Others

Cordula Vesper; Ekaterina Abramova; Judith Bütepage; Francesca Ciardo; Benjamin Crossey; Alfred O. Effenberg; Dayana Hristova; April Karlinsky; Luke McEllin; Sari R. R. Nijssen; Laura Schmitz; Basil Wahn

In joint action, multiple people coordinate their actions to perform a task together. This often requires precise temporal and spatial coordination. How do co-actors achieve this? How do they coordinate their actions toward a shared task goal? Here, we provide an overview of the mental representations involved in joint action, discuss how co-actors share sensorimotor information and what general mechanisms support coordination with others. By deliberately extending the review to aspects such as the cultural context in which a joint action takes place, we pay tribute to the complex and variable nature of this social phenomenon.


Frontiers in Integrative Neuroscience | 2016

Attentional Resource Allocation in Visuotactile Processing Depends on the Task, But Optimal Visuotactile Integration Does Not Depend on Attentional Resources.

Basil Wahn; Peter König

Humans constantly process and integrate sensory input from multiple sensory modalities. However, the amount of input that can be processed is constrained by limited attentional resources. A matter of ongoing debate is whether attentional resources are shared across sensory modalities, and whether multisensory integration is dependent on attentional resources. Previous research suggested that the distribution of attentional resources across sensory modalities depends on the type of task. Here, we tested a novel task combination in a dual task paradigm: participants performed a self-terminated visual search task and a localization task either in separate sensory modalities (i.e., haptics and vision) or both within the visual modality. The tasks considerably interfered. However, participants performed the visual search task faster when the localization task was performed in the tactile modality than when both tasks were performed within the visual modality. This finding indicates that tasks performed in separate sensory modalities rely in part on distinct attentional resources. Nevertheless, participants integrated visuotactile information optimally in the localization task even when attentional resources were diverted to the visual search task. Overall, our findings suggest that visual search and tactile localization rely in part on distinct attentional resources, and that optimal visuotactile integration does not depend on attentional resources.
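"Optimal" integration in this context usually refers to maximum-likelihood cue combination. The sketch below shows the standard textbook prediction against which such optimality is typically tested; whether the paper used exactly this formulation is an assumption.

```python
def mle_integration(sigma_v: float, sigma_t: float) -> tuple[float, float, float]:
    """Textbook maximum-likelihood cue combination for a visual and a tactile
    location estimate with standard deviations sigma_v and sigma_t.

    Returns the visual weight, the tactile weight, and the predicted standard
    deviation of the combined (visuotactile) estimate.
    """
    w_v = sigma_t**2 / (sigma_v**2 + sigma_t**2)   # the more reliable cue gets more weight
    w_t = 1.0 - w_v
    sigma_vt = (sigma_v**2 * sigma_t**2 / (sigma_v**2 + sigma_t**2)) ** 0.5
    return w_v, w_t, sigma_vt

# Example: if unisensory localization errors are sigma_v = 2.0 and sigma_t = 3.0,
# optimal integration predicts a combined error below either unisensory error.
print(mle_integration(2.0, 3.0))  # (~0.69, ~0.31, ~1.66)
```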


Ergonomics | 2016

Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search.

Basil Wahn; Jessika Schwandt; Matti Krüger; Daina Crafa; Vanessa Nunnendorf; Peter König

In joint tasks, adjusting to the actions of others is critical for success. For joint visual search tasks, research has shown that when search partners visually receive information about each other's gaze, they use this information to adjust to each other's actions, resulting in faster search performance. The present study used a visual, a tactile, and an auditory display to provide search partners with information about each other's gaze. Results showed that search partners performed faster when the gaze information was received via a tactile or auditory display in comparison to receiving it via a visual display or receiving no gaze information. Findings demonstrate the effectiveness of tactile and auditory displays for receiving task-relevant information in joint tasks and are applicable to circumstances in which little or no visual information is available or in which the visual modality is already taxed with a demanding task such as air-traffic control. Practitioner Summary: The present study demonstrates that tactile and auditory displays are effective for receiving information about the actions of others in joint tasks. Findings are applicable to circumstances in which little or no visual information is available or in which the visual modality is already taxed with a demanding task.
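The abstract does not specify how the partner's gaze location was encoded on the tactile or auditory display; the sketch below illustrates two hypothetical encodings (a left-to-right actuator row and stereo panning) purely to make the idea concrete.

```python
def gaze_to_actuator(gaze_x: float, screen_width: float, n_actuators: int = 8) -> int:
    """Map the partner's horizontal gaze position to one of n_actuators
    vibrotactile elements arranged left to right (illustrative layout only)."""
    x = min(max(gaze_x, 0.0), screen_width - 1.0)   # clamp to the screen
    return int(x / screen_width * n_actuators)       # bin into an actuator index

def gaze_to_pan(gaze_x: float, screen_width: float) -> float:
    """Map the same gaze position to a stereo pan value in [-1, 1] for an
    auditory display (-1 = far left, +1 = far right)."""
    return 2.0 * gaze_x / screen_width - 1.0

# Example: gaze at three quarters of a 1920-px screen.
print(gaze_to_actuator(1440.0, 1920.0), gaze_to_pan(1440.0, 1920.0))  # 6 0.5
```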


Advances in Cognitive Psychology | 2017

Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?

Basil Wahn; Peter König

Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves resources that are shared across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures while maximizing the capability to process currently relevant information.


PLOS ONE | 2016

Pupil sizes scale with attentional load and task experience in a multiple object tracking task

Basil Wahn; Daniel P. Ferris; W. David Hairston; Peter König; Nicholas Seow Chiang Price

Previous studies have related changes in attentional load to pupil size modulations. However, studies relating changes in attentional load and task experience on a finer scale to pupil size modulations are scarce. Here, we investigated how these changes affect pupil sizes. To manipulate attentional load, participants covertly tracked between zero and five objects among several randomly moving objects on a computer screen. To investigate effects of task experience, the experiment was conducted on three consecutive days. We found that pupil sizes increased with each increment in attentional load. Across days, we found systematic pupil size reductions. We compared the model fit for predicting pupil size modulations using attentional load, task experience, and task performance as predictors. We found that a model which included attentional load and task experience as predictors had the best model fit while adding performance as a predictor to this model reduced the overall model fit. Overall, results suggest that pupillometry provides a viable metric for precisely assessing attentional load and task experience in visuospatial tasks.
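A model comparison like the one described could be run along the following lines; the column names, data, and the use of ordinary least squares with AIC are illustrative assumptions rather than the study's actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data; the column names are assumptions, not the study's.
df = pd.DataFrame({
    "pupil": [3.1, 3.4, 3.6, 3.0, 3.3, 3.5, 2.9, 3.2, 3.4],   # pupil size (mm)
    "load":  [0, 2, 4, 0, 2, 4, 0, 2, 4],                      # number of tracked targets
    "day":   [1, 1, 1, 2, 2, 2, 3, 3, 3],                      # task experience
    "perf":  [1.0, 0.9, 0.7, 1.0, 0.95, 0.8, 1.0, 0.95, 0.85], # tracking accuracy
})

# Model with attentional load and task experience as predictors ...
m1 = smf.ols("pupil ~ load + day", data=df).fit()
# ... versus the same model with task performance added.
m2 = smf.ols("pupil ~ load + day + perf", data=df).fit()

# A lower AIC indicates the better fit after penalizing the extra predictor.
print(m1.aic, m2.aic)
```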


I-perception | 2017

Auditory Stimulus Detection Partially Depends on Visuospatial Attentional Resources

Basil Wahn; Supriya Murali; Scott Sinnett; Peter König

Humans’ ability to detect relevant sensory information while being engaged in a demanding task is crucial in daily life. Yet, limited attentional resources restrict information processing. To date, it is still debated whether there are distinct pools of attentional resources for each sensory modality and to what extent the process of multisensory integration is dependent on attentional resources. We addressed these two questions using a dual task paradigm. Specifically, participants performed a multiple object tracking task and a detection task either separately or simultaneously. In the detection task, participants were required to detect visual, auditory, or audiovisual stimuli at varying stimulus intensities that were adjusted using a staircase procedure. We found that tasks significantly interfered. However, the interference was about 50% lower when tasks were performed in separate sensory modalities than in the same sensory modality, suggesting that attentional resources are partly shared. Moreover, we found that perceptual sensitivities were significantly improved for audiovisual stimuli relative to unisensory stimuli regardless of whether attentional resources were diverted to the multiple object tracking task or not. Overall, the present study supports the view that attentional resource allocation in multisensory processing is task-dependent and suggests that multisensory benefits are not dependent on attentional resources.
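The staircase procedure is not specified beyond adjusting stimulus intensities; the sketch below shows a generic one-up/two-down staircase as one common choice (the actual rule and step sizes used in the study are not given here).

```python
def staircase(initial: float, step: float, n_trials: int, respond) -> list[float]:
    """Simple one-up/two-down adaptive staircase (illustrative only).

    `respond(intensity)` should return True for a correct detection.
    Intensity rises after each miss and falls after two consecutive hits,
    converging near the 70.7%-correct point of the psychometric function.
    """
    intensity, hits, history = initial, 0, []
    for _ in range(n_trials):
        history.append(intensity)
        if respond(intensity):
            hits += 1
            if hits == 2:            # two correct in a row -> make it harder
                intensity -= step
                hits = 0
        else:                         # one miss -> make it easier
            intensity += step
            hits = 0
    return history

# Example with a toy observer whose threshold sits at intensity 0.5.
import random
print(staircase(1.0, 0.1, 20, lambda i: random.random() < min(1.0, i / 0.5))[-5:])
```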


PLOS ONE | 2018

Performance similarities predict collective benefits in dyadic and triadic joint visual search

Basil Wahn; Artur Czeszumski; Peter König

When humans perform tasks together, they may reach a higher performance than the best member of the group (i.e., a collective benefit). Earlier research showed that interindividual performance similarities predict collective benefits for several joint tasks. Yet, researchers have not tested whether this is the case for joint visuospatial tasks. Nor have they investigated whether dyads and triads reach a collective benefit when they are not allowed to exchange any information while performing a visuospatial task. In this study, participants performed a joint visual search task either alone, in dyads, or in triads, and were not allowed to exchange any information while doing the task. We found that dyads reached a collective benefit. Triads outperformed their best individual member and dyads, yet they did not outperform the best dyad pairing within the triad. In addition, similarities in performance significantly predicted the collective benefit for dyads and triads. Furthermore, we found that the dyads' and triads' search performances closely matched a simulated performance based on the individual search performances, which assumed that members of a group act independently. Overall, the present study supports the view that performance similarities predict collective benefits in joint tasks. Moreover, it provides a basis for future studies to investigate the benefits of exchanging information between co-actors in joint visual search tasks.
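The independence baseline mentioned in the abstract can be approximated with a small Monte Carlo simulation in which each member's response time is drawn from that member's own distribution and the group responds with the fastest draw; the sampling scheme and data below are illustrative assumptions, not the paper's actual simulation.

```python
import random

def simulate_independent_group(individual_rts, group_size, n_sim=10_000, seed=0):
    """Predict a group's mean search time from its members' individual
    reaction times, assuming members search independently and the group
    responds as soon as the fastest member finds the target."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sim):
        # Draw one reaction time per member from that member's own RT pool.
        draws = [rng.choice(rts) for rts in individual_rts[:group_size]]
        sims.append(min(draws))
    return sum(sims) / len(sims)

# Example with made-up individual RT pools (seconds) for three participants.
rts = [[2.1, 2.5, 1.9], [2.8, 3.0, 2.6], [2.3, 2.2, 2.7]]
print(simulate_independent_group(rts, group_size=2))
print(simulate_independent_group(rts, group_size=3))
```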


bioRxiv | 2017

Pupil Size Asymmetries Are Modulated By An Interaction Between Attentional Load And Task Experience

Basil Wahn; Daniel P. Ferris; W. David Hairston; Peter König

In a recently published study [1], we investigated how human pupil sizes are modulated by task experience as well as attentional load in a visuospatial task. In particular, participants performed a multiple object tracking (MOT) task while pupil sizes were recorded using binocular eyetracking measurements. To vary the attentional load, participants tracked between zero and five targets. To manipulate task experience, participants performed the MOT task on three consecutive days. We found that pupil sizes systematically increased with attentional load and decreased with additional task experience. For all these analyses, we averaged across the pupil sizes of the left and right eye. However, findings of a recent study [2] have suggested that asymmetries in pupil sizes could also be related to attentional processing. Given these findings, we further analyzed our data to investigate to what extent pupil size asymmetries are modulated by attentional load and task experience. We found a significant interaction effect between these two factors: on the first day of measurements, pupil size asymmetries were not modulated by attentional load, whereas this was the case for the second and third day. In particular, for the second and third day, pupil size asymmetries systematically increased with attentional load, indicating that attentional processing also modulates pupil size asymmetries. Given these results, we suggest that an increase in task experience (and the associated reduction in arousal) uncovers modulations in pupil size asymmetries related to attentional processing that are not observable at typical arousal levels. We suggest that these modulations could be a result of right-lateralized attentional processing in the brain that in turn influences structures involved in the control of pupil sizes, such as the locus coeruleus. We can exclude a number of possible alternative explanations for this effect related to our experimental setup. Yet, given the novelty of this finding and the arguably speculative explanation of the underlying mechanisms, we suggest that future studies are needed to replicate the present effect and further investigate the underlying mechanisms.


Annals of the New York Academy of Sciences | 2018

Group benefits in joint perceptual tasks—a review

Basil Wahn; Alan Kingstone; Peter König

In daily life, humans often perform perceptual tasks together to reach a shared goal. In these situations, individuals may collaborate (e.g., by distributing task demands) to perform the task better than when the task is performed alone (i.e., attain a group benefit). In this review, we identify the factors influencing if, and to what extent, a group benefit is attained and provide a framework of measures to assess group benefits in perceptual tasks. In particular, we integrate findings from two frequently investigated joint perceptual tasks: visuospatial tasks and decision‐making tasks. For both task types, we find that an exchange of information between coactors is critical to improve joint performance. Yet, the type of exchanged information and how coactors collaborate differs between tasks. In visuospatial tasks, coactors exchange information about the performed actions to distribute task demands. In perceptual decision‐making tasks, coactors exchange their confidence on their individual perceptual judgments to negotiate a joint decision. We argue that these differences can be explained by the task structure: coactors distribute task demands if a joint task allows for a spatial division and stimuli can be accurately processed by one individual. Otherwise, they perform the task individually and then integrate their individual judgments.

Collaboration


Dive into Basil Wahn's collaborations.

Top Co-Authors

Peter König, University of Osnabrück
Alan Kingstone, University of British Columbia
Laura Schmitz, Central European University
Scott Sinnett, University of Hawaii at Manoa
April Karlinsky, University of British Columbia
Caspar Goeke, University of Osnabrück
Frank Schumann, University of Osnabrück