Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bernard Marius 't Hart is active.

Publication


Featured research published by Bernard Marius 't Hart.


Frontiers in Neuroscience | 2011

Pupil Dilation Signals Surprise: Evidence for Noradrenaline’s Role in Decision Making

Kerstin Preuschoff; Bernard Marius 't Hart; Wolfgang Einhäuser

Our decisions are guided by the rewards we expect. These expectations are often based on incomplete knowledge and are thus subject to uncertainty. While the neurophysiology of expected rewards is well understood, less is known about the physiology of uncertainty. We hypothesize that uncertainty, or more specifically errors in judging uncertainty, are reflected in pupil dilation, a marker that has frequently been associated with decision making, but so far has remained largely elusive to quantitative models. To test this hypothesis, we measure pupil dilation while observers perform an auditory gambling task. This task dissociates two key decision variables – uncertainty and reward – and their errors from each other and from the act of the decision itself. We first demonstrate that the pupil does not signal expected reward or uncertainty per se, but instead signals surprise, that is, errors in judging uncertainty. While this general finding is independent of the precise quantification of these decision variables, we then analyze this effect with respect to a specific mathematical model of uncertainty and surprise, namely risk and risk prediction error. Using this quantification, we find that pupil dilation and risk prediction error are indeed highly correlated. Under the assumption of a tight link between noradrenaline (NA) and pupil size under constant illumination, our data may be interpreted as empirical evidence for the hypothesis that NA plays a similar role for uncertainty as dopamine does for reward, namely the encoding of error signals.
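
The risk formalism invoked here can be made concrete. The sketch below assumes the standard definitions from this line of work: reward prediction error δ = r − E[r], risk = E[δ²] (the outcome variance), and risk prediction error = δ² − risk. The payoffs and probabilities are illustrative, not the study's actual gamble.

```python
import numpy as np

# Hypothetical two-outcome gamble (values are illustrative only).
outcomes = np.array([1.0, 10.0])   # possible payoffs
probs = np.array([0.25, 0.75])     # their probabilities

expected_reward = np.sum(probs * outcomes)                # E[r]
risk = np.sum(probs * (outcomes - expected_reward) ** 2)  # E[delta^2], the variance

def prediction_errors(observed_reward):
    """Return (reward prediction error, risk prediction error)."""
    delta = observed_reward - expected_reward   # error in judging reward
    risk_pe = delta ** 2 - risk                 # error in judging uncertainty
    return delta, risk_pe

for r in outcomes:
    delta, risk_pe = prediction_errors(r)
    print(f"outcome {r:5.2f}: reward PE = {delta:+6.2f}, risk PE = {risk_pe:+6.2f}")
```

The unlikely outcome yields a positive risk prediction error (the outcome was more surprising than the judged risk warranted), the likely outcome a negative one; it is this signed surprise signal, rather than reward or risk per se, that the pupil is argued to track.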


Visual Cognition | 2009

Gaze allocation in natural stimuli: Comparing free exploration to head-fixed viewing conditions

Bernard Marius 't Hart; Johannes Vockeroth; Frank Schumann; Klaus Bartl; Erich Schneider; Peter König; Wolfgang Einhäuser

“Natural” gaze is typically measured by tracking eye positions during scene presentation in laboratory settings. How informative are such investigations for real-world conditions? Using a mobile eyetracking setup (“EyeSeeCam”), we measure gaze during free exploration of various in- and outdoor environments, while simultaneously recording head-centred videos. Here, we replay these videos in a laboratory setup. Half of the laboratory observers view the movies continuously, half as sequences of static 1-second frames. We find a bias of eye position to the stimulus centre, which is strongest in the 1 s frame replay condition. As a consequence, interobserver consistency is highest in this condition, though not fully explained by spatial bias alone. This leaves room for image specific bottom-up models to predict gaze beyond generic biases. Indeed, the “saliency map” predicts eye position in all conditions, and best for continuous replay. Continuous replay predicts real-world gaze better than 1 s frame replay does. In conclusion, experiments and models benefit from preserving the spatial statistics and temporal continuity of natural stimuli to improve their validity for real-world gaze behaviour.
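
A claim like "the saliency map predicts eye position beyond generic biases" is usually quantified with an ROC analysis: map values at fixated pixels are compared against values at control pixels, with a pure center-bias map as the baseline. Below is a minimal sketch of such an evaluation, assumed here for illustration rather than taken from the paper's pipeline.

```python
import numpy as np

def fixation_auc(pred_map, fix_xy, n_control=10_000, seed=0):
    """ROC area: probability that the map value at a fixated pixel exceeds
    the value at a uniformly drawn control pixel (0.5 = chance)."""
    rng = np.random.default_rng(seed)
    h, w = pred_map.shape
    pos = pred_map[fix_xy[:, 1], fix_xy[:, 0]]   # fix_xy holds (x, y) pairs
    neg = pred_map[rng.integers(0, h, n_control),
                   rng.integers(0, w, n_control)]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties   # Mann-Whitney formulation of the AUC

def center_bias_map(h, w, sigma_frac=0.25):
    """Isotropic Gaussian over the image: spatial bias, no image content."""
    y, x = np.mgrid[0:h, 0:w]
    s = sigma_frac * min(h, w)
    return np.exp(-((x - (w - 1) / 2) ** 2 + (y - (h - 1) / 2) ** 2) / (2 * s**2))
```

A content-based model earns its keep only when fixation_auc(saliency_map, fixations) beats fixation_auc(center_bias_map(h, w), fixations) on the same data, which is the sense in which the central bias "does not fully explain" interobserver consistency.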


Experimental Brain Research | 2012

Mind the step: complementary effects of an implicit task on eye and head movements in real-life gaze allocation

Bernard Marius 't Hart; Wolfgang Einhäuser

Gaze in real-world scenarios is controlled by a huge variety of parameters, such as stimulus features, instructions or context, all of which have been studied systematically in laboratory studies. It is, however, unclear how these results transfer to real-world situations, when participants are largely unconstrained in their behavior. Here we measure eye and head orientation and gaze in two conditions, in which we ask participants to negotiate paths in a real-world outdoor environment. The implicit task set is varied by using paths of different irregularity: In one condition, the path consists of irregularly placed steps, and in the other condition, a cobbled road is used. With both paths located adjacently, the visual environment (i.e., context and features) for both conditions is virtually identical, as is the instruction. We show that terrain regularity causes differences in head orientation and gaze behavior, specifically in the vertical direction. Participants direct head and eyes lower when terrain irregularity increases. While head orientation is not affected otherwise, vertical spread of eye-in-head orientation also increases significantly for more irregular terrain. This is accompanied by altered patterns of eye movements, which compensate for the lower average gaze to still inspect the visual environment. Our results quantify the importance of implicit task demands for gaze allocation in the real world, and imply qualitatively distinct contributions of eyes and head in gaze allocation. This underlines the care that needs to be taken when inferring real-world behavior from constrained laboratory data.


Attention Perception & Psychophysics | 2009

Saliency on a natural scene background: effects of color and luminance contrast add linearly.

Sonja Engmann; Bernard Marius 't Hart; Thomas Sieren; Selim Onat; Peter König; Wolfgang Einhäuser

In natural vision, shifts in spatial attention are associated with shifts of gaze. Computational models of such overt attention typically use the concept of a saliency map: Normalized maps of center-surround differences are computed for individual stimulus features and added linearly to obtain the saliency map. Although the predictions of such models correlate with fixated locations better than chance, their mechanistic assumptions are less well investigated. Here, we tested one key assumption: Do the effects of different features add linearly or according to a max-type of interaction? We measured the eye position of observers viewing natural stimuli whose luminance contrast and/or color contrast (saturation) increased gradually toward one side. We found that these feature gradients biased fixations toward regions of high contrasts. When two contrast gradients (color and luminance) were superimposed, linear summation of their individual effects predicted their combined effect. This demonstrated that the interaction of color and luminance contrast with respect to human overt attention is—irrespective of the precise model—consistent with the assumption of linearity, but not with a max-type interaction of these features.
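
The linear-versus-max question tested here is easy to state in code. The toy sketch below, which is not the authors' model, applies both combination rules to opposing feature gradients like those in the stimuli.

```python
import numpy as np

def normalize(m):
    """Rescale a feature-conspicuity map to [0, 1]."""
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def combine(maps, rule="linear"):
    """Combine per-feature maps into one saliency map."""
    maps = [normalize(m) for m in maps]
    if rule == "linear":
        return normalize(sum(maps))             # additive integration
    return normalize(np.maximum.reduce(maps))   # strongest feature wins locally

# Opposing gradients: luminance contrast rises to the right,
# color contrast (saturation) rises to the left.
x = np.linspace(0.0, 1.0, 256)
luminance_contrast = np.tile(x, (256, 1))
color_contrast = np.tile(x[::-1], (256, 1))

s_linear = combine([luminance_contrast, color_contrast], "linear")  # flat map
s_max = combine([luminance_contrast, color_contrast], "max")        # U-shaped map
```

The two rules make distinct predictions for where fixations should accumulate when gradients are superimposed; the fixation data reported above match the additive rule.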


Philosophical Transactions of the Royal Society B | 2013

Attention in natural scenes: contrast affects rapid visual processing and fixations alike

Bernard Marius 't Hart; Hannah Claudia Elfriede Fanny Schmidt; Ingo Klein-Harmeyer; Wolfgang Einhäuser

For natural scenes, attention is frequently quantified either by performance during rapid presentation or by gaze allocation during prolonged viewing. The two paradigms operate on different time scales and tap into covert and overt attention, respectively. To compare these, we ask some observers to detect targets (animals/vehicles) in rapid sequences, and others to freely view the same target images for 3 s while their gaze is tracked. In some stimuli, the target's contrast is modified (increased/decreased) and its background is modified either in the same or in the opposite way. We find that increasing target contrast relative to the background increases fixations and detection alike, whereas decreasing target contrast and simultaneously increasing background contrast has little effect. Contrast increase for the whole image (target + background) improves detection and contrast decrease worsens it, whereas fixation probability remains unaffected by whole-image modifications. Object-unrelated local increases or decreases of contrast attract gaze, but less than actual objects do, supporting a precedence of objects over low-level features. Detection and fixation probability are correlated: the more likely a target is detected in one paradigm, the more likely it is fixated in the other. Hence, the link between overt and covert attention, which has been established in simple stimuli, transfers to more naturalistic scenarios.
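
The target/background contrast manipulations described here amount to rescaling local luminance deviations around a regional mean. A hypothetical helper illustrating the idea; scale_contrast and the mask argument are this sketch's own names, not the study's code.

```python
import numpy as np

def scale_contrast(img, mask, factor):
    """Scale RMS contrast inside `mask` around the region's mean luminance.

    factor > 1 increases contrast, factor < 1 decreases it.
    """
    out = img.astype(float).copy()
    region = out[mask]
    mu = region.mean()
    out[mask] = np.clip(mu + factor * (region - mu), 0, 255)
    return out.astype(np.uint8)

# e.g., raise target contrast while lowering background contrast:
# modified = scale_contrast(scale_contrast(img, target_mask, 1.5),
#                           ~target_mask, 0.7)
```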


PLOS ONE | 2016

Time Course of Reach Adaptation and Proprioceptive Recalibration during Visuomotor Learning

Jennifer E. Ruttle; Erin K. Cressman; Bernard Marius 't Hart; Denise Y. P. Henriques

Training to reach with rotated visual feedback results in adaptation of hand movements that persists when the perturbation is removed (reach aftereffects). Training also leads to changes in felt hand position, which we refer to as proprioceptive recalibration. The rate at which motor and proprioceptive changes develop throughout training is unknown. Here, we aim to determine the timescale of these changes in order to gain insight into the processes that may be involved in motor learning. Following six rotated reach training trials (30° rotation) at three radially located targets, we measured reach aftereffects and perceived hand position (proprioceptively guided reaches). Participants trained with opposing rotations one week apart to determine whether the original training led to any retention or interference. Results suggest that both motor and proprioceptive recalibration occurred in as few as six rotated-cursor training trials (7.57° and 3.88°, respectively), with no retention or interference present one week after training. Despite the rapid speed of both motor and sensory changes, these shifts do not saturate to the same degree. Thus, different processes may drive these changes, and they may not constitute a single implicit process.
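
The rate at which such changes develop is commonly summarized by fitting an exponential learning curve to per-trial measures. A generic sketch on simulated data follows; the curve parameters are hypothetical, not the study's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_curve(trial, asymptote, rate):
    """Single-exponential learning curve saturating at `asymptote`."""
    return asymptote * (1.0 - np.exp(-rate * trial))

# Simulated per-trial adaptation (degrees of compensation).
rng = np.random.default_rng(1)
trials = np.arange(1, 31)
data = exp_curve(trials, 8.0, 0.5) + rng.normal(0.0, 0.5, trials.size)

(asym, rate), _ = curve_fit(exp_curve, trials, data, p0=[5.0, 0.1])
print(f"asymptote ~ {asym:.2f} deg, rate ~ {rate:.2f} per trial")
```

A steep rate constant, with measurable change already after six trials as reported here, is what distinguishes rapid recalibration from slower incremental learning.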


PLOS ONE | 2016

Separating Predicted and Perceived Sensory Consequences of Motor Learning.

Bernard Marius 't Hart; Denise Y. P. Henriques

During motor adaptation the discrepancy between predicted and actually perceived sensory feedback is thought to be minimized, but it can be difficult to measure predictions of the sensory consequences of actions. Studies attempting to do so have found that self-directed, unseen hand position is mislocalized in the direction of altered visual feedback. However, our lab has shown that motor adaptation also leads to changes in perceptual estimates of hand position, even when the target hand is passively displaced. We attribute these changes to a recalibration of hand proprioception, since in the absence of a volitional movement, efferent or predictive signals are likely not involved. The goal here is to quantify the extent to which changes in hand localization reflect a change in the predicted sensory (visual) consequences or a change in the perceived (proprioceptive) consequences. We did this by comparing changes in localization produced when the hand movement was self-generated (‘active localization’) versus robot-generated (‘passive localization’) to the same locations following visuomotor adaptation to a rotated cursor. In this passive version, there should be no predicted consequences of these robot-generated hand movements. We found that although changes in localization were somewhat larger in active localization, the passive localization task also elicited substantial changes. Our results suggest that the change in hand localization following visuomotor adaptation may not be based entirely on updating predicted sensory consequences, but may largely reflect changes in our proprioceptive state estimate.


PLOS ONE | 2011

Faces in places: humans and machines make similar face detection errors.

Bernard Marius 't Hart; Tilman Gerrit Jakob Abresch; Wolfgang Einhäuser

The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock configuration on Mars or a toast's roast pattern. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most widespread algorithm for such applications (the “Viola-Jones” algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person-perspective movies based on the algorithm's output: correct detections (“real faces”), false positives (“illusory faces”), and correctly rejected locations (“non faces”). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increases overall performance but does not change the pattern of results. When the eye movement was replaced by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make face-detection errors similar to those of the Viola-Jones algorithm when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems to be of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features, such as those encoded in the early visual system.
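
The Viola-Jones detector discussed here ships with OpenCV, so all three stimulus classes can in principle be harvested from video frames with a few lines. A minimal sketch; the file names are placeholders.

```python
import cv2

# OpenCV's stock frontal-face Haar cascade implements Viola-Jones.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("frame.png")                 # any still frame from a movie
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the cascade runs on grayscale

# Hits include both real faces (correct detections) and illusory
# faces (false positives); everything else is a candidate "non face".
faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                  minNeighbors=5, minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", img)
```

Sorting hits into "real" and "illusory" faces still requires human ground truth; the algorithm itself cannot tell its correct detections from its false positives.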


PLOS ONE | 2015

Gaze in visual search is guided more efficiently by positive cues than by negative cues

Günter Kugler; Bernard Marius 't Hart; Stefan Kohlbecher; Wolfgang Einhäuser; Erich Schneider

Visual search can be accelerated when properties of the target are known. Such knowledge allows the searcher to direct attention to items sharing these properties. Recent work indicates that information about properties of non-targets (i.e., negative cues) can also guide search. In the present study, we examine whether negative cues lead to different search behavior compared to positive cues. We asked observers to search for a target defined by a certain shape singleton (a broken line among solid lines). Each line was embedded in a colored disk. In “positive cue” blocks, participants were informed about possible colors of the target item. In “negative cue” blocks, participants were informed about the colors of items that could not contain the target. Search displays were designed such that with both the positive and negative cues, the same number of items could potentially contain the broken line (“relevant items”). Thus, both cues were equally informative. We measured response times and eye movements. Participants exhibited longer response times when provided with negative cues compared to positive cues. Although negative cues did guide the eyes to relevant items, there were marked differences in eye movements. Negative cues resulted in smaller proportions of fixations on relevant items, longer fixation durations, and higher rates of fixations per item compared to positive cues. The effectiveness of both cue types, as measured by fixations on relevant items, increased over the course of each search. In sum, a negative color cue can guide attention to relevant items, but it is less efficient than a positive cue of the same informational value.


Frontiers in Human Neuroscience | 2015

Visual Search in the Real World: Color Vision Deficiency Affects Peripheral Guidance, but Leaves Foveal Verification Largely Unaffected.

Guenter Kugler; Bernard Marius 't Hart; Stefan Kohlbecher; Klaus Bartl; Frank Schumann; Wolfgang Einhäuser; Erich Schneider

Background: People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color perception and behavioral consequences might be found in the gaze behavior of people with color vision deficiency. Methods: A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. All participants wore a mobile eye-tracking-driven camera with a high foveal image resolution (EyeSeeCam). Search performance as well as fixations of objects of different colors were examined. Results: Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, participants with color vision deficiencies exhibited strongly degraded performance. This was closely matched by the number of fixations on red objects shown by the two groups. Importantly, once they fixated a target, participants with color vision deficiencies exhibited only a few identification errors. Conclusions: In contrast to controls, participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that impaired guidance is the main influence on search performance, while foveal identification (verification) is largely unaffected by the color vision deficiency.

Collaboration


Dive into Bernard Marius 't Hart's collaborations.

Top Co-Authors


Wolfgang Einhäuser

Chemnitz University of Technology


Erich Schneider

Brandenburg University of Technology


Frank Schumann

University of Osnabrück


Peter König

University of Osnabrück
