Selim Onat
University of Osnabrück
Publications
Featured research published by Selim Onat.
Frontiers in Psychology | 2010
Alper Açık; Adjmal Sarwary; Rafael Schultze-Kraft; Selim Onat; Peter König
Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as viewing guided by local image features (color, luminance contrast, etc.), may be prominent but are later overshadowed by more top-down processing. Moreover, with the decline of visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural and complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. The ability of local feature values to discriminate fixated from non-fixated locations dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adults' viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and the elderly regarding the effects of active viewing on feature-related viewing: explorativeness correlated negatively with feature-related viewing in the young, and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing.
Journal of Vision | 2007
Selim Onat; Klaus Libertus; Peter König
In everyday life, our brains decide about the relevance of huge amounts of sensory input. Further complicating this situation, this input is distributed over different modalities. This raises the question of how different sources of information interact for the control of overt attention during free exploration of the environment under natural conditions. Different modalities may work independently or interact to determine the consequent overt behavior. To answer this question, we presented natural images and lateralized natural sounds in a variety of conditions and we measured the eye movements of human subjects. We show that, in multimodal conditions, fixation probabilities increase on the side of the image where the sound originates showing that, at a coarser scale, lateralized auditory stimulation topographically increases the salience of the visual field. However, this shift of attention is specific because the probability of fixation of a given location on the side of the sound scales with the saliency of the visual stimulus, meaning that the selection of fixation points during multimodal conditions is dependent on the saliencies of both auditory and visual stimuli. Further analysis shows that a linear combination of both unimodal saliencies provides a good model for this integration process, which is optimal according to information-theoretical criteria. Our results support a functional joint saliency map, which integrates different unimodal saliencies before any decision is taken about the subsequent fixation point. These results provide guidelines for the performance and architecture of any model of overt attention that deals with more than one modality.
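The linear integration of unimodal saliencies described in this abstract can be sketched in a few lines. The map shapes, weighting, and toy values below are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def joint_saliency(visual, auditory, w_v=0.5, w_a=0.5):
    """Combine two unimodal saliency maps linearly after normalizing
    each to sum to 1, yielding a joint fixation-probability map."""
    v = visual / visual.sum()
    a = auditory / auditory.sum()
    joint = w_v * v + w_a * a
    return joint / joint.sum()

# Toy example: a visual map with a salient patch on the left and a
# lateralized auditory map raising salience on the right half.
visual = np.ones((4, 8))
visual[1:3, 1:3] = 5.0           # salient visual region (left side)
auditory = np.ones((4, 8))
auditory[:, 4:] = 3.0            # sound originating from the right

joint = joint_saliency(visual, auditory)
# Overall fixation probability shifts toward the side of the sound,
# while the visually salient patch remains the single strongest location,
# mirroring the interaction of auditory and visual saliency in the study.
```

The key design choice matching the abstract is that the combination is a weighted sum of normalized maps, so a lateralized auditory stimulus raises fixation probability on one side without abolishing the visual saliency structure there.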
Journal of Vision | 2014
José P. Ossandón; Selim Onat; Peter König
Viewing behavior exhibits temporal and spatial structure that is independent of stimulus content and task goals. One example of such structure is horizontal biases, which are likely rooted in left-right asymmetries of the visual and attentional systems. Here, we studied the existence, extent, and mechanisms of this bias. Left- and right-handed subjects explored scenes from different image categories, presented in original and mirrored versions. We also varied the spatial spectral content of the images and the timing of stimulus onset. We found a marked leftward bias at the start of exploration that was independent of image category. This left bias was followed by a weak bias to the right that persisted for several seconds. This asymmetry was found in the majority of right-handers but not in left-handers. Neither low- nor high-pass filtering of the stimuli influenced the bias. This argues against mechanisms related to the hemispheric segregation of global versus local visual processing. Introducing a delay in stimulus onset after offset of a central fixation spot also had no influence. The bias was present even when stimuli were presented continuously, without any requirement to fixate, and was associated with both fixation- and saccade-contingent image changes. This suggests that the bias is not caused by structural asymmetries in fixation control. Instead, the pervasive horizontal bias is compatible with known asymmetries of higher-level attentional areas related to the detection of novel events.
Nature Neuroscience | 2015
Selim Onat; Christian Büchel
Organisms tend to respond similarly to stimuli that are perceptually close to an event that predicts adversity, a phenomenon known as fear generalization. Greater dissimilarity yields weaker behavioral responses, forming a fear-tuning profile. The perceptual model of fear generalization assumes that behavioral fear tuning results from perceptual similarities, suggesting that brain responses should also exhibit the same fear-tuning profile. Using fMRI and a circular fear-generalization procedure, we tested this prediction. In contrast with the perceptual model, insula responses showed less generalization than behavioral responses and encoded the aversive quality of the conditioned stimulus, as shown by high pattern similarity between the conditioned stimulus and the shock. Also inconsistent with the perceptual model, object-sensitive visual areas responded to ambiguity-related outcome uncertainty. Together these results indicate that fear generalization is not passively driven by perception, but is an active process integrating threat identification and ambiguity-based uncertainty to orchestrate a flexible, adaptive fear response.
Vision Research | 2009
Alper Açık; Selim Onat; Frank Schumann; Wolfgang Einhäuser; Peter König
During viewing of natural scenes, do low-level features guide attention, and if so, does this depend on higher-level features? To answer these questions, we studied the image category dependence of low-level feature modification effects. Subjects often fixated contrast-modified regions in natural scene images, while smaller but significant effects were observed for urban scenes and faces. Surprisingly, modifications in fractal images did not influence fixations. Further analysis revealed an inverse relationship between modification effects and higher-level, phase-dependent image features. We suggest that high- and mid-level features, such as edges, symmetries, and recursive patterns, guide attention if present. However, if the scene lacks such diagnostic properties, low-level features prevail. We posit a hierarchical framework, which combines aspects of bottom-up and top-down theories and is compatible with our data.
Pain | 2014
Stephan Geuter; Matthias Gamer; Selim Onat; Christian Büchel
Summary: Skin conductance and pupil dilation responses to painful stimuli accurately predict behavioral pain ratings across subjects at the individual trial level.

Abstract: Pain is commonly assessed by subjective reports on rating scales. However, in many experimental and clinical settings, an additional, objective indicator of pain is desirable. In order to identify an objective, parametric signature of pain intensity that is predictive at the individual stimulus level across subjects, we recorded skin conductance and pupil diameter responses to heat pain stimuli of different durations and temperatures in 34 healthy subjects. The temporal profiles of trial-wise physiological responses were characterized by component scores obtained from principal component analysis. These component scores were then used as predictors in a linear regression analysis, resulting in accurate pain predictions for individual trials. Using the temporal information encoded in the principal component scores explained the data better than prediction by a single summary statistic (i.e., maximum amplitude). These results indicate that perceived pain is best reflected by the temporal dynamics of autonomic responses. Application of the regression model to an independent data set of 20 subjects resulted in a very good prediction of the pain ratings, demonstrating the generalizability of the identified temporal pattern. Utilizing the readily available temporal information from skin conductance and pupil diameter responses thus allows parametric prediction of pain in human subjects.
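The PCA-plus-regression pipeline described in this abstract can be sketched on synthetic data. Everything below (trial counts, response shapes, noise level) is an illustrative assumption, not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 trials x 50 time points of a "physiological" response
# whose amplitude (and a late component) scales with a latent pain rating.
n_trials, n_time = 100, 50
t = np.linspace(0, 1, n_time)
pain = rng.uniform(0, 1, n_trials)                     # latent pain ratings
responses = (pain[:, None] * np.exp(-((t - 0.4) ** 2) / 0.02)
             + 0.3 * pain[:, None] ** 2 * np.exp(-((t - 0.7) ** 2) / 0.05)
             + 0.05 * rng.standard_normal((n_trials, n_time)))

# PCA via SVD on mean-centered responses; keep the first 3 component scores,
# which summarize each trial's temporal profile.
X = responses - responses.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:3].T                                  # trial-wise PC scores

def r_squared(features, target):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    A = np.column_stack([features, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    pred = A @ coef
    ss_res = np.sum((target - pred) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Compare the temporal-profile model (PC scores) against a single
# summary statistic (peak amplitude), as in the abstract.
r2_pca = r_squared(scores, pain)
r2_peak = r_squared(responses.max(axis=1)[:, None], pain)
```

On this toy data the PC-score regression recovers the latent ratings almost perfectly; whether it beats the peak-amplitude model, as reported in the paper, depends on how much rating-relevant information sits in the response's temporal shape rather than its height.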
Experimental Brain Research | 2009
Sonja Schall; Cliodhna Quigley; Selim Onat; Peter König
Disparate sensory streams originating from a common underlying event share similar dynamics, and this plays an important part in multisensory integration. Here we investigate audiovisual binding by presenting continuously changing, temporally congruent and incongruent stimuli. Recorded EEG signals are used to quantify spectrotemporal and waveform locking of neural activity to stimulus dynamics. Spectrotemporal analysis reveals locking to visual stimulus dynamics in both a broad alpha and the beta band. The properties of these effects suggest they are a correlate of bottom-up processing in the visual system. Waveform locking reveals two cortically distinct processes that lock to visual stimulus dynamics with differing topographies and time lags relative to the stimuli. Most importantly, these are modulated in strength by the congruency of an accompanying auditory stream. In addition, the waveform locking found at occipital electrodes shows an increase over stimulus duration for visual and congruent audiovisual stimuli. Hence we argue that these effects reflect audiovisual interaction. We thus propose that spectrotemporal and waveform locking reflect different mechanisms involved in the processing of dynamic audiovisual stimuli.
PLOS ONE | 2014
Selim Onat; Alper Açık; Frank Schumann; Peter König
During free-viewing of natural scenes, eye movements are guided by bottom-up factors inherent to the stimulus, as well as top-down factors inherent to the observer. The question of how these two different sources of information interact and contribute to fixation behavior has recently received a lot of attention. Here, a battery of 15 visual stimulus features was used to quantify the contribution of stimulus properties during free-viewing of 4 different categories of images (Natural, Urban, Fractal and Pink Noise). Behaviorally relevant information was estimated in the form of topographical interestingness maps by asking an independent set of subjects to click at image regions that they subjectively found most interesting. Using a Bayesian scheme, we computed saliency functions that described the probability that a given feature value is fixated. In the case of stimulus features, the precise shape of the saliency functions was strongly dependent upon image category, and overall the saliency associated with these features was generally weak. When testing multiple features jointly, a linear additive integration model of individual saliencies performed satisfactorily. We found that the saliency associated with interesting locations was much higher than that of any low-level image feature and any pair-wise combination thereof. Furthermore, the low-level image features were found to be maximally salient at those locations that already had high interestingness ratings. Temporal analysis showed that regions with high interestingness ratings were fixated as early as the third fixation following stimulus onset. Paralleling these findings, fixation durations were found to depend mainly on interestingness ratings and to a lesser extent on the low-level image features. Our results suggest that both low- and high-level sources of information play a significant role during exploration of complex scenes, with behaviorally relevant information being more effective than stimulus features.
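The Bayesian saliency functions mentioned in this abstract can be estimated empirically from the distribution of a feature at fixated locations relative to its distribution over the whole image. The feature map, fixation model, and bin count below are illustrative assumptions:

```python
import numpy as np

def saliency_function(feature_map, fixated_values, bins=20):
    """Empirical saliency of a feature: P(fixation | feature value),
    obtained via Bayes' rule from the feature's distribution at fixated
    locations and over the whole image.
    P(fix | f) is proportional to P(f | fix) / P(f), up to the prior P(fix)."""
    edges = np.linspace(feature_map.min(), feature_map.max(), bins + 1)
    p_f, _ = np.histogram(feature_map.ravel(), bins=edges, density=True)
    p_f_fix, _ = np.histogram(fixated_values, bins=edges, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        sal = np.where(p_f > 0, p_f_fix / p_f, 0.0)
    return edges, sal

# Toy example: simulated fixations oversample high-contrast locations,
# so the recovered saliency function rises with the feature value.
rng = np.random.default_rng(1)
contrast = rng.uniform(0, 1, (64, 64))     # hypothetical feature map
probs = contrast.ravel() ** 2              # fixations favor high contrast
probs /= probs.sum()
idx = rng.choice(contrast.size, size=500, p=probs)
fixated = contrast.ravel()[idx]

edges, sal = saliency_function(contrast, fixated)
```

A flat saliency function would indicate a non-predictive feature; here the ratio grows with contrast because fixations were sampled in proportion to it.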
Attention Perception & Psychophysics | 2009
Sonja Engmann; Bernard Marius 't Hart; Thomas Sieren; Selim Onat; Peter König; Wolfgang Einhäuser
In natural vision, shifts in spatial attention are associated with shifts of gaze. Computational models of such overt attention typically use the concept of a saliency map: Normalized maps of center-surround differences are computed for individual stimulus features and added linearly to obtain the saliency map. Although the predictions of such models correlate with fixated locations better than chance, their mechanistic assumptions are less well investigated. Here, we tested one key assumption: Do the effects of different features add linearly or according to a max-type interaction? We measured the eye position of observers viewing natural stimuli whose luminance contrast and/or color contrast (saturation) increased gradually toward one side. We found that these feature gradients biased fixations toward regions of high contrasts. When two contrast gradients (color and luminance) were superimposed, linear summation of their individual effects predicted their combined effect. This demonstrated that the interaction of color and luminance contrast with respect to human overt attention is, irrespective of the precise model, consistent with the assumption of linearity, but not with a max-type interaction of these features.
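The difference between the two integration schemes tested in this abstract can be made concrete with two opposing feature gradients. The maps and normalization below are a toy sketch, not the study's stimuli:

```python
import numpy as np

def combine_linear(maps, weights=None):
    """Linear (weighted-sum) integration of normalized feature maps."""
    maps = [m / m.sum() for m in maps]
    if weights is None:
        weights = np.ones(len(maps)) / len(maps)
    return sum(w * m for w, m in zip(weights, maps))

def combine_max(maps):
    """Max-type integration: at each location, the strongest feature wins."""
    maps = [m / m.sum() for m in maps]
    return np.maximum.reduce(maps)

# Toy gradients: luminance contrast rising to the right, color contrast
# (saturation) rising to the left, superimposed on the same image.
x = np.linspace(0.1, 1.0, 8)
luminance = np.tile(x, (4, 1))          # rises left -> right
color = np.tile(x[::-1], (4, 1))        # rises right -> left

linear = combine_linear([luminance, color])
maxed = combine_max([luminance, color])
# Under linear summation the two opposing gradients cancel into a flat
# saliency map; under max-integration both edges stay more salient than
# the center. Measuring where fixations actually land distinguishes the
# two schemes, which is the logic of the experiment.
```

The two rules therefore make qualitatively different fixation predictions for superimposed gradients, which is what lets eye-tracking data arbitrate between them.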
Cerebral Cortex | 2011
Selim Onat; Peter König; Dirk Jancke
Neurons in primary visual cortex have been characterized by their selectivity to orientation, spatiotemporal frequency, and motion direction, among others, all essential parameters for decomposing complex image structure. However, their concerted functioning upon real-world visual dynamics has remained unobserved, since most studies tested these parameters in isolation rather than in rich mixture. We used voltage-sensitive dye imaging to characterize population responses to natural scene movies and, for comparison, to well-established moving gratings. For the latter, we confirm previous observations of a deceleration/acceleration notch. Upon stimulation with natural movies, however, a subsequent acceleration component was almost absent. Furthermore, we found that natural stimuli revealed sparsely distributed, nonseparable space-time dynamics, continuously modulated by movie motion. Net excitation levels detected with gratings were reached only rarely with natural movies. Emphasizing this observation, across the entire time course, both average and peak amplitudes were lower than the nonspecific (i.e., minimum) activity obtained for gratings. We estimated that a ∼30% increase in movie contrast would be necessary to match the high activity levels evoked by gratings. Our results suggest that, in contrast to gratings, processing of complex natural input is based on a balanced and stationary interplay between excitation and inhibition, and point to the importance of suppressive mechanisms in shaping the operating regime of cortical dynamics.