Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sennay Ghebreab is active.

Publication


Featured research published by Sennay Ghebreab.


Journal of Vision | 2009

Brain responses strongly correlate with Weibull image statistics when processing natural images.

H.S. Scholte; Sennay Ghebreab; Lourens J. Waldorp; Arnold W. M. Smeulders; Victor A. F. Lamme

The visual appearance of natural scenes is governed by a surprisingly simple hidden structure. The distributions of contrast values in natural images generally follow a Weibull distribution, with beta and gamma as free parameters. Beta and gamma seem to structure the space of natural images in an ecologically meaningful way, in particular with respect to the fragmentation and texture similarity within an image. Since it is often assumed that the brain exploits structural regularities in natural image statistics to efficiently encode and analyze visual input, we here ask whether the brain approximates the beta and gamma values underlying the contrast distributions of natural images. We present a model that shows that beta and gamma can be easily estimated from the outputs of X-cells and Y-cells. In addition, we covaried the EEG responses of subjects viewing natural images with the beta and gamma values of those images. We show that beta and gamma explain up to 71% of the variance of the early ERP signal, substantially outperforming other tested contrast measurements. This suggests that the brain is strongly tuned to an image's beta and gamma values, potentially providing the visual system with an efficient way to rapidly classify incoming images on the basis of omnipresent low-level natural image statistics.
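The beta (scale) and gamma (shape) parameters described above can be estimated directly from an image's contrast distribution. A minimal sketch, assuming a plain Sobel gradient magnitude as the contrast measure (the paper instead derives contrast from modeled X-cell and Y-cell outputs):

```python
import numpy as np
from scipy import ndimage, stats

def weibull_contrast_params(image):
    # Local contrast via Sobel gradient magnitude (illustrative choice;
    # the paper models X-cell/Y-cell outputs instead).
    gx = ndimage.sobel(image.astype(float), axis=0)
    gy = ndimage.sobel(image.astype(float), axis=1)
    contrast = np.hypot(gx, gy).ravel()
    contrast = contrast[contrast > 0]          # Weibull support is x > 0
    # weibull_min.fit returns (shape, loc, scale); fix loc at 0.
    gamma, _, beta = stats.weibull_min.fit(contrast, floc=0)
    return beta, gamma

rng = np.random.default_rng(0)
beta, gamma = weibull_contrast_params(rng.random((64, 64)))
```

For real natural images, beta tracks overall contrast strength and gamma tracks fragmentation; the random image here only demonstrates the fitting mechanics.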


The Journal of Neuroscience | 2013

From image statistics to scene gist: evoked neural activity reveals transition from low-level natural image structure to scene category

I. Groen; Sennay Ghebreab; H. Prins; Victor A. F. Lamme; H.S. Scholte

The visual system processes natural scenes in a split second. Part of this process is the extraction of “gist,” a global first impression. It is unclear, however, how the human visual system computes this information. Here, we show that, when human observers categorize global information in real-world scenes, the brain exhibits strong sensitivity to low-level summary statistics. Subjects rated a specific instance of a global scene property, naturalness, for a large set of natural scenes while EEG was recorded. For each individual scene, we derived two physiologically plausible summary statistics by spatially pooling local contrast filter outputs: contrast energy (CE), indexing contrast strength, and spatial coherence (SC), indexing scene fragmentation. We show that behavioral performance is directly related to these statistics, with naturalness rating being influenced in particular by SC. At the neural level, both statistics parametrically modulated single-trial event-related potential amplitudes during an early, transient window (100–150 ms), but SC continued to influence activity levels later in time (up to 250 ms). In addition, the magnitude of neural activity that discriminated between man-made versus natural ratings of individual trials was related to SC, but not CE. These results suggest that global scene information may be computed by spatial pooling of responses from early visual areas (e.g., LGN or V1). The increased sensitivity over time to SC in particular, which reflects scene fragmentation, suggests that this statistic is actively exploited to estimate scene naturalness.
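The two summary statistics can be sketched by spatially pooling a local-contrast map. This is a simplified stand-in: the paper uses a multi-scale, physiologically motivated filter bank, and the SC formula below (a coefficient of variation) is an assumption for illustration only:

```python
import numpy as np
from scipy import ndimage

def summary_statistics(image):
    gx = ndimage.sobel(image.astype(float), axis=0)
    gy = ndimage.sobel(image.astype(float), axis=1)
    contrast = np.hypot(gx, gy).ravel()
    ce = contrast.mean()          # contrast energy: overall contrast strength
    sc = contrast.std() / ce      # coefficient-of-variation proxy for spatial
                                  # coherence / fragmentation (assumed form)
    return ce, sc

rng = np.random.default_rng(1)
ce, sc = summary_statistics(rng.random((64, 64)))
```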


IEEE Transactions on Biomedical Engineering | 2004

Combining strings and necklaces for interactive three-dimensional segmentation of spinal images using an integral deformable spine model

Sennay Ghebreab; Arnold W. M. Smeulders

Segmentation of the spine directly from three-dimensional (3-D) image data is desirable to accurately capture its morphological properties. We describe a method that allows true 3-D spinal image segmentation using a deformable integral spine model. The method learns the appearance of vertebrae from multiple continuous features recorded along vertebra boundaries in a given training set of images. Important summarizing statistics are encoded into a necklace model on which landmarks are differentiated on their free dimensions. The landmarks are used within a priority segmentation scheme to reduce the complexity of the segmentation problem. Necklace models are coupled by string models. The string models describe in detail the biological variability in the appearance of spinal curvatures from multiple continuous features recorded in the training set. In the segmentation phase, the necklace and string models are used to interactively detect vertebral structures in new image data via elastic deformation reminiscent of a marionette with strings allowing for movement between interrelated structures. Strings constrain the deformation of the spine model within feasible solutions. The driving application in this work is analysis of computed tomography scans of the human lumbar spine. An illustration of the segmentation process shows that the method is promising for segmentation of the spine and for assessment of its morphological properties.


PLOS Computational Biology | 2012

Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

I. Groen; Sennay Ghebreab; Victor A. F. Lamme; H. Steven Scholte

The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task.
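The dissimilarity analysis can be illustrated with a toy representational-similarity computation: correlate pairwise differences in pooled statistics with pairwise dissimilarities of ERP patterns. Everything below is a synthetic stand-in for the measured data:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
stats_per_image = rng.random((20, 2))     # two pooled contrast statistics per image
# Simulated 32-channel ERP patterns that linearly embed the statistics,
# plus a little noise -- purely synthetic, not measured EEG.
erp_patterns = np.tile(stats_per_image, (1, 16)) + 0.01 * rng.normal(size=(20, 32))
model_rdm = pdist(stats_per_image)        # pairwise differences in statistics
neural_rdm = pdist(erp_patterns)          # pairwise ERP dissimilarities
rho, _ = spearmanr(model_rdm, neural_rdm)
```

A high rank correlation between the two dissimilarity vectors is the signature reported in the abstract: images far apart in pooled statistics evoke dissimilar ERP patterns.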


International Conference on Computer Vision | 2013

Calibration-Free Gaze Estimation Using Human Gaze Patterns

Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab

We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides a sufficient accuracy to trace the viewer attention. This is promising considering the fact that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators.
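One hypothetical way to realize the "transform initial gaze points to match other viewers' patterns" step is a Procrustes alignment, which removes translation, scale, and rotation differences between point sets. This is an illustrative simplification, not the paper's actual transformation:

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(3)
reference = rng.random((10, 2))           # pooled gaze pattern from prior viewers
angle = 0.3                               # hypothetical miscalibration
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
initial = reference @ rot.T * 1.5 + 0.2   # new viewer's uncalibrated gaze topology
# Procrustes standardizes both point sets and finds the best rotation,
# so a pure similarity-transform offset aligns almost perfectly.
_, aligned, disparity = procrustes(reference, initial)
```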


IEEE Transactions on Medical Imaging | 2004

Population-based incremental interactive concept learning for image retrieval by stochastic string segmentations

Sennay Ghebreab; C. Carl Jaffe; Arnold W. M. Smeulders

We propose a method for concept-based medical image retrieval that is a superset of existing semantic-based image retrieval methods. We conceive of a concept as an incremental and interactive formalization of the user's conception of an object in an image. The premise is that such a concept is closely related to a user's specific preferences and subjectivity and, thus, allows us to deal with the complexity and content-dependency of medical image content. We describe an object in terms of multiple continuous boundary features and represent an object concept by the stochastic characteristics of an object population. A population-based incrementally learning technique, in combination with relevance feedback, is then used for concept customization. The user determines the speed and direction of concept customization using a single parameter that defines the degree of exploration and exploitation of the search space. Images are retrieved from a database in a limited number of steps based upon the customized concept. To demonstrate our method we have performed concept-based image retrieval on a database of 292 digitized X-ray images of cervical vertebrae with a variety of abnormalities. The results show that our method produces precise and accurate results when doing a direct search. In an open-ended search our method efficiently and effectively explores the search space.


Journal of Neurophysiology | 2016

The time course of natural scene perception with reduced attention

I. Groen; Sennay Ghebreab; Victor A. F. Lamme; H. Steven Scholte

Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention.


Frontiers in Computational Neuroscience | 2012

Low-level contrast statistics are diagnostic of invariance of natural textures

I. Groen; Sennay Ghebreab; Victor A. F. Lamme; H. Steven Scholte

Texture may provide important clues for real-world object and scene perception. To be reliable, these clues should ideally be invariant to common viewing variations such as changes in illumination and orientation. In a large image database of natural materials, we found textures with low-level contrast statistics that varied substantially under viewing variations, as well as textures that remained relatively constant. This led us to ask whether textures with constant contrast statistics give rise to more invariant representations compared to other textures. To test this, we selected natural texture images with either high (HV) or low (LV) variance in contrast statistics and presented these to human observers. In two distinct behavioral categorization paradigms, participants more often judged HV textures as “different” compared to LV textures, showing that textures with constant contrast statistics are perceived as being more invariant. In a separate electroencephalogram (EEG) experiment, evoked responses to single texture images (single-image ERPs) were collected. The results show that differences in contrast statistics correlated with both early and late differences in occipital ERP amplitude between individual images. Importantly, ERP differences between images of HV textures were mainly driven by illumination angle, which was not the case for LV images: there, differences were completely driven by texture membership. These converging neural and behavioral results imply that some natural textures are surprisingly invariant to illumination changes and that low-level contrast statistics are diagnostic of the extent of this invariance.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Strings: variational deformable models of multivariate continuous boundary features

Sennay Ghebreab; Arnold W. M. Smeulders

We propose a new image segmentation technique called strings. A string is a variational deformable model that is learned from a collection of example objects rather than built from a priori analytical or geometrical knowledge. As opposed to existing approaches, an object boundary is represented by a one-dimensional multivariate curve in functional space, a feature function, rather than by a point in vector space. In the learning phase, feature functions are defined by extraction of multiple shape and image features along continuous object boundaries in a given learning set. The feature functions are aligned, then subjected to functional principal components analysis and functional principal regression to summarize the feature space and to model its content, respectively. Also, a Mahalanobis distance model is constructed for evaluation of boundaries in terms of their feature functions, taking into account the natural variations seen in the learning set. In the segmentation phase, an object boundary in a new image is searched for with help of a curve. The curve gives rise to a feature function, a string, that is weighted by the regression model and evaluated by the Mahalanobis model. The curve is deformed in an iterative procedure to produce feature functions with minimal Mahalanobis distance. Strings have been compared with active shape models on 145 vertebra images, showing that strings produce better results when initialized close to the target boundary, and comparable results otherwise.
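The Mahalanobis evaluation step can be sketched as follows, with plain feature vectors standing in for the paper's functional (curve-valued) features: a candidate boundary is scored against the variation observed in the learning set, so implausible boundaries receive large distances.

```python
import numpy as np

rng = np.random.default_rng(4)
training = rng.normal(size=(200, 5))      # features sampled over a learning set
mean = training.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(training, rowvar=False))

def mahalanobis(x):
    # Distance of a candidate feature vector from the learned variation.
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

near = mahalanobis(mean)        # a boundary matching the learned mean
far = mahalanobis(mean + 10.0)  # an implausible boundary scores much higher
```

In the segmentation loop described above, the curve is deformed iteratively to minimize exactly this kind of distance.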


NeuroImage: Clinical | 2016

Machine learning and dyslexia: Classification of individual structural neuro-imaging scans of students with and without dyslexia.

Peter Tamboer; Harrie C. M. Vorst; Sennay Ghebreab; H.S. Scholte

Meta-analytic studies suggest that dyslexia is characterized by subtle and spatially distributed variations in brain anatomy, although many variations failed to be significant after corrections of multiple comparisons. To circumvent issues of significance which are characteristic of conventional analysis techniques, and to provide predictive value, we applied a machine learning technique – support vector machine – to differentiate between subjects with and without dyslexia. In a sample of 22 students with dyslexia (20 women) and 27 students without dyslexia (25 women) (18–21 years), a classification performance of 80% (p < 0.001; d-prime = 1.67) was achieved on the basis of differences in gray matter (sensitivity 82%, specificity 78%). The voxels that were most reliable for classification were found in the left occipital fusiform gyrus (LOFG), in the right occipital fusiform gyrus (ROFG), and in the left inferior parietal lobule (LIPL). Additionally, we found that classification certainty (i.e. the percentage of times a subject was correctly classified) correlated with severity of dyslexia (r = 0.47). Furthermore, various significant correlations were found between the three anatomical regions and behavioural measures of spelling, phonology and whole-word-reading. No correlations were found with behavioural measures of short-term memory and visual/attentional confusion. These data indicate that the LOFG, ROFG and the LIPL are neuro-endophenotypes and potentially biomarkers for types of dyslexia related to reading, spelling and phonology. In a second and independent sample of 876 young adults of a general population, the trained classifier of the first sample was tested, resulting in a classification performance of 59% (p = 0.07; d-prime = 0.65). This decline in classification performance resulted from a large percentage of false alarms. This study provided support for the use of machine learning in anatomical brain imaging.
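The classification analysis can be sketched with a linear support vector machine evaluated by cross-validation. The data below are random stand-ins for the voxel-wise gray-matter features, with an artificial group difference injected; only the sample sizes follow the study:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
y = np.array([1] * 22 + [0] * 27)         # 22 dyslexic, 27 control (as in sample 1)
X = rng.normal(size=(49, 100))            # random stand-in for gray-matter features
X[y == 1, :10] += 2.0                     # inject an artificial group difference
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
mean_acc = scores.mean()                  # cross-validated classification accuracy
```

The study's key caveat applies to any such pipeline: accuracy on the training population does not guarantee generalization, as the drop from 80% to 59% in the independent sample shows.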

Collaboration


Dive into Sennay Ghebreab's collaborations.

Top Co-Authors

H.S. Scholte

University of Amsterdam

I. Groen

University of Amsterdam

Theo Gevers

University of Amsterdam
