Publication


Featured research published by Hossein Adeli.


Philosophical Transactions of the Royal Society B | 2013

Modelling eye movements in a categorical search task

Gregory J. Zelinsky; Hossein Adeli; Yifan Peng; Dimitris Samaras

We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random-category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search.
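As an illustration of the probability-map mechanism, here is a minimal sketch, not the published model: it slides a window over an image, takes each patch's signed distance to a linear SVM boundary, and squashes that distance through a logistic function into pixel-by-pixel target evidence. The patch size, random toy features, and logistic squashing are all illustrative assumptions.

```python
# A minimal sketch of the SVM-to-probability-map idea, assuming a linear
# SVM over toy patch features; the published model's features, training
# data, and patch handling are not reproduced here.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
PATCH = 8  # hypothetical patch size, purely for illustration

# Toy training set: flattened patches labeled target (1) vs. non-target (0).
X_train = rng.normal(size=(200, PATCH * PATCH))
y_train = rng.integers(0, 2, size=200)
svm = LinearSVC(C=1.0).fit(X_train, y_train)

def target_probability_map(image: np.ndarray) -> np.ndarray:
    """Slide a PATCH x PATCH window over the image; at each location take
    the signed distance to the SVM boundary and squash it through a
    logistic function, so strong target-side evidence approaches 1."""
    out_h = image.shape[0] - PATCH + 1
    out_w = image.shape[1] - PATCH + 1
    prob = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            patch = image[y:y + PATCH, x:x + PATCH].reshape(1, -1)
            dist = svm.decision_function(patch)[0]  # signed boundary distance
            prob[y, x] = 1.0 / (1.0 + np.exp(-dist))
    return prob

prob_map = target_probability_map(rng.normal(size=(32, 32)))
print(prob_map.shape)  # (25, 25): one evidence value per patch location
```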


The Journal of Neuroscience | 2017

A Model of the Superior Colliculus Predicts Fixation Locations during Scene Viewing and Visual Search

Hossein Adeli; Françoise Vitu; Gregory J. Zelinsky

Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well as or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts.

SIGNIFICANCE STATEMENT: The superior colliculus (SC) performs a visual-to-motor transformation vital to overt attention, but existing SC models cannot predict saccades to visually complex real-world stimuli. We introduce a brain-inspired SC model that outperforms state-of-the-art image-based competitors in predicting the sequences of fixations made by humans performing a range of everyday tasks (scene viewing and exemplar and categorical search), making clear the value of looking to the brain for model design. This work is significant in that it will drive new research by making computationally explicit predictions of SC neural population activity in response to naturalistic stimuli and tasks. It will also serve as a blueprint for the construction of other brain-inspired models, helping to usher in the next generation of truly intelligent autonomous systems.
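To make the population-averaging principle concrete, here is a minimal sketch assuming a simple Gaussian point image on an undistorted map; MASC's foveal magnification, visual-to-collicular projection, and cascaded averaging are omitted. The predicted saccade endpoint is the activity-weighted centroid of the population rather than the single peak location.

```python
# A minimal sketch, under simplifying assumptions, of population averaging
# on a collicular motor map: a Gaussian population of activity forms around
# the winning point image, and the saccade endpoint is read out as the
# activity-weighted average rather than the raw peak.
import numpy as np

def saccade_endpoint(priority: np.ndarray, sigma: float = 3.0):
    """Predict a (row, col) saccade endpoint from a priority map."""
    peak = np.unravel_index(np.argmax(priority), priority.shape)
    rows, cols = np.indices(priority.shape)
    # Gaussian population of activity centered on the winning point image.
    population = priority * np.exp(
        -((rows - peak[0]) ** 2 + (cols - peak[1]) ** 2) / (2 * sigma ** 2))
    total = population.sum()
    # Population averaging: the endpoint is the activity-weighted centroid.
    return ((rows * population).sum() / total,
            (cols * population).sum() / total)

rng = np.random.default_rng(1)
print(saccade_endpoint(rng.random((64, 64))))
```

The design choice mirrored here is that the endpoint is computed by averaging over the whole active population, not by taking the maximum of the map.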


Journal of Vision | 2013

Specifying the relationships between objects, gaze, and descriptions for scene understanding

Kiwon Yun; Yifan Peng; Hossein Adeli; Tamara L. Berg; Dimitris Samaras; Gregory J. Zelinsky

Goal: Conduct combined behavioral and computer vision experiments to better understand the relationships between:
- the objects that are detected in an image,
- the eye movements that people make while viewing that image,
- and the words that they produce when asked to describe it.

Contribution:
- Comprehension of how humans view and interpret visual imagery.
- Demonstrate prototype applications for gaze-enabled detection and annotation by integrating gaze cues with the outputs of current visual recognition systems (see the sketch below).
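The sketch below illustrates one way such an integration could work; it is a hypothetical toy, not the paper's prototype. Fixations are smoothed into a density map, and each detector box score is mixed with the gaze mass inside the box, so objects that attracted fixations are ranked higher. The mixing weight alpha and the Gaussian smoothing are assumptions.

```python
# A hypothetical sketch of gaze-enabled detection (not the paper's system):
# smooth fixations into a density map, then re-weight each detector box
# score by the fixation mass falling inside it.
import numpy as np

def fixation_density(fixations, shape, sigma=10.0):
    """Smooth (row, col) fixation points into a normalized density map."""
    rows, cols = np.indices(shape)
    density = np.zeros(shape)
    for fy, fx in fixations:
        density += np.exp(-((rows - fy) ** 2 + (cols - fx) ** 2)
                          / (2 * sigma ** 2))
    return density / density.sum()

def rescore_detections(detections, fixations, shape, alpha=0.5):
    """detections: list of (score, (top, left, bottom, right)) boxes.
    Mix each detector confidence with the gaze mass inside its box."""
    density = fixation_density(fixations, shape)
    rescored = []
    for score, (t, l, b, r) in detections:
        gaze_mass = density[t:b, l:r].sum()
        rescored.append((alpha * score + (1 - alpha) * gaze_mass, (t, l, b, r)))
    return sorted(rescored, reverse=True)  # best combined score first

boxes = [(0.7, (5, 5, 20, 20)), (0.6, (30, 30, 50, 50))]
print(rescore_detections(boxes, fixations=[(40, 40), (42, 38)], shape=(64, 64)))
```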


Journal of Vision | 2015

A model of saccade programming during scene viewing based on population averaging in the superior colliculus

Hossein Adeli; Françoise Vitu; Gregory J. Zelinsky


Journal of Vision | 2018

Emergence of visuospatial attention in a brain-inspired deep neural network

Gregory J. Zelinsky; Hossein Adeli


Journal of Vision | 2017

The magnification factor accounts for the greater hypometria and imprecision of larger saccades: Evidence from a parametric human-behavioral study.

Françoise Vitu; Soazig Casteau; Hossein Adeli; Gregory J. Zelinsky; Eric Castet


Journal of Vision | 2017

Predicting Scanpath Agreement during Scene Viewing using Deep Neural Networks

Zijun Wei; Hossein Adeli; Minh Hoai; Gregory J. Zelinsky; Dimitris Samaras


Neural Information Processing Systems | 2016

Learned Region Sparsity and Diversity Also Predicts Visual Attention

Zijun Wei; Hossein Adeli; Minh Hoai; Gregory J. Zelinsky; Dimitris Samaras


Perception | 2016

Modeling attention and saccade programming in real-world contexts

Gregory J. Zelinsky; Hossein Adeli; Françoise Vitu


Journal of Vision | 2016

The new best model of visual search can be found in the brain

Gregory J. Zelinsky; Hossein Adeli; Françoise Vitu

Collaboration


Dive into Hossein Adeli's collaborations.

Top Co-Authors

Minh Hoai, Stony Brook University
Yifan Peng, Stony Brook University
Zijun Wei, Stony Brook University
Soazig Casteau, Aix-Marseille University
Kiwon Yun, Stony Brook University
Tamara L. Berg, University of North Carolina at Chapel Hill
Eric Castet, Aix-Marseille University