Amir Rasouli
York University
Publications
Featured research published by Amir Rasouli.
IEEE Intelligent Vehicles Symposium | 2017
Amir Rasouli; Iuliia Kotseruba; John K. Tsotsos
The contribution of this paper is twofold. The first is a novel dataset for studying the behaviors of traffic participants while crossing. Our dataset contains more than 650 samples of pedestrian behaviors in various street configurations and weather conditions, selected from approximately 240 hours of driving on city, suburban, and rural roads. The second contribution is an analysis of our data from the point of view of joint attention. We identify what types of non-verbal communication cues road users use at the point of crossing, their responses, and under what circumstances the crossing event takes place. We found that in more than 90% of cases at non-signalized crosswalks, pedestrians gaze at the approaching cars prior to crossing. The crossing action, however, depends on additional factors such as time to collision, an explicit driver's reaction, or the structure of the crosswalk.
Canadian Conference on Computer and Robot Vision | 2014
Amir Rasouli; John K. Tsotsos
Visual search for a specific object in an unknown environment by an autonomous robot is a complex task. The key challenge is to locate the object of interest while minimizing the cost of search in terms of time or energy consumption. Given the impracticality of examining all possible views of the search environment, recent studies suggest the use of attentive processes to optimize visual search. In this paper, we describe a method of visual search that exploits attention in the form of a saliency map. This map is used to update the probability distribution over which areas to examine next, increasing the utility of spatial volumes where objects consistent with the target's visual saliency are observed. We present experimental results on a mobile robot and conclude that our method improves visual search by reducing the time and the number of actions required to complete the process.
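The core idea of the abstract above — re-weighting a probability distribution over search regions using saliency, then examining the most probable region next — can be sketched as follows. This is an illustrative toy model under assumed names (`update_belief`, `next_view`, a 4x4 grid), not the authors' implementation:

```python
import numpy as np

def update_belief(belief, saliency, match_weight=2.0):
    """Re-weight the search belief: cells whose observed saliency is
    consistent with the target's appearance gain probability mass."""
    weighted = belief * (1.0 + match_weight * saliency)
    return weighted / weighted.sum()  # renormalize to a valid distribution

def next_view(belief):
    """Choose the grid cell with the highest posterior probability."""
    return np.unravel_index(np.argmax(belief), belief.shape)

# Uniform prior over a 4x4 discretization of the search environment.
belief = np.full((4, 4), 1.0 / 16)

# Hypothetical saliency observation: one cell resembles the target.
saliency = np.zeros((4, 4))
saliency[2, 1] = 1.0

belief = update_belief(belief, saliency)
print(next_view(belief))  # the cell consistent with the target is examined first
```

In this sketch the robot would examine cell (2, 1) first, since the saliency observation concentrated probability mass there.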
Cognitive Processing | 2018
John K. Tsotsos; Iuliia Kotseruba; Amir Rasouli; Markus D. Solbach
It is almost universal to regard attention as the facility that permits an agent, human or machine, to give priority processing resources to relevant stimuli while ignoring the irrelevant. The reality of how this might manifest itself throughout all the forms of perceptual and cognitive processes possessed by humans, however, is not as clear. Here, we examine this reality with a broad perspective in order to highlight the myriad ways that attentional processes impact both perception and cognition. The paper concludes by showing two real-world problems that exhibit sufficient complexity to illustrate the ways in which attention and cognition connect. These then point to new avenues of research that might illuminate the overall cognitive architecture of spatial cognition.
Canadian Conference on Computer and Robot Vision | 2016
Amir Rasouli; John K. Tsotsos
Visual search is a fundamental problem in autonomous robotics. Traditionally, visual search is formulated as an optimization problem in which the sequence of actions is chosen based on immediate efficiency. In this paper we examine the effects of a task constraint, in the form of a maximum allowable cost, on action selection in search. We propose three algorithms, namely Greedy Search with Constraint (GSC), Extended Greedy Search (EGS), and Dynamic Look Ahead Search (DLAS), to investigate which algorithm, whether optimizing locally or globally, performs most efficiently under various conditions with a predefined task constraint. We examine our methods in environments of various sizes and configurations with three cost constraints: time, energy consumption, and the distance travelled by the robot. Through extensive experiments on a mobile robot, we show that the environment characteristics as well as the type of constraint applied can alter the performance of the methods significantly. We also show that the GSC algorithm, which relies on visual cues in an environment to optimize search, achieves the best and most efficient performance in comparison to EGS and DLAS.
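A budgeted greedy policy of the kind the abstract describes — selecting search actions by expected gain per unit cost while respecting a fixed cost limit — can be sketched as below. All names and numbers here are hypothetical; this illustrates the general greedy-under-constraint pattern, not the paper's exact GSC algorithm:

```python
def greedy_search_with_budget(actions, budget):
    """Select actions in order of detection-gain per unit cost,
    skipping any action that would exceed the remaining budget.

    Each action is a (name, expected_gain, cost) tuple.
    Returns the chosen plan and the total cost spent.
    """
    remaining = budget
    plan = []
    # Highest utility-per-cost first (the greedy criterion).
    for name, gain, cost in sorted(actions, key=lambda a: a[1] / a[2], reverse=True):
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
    return plan, budget - remaining

# Hypothetical candidate views with (expected detection gain, cost).
views = [("far-scan", 0.2, 5.0), ("near-scan", 0.5, 2.0), ("turn", 0.1, 1.0)]
plan, spent = greedy_search_with_budget(views, budget=3.0)
print(plan, spent)  # ['near-scan', 'turn'] 3.0
```

Note that the constraint type (time, energy, distance) only changes how `cost` is measured; the selection rule itself is unchanged, which mirrors the paper's comparison of one policy across several cost constraints.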
arXiv: Robotics | 2016
Iuliia Kotseruba; Amir Rasouli; John K. Tsotsos
International Conference on Computer Vision | 2017
Amir Rasouli; Iuliia Kotseruba; John K. Tsotsos
arXiv: Robotics | 2018
Amir Rasouli; John K. Tsotsos
IEEE Transactions on Intelligent Vehicles | 2018
Amir Rasouli; Iuliia Kotseruba; John K. Tsotsos
arXiv: Robotics | 2018
Amir Rasouli; Pablo Lanillos; Gordon Cheng; John K. Tsotsos
arXiv: Robotics | 2018
Amir Rasouli; John K. Tsotsos