Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Calden Wloka is active.

Publication


Featured research published by Calden Wloka.


Vision Research | 2015

On computational modeling of visual saliency: Examining what's right, and what's left.

Neil D. B. Bruce; Calden Wloka; Nick Frosst; Shafin Rahman; John K. Tsotsos

In the past decade, a large number of computational models of visual saliency have been proposed. Recently a number of comprehensive benchmark studies have been presented, with the goal of assessing the performance landscape of saliency models under varying conditions. This has been accomplished by considering fixation data, annotated image regions, and stimulus patterns inspired by psychophysics. In this paper, we present a high-level examination of challenges in computational modeling of visual saliency, with a heavy emphasis on human vision and neural computation. This includes careful assessment of different metrics for performance of visual saliency models, and identification of remaining difficulties in assessing model performance. We also consider the importance of a number of issues relevant to all saliency models including scale-space, the impact of border effects, and spatial or central bias. Additionally, we consider the biological plausibility of models in stepping away from exemplar input patterns towards a set of more general theoretical principles consistent with behavioral experiments. As a whole, this presentation establishes important obstacles that remain in visual saliency modeling, in addition to identifying a number of important avenues for further investigation.
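The abstract's discussion of performance metrics centers on ROC-style scoring of saliency maps against human fixation data. As a minimal sketch of that kind of measurement (not the paper's own evaluation code), a threshold-sweep ROC-AUC can be computed like this:

```python
import numpy as np

def saliency_auc(sal_map, fixations, n_thresh=100):
    """Threshold-sweep ROC-AUC for a saliency map.

    Fixated pixels are treated as positives and all remaining pixels
    as negatives; the saliency map is swept over n_thresh thresholds.
    """
    pos = sal_map[fixations]       # saliency values at fixated pixels
    neg = sal_map[~fixations]      # saliency values everywhere else
    ts = np.linspace(sal_map.min(), sal_map.max(), n_thresh)
    # Both rates fall monotonically as the threshold rises, so sorting
    # each ascending keeps corresponding (fpr, tpr) pairs aligned.
    tpr = sorted((pos >= t).mean() for t in ts)
    fpr = sorted((neg >= t).mean() for t in ts)
    return np.trapz(tpr, fpr)     # area under the ROC curve
```

A map that ranks every fixated pixel above every non-fixated pixel scores 1.0, while chance performance sits near 0.5; the border effects and central bias the abstract raises are exactly the factors that distort this kind of score.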


Computer Vision and Pattern Recognition | 2016

Spatially Binned ROC: A Comprehensive Saliency Metric

Calden Wloka; John K. Tsotsos

A recent trend in saliency algorithm development is large-scale benchmarking and algorithm ranking with ground truth provided by datasets of human fixations. In order to accommodate the strong bias humans have toward central fixations, it is common to replace traditional ROC metrics with a shuffled ROC metric which uses randomly sampled fixations from other images in the database as the negative set. However, the shuffled ROC introduces a number of problematic elements, including a fundamental assumption that it is possible to separate visual salience and image spatial arrangement. We argue that it is more informative to directly measure the effect of spatial bias on algorithm performance rather than try to correct for it. To capture and quantify these known sources of bias, we propose a novel metric for measuring saliency algorithm performance: the spatially binned ROC (spROC). This metric provides direct insight into the spatial biases of a saliency algorithm without sacrificing the intuitive raw performance evaluation of traditional ROC measurements. By quantitatively measuring the bias in saliency algorithms, researchers will be better equipped to select and optimize the most appropriate algorithm for a given task. We use a baseline measure of inherent algorithm bias to show that Adaptive Whitening Saliency (AWS) [14], Attention by Information Maximization (AIM) [8], and Dynamic Visual Attention (DVA) [20] provide the least spatially biased results, suiting them for tasks in which there is no information about the underlying spatial bias of the stimuli, whereas algorithms such as Graph Based Visual Saliency (GBVS) [18] and Context-Aware Saliency (CAS) [15] have a significant inherent central bias.
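The core spROC idea of scoring performance separately over spatial regions can be illustrated schematically. The sketch below assumes a regular grid of bins and a simple threshold-sweep AUC per cell; the paper's actual binning scheme and fixation handling are not reproduced here:

```python
import numpy as np

def _auc(sal, fix, n_thresh=50):
    # Threshold-sweep ROC-AUC: fixated pixels positive, the rest negative.
    pos, neg = sal[fix], sal[~fix]
    ts = np.linspace(sal.min(), sal.max(), n_thresh)
    tpr = sorted((pos >= t).mean() for t in ts)
    fpr = sorted((neg >= t).mean() for t in ts)
    return np.trapz(tpr, fpr)

def spatially_binned_auc(sal_map, fixations, grid=(3, 3)):
    """Score each cell of a regular grid separately, so a center-biased
    algorithm shows high AUC in central cells and low AUC near the edges."""
    rows = np.array_split(np.arange(sal_map.shape[0]), grid[0])
    cols = np.array_split(np.arange(sal_map.shape[1]), grid[1])
    scores = np.full(grid, np.nan)   # NaN where a cell lacks both classes
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            f = fixations[np.ix_(r, c)]
            if f.any() and not f.all():
                scores[i, j] = _auc(sal_map[np.ix_(r, c)], f)
    return scores
```

Variation across the returned grid of scores, rather than a single pooled number, is what exposes an algorithm's spatial bias.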


BMC Neuroscience | 2014

Boundary effects across filter spatial scales

Calden Wloka; Neil D. B. Bruce; John K. Tsotsos

Most saliency algorithms rely on a filter processing stage in which an image is analyzed using a bank of convolution kernels. When applying a convolution to an image, however, a region of pixels with thickness equal to one-half the kernel width at the image border is left undefined due to insufficient input (this undefined region is hereafter referred to as the boundary region). While the percentage of the output image falling within the boundary region is often kept small, this limits the spatial scale of filters that can be applied to the image. There is clear psychophysical evidence from visual search tasks that spatial scale can be used as a component of visual search, with differences in feature size, spatial frequency, and sub-component grouping [1]. Thus, handling filters with dimensions that are significant with respect to the image size is worthwhile if the spatial scale component of visual search is to be effectively incorporated, but this requires dealing with the resulting boundary region. A large number of computational strategies have been developed over the years for dealing with the boundary region issue, including: image tiling/wrapping, image mirroring, image padding, filter truncation, and output truncation. Formal evaluations and comparisons of such strategies have not previously been performed. We provide such a comparison using visual search stimuli commonly utilized in human psychophysical experiments, as well as propose a novel method for incorporating information across multiple spatial scales with an output image defined up to the boundary region created by the smallest spatial scale.
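The boundary-handling strategies named in the abstract correspond closely to standard array-padding modes. A minimal numpy sketch (the paper's own experiment code is not shown in the abstract) of the boundary width and the padding options:

```python
import numpy as np

def boundary_width(kernel_size):
    # A kernel of odd width k leaves (k - 1) // 2 pixels undefined at
    # each image edge when only fully supported outputs are kept.
    return (kernel_size - 1) // 2

def pad_for_full_output(img, kernel_size, mode="reflect"):
    """Pad so a 'valid' convolution covers the whole frame.

    The modes roughly match the strategies listed in the abstract:
    "wrap" ~ image tiling/wrapping, "reflect" ~ image mirroring, and
    "constant" ~ image padding; output truncation corresponds to
    skipping padding and discarding the boundary region instead.
    """
    return np.pad(img, boundary_width(kernel_size), mode=mode)
```

Note how quickly the boundary region grows with kernel size: a 33-pixel kernel on a 64-pixel-wide image leaves only half of each row fully defined, which is why large-spatial-scale filters force an explicit boundary strategy.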


F1000Research | 2013

Overt fixations reflect a natural central bias

Calden Wloka; John K. Tsotsos


Journal of Eye Movement Research | 2016

A Focus on Selection for Fixation

John K. Tsotsos; Iuliia Kotseruba; Calden Wloka


Computer Vision and Pattern Recognition | 2018

Active Fixation Control to Predict Saccade Sequences

Calden Wloka; Iuliia Kotseruba; John K. Tsotsos


arXiv: Computer Vision and Pattern Recognition | 2017

Saccade Sequence Prediction: Beyond Static Saliency Maps.

Calden Wloka; Iuliia Kotseruba; John K. Tsotsos


Journal of Vision | 2017

The Interaction of Target-Distractor Similarity and Visual Search Efficiency for Basic Features

Calden Wloka; Sang-Ah Yoo; Rakesh Sengupta; John K. Tsotsos


Archive | 2016

Focusing on Selection for Fixation

John K. Tsotsos; Calden Wloka; Iuliia Kotseruba


Journal of Vision | 2016

Psychophysical evaluation of saliency algorithms

Calden Wloka; Sang-Ah Yoo; Rakesh Sengupta; Toni Kunic; John K. Tsotsos

Collaboration


Dive into Calden Wloka's collaborations.

Top Co-Authors
