Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shaiyan Keshvari is active.

Publication


Featured research published by Shaiyan Keshvari.


PLOS Computational Biology | 2013

No Evidence for an Item Limit in Change Detection

Shaiyan Keshvari; Ronald van den Berg; Wei Ji Ma

Change detection is a classic paradigm that has been used for decades to argue that working memory can hold no more than a fixed number of items (“item-limit models”). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size (“continuous-resource models”). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. We conducted two change detection experiments (orientation and color) in which change magnitudes were drawn from a wide range, including small changes. In a rigorous comparison of five models, we found no evidence of an item limit. Instead, human change detection performance was best explained by a continuous-resource model in which encoding precision is variable across items and trials even at a given set size. This model accounts for comparison errors in a principled, probabilistic manner. Our findings sharply challenge the theoretical basis for most neural studies of working memory capacity.
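To make the contrast between the model classes concrete, the following is a minimal sketch (in Python) of a variable-precision observer in a change-detection task: encoding precision is drawn independently for every item on every trial, here from a gamma distribution, and the observer responds from noisy measurements of the two displays. This is an illustration only: the decision rule shown is a simple max-of-differences criterion rather than the paper's Bayes-optimal rule, circular wrap-around of orientation is ignored, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_change_detection(n_trials=10000, set_size=4,
                              mean_precision=8.0, theta=2.0,
                              change_prob=0.5, criterion=1.0):
    """Simplified variable-precision observer in a change-detection task."""
    hits = false_alarms = n_change = n_no_change = 0
    for _ in range(n_trials):
        # Encoding precision J varies across items and trials (gamma-distributed).
        J = rng.gamma(shape=mean_precision / theta, scale=theta, size=set_size)
        sigma = 1.0 / np.sqrt(J)                    # measurement noise sd per item
        stimuli = rng.uniform(-np.pi / 2, np.pi / 2, size=set_size)
        probe = stimuli.copy()
        changed = rng.random() < change_prob
        if changed:
            # Change magnitudes span a wide range, including small changes.
            probe[rng.integers(set_size)] += rng.uniform(-np.pi / 2, np.pi / 2)
        x = stimuli + rng.normal(0.0, sigma)        # noisy encoding of display 1
        y = probe + rng.normal(0.0, sigma)          # noisy encoding of display 2
        # Simplified decision rule: respond "change" if the largest measured
        # difference exceeds a criterion (the paper instead uses a Bayes-optimal,
        # precision-weighted rule).
        respond_change = np.max(np.abs(x - y)) > criterion
        if changed:
            n_change += 1
            hits += respond_change
        else:
            n_no_change += 1
            false_alarms += respond_change
    return hits / n_change, false_alarms / n_no_change

hit_rate, fa_rate = simulate_change_detection()
print(f"hit rate = {hit_rate:.2f}, false-alarm rate = {fa_rate:.2f}")
```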


PLOS ONE | 2012

Probabilistic computation in human perception under variability in encoding precision

Shaiyan Keshvari; Ronald van den Berg; Wei Ji Ma

A key function of the brain is to interpret noisy sensory information. To do so optimally, observers must, in many tasks, take into account knowledge of the precision with which stimuli are encoded. In an orientation change detection task, we find that encoding precision not only depends on an experimentally controlled reliability parameter (stimulus shape), but also exhibits additional variability. In spite of this variability in precision, human subjects seem to take precision into account near-optimally on a trial-to-trial and item-to-item basis. Our results offer a new conceptualization of the encoding of sensory information and highlight the brain’s remarkable ability to incorporate knowledge of uncertainty during complex perceptual decision-making.
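As an illustration of what "taking precision into account" means computationally, here is a small sketch using a Gaussian simplification (not the paper's circular, von Mises setup): the log-likelihood ratio for "change" versus "no change" from a single measured difference depends on the item's encoding precision, so the same measured difference counts as much stronger evidence for a change when precision is high. The change-magnitude distribution and all numbers are illustrative assumptions.

```python
import numpy as np

def change_log_likelihood_ratio(d, J, sigma_change=0.5):
    """Log evidence for 'change' vs. 'no change' from one measured
    difference d between the two displays of an item.

    Gaussian simplification: with encoding precision J per display,
    the difference has variance 2/J under 'no change', and
    2/J + sigma_change**2 under 'change' (change ~ N(0, sigma_change**2)).
    """
    var0 = 2.0 / J                      # no-change variance of the difference
    var1 = 2.0 / J + sigma_change ** 2  # change variance of the difference
    return 0.5 * np.log(var0 / var1) + 0.5 * d ** 2 * (1.0 / var0 - 1.0 / var1)

# The same measured difference (here d = 1.0) is stronger evidence for a
# change when the item was encoded with higher precision J.
for J in (1.0, 4.0, 16.0):
    print(f"J = {J:5.1f}  LLR(d=1.0) = {change_log_likelihood_ratio(1.0, J):+.2f}")
```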


Journal of Vision | 2016

Pooling of continuous features provides a unifying account of crowding.

Shaiyan Keshvari; Ruth Rosenholtz

Visual crowding refers to phenomena in which the perception of a peripheral target is strongly affected by nearby flankers. Observers often report seeing the stimuli as “jumbled up,” or otherwise confuse the target with the flankers. Theories of visual crowding contend over which aspect of the stimulus gets confused in peripheral vision. Attempts to test these theories have led to seemingly conflicting results, with some experiments suggesting that the mechanism underlying crowding operates on unbound features like color or orientation (Parkes, Lund, Angelucci, Solomon, & Morgan, 2001), while others suggest it “jumbles up” more complex features, or even objects like letters (Korte, 1923). Many of these theories operate on discrete features of the display items, such as the orientation of each line or the identity of each item. By contrast, here we examine the predictions of the Texture Tiling Model, which operates on continuous feature measurements (Balas, Nakano, & Rosenholtz, 2009). We show that the main effects of three studies from the crowding literature are consistent with the predictions of the Texture Tiling Model. This suggests that many of the stimulus-specific curiosities surrounding crowding are the inherent result of the informativeness of a rich set of image statistics for the particular tasks.
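As a rough illustration of what "operating on continuous feature measurements" means, the sketch below computes a toy set of pooled statistics (means, variances, and pairwise correlations of oriented gradient responses) over one local region of an image. This is not the model itself: the actual Texture Tiling Model pools the much richer Portilla-Simoncelli statistic set over regions that grow with eccentricity, and the filter choices and region size here are arbitrary.

```python
import numpy as np

def pooled_statistics(image, y, x, radius=32):
    """Toy 'pooling' of continuous feature measurements: summary statistics
    of oriented gradient responses within one square pooling region."""
    patch = image[max(0, y - radius):y + radius, max(0, x - radius):x + radius]
    gy, gx = np.gradient(patch.astype(float))
    # Continuous feature measurements: gradient responses at four orientations.
    feats = np.stack([
        gx,                       # 0 deg
        (gx + gy) / np.sqrt(2),   # 45 deg
        gy,                       # 90 deg
        (gy - gx) / np.sqrt(2),   # 135 deg
    ], axis=0).reshape(4, -1)
    return {
        "means": feats.mean(axis=1),
        "variances": feats.var(axis=1),
        "correlations": np.corrcoef(feats),  # pairwise feature correlations
    }

# Example: statistics pooled around one 'peripheral' location of a random image.
rng = np.random.default_rng(0)
image = rng.random((256, 256))
stats = pooled_statistics(image, y=128, x=200)
print(stats["means"].shape, stats["variances"].shape, stats["correlations"].shape)
```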


ACM Transactions on Computer-Human Interaction | 2017

Colors -- Messengers of Concepts: Visual Design Mining for Learning Color Semantics

Ali Jahanian; Shaiyan Keshvari; S. V. N. Vishwanathan; Jan P. Allebach

We study the concept of color semantics by modeling a dataset of magazine cover designs, evaluating the model via crowdsourcing, and demonstrating several prototypes that facilitate color-related design tasks. We investigate a probabilistic generative modeling framework that expresses semantic concepts as a combination of color and word distributions -- color-word topics. We adopt an extension to Latent Dirichlet Allocation (LDA) topic modeling, called LDA-dual, to infer a set of color-word topics over a corpus of 2,654 magazine covers spanning 71 distinct titles and 12 genres. Whereas LDA models text documents as distributions over word topics, we model magazine covers as distributions over color-word topics. The results of our crowdsourcing experiments confirm that the model is able to successfully discover the associations between colors and linguistic concepts. Finally, we demonstrate several prototype applications that use the learned model to enable more meaningful interactions in color palette recommendation, design example retrieval, pattern recoloring, image retrieval, and image color selection.
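A toy stand-in for this modeling pipeline is sketched below using standard LDA from scikit-learn over a single joint vocabulary of quantized-color tokens and title words; the paper's LDA-dual extension instead models the two vocabularies explicitly. The covers and tokens below are invented purely for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each "document" is a magazine cover described by palette-color tokens
# plus title words (toy data, not the paper's corpus).
covers = [
    "color_red color_black color_red fashion style glamour",
    "color_pink color_red beauty fashion trends",
    "color_blue color_green color_blue nature travel outdoors",
    "color_green color_blue hiking travel adventure",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(covers)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Each inferred topic is a distribution over colors *and* words.
vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:4]
    print(f"topic {k}:", ", ".join(vocab[i] for i in top))
```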


Journal of Vision | 2017

Evidence for configural superiority effects in convolutional neural networks

Shaiyan Keshvari; Ruth Rosenholtz

• Configural superiority effect (CSE): combinations of parts are perceived more quickly and accurately than the parts alone [1,2]
• CSEs are thought to be driven by “emergent” feature (EF) differences between target and distractors [1]
• EFs may result from the visual system learning abstract representations to support complex tasks, like object recognition, at the expense of simpler but less ecologically relevant tasks
• Convolutional neural networks (CNNs) excel at object recognition, as well as at tasks for which they are not trained; feature vectors at different layers correlate with responses of various brain areas [3] (see the feature-readout sketch below)
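The sketch below illustrates the kind of readout suggested by the last point, not the authors' actual pipeline: extract feature vectors from several layers of a pretrained CNN for "parts alone" versus "parts plus context" displays and compare target-distractor feature distances at each layer. The choice of VGG-16, the layer indices, and the random placeholder images are assumptions for illustration.

```python
import torch
import torchvision.models as models

# Pretrained CNN and a few read-out layers (assumed choices).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
layers_of_interest = {4: "conv block 1", 16: "conv block 3", 30: "conv block 5"}

def layer_features(image):
    """Return flattened feature vectors at the selected layers for one 3x224x224 image."""
    feats, x = {}, image.unsqueeze(0)
    with torch.no_grad():
        for i, layer in enumerate(vgg.features):
            x = layer(x)
            if i in layers_of_interest:
                feats[i] = x.flatten()
    return feats

def feature_distance(img_a, img_b):
    """Euclidean distance between feature vectors of two displays, per layer."""
    fa, fb = layer_features(img_a), layer_features(img_b)
    return {layers_of_interest[i]: torch.norm(fa[i] - fb[i]).item() for i in fa}

# Placeholder stimuli; in an actual experiment these would be rendered
# part-only and part-plus-context displays containing target and distractors.
part_target, part_distractor = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
whole_target, whole_distractor = torch.rand(3, 224, 224), torch.rand(3, 224, 224)

print("parts alone:    ", feature_distance(part_target, part_distractor))
print("parts + context:", feature_distance(whole_target, whole_distractor))
```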


Journal of Vision | 2016

Peripheral material perception

Shaiyan Keshvari; Maarten W. A. Wijntjes

Introduction: Given a single image, humans can rapidly identify a material and its properties. This ability relies on various cues, including but not limited to color, texture, shape, and gloss. Here, we study the contribution of texture, and how it might relate to peripheral perception of materials.

Experimental details:
• 6 material categories from FMD (stone, water, wood, fabric, foliage, leather)
• 50 examples from each category (leaving out “object-like” images), 300 trials
• Grayscale stimuli (luminance channel from LAB space; see the sketch below)
• Feedback on only the first 25 trials
• Texture condition used one synthetic texture (made using the Portilla-Simoncelli algorithm) per original image
• Peripheral condition used a gaze-contingent display (enforced 2 deg radius to center, Eyelink 2000) with stimuli at 10 deg eccentricity
• Peripheral and texture conditions were run as separate blocks in one session
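A small sketch of the grayscale-conversion step listed above, assuming scikit-image as the image library: stimuli are taken as the L (luminance) channel of CIELAB space. The file path in the example is a placeholder, not an actual FMD filename.

```python
import numpy as np
from skimage import io, color

def lab_luminance(path):
    """Load an RGB image and return its CIELAB L channel scaled to [0, 1]."""
    rgb = io.imread(path)
    if rgb.ndim == 3 and rgb.shape[-1] == 4:   # drop alpha channel if present
        rgb = rgb[..., :3]
    lab = color.rgb2lab(rgb)
    return lab[..., 0] / 100.0                 # L is in [0, 100]

# luminance = lab_luminance("fmd/fabric/fabric_001.jpg")  # placeholder path
```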


F1000Research | 2013

A high-dimensional pooling model accounts for seemingly conflicting substitution effects in crowding

Shaiyan Keshvari; Ruth Rosenholtz


Journal of Vision | 2018

Modeling perceptual grouping in peripheral vision for information visualization

Shaiyan Keshvari; Dian Yu; Ruth Rosenholtz


Cognitive Research: Principles and Implications | 2018

Web pages: What can you see in a single fixation?

Ali Jahanian; Shaiyan Keshvari; Ruth Rosenholtz


arXiv: Neural and Evolutionary Computing | 2017

A Fast Foveated Fully Convolutional Network Model for Human Peripheral Vision

Lex Fridman; Benedikt Jenik; Shaiyan Keshvari; Bryan Reimer; Christoph Zetzsche; Ruth Rosenholtz

Collaboration


Dive into Shaiyan Keshvari's collaborations.

Top Co-Authors

Ruth Rosenholtz
Massachusetts Institute of Technology

Ali Jahanian
Massachusetts Institute of Technology

Bryan Reimer
Massachusetts Institute of Technology

Lex Fridman
Massachusetts Institute of Technology

Dian Yu
Massachusetts Institute of Technology

Maarten W. A. Wijntjes
Delft University of Technology