Amrita Puri
Illinois State University
Publications
Featured research published by Amrita Puri.
Experimental Brain Research | 2012
Santani Teng; Amrita Puri; David Whitney
Echolocating organisms represent their external environment using reflected auditory information from emitted vocalizations. This ability, long known in various non-human species, has also been documented in some blind humans as an aid to navigation, as well as object detection and coarse localization. Surprisingly, our understanding of the basic acuity attainable by practitioners—the most fundamental underpinning of echoic spatial perception—remains crude. We found that experts were able to discriminate horizontal offsets of stimuli as small as ~1.2° auditory angle in the frontomedial plane, a resolution approaching the maximum measured precision of human spatial hearing and comparable to that found in bats performing similar tasks. Furthermore, we found a strong correlation between echolocation acuity and age of blindness onset. This first measure of functional spatial resolution in a population of expert echolocators demonstrates precision comparable to that found in the visual periphery of sighted individuals.
Neuropsychologia | 2012
Allison Yamanashi Leib; Amrita Puri; Jason Fischer; Shlomo Bentin; David Whitney; Lynn C. Robertson
Prosopagnosics, individuals who are impaired at recognizing single faces, often report increased difficulty when confronted with crowds. However, the discrimination of crowds has never been fully tested in the prosopagnosic population. Here we investigate whether developmental prosopagnosics (DPs) can extract ensemble characteristics from groups of faces. DP and control participants viewed sets of faces varying in either identity or emotion, and were asked to estimate the average identity or emotion of each set. Face sets were displayed in two orientations (upright and inverted) to control for low-level visual features during ensemble encoding. Control participants made more accurate estimates of the mean identity and emotion when faces were upright than inverted. In all conditions, DPs performed equivalently to controls. This finding demonstrates that integration across different faces in a crowd is possible in the prosopagnosic population and appears to be intact despite their face recognition deficits. Results also demonstrate that ensemble representations are derived differently for upright and inverted faces, and that the effects are not due to low-level visual information.
Journal of Experimental Psychology: Human Perception and Performance | 2017
Kenith V. Sobel; Amrita Puri; Thomas J. Faulkenberry; Taylor D. Dague
The size congruity effect refers to the interaction between numerical magnitude and physical digit size in a symbolic comparison task. Though this effect is well established in the typical 2-item scenario, the mechanisms at the root of the interference remain unclear. Two competing explanations have emerged in the literature: an early interaction model and a late interaction model. In the present study, we used visual conjunction search to test competing predictions from these 2 models. Participants searched for targets that were defined by a conjunction of physical and numerical size. Some distractors shared the target’s physical size, and the remaining distractors shared the target’s numerical size. We held the total number of search items fixed and manipulated the ratio of the 2 distractor set sizes. The results from 3 experiments converge on the conclusion that numerical magnitude is not a guiding feature for visual search, and that physical and numerical magnitude are processed independently, which supports a late interaction model of the size congruity effect.
Attention Perception & Psychophysics | 2016
Kenith V. Sobel; Amrita Puri; Thomas J. Faulkenberry
The size congruity effect refers to the interaction between the numerical and physical (i.e., font) sizes of digits in a numerical (or physical) magnitude selection task. Although various accounts of the size congruity effect have attributed this interaction to either an early representational stage or a late decision stage, only Risko, Maloney, and Fugelsang (Attention, Perception, & Psychophysics, 75, 1137–1147, 2013) have asserted a central role for attention. In the present study, we used a visual search paradigm to further study the role of attention in the size congruity effect. In Experiments 1 and 2, we showed that manipulating top-down attention (via the task instructions) had a significant impact on the size congruity effect. The interaction between numerical and physical size was larger for numerical size comparison (Exp. 1) than for physical size comparison (Exp. 2). In the remaining experiments, we boosted the feature salience by using a unique target color (Exp. 3) or by increasing the display density by using three-digit numerals (Exps. 4 and 5). As expected, a color singleton target abolished the size congruity effect. Searching for three-digit targets based on numerical size (Exp. 4) resulted in a large size congruity effect, but search based on physical size (Exp. 5) abolished the effect. Our results reveal a substantial role for top-down attention in the size congruity effect, which we interpreted as support for a shared-decision account.
I-perception | 2011
Santani Teng; Amrita Puri; David Whitney
In active echolocation, reflections from self-generated acoustic pulses are used to represent the external environment. This ability has been described in some blind humans as an aid to navigation and obstacle perception [1-4]. Echoic object representation has been described in echolocating bats and dolphins [5,6], but most prior work in humans has focused on navigation or other basic spatial tasks [4,7,8]. Thus, the nature of echoic object information received by human practitioners remains poorly understood. In two match-to-sample experiments, we tested the ability of five experienced blind echolocators to haptically identify objects that they had previously sampled only echoically. In each trial, a target object was presented on a platform and subjects sampled it using echolocation clicks. The target object was then removed and re-presented along with a distractor object. Only tactile sampling was allowed in identifying the target. Subjects were able to identify targets at greater than chance levels among both common household objects (p < .001) and novel objects constructed from plastic blocks (p = .018). While overall accuracy was indicative of high task difficulty, our results suggest that objects sampled by echolocation are recognizable by shape, and that this representation is available across sensory modalities.
Frontiers in Psychology | 2015
Yang Bai; Allison Yamanashi Leib; Amrita Puri; David Whitney; Kaiping Peng
Journal of Vision | 2016
Karl Zipser; Kendrick Kay; Amrita Puri
Journal of Vision | 2016
Alex Dayer; Kassandra R. Lee; Stephen Chow; Eli Flynn; Amrita Puri
Journal of Vision | 2016
Sarah E. Caputo; Amrita Puri
Journal of Vision | 2016
Amrita Puri; Kenith V. Sobel; Nikolas Sieg; Zachery Stillman