Publications


Featured research published by Jay Hegdé.


The Journal of Neuroscience | 2010

A Link between Visual Disambiguation and Visual Memory

Jay Hegdé; Daniel Kersten

Sensory information in the retinal image is typically too ambiguous to support visual object recognition by itself. Theories of visual disambiguation posit that to disambiguate, and thus interpret, the incoming images, the visual system must integrate the sensory information with previous knowledge of the visual world. However, the underlying neural mechanisms remain unclear. Using functional magnetic resonance imaging (fMRI) of human subjects, we have found evidence for functional specialization for storing disambiguating information in memory versus interpreting incoming ambiguous images. Subjects viewed two-tone “Mooney” images, which are typically ambiguous when seen for the first time but are quickly disambiguated after viewing the corresponding unambiguous color images. Activity in one set of regions, including a region in the medial parietal cortex previously reported to play a key role in Mooney image disambiguation, closely reflected memory for previously seen color images but not the subsequent disambiguation of Mooney images. A second set of regions, including the superior temporal sulcus, showed the opposite pattern, in that their responses closely reflected the subjects' percepts of the disambiguated Mooney images on a stimulus-to-stimulus basis but not the memory of the corresponding color images. Functional connectivity between the two sets of regions was stronger during those trials in which the disambiguated percept was stronger. This functional interaction between brain regions that specialize in storing disambiguating information in memory versus interpreting incoming ambiguous images may represent a general mechanism by which previous knowledge disambiguates visual sensory information.
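
The functional connectivity described in this abstract is commonly computed as a correlation between the trial-wise responses of two regions, contrasted across trial types. A minimal Python sketch of that idea on simulated data follows; the array names, trial counts, and coupling strength are all hypothetical, and this is not the authors' analysis code.

    import numpy as np

    # Simulated per-trial response amplitudes for two hypothetical ROIs.
    rng = np.random.default_rng(0)
    n_trials = 120
    roi_memory = rng.standard_normal(n_trials)                      # e.g., medial parietal
    roi_percept = 0.5 * roi_memory + rng.standard_normal(n_trials)  # e.g., STS

    # Hypothetical labels: was the disambiguated percept strong on this trial?
    strong = rng.random(n_trials) > 0.5

    # Functional connectivity as the correlation of trial-wise responses,
    # computed separately for strong- and weak-percept trials.
    r_strong = np.corrcoef(roi_memory[strong], roi_percept[strong])[0, 1]
    r_weak = np.corrcoef(roi_memory[~strong], roi_percept[~strong])[0, 1]
    print(f"strong-percept trials: r = {r_strong:.2f}")
    print(f"weak-percept trials:   r = {r_weak:.2f}")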


Psychological Science | 2012

Learning to Break Camouflage by Learning the Background

Xin Chen; Jay Hegdé

How does the visual system recognize a camouflaged object? Obviously, the brain cannot afford to learn all possible camouflaged scenes or target objects. However, it may learn the general statistical properties of backgrounds of interest, which would enable it to break camouflage by comparing the statistics of a background with a target versus the statistics of the same background without a target. To determine whether the brain uses this strategy, we digitally created novel camouflaged scenes that had only the general statistical properties of the background in common. When subjects learned to break camouflage, their ability to detect a camouflaged target improved significantly not only for previously unseen instances of a camouflaged scene, but also for scenes that contained novel targets. Moreover, performance improved even for scenes that did not contain an actual target but had the statistical properties of backgrounds with a target. These results reveal that learning backgrounds is a powerful, versatile strategy by which the brain can learn to break camouflage.
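
The strategy proposed here, learning the statistics of target-free backgrounds and flagging scenes whose statistics deviate, is essentially statistical anomaly detection. A minimal sketch, assuming grayscale scenes as 2D NumPy arrays; the three summary statistics and the Mahalanobis threshold are illustrative assumptions, not the texture model used in the study.

    import numpy as np

    def texture_stats(img):
        """Crude summary statistics of a scene: mean luminance, variance,
        and gradient energy. A real model would use richer texture features."""
        gy, gx = np.gradient(img.astype(float))
        return np.array([img.mean(), img.var(), (gx ** 2 + gy ** 2).mean()])

    def learn_background(scenes):
        """Fit the mean and covariance of the statistics of target-free scenes."""
        feats = np.stack([texture_stats(s) for s in scenes])
        return feats.mean(axis=0), np.cov(feats, rowvar=False)

    def looks_like_target(scene, mu, cov, threshold=3.0):
        """Flag a scene whose statistics sit far (in Mahalanobis distance)
        from the learned background distribution."""
        d = texture_stats(scene) - mu
        return np.sqrt(d @ np.linalg.pinv(cov) @ d) > threshold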


PLOS ONE | 2010

Fragment-Based Learning of Visual Object Categories in Non-Human Primates

Sarah Kromrey; Matthew Maestri; Karin Hauffen; Evgeniy Bart; Jay Hegdé

When we perceive a visual object, we implicitly or explicitly associate it with an object category we know. Recent research has shown that the visual system can use local, informative image fragments of a given object, rather than the whole object, to classify it into a familiar category. We have previously reported, using human psychophysical studies, that when subjects learn new object categories using whole objects, they incidentally learn informative fragments, even when not required to do so. However, the neuronal mechanisms by which we acquire and use informative fragments, as well as category knowledge itself, have remained unclear. Here we describe the methods by which we adapted the relevant human psychophysical methods to awake, behaving monkeys and replicated key previous psychophysical results. This establishes awake, behaving monkeys as a useful system for future neurophysiological studies not only of informative fragments in particular, but also of object categorization and category learning in general.
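
In the fragment-based framework this study builds on, "informative" fragments are typically those whose detection carries high mutual information about category membership. A toy sketch of that selection criterion; the binary detection matrix and its dimensions are invented for illustration.

    import numpy as np

    def mutual_information(detected, labels):
        """Mutual information (bits) between a binary fragment-detection
        variable and binary category labels."""
        mi = 0.0
        for d in (0, 1):
            for c in (0, 1):
                p_joint = np.mean((detected == d) & (labels == c))
                if p_joint > 0:
                    mi += p_joint * np.log2(
                        p_joint / (np.mean(detected == d) * np.mean(labels == c)))
        return mi

    # detected[i, j] = 1 if fragment j was found in image i (hypothetical data).
    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, 200)
    detected = rng.integers(0, 2, (200, 50))
    scores = [mutual_information(detected[:, j], labels) for j in range(50)]
    informative = np.argsort(scores)[::-1][:10]   # ten most informative fragments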


Frontiers in Human Neuroscience | 2012

Object recognition in clutter: cortical responses depend on the type of learning

Jay Hegdé; Serena K. Thompson; Mark J. Brady; Daniel Kersten

Theoretical studies suggest that the visual system uses prior knowledge of visual objects to recognize them in visual clutter, and posit that the strategies for recognizing objects in clutter may differ depending on whether or not the object was learned in clutter to begin with. We tested this hypothesis using functional magnetic resonance imaging (fMRI) of human subjects. We trained subjects to recognize naturalistic yet novel objects in strong or weak clutter. We then tested subjects' recognition performance for both sets of objects in strong clutter. We found many brain regions that were differentially responsive to objects during object recognition depending on whether they were learned in strong or weak clutter. In particular, the responses of the left fusiform gyrus (FG) reliably reflected, on a trial-to-trial basis, subjects' object recognition performance for objects learned in the presence of strong clutter. These results indicate that the visual system does not use a single, general-purpose mechanism to cope with clutter. Instead, there are two distinct spatial patterns of activation whose responses are attributable not to the visual context in which the objects were seen, but to the context in which the objects were learned.
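
A trial-to-trial brain/behavior relationship like the one reported for the left FG is often quantified as a point-biserial correlation between per-trial response amplitude and a binary recognition outcome. A sketch on simulated data; the effect size and trial count are arbitrary assumptions, not the study's analysis.

    import numpy as np
    from scipy.stats import pointbiserialr

    rng = np.random.default_rng(2)
    correct = rng.integers(0, 2, 100)                        # 1 = object recognized
    fg_response = 0.8 * correct + rng.standard_normal(100)   # per-trial ROI amplitude

    r, p = pointbiserialr(correct, fg_response)
    print(f"trial-wise brain/behavior correlation: r = {r:.2f}, p = {p:.3g}")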


PLOS ONE | 2011

What the 'Moonwalk' Illusion Reveals about the Perception of Relative Depth from Motion

Sarah Kromrey; Evgeniy Bart; Jay Hegdé

When one visual object moves behind another, the object farther from the viewer is progressively occluded and/or disoccluded by the nearer object. For nearly half a century, this dynamic occlusion cue has been thought to be sufficient by itself for determining the relative depth of the two objects. This view is consistent with the self-evident geometric fact that the surface undergoing dynamic occlusion is always farther from the viewer than the occluding surface. Here we use a contextual manipulation of a previously known motion illusion, which we refer to as the ‘Moonwalk’ illusion, to demonstrate that the visual system cannot determine relative depth from dynamic occlusion alone. Indeed, in the Moonwalk illusion, human observers perceive a relative depth contrary to the dynamic occlusion cue. However, the perception of the expected relative depth is restored by contextual manipulations unrelated to dynamic occlusion. On the other hand, we show that an Ideal Observer can determine relative depth using dynamic occlusion alone in the same Moonwalk stimuli, indicating that the dynamic occlusion cue is, in principle, sufficient for determining relative depth. Our results indicate that in order to correctly perceive relative depth from dynamic occlusion, the human brain, unlike the Ideal Observer, needs additional segmentation information that delineates the occluder from the occluded object. Thus, neural mechanisms of object segmentation must, in addition to motion mechanisms that extract information about relative depth, play a crucial role in the perception of relative depth from motion.
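
The geometric fact underlying the Ideal Observer result is that the surface losing (or regaining) visible texture at the shared boundary is the one being occluded, hence the farther one. Below is a deliberately simplified observer built on that fact, assuming perfect Boolean segmentation masks for the two surfaces in consecutive frames; the function and its inputs are hypothetical, not the paper's Ideal Observer model.

    import numpy as np

    def farther_surface(a_before, a_after, b_before, b_after):
        """Judge which of two segmented surfaces is farther from the viewer:
        the one whose visible area shrinks between frames is being occluded.
        (Simplified: real dynamic occlusion also involves accretion at the
        trailing edge, and visible area can change for other reasons.)"""
        loss_a = int(a_before.sum()) - int(a_after.sum())
        loss_b = int(b_before.sum()) - int(b_after.sum())
        return "A" if loss_a > loss_b else "B"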


Frontiers in Computational Neuroscience | 2012

Invariant recognition of visual objects: some emerging computational principles

Evgeniy Bart; Jay Hegdé

Invariant object recognition refers to recognizing an object regardless of irrelevant image variations, such as variations in viewpoint, lighting, retinal size, background, etc. The perceptual result of invariance, where the perception of a given object property is unaffected by irrelevant image variations, is often referred to as perceptual constancy (Koffka, 1935; Walsh and Kulikowski, 2010).

Mechanisms of invariant object recognition have, to a significant extent, remained unclear. This is both because experimental and computational studies have so far largely focused on understanding object recognition without these variations, and because the underlying computational problems are profoundly difficult.

The 10 articles in this Research Topic issue focus on some of the key computational issues in invariant object recognition. There is no pretending that the articles cover all key areas of current research exhaustively or seamlessly. For instance, none of the articles in this issue address size invariance (Kilpatrick and Ittelson, 1953) or color constancy (Foster, 2011). Nonetheless, the articles collectively paint a useful pointillist picture of current research on computational principles of invariance.


Frontiers in Psychology | 2014

Semantic descriptor ranking: a quantitative method for evaluating qualitative verbal reports of visual cognition in the laboratory or the clinic

Matthew Maestri; Jeffrey G. Odel; Jay Hegdé

For scientific, clinical, and machine learning purposes alike, it is desirable to quantify the verbal reports of high-level visual percepts. Methods to do this simply do not exist at present. Here we propose a novel methodological principle to help fill this gap, and provide empirical evidence designed to serve as the initial “proof” of this principle. In the proposed method, subjects view images of real-world scenes and describe, in their own words, what they saw. The verbal description is independently evaluated by several evaluators. Each evaluator assigns a rank score to the subject’s description of each visual object in each image using a novel ranking principle, which takes advantage of the well-known fact that semantic descriptions of real life objects and scenes can usually be rank-ordered. Thus, for instance, “animal,” “dog,” and “retriever” can be regarded as increasingly finer-level, and therefore higher ranking, descriptions of a given object. These numeric scores can preserve the richness of the original verbal description, and can be subsequently evaluated using conventional statistical procedures. We describe an exemplar implementation of this method and empirical data that show its feasibility. With appropriate future standardization and validation, this novel method can serve as an important tool to help quantify the subjective experience of the visual world. In addition to being a novel, potentially powerful testing tool, our method also represents, to our knowledge, the only available method for numerically representing verbal accounts of real-world experience. Given its minimal requirements, i.e., a verbal description and the ground truth that elicited the description, our method has a wide variety of potential real-world applications.
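
The ranking principle, coarser descriptions receiving lower scores than finer ones, can be made concrete with a toy lookup. The hierarchy and the scoring rule below are illustrative assumptions, not the published scoring protocol.

    # Hypothetical rank table: finer-level terms get higher scores.
    HIERARCHY = {"animal": 1, "dog": 2, "retriever": 3}

    def rank_description(description):
        """Score a free-form description by the finest-level known term it
        contains; 0 if no known term appears (e.g., the object was missed)."""
        words = description.lower().split()
        return max((HIERARCHY.get(w, 0) for w in words), default=0)

    print(rank_description("I saw a retriever near the door"))   # -> 3
    print(rank_description("some kind of animal"))               # -> 1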


Journal of Vision | 2012

Learning-Dependent Changes in Brain Responses While Learning to Break Camouflage: A Human fMRI Study

Nicole Streeb; Xin Chen; Jay Hegdé

Camouflage represents an extreme case of figure-ground segregation whereby a target object is effectively disguised against its background even when in plain view. While it is possible to learn to break camouflage with training, the mechanisms that underlie this learning remain poorly understood. We carried out an in-scanner learning experiment in which subjects learned to break the camouflage of up to three different target objects (counter-rotated across subjects) in randomly interleaved trials. We created camouflaged scenes, each of which contained a single novel foreground digital embryo target camouflaged against a background of additional novel digital embryos. Subjects performed a bootstrapped learning task in which they reported whether or not a given camouflaged scene contained a target. In early phases of the scan, subjects performed at chance levels (binomial proportions test, p > 0.05), indicating that the targets were effectively camouflaged. As expected, each subject (N = 3) achieved significant learning (p < 0.05) for at least one target object (learned target), and failed to learn at least one other object (non-learned target). We found three different regions in either hemisphere in which the responses increased significantly across the scans for learned targets (p < 0.05, corrected for multiple comparisons), but not for non-learned targets (p > 0.05). In the intra-occipital gyrus (IOG), responses showed a larger decrease below baseline levels for the non-learned target than for the learned target throughout the scan. By contrast, responses in the superior temporal sulcus (STS) and the lateral occipital complex (LOC) increased from baseline levels during learning for learned targets, but not for non-learned targets. Together, these results indicate that camouflage learning results in systematic changes in the responses of these three regions, each previously known to play a major role in object recognition.
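
The chance-performance check reported above is a binomial proportions test of detection accuracy against the 50% guess rate. A sketch using SciPy's binomtest (available in SciPy 1.7+); the trial counts are invented for illustration.

    from scipy.stats import binomtest

    n_trials = 80
    correct_early = 44   # early in the scan: near chance
    correct_late = 62    # after learning

    early = binomtest(correct_early, n_trials, p=0.5, alternative="greater")
    late = binomtest(correct_late, n_trials, p=0.5, alternative="greater")
    print(f"early: p = {early.pvalue:.3f}")   # p > 0.05: still effectively camouflaged
    print(f"late:  p = {late.pvalue:.4f}")    # p < 0.05: camouflage broken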


Journal of Vision | 2010

Implicit Learning of Background Texture while Learning to Break Camouflage

Xin Chen; Jay Hegdé


Frontiers in Neuroscience | 2013

Exploiting temporal continuity of views to learn visual object invariance

Evgeniy Bart; Jay Hegdé

Collaboration


Dive into Jay Hegdé's collaborations.

Top Co-Authors

Xin Chen (Georgia Regents University)

Sarah Kromrey (Georgia Regents University)

Matthew Maestri (Georgia Regents University)

Amber Epting (Georgia Regents University)

Karin Hauffen (Georgia Regents University)

Nicole Streeb (Georgia Regents University)