Publication


Featured research published by Ana E. Van Gulick.


Cerebral Cortex | 2015

Expertise Effects in Face-Selective Areas are Robust to Clutter and Diverted Attention, but not to Competition

Rankin W. McGugin; Ana E. Van Gulick; Benjamin J. Tamber-Rosenau; David A. Ross; Isabel Gauthier

Expertise effects for nonface objects in face-selective brain areas may reflect stable aspects of neuronal selectivity that determine how observers perceive objects. However, bottom-up (e.g., clutter from irrelevant objects) and top-down manipulations (e.g., attentional selection) can influence activity, affecting the link between category selectivity and individual performance. We test the prediction that individual differences expressed as neural expertise effects for cars in face-selective areas are sufficiently stable to survive clutter and manipulations of attention. Additionally, behavioral work and work using event-related potentials suggest that expertise effects may not survive competition; we investigate this using functional magnetic resonance imaging. Subjects varying in expertise with cars made 1-back decisions about cars, faces, and objects in displays containing 1 or 2 objects, with only one category attended. Univariate analyses suggest car expertise effects are robust to clutter and dampened by reduced attention to cars, but nonetheless more robust to manipulations of attention than to competition. While univariate expertise effects are largely abolished by competition between cars and faces, multivariate analyses reveal new information related to car expertise. These results demonstrate that signals in face-selective areas predict expertise effects for nonface objects in a variety of conditions, although individual differences may be expressed in different dependent measures depending on task and instructions.


Journal of Vision | 2014

Experience moderates overlap between object and face recognition, suggesting a common ability

Isabel Gauthier; Rankin W. McGugin; Jennifer J. Richler; Grit Herzmann; Magen Speegle; Ana E. Van Gulick

Some research finds that face recognition is largely independent of the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: face recognition performance is increasingly similar to object recognition performance with increasing object experience. A subject with extensive object experience who nonetheless performs poorly with objects is therefore also likely to show low face recognition ability. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience.
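The moderation analysis described in this abstract can be sketched as a linear model with an interaction term. The snippet below is a minimal illustration only; the variable names and the simulated data are assumptions for the example and do not come from the study.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 256  # same sample size as Experiment 1; data simulated here
    experience = rng.normal(size=n)
    object_score = rng.normal(size=n)
    # Simulate a face score whose link to object recognition grows with experience.
    face_score = (0.2 * object_score
                  + 0.4 * experience * object_score
                  + rng.normal(size=n))

    df = pd.DataFrame({"face_score": face_score,
                       "object_score": object_score,
                       "experience": experience})

    # The object_score:experience interaction term tests whether experience
    # moderates the relationship between object and face recognition.
    model = smf.ols("face_score ~ object_score * experience", data=df).fit()
    print(model.summary())

A significant positive interaction coefficient would correspond to the pattern reported above: the face-object relationship strengthens as object experience increases.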


Psychological Assessment | 2015

Item Response Theory Analyses of the Cambridge Face Memory Test (CFMT)

Sun-Joo Cho; Jeremy Wilmer; Grit Herzmann; Rankin W. McGugin; Daniel Fiset; Ana E. Van Gulick; Kaitlin F. Ryan; Isabel Gauthier

We evaluated the psychometric properties of the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). First, we assessed the dimensionality of the test with a bifactor exploratory factor analysis (EFA). This EFA revealed a general factor and 3 specific factors clustered by the targets of the CFMT. However, the 3 specific factors appeared to be minor factors that can be ignored. Second, we fit a unidimensional item response model. This model showed that the CFMT items can discriminate individuals at different ability levels and cover a wide range of the ability continuum, and that the test is precise across a wide range of ability levels. Third, we implemented item response theory (IRT) differential item functioning (DIF) analyses for each gender group and 2 age groups (age ≤ 20 vs. age > 21). This DIF analysis suggested little evidence of consequential differential functioning on the CFMT for these groups, supporting the use of the test to compare older to younger, or male to female, individuals. Fourth, we tested for a gender difference on the latent face recognition ability with an explanatory item response model. We found a significant but small gender difference on the latent ability, with women scoring higher than men by 0.184 at the mean age of 23.2 years, controlling for linear and quadratic age effects. Finally, we discuss the practical considerations of the use of total scores versus IRT scale scores in applications of the CFMT.
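For readers unfamiliar with IRT, a unidimensional item response model of the kind fit here relates the probability of a correct response on an item to a person's latent ability. The two-parameter logistic (2PL) form below is a standard example; the exact parameterization used in the paper may differ.

    P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\{-a_i(\theta_j - b_i)\}}

Here \theta_j is person j's latent face recognition ability, a_i is the discrimination of item i (how sharply the item separates nearby ability levels), and b_i is its difficulty. DIF analyses then ask whether a_i or b_i differ across groups (e.g., gender or age) after matching on \theta.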


Behavior Research Methods | 2016

Measuring nonvisual knowledge about object categories: The Semantic Vanderbilt Expertise Test.

Ana E. Van Gulick; Rankin W. McGugin; Isabel Gauthier

How much do people differ in their abilities to recognize objects, and what is the source of these differences? To address the first question, psychologists have created visual learning tests including the Cambridge Face Memory Test (Duchaine & Nakayama, 2006) and the Vanderbilt Expertise Test (VET; McGugin et al., 2012). The second question requires consideration of the influences of both innate potential and experience, but experience is difficult to measure. One solution is to measure the products of experience beyond perceptual knowledge—specifically, nonvisual semantic knowledge. For instance, the relation between semantic and perceptual knowledge can help clarify the nature of object recognition deficits in brain-damaged patients (Barton, Hanif, & Ashraf, Brain, 132, 3456–3466, 2009). We present a reliable measure of nonperceptual knowledge in a format applicable across categories. The Semantic Vanderbilt Expertise Test (SVET) measures knowledge of relevant category-specific nomenclature. We present SVETs for eight categories: cars, planes, Transformers, dinosaurs, shoes, birds, leaves, and mushrooms. The SVET demonstrated good reliability and domain-specific validity. We found partial support for the idea that the only source of domain-specific shared variance between the VET and SVET is experience with a category. We also demonstrated the utility of the SVET-Bird in experts. The SVET can facilitate the study of individual differences in visual recognition.


Journal of Cognitive Neuroscience | 2016

Cortical thickness in fusiform face area predicts face and object recognition performance

Rankin W. McGugin; Ana E. Van Gulick; Isabel Gauthier

The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to nonface objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here, we show an effect of expertise with nonface objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. Whereas participants with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects but rather living and nonliving objects.


British Journal of Social Psychology | 2011

Prime and prejudice: co-occurrence in the culture as a source of automatic stereotype priming.

Paul Verhaeghen; Shelley N. Aikman; Ana E. Van Gulick

It has been argued that stereotype priming (response times are faster for stereotypical word pairs, such as black-poor, than for non-stereotypical word pairs, such as black-balmy) is partially a function of biases in the belief system inherent in the culture. In three priming experiments, we provide direct evidence for this position, showing that stereotype priming effects associated with race, gender, and age can be explained well by objectively measured associative co-occurrence of prime and target in the culture: (a) once objective associative strength between word pairs is taken into account, stereotype priming effects disappear; (b) the relationship between response time and associative strength is identical for social and non-social primes. The correlation between associative-value-controlled stereotype priming and self-report measures of racism, sexism, and ageism is near zero. The racist/sexist/ageist in all of us appears to be (at least partially) a reflection of the surrounding culture.


Current Directions in Psychological Science | 2015

Category Learning Stretches Neural Representations in Visual Cortex

Jonathan R. Folstein; Thomas J. Palmeri; Ana E. Van Gulick; Isabel Gauthier

In this article, we review recent work that shows how learning to categorize objects changes how those objects are represented in the mind and the brain. After category learning, visual perception of objects is enhanced along perceptual dimensions that were relevant to the learned categories, an effect we call dimensional modulation. Dimensional modulation stretches object representations along category-relevant dimensions and shrinks them along category-irrelevant dimensions. The perceptual advantage for category-relevant dimensions extends beyond categorization and can be observed during visual discrimination and other tasks that do not depend on the learned categories. Evidence from fMRI studies shows that category learning causes ventral-stream neural populations in visual cortex representing objects along a category-relevant dimension to become more distinct. These results are consistent with a view that specific aspects of cognitive tasks associated with objects can account for how our visual system responds to objects.


Journal of Experimental Psychology: Learning, Memory and Cognition | 2014

The Perceptual Effects of Learning Object Categories That Predict Perceptual Goals.

Ana E. Van Gulick; Isabel Gauthier

In classic category learning studies, subjects typically learn to assign items to 1 of 2 categories, with no further distinction between how items on each side of the category boundary should be treated. In real life, however, we often learn categories that dictate further processing goals, for instance, with objects in only 1 category requiring further individuation. Using methods from category learning and perceptual expertise, we studied the perceptual consequences of experience with objects in tasks that rely on attention to different dimensions in different parts of the space. In 2 experiments, subjects first learned to categorize complex objects from a single morphspace into 2 categories based on 1 morph dimension, and then learned to perform a different task, either naming or a local feature judgment, for each of the 2 categories. A same-different discrimination test before and after each training phase measured sensitivity to feature dimensions of the space. After initial categorization, sensitivity increased along the category-diagnostic dimension. After task association, sensitivity increased more for the category that was named, especially along the nondiagnostic dimension. The results demonstrate that local attentional weights, associated with individual exemplars as a function of task requirements, can have lasting effects on perceptual representations.


International Journal on Digital Libraries | 2017

Automating data sharing through authoring tools

John R. Kitchin; Ana E. Van Gulick; Lisa Zilinski

In the current scientific publishing landscape, there is a need for an authoring workflow that easily integrates data and code into manuscripts and that enables the data and code to be published in reusable form. Automated embedding of data and code into published output will enable superior communication and data archiving. In this work, we demonstrate a proof of concept for such a workflow, based on Emacs + org-mode, which successfully provides this authoring capability and workflow integration. We illustrate this concept in a series of examples for potential uses of this workflow. First, we use data on citation counts to compute the h-index of an author, and show two code examples for calculating the h-index. The source for each example is automatically embedded in the PDF during the export of the document. We demonstrate how data can be embedded in image files, which themselves are embedded in the document. Finally, metadata about the embedded files can be automatically included in the exported PDF and accessed by computer programs. In our customized export, we embedded metadata about the attached files in an Info field of the PDF. A computer program could parse this output to get a list of embedded files and carry out analyses on them. Authoring tools such as Emacs + org-mode can greatly facilitate the integration of data and code into technical writing. These tools can also automate the embedding of data into document formats intended for consumption.
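The h-index calculation mentioned in this abstract is simple enough to sketch here: an author has index h if h of their papers each have at least h citations. The following Python function is a generic illustration, not the exact code embedded in the paper.

    def h_index(citations):
        """Return the largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # at least `rank` papers have >= `rank` citations
            else:
                break
        return h

    # Example: four of these five papers have at least 4 citations each, so h = 4.
    print(h_index([10, 8, 5, 4, 3]))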


PLOS ONE | 2018

Data management and sharing in neuroimaging: Practices and perceptions of MRI researchers

John Borghi; Ana E. Van Gulick

Neuroimaging methods such as magnetic resonance imaging (MRI) involve complex data collection and analysis protocols, which necessitate the establishment of good research data management (RDM). Despite efforts within the field to address issues related to rigor and reproducibility, information about the RDM-related practices and perceptions of neuroimaging researchers remains largely anecdotal. To inform such efforts, we conducted an online survey of active MRI researchers that covered a range of RDM-related topics. Survey questions addressed the type(s) of data collected, the tools used for data storage, organization, and analysis, and the degree to which practices are defined and standardized within a research group. Our results demonstrate that neuroimaging data are acquired in multifarious forms and transformed and analyzed with a wide variety of software tools, and that RDM practices and perceptions vary considerably both within and between research groups, with trainees reporting less consistency than faculty. Ratings of the maturity of RDM practices (from ad hoc to refined) were relatively high during the data collection and analysis phases of a project and significantly lower during the data sharing phase. Perceptions of emerging practices, including open access publishing and preregistration, were largely positive, but adoption of these practices into current workflows remains limited.

Collaboration


Dive into Ana E. Van Gulick's collaborations.

Top Co-Authors

Grit Herzmann, University of Colorado Boulder

John Borghi, California Digital Library

David A. Ross, University of Massachusetts Amherst

John R. Kitchin, Carnegie Mellon University