Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Fabian A. Soto is active.

Publication


Featured research published by Fabian A. Soto.


Psychological Science | 2010

Missing the Forest for the Trees: Object-Discrimination Learning Blocks Categorization Learning

Fabian A. Soto; Edward A. Wasserman

Growing evidence indicates that error-driven associative learning underlies the ability of nonhuman animals to categorize natural images. This study explored whether this form of learning might also be at play when people categorize natural objects in photographs. Two groups of college students (a blocking group and a control group) were trained on a categorization task and then tested with novel photographs from each category; however, only the blocking group received pretraining on a task that required the discrimination of objects from the same category. Because of this earlier noncategorical discrimination learning, the blocking group performed well in the categorization task from the outset, and this strong initial performance reduced the likelihood of category learning driven by error. There was far less transfer of categorical responding during testing in the blocking group than in the control group; this finding suggests that learning the specific properties of each photographic image in pretraining blocked later learning of an open-ended category.
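
The blocking logic described above follows standard error-driven (delta-rule) learning; as an illustrative sketch that is not taken from the paper, the Rescorla-Wagner update makes the mechanism explicit:

    \Delta V_i = \alpha_i \, \beta \, \bigl(\lambda - \textstyle\sum_j V_j\bigr)

If the object-specific cues learned in pretraining already predict the outcome (the summed strength approaches \lambda), the error term is near zero during categorization training, so category-level features acquire little additional strength.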


Psychological Review | 2014

Explaining Compound Generalization in Associative and Causal Learning Through Rational Principles of Dimensional Generalization

Fabian A. Soto; Samuel J. Gershman; Yael Niv

How do we apply learning from one situation to a similar, but not identical, situation? The principles governing the extent to which animals and humans generalize what they have learned about certain stimuli to novel compounds containing those stimuli vary depending on a number of factors. Perhaps the best studied among these factors is the type of stimuli used to generate compounds. One prominent hypothesis is that different generalization principles apply depending on whether the stimuli in a compound are similar or dissimilar to each other. However, the results of many experiments cannot be explained by this hypothesis. Here, we propose a rational Bayesian theory of compound generalization that uses the notion of consequential regions, first developed in the context of rational theories of multidimensional generalization, to explain the effects of stimulus factors on compound generalization. The model explains a large number of results from the compound generalization literature, including the influence of stimulus modality and spatial contiguity on the summation effect, the lack of influence of stimulus factors on summation with a recovered inhibitor, the effect of spatial position of stimuli on the blocking effect, the asymmetrical generalization decrement in overshadowing and external inhibition, and the conditions leading to a reliable external inhibition effect. By integrating rational theories of compound and dimensional generalization, our model provides the first comprehensive computational account of the effects of stimulus factors on compound generalization, including spatial and temporal contiguity between components, which have posed long-standing problems for rational theories of associative and causal learning.
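
As background for the consequential-region idea the model builds on, here is a sketch of Shepard-style dimensional generalization in its Bayesian formulation (not the paper's full compound-generalization model): the probability of generalizing from a trained stimulus x to a test stimulus y is the posterior probability that y falls in the same consequential region,

    P(y \in C \mid x) = \sum_{h \,:\, x \in h} P(h \mid x)\,\mathbf{1}[y \in h],
    \qquad P(h \mid x) \propto P(x \mid h)\, P(h),

where h ranges over candidate consequential regions. The paper extends this machinery to compounds of separately trained stimuli.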


Psychonomic Bulletin & Review | 2015

General recognition theory with individual differences: a new method for examining perceptual and decisional interactions with an application to face perception.

Fabian A. Soto; Lauren Vucovich; Robert Musgrave; F. Gregory Ashby

A common question in perceptual science is to what extent different stimulus dimensions are processed independently. General recognition theory (GRT) offers a formal framework via which different notions of independence can be defined and tested rigorously, while also dissociating perceptual from decisional factors. This article presents a new GRT model that overcomes several shortcomings with previous approaches, including a clearer separation between perceptual and decisional processes and a more complete description of such processes. The model assumes that different individuals share similar perceptual representations, but vary in their attention to dimensions and in the decisional strategies they use. We apply the model to the analysis of interactions between identity and emotional expression during face recognition. The results of previous research aimed at this problem have been disparate. Participants identified four faces, which resulted from the combination of two identities and two expressions. An analysis using the new GRT model showed a complex pattern of dimensional interactions. The perception of emotional expression was not affected by changes in identity, but the perception of identity was affected by changes in emotional expression. There were violations of decisional separability of expression from identity and of identity from expression, with the former being more consistent across participants than the latter. One explanation for the disparate results in the literature is that decisional strategies may have varied across studies and influenced the results of tests of perceptual interactions, as previous studies lacked the ability to dissociate between perceptual and decisional interactions.
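
A minimal simulation sketch of the GRT setup described above, with hypothetical parameter values rather than anything estimated in the paper: each identity-by-expression stimulus produces a bivariate normal percept, responses come from axis-parallel decision bounds (decisional separability), and a shift of the identity means across expression levels builds in a violation of perceptual separability.

    # Illustrative GRT simulation with hypothetical parameters (not the paper's
    # fitted model): four faces from crossing two identities and two expressions,
    # bivariate normal percepts, and axis-parallel decision bounds.
    import numpy as np

    rng = np.random.default_rng(0)

    # Mean percepts on the (identity, expression) axes. Shifting the identity
    # mean at the second expression level violates perceptual separability of
    # identity from expression.
    means = {
        ("id1", "exA"): (0.0, 0.0),
        ("id2", "exA"): (2.0, 0.0),
        ("id1", "exB"): (0.5, 2.0),   # identity percept shifted by expression
        ("id2", "exB"): (2.5, 2.0),
    }
    cov = np.eye(2)           # unit-variance, uncorrelated perceptual noise
    bounds = (1.0, 1.0)       # decision criteria on the identity and expression axes

    def simulate(n_trials=1000):
        """Return a 4 x 4 stimulus-by-response matrix of response proportions."""
        stimuli = list(means)
        responses = [("id1", "exA"), ("id2", "exA"), ("id1", "exB"), ("id2", "exB")]
        conf = np.zeros((4, 4))
        for i, stim in enumerate(stimuli):
            percepts = rng.multivariate_normal(means[stim], cov, size=n_trials)
            resp_id = np.where(percepts[:, 0] > bounds[0], "id2", "id1")
            resp_ex = np.where(percepts[:, 1] > bounds[1], "exB", "exA")
            for j, (rid, rex) in enumerate(responses):
                conf[i, j] = np.mean((resp_id == rid) & (resp_ex == rex))
        return stimuli, conf

    stimuli, conf = simulate()
    print(np.round(conf, 2))

Fitting such a model in the other direction, from observed confusion matrices back to perceptual means, covariances, and bounds, is what allows GRT to dissociate perceptual from decisional interactions.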


Frontiers in Neural Circuits | 2014

Mechanisms of object recognition: what we have learned from pigeons

Fabian A. Soto; Edward A. Wasserman

Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons.


Behavioural Processes | 2010

Effect of between-category similarity on basic-level superiority in pigeons

Olga F. Lazareva; Fabian A. Soto; Edward A. Wasserman

Children categorize stimuli at the basic level faster than at the superordinate level. We hypothesized that between-category similarity may affect this basic level superiority effect. Dissimilar categories may be easy to distinguish at the basic level but be difficult to group at the superordinate level, whereas similar categories may be easy to group at the superordinate level but be difficult to distinguish at the basic level. Consequently, similar basic level categories may produce a superordinate-before-basic learning trend, whereas dissimilar basic level categories may result in a basic-before-superordinate learning trend. We tested this hypothesis in pigeons by constructing superordinate level categories out of basic level categories with known similarity. In Experiment 1, we experimentally evaluated the between-category similarity of four basic level photographic categories using multiple fixed interval-extinction training (Astley and Wasserman, 1992). We used the resultant similarity matrices in Experiment 2 to construct two superordinate level categories from basic level categories with high between-category similarity (cars and persons; chairs and flowers). We then trained pigeons to concurrently classify those photographs into either the proper basic level category or the proper superordinate level category. Under these conditions, the pigeons learned the superordinate level discrimination faster than the basic level discrimination, confirming our hypothesis that basic level superiority is affected by between-category similarity.


NeuroImage | 2013

Brain activity across the development of automatic categorization: a comparison of categorization tasks using multi-voxel pattern analysis.

Fabian A. Soto; Jennifer G. Waldschmidt; Sébastien Hélie; F. Gregory Ashby

Previous evidence suggests that relatively separate neural networks underlie initial learning of rule-based and information-integration categorization tasks. With the development of automaticity, categorization behavior in both tasks becomes increasingly similar and exclusively related to activity in cortical regions. The present study uses multi-voxel pattern analysis to directly compare the development of automaticity in different categorization tasks. Each of the three groups of participants received extensive training in a different categorization task: either an information-integration task, or one of two rule-based tasks. Four training sessions were performed inside an MRI scanner. Three different analyses were performed on the imaging data from a number of regions of interest (ROIs). The common patterns analysis had the goal of revealing ROIs with similar patterns of activation across tasks. The unique patterns analysis had the goal of revealing ROIs with dissimilar patterns of activation across tasks. The representational similarity analysis aimed at exploring (1) the similarity of category representations across ROIs and (2) how those patterns of similarities compared across tasks. The results showed that common patterns of activation were present in motor areas and basal ganglia early in training, but only in the former later on. Unique patterns were found in a variety of cortical and subcortical areas early in training, but they were dramatically reduced with training. Finally, patterns of representational similarity between brain regions became increasingly similar across tasks with the development of automaticity.
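
As a minimal sketch of the representational similarity logic mentioned above (not the study's actual pipeline, and using random data in place of ROI activity patterns): build a condition-by-condition representational dissimilarity matrix (RDM) per task, then compare the matrices across tasks.

    # Illustrative representational similarity analysis (RSA) sketch with random
    # data in place of ROI activity patterns; not the study's pipeline.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import spearmanr

    def rdm(patterns):
        """patterns: (n_conditions, n_voxels) array -> condition x condition
        representational dissimilarity matrix using 1 - Pearson correlation."""
        return squareform(pdist(patterns, metric="correlation"))

    def compare_rdms(rdm_a, rdm_b):
        """Spearman correlation between the upper triangles of two RDMs."""
        iu = np.triu_indices_from(rdm_a, k=1)
        return spearmanr(rdm_a[iu], rdm_b[iu]).correlation

    rng = np.random.default_rng(1)
    patterns_task_a = rng.normal(size=(8, 50))      # 8 conditions x 50 voxels
    patterns_task_b = patterns_task_a + rng.normal(scale=0.5, size=(8, 50))
    print(compare_rdms(rdm(patterns_task_a), rdm(patterns_task_b)))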


Quarterly Journal of Experimental Psychology | 2009

Generality of the summation effect in human causal learning

Fabian A. Soto; Edgar H. Vogel; Ramón D. Castillo; Allan R. Wagner

Considerable research has examined the contrasting predictions of the elemental and configural association theories proposed by Rescorla and Wagner (1972) and Pearce (1987), respectively. One simple method to distinguish between these approaches is the summation test, in which the associative strength attributed to a novel compound of two separately trained cues is examined. Under common assumptions, the configural view predicts that the strength of the compound will approximate to the average strength of its components, whereas the elemental approach predicts that the strength of the compound will be greater than the strength of either component. Different studies have produced mixed outcomes. In studies of human causal learning, Collins and Shanks (2006) suggested that the observation of summation is encouraged by training, in which different stimuli are associated with different submaximal outcomes, and by testing, in which the alternative outcomes can be scaled. The reported experiments further pursued this reasoning. In Experiment 1, summation was more substantial when the participants were trained with outcomes identified as submaximal than when trained with simple categorical (presence/absence) outcomes. Experiments 2 and 3 demonstrated that summation can also be obtained with categorical outcomes during training, if the participants are encouraged by instruction or the character of training to rate the separately trained components with submaximal ratings. The results are interpreted in terms of apparent performance constraints in evaluations of the contrasting theoretical predictions concerning summation.
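
The contrasting predictions summarized above can be written compactly; under the common textbook simplification of equally salient, separately trained cues A and B:

    \text{elemental (Rescorla--Wagner):}\quad V_{AB} = V_A + V_B
    \qquad
    \text{configural (Pearce):}\quad V_{AB} \approx \tfrac{1}{2}\,(V_A + V_B)

On the performance-constraint reading offered in the abstract, observed summation discriminates between these predictions only when the training outcomes and the test scale leave room for responding above the level of either component.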


Journal of Experimental Psychology: Animal Behavior Processes | 2010

Integrality/Separability of Stimulus Dimensions and Multidimensional Generalization in Pigeons

Fabian A. Soto; Edward A. Wasserman

The authors present a quantitative framework for interpreting the results of multidimensional stimulus generalization experiments in animals using concepts derived from the geometrical approach to human cognition. The authors apply the model to the analysis of stimulus generalization data obtained from pigeons trained with different sets of stimuli varying along two orthogonal dimensions. Separable pigeons were trained with stimuli varying along the dimensions of circle size and line tilt, dimensions found to be separable in previous human research; integral pigeons were trained with stimuli varying along two dimensions of rotation in depth, dimensions that are intuitively integral and which hold special interest for theories of object recognition. The model accurately described the stimulus generalization data, with best fits to the city-block metric for separable pigeons and to the Euclidean metric for integral pigeons.
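
For reference, the geometric framework referred to above typically combines a Minkowski distance in psychological space with an exponential generalization gradient (the paper's exact parameterization may differ):

    d(x, y) = \Bigl( \sum_{k} \lvert x_k - y_k \rvert^{\,r} \Bigr)^{1/r},
    \qquad g(x, y) = e^{-c\, d(x, y)},

with r = 1 giving the city-block metric expected for separable dimensions and r = 2 giving the Euclidean metric expected for integral dimensions.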


Frontiers in Psychology | 2017

Testing Separability and Independence of Perceptual Dimensions with General Recognition Theory: A Tutorial and New R Package (grtools)

Fabian A. Soto; Emily Zheng; Johnny Fonseca; F. Gregory Ashby

Determining whether perceptual properties are processed independently is an important goal in perceptual science, and tools to test independence should be widely available to experimental researchers. The best analytical tools to test for perceptual independence are provided by General Recognition Theory (GRT), a multidimensional extension of signal detection theory. Unfortunately, there is currently a lack of software implementing GRT analyses that is ready to use for experimental psychologists and neuroscientists with little training in computational modeling. This paper presents grtools, an R package developed with the explicit aim of providing experimentalists with the ability to perform full GRT analyses using only a couple of command lines. We describe the software and provide a practical tutorial on how to perform each of the analyses available in grtools. We also provide advice to researchers on best practices for experimental design and interpretation of results when applying GRT and grtools.
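
grtools itself is an R package, and its function names are not reproduced here; as a language-neutral illustration of the kind of summary statistic its analyses compare, the sketch below computes marginal d' for one dimension at each level of the other from a hypothetical 2 x 2 identification confusion matrix (equal marginal d' across levels is one signature consistent with perceptual separability).

    # Illustrative computation of marginal d' from a 2 x 2 identification
    # confusion matrix: the kind of summary statistic GRT analyses compare.
    # Plain-Python sketch, not the grtools API (grtools is written in R).
    import numpy as np
    from scipy.stats import norm

    # Rows: stimuli (A1B1, A2B1, A1B2, A2B2); columns: responses, same order.
    # Hypothetical response proportions for one participant.
    conf = np.array([
        [0.70, 0.10, 0.15, 0.05],
        [0.10, 0.70, 0.05, 0.15],
        [0.15, 0.05, 0.65, 0.15],
        [0.05, 0.15, 0.15, 0.65],
    ])

    def marginal_dprime_A(conf, level_B):
        """d' on dimension A at a fixed level of B (0 or 1), marginalizing over
        the B response: hit = P(respond A2 | A2 stimulus), fa = P(respond A2 | A1)."""
        row_A1, row_A2 = 2 * level_B, 2 * level_B + 1
        resp_A2 = [1, 3]                   # columns where the A response is A2
        fa = conf[row_A1, resp_A2].sum()
        hit = conf[row_A2, resp_A2].sum()
        return norm.ppf(hit) - norm.ppf(fa)

    for b in (0, 1):
        print(f"marginal d' on A at level B{b + 1}: {marginal_dprime_A(conf, b):.2f}")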


NeuroImage | 2016

Dissociable changes in functional network topology underlie early category learning and development of automaticity

Fabian A. Soto; Danielle S. Bassett; F. Gregory Ashby

Recent work has shown that multimodal association areas, including frontal, temporal, and parietal cortex, are focal points of functional network reconfiguration during human learning and performance of cognitive tasks. On the other hand, neurocomputational theories of category learning suggest that the basal ganglia and related subcortical structures are focal points of functional network reconfiguration during early learning of some categorization tasks but become less so with the development of automatic categorization performance. Using a combination of network science and multilevel regression, we explore how changes in the connectivity of small brain regions can predict behavioral changes during training in a visual categorization task. We find that initial category learning, as indexed by changes in accuracy, is predicted by increasingly efficient integrative processing in subcortical areas, with higher functional specialization and more efficient integration across modules, but a lower cost in terms of redundancy of information processing. The development of automaticity, as indexed by changes in the speed of correct responses, was predicted by lower clustering (particularly in subcortical areas), higher strength (highest in cortical areas), and higher betweenness centrality. By combining neurocomputational theories and network scientific methods, these results synthesize the dissociative roles of multimodal association areas and subcortical structures in the development of automaticity during category learning.
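
A minimal sketch of how the node-level graph measures named above (strength, clustering, betweenness centrality) are computed on a weighted functional-connectivity graph, using networkx and random data standing in for connectivity estimates; this is not the study's pipeline:

    # Illustrative node-level graph metrics on a toy weighted graph standing in
    # for a functional-connectivity network; not the study's data or pipeline.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)
    n_regions = 20
    w = np.abs(rng.normal(size=(n_regions, n_regions)))   # fake connectivity
    w = (w + w.T) / 2                                      # symmetric weights
    np.fill_diagonal(w, 0)

    G = nx.from_numpy_array(w)                             # weighted, undirected graph

    strength = dict(G.degree(weight="weight"))        # sum of edge weights per node
    clustering = nx.clustering(G, weight="weight")     # weighted clustering coefficient

    # For shortest-path measures, weights are usually converted to lengths
    # (here simply 1 / weight) so that stronger connections count as shorter.
    lengths = {(u, v): 1.0 / d["weight"] for u, v, d in G.edges(data=True)}
    nx.set_edge_attributes(G, lengths, "length")
    betweenness = nx.betweenness_centrality(G, weight="length")

    print(strength[0], clustering[0], betweenness[0])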

Collaboration


Dive into Fabian A. Soto's collaborations.

Top Co-Authors

Omar David Perez
California Institute of Technology

Claudia Wong
Florida International University

Emily Zheng
University of California