Publications


Featured research published by Stephen José Hanson.


Neuroinformatics | 2009

PyMVPA: a Python Toolbox for Multivariate Pattern Analysis of fMRI Data

Michael Hanke; Yaroslav O. Halchenko; Per B. Sederberg; Stephen José Hanson; James V. Haxby; Stefan Pollmann

Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python’s ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.
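To make the workflow concrete, here is a minimal sketch of a PyMVPA (2.x) classification analysis. The file names ('bold.nii.gz', 'attributes.txt', 'vt_mask.nii.gz') are hypothetical placeholders, and the calls follow the toolbox's documented interface rather than any specific study.

```python
# Minimal PyMVPA classification sketch; all file names are placeholders.
from mvpa2.suite import (fmri_dataset, SampleAttributes, LinearCSVMC,
                         CrossValidation, NFoldPartitioner)

# Per-volume labels and run ("chunk") assignments.
attrs = SampleAttributes('attributes.txt')
ds = fmri_dataset('bold.nii.gz',
                  targets=attrs.targets,
                  chunks=attrs.chunks,
                  mask='vt_mask.nii.gz')       # restrict to a region of interest

clf = LinearCSVMC()                            # linear support vector classifier
cv = CrossValidation(clf, NFoldPartitioner())  # leave-one-chunk-out
errors = cv(ds)                                # one error estimate per fold
print('mean cross-validated error: %.2f' % errors.samples.mean())
```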


NeuroImage | 2004

Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: is there a "face" area?

Stephen José Hanson; Toshihiko Matsuka; James V. Haxby

Haxby et al. [Science 293 (2001) 2425] recently argued that category-related responses in the ventral temporal (VT) lobe during visual object identification were overlapping and distributed in topography. This observation contrasts with prevailing views that object codes are focal and localized to specific areas such as the fusiform and parahippocampal gyri. We provide a critical test of Haxby's hypothesis using a neural network (NN) classifier that can detect more general topographic representations and achieves 83% correct generalization performance on patterns of voxel responses in out-of-sample tests. Using voxel-wise sensitivity analysis we show that substantially the same VT lobe voxels contribute to the classification of all object categories, suggesting the code is combinatorial. Moreover, we found no evidence for local single category representations. The neural network representations of the voxel codes were sensitive to both category and superordinate level features that were only available implicitly in the object categories.
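The logic of the analysis can be sketched generically: train a neural network classifier on voxel response patterns, test generalization on held-out trials, and ask which voxels the trained network depends on. The code below is a stand-in using scikit-learn and synthetic data, not the authors' exact network or sensitivity measure.

```python
# Generic sketch: NN classifier on voxel patterns plus a voxel-wise
# sensitivity proxy (permutation importance). Data are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))          # 200 trials x 500 VT voxels (synthetic)
y = rng.integers(0, 8, size=200)         # 8 object categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000).fit(X_tr, y_tr)
print('out-of-sample accuracy:', net.score(X_te, y_te))

# Sensitivity: accuracy drop when one voxel's values are shuffled.
imp = permutation_importance(net, X_te, y_te, n_repeats=5, random_state=0)
print('most influential voxels:', np.argsort(imp.importances_mean)[-10:])
```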


NeuroImage | 2010

Six problems for causal inference from fMRI

Joseph Ramsey; Stephen José Hanson; Catherine Hanson; Yaroslav O. Halchenko; Russell A. Poldrack; Clark Glymour

Neuroimaging (e.g. fMRI) data are increasingly used to attempt to identify not only brain regions of interest (ROIs) that are especially active during perception, cognition, and action, but also the qualitative causal relations among activity in these regions (known as effective connectivity; Friston, 1994). Previous investigations and anatomical and physiological knowledge may somewhat constrain the possible hypotheses, but there often remains a vast space of possible causal structures. To find actual effective connectivity relations, search methods must accommodate indirect measurements of nonlinear time series dependencies, feedback, multiple subjects possibly varying in identified regions of interest, and unknown possible location-dependent variations in BOLD response delays. We describe combinations of procedures that under these conditions find feed-forward sub-structure characteristic of a group of subjects. The method is illustrated with an empirical data set and confirmed with simulations of time series of non-linear, randomly generated, effective connectivities, with feedback, subject to random differences of BOLD delays, with regions of interest missing at random for some subjects, measured with noise approximating the signal to noise ratio of the empirical data.
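One ingredient of such searches, pooling standardized data across subjects and testing conditional independence between regions, can be illustrated in drastically simplified form. The sketch below uses synthetic time series and partial correlations as a crude stand-in for the procedures described in the paper.

```python
# Toy illustration of the multi-subject idea: standardize each subject's
# ROI time series (BOLD scaling differs), pool them, and screen ROI pairs
# by partial correlation given all other ROIs. Not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_time, n_rois = 5, 200, 4

def simulate_subject():
    # Feed-forward structure ROI0 -> ROI1 -> ROI2; ROI3 is pure noise.
    x = rng.normal(size=(n_time, n_rois))
    x[:, 1] += 0.8 * x[:, 0]
    x[:, 2] += 0.8 * x[:, 1]
    return x

pooled = np.vstack([(x - x.mean(0)) / x.std(0)
                    for x in (simulate_subject() for _ in range(n_subjects))])

# Partial correlations from the precision (inverse covariance) matrix.
prec = np.linalg.inv(np.cov(pooled, rowvar=False))
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)

for i in range(n_rois):
    for j in range(i + 1, n_rois):
        if abs(partial[i, j]) > 0.1:   # crude threshold in place of a CI test
            print('edge ROI%d -- ROI%d (partial r=%.2f)' % (i, j, partial[i, j]))
```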


Psychological Science | 2009

Decoding the large-scale structure of brain function by classifying mental states across individuals.

Russell A. Poldrack; Yaroslav O. Halchenko; Stephen José Hanson

Brain-imaging research has largely focused on localizing patterns of activity related to specific mental processes, but recent work has shown that mental states can be identified from neuroimaging data using statistical classifiers. We investigated whether this approach could be extended to predict the mental state of an individual using a statistical classifier trained on other individuals, and whether the information gained in doing so could provide new insights into how mental processes are organized in the brain. Using a variety of classifier techniques, we achieved cross-validated classification accuracy of 80% across individuals (chance = 13%). Using a neural network classifier, we recovered a low-dimensional representation common to all the cognitive-perceptual tasks in our data set, and we used an ontology of cognitive processes to determine the cognitive concepts most related to each dimension. These results revealed a small organized set of large-scale networks that map cognitive processes across a highly diverse set of mental tasks, suggesting a novel way to characterize the neural basis of cognition.
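The across-individual design can be sketched as a leave-one-subject-out cross-validation loop. The code below uses scikit-learn and random placeholder data (8 task classes, so chance is roughly 13%, as in the study), standing in for the classifiers actually used.

```python
# Leave-one-subject-out decoding sketch: train on all subjects but one,
# test on the held-out subject. Data are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 10, 40, 300
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 8, size=len(X))                    # 8 mental tasks
subject = np.repeat(np.arange(n_subjects), trials_per_subject)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=subject, cv=LeaveOneGroupOut())
print('per-held-out-subject accuracy:', scores.round(2))
```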


Behavioral and Brain Sciences | 1990

What connectionist models learn: Learning and representation in connectionist networks

Stephen José Hanson; David J. Burr

Connectionist models provide a promising alternative to the traditional computational approach that has for several decades dominated cognitive science and artificial intelligence, although the nature of connectionist models and their relation to symbol processing remains controversial. Connectionist models can be characterized by three general computational features: distinct layers of interconnected units, recursive rules for updating the strengths of the connections during learning, and “simple” homogeneous computing elements. Using just these three features one can construct surprisingly elegant and powerful models of memory, perception, motor control, categorization, and reasoning. What makes the connectionist approach unique is not its variety of representational possibilities (including “distributed representations”) or its departure from explicit rule-based models, or even its preoccupation with the brain metaphor. Rather, it is that connectionist models can be used to explore systematically the complex interaction between learning and representation, as we try to demonstrate through the analysis of several large networks.
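The three features can be shown in miniature: two layers of simple, homogeneous sigmoid units whose connection strengths are updated by the recursive backpropagation rule. The from-scratch sketch below learns XOR and is illustrative only, not a network from the article.

```python
# Minimal connectionist network: layered sigmoid units trained by
# backpropagation on XOR. Sizes and learning rate are arbitrary choices.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 8))              # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1))              # hidden -> output
lr = 1.0

for _ in range(10000):
    h = sigmoid(X @ W1)                              # hidden activations
    y = sigmoid(h @ W2)                              # output activations
    # Propagate the error backward, layer by layer (the "recursive rule").
    d_out = (y - t) * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_hid

print(y.round(2).ravel())                            # typically ~ [0, 1, 1, 0]
```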


Neural Computation | 2000

Nonlinear Autoassociation Is Not Equivalent to PCA

Nathalie Japkowicz; Stephen José Hanson; Mark A. Gluck

A common misperception within the neural network community is that even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building error reconstruction surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.
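The comparison can be reproduced in outline: fit PCA and a bottlenecked nonlinear autoassociator to the same multimodal, curved data and compare reconstruction errors. The sketch below uses scikit-learn's MLPRegressor trained to reproduce its own input; data and layer sizes are arbitrary placeholders.

```python
# PCA vs. a nonlinear autoassociator on a curved, two-cluster dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.concatenate([rng.uniform(0, 1, 200), rng.uniform(3, 4, 200)])
X = np.c_[t, np.sin(t)] + rng.normal(scale=0.05, size=(400, 2))

# Linear baseline: project onto 1 principal component and back.
pca = PCA(n_components=1).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))

# Autoassociator: input -> tanh layers with a 1-unit bottleneck -> input.
ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation='tanh',
                  max_iter=5000, random_state=0).fit(X, X)
X_ae = ae.predict(X)

print('PCA reconstruction error:         %.4f' % np.mean((X - X_pca) ** 2))
print('autoencoder reconstruction error: %.4f' % np.mean((X - X_ae) ** 2))
```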


NeuroImage | 2011

Multi-subject search correctly identifies causal connections and most causal directions in the DCM models of the Smith et al. simulation study.

Joseph Ramsey; Stephen José Hanson; Clark Glymour

Smith et al. report a large study of the accuracy of 38 search procedures for recovering effective connections in simulations of DCM models under 28 different conditions. Their results are disappointing: no method reliably finds and directs connections without large false negatives, large false positives, or both. Using multiple subject inputs, we apply a previously published search algorithm, IMaGES, and novel orientation algorithms, LOFS, in tandem to all of the simulations of DCM models described by Smith et al. (2011). We find that the procedures accurately identify effective connections in almost all of the conditions that Smith et al. simulated and, in most conditions, direct causal connections with precision greater than 90% and recall greater than 80%.
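The reported precision and recall reduce to a simple computation over directed edge sets, sketched below with made-up placeholder edges rather than the Smith et al. networks.

```python
# Precision/recall of recovered directed edges against a known ground truth.
# Edge sets here are illustrative placeholders.
true_edges = {(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)}
found_edges = {(0, 1), (1, 2), (2, 3), (4, 3)}       # one miss, one reversed

tp = len(true_edges & found_edges)
precision = tp / len(found_edges)
recall = tp / len(true_edges)
print('precision=%.2f recall=%.2f' % (precision, recall))  # 0.75, 0.60
```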


Frontiers in Neuroinformatics | 2009

PyMVPA: A Unifying Approach to the Analysis of Neuroscientific Data

Michael Hanke; Yaroslav O. Halchenko; Per B. Sederberg; Ingo Fründ; Jochem W. Rieger; Christoph Herrmann; James V. Haxby; Stephen José Hanson; Stefan Pollmann

The Python programming language is steadily increasing in popularity as the language of choice for scientific computing. The ability of this scripting environment to access a huge code base in various languages, combined with its syntactical simplicity, make it the ideal tool for implementing and sharing ideas among scientists from numerous fields and with heterogeneous methodological backgrounds. The recent rise of reciprocal interest between the machine learning (ML) and neuroscience communities is an example of the desire for an inter-disciplinary transfer of computational methods that can benefit from a Python-based framework. For many years, a large fraction of both research communities have addressed, almost independently, very high-dimensional problems with almost completely non-overlapping methods. However, a number of recently published studies that applied ML methods to neuroscience research questions attracted a lot of attention from researchers from both fields, as well as the general public, and showed that this approach can provide novel and fruitful insights into the functioning of the brain. In this article we show how PyMVPA, a specialized Python framework for machine learning based data analysis, can help to facilitate this inter-disciplinary technology transfer by providing a single interface to a wide array of machine learning libraries and neural data-processing methods. We demonstrate the general applicability and power of PyMVPA via analyses of a number of neural data modalities, including fMRI, EEG, MEG, and extracellular recordings.
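The unifying idea is that any samples-by-features array, whether fMRI voxels, flattened EEG/MEG epochs, or spike counts, becomes the same Dataset object, so the same classifiers and measures apply. A minimal sketch, assuming PyMVPA 2.x and placeholder numbers:

```python
# Wrapping a generic samples-by-features array as a PyMVPA Dataset.
# Shapes and labels are placeholders; any modality fits the same container.
import numpy as np
from mvpa2.suite import Dataset

rng = np.random.default_rng(0)
# E.g. 60 EEG trials, each flattened to a channels-x-timepoints vector.
samples = rng.normal(size=(60, 32 * 100))
ds = Dataset(samples,
             sa={'targets': rng.integers(0, 2, 60),      # per-trial labels
                 'chunks': np.repeat(np.arange(6), 10)})  # acquisition runs
print(ds.shape, ds.sa.targets[:5])
```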


Machine Learning | 1994

Using Neural Networks to Modularize Software

Robert W. Schwanke; Stephen José Hanson

This article describes our experience with designing and using a module architecture assistant, an intelligent tool to help human software architects improve the modularity of large programs. The tool models modularization as nearest-neighbor clustering and classification, and uses the model to make recommendations for improving modularity by rearranging module membership. The tool learns similarity judgments that match those of the human architect by performing back propagation on a specialized neural network. The tool's classifier outperformed other classifiers, both in learning and generalization, on a modest but realistic data set. The architecture assistant significantly improved its performance during a field trial on a larger data set, through a combination of learning and knowledge acquisition.
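The clustering idea can be sketched generically: describe each procedure by the identifiers it uses, measure similarity by shared features, and group procedures into candidate modules. The code below substitutes plain cosine similarity for the tool's learned similarity measure; all procedure names are hypothetical.

```python
# Generic module-clustering sketch: procedures as bags of identifiers,
# grouped by cosine similarity (a stand-in for learned similarity).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import AgglomerativeClustering

procedures = {
    'read_config':  'file path parse option table',
    'write_config': 'file path format option table',
    'draw_window':  'screen pixel rect event',
    'handle_click': 'event screen rect callback',
}
X = CountVectorizer().fit_transform(procedures.values()).toarray()

# sklearn >= 1.2 uses metric=; older versions used affinity=.
labels = AgglomerativeClustering(n_clusters=2, metric='cosine',
                                 linkage='average').fit_predict(X)
for name, lab in zip(procedures, labels):
    print('module %d: %s' % (lab, name))
```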


Machine Learning | 1990

Conceptual clustering and categorization: bridging the gap between induction and causal models

Stephen José Hanson

Categorization processes are central to many human capabilities, e.g., language, reasoning, and problem solving. The concept of categorization is also at the base of many kinds of phenomena which AI researchers have attempted to model, e.g., induction, analogy, and the use of causal models. Most approaches to induction can be characterized on a single dimension ranging from model-driven ("top-down") to data-driven ("bottom-up"). At one end, a large amount of preconstructed information (knowledge rich) is used, while at the other end the featural similarity of a given set of objects or events is analyzed in the absence of other knowledge structures. These two kinds of approaches, represented recently by explanation-based learning (EBL) and similarity-based learning (SBL), conflict in terms of the proper approach to categorization and the construction of causal theories. One view central to the present approach is that featural information is instrumental in the formation of knowledge structures. Knowledge structures can be more general than objects and can possess more complex information than features (e.g., abstract concepts, actions, relations). Such knowledge structures are hypothesized to be both created and further manipulated by the SBL mechanism that learned them in the first place. The present approach is related to the discovery of category structure and the use of feature intercorrelations and their interaction with generalization, inheritance, retrieval, and memory organization.

Collaboration


Dive into Stephen José Hanson's collaborations.

Top Co-Authors

Bharat B. Biswal

New Jersey Institute of Technology

Clark Glymour

Carnegie Mellon University
