Publication


Featured research published by Terrence J. Sejnowski.


Psychophysiology | 2000

Removing electroencephalographic artifacts by blind source separation

Tzyy-Ping Jung; Scott Makeig; Colin Humphries; Te-Won Lee; Martin J. McKeown; Vicente J. Iragui; Terrence J. Sejnowski

Eye movements, eye blinks, cardiac signals, muscle noise, and line noise present serious problems for electroencephalographic (EEG) interpretation and analysis when rejecting contaminated EEG segments results in an unacceptable data loss. Many methods have been proposed to remove artifacts from EEG recordings, especially those arising from eye movements and blinks. Often regression in the time or frequency domain is performed on parallel EEG and electrooculographic (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably involves subtracting relevant EEG signals from each record as well. Regression methods become even more problematic when a good regressing channel is not available for each artifact source, as in the case of muscle artifacts. Use of principal component analysis (PCA) has been proposed to remove eye artifacts from multichannel EEG. However, PCA cannot completely separate eye artifacts from brain signals, especially when they have comparable amplitudes. Here, we propose a new and generally applicable method for removing a wide variety of artifacts from EEG records based on blind source separation by independent component analysis (ICA). Our results on EEG data collected from normal and autistic subjects show that ICA can effectively detect, separate, and remove contamination from a wide variety of artifactual sources in EEG records with results comparing favorably with those obtained using regression and PCA methods. ICA can also be used to analyze blink-related brain activity.
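
A minimal sketch of the kind of ICA-based artifact removal described above, on synthetic data. scikit-learn's FastICA stands in for the infomax ICA used in the paper, and the EOG-correlation threshold for flagging artifact components is an illustrative heuristic, not the authors' procedure:

```python
# Sketch of ICA-based EEG artifact removal on synthetic data.
# FastICA stands in for the infomax ICA used in the paper; the EOG channel,
# threshold, and data shapes are all illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 5000
eeg = rng.standard_normal((n_channels, n_samples))   # placeholder EEG (channels x samples)
eog = rng.standard_normal(n_samples)                 # placeholder EOG reference channel

# Unmix the channels into temporally independent components.
ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg.T)                   # (samples, components)

# Flag components whose time course correlates strongly with the EOG channel.
corr = np.array([abs(np.corrcoef(sources[:, k], eog)[0, 1])
                 for k in range(sources.shape[1])])
artifact = corr > 0.3                                # illustrative threshold

# Zero the flagged components and project back to channel space.
cleaned_sources = sources.copy()
cleaned_sources[:, artifact] = 0.0
eeg_clean = ica.inverse_transform(cleaned_sources).T # artifact-reduced EEG
```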


IEEE Transactions on Neural Networks | 2002

Face recognition by independent component analysis

Marian Stewart Bartlett; Javier R. Movellan; Terrence J. Sejnowski

A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance.
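
A rough sketch of the two decomposition orientations described in the abstract, on random stand-in data (no FERET images, and the paper's PCA preprocessing is omitted; FastICA replaces the infomax ICA the authors used):

```python
# Sketch of the two ICA face-representation architectures on random data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_images, n_pixels = 100, 32 * 32
X = rng.standard_normal((n_images, n_pixels))        # rows = flattened face images

# Architecture I: decompose across images; the recovered components act as
# spatially local basis images, and each face is described by its coefficients.
ica_1 = FastICA(n_components=40, random_state=0)
rep_1 = ica_1.fit_transform(X)                       # (n_images, 40)

# Architecture II: decompose across pixels; the columns of basis_2 form a
# factorial code, and faces are represented by projections onto them.
ica_2 = FastICA(n_components=40, random_state=0)
basis_2 = ica_2.fit_transform(X.T)                   # (n_pixels, 40)
rep_2 = X @ basis_2                                  # (n_images, 40)

def nearest_cosine(gallery, probe):
    """Index of the gallery face most similar to the probe (cosine similarity)."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    return int(np.argmax(g @ p))

match = nearest_cosine(rep_1[:-1], rep_1[-1])        # toy query with the last face
```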


Human Brain Mapping | 1998

Analysis of fMRI Data by Blind Separation Into Independent Spatial Components

Martin J. McKeown; Scott Makeig; Greg Brown; Tzyy-Ping Jung; Sandra S. Kindermann; Anthony J. Bell; Terrence J. Sejnowski

Current analytical techniques applied to functional magnetic resonance imaging (fMRI) data require a priori knowledge or specific assumptions about the time courses of processes contributing to the measured signals. Here we describe a new method for analyzing fMRI data based on the independent component analysis (ICA) algorithm of Bell and Sejnowski ([1995]: Neural Comput 7:1129–1159). We decomposed eight fMRI data sets from 4 normal subjects performing Stroop color‐naming, the Brown and Peterson word/number task, and control tasks into spatially independent components. Each component consisted of voxel values at fixed three‐dimensional locations (a component “map”), and a unique associated time course of activation. Given data from 144 time points collected during a 6‐min trial, ICA extracted an equal number of spatially independent components. In all eight trials, ICA derived one and only one component with a time course closely matching the time course of 40‐sec alternations between experimental and control tasks. The regions of maximum activity in these consistently task‐related components generally overlapped active regions detected by standard correlational analysis, but included frontal regions not detected by correlation. Time courses of other ICA components were transiently task‐related, quasiperiodic, or slowly varying. By utilizing higher‐order statistics to enforce successively stricter criteria for spatial independence between component maps, both the ICA algorithm and a related fourth‐order decomposition technique (Comon [1994]: Signal Processing 36:11–20) were superior to principal component analysis (PCA) in determining the spatial and temporal extent of task‐related activation. For each subject, the time courses and active regions of the task‐related ICA components were consistent across trials and were robust to the addition of simulated noise. Simulated movement artifact and simulated task‐related activations added to actual fMRI data were clearly separated by the algorithm. ICA can be used to distinguish between nontask‐related signal components, movements, and other artifacts, as well as consistently or transiently task‐related fMRI activations, based on only weak assumptions about their spatial distributions and without a priori assumptions about their time courses. ICA appears to be a highly promising method for the analysis of fMRI data from normal and clinical populations, especially for uncovering unpredictable transient patterns of brain activity associated with performance of psychomotor tasks. Hum. Brain Mapping 6:160–188, 1998.
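
A minimal sketch of spatial ICA on a synthetic fMRI data matrix, with FastICA as a stand-in for the infomax algorithm; the block regressor used to pick out a task-related component uses illustrative timing, not the paper's design:

```python
# Sketch of spatial ICA on a synthetic fMRI data matrix (volumes x voxels).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_components = 144, 2000, 20
X = rng.standard_normal((n_timepoints, n_voxels))

# Treat voxels as samples so that the recovered maps are statistically
# independent over space; the associated time courses live in mixing_.
ica = FastICA(n_components=n_components, random_state=0)
spatial_maps = ica.fit_transform(X.T).T              # (components, voxels)
time_courses = ica.mixing_                           # (timepoints, components)

# Pick the component whose time course best matches an on/off block design
# (20 volumes task, 20 volumes control -- illustrative timing).
block = np.tile(np.r_[np.ones(20), -np.ones(20)],
                n_timepoints // 40 + 1)[:n_timepoints]
r = [abs(np.corrcoef(time_courses[:, k], block)[0, 1])
     for k in range(n_components)]
task_related = int(np.argmax(r))
```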


Neural Computation | 1999

Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources

Te-Won Lee; Mark A. Girolami; Terrence J. Sejnowski

An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and supergaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and supergaussian regimes. We demonstrate that the extended infomax algorithm is able to separate 20 sources with a variety of source distributions easily. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
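
A minimal numpy sketch of the extended infomax rule summarized above: the natural-gradient update with a per-source switch between sub- and supergaussian regimes. The toy sources, learning rate, and batch handling are illustrative choices, not the paper's settings:

```python
# Numpy sketch of the extended infomax learning rule.
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_samples = 4, 20000

# Two supergaussian (Laplacian) and two subgaussian (uniform) sources.
S = np.vstack([rng.laplace(size=(2, n_samples)),
               rng.uniform(-1, 1, size=(2, n_samples))])
A = rng.standard_normal((n_sources, n_sources))      # unknown mixing matrix
X = A @ S
X -= X.mean(axis=1, keepdims=True)
X /= X.std(axis=1, keepdims=True)                    # scale channels for stability

W = np.eye(n_sources)                                # unmixing matrix to learn
lr, batch = 0.01, 200
for epoch in range(50):
    perm = rng.permutation(n_samples)
    for start in range(0, n_samples, batch):
        x = X[:, perm[start:start + batch]]
        u = W @ x                                    # current source estimates
        # Stability-based switch: k = +1 for supergaussian, -1 for subgaussian.
        k = np.sign(np.mean(1 - np.tanh(u) ** 2, axis=1) * np.mean(u ** 2, axis=1)
                    - np.mean(u * np.tanh(u), axis=1))
        grad = (np.eye(n_sources)
                - (k[:, None] * np.tanh(u)) @ u.T / batch
                - u @ u.T / batch)
        W += lr * grad @ W                           # natural-gradient step

unmixed = W @ X                                      # estimated sources (up to order and scale)
```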


Journal of Molecular Biology | 1988

Predicting the secondary structure of globular proteins using neural network models

Ning Qian; Terrence J. Sejnowski

We present a new method for predicting the secondary structure of globular proteins based on non-linear neural network models. Network models learn from existing protein structures how to predict the secondary structure of local sequences of amino acids. The average success rate of our method on a testing set of proteins non-homologous with the corresponding training set was 64.3% on three types of secondary structure (alpha-helix, beta-sheet, and coil), with correlation coefficients of Cα = 0.41, Cβ = 0.31, and Ccoil = 0.41. These quality indices are all higher than those of previous methods. The prediction accuracy for the first 25 residues of the N-terminal sequence was significantly better. We conclude from computational experiments on real and artificial structures that no method based solely on local information in the protein sequence is likely to produce significantly better results for non-homologous proteins. The performance of our method on homologous proteins is much better than for non-homologous proteins, but is not as good as simply assuming that homologous sequences have identical structures.
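
A minimal sketch of the sliding-window network idea described above, using one-hot encoded residue windows and a small multilayer perceptron (scikit-learn's MLPClassifier as a stand-in for the paper's network); the toy sequence, labels, and window size are illustrative, not the paper's data or exact configuration:

```python
# Sketch of a sliding-window secondary-structure predictor on toy data.
import numpy as np
from sklearn.neural_network import MLPClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY-"       # 20 residues plus a spacer symbol
AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}
WINDOW = 13                                  # centered window of residues

def encode_windows(sequence, labels):
    """One-hot encode each centered window; labels are 'H', 'E', or 'C'."""
    pad = "-" * (WINDOW // 2)
    padded = pad + sequence + pad
    X, y = [], []
    for i, label in enumerate(labels):
        onehot = np.zeros((WINDOW, len(AMINO_ACIDS)))
        for j, aa in enumerate(padded[i:i + WINDOW]):
            onehot[j, AA_INDEX[aa]] = 1.0
        X.append(onehot.ravel())
        y.append(label)
    return np.array(X), np.array(y)

# Toy sequence and per-residue labels (random, for illustration only).
rng = np.random.default_rng(0)
sequence = "".join(rng.choice(list(AMINO_ACIDS[:-1]), size=200))
labels = rng.choice(list("HEC"), size=200)

X, y = encode_windows(sequence, labels)
clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=500, random_state=0)
clf.fit(X, y)
predictions = clf.predict(X[:5])             # per-residue H/E/C predictions
```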


Neural Computation | 2000

Learning Overcomplete Representations

Michael S. Lewicki; Terrence J. Sejnowski

In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as a probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
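
A minimal sketch of learning an overcomplete basis by sparse coding, using scikit-learn's DictionaryLearning as a stand-in for the probabilistic algorithm in the paper; the random data and dictionary size are illustrative:

```python
# Sketch of learning a twice-overcomplete basis by sparse coding.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_samples, n_features, n_atoms = 500, 16, 32         # more basis vectors than dimensions
X = rng.standard_normal((n_samples, n_features))

learner = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                             max_iter=100, random_state=0)
codes = learner.fit_transform(X)                     # sparse coefficients, (500, 32)
basis = learner.components_                          # learned basis vectors, (32, 16)

# Because the basis is overcomplete, many decompositions reproduce each input;
# the sparsity penalty (alpha) selects among them.
reconstruction = codes @ basis
```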


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

Classifying facial actions

Gianluca Donato; Marian Stewart Bartlett; Joseph C. Hager; Paul Ekman; Terrence J. Sejnowski

The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions.
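
A minimal sketch of the Gabor-magnitude features that performed best in the comparison above, applied to a placeholder image; the filter-bank parameters are illustrative, and the paper's alignment steps and classifier are omitted:

```python
# Sketch of Gabor-magnitude features for a face image (placeholder data).
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * 2 * np.pi * xr / wavelength)
    return envelope * carrier

rng = np.random.default_rng(0)
face = rng.standard_normal((64, 64))                 # placeholder face crop

features = []
for wavelength in (4, 8, 16):                        # a few spatial frequencies
    for theta in np.linspace(0, np.pi, 4, endpoint=False):   # four orientations
        kernel = gabor_kernel(15, wavelength, theta, sigma=wavelength / 2)
        response = fftconvolve(face, kernel, mode="same")
        features.append(np.abs(response).ravel())    # local magnitude responses
feature_vector = np.concatenate(features)            # input to a classifier
```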


Neural Networks | 1988

Analysis of hidden units in a layered network trained to classify sonar targets

R. Paul Gorman; Terrence J. Sejnowski

A neural network learning procedure has been applied to the classification of sonar returns from two undersea targets, a metal cylinder and a similarly shaped rock. Networks with an intermediate layer of hidden processing units achieved a classification accuracy as high as 100% on a training set of 104 returns. These networks correctly classified up to 90.4% of 104 test returns not contained in the training set. This performance was better than that of a nearest neighbor classifier, which achieved 82.7%, and was close to that of an optimal Bayes classifier. Specific signal features extracted by hidden units in a trained network were identified and related to coding schemes in the pattern of connection strengths between the input and the hidden units. Network performance and classification strategy were comparable to those of trained human listeners.
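
A minimal sketch of a one-hidden-layer classifier for the cylinder-versus-rock task. The 60-dimensional inputs mirror the format of the sonar dataset later distributed from this study, but the data here are random placeholders and the network size is an illustrative choice:

```python
# Sketch of a one-hidden-layer network on 60-dimensional sonar-return features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((208, 60))                            # placeholder spectral envelopes
y = rng.choice(["cylinder", "rock"], size=208)       # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```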


Nature Reviews Neuroscience | 2001

Correlated neuronal activity and the flow of neural information.

Emilio Salinas; Terrence J. Sejnowski

For years we have known that cortical neurons collectively have synchronous or oscillatory patterns of activity, the frequencies and temporal dynamics of which are associated with distinct behavioural states. Although the function of these oscillations has remained obscure, recent experimental and theoretical results indicate that correlated fluctuations might be important for cortical processes, such as attention, that control the flow of information in the brain.


Science | 2013

Global Epigenomic Reconfiguration During Mammalian Brain Development

Ryan Lister; Eran A. Mukamel; Joseph R. Nery; Mark A. Urich; Clare A. Puddifoot; Nicholas D. Johnson; Jacinta Lucero; Yun Huang; Andrew J. Dwork; Matthew D. Schultz; Miao Yu; Julian Tonti-Filippini; Holger Heyn; Shijun Hu; Joseph C. Wu; Anjana Rao; Manel Esteller; Chuan He; Fatemeh Haghighi; Terrence J. Sejnowski; M. Margarita Behrens; Joseph R. Ecker

Introduction: Several lines of evidence point to a key role for dynamic epigenetic changes during brain development, maturation, and learning. DNA methylation (mC) is a stable covalent modification that persists in post-mitotic cells throughout their lifetime, defining their cellular identity. However, the methylation status at each of the ~1 billion cytosines in the genome is potentially an information-rich and flexible substrate for epigenetic modification that can be altered by cellular activity. Indeed, changes in DNA methylation have been implicated in learning and memory, as well as in age-related cognitive decline. Yet little is known about the cell type–specific patterning of DNA methylation and its dynamics during mammalian brain development. The DNA methylation landscape of human and mouse neurons is dynamically reconfigured through development: base-resolution analysis allowed identification of methylation in the CG and CH contexts (H = A, C, or T), and unlike other differentiated cell types, neurons accumulate substantial mCH during the early years of life, coinciding with the period of synaptogenesis and brain maturation.
Methods: We performed genome-wide single-base resolution profiling of the composition, patterning, cell specificity, and dynamics of DNA methylation in the frontal cortex of humans and mice throughout their lifespan (MethylC-Seq). Furthermore, we generated base-resolution maps of 5-hydroxymethylcytosine (hmC) in mammalian brains by TAB-Seq at key developmental stages, accompanied by RNA-Seq transcriptional profiling.
Results: Extensive methylome reconfiguration occurs during development from fetal to young adult. In this period, coincident with synaptogenesis, highly conserved non-CG methylation (mCH) accumulates in neurons, but not glia, to become the dominant form of methylation in the human neuronal genome. We uncovered surprisingly complex features of brain cell DNA methylation at multiple scales: first, intragenic methylation patterns in neurons and glia that distinguish genes with cell type–specific activity; second, a novel mCH signature that identifies genes escaping X-chromosome inactivation in neurons; and third, more than 100,000 developmentally dynamic and cell type–specific differentially CG-methylated regions enriched at putative regulatory regions of the genome. Finally, whole-genome detection of hmC at single-base resolution revealed that this mark is present in fetal brain cells at putative regulatory regions that lose CG methylation and become activated during development; CG demethylation at these hmC-poised loci depends on Tet2 activity.
Discussion: Whole-genome single-base resolution methylcytosine and hydroxymethylcytosine maps revealed profound changes during frontal cortex development in humans and mice. These results extend our knowledge of the unique role of DNA methylation in brain development and function, and offer a new framework for testing the role of the epigenome in healthy function and in pathological disruptions of neural circuits. Overall, brain cell DNA methylation has unique features that are precisely conserved, yet dynamic and cell-type specific.
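
A minimal sketch of the kind of per-context methylation summary implied by the CG/CH analysis above, on a toy sequence with made-up read counts (not MethylC-Seq output):

```python
# Sketch of classifying cytosine contexts (CG vs. CH, H = A, C, or T) and
# summarizing methylation levels per context on toy data.
import numpy as np

reference = "ATCGCTACGGCACCATTCGA"                   # toy genome strand
rng = np.random.default_rng(0)

levels = {"CG": [], "CH": []}
for i, base in enumerate(reference[:-1]):
    if base != "C":
        continue
    context = "CG" if reference[i + 1] == "G" else "CH"
    methylated = int(rng.integers(0, 20))            # toy methylated read count
    unmethylated = int(rng.integers(0, 20))          # toy unmethylated read count
    if methylated + unmethylated:
        levels[context].append(methylated / (methylated + unmethylated))

for context, values in levels.items():
    mean_level = float(np.mean(values)) if values else float("nan")
    print(context, "mean mC level:", round(mean_level, 3))
```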

Collaboration


Terrence J. Sejnowski's top co-authors and their affiliations:

Maxim Bazhenov, University of California
Scott Makeig, University of California
Thomas M. Bartol, Salk Institute for Biological Studies
Tzyy-Ping Jung, University of California
Igor Timofeev, Salk Institute for Biological Studies
Anthony J. Bell, Salk Institute for Biological Studies