Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Te-Won Lee is active.

Publication


Featured research published by Te-Won Lee.


Psychophysiology | 2000

Removing electroencephalographic artifacts by blind source separation

Tzyy-Ping Jung; Scott Makeig; Colin Humphries; Te-Won Lee; Martin J. McKeown; Vicente J. Iragui; Terrence J. Sejnowski

Eye movements, eye blinks, cardiac signals, muscle noise, and line noise present serious problems for electroencephalographic (EEG) interpretation and analysis when rejecting contaminated EEG segments results in an unacceptable data loss. Many methods have been proposed to remove artifacts from EEG recordings, especially those arising from eye movements and blinks. Often regression in the time or frequency domain is performed on parallel EEG and electrooculographic (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably involves subtracting relevant EEG signals from each record as well. Regression methods become even more problematic when a good regressing channel is not available for each artifact source, as in the case of muscle artifacts. Use of principal component analysis (PCA) has been proposed to remove eye artifacts from multichannel EEG. However, PCA cannot completely separate eye artifacts from brain signals, especially when they have comparable amplitudes. Here, we propose a new and generally applicable method for removing a wide variety of artifacts from EEG records based on blind source separation by independent component analysis (ICA). Our results on EEG data collected from normal and autistic subjects show that ICA can effectively detect, separate, and remove contamination from a wide variety of artifactual sources in EEG records with results comparing favorably with those obtained using regression and PCA methods. ICA can also be used to analyze blink-related brain activity.
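
As a concrete illustration of the procedure described above, the sketch below unmixes a multichannel recording with ICA, zeroes out the components judged to be artifactual, and projects back to channel space. It uses scikit-learn's FastICA as a stand-in for the infomax ICA used in the paper; the channel count, the synthetic data, and the artifact-component indices are hypothetical placeholders, not values from the study.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Placeholder recording: 10000 samples x 32 channels (real EEG would be loaded here).
    eeg = np.random.randn(10000, 32)

    ica = FastICA(n_components=32, random_state=0)
    sources = ica.fit_transform(eeg)          # samples x independent components

    # Indices of components judged (by inspection) to be eye blinks, muscle, or line noise.
    artifact_idx = [0, 3]
    sources_clean = sources.copy()
    sources_clean[:, artifact_idx] = 0.0      # discard the artifactual components

    eeg_clean = ica.inverse_transform(sources_clean)   # corrected recording in channel space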


Neural Computation | 1999

Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources

Te-Won Lee; Mark A. Girolami; Terrence J. Sejnowski

An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and supergaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and supergaussian regimes. We demonstrate that the extended infomax algorithm is able to separate 20 sources with a variety of source distributions easily. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
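
The regime switching can be summarized by the natural-gradient update sketched below, a schematic NumPy re-implementation of the extended infomax rule as described in the abstract; the batch handling, learning rate, and per-batch sign estimate are illustrative choices, not the authors' exact settings.

    import numpy as np

    def extended_infomax_step(W, X, lr=1e-3):
        """One natural-gradient step of the extended infomax rule.
        X: (n_channels, n_samples) batch of zero-mean observations.
        W: current unmixing matrix estimate."""
        n, T = X.shape
        U = W @ X                      # current source estimates
        tU = np.tanh(U)
        # Per-source sign: +1 selects the supergaussian regime, -1 the subgaussian regime.
        k = np.sign(np.mean(1.0 - tU**2, axis=1) * np.mean(U**2, axis=1)
                    - np.mean(tU * U, axis=1))
        K = np.diag(k)
        natural_grad = (np.eye(n) - K @ (tU @ U.T) / T - (U @ U.T) / T) @ W
        return W + lr * natural_grad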


Neural Computation | 2003

Dictionary learning algorithms for sparse representation

Kenneth Kreutz-Delgado; Joseph F. Murray; Bhaskar D. Rao; Kjersti Engan; Te-Won Lee; Terrence J. Sejnowski

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial 25 words or less), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
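
A minimal sketch of the alternating scheme described above follows, assuming a plain iterative shrinkage step (ISTA) as a stand-in for the FOCUSS-based sparse coding used in the paper, and a least-squares dictionary update with column renormalization; dimensions, penalty, and iteration counts are illustrative.

    import numpy as np

    def learn_dictionary(X, n_atoms, n_outer=50, n_inner=10, lam=0.1):
        """X: (dim, n_signals). Returns dictionary A (dim x n_atoms) and sparse codes S."""
        dim, N = X.shape
        rng = np.random.default_rng(0)
        A = rng.standard_normal((dim, n_atoms))
        A /= np.linalg.norm(A, axis=0)
        S = np.zeros((n_atoms, N))
        for _ in range(n_outer):
            # Sparse-coding step: a few ISTA iterations on 0.5*||X - AS||^2 + lam*||S||_1
            L = np.linalg.norm(A, 2) ** 2 + 1e-12     # Lipschitz constant of the gradient
            for _ in range(n_inner):
                S = S - A.T @ (A @ S - X) / L
                S = np.sign(S) * np.maximum(np.abs(S) - lam / L, 0.0)
            # Dictionary update: least-squares fit to the current codes, then renormalize atoms
            A = X @ S.T @ np.linalg.pinv(S @ S.T + 1e-8 * np.eye(n_atoms))
            A /= np.linalg.norm(A, axis=0) + 1e-12
        return A, S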


Proceedings of the IEEE | 2001

Imaging brain dynamics using independent component analysis

Tzyy-Ping Jung; Scott Makeig; Martin J. McKeown; Anthony J. Bell; Te-Won Lee; Terrence J. Sejnowski

The analysis of electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings is important both for basic brain research and for medical diagnosis and treatment. Independent component analysis (ICA) is an effective method for removing artifacts and separating sources of the brain signals from these recordings. A similar approach is proving useful for analyzing functional magnetic resonance brain imaging (fMRI) data. In this paper, we outline the assumptions underlying ICA and demonstrate its application to a variety of electrical and hemodynamic recordings from the human brain.
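
For reference, the instantaneous linear model underlying these analyses (standard ICA notation, not a quotation from the paper) treats each recorded channel as a fixed mixture of temporally independent sources:

    x(t) = A\,s(t), \qquad \hat{s}(t) = W\,x(t), \qquad W \approx A^{-1},

where A is the unknown mixing matrix, the components of s(t) are assumed mutually independent, and ICA estimates the unmixing matrix W from the recordings alone.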


IEEE Signal Processing Letters | 1999

Blind source separation of more sources than mixtures using overcomplete representations

Te-Won Lee; Michael S. Lewicki; Mark A. Girolami; Terrence J. Sejnowski

Empirical results were obtained for the blind source separation of more sources than mixtures using a previously proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: (1) learning an overcomplete representation for the observed data and (2) inferring sources given a sparse prior on the coefficients. We demonstrate that three speech signals can be separated with good fidelity given only two mixtures of the three signals. Similar results were obtained with mixtures of two speech signals and one music signal.
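
Under the stated linear model with additive noise and a sparse (e.g., Laplacian) prior on the coefficients, the second step amounts to MAP inference of the sources. A standard way to write it (an assumption consistent with the abstract, not a formula quoted from the paper) is

    \hat{s} = \arg\max_{s}\; p(x \mid A, s)\,p(s)
            = \arg\min_{s}\; \tfrac{1}{2\sigma^{2}}\,\lVert x - A s \rVert_2^{2} + \lambda \lVert s \rVert_1,

with A \in \mathbb{R}^{2\times 3} in the two-mixtures, three-sources experiments.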


Computers & Mathematics With Applications | 2000

A Unifying Information-Theoretic Framework for Independent Component Analysis

Te-Won Lee; Mark A. Girolami; Anthony J. Bell; Terrence J. Sejnowski

We show that different theories recently proposed for independent component analysis (ICA) lead to the same iterative learning algorithm for blind separation of mixed independent sources. We review those theories and suggest that information theory can be used to unify several lines of research. Pearlmutter and Parra [1] and Cardoso [2] showed that the infomax approach of Bell and Sejnowski [3] and the maximum likelihood estimation approach are equivalent. We show that negentropy maximization also has equivalent properties, and therefore, all three approaches yield the same learning rule for a fixed nonlinearity. Girolami and Fyfe [4] have shown that the nonlinear principal component analysis (PCA) algorithm of Karhunen and Joutsensalo [5] and Oja [6] can also be viewed from information-theoretic principles since it minimizes the sum of squares of the fourth-order marginal cumulants, and therefore, approximately minimizes the mutual information [7]. Lambert [8] has proposed different Bussgang cost functions for multichannel blind deconvolution. We show how the Bussgang property relates to the infomax principle. Finally, we discuss convergence and stability as well as future research issues in blind source separation.
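
The shared learning rule that these viewpoints converge on can be written in the natural-gradient form used throughout this literature (standard notation, not copied from the paper) as

    \Delta W \;\propto\; \bigl(I - \varphi(u)\,u^{\top}\bigr) W,
    \qquad u = W x,
    \qquad \varphi_i(u_i) = -\frac{\partial}{\partial u_i}\,\log p_i(u_i),

so that infomax, maximum likelihood, and negentropy maximization differ only in how the fixed nonlinearity \varphi is motivated.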


Archive | 2007

Blind speech separation

Shoji Makino; Te-Won Lee; Hiroshi Sawada

Part I: Multiple Microphone Blind Speech Separation with ICA
1. Convolutive Blind Source Separation for Speech Signals (S. C. Douglas, M. Gupta)
2. Frequency-Domain Blind Source Separation (H. Sawada, S. Araki, S. Makino)
3. Blind Source Separation Using Space-Time Independent Component Analysis (M. Davies, et al.)
4. TRINICON-Based Blind System Identification with Application to Multiple-Source Localization and Separation (H. Buchner, R. Aichner, W. Kellermann)
5. SIMO-Model-Based Blind Source Separation: Principle and Its Applications (H. Saruwatari, T. Takatani, K. Shikano)
6. Independent Vector Analysis for Convolutive Blind Speech Separation (I. Lee, T. Kim, T.-W. Lee)
7. Relative Newton and Smoothing Multiplier Optimization Methods for Blind Source Separation (M. Zibulevsky)

Part II: Underdetermined Blind Speech Separation with Sparseness
8. The DUET Blind Source Separation Algorithm (S. Rickard)
9. K-means Based Underdetermined Blind Speech Separation (S. Araki, H. Sawada, S. Makino)
10. Underdetermined Blind Source Separation of Convolutive Mixtures by Hierarchical Clustering and L1-Norm Minimization (S. Winter, et al.)
11. Bayesian Audio Source Separation (C. Fevotte)

Part III: Single Microphone Blind Speech Separation
12. Monaural Source Separation (G. J. Jang, T.-W. Lee)
13. Probabilistic Decompositions of Spectra for Sound Separation (P. Smaragdis)
14. Sparsification for Monaural Source Separation (H. Asari, et al.)
15. Monaural Speech Separation by Support Vector Machines (S. Hochreiter, M. C. Mozer)


IEEE Transactions on Audio, Speech, and Language Processing | 2007

Blind Source Separation Exploiting Higher-Order Frequency Dependencies

Taesu Kim; Hagai Attias; Soo-Young Lee; Te-Won Lee

Blind source separation (BSS) is a challenging problem in real-world environments where sources are time delayed and convolved. The problem becomes more difficult in very reverberant conditions, with an increasing number of sources, and geometric configurations of the sources such that finding directionality is not sufficient for source separation. In this paper, we propose a new algorithm that exploits higher-order frequency dependencies of source signals in order to separate them when they are mixed. In the frequency domain, this formulation assumes that dependencies exist between frequency bins instead of defining independence for each frequency bin. In this manner, we can avoid the well-known frequency permutation problem. To derive the learning algorithm, we define a cost function, which is an extension of mutual information between multivariate random variables. By introducing a source prior that models the inherent frequency dependencies, we obtain a simple form of a multivariate score function. In experiments, we generate simulated data with various kinds of sources in various environments. We evaluate the performance and compare it with other well-known algorithms. The results show that the proposed algorithm outperforms the others in most cases. The algorithm is also able to accurately recover six sources with six microphones. In this case, we can obtain about 16-dB signal-to-interference ratio (SIR) improvement. Similar performance is observed in real conference room recordings with three human speakers reading sentences and one loudspeaker playing music.
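
One common way to model the inherent frequency dependencies described here (a representative choice consistent with the abstract, not necessarily the exact prior used in the paper) is a spherically dependent prior across frequency bins, which yields a multivariate score function coupling all bins of a source:

    p(s_k) \;\propto\; \exp\!\Bigl(-\sqrt{\textstyle\sum_{f} \lvert s_k^{(f)} \rvert^{2}}\Bigr),
    \qquad
    \varphi^{(f)}(s_k) \;=\; \frac{s_k^{(f)}}{\sqrt{\textstyle\sum_{f'} \lvert s_k^{(f')} \rvert^{2}}}.

Because the score function ties the bins of each source together, permutations that scatter one source across different output channels at different frequencies are penalized, which is how the frequency permutation problem is avoided.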


Neural Information Processing Systems | 1996

Blind Separation of Delayed and Convolved Sources

Te-Won Lee; Anthony J. Bell; Russell H. Lambert

We address the difficult problem of separating multiple speakers with multiple microphones in a real room. We combine the work of Torkkola and of Amari, Cichocki, and Yang to give natural gradient information maximisation rules for recurrent (IIR) networks, blindly adjusting delays, separating and deconvolving mixed signals. While they work well on simulated data, these rules fail in real rooms, which usually involve non-minimum-phase transfer functions that are not invertible using stable IIR filters. An approach that sidesteps this problem is to perform infomax on a feedforward architecture in the frequency domain (Lambert 1996). We demonstrate real-room separation of two natural signals using this approach.
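
The mixing model being inverted here is convolutive rather than instantaneous; in standard notation (an assumption, not quoted from the paper),

    x_i(t) = \sum_{j} \sum_{\tau} a_{ij}(\tau)\, s_j(t-\tau)
    \qquad\Longleftrightarrow\qquad
    X(f, t) \approx A(f)\, S(f, t),

so working in the frequency domain turns the convolutive problem into an instantaneous one per frequency bin, which a feedforward unmixing network can invert even when the room response is non-minimum phase.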


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

ICA mixture models for unsupervised classification of non-Gaussian classes and automatic context switching in blind signal separation

Te-Won Lee; Michael S. Lewicki; Terrence J. Sejnowski

An unsupervised classification algorithm is derived by modeling observed data as a mixture of several mutually exclusive classes that are each described by linear combinations of independent, non-Gaussian densities. The algorithm estimates the density of each class and is able to model class distributions with non-Gaussian structure. The new algorithm can improve classification accuracy compared with standard Gaussian mixture models. When applied to blind source separation in nonstationary environments, the method can switch automatically between classes, which correspond to contexts with different mixing properties. The algorithm can learn efficient codes for images containing both natural scenes and text. This method shows promise for modeling non-Gaussian structure in high-dimensional data and has many potential applications.
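
Concretely, the model described here can be written as a mixture whose class-conditional densities are ICA models (standard notation; the symbols below are not taken verbatim from the paper):

    p(x \mid \Theta) = \sum_{k} p(x \mid C_k, \theta_k)\, p(C_k),
    \qquad
    p(x \mid C_k, \theta_k) = \bigl\lvert \det A_k^{-1} \bigr\rvert \, p(s_k),
    \quad s_k = A_k^{-1}(x - b_k),

where A_k and b_k are the basis matrix and bias of class k and p(s_k) is a factorized non-Gaussian source density; the class posteriors p(C_k | x) then provide the unsupervised classification and the automatic context switching.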

Collaboration


Dive into Te-Won Lee's collaborations.

Top Co-Authors

Terrence J. Sejnowski (Salk Institute for Biological Studies)
Kwokleung Chan (University of California)
Jiucang Hao (University of California)