Publication


Featured research published by Tsung-Han Chan.


IEEE Transactions on Image Processing | 2015

PCANet: A Simple Deep Learning Baseline for Image Classification?

Tsung-Han Chan; Kui Jia; Shenghua Gao; Jiwen Lu; Zinan Zeng; Yi Ma

In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. The architecture is thus called the PCA network (PCANet) and can be designed and learned very easily and efficiently. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned by linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, and Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for handwritten digit recognition. Surprisingly, for all tasks, the seemingly naive PCANet model is on par with state-of-the-art features, whether prefixed, highly hand-crafted, or carefully learned by deep neural networks (DNNs). Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
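
To make the pipeline concrete, here is a minimal numpy sketch of the first PCANet stage, learning a PCA filter bank from mean-removed image patches. The function name, patch size, and filter count are illustrative assumptions, and the binary hashing and blockwise histogram stages are only summarized in the closing comment.

```python
import numpy as np

def learn_pca_filters(images, k=7, num_filters=8):
    """One PCANet stage (a sketch, not the full two-stage pipeline):
    the leading principal components of mean-removed k x k patches."""
    patches = []
    for img in images:                          # each img: 2-D grayscale array
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())    # patch-mean removal
    X = np.stack(patches, axis=1)               # shape (k*k, num_patches)
    _, vecs = np.linalg.eigh(X @ X.T)           # eigenvalues in ascending order
    filters = vecs[:, -num_filters:][:, ::-1]   # top components, descending
    return filters.T.reshape(num_filters, k, k)

# Usage: filters = learn_pca_filters(list_of_grayscale_images)
# Convolving images with these filters, binarizing the responses, and
# taking blockwise histograms of the resulting codes yields the PCANet
# feature described in the abstract.
```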


IEEE Transactions on Signal Processing | 2009

A Convex Analysis-Based Minimum-Volume Enclosing Simplex Algorithm for Hyperspectral Unmixing

Tsung-Han Chan; Chong-Yung Chi; Yu-Min Huang; Wing-Kin Ma

Hyperspectral unmixing aims at identifying the hidden spectral signatures (or endmembers) and their corresponding proportions (or abundances) from an observed hyperspectral scene. Many existing hyperspectral unmixing algorithms were developed under the commonly used assumption that pure pixels exist. However, the pure-pixel assumption may be seriously violated for highly mixed data. On intuitive grounds, Craig reported an unmixing criterion that does not require the pure-pixel assumption: it estimates the endmembers as the vertices of a minimum-volume simplex enclosing all the observed pixels. In this paper, we incorporate convex analysis and Craig's criterion to develop a minimum-volume enclosing simplex (MVES) formulation for hyperspectral unmixing. A cyclic minimization algorithm for approximating the MVES problem is developed using linear programs (LPs), which can be practically implemented with readily available LP solvers. We also provide a non-heuristic guarantee for our MVES problem formulation, proving that the existence of pure pixels is a sufficient condition for MVES to perfectly identify the true endmembers. Monte Carlo simulations and real-data experiments are presented to demonstrate the efficacy of the proposed MVES algorithm over several existing hyperspectral unmixing methods.
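
A hedged numpy sketch of the two ingredients of the MVES criterion, assuming the pixels have already been dimension-reduced to N-1 dimensions: the simplex-volume objective being minimized and Craig's enclosing constraint. The function names are illustrative assumptions; the paper's cyclic LP-based minimization itself is not reproduced here.

```python
import math
import numpy as np

def simplex_volume(E):
    """Volume of the simplex whose vertices are the columns of E
    (shape (N-1, N), data reduced to N-1 dimensions):
    |det([e_1 - e_N, ..., e_{N-1} - e_N])| / (N-1)!
    This is the quantity the MVES criterion minimizes."""
    D = E[:, :-1] - E[:, -1:]
    return abs(np.linalg.det(D)) / math.factorial(E.shape[1] - 1)

def encloses(E, X, tol=1e-9):
    """Craig's feasibility condition: every pixel (column of X) must
    have nonnegative barycentric coordinates theta with
    E @ theta = x and sum(theta) = 1."""
    A = np.vstack([E, np.ones((1, E.shape[1]))])   # square (N, N) system
    B = np.vstack([X, np.ones((1, X.shape[1]))])
    Theta = np.linalg.solve(A, B)
    return np.all(Theta >= -tol)
```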


IEEE Signal Processing Magazine | 2014

A Signal Processing Perspective on Hyperspectral Unmixing: Insights from Remote Sensing

Wing-Kin Ma; José M. Bioucas-Dias; Tsung-Han Chan; Nicolas Gillis; Paul D. Gader; Antonio Plaza; ArulMurugan Ambikapathi; Chong-Yung Chi

Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing (SP) for hyperspectral remote sensing [1], [2]. Blind HU aims at identifying the materials present in a captured scene, as well as their compositions, by exploiting the high spectral resolution of hyperspectral images. From an SP viewpoint, it is a blind source separation (BSS) problem. Research on this topic started in the 1990s in geoscience and remote sensing [3]-[7], enabled by technological advances in hyperspectral sensing at the time. In recent years, blind HU has attracted much interest from other fields such as SP, machine learning, and optimization, and the subsequent cross-disciplinary research activities have made blind HU a vibrant topic. The resulting impact is not limited to remote sensing: blind HU has provided a unique problem scenario that has inspired researchers from different fields to devise novel blind SP methods. In fact, one may say that blind HU has established a new branch of BSS approaches not seen in classical BSS studies. In particular, the convex geometry concepts, discovered by early remote sensing researchers through empirical observations [3]-[7] and refined by later research, are elegant and very different from the statistical-independence-based BSS approaches established in the SP field. Moreover, the latest research on blind HU is rapidly adopting advanced techniques, such as those in sparse SP and optimization. The present development of blind HU seems to be converging to a point where the lines between remote sensing-originated ideas and advanced SP and optimization concepts are no longer clear, and insights from both sides are being used to establish better methods.
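
The linear mixing model at the heart of blind HU can be stated in a few lines. The toy numpy setup below (sizes and distributions are assumptions, not taken from the article) shows the data a blind HU method receives and what it must recover.

```python
import numpy as np

# Illustrative linear mixing model: each pixel y = A s, where the
# columns of A are endmember spectra and s is the abundance vector
# (nonnegative and summing to one).
rng = np.random.default_rng(0)
M, N, P = 224, 4, 1000              # bands, endmembers, pixels (assumed sizes)
A = rng.uniform(0.0, 1.0, (M, N))   # endmember signature matrix
S = rng.dirichlet(np.ones(N), P).T  # abundances on the unit simplex
Y = A @ S                           # observed hyperspectral pixels
# Blind HU: recover A and S from Y alone -- a BSS problem whose
# convex-geometry structure (pixels lie in the simplex spanned by the
# endmembers) drives the methods surveyed in this article.
```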


IEEE Transactions on Geoscience and Remote Sensing | 2011

A Simplex Volume Maximization Framework for Hyperspectral Endmember Extraction

Tsung-Han Chan; Wing-Kin Ma; ArulMurugan Ambikapathi; Chong-Yung Chi

In the late 1990s, Winter proposed an endmember extraction belief that has had a strong impact on endmember extraction techniques in hyperspectral remote sensing. The idea is to find a maximum-volume simplex whose vertices are drawn from the pixel vectors. Winter's belief has stimulated much interest, resulting in many different variations of pixel search algorithms, widely known as N-FINDR, being proposed. In this paper, we take a continuous optimization perspective to revisit Winter's belief, with the aim of providing an alternative framework for formulating and understanding it in a systematic manner. We first prove that, fundamentally, the existence of pure pixels is not only sufficient for the Winter problem to perfectly identify the ground-truth endmembers but also necessary. Then, under the umbrella of the Winter problem, we derive two methods using two different optimization strategies. One uses alternating optimization; the resulting algorithm turns out to be an N-FINDR variant, but with the proposed formulation we can pin down some of its convergence characteristics. The other uses successive optimization; interestingly, the resulting algorithm is found to exhibit some similarity to vertex component analysis. The framework thus provides linkage and alternative interpretations for these existing algorithms. Furthermore, we propose a robust worst-case generalization of the Winter problem to account for perturbed pixel effects in the noisy scenario. An algorithm combining alternating optimization and projected subgradients is devised to solve this problem. We use both simulations and real-data experiments to demonstrate the viability and merits of the proposed algorithms.
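
The alternating-optimization strategy can be illustrated with a simplified N-FINDR-style loop. This numpy sketch (function name, initialization, and iteration count are assumptions, and it omits the paper's convergence analysis and robust variant) cyclically swaps each vertex for the pixel that maximizes the simplex volume.

```python
import numpy as np

def nfindr_like(X, N, iters=3, seed=0):
    """Winter's volume-maximization idea, sketched as alternating
    optimization: X has shape (N-1, P) (pixels already reduced to
    N-1 dimensions); each vertex is updated in turn to the pixel
    that maximizes the simplex volume."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[1], N, replace=False)  # random initial vertices

    def vol(cols):
        E = X[:, cols]
        D = E[:, :-1] - E[:, -1:]
        return abs(np.linalg.det(D))                # volume up to a constant

    for _ in range(iters):
        for j in range(N):                          # cyclic vertex updates
            idx[j] = max(range(X.shape[1]),
                         key=lambda p: vol(np.r_[idx[:j], [p], idx[j + 1:]]))
    return idx  # indices of the estimated endmember pixels
```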


IEEE Transactions on Signal Processing | 2008

A Convex Analysis Framework for Blind Separation of Non-Negative Sources

Tsung-Han Chan; Wing-Kin Ma; Chong-Yung Chi; Yue Joseph Wang

This paper presents a new framework for blind source separation (BSS) of non-negative source signals. The proposed framework, referred to herein as convex analysis of mixtures of non-negative sources (CAMNS), is deterministic, requiring no source-independence assumption, the entrenched premise of many existing (usually statistical) BSS frameworks. The development is based on a special assumption called local dominance. It is a good assumption for source signals exhibiting sparsity or high contrast, and is thus considered realistic for many real-world problems such as multichannel biomedical imaging. Under local dominance and several standard assumptions, we apply convex analysis to establish a new BSS criterion, which states that the source signals can be perfectly identified (in a blind fashion) by finding the extreme points of an observation-constructed polyhedral set. Methods for fulfilling the CAMNS criterion are also derived, using either linear programming or simplex geometry. Simulation results on several data sets are presented to demonstrate the efficacy of the proposed method over several other reported BSS methods.
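
Since the framework hinges on local dominance, a short numpy check of that assumption may help. The helper below is an illustrative assumption, not code from the paper: for each source there must be at least one sample where it alone is active.

```python
import numpy as np

def locally_dominant(S, tol=1e-12):
    """Check the local-dominance assumption behind CAMNS: for every
    source (row of S) there is at least one sample index where that
    source is positive and all other sources are (numerically) zero."""
    K = S.shape[0]
    for i in range(K):
        others = np.delete(S, i, axis=0)
        hits = (S[i] > tol) & np.all(others <= tol, axis=0)
        if not hits.any():
            return False
    return True

# Sparse or high-contrast sources (e.g., multichannel biomedical
# images) typically satisfy this, which is why the assumption is
# considered realistic in those applications.
```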


European Conference on Computer Vision | 2012

Robust and practical face recognition via structured sparsity

Kui Jia; Tsung-Han Chan; Yi Ma

Sparse representation-based classification (SRC) methods have recently drawn much attention in face recognition due to their good performance and robustness against misalignment, illumination variation, and occlusion. These methods assume that the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios the errors are not truly pixel-wise sparse but rather sparsely distributed with structure, i.e., they constitute contiguous regions at different face positions. In this paper, we introduce a class of structured sparsity-inducing norms into the SRC framework to model the various corruptions in face images caused by misalignment, shadow (due to illumination change), and occlusion. For practical face recognition, we also develop an automatic face alignment method based on minimizing the structured sparsity norm. Experiments on benchmark face datasets show improved performance over SRC and other alternative methods.
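
For context, here is a minimal sketch of the baseline SRC scheme the paper extends: the generic pixel-wise sparse version, solved with plain ISTA. The function name, regularization weight, and solver choice are assumptions, and the structured-sparsity norms themselves are not implemented.

```python
import numpy as np

def src_classify(y, D, labels, lam=0.01, iters=200):
    """Baseline sparse-representation classification: solve
    min_x 0.5*||y - D x||^2 + lam*||x||_1 via ISTA, then assign the
    class whose training columns best reconstruct y."""
    D = D / np.linalg.norm(D, axis=0)   # unit-norm dictionary columns
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):              # ISTA: gradient step + soft threshold
        g = x - (D.T @ (D @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Usage: label = src_classify(test_face, train_matrix, train_labels)
# where train_matrix stacks vectorized training faces as columns.
```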


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Nonnegative Least-Correlated Component Analysis for Separation of Dependent Sources by Volume Maximization

Fa-Yu Wang; Chong-Yung Chi; Tsung-Han Chan; Yue Joseph Wang

Although significant efforts have been made in developing nonnegative blind source separation techniques, accurate separation of positive yet dependent sources remains a challenging task. In this paper, a joint correlation function of multiple signals is proposed to reveal and confirm that the observations after nonnegative mixing would have higher joint correlation than the original unknown sources. Accordingly, a new nonnegative least-correlated component analysis (nLCA) method is proposed to design the unmixing matrix by minimizing the joint correlation function among the estimated nonnegative sources. In addition to a closed-form solution for unmixing two mixtures of two sources, the general algorithm of nLCA for the multisource case is developed based on an iterative volume maximization (IVM) principle and linear programming. The source identifiability and required conditions are discussed and proven. The proposed nLCA algorithm, denoted by nLCA-IVM, is evaluated with both simulation data and real biomedical data to demonstrate its superior performance over several existing benchmark methods.
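
The abstract's key observation, that nonnegative mixing raises the correlation among the observed signals, is easy to verify numerically. The toy experiment below is an illustration (the mixing matrix and source model are assumptions), not the paper's joint correlation function.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.exponential(1.0, (2, 5000))   # nonnegative, nearly uncorrelated sources
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])            # nonnegative mixing matrix
X = A @ S                             # observed mixtures
print("source correlation :", np.corrcoef(S)[0, 1])   # close to 0
print("mixture correlation:", np.corrcoef(X)[0, 1])   # substantially higher
# nLCA designs the unmixing matrix to drive this correlation back down,
# which the paper shows is achievable via iterative volume maximization.
```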


IEEE Transactions on Geoscience and Remote Sensing | 2011

Chance-Constrained Robust Minimum-Volume Enclosing Simplex Algorithm for Hyperspectral Unmixing

ArulMurugan Ambikapathi; Tsung-Han Chan; Wing-Kin Ma; Chong-Yung Chi

Effective unmixing of a hyperspectral data cube under a noisy scenario has been a challenging research problem in the remote sensing arena. A branch of existing hyperspectral unmixing algorithms is based on Craig's criterion, which states that the vertices of the minimum-volume simplex enclosing the hyperspectral data should yield high-fidelity estimates of the endmember signatures associated with the data cloud. Recently, we developed a minimum-volume enclosing simplex (MVES) algorithm based on Craig's criterion and validated that it is very useful for unmixing highly mixed hyperspectral data. However, the presence of noise in the observations expands the actual data cloud, and as a consequence, the endmember estimates obtained by applying Craig-criterion-based algorithms to the noisy data may no longer be in close proximity to the true endmember signatures. In this paper, we propose a robust MVES (RMVES) algorithm that accounts for the noise effects in the observations by employing chance constraints, which in turn control the volume of the resulting simplex. Under the Gaussian noise assumption, the chance-constrained MVES problem can be formulated as a deterministic nonlinear program. The problem can then be conveniently handled by alternating optimization, in which each subproblem is solved with sequential quadratic programming solvers. The proposed RMVES is compared with several existing benchmark algorithms, including its predecessor, the MVES algorithm. Monte Carlo simulations and real hyperspectral data experiments are presented to demonstrate the efficacy of the proposed RMVES algorithm.
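
Under the Gaussian noise assumption, a chance constraint on a linear function of the data admits a standard deterministic reformulation. The helper below sketches that textbook identity; the function name and single-constraint form are assumptions, not the paper's exact program.

```python
import numpy as np
from scipy.stats import norm

def chance_to_deterministic(a, b, sigma, eta):
    """Standard Gaussian chance-constraint reformulation: for noise
    n ~ N(0, sigma^2 I), the constraint
        Pr(a @ (x + n) <= b) >= eta
    holds iff
        a @ x <= b - norm.ppf(eta) * sigma * ||a||.
    Returns the tightened right-hand side for the deterministic program."""
    return b - norm.ppf(eta) * sigma * np.linalg.norm(a)

# Picking eta > 0.5 tightens the constraint (ppf(eta) > 0), which is
# how chance constraints shrink the simplex volume in RMVES-style designs.
```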


IEEE Transactions on Geoscience and Remote Sensing | 2013

Hyperspectral Data Geometry-Based Estimation of Number of Endmembers Using p-Norm-Based Pure Pixel Identification Algorithm

ArulMurugan Ambikapathi; Tsung-Han Chan; Chong-Yung Chi; Kannan Keizer

Hyperspectral endmember extraction is a process to estimate endmember signatures from hyperspectral observations, in an attempt to study the underlying mineral composition of a landscape. However, estimating the number of endmembers, which is usually assumed to be known a priori in most endmember estimation algorithms (EEAs), remains a challenging task. In this paper, assuming a hyperspectral linear mixing model, we propose a hyperspectral data geometry-based approach for estimating the number of endmembers by utilizing the successive endmember estimation strategy of an EEA. The approach is fulfilled by two novel algorithms, namely the geometry-based estimation of number of endmembers algorithms using the convex hull (GENE-CH) and the affine hull (GENE-AH). The GENE-CH and GENE-AH algorithms are based on the fact that all the observed pixel vectors lie in the convex hull and the affine hull of the endmember signatures, respectively. The proposed GENE algorithms estimate the number of endmembers by applying Neyman-Pearson hypothesis testing to the endmember estimates provided by a successive EEA until the estimate of the number of endmembers is obtained. Since the estimation accuracies of the proposed GENE algorithms depend on the performance of the EEA used, a reliable, reproducible, and successive EEA, called the p-norm-based pure pixel identification algorithm, is also proposed.
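
The affine-hull idea behind GENE-AH can be sketched as a residual test. The helper below is a simplified, hypothetical illustration (least squares with a heavily weighted sum-to-one row) rather than the paper's Neyman-Pearson formulation.

```python
import numpy as np

def affine_hull_residual(e_new, E, w=1e6):
    """Distance from a new endmember estimate e_new (1-D array) to the
    affine hull of the columns of E, computed via least squares with a
    weighted row enforcing sum(theta) = 1 approximately."""
    m, k = E.shape
    A = np.vstack([E, w * np.ones((1, k))])   # append sum-to-one constraint row
    b = np.concatenate([e_new, [w]])
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(e_new - E @ theta)

# A small residual suggests e_new adds no new endmember direction, so
# the number of endmembers has been reached; GENE formalizes this
# stopping decision as a Neyman-Pearson hypothesis test.
```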


IEEE Transactions on Medical Imaging | 2011

Tissue-Specific Compartmental Analysis for Dynamic Contrast-Enhanced MR Imaging of Complex Tumors

Li Chen; Peter L. Choyke; Tsung-Han Chan; Chong-Yung Chi; Ge Wang; Yue Joseph Wang


Collaboration


Dive into Tsung-Han Chan's collaboration.

Top Co-Authors

Chong-Yung Chi
National Tsing Hua University

Wing-Kin Ma
The Chinese University of Hong Kong

Yi Ma
ShanghaiTech University

Xiao Fu
Oregon State University

Shenghua Gao
ShanghaiTech University