Chia-Hsiang Lin
National Tsing Hua University
Publications
Featured research published by Chia-Hsiang Lin.
IEEE Transactions on Geoscience and Remote Sensing | 2015
Chia-Hsiang Lin; Wing-Kin Ma; Wei-Chiang Li; Chong-Yung Chi; ArulMurugan Ambikapathi
In blind hyperspectral unmixing (HU), the pure-pixel assumption is well known to be powerful in enabling simple and effective blind HU solutions. However, the pure-pixel assumption is not always satisfied in an exact sense, especially in scenarios where pixels are heavily mixed. In the no-pure-pixel case, a good blind HU approach to consider is the minimum volume enclosing simplex (MVES). Empirical experience has suggested that MVES algorithms can perform well without pure pixels, although it has not been entirely clear from a theoretical viewpoint why this is true. This paper aims to address the latter issue. We develop an analysis framework in which the perfect endmember identifiability of MVES is studied in the noiseless case. We prove that MVES is indeed robust against the lack of pure pixels, as long as the pixels are not too heavily mixed and not too asymmetrically spread. The theoretical results are supported by numerical simulation results.
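For reference, the MVES criterion studied here can be stated as the following optimization; the notation below is generic rather than copied from the paper, with x_1, ..., x_M denoting the (dimension-reduced) pixel vectors:

\min_{\mathbf{b}_1,\dots,\mathbf{b}_N} \ \operatorname{vol}\big(\operatorname{conv}\{\mathbf{b}_1,\dots,\mathbf{b}_N\}\big) \quad \text{s.t.} \quad \mathbf{x}_m \in \operatorname{conv}\{\mathbf{b}_1,\dots,\mathbf{b}_N\}, \quad m = 1,\dots,M.

Under the noiseless linear mixing model x_m = A s_m, with the abundance vectors s_m lying on the unit simplex, perfect identifiability means that every minimizer {b_1, ..., b_N} coincides with the true endmember set; the result above says this holds without pure pixels, provided the mixing is not too heavy or too asymmetric.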
Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing | 2013
ArulMurugan Ambikapathi; Tsung-Han Chan; Chia-Hsiang Lin; Chong-Yung Chi
Accurate estimation of the number of endmembers in given hyperspectral data plays a vital role in effective unmixing and identification of the materials present in the scene of interest. Estimating the number of endmembers, however, is quite challenging due to the inevitable combined presence of noise and outliers. Recently, we proposed a convex-geometry-based algorithm, namely geometry based estimation of number of endmembers — affine hull (GENE-AH) [1], to reliably estimate the number of endmembers in the presence of noise alone. In this paper, we demonstrate that the GENE-AH algorithm can also be used for reliable estimation of the number of endmembers for data corrupted by both outliers and noise, without any prior knowledge about the outliers present in the data. First, the GENE-AH algorithm (along with its inherent endmember extraction algorithm, the p-norm-based pure pixel identification (TRI-P) algorithm) is used to identify the set of candidate pixels (possibly including outlier pixels) that contribute to the affine dimension of the hyperspectral data. Since the affine hull of the hyperspectral data remains intact for any data set associated with the same endmembers (which need not themselves be in the data set), applying GENE-AH again to the corrupted data with the identified candidate pixels removed yields a reliable estimate of the true affine dimension (number of endmembers) of the given data. Computer simulations under various scenarios demonstrate the efficacy of the proposed methodology.
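The two-pass procedure described above can be sketched in a few lines of Python; gene_ah and tri_p below are hypothetical callables standing in for implementations of the cited GENE-AH and TRI-P algorithms, not functions from any published package:

import numpy as np

def estimate_num_endmembers(X, n_candidates, gene_ah, tri_p):
    # X: (bands x pixels) hyperspectral data matrix, possibly corrupted
    # by both noise and outliers.
    # Pass 1: find the candidate pixels (possibly outliers) that
    # contribute to the affine dimension of the corrupted data.
    candidates = tri_p(X, n_candidates)
    # Removing these pixels leaves the affine hull intact, since the
    # hull is determined by the endmembers, not by particular pixels.
    keep = np.setdiff1d(np.arange(X.shape[1]), candidates)
    # Pass 2: re-estimate the affine dimension (number of endmembers)
    # on the cleaned data.
    return gene_ah(X[:, keep])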
International Conference on Acoustics, Speech, and Signal Processing | 2013
Chia-Hsiang Lin; ArulMurugan Ambikapathi; Wei-Chiang Li; Chong-Yung Chi
Hyperspectral unmixing (HU) is a process to extract the underlying endmember signatures (or simply endmembers) and the corresponding proportions (abundances) from the observed hyperspectral data cloud. Craig's criterion (minimum-volume simplex enclosing the data cloud) and Winter's criterion (maximum-volume simplex inside the data cloud) are widely used for HU. For perfect identifiability of the endmembers, we have recently shown in [1] that the presence of pure pixels (pixels fully contributed by a single endmember) for all endmembers is a necessary and sufficient condition for Winter's criterion, and a sufficient condition for Craig's criterion. A necessary condition for endmember identifiability (EI) under Craig's criterion has remained open even for the three-endmember case. In this work, considering a three-endmember scenario, we carry out a statistical analysis to identify a necessary and statistically sufficient condition on the purity level (a measure of the mixing levels of the endmembers) of the data, so that Craig's criterion can guarantee perfect identification of the endmembers. Precisely, we prove that a purity level strictly greater than 1/√2 is necessary for EI, while the same condition is sufficient for EI with probability 1. Since the presence of pure pixels is a very strong requirement that is seldom met in practice, these results foster the practical applicability of Craig's criterion over Winter's criterion in real-world problems.
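To make the statement concrete, one common way to define the purity level (the paper's exact definition may differ in detail) is, for abundance vectors s_n on the unit simplex,

\rho \;=\; \max_{n=1,\dots,M} \lVert \mathbf{s}_n \rVert_2 \;\in\; \Big[\tfrac{1}{\sqrt{N}},\, 1\Big],

so that ρ = 1 exactly when a pure pixel exists. The result above then reads: Craig's criterion identifies the endmembers with probability 1 whenever ρ > 1/√2, a much weaker requirement than ρ = 1.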
IEEE Access | 2017
Guixian Xu; Chia-Hsiang Lin; Weiguo Ma; Shanzhi Chen; Chong-Yung Chi
Heterogeneous networks (HetNets) employing massive multiple-input multiple-output (MIMO) have been recognized as a promising technique to enhance network capacity and improve energy efficiency for the fifth generation of wireless communications. However, most existing schemes for coordinated beamforming (CoBF) in a massive MIMO HetNet unrealistically assume the availability of perfect channel state information (CSI) on one hand; on the other hand, cascading each antenna with a distinct radio-frequency chain in massive MIMO is neither power- nor cost-efficient. In this paper, we consider a massive-MIMO-enabled HetNet framework consisting of one macrocell base station (MBS), equipped with an analog beamformer followed by a digital beamformer, and one femtocell base station (FBS), equipped with a digital beamformer. In the presence of Gaussian CSI errors, we propose a robust hybrid CoBF (HyCoBF) design comprising an analog beamforming design for the MBS and a digital CoBF design for both the MBS and FBS. To this end, an outage-probability-constrained robust HyCoBF problem is formulated by minimizing the total transmit power. The analog beamforming mechanism at the MBS is a newly devised low-complexity beam selection scheme that selects analog beams from a discrete Fourier transform matrix codebook. Then, a conservative approximate CoBF solution is obtained via semidefinite relaxation and an extended Bernstein-type inequality. Furthermore, a distributed implementation of the obtained CoBF solution using the alternating direction method of multipliers is proposed. Finally, numerical simulations are provided to demonstrate the efficacy of the proposed robust HyCoBF algorithm.
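As a rough illustration of the semidefinite relaxation step, the sketch below shows the standard SDR for power-minimization downlink beamforming, not the paper's full outage-constrained robust design; the sizes, channels, and targets are invented for the example:

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
Nt, K = 8, 3                  # toy numbers of antennas and users
H = [rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt) for _ in range(K)]
gamma, sigma2 = 1.0, 0.1      # SINR target and noise power (made up)

# One PSD matrix per user: W_k relaxes w_k w_k^H (this is the SDR step).
W = [cp.Variable((Nt, Nt), hermitian=True) for _ in range(K)]
cons = [Wk >> 0 for Wk in W]
for k in range(K):
    Qk = np.outer(H[k], H[k].conj())
    sig = cp.real(cp.trace(Qk @ W[k]))
    intf = sum(cp.real(cp.trace(Qk @ W[j])) for j in range(K) if j != k)
    cons.append(sig >= gamma * (intf + sigma2))  # SINR constraint, linear in W

prob = cp.Problem(cp.Minimize(sum(cp.real(cp.trace(Wk)) for Wk in W)), cons)
prob.solve()

In the robust design described above, the plain SINR constraints are replaced by Bernstein-type-inequality-based outage constraints that account for the Gaussian CSI errors.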
International Conference on Acoustics, Speech, and Signal Processing | 2015
Chia-Hsiang Lin; Chong-Yung Chi; Yu-Hsiang Wang; Tsung-Han Chan
Hyperspectral unmixing (HU) is an essential signal processing procedure for blindly extracting the hidden spectral signatures of materials (or endmembers) from observed hyperspectral imaging data. Craig's criterion, stating that the vertices of the minimum volume enclosing simplex (MVES) of the data cloud yield high-fidelity endmember estimates, has been widely used for designing endmember extraction algorithms (EEAs), especially in the scenario of no pure pixels. However, most Craig-criterion-based EEAs suffer from high computational complexity, due to heavy simplex volume computations, and from performance sensitivity to random initialization. In this work, based on the idea that Craig's simplex with N vertices can be defined by its N associated hyperplanes, we develop a fast and reproducible EEA by identifying these hyperplanes from N(N - 1) data pixels extracted via simple and effective linear algebraic formulations, together with an endmember identifiability analysis. Monte Carlo simulations demonstrate the superior efficacy of the proposed EEA over state-of-the-art Craig-criterion-based EEAs in both computational efficiency and estimation accuracy.
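The hyperplane idea admits a compact illustration: once the N hyperplanes H_i = {x : b_i^T x = c_i} bounding Craig's simplex in the (N-1)-dimensional reduced space have been estimated, each endmember is the unique intersection of the other N-1 hyperplanes, i.e., the solution of a small linear system. A minimal numpy sketch, assuming the hyperplane estimation itself (the heart of the algorithm) is already done:

import numpy as np

def endmembers_from_hyperplanes(B, c):
    # B: (N, N-1) array; row i is the normal vector b_i of hyperplane
    # H_i = {x : b_i @ x = c_i} in the (N-1)-dim reduced space.
    # Endmember i lies on every hyperplane except H_i.
    N = B.shape[0]
    E = np.zeros((N, N - 1))
    for i in range(N):
        rows = [j for j in range(N) if j != i]
        E[i] = np.linalg.solve(B[rows], c[rows])  # intersect N-1 hyperplanes
    return E  # row i is the i-th endmember in reduced coordinates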
International Conference on Communications | 2017
Guixian Xu; Chia-Hsiang Lin; Weiguo Ma; Chong-Yung Chi
Heterogeneous networks (HetNets) employing massive multiple-input multiple-output (MIMO) have been recognized as a promising technique to enhance network capacity and improve energy efficiency for the fifth generation (5G) of wireless communications. However, most existing schemes for coordinated beamforming (CoBF) in a massive MIMO HetNet unrealistically assume the availability of perfect channel state information (CSI) on one hand; on the other hand, cascading each antenna with a distinct radio frequency (RF) chain in massive MIMO is neither power- nor cost-efficient. In this paper, we consider a massive-MIMO-enabled HetNet framework consisting of one macrocell base station (MBS), equipped with an analog beamformer followed by a digital beamformer, and one femtocell base station (FBS), equipped with a digital beamformer. In the presence of Gaussian CSI errors, we propose a robust hybrid CoBF (HyCoBF) design comprising an analog beamforming design for the MBS and a digital CoBF design for both the MBS and FBS. To this end, an outage-probability-constrained robust HyCoBF problem is formulated by minimizing the total transmit power. The analog beamforming mechanism at the MBS is a newly devised low-complexity beam selection scheme that selects analog beams from a discrete Fourier transform (DFT) matrix codebook. Then a conservative approximate CoBF solution is obtained via semidefinite relaxation (SDR) and an extended Bernstein-type inequality. Finally, numerical simulations are provided to demonstrate the efficacy of the proposed HyCoBF algorithm.
IEEE Transactions on Biomedical Engineering | 2016
ArulMurugan Ambikapathi; Tsung-Han Chan; Chia-Hsiang Lin; Fei-Shih Yang; Chong-Yung Chi; Yue Wang
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a powerful imaging modality for studying the pharmacokinetics of a suspected cancer/tumor tissue. The pharmacokinetic (PK) analysis of prostate cancer includes the estimation of time activity curves (TACs) and, thereby, the corresponding kinetic parameters (KPs), and plays a pivotal role in the diagnosis and prognosis of prostate cancer. In this paper, we develop a blind source separation algorithm, namely the convex-optimization-based KP estimation (COKE) algorithm, for PK analysis based on compartmental modeling of DCE-MRI data, for effective prostate tumor detection and quantification. The COKE algorithm first identifies the best three representative pixels in the DCE-MRI data, corresponding to the plasma, fast-flow, and slow-flow TACs, respectively. The estimation accuracy of the flux rate constants (FRCs) of the fast-flow and slow-flow TACs directly affects the estimation accuracy of the KPs, which provide the cancer and normal tissue distribution maps in the prostate region. The COKE algorithm exploits the matrix structure (Toeplitz, lower triangular, and exponential decay) of the original nonconvex FRC estimation problem and reformulates it into two convex optimization problems that can reliably estimate the FRCs. After estimation of the FRCs, the KPs can be effectively estimated by solving a pixel-wise constrained curve-fitting (convex) problem. Simulation results demonstrate the efficacy of the proposed COKE algorithm. The COKE algorithm is also evaluated on DCE-MRI data of four patients with prostate cancer, and the obtained results are consistent with clinical observations.
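For orientation, a standard compartmental relation of the kind underlying such PK analysis is the Tofts model (the paper's exact parameterization may differ):

C_t(t) \;=\; K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{\mathrm{ep}} (t - \tau)}\, d\tau,

where C_p is the plasma TAC and the KPs (K^trans, k_ep) are to be estimated. Discretizing this convolution on the sampling grid produces exactly the Toeplitz, lower-triangular, exponentially decaying matrix structure that the COKE reformulation exploits.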
IEEE Transactions on Neural Networks and Learning Systems | 2018
Chia-Hsiang Lin; Chong-Yung Chi; Lulu Chen; David J. Miller; Yue Wang
While non-negative blind source separation (nBSS) has found many successful applications in science and engineering, model order selection, i.e., determining the number of sources, remains a critical yet unresolved problem. Various model order selection methods have been proposed and applied to real-world data sets, but with limited success, with both order over- and under-estimation reported. By studying existing schemes, we have found that the unsatisfactory results are mainly due to invalid assumptions, model oversimplification, subjective thresholding, and/or assumptions made solely for mathematical convenience. Building on our earlier work that reformulated model order selection for nBSS with more realistic assumptions and models, we report a formally revised model order selection criterion rooted in the minimum description length (MDL) principle. Adopting widely invoked assumptions for achieving a unique nBSS solution, we model the mixing matrix as consisting of deterministic unknowns, with the source signals following a multivariate Dirichlet distribution. We derive a computationally efficient stochastic algorithm to obtain approximate maximum-likelihood estimates of the model parameters and apply Monte Carlo integration to determine the description length. Our modeling and estimation strategy exploits the characteristic geometry of the data simplex in nBSS. We validate our nBSS-MDL criterion through extensive simulation studies and on four real-world data sets, demonstrating its strong performance and general applicability to nBSS. The proposed nBSS-MDL criterion consistently detects the true number of sources in all of our case studies.
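In generic form (notation ours, not taken from the paper), an MDL criterion selects the model order as

\hat{N} \;=\; \arg\min_{N} \Big\{ -\log p\big(\mathbf{X} \mid \hat{\boldsymbol{\theta}}_N\big) \;+\; \mathrm{pen}(N) \Big\},

where the first term is the code length of the data under the fitted order-N model (here involving Dirichlet-distributed sources and evaluated via Monte Carlo integration) and pen(N) is the code length of the order-N model parameters.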
Archive | 2017
Chong-Yung Chi; Wei-Chiang Li; Chia-Hsiang Lin
This work addresses the numerical computation of the two-dimensional flow of yield stress fluids (with Bingham and Herschel-Bulkley models) based on a variational approach and a finite element discretization. The main goal of this paper is to propose an alternative optimization method to existing procedures such as penalization and augmented Lagrangian techniques. It is shown that the minimum principle for Bingham and Herschel-Bulkley yield stress fluid steady flows can, indeed, be formulated as a second-order cone programming (SOCP) problem, for which very efficient primal-dual interior point solvers are available. In particular, the formulation does not require any regularization of the visco-plastic model, as is usually the case for existing techniques, therefore avoiding the difficult choice of the regularization parameter. Besides, it is also unnecessary to adopt a mixed stress-velocity approach or to discretize explicitly auxiliary variables, as frequently proposed in existing methods. Finally, the performance of dedicated SOCP solvers, such as the Mosek software package, makes it possible to solve large-scale problems on a personal computer within seconds. The proposed method is validated on classical benchmark examples and used to simulate the flow generated around a plate during its withdrawal from a bath of yield stress fluid.
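The core reformulation step can be sketched as follows (a standard epigraph construction, stated in generic notation). For the Bingham model, the minimum principle minimizes a functional of the form

J(u) \;=\; \int_\Omega \Big( \tfrac{\mu}{2}\, \lVert \mathrm{D}u \rVert^2 \;+\; \tau_0\, \lVert \mathrm{D}u \rVert \Big)\, dx \;-\; \int_\Omega f \cdot u \, dx,

and after finite element discretization each nonsmooth norm term is handled with an auxiliary variable t_e and the second-order cone constraint ||(Du)_e|| ≤ t_e, so the whole problem becomes an SOCP with no regularization of the visco-plastic model.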
SIAM Journal on Imaging Sciences | 2018
Chia-Hsiang Lin; Ruiyuan Wu; Wing-Kin Ma; Chong-Yung Chi; Yue Wang
Consider a structured matrix factorization model in which one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have spurred much interest across different areas, such as hyperspectral unmixing in remote sensing and topic discovery in machine learning, to name a few. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in the prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally, it is also identical to that of minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results.
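Given a facet description {x : Gx ≤ h} of the convex hull of the (dimension-reduced) data, as produced by the facet enumeration step, the MVIE computation is a classical convex program; a minimal CVXPY sketch, with variable names chosen here for illustration:

import cvxpy as cp

def mvie(G, h):
    # Maximum-volume ellipsoid E = {F u + c : ||u||_2 <= 1} inscribed
    # in the polytope {x : G @ x <= h}; G is (m, d), h is (m,).
    m, d = G.shape
    F = cp.Variable((d, d), PSD=True)  # ellipsoid shape matrix
    c = cp.Variable(d)                 # ellipsoid center
    cons = [cp.norm(F @ G[i]) + G[i] @ c <= h[i] for i in range(m)]
    prob = cp.Problem(cp.Maximize(cp.log_det(F)), cons)
    prob.solve()
    return F.value, c.value  # vol(E) grows with det(F)

Since the ellipsoid volume is proportional to det(F), maximizing log det(F) over these per-facet cone constraints yields the MVIE; the paper then recovers the SSMF factors from the points where this ellipsoid touches the facets.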