Hyon-Jung Kim
University of Oulu
Publications
Featured research published by Hyon-Jung Kim.
Journal of the American Statistical Association | 2003
Alan E. Gelfand; Hyon-Jung Kim; C. F. Sirmans; Sudipto Banerjee
In many applications, the objective is to build regression models to explain a response variable over a region of interest under the assumption that the responses are spatially correlated. In nearly all of this work, the regression coefficients are assumed to be constant over the region. However, in some applications, coefficients are expected to vary at the local or subregional level. Here we focus on the local case. Although parametric modeling of the spatial surface for the coefficient is possible, here we argue that it is more natural and flexible to view the surface as a realization from a spatial process. We show how such modeling can be formalized in the context of Gaussian responses, providing an attractive interpretation in terms of both random effects and explaining the residuals. We also offer extensions to generalized linear models and to spatio-temporal settings. We illustrate both static and dynamic modeling with a dataset that attempts to explain the (log) selling price of single-family houses.
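For orientation, a schematic form of such a spatially varying coefficient model is given below; the notation is ours and the paper's exact specification may differ.

```latex
% Spatially varying coefficient model (schematic, notation ours)
\[
  Y(s) = x(s)^{\top}\tilde{\beta}(s) + \epsilon(s), \qquad
  \tilde{\beta}(s) = \beta + \beta(s),
\]
```

where β is a global coefficient vector, β(s) is a mean-zero (multivariate) Gaussian process describing local departures from it, and ε(s) is measurement error; the process β(s) then plays the dual role of spatially structured random effects and a model for spatially correlated residuals.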
IEEE Transactions on Signal Processing | 2008
Esa Ollila; Hyon-Jung Kim; Visa Koivunen
Despite the increased interest in independent component analysis (ICA) during the past two decades, a simple closed-form expression of the Cramér-Rao bound (CRB) for demixing matrix estimation has not been established in the open literature. In the present paper we fill this gap by deriving a simple closed-form expression for the CRB of the demixing matrix directly from its definition. A simulation study comparing ICA estimators with the CRB is given.
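For reference, the definition from which the bound is derived is the standard one below; the paper's closed-form evaluation of this bound for the demixing matrix is not reproduced here.

```latex
% Cramér-Rao lower bound for an unbiased estimator (standard definition)
\[
  \operatorname{Cov}(\hat{\theta}) \succeq \mathcal{I}(\theta)^{-1},
  \qquad
  \mathcal{I}(\theta) = \mathbb{E}\!\left[
    \nabla_{\theta}\log p(\mathbf{x};\theta)\,
    \nabla_{\theta}\log p(\mathbf{x};\theta)^{\top}
  \right],
\]
```

where ⪰ denotes the positive semidefinite (Loewner) ordering and I(θ) is the Fisher information matrix.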
Statistics in Medicine | 2010
Karri Seppä; Timo Hakulinen; Hyon-Jung Kim; Esa Läärä
Assessing regional differences in the survival of cancer patients is important but difficult when the separate regions are small or sparsely populated. In this paper, we apply a mixture cure fraction model with random effects to cause-specific survival data of female breast cancer patients collected by the population-based Finnish Cancer Registry. Two sets of random effects were used to capture the regional variation in the cure fraction and in the survival of the non-cured patients, respectively. This hierarchical model was implemented in a Bayesian framework using a Metropolis-within-Gibbs algorithm. To avoid poor mixing of the Markov chain when the variance of either set of random effects was close to zero, posterior simulations were based on a parameter-expanded model with tailor-made proposal distributions in the Metropolis steps. The random effects allowed the cure fraction model to be fitted to the sparse regional data and the regional variation in 10-year cause-specific breast cancer survival to be estimated with a parsimonious number of parameters. Before 1986, the capital of Finland clearly stood out from the rest, but since then all 21 hospital districts have achieved approximately the same level of survival.
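A schematic version of a mixture cure fraction model with two sets of regional random effects is sketched below; the link function and parameterization are illustrative assumptions, not necessarily those used in the paper.

```latex
% Mixture cure fraction model with regional random effects (schematic, notation ours)
\[
  S_r(t) = \pi_r + (1 - \pi_r)\, S_{u,r}(t), \qquad
  \operatorname{logit}(\pi_r) = \mu + a_r, \quad a_r \sim N(0, \sigma_a^2),
\]
```

where S_r(t) is the cause-specific survival in region r, π_r the regional cure fraction, and S_{u,r}(t) the survival of the non-cured patients, which carries the second set of random effects b_r ~ N(0, σ_b²), e.g. through a region-specific parameter of its survival distribution.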
international symposium on communications control and signal processing | 2014
Esa Ollila; Hyon-Jung Kim; Visa Koivunen
Compressed sensing (CS), or sparse signal reconstruction (SSR), is a signal processing technique that exploits the fact that acquired data can have a sparse representation in some basis. One popular technique to reconstruct or approximate the unknown sparse signal is iterative hard thresholding (IHT), which, however, performs very poorly under non-Gaussian noise conditions or in the face of outliers (gross errors). In this paper, we propose a robust IHT method based on ideas from M-estimation that estimates the sparse signal and the scale of the error distribution simultaneously. The method has a negligible performance loss compared to IHT under Gaussian noise, but superior performance under heavy-tailed non-Gaussian noise conditions.
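A minimal sketch of an IHT iteration with a Huber-type robustification and a jointly updated scale estimate is given below; it illustrates the general idea only, and the weight function, scale estimator and step size are our assumptions rather than the exact algorithm of the paper.

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Huber influence function: identity near zero, clipped beyond c."""
    return np.clip(r, -c, c)

def robust_iht(y, A, s, n_iter=200, mu=1.0, c=1.345):
    """Sketch of a robust iterative hard thresholding scheme.

    y : (m,) measurements, A : (m, n) sensing matrix, s : target sparsity.
    Assumes A is scaled so that a unit step size is stable.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        r = y - A @ x                                   # residuals
        sigma = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD about zero)
        r_psi = sigma * huber_psi(r / sigma, c)         # clipped "pseudo-residuals"
        x = x + mu * A.T @ r_psi                        # gradient-type step on robust loss
        idx = np.argsort(np.abs(x))[:-s]                # hard thresholding:
        x[idx] = 0.0                                    # keep the s largest-magnitude entries
    return x
```

Under Gaussian noise the clipped residuals coincide with the ordinary ones for most observations, which is why the loss relative to plain IHT stays small, while gross errors are bounded by the clipping.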
international symposium on biomedical imaging | 2011
Esa Ollila; Hyon-Jung Kim
Independent component analysis (ICA) is a widely used multivariate analysis technique with applications in many diverse fields such as medical imaging, image processing and data mining. To date, almost all ICA research has focused on estimation of the mixing and demixing matrices, while almost nothing exists on testing hypotheses about the mixing vectors or mixing coefficients. In this paper, we construct tests for this purpose using the deflation-based FastICA estimator. The developed (Wald-type) test statistic utilizes the asymptotic covariance matrix of the estimator and its asymptotic normality. The test can be used, e.g., in fMRI analysis, where the mixing vectors correspond to the time courses of the independent spatial maps. In this context, it is of interest to test whether a hypothesized task-related time course differs significantly from the found mixing vectors. Simulations and an example on synthetic data illustrate the validity and usefulness of our approach.
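The Wald-type statistic has the usual quadratic form; a generic sketch is below, where `a_hat` is the estimated mixing vector, `a0` the hypothesized time course, and `Sigma_hat` an estimate of the covariance of `a_hat` (whose derivation for the deflation-based FastICA estimator is the subject of the paper and is not reproduced here).

```python
import numpy as np
from scipy.stats import chi2

def wald_test(a_hat, a0, Sigma_hat):
    """Generic Wald test of H0: a = a0, with Sigma_hat estimating Cov(a_hat)."""
    d = np.asarray(a_hat) - np.asarray(a0)
    W = d @ np.linalg.solve(Sigma_hat, d)   # (a_hat - a0)' Sigma^{-1} (a_hat - a0)
    p_value = chi2.sf(W, df=d.size)         # asymptotic chi-square null distribution
    return W, p_value
```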
european signal processing conference | 2015
Hyon-Jung Kim; Esa Ollila; Visa Koivunen
The LASSO (Least Absolute Shrinkage and Selection Operator) has been a popular technique for simultaneous linear regression estimation and variable selection. Robust approaches to the LASSO are needed in the case of heavy-tailed errors or severe outliers. We propose a novel robust LASSO method that has a non-parametric flavor: it minimizes a criterion function based on the ranks of the residuals with a LASSO penalty. The criterion is based on pairwise differences of residuals under the least absolute deviation (LAD) loss, leading to a bounded influence function. With this criterion we can easily incorporate other penalties, such as the fused LASSO for group sparsity and smoothness. For both methods, we propose efficient algorithms for computing the solutions. Our simulation study and application examples (image denoising, prostate cancer data analysis) show that our methods outperform the usual LS/LASSO methods under either heavy-tailed errors or outliers, offering better variable selection than another robust competitor, the LAD-LASSO method.
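A sketch of the rank-type criterion with an ℓ1 penalty, written as a convex program, is shown below; the normalization, tuning parameter and solver are placeholders, and this is our reconstruction of the pairwise-difference criterion rather than the paper's own implementation.

```python
import numpy as np
import cvxpy as cp

def rank_lasso(X, y, lam):
    """Rank-based LASSO sketch: pairwise absolute differences of residuals
    (a Wilcoxon/LAD-type dispersion) plus an l1 penalty on the coefficients."""
    n, p = X.shape
    beta = cp.Variable(p)
    r = y - X @ beta
    # Signed difference matrix giving all pairwise residual differences r_i - r_j, i < j.
    # O(n^2) rows, so intended for the small n of an illustration.
    rows = [np.eye(n)[i] - np.eye(n)[j] for i in range(n) for j in range(i + 1, n)]
    D = np.vstack(rows)
    obj = cp.norm1(D @ r) / (n * (n - 1)) + lam * cp.norm1(beta)
    cp.Problem(cp.Minimize(obj)).solve()
    return beta.value
```

Because the loss depends only on differences of residuals, single extreme responses shift the fit far less than in least squares, which is the source of the bounded influence mentioned above.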
ieee signal processing workshop on statistical signal processing | 2014
Hyon-Jung Kim; Esa Ollila; Visa Koivunen; H. Vincent Poor
A new tensor approximation method is developed based on the CANDECOMP/PARAFAC (CP) factorization that enjoys both sparsity (i.e., yielding factor matrices with only a few nonzero elements) and resistance to outliers and non-Gaussian measurement noise. The method utilizes a robust bounded loss function for the errors in the low-rank tensor approximation while encouraging sparsity in the factor matrices of the tensor data with Lasso (ℓ1) regularization. A simple alternating, iteratively reweighted (IRW) Lasso algorithm is proposed to solve the resulting optimization problem. Simulation studies illustrate that the proposed method provides excellent performance in terms of mean square error accuracy under heavy-tailed noise conditions, with a relatively small loss under conventional Gaussian noise.
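Schematically, for a three-way tensor the criterion combines a bounded loss ρ on the elementwise fit residuals of a rank-R CP model with ℓ1 penalties on the factor matrices; the notation below is ours and the exact penalization may differ in the paper.

```latex
% Robust, sparse CP approximation of a tensor (schematic, notation ours)
\[
  \min_{\mathbf{A},\mathbf{B},\mathbf{C}}\;
  \sum_{i,j,k} \rho\!\Big( y_{ijk} - \sum_{r=1}^{R} a_{ir} b_{jr} c_{kr} \Big)
  \;+\; \lambda\big(\|\mathbf{A}\|_1 + \|\mathbf{B}\|_1 + \|\mathbf{C}\|_1\big),
\]
```

where ρ is a bounded (robust) loss and ‖·‖₁ the elementwise ℓ1 norm; the alternating IRW Lasso algorithm cycles over the factor matrices, solving a reweighted Lasso subproblem for each in turn.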
international conference on acoustics, speech, and signal processing | 2013
Hyon-Jung Kim; Esa Ollila; Visa Koivunen
Multi-linear techniques using tensor decompositions provide a unifying framework for high-dimensional data analysis. Sparsity in tensor decompositions clearly improves the analysis and inference of multi-dimensional data. Apart from non-negative tensor factorizations, the literature on tensor estimation using sparsity is limited. In this paper, we introduce sparse regularization methods for tensor decompositions, which are useful for dimensionality reduction, feature selection and signal recovery. One major challenge in most tensor decomposition algorithms is their heavy dependence on good initializations. To alleviate this problem, we propose a reliable method based on ridge regression to provide good starting values that take advantage of sparsity. Combined with such initializations, our sparse regularization methods show substantially improved performance over conventional methods in the simulation studies presented.
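A minimal sketch of a ridge-regularized ALS sweep for initializing a rank-R CP decomposition of a three-way array is given below; the unfolding convention, penalty value and number of sweeps are our choices for illustration, not the paper's settings.

```python
import numpy as np
from scipy.linalg import khatri_rao

def unfold(T, mode):
    """Mode-n unfolding of a 3-way array (Kolda-Bader convention)."""
    return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1), order='F')

def ridge_cp_init(T, R, lam=1.0, sweeps=3, seed=0):
    """Ridge-regularized ALS sweeps giving starting values for CP factor matrices."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    eye = lam * np.eye(R)
    for _ in range(sweeps):
        # Each factor solves a ridge regression against the Khatri-Rao design
        # built from the other two factors (X_(n) = F_n * (F_other1 ⊙ F_other2)^T).
        Z = khatri_rao(C, B)
        A = np.linalg.solve(Z.T @ Z + eye, Z.T @ unfold(T, 0).T).T
        Z = khatri_rao(C, A)
        B = np.linalg.solve(Z.T @ Z + eye, Z.T @ unfold(T, 1).T).T
        Z = khatri_rao(B, A)
        C = np.linalg.solve(Z.T @ Z + eye, Z.T @ unfold(T, 2).T).T
    return A, B, C
```

The ridge term keeps each least-squares subproblem well conditioned even when the random starting factors are nearly collinear, which is what makes such a sweep usable as an initializer for sparse decomposition algorithms.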
ieee global conference on signal and information processing | 2013
Hyon-Jung Kim; Esa Ollila; Visa Koivunen; Christophe Croux
We propose novel tensor decomposition methods that possess both sparsity and robustness to outliers. The sparsity enables us to extract essential, easily interpretable features from big data. The robustness ensures resistance to the outliers that appear commonly in high-dimensional data. We first propose a method that generalizes ridge regression within an M-estimation framework for tensor decompositions. The other approach we propose combines least absolute deviation (LAD) regression and the least absolute shrinkage and selection operator (LASSO) for CANDECOMP/PARAFAC (CP) tensor decompositions. We also formulate various robust tensor decomposition methods using different loss functions. The simulation study shows that our robust-sparse methods outperform other general tensor decomposition methods in the presence of outliers.
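Schematically, the LAD-LASSO variant for a three-way CP decomposition combines an elementwise absolute-deviation fit with ℓ1 penalties on the factor matrices; the notation is ours and the normalization and tuning may differ in the paper.

```latex
% LAD-Lasso CP decomposition (schematic, notation ours)
\[
  \min_{\mathbf{A},\mathbf{B},\mathbf{C}}\;
  \sum_{i,j,k} \Big|\, y_{ijk} - \sum_{r=1}^{R} a_{ir} b_{jr} c_{kr} \Big|
  \;+\; \lambda\big(\|\mathbf{A}\|_1 + \|\mathbf{B}\|_1 + \|\mathbf{C}\|_1\big).
\]
```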
Remote Sensing of Environment | 2006
Hyon-Jung Kim; Erkki Tomppo