Hien Van Nguyen
Siemens
Publications
Featured research published by Hien Van Nguyen.
Pattern Recognition | 2013
Zhouhui Lian; Afzal Godil; Benjamin Bustos; Mohamed Daoudi; Jeroen Hermans; Shun Kawamura; Yukinori Kurita; Guillaume Lavoué; Hien Van Nguyen; Ryutarou Ohbuchi; Yuki Ohkita; Yuya Ohishi; Fatih Porikli; Martin Reuter; Ivan Sipiran; Dirk Smeets; Paul Suetens; Hedi Tabia; Dirk Vandermeulen
Non-rigid 3D shape retrieval has become an active and important research topic in content-based 3D object retrieval. The aim of this paper is to measure and compare the performance of state-of-the-art methods for non-rigid 3D shape retrieval. The paper develops a new benchmark consisting of 600 non-rigid 3D watertight meshes, evenly divided into 30 categories, on which 11 different algorithms are evaluated using six commonly used retrieval-accuracy measures. Models and evaluation tools of the new benchmark are publicly available on our web site [1].
Eurographics | 2011
Zhouhui Lian; Afzal Godil; Benjamin Bustos; Mohamed Daoudi; Jeroen Hermans; Shun Kawamura; Yukinori Kurita; Guillaume Lavoué; Hien Van Nguyen; Ryutarou Ohbuchi; Yuki Ohkita; Yuya Ohishi; Fatih Porikli; Martin Reuter; Ivan Sipiran; Dirk Smeets; Paul Suetens; Hedi Tabia; Dirk Vandermeulen
Non-rigid 3D shape retrieval has become an important research topic in content-based 3D object retrieval. The aim of this track is to measure and compare the performance of non-rigid 3D shape retrieval methods implemented by different participants around the world. The track is based on a new non-rigid 3D shape benchmark, which contains 600 watertight triangle meshes, evenly divided into 30 categories. In this track, 25 runs were submitted by 9 groups, and their retrieval accuracies were evaluated using 6 commonly used measures.
IEEE Transactions on Image Processing | 2013
Hien Van Nguyen; Vishal M. Patel; Nasser M. Nasrabadi; Rama Chellappa
In this paper, we present dictionary learning methods for sparse signal representations in a high-dimensional feature space. Using the kernel method, we describe how the well-known dictionary learning approaches, such as the method of optimal directions and K-SVD, can be made nonlinear. We analyze their kernel constructions and demonstrate their effectiveness through several experiments on classification problems. It is shown that nonlinear dictionary learning approaches can provide significantly better performance compared with their linear counterparts and kernel principal component analysis, especially when the data is corrupted by different types of degradations.
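The kernel trick underlying this construction can be made concrete: if the dictionary is constrained to the span of the training data in feature space, D = Φ(Y)A, the sparse-coding residual is computable from kernel evaluations alone, without ever forming Φ explicitly. A minimal numpy sketch (the function names and the RBF kernel choice are illustrative, not taken from the paper):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise RBF kernel k(x, z) = exp(-gamma * ||x - z||^2).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def feature_space_error(y, Y, A, x, gamma=1.0):
    """Reconstruction error ||phi(y) - phi(Y) A x||^2, where the
    dictionary is D = phi(Y) A, computed purely from kernel values:
    k(y,y) - 2 (Ax)^T k(Y,y) + (Ax)^T K (Ax)."""
    kyy = rbf_kernel(y[None, :], y[None, :], gamma)[0, 0]
    kYy = rbf_kernel(Y, y[None, :], gamma)[:, 0]
    K = rbf_kernel(Y, Y, gamma)
    Ax = A @ x
    return kyy - 2 * Ax @ kYy + Ax @ K @ Ax
```

With A the identity and x selecting the atom equal to y, the residual is exactly zero, which is a quick sanity check on the kernelized formula.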
Computer Vision and Pattern Recognition | 2013
Sumit Shekhar; Vishal M. Patel; Hien Van Nguyen; Rama Chellappa
Data-driven dictionaries have produced state-of-the-art results in various classification tasks. However, when the target data has a different distribution than the source data, the learned sparse representation may not be optimal. In this paper, we investigate whether it is possible to optimally represent both source and target by a common dictionary. Specifically, we describe a technique which jointly learns projections of data in the two domains, and a latent dictionary which can succinctly represent both domains in the projected low-dimensional space. An efficient optimization technique is presented, which can be easily kernelized and extended to multiple domains. The algorithm is modified to learn a common discriminative dictionary, which can be further used for classification. The proposed approach does not require any explicit correspondence between the source and target domains, and shows good results even when there are only a few labels available in the target domain. Various recognition experiments show that the method performs on par with or better than competitive state-of-the-art methods.
International Conference on Acoustics, Speech, and Signal Processing | 2012
Hien Van Nguyen; Vishal M. Patel; Nasser M. Nasrabadi; Rama Chellappa
In this paper, we present dictionary learning methods for sparse and redundant signal representations in a high-dimensional feature space. Using the kernel method, we describe how the well-known dictionary learning approaches such as the method of optimal directions and K-SVD can be made nonlinear. We analyze these constructions and demonstrate their improved performance through several experiments on classification problems. It is shown that nonlinear dictionary learning approaches can provide better discrimination compared to their linear counterparts and kernel PCA, especially when the data is corrupted by noise.
International Conference on Computer Vision | 2013
Vishal M. Patel; Hien Van Nguyen; René Vidal
We propose a novel algorithm called Latent Space Sparse Subspace Clustering for simultaneous dimensionality reduction and clustering of data lying in a union of subspaces. Specifically, we describe a method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space. Cluster labels are then assigned by applying spectral clustering to a similarity matrix built from these sparse coefficients. An efficient optimization method is proposed and its non-linear extensions based on the kernel methods are presented. One of the main advantages of our method is that it is computationally efficient as the sparse coefficients are found in the low-dimensional latent space. Various experiments show that the proposed method performs better than the competitive state-of-the-art subspace clustering methods.
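The final assignment step described above can be sketched directly: symmetrize the sparse coefficient matrix into an affinity and apply spectral clustering to it. A minimal two-cluster numpy sketch, using the sign of the Fiedler vector in place of a full k-means step (the function name is illustrative, not from the paper):

```python
import numpy as np

def spectral_labels_from_coeffs(C):
    """Two-way clustering from a sparse self-expression matrix C:
    build the affinity W = |C| + |C|^T, form the symmetric normalized
    Laplacian, and split by the sign of the Fiedler vector."""
    W = np.abs(C) + np.abs(C).T
    d = W.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    # L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(W)) - Dinv @ W @ Dinv
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]  # eigenvector of the 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)
```

For a coefficient matrix that is (nearly) block-diagonal, i.e. points mostly express themselves in terms of points from the same subspace, the sign split recovers the two blocks.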
European Conference on Computer Vision | 2012
Hien Van Nguyen; Vishal M. Patel; Nasser M. Nasrabadi; Rama Chellappa
We introduce a novel framework, called sparse embedding (SE), for simultaneous dimensionality reduction and dictionary learning. We formulate an optimization problem for learning a transformation from the original signal domain to a lower-dimensional one in a way that preserves the sparse structure of data. We propose an efficient optimization algorithm and present its non-linear extension based on the kernel methods. One of the key features of our method is that it is computationally efficient as the learning is done in the lower-dimensional space and it discards the irrelevant part of the signal that derails the dictionary learning process. Various experiments show that our method is able to capture the meaningful structure of data and can perform significantly better than many competitive algorithms on signal recovery and object classification tasks.
Medical Image Computing and Computer Assisted Intervention | 2015
Yefeng Zheng; David Liu; Bogdan Georgescu; Hien Van Nguyen; Dorin Comaniciu
Recently, deep learning has demonstrated great success in computer vision with the capability to learn powerful image features from a large training set. However, most of the published work has been confined to solving 2D problems, with a few limited exceptions that treated the 3D space as a composition of 2D orthogonal planes. The challenge of 3D deep learning is due to a much larger input vector, compared to 2D, which dramatically increases the computation time and the chance of over-fitting, especially when combined with limited training samples (hundreds to thousands), as is typical of medical imaging applications. To address this challenge, we propose an efficient and robust deep learning algorithm capable of full 3D detection in volumetric data. A two-step approach is exploited for efficient detection. A shallow network with one hidden layer is used for the initial testing of all voxels to obtain a small number of promising candidates, followed by more accurate classification with a deep network. In addition, we propose two approaches, i.e., separable filter decomposition and network sparsification, to speed up the evaluation of a network. To mitigate the over-fitting issue, thereby increasing detection robustness, we extract small 3D patches from a multi-resolution image pyramid. The deeply learned image features are further combined with Haar wavelet features to increase the detection accuracy. The proposed method has been quantitatively evaluated for carotid artery bifurcation detection on a head-neck CT dataset from 455 patients. Compared to the state-of-the-art, the mean error is reduced by more than half, from 5.97 mm to 2.64 mm, with a detection speed of less than 1 s/volume.
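Separable filter decomposition, one of the two speed-up approaches mentioned, can be illustrated in 2D: a truncated SVD approximates a filter as a sum of outer products, and each rank-1 term can be applied as two cheap 1D convolutions instead of one full 2D convolution. A small sketch under that simplification (the function name is illustrative; the paper applies the idea to 3D network filters):

```python
import numpy as np

def separable_approx(F, rank=1):
    """Approximate a 2D filter F as a sum of `rank` separable
    (outer-product) filters via truncated SVD. Convolving with each
    rank-1 term u v^T costs two 1D convolutions (rows by v, columns
    by u) rather than one dense 2D convolution."""
    U, s, Vt = np.linalg.svd(F)
    terms = [(np.sqrt(s[i]) * U[:, i], np.sqrt(s[i]) * Vt[i, :])
             for i in range(rank)]
    approx = sum(np.outer(u, v) for u, v in terms)
    return terms, approx
```

A Gaussian filter is exactly rank-1, so a single separable term reconstructs it perfectly; learned filters are only approximately low-rank, which is where the accuracy/speed trade-off comes in.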
computer vision and pattern recognition | 2010
Hien Van Nguyen; Amit Banerjee; Rama Chellappa
Recent advances in electronics and sensor design have enabled the development of a hyperspectral video camera that can capture hyperspectral datacubes at near video rates. The sensor offers the potential for novel and robust methods for surveillance by combining methods from computer vision and hyperspectral image analysis. Here, we focus on the problem of tracking objects through challenging conditions, such as rapid illumination and pose changes, occlusions, and the presence of confusers. We propose a new framework that incorporates radiative transfer theory to estimate object reflectance and the mean shift algorithm to simultaneously track the object based on its reflectance spectra. The combination of spectral detection and motion prediction makes the tracker robust against abrupt motions and facilitates fast convergence of the mean shift tracker. In addition, the system achieves good computational efficiency by using random projection to reduce the spectral dimension. The tracker has been evaluated on real hyperspectral video data.
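The random-projection step can be sketched in a few lines: a Gaussian matrix maps D-band spectra down to k dimensions while approximately preserving pairwise distances (the Johnson-Lindenstrauss property), which is what distance-based spectral matching relies on. An illustrative sketch, with names and parameter values not taken from the paper:

```python
import numpy as np

def random_project(X, k, seed=0):
    """Reduce D-dimensional spectra (rows of X) to k dimensions with a
    random Gaussian projection scaled by 1/sqrt(k); pairwise distances
    are approximately preserved with high probability."""
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    R = rng.standard_normal((D, k)) / np.sqrt(k)
    return X @ R
```

Because R is data-independent, the projection costs a single matrix multiply per frame, which is what makes it attractive for near-video-rate processing.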
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013
Hien Van Nguyen; Fatih Porikli
We introduce a novel implicit representation for 2D and 3D shapes based on Support Vector Machine (SVM) theory. Each shape is represented by an analytic decision function obtained by training an SVM with a Radial Basis Function (RBF) kernel so that interior shape points are assigned higher values. This gives the support vector shape (SVS) several advantages. First, the representation uses a sparse subset of feature points determined by the support vectors, which significantly improves the discriminative power against noise, fragmentation, and other artifacts that often come with the data. Second, the use of the RBF kernel provides scale, rotation, and translation invariant features, and allows any shape to be represented accurately regardless of its complexity. Finally, the decision function can be used to select reliable feature points. These features are described using gradients computed from highly consistent decision functions rather than from conventional edges. Our experiments demonstrate promising results.
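The idea of an analytic implicit decision function over RBF kernels can be sketched with the same machinery; here a kernel ridge fit is used as a deliberately simplified stand-in for SVM training (a hypothetical substitution for illustration, not the paper's method, and all names and parameters are made up):

```python
import numpy as np

def rbf(A, B, gamma):
    # Pairwise RBF kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_implicit_shape(pts, labels, gamma=4.0, lam=1e-3):
    """Fit f(x) = sum_i a_i k(x, p_i) with kernel ridge regression so
    that f > 0 at points labeled inside the shape and f < 0 at points
    labeled outside; f is then an analytic implicit representation."""
    K = rbf(pts, pts, gamma)
    a = np.linalg.solve(K + lam * np.eye(len(pts)), labels)
    return lambda x: rbf(np.atleast_2d(x), pts, gamma) @ a
```

Evaluating the fitted f at any query point gives a smooth inside/outside score, which is the property the gradient-based feature description above builds on.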