
Publication


Featured research published by Trac D. Tran.


IEEE Transactions on Geoscience and Remote Sensing | 2011

Hyperspectral Image Classification Using Dictionary-Based Sparse Representation

Yi Chen; Nasser M. Nasrabadi; Trac D. Tran

A new sparsity-based algorithm for the classification of hyperspectral imagery is proposed in this paper. The proposed algorithm relies on the observation that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from a structured dictionary. The sparse representation of an unknown pixel is expressed as a sparse vector whose nonzero entries correspond to the weights of the selected training samples. The sparse vector is recovered by solving a sparsity-constrained optimization problem, and it can directly determine the class label of the test sample. Two different approaches are proposed to incorporate the contextual information into the sparse recovery optimization problem in order to improve the classification performance. In the first approach, an explicit smoothing constraint is imposed on the problem formulation by forcing the vector Laplacian of the reconstructed image to become zero. In this approach, the reconstructed pixel of interest has similar spectral characteristics to its four nearest neighbors. The second approach is via a joint sparsity model where hyperspectral pixels in a small neighborhood around the test pixel are simultaneously represented by linear combinations of a few common training samples, which are weighted with a different set of coefficients for each pixel. The proposed sparsity-based algorithm is applied to several real hyperspectral images for classification. Experimental results show that our algorithm outperforms the classical supervised classifier, support vector machines, in most cases.
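
The core pipeline (sparse coding over a structured training dictionary, then labeling by class-wise reconstruction residual) can be sketched with a basic orthogonal matching pursuit. The dictionary, labels, and class signatures below are synthetic stand-ins, not the paper's data or tuned solver, and the contextual/smoothing constraints are omitted.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick k atoms of D to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def classify(D, labels, y, k=3):
    """Label y by the class whose selected atoms reconstruct it best."""
    x = omp(D, y, k)
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Toy dictionary: 5 noisy atoms per class around two orthogonal signatures.
rng = np.random.default_rng(0)
s0, s1 = np.array([1.0, -1, 1, -1, 1, -1, 1, -1]), np.ones(8)
D = np.column_stack([s + 0.1 * rng.normal(size=8) for s in [s0] * 5 + [s1] * 5])
D /= np.linalg.norm(D, axis=0)
labels = np.array([0] * 5 + [1] * 5)
pixel = D[:, 7] + 0.01 * rng.normal(size=8)   # noisy sample from class 1
print(classify(D, labels, pixel))             # -> 1
```

The class decision falls out of the sparse code itself: whichever class's atoms carry the recovered weights reconstructs the pixel with the smallest residual.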


IEEE Transactions on Signal Processing | 2001

Fast multiplierless approximations of the DCT with the lifting scheme

Jie Liang; Trac D. Tran

We present the design, implementation, and application of several families of fast multiplierless approximations of the discrete cosine transform (DCT) with the lifting scheme, collectively called the binDCT. These binDCT families are derived from Chen's (1977) and Loeffler's (1989) plane rotation-based factorizations of the DCT matrix, respectively, and the design approach can also be applied to a DCT of arbitrary size. Two design approaches are presented. In the first method, an optimization program is defined, and the multiplierless transform is obtained by approximating its solution with dyadic values. In the second method, a general lifting-based scaled DCT structure is obtained, and the analytical values of all lifting parameters are derived, enabling dyadic approximations with different accuracies. Therefore, the binDCT can be tuned to cover the gap between the Walsh-Hadamard transform and the DCT. The corresponding two-dimensional (2-D) binDCT allows a 16-bit implementation, enables lossless compression, and maintains satisfactory compatibility with the floating-point DCT. The performance of the binDCT in JPEG, H.263+, and lossless compression is also demonstrated.
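
The key mechanism is a plane rotation factored into three lifting steps whose multipliers are then rounded to dyadic values. Because each lifting step is inverted by subtracting the same rounded quantity, integer inputs reconstruct exactly despite the approximation. The dyadic values below are illustrative choices, not the paper's optimized binDCT parameters.

```python
import math

def lift_fwd(x0, x1, p, u):
    x0 += round(p * x1)   # lifting step 1 (predict)
    x1 += round(u * x0)   # lifting step 2 (update)
    x0 += round(p * x1)   # lifting step 3 (predict)
    return x0, x1

def lift_inv(y0, y1, p, u):
    y0 -= round(p * y1)   # undo step 3
    y1 -= round(u * y0)   # undo step 2
    y0 -= round(p * y1)   # undo step 1
    return y0, y1

# Exact lifting factorization of a rotation by theta:
# p = (cos(theta) - 1) / sin(theta), u = sin(theta).
theta = math.pi / 8
p_exact = (math.cos(theta) - 1) / math.sin(theta)
u_exact = math.sin(theta)
# Dyadic stand-ins (power-of-two denominators -> shifts and adds in hardware).
p, u = -3 / 16, 3 / 8

a, b = 100, -37
print(lift_inv(*lift_fwd(a, b, p, u), p, u))   # -> (100, -37): exact, despite rounding
```

This exact integer-to-integer invertibility under rounding is what makes the lifting parameterization suitable for both fast approximation and lossless coding.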


IEEE Transactions on Geoscience and Remote Sensing | 2013

Hyperspectral Image Classification via Kernel Sparse Representation

Yi Chen; Nasser M. Nasrabadi; Trac D. Tran

In this paper, a novel nonlinear technique for hyperspectral image (HSI) classification is proposed. Our approach relies on sparsely representing a test sample in terms of all of the training samples in a feature space induced by a kernel function. For each test pixel in the feature space, a sparse representation vector is obtained by decomposing the test pixel over a training dictionary, also in the same feature space, by using a kernel-based greedy pursuit algorithm. The recovered sparse representation vector is then used directly to determine the class label of the test pixel. Projecting the samples into a high-dimensional feature space and kernelizing the sparse representation improve the data separability between different classes, providing a higher classification accuracy compared to the more conventional linear sparsity-based classification algorithms. Moreover, the spatial coherency across neighboring pixels is also incorporated through a kernelized joint sparsity model, where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training samples. Kernel greedy optimization algorithms are suggested in this paper to solve the kernel versions of the single-pixel and multi-pixel joint sparsity-based recovery problems. Experimental results on several HSIs show that the proposed technique outperforms the linear sparsity-based classification technique, as well as the classical support vector machines and sparse kernel logistic regression classifiers.
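
A minimal sketch of the kernel trick at work: both a greedy pursuit and the feature-space residual can be computed from kernel evaluations alone, without ever forming the feature space explicitly. The RBF kernel and the toy points are stand-ins, and this simplified kernel OMP omits the paper's joint (multi-pixel) sparsity model.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_omp(D, y, k):
    """Greedy pursuit in feature space, using only kernel values."""
    K_DD = rbf(D, D)
    K_Dy = rbf(D, y[None, :])[:, 0]
    support, alpha = [], None
    for _ in range(k):
        corr = K_Dy.copy() if alpha is None else K_Dy - K_DD[:, support] @ alpha
        corr[support] = 0                       # do not reselect atoms
        support.append(int(np.argmax(np.abs(corr))))
        alpha = np.linalg.solve(K_DD[np.ix_(support, support)], K_Dy[support])
    return support, alpha

def feature_residual(D, y, support, alpha):
    """||phi(y) - Phi_S alpha||^2 expanded into kernels; k(y, y) = 1 for RBF."""
    K_Sy = rbf(D[support], y[None, :])[:, 0]
    K_SS = rbf(D[support], D[support])
    return 1.0 - 2 * alpha @ K_Sy + alpha @ K_SS @ alpha

D = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
y = np.array([5.1, 4.9])
support, alpha = kernel_omp(D, y, k=1)
print(support)   # -> [3], the dictionary point nearest y
```

Class labels would then be assigned, as in the linear case, by comparing class-wise feature-space residuals.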


Asilomar Conference on Signals, Systems and Computers | 2008

Sparsity adaptive matching pursuit algorithm for practical compressed sensing

Thong T. Do; Lu Gan; Nam P. Nguyen; Trac D. Tran

This paper presents a novel iterative greedy reconstruction algorithm for practical compressed sensing (CS), called the sparsity adaptive matching pursuit (SAMP). Compared with other state-of-the-art greedy algorithms, the most innovative feature of the SAMP is its capability of signal reconstruction without prior information of the sparsity. This makes it a promising candidate for many practical applications where the number of non-zero (significant) coefficients of a signal is not available. The proposed algorithm adopts a flavor similar to the EM algorithm, alternately estimating the sparsity and the true support set of the target signal. In fact, SAMP provides a generalized greedy reconstruction framework in which the orthogonal matching pursuit and the subspace pursuit can be viewed as special cases. Such a connection also gives an intuitive justification of the trade-off between computational complexity and reconstruction performance. While the SAMP offers theoretical guarantees comparable to those of the best optimization-based approaches, simulation results show that it outperforms many existing iterative algorithms, especially for compressible signals.
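
The stagewise logic can be sketched as follows: at each stage the current sparsity estimate L selects a candidate set, backtracking keeps only the L strongest least-squares coefficients, and a stage switch raises L whenever the residual stops shrinking. The step size, stopping rule, and test problem below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def samp(Phi, y, step=1, tol=1e-6, max_iter=50):
    """Sparsity Adaptive Matching Pursuit (simplified sketch)."""
    n = Phi.shape[1]
    L, F, r = step, np.array([], dtype=int), y.copy()
    for _ in range(max_iter):
        # Preliminary test: L columns most correlated with the residual.
        Sk = np.argsort(np.abs(Phi.T @ r))[-L:]
        C = np.union1d(F, Sk)                      # candidate set
        # Final test (backtracking): keep the L largest LS coefficients on C.
        coef, *_ = np.linalg.lstsq(Phi[:, C], y, rcond=None)
        F_new = C[np.argsort(np.abs(coef))[-L:]]
        coef_F, *_ = np.linalg.lstsq(Phi[:, F_new], y, rcond=None)
        r_new = y - Phi[:, F_new] @ coef_F
        if np.linalg.norm(r_new) < tol:            # residual small enough: done
            F = F_new
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            L += step                              # stage switch: raise sparsity estimate
        else:
            F, r = F_new, r_new                    # refine support at current stage
    x = np.zeros(n)
    if len(F):
        coef_F, *_ = np.linalg.lstsq(Phi[:, F], y, rcond=None)
        x[F] = coef_F
    return x

# Recover a 3-sparse signal without telling SAMP the sparsity.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(40, 60)) / np.sqrt(40)
x0 = np.zeros(60)
x0[[5, 21, 40]] = [5.0, -4.0, 3.0]
y = Phi @ x0
x_hat = samp(Phi, y)
```

With step=1 this behaves much like OMP early on, while a large fixed L would resemble subspace pursuit, which is the special-cases connection the abstract mentions.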


IEEE Transactions on Signal Processing | 2012

Fast and Efficient Compressive Sensing Using Structurally Random Matrices

Thong T. Do; Lu Gan; Nam H. Nguyen; Trac D. Tran

This paper introduces a new framework to construct fast and efficient sensing matrices for practical compressive sensing, called the Structurally Random Matrix (SRM). In the proposed framework, we first prerandomize the sensing signal by scrambling its sample locations or flipping its sample signs, then apply a fast transform to the randomized samples, and finally subsample the resulting transform coefficients to obtain the sensing measurements. SRM is highly relevant for large-scale, real-time compressive sensing applications because it supports fast computation and block-based processing. In addition, we show that SRM has theoretical sensing performance comparable to that of completely random sensing matrices. Numerical simulation results verify the validity of the theory and illustrate the promising potential of the proposed sensing framework.
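
The three-stage pipeline (randomize, fast transform, subsample) can be sketched with a random sign flip followed by a fast Walsh-Hadamard transform; the signal length, measurement count, and scaling convention below are illustrative examples, not the paper's specific construction.

```python
import numpy as np

def fwht(x):
    """Normalized fast Walsh-Hadamard transform, O(n log n), for n a power
    of two; with this normalization it is its own inverse."""
    x = x.copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)

def srm_measure(x, rng, m):
    """SRM sensing: random sign flip -> fast transform -> random subsample."""
    signs = rng.choice([-1.0, 1.0], size=len(x))       # prerandomization
    coeffs = fwht(signs * x)                           # fast transform
    picks = rng.choice(len(x), size=m, replace=False)  # subsampling
    return np.sqrt(len(x) / m) * coeffs[picks]         # preserves energy in expectation

rng = np.random.default_rng(1)
x = np.zeros(64)
x[[3, 17]] = [2.0, -1.5]          # a sparse test signal
y = srm_measure(x, rng, m=16)
print(y.shape)                    # -> (16,)
```

No dense sensing matrix is ever stored: each measurement batch costs one O(n log n) transform, which is the source of the speed and block-processing claims.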


IEEE Journal of Selected Topics in Signal Processing | 2011

Sparse Representation for Target Detection in Hyperspectral Imagery

Yi Chen; Nasser M. Nasrabadi; Trac D. Tran

In this paper, we propose a new sparsity-based algorithm for automatic target detection in hyperspectral imagery (HSI). This algorithm is based on the concept that a pixel in HSI lies in a low-dimensional subspace and thus can be represented as a sparse linear combination of the training samples. The sparse representation (a sparse vector corresponding to the linear combination of a few selected training samples) of a test sample can be recovered by solving an l0-norm minimization problem. With the recent development of compressed sensing theory, such a minimization problem can be recast as a standard linear programming problem or efficiently approximated by greedy pursuit algorithms. Once the sparse vector is obtained, the class of the test sample can be determined by the characteristics of the sparse vector on reconstruction. In addition to the constraints on sparsity and reconstruction accuracy, we also exploit the fact that in HSI the neighboring pixels have similar spectral characteristics (smoothness). In our proposed algorithm, a smoothness constraint is imposed by forcing the vector Laplacian at each reconstructed pixel to be minimized throughout the recovery process. The proposed sparsity-based algorithm is applied to several hyperspectral images to detect targets of interest. Simulation results show that our algorithm outperforms the classical hyperspectral target detection algorithms, such as the popular spectral matched filters, matched subspace detectors, and adaptive subspace detectors, as well as binary classifiers such as support vector machines.
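
A rough sketch of residual-based detection over a concatenated background/target dictionary: recover a sparse code jointly, then compare how well each sub-dictionary alone explains the pixel. The dictionaries here are random stand-ins, and a plain OMP replaces the paper's recovery with smoothness constraints.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit over the joint dictionary."""
    r, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(D.T @ r))))
        c, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)
        r = y - D[:, S] @ c
    x = np.zeros(D.shape[1])
    x[S] = c
    return x

def detect(Db, Dt, y, k=2, tau=0.0):
    """Detection statistic: background residual minus target residual."""
    x = omp(np.hstack([Db, Dt]), y, k)
    nb = Db.shape[1]
    r_bg = np.linalg.norm(y - Db @ x[:nb])
    r_tg = np.linalg.norm(y - Dt @ x[nb:])
    return (r_bg - r_tg) > tau     # target declared when targets explain y better

rng = np.random.default_rng(2)
Db = rng.normal(size=(10, 6)); Db /= np.linalg.norm(Db, axis=0)   # background atoms
Dt = rng.normal(size=(10, 4)); Dt /= np.linalg.norm(Dt, axis=0)   # target atoms
target_pixel = Dt[:, 0] + 0.02 * rng.normal(size=10)
clutter_pixel = Db[:, 2] + 0.02 * rng.normal(size=10)
print(detect(Db, Dt, target_pixel), detect(Db, Dt, clutter_pixel))
```

The threshold tau trades detection rate against false alarms, exactly where an ROC curve would be swept in the paper's experiments.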


IEEE Transactions on Signal Processing | 2000

Linear-phase perfect reconstruction filter bank: lattice structure, design, and application in image coding

Trac D. Tran; R.L. de Queiroz; Truong Q. Nguyen

A lattice structure for an M-channel linear-phase perfect reconstruction filter bank (LPPRFB) based on the singular value decomposition (SVD) is introduced. The lattice can be proven to use a minimal number of delay elements and to completely span a large class of LPPRFBs: all analysis and synthesis filters have the same FIR length, sharing the same center of symmetry. The lattice also structurally enforces both linear-phase and perfect reconstruction properties, is capable of providing fast and efficient implementation, and avoids the costly matrix inversion problem in the optimization process. From a block transform perspective, the new lattice can be viewed as representing a family of generalized lapped biorthogonal transform (GLBT) with an arbitrary number of channels M and arbitrarily large overlap. The relaxation of the orthogonal constraint allows the GLBT to have significantly different analysis and synthesis basis functions, which can then be tailored appropriately to fit a particular application. Several design examples are presented along with a high-performance GLBT-based progressive image coder to demonstrate the potential of the new transforms.
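
The "structural enforcement" idea can be seen in the first (memoryless) stage of such a lattice: for any invertible free parameters U and V, the factorization below yields symmetric/antisymmetric basis functions (linear phase) and an invertible transform (perfect reconstruction). This is only the zero-overlap stage, sketched here for illustration; the full lattice cascades further stages with delays.

```python
import numpy as np

def lattice_stage(U, V):
    """E = diag(U, V) @ butterfly. Rows come out as [u, reverse(u)] or
    [reverse(v), -v], so linear phase and invertibility hold structurally
    for ANY invertible U and V (the free lattice parameters)."""
    h = U.shape[0]
    I, J = np.eye(h), np.fliplr(np.eye(h))       # J reverses sample order
    butterfly = np.block([[I, J], [J, -I]]) / np.sqrt(2)
    top = np.block([[U, np.zeros((h, h))], [np.zeros((h, h)), V]])
    return top @ butterfly

rng = np.random.default_rng(3)
U, V = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))  # generic, hence invertible
E = lattice_stage(U, V)     # an 8-channel linear-phase invertible block transform
```

Because U and V need not be orthogonal, the synthesis transform inv(E) can differ from E.T, which is the biorthogonal freedom the GLBT exploits.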


IEEE Signal Processing Letters | 2000

The binDCT: fast multiplierless approximation of the DCT

Trac D. Tran

This paper presents a family of fast biorthogonal block transforms called the binDCT that can be implemented using only shift and add operations. The transform is based on a VLSI-friendly lattice structure that robustly enforces both linear-phase and perfect reconstruction properties. The lattice coefficients are parameterized as a series of dyadic lifting steps, providing fast, efficient, in-place computation of the transform coefficients as well as the ability to map integers to integers. The new 8×8 transforms all approximate the popular 8×8 DCT closely, attaining a coding gain range of 8.77-8.82 dB, despite requiring as few as 14 shifts and 31 additions per eight input samples. Application of the binDCT in both lossy and lossless image coding yields very competitive results compared to the performance of the original floating-point DCT.
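
Shift-and-add arithmetic is what "multiplierless" means concretely: any dyadic coefficient is a few shifted copies of the input plus a final shift. The value 13/32 below is an illustrative dyadic coefficient, not necessarily one used by the binDCT.

```python
def mul_13_over_32(x: int) -> int:
    """Multiply by the dyadic constant 13/32 using only shifts and adds:
    13/32 = (8 + 4 + 1) / 32, i.e. three shifted copies, then one shift."""
    return ((x << 3) + (x << 2) + x) >> 5

print(mul_13_over_32(64))   # -> 26, i.e. 64 * 13 / 32
```

Counting such shifts and adds across all lifting steps is how per-sample operation counts like "14 shifts and 31 additions" are tallied.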


IEEE Transactions on Signal Processing | 2003

Lapped transform via time-domain pre- and post-filtering

Trac D. Tran; Jie Liang; Chengjie Tu

This paper presents a general framework of constructing a large family of lapped transforms with symmetric basis functions by adding simple time-domain pre- and post-processing modules onto existing block discrete cosine transform (DCT)-based infrastructures. A subset of the resulting solutions is closed-form, fast computable, modular, near optimal in the energy compaction sense and leads to an elegant boundary handling of finite-length data. Starting from these solutions, a general framework for block-based signal decomposition with a high degree of flexibility and adaptivity is developed. Several simplified models are also introduced to approximate the optimal solutions. These models are based on cascades of plane rotation operators and lifting steps, respectively. Despite tremendous savings in computational complexity, the optimized results of these simplified models are virtually identical to those of the complete solution. The multiplierless versions of these pre- and post-filters, when combined with an appropriate multiplierless block transform such as the binDCT, which is described in an earlier paper by Liang and Tran (see IEEE Trans. Signal Processing, vol.49, p.3032-44, Dec. 2001), generate a family of very large scale integration (VLSI)-friendly fast lapped transforms with reversible integer-to-integer mapping. Numerous design examples with an arbitrary number of channels and an arbitrary number of borrowed samples are presented.
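
The framework can be sketched as a pre-filter applied across each block boundary, a standard block DCT, and the mirrored post-filter on synthesis. The boundary operator P below is an arbitrary invertible example, not the paper's optimized pre-filter; because each module is individually invertible, perfect reconstruction holds for any such P.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def boundary_filter(x, n, P):
    """Apply an n x n operator P to the n samples straddling each block boundary."""
    x, h = x.copy(), n // 2
    for b in range(n, len(x), n):
        x[b - h:b + h] = P @ x[b - h:b + h]
    return x

def analysis(x, n, P, C):
    xp = boundary_filter(x, n, P)              # time-domain pre-filter
    return np.concatenate([C @ xp[i:i + n] for i in range(0, len(xp), n)])

def synthesis(y, n, P, C):
    xp = np.concatenate([C.T @ y[i:i + n] for i in range(0, len(y), n)])
    return boundary_filter(xp, n, np.linalg.inv(P))   # time-domain post-filter

n = 4
C = dct_matrix(n)
P = np.eye(n) + 0.3 * np.diag(np.ones(n - 1), -1)  # illustrative invertible pre-filter
x = np.sin(np.arange(16.0))
recon = synthesis(analysis(x, n, P, C), n, P, C)
```

The appeal of this structure is that the block DCT in the middle is untouched, so existing DCT hardware or software can gain lapped-transform behavior from the added boundary modules alone.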


IEEE Transactions on Image Processing | 2002

Context-based entropy coding of block transform coefficients for image compression

Chengjie Tu; Trac D. Tran

It has been well established that state-of-the-art wavelet image coders outperform block transform image coders in the rate-distortion (R-D) sense by a wide margin. Wavelet-based JPEG2000 is emerging as the new high-performance international standard for still image compression. An often asked question is: how much of the coding improvement is due to the transform and how much is due to the encoding strategy? Current block transform coders such as JPEG suffer from poor context modeling and fail to take full advantage of correlation in both space and frequency. This paper presents a simple, fast, and efficient adaptive block transform image coding algorithm based on a combination of prefiltering, postfiltering, and high-order space-frequency context modeling of block transform coefficients. Despite the simplicity constraints, coding results show that the proposed coder achieves competitive R-D performance compared to the best wavelet coders in the literature.
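
A toy version of the context-modeling idea: each coefficient's significance is coded with an adaptive probability model selected by neighbors in frequency (within the block) and in space (the co-located coefficient of the previously coded block). The context definition and the count-based model below are illustrative and far simpler than the coder in the paper.

```python
import numpy as np
from collections import defaultdict
from math import log2

def ctx(prev_block, block, i, j):
    """Context: significance of frequency neighbors (same block) and of the
    co-located coefficient in the previously coded block (space)."""
    f = bool(j > 0 and block[i, j - 1] != 0) or bool(i > 0 and block[i - 1, j] != 0)
    s = bool(prev_block is not None and prev_block[i, j] != 0)
    return (f, s)

def code_length(blocks):
    """Estimated bits to code significance flags with per-context adaptive models."""
    counts = defaultdict(lambda: [1, 1])    # Laplace-smoothed (zero, nonzero) counts
    bits, prev = 0.0, None
    for blk in blocks:
        for i in range(blk.shape[0]):
            for j in range(blk.shape[1]):
                c = ctx(prev, blk, i, j)
                sym = int(blk[i, j] != 0)
                n0, n1 = counts[c]
                bits -= log2((n1 if sym else n0) / (n0 + n1))
                counts[c][sym] += 1
        prev = blk
    return bits

# Correlated blocks (energy clustered at low frequencies, as after a block DCT).
blk = np.zeros((4, 4))
blk[0, 0], blk[0, 1], blk[1, 0] = 20.0, -3.0, 5.0
print(round(code_length([blk, blk]), 2))
```

When significant coefficients cluster in space and frequency, the context-conditioned probabilities sharpen quickly and the estimated code length drops, which is the correlation the abstract says plain JPEG-style coding leaves unexploited.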

Collaboration


Dive into Trac D. Tran's collaborations.

Top Co-Authors

Yi Chen, Johns Hopkins University
Jie Liang, Simon Fraser University
Minh Dao, Johns Hopkins University
Dung N. Tran, Johns Hopkins University
Yuanming Suo, Johns Hopkins University
Chengjie Tu, Johns Hopkins University
Avatar