Publication


Featured research published by Zixiang Xiong.


IEEE Signal Processing Magazine | 2004

Distributed source coding for sensor networks

Zixiang Xiong; Angelos D. Liveris; Samuel Cheng

In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies; it relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies in sensor networks is distributed source coding (DSC), which refers to the compression of multiple correlated sensor outputs by encoders that do not communicate with each other. DSC allows a many-to-one video coding paradigm that effectively swaps encoder-decoder complexity with respect to conventional video coding, thereby representing a fundamental concept shift in video processing. This article presents an in-depth discussion of two DSC techniques, namely Slepian-Wolf coding and Wyner-Ziv coding. Slepian and Wolf showed theoretically that separate encoding is as efficient as joint encoding for lossless compression.
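
As a back-of-the-envelope illustration of the Slepian-Wolf result (a sketch in Python with numpy, not drawn from the article), model the correlation between two binary sensor outputs X and Y as a binary symmetric channel with crossover probability p; separate encoders can then reach the same sum rate H(X,Y) = H(X) + H(Y|X) as a joint encoder:

    import numpy as np

    def h2(p):
        """Binary entropy in bits."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    # X equiprobable; Y = X xor N with N ~ Bernoulli(p) (a BSC correlation).
    p = 0.1
    H_X = h2(0.5)             # 1 bit
    H_Y_given_X = h2(p)       # ~0.469 bits
    H_XY = H_X + H_Y_given_X  # joint entropy

    # Joint encoding needs H(X,Y) bits in total; Slepian-Wolf says the corner
    # point (R_X, R_Y) = (H(X), H(Y|X)) is achievable with separate encoders.
    print(f"H(X,Y) = {H_XY:.3f} bits = {H_X:.3f} + {H_Y_given_X:.3f}")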


IEEE Communications Letters | 2002

Compression of binary sources with side information at the decoder using LDPC codes

Angelos D. Liveris; Zixiang Xiong; Costas N. Georghiades

We show how low-density parity-check (LDPC) codes can be used to compress close to the Slepian-Wolf limit for correlated binary sources. Focusing on the asymmetric case of compression of an equiprobable memoryless binary source with side information at the decoder, the approach is based on viewing the correlation as a channel and applying the syndrome concept. The encoding and decoding procedures are explained in detail. The performance achieved is seen to be better than recently published results using turbo codes and very close to the Slepian-Wolf limit.
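
A minimal sketch of the syndrome concept described above, substituting a toy (7,4) Hamming code for the paper's LDPC codes so the whole scheme fits in a few lines (Python with numpy; the correlation channel is assumed to flip at most one bit per block):

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code: corrects any single flip.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def encode(x):
        """Encoder sends only the 3-bit syndrome of the 7-bit source block."""
        return H @ x % 2

    def decode(s, y):
        """Recover x from its syndrome and the side information y, assuming
        x and y differ in at most one position (the correlation channel)."""
        diff = (s + H @ y) % 2          # syndrome of the error pattern x xor y
        x_hat = y.copy()
        if diff.any():                  # locate and flip the differing bit
            for i in range(7):
                if np.array_equal(H[:, i], diff):
                    x_hat[i] ^= 1
                    break
        return x_hat

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, 7)
    y = x.copy()
    y[rng.integers(7)] ^= 1                          # side info: one bit flipped
    assert np.array_equal(decode(encode(x), y), x)   # 7 bits sent as 3

The encoder transmits 3 syndrome bits instead of 7 source bits; an LDPC code plays the same role at long block lengths, with belief-propagation decoding replacing the column lookup.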


IEEE Transactions on Circuits and Systems for Video Technology | 2000

Low bit-rate scalable video coding with 3-D set partitioning in hierarchical trees (3-D SPIHT)

Beong-Jo Kim; Zixiang Xiong; William A. Pearlman

We propose a low bit-rate embedded video coding scheme that utilizes a 3-D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, which has proved so successful in still image coding. Three-dimensional spatio-temporal orientation trees coupled with powerful SPIHT sorting and refinement render the 3-D SPIHT video coder so efficient that it provides performance comparable to H.263, both objectively and subjectively, when operated at bit rates of 30 to 60 kbit/s with minimal system complexity. Extension to color-embedded video coding is accomplished without explicit bit allocation, and can be used for any color plane representation. In addition to being rate scalable, the proposed video coder allows multiresolution scalability in encoding and decoding in both time and space from one bit stream. This added functionality, along with many desirable attributes such as full embeddedness for progressive transmission, precise rate control for constant bit-rate traffic, and low complexity for possible software-only video applications, makes the proposed video coder an attractive candidate for multimedia applications.
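
The sketch below (Python with numpy, an illustration rather than the authors' code) shows only the front end of such a coder: one level of a separable 3-D Haar decomposition along time, rows, and columns, producing the eight spatio-temporal subbands on which the SPIHT sorting pass would operate.

    import numpy as np

    def haar_1d(a, axis):
        """One level of the orthonormal 1-D Haar transform along an axis."""
        a = np.moveaxis(a, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2)   # averages -> low-pass subband
        hi = (a[0::2] - a[1::2]) / np.sqrt(2)   # differences -> high-pass subband
        return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

    # A toy 16-frame, 32x32 video cube; transforming along time, then rows,
    # then columns yields 8 spatio-temporal subbands packed into one array.
    video = np.random.rand(16, 32, 32)
    bands = haar_1d(haar_1d(haar_1d(video, 0), 1), 2)

SPIHT then sorts the coefficients of these subbands with spatio-temporal orientation trees and transmits them bitplane by bitplane, which is what makes the stream embedded and rate scalable.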


IEEE Signal Processing Letters | 1996

A DCT-based embedded image coder

Zixiang Xiong; Onur G. Guleryuz; Michael T. Orchard

Since Shapiro (see ibid., vol.41, no.12, p. 445, 1993) published his work on embedded zerotree wavelet (EZW) image coding, there have been increased research activities in image coding centered around wavelets. We first point out that the wavelet transform is just one member of a family of linear transformations, and that the discrete cosine transform (DCT) can also be coupled with an embedded zerotree quantizer. We then present such an image coder that outperforms any other DCT-based coder published in the literature, including that of the Joint Photographic Experts Group (JPEG). Moreover, our DCT-based embedded image coder gives higher peak signal-to-noise ratios (PSNR) than the quoted results of Shapiro's EZW coder.
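
A hedged sketch of the "embedded" idea applied to DCT coefficients (Python, assuming scipy is available; this shows generic successive-approximation quantization, not the paper's zerotree quantizer): each pass sends one more bitplane, so the stream can be truncated at any point and still decode to a coarser image.

    import numpy as np
    from scipy.fft import dctn, idctn   # assumes scipy

    rng = np.random.default_rng(1)
    block = rng.normal(size=(8, 8)).cumsum(0).cumsum(1)   # smooth 8x8 test block
    coef = dctn(block, norm='ortho')

    # Successive-approximation quantization: refine plane by plane.
    T = 2.0 ** np.floor(np.log2(np.abs(coef).max()))
    recon = np.zeros_like(coef)
    for _ in range(6):                        # each pass halves the threshold
        resid = coef - recon
        recon += np.sign(resid) * (np.abs(resid) >= T) * T
        err = np.mean((block - idctn(recon, norm='ortho')) ** 2)
        print(f"threshold {T:6.2f}: MSE {err:.4f}")   # distortion drops per pass
        T /= 2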


IEEE Transactions on Circuits and Systems for Video Technology | 1997

A deblocking algorithm for JPEG compressed images using overcomplete wavelet representations

Zixiang Xiong; Michael T. Orchard; Ya-Qin Zhang

This paper introduces a new approach to deblocking of JPEG compressed images using overcomplete wavelet representations. By exploiting cross-scale correlations among wavelet coefficients, edge information in the JPEG compressed images is extracted and protected, while blocky noise in the smooth background regions is smoothed out in the wavelet domain. Compared with the iterative methods reported in the literature, our simple wavelet-based method has much lower computational complexity, yet it is capable of achieving the same peak signal-to-noise ratio (PSNR) improvement as the best iterative method and giving visually very pleasing images as well.
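
A simplified 1-D sketch of the cross-scale idea (Python with numpy; illustrative, not the paper's algorithm, and the threshold is arbitrary): detail coefficients that are large at two adjacent scales mark true edges, while blocking noise registers mainly at the finest scale and can be shrunk away.

    import numpy as np

    def haar_details(x, scale):
        """Undecimated Haar detail coefficients at a dyadic scale."""
        k = np.concatenate([np.ones(scale), -np.ones(scale)]) / (2 * scale)
        return np.convolve(x, k, mode='same')

    # Crude model of one row of a JPEG image: a step edge plus a constant
    # offset on each 8-pixel block (blocking artifacts).
    rng = np.random.default_rng(2)
    row = np.concatenate([np.zeros(64), np.full(64, 100.0)])
    blocky = row + np.repeat(rng.normal(0, 3, 16), 8)

    d1 = haar_details(blocky, 1)          # finest scale: edges AND block noise
    d2 = haar_details(blocky, 2)          # next scale: dominated by true edges
    edge = np.abs(d1 * d2) > 5.0          # cross-scale product flags real edges
    d1_clean = np.where(edge, d1, 0.0)    # shrink fine-scale noise elsewhere

The paper applies this in 2-D on an overcomplete wavelet representation and reconstructs the image from the modified coefficients; the cross-scale product is what lets it smooth block boundaries without blurring edges.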


IEEE Transactions on Circuits and Systems for Video Technology | 1999

A comparative study of DCT- and wavelet-based image coding

Zixiang Xiong; Kannan Ramchandran; Michael T. Orchard; Ya-Qin Zhang

We undertake a study of the performance difference of the discrete cosine transform (DCT) and the wavelet transform for both image and video coding, while comparing other aspects of the coding system on an equal footing based on state-of-the-art coding techniques. The studies reveal that, for still images, the wavelet transform outperforms the DCT typically by about 1 dB in peak signal-to-noise ratio. For video coding, the advantage of wavelet schemes is less obvious. We believe that image and video compression algorithms should be addressed from an overall system viewpoint: quantization, entropy coding, and the complex interplay among elements of the coding system are more important than spending all the effort on optimizing the transform.
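
One crude proxy for such a comparison is k-term nonlinear approximation: keep the k largest-magnitude coefficients of each transform and compare reconstruction PSNR. The sketch below (Python, assuming scipy and the PyWavelets package; a synthetic test image, not the paper's experiment) sets this up for the DCT and the 9/7 biorthogonal wavelet.

    import numpy as np
    import pywt                          # assumes the PyWavelets package
    from scipy.fft import dctn, idctn    # assumes scipy

    # Synthetic piecewise-smooth 64x64 test image: a gradient plus an edge.
    n = 64
    ix, iy = np.meshgrid(np.arange(n), np.arange(n))
    img = np.sin(ix / 7.0) * 50 + (iy > 32) * 60.0

    def psnr_keep_k(coef, inverse, k=256):
        """Zero all but the k largest coefficients, reconstruct, report PSNR."""
        kept = np.zeros_like(coef)
        idx = np.unravel_index(
            np.argsort(np.abs(coef), axis=None)[-k:], coef.shape)
        kept[idx] = coef[idx]
        mse = np.mean((img - inverse(kept)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    # DCT: keep the 256 largest of 4096 coefficients.
    print(psnr_keep_k(dctn(img, norm='ortho'),
                      lambda c: idctn(c, norm='ortho')))

    # 9/7 biorthogonal wavelet ('bior4.4'), three levels, same budget.
    cw, slices = pywt.coeffs_to_array(pywt.wavedec2(img, 'bior4.4', level=3))
    def inv_wav(c):
        return pywt.waverec2(
            pywt.array_to_coeffs(c, slices, output_format='wavedec2'),
            'bior4.4')
    print(psnr_keep_k(cw, inv_wav))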


IEEE Transactions on Image Processing | 1998

Wavelet packet image coding using space-frequency quantization

Zixiang Xiong; Kannan Ramchandran; Michael T. Orchard

We extend our previous work on space-frequency quantization (SFQ) for image coding from wavelet transforms to the more general wavelet packet transforms. The resulting wavelet packet coder offers a universal transform coding framework within the constraints of filterbank structures by allowing joint transform and quantizer design without assuming a priori statistics of the input image. In other words, the new coder adaptively chooses the representation to suit the image and the quantization to suit the representation. Experimental results show that, for some image classes, our new coder gives excellent coding performance.
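
The adaptive choice of representation can be sketched with the classic best-basis pruning rule on a 1-D Haar packet tree (Python with numpy; the additive cost here is a simple significance count standing in for SFQ's joint rate-distortion cost):

    import numpy as np

    def haar_split(x):
        lo = (x[0::2] + x[1::2]) / np.sqrt(2)
        hi = (x[0::2] - x[1::2]) / np.sqrt(2)
        return lo, hi

    def cost(c):
        """Additive cost for basis comparison: count of significant
        coefficients (SFQ would use a rate-distortion cost instead)."""
        return np.sum(np.abs(c) > 1.0)

    def best_basis(x, depth):
        """Recursively decide whether splitting a band lowers total cost."""
        if depth == 0 or len(x) < 2:
            return [x], cost(x)
        lo, hi = haar_split(x)
        bands_lo, c_lo = best_basis(lo, depth - 1)
        bands_hi, c_hi = best_basis(hi, depth - 1)
        if c_lo + c_hi < cost(x):      # children cheaper: keep the split
            return bands_lo + bands_hi, c_lo + c_hi
        return [x], cost(x)            # otherwise keep the parent band

    rng = np.random.default_rng(3)
    sig = np.sin(np.arange(256) * 0.7) + rng.normal(0, 0.1, 256)
    bands, total = best_basis(sig, depth=4)
    print(len(bands), "bands selected, cost", total)

The pruning pass (compare each parent band's cost against its best-decomposed children) is exactly how a wavelet packet coder tailors the representation to the image before quantization.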


IEEE Transactions on Circuits and Systems for Video Technology | 1999

Multiresolution watermarking for images and video

Wenwu Zhu; Zixiang Xiong; Ya-Qin Zhang

This paper proposes a unified approach to digital watermarking of images and video based on the two- and three-dimensional discrete wavelet transforms. The hierarchical nature of the wavelet representation allows multiresolutional detection of the digital watermark, which is a Gaussian-distributed random vector added to all the high-pass bands in the wavelet domain. We show that when subjected to distortion from compression or image halftoning, the watermark can still be correctly identified at each resolution (excluding the lowest one) in the wavelet domain. The computational savings from such a multiresolution watermarking framework are obvious, especially for the video case.
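
A single-level sketch of additive wavelet-domain watermarking with correlation detection (Python, assuming the PyWavelets package; a deliberate simplification of the paper's multiresolution scheme, with illustrative parameters):

    import numpy as np
    import pywt   # assumes the PyWavelets package

    rng = np.random.default_rng(4)
    img = rng.normal(128, 30, (64, 64))      # stand-in host image

    cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
    w = rng.normal(0, 1, cH.shape)           # Gaussian watermark vector
    alpha = 2.0                              # embedding strength
    marked = pywt.idwt2(
        (cA, (cH + alpha * w, cV + alpha * w, cD + alpha * w)), 'haar')

    # Detection: correlate the high-pass bands of the (possibly distorted)
    # image against the watermark and compare with a threshold.
    noisy = marked + rng.normal(0, 2, marked.shape)   # e.g. compression noise
    _, (dH, dV, dD) = pywt.dwt2(noisy, 'haar')
    stat = sum(np.sum(b * w) for b in (dH, dV, dD)) / (3 * alpha * w.size)
    print("detector statistic:", stat)   # near 1 when the watermark is present

With a multilevel transform, the same correlation test runs independently at each resolution, which is what allows detection to stop early on coarse scales.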


Bioinformatics | 2005

Optimal number of features as a function of sample size for various classification rules

Jianping Hua; Zixiang Xiong; James Lowey; Edward Suh; Edward R. Dougherty

MOTIVATION: Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features.

RESULTS: Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram, and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear, and bimodal. In addition, real patient data from a large breast-cancer study are considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as a resource for those working with small-sample classification.

AVAILABILITY: For the companion website, please visit http://public.tgen.org/tamu/ofs/

CONTACT: [email protected]
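
The peaking phenomenon described above is easy to reproduce on synthetic Gaussian data (a sketch in Python, assuming scikit-learn; the model and sizes here are illustrative, not the paper's experimental design):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(5)
    n_train, n_test, D = 30, 2000, 60

    # Two Gaussian classes whose means differ slightly in every feature, so
    # each added feature brings a little signal but also estimation error.
    mu = np.full(D, 0.25)
    def sample(n):
        y = rng.integers(0, 2, n)
        X = rng.normal(0, 1, (n, D)) + np.outer(y, mu)
        return X, y

    Xtr, ytr = sample(n_train)
    Xte, yte = sample(n_test)

    for d in (2, 5, 10, 20, 40, 60):   # error typically falls, then rises
        clf = LinearDiscriminantAnalysis().fit(Xtr[:, :d], ytr)
        err = np.mean(clf.predict(Xte[:, :d]) != yte)
        print(f"{d:2d} features: test error {err:.3f}")

With only 30 training samples, the estimation error of the designed classifier eventually outweighs the extra discriminatory information, producing the characteristic U-shaped error curve.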


IEEE Transactions on Medical Imaging | 2003

Lossy-to-lossless compression of medical volumetric data using three-dimensional integer wavelet transforms

Zixiang Xiong; Xiaolin Wu; Samuel Cheng; Jianping Hua

We study lossy-to-lossless compression of medical volumetric data using three-dimensional (3-D) integer wavelet transforms. To achieve good lossy coding performance, it is important to have transforms that are unitary. In addition to the lifting approach, we first introduce a general 3-D integer wavelet packet transform structure that allows implicit bit shifting of wavelet coefficients to approximate a 3-D unitary transformation. We then focus on context modeling for efficient arithmetic coding of wavelet coefficients. Two state-of-the-art 3-D wavelet video coding techniques, namely, 3-D set partitioning in hierarchical trees (Kim et al., 2000) and 3-D embedded subband coding with optimal truncation (Xu et al., 2001), are modified and applied to compression of medical volumetric data, achieving the best performance published so far in the literature, in terms of both lossy and lossless compression.
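
The integer-to-integer property that makes the lossless end of the scale possible can be seen in the simplest lifting example, the integer Haar (S) transform; this sketch (Python with numpy, illustrative) verifies an exact round trip.

    import numpy as np

    def s_transform(a, b):
        """Integer Haar (S-transform) via lifting: integers map to integers."""
        h = a - b               # high-pass: difference
        l = b + (h >> 1)        # low-pass: floor((a + b) / 2), built from h
        return l, h

    def inverse_s(l, h):
        b = l - (h >> 1)        # undo the lifting steps in reverse order
        a = h + b
        return a, b

    x = np.array([7, 3, 10, 10, 255, 0], dtype=np.int64)
    l, h = s_transform(x[0::2], x[1::2])
    a, b = inverse_s(l, h)
    assert np.array_equal(np.stack([a, b], 1).ravel(), x)   # exact: lossless

Because every step rounds the same way in both directions, truncating the bit stream gives a lossy reconstruction, while decoding it fully recovers the voxels bit for bit.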

Collaboration


Dive into Zixiang Xiong's collaboration.

Top Co-Authors

Anders Høst-Madsen

University of Hawaii at Manoa

Jianping Hua

Translational Genomics Research Institute
