
Hotspot


Dive into the research topics where John C. Kieffer is active.

Publication


Featured research published by John C. Kieffer.


IEEE Transactions on Information Theory | 2000

Grammar-based codes: a new class of universal lossless source codes

John C. Kieffer; En-hui Yang

We investigate a type of lossless source code called a grammar-based code, which, in response to any input data string x over a fixed finite alphabet, selects a context-free grammar G_x representing x in the sense that x is the unique string belonging to the language generated by G_x. Lossless compression of x takes place indirectly via compression of the production rules of the grammar G_x. It is shown that, subject to some mild restrictions, a grammar-based code is a universal code with respect to the family of finite-state information sources over the finite alphabet. Redundancy bounds for grammar-based codes are established. Reduction rules for designing grammar-based codes are presented.
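The grammar representation can be illustrated with a small Python sketch (an illustration of the definition, not the paper's construction): a straight-line grammar G_x whose language is exactly {x}, so expanding the start symbol recovers x, and compressing the production rules indirectly compresses x.

```python
def expand(grammar, symbol):
    """Recursively expand a symbol into the terminal string it derives.
    Symbols absent from the grammar are terminals."""
    if symbol not in grammar:
        return symbol
    return "".join(expand(grammar, s) for s in grammar[symbol])

# A straight-line grammar for x = "abababab":
#   S -> A A,  A -> B B,  B -> a b
# Its language is exactly {"abababab"}, so x is recoverable from the rules.
grammar = {"S": ["A", "A"], "A": ["B", "B"], "B": ["a", "b"]}
x = expand(grammar, "S")
print(x)  # abababab
```

Here the three short rules stand in for the whole eight-symbol string; a grammar-based code would entropy-code those rules rather than x itself.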


Information & Computation | 1980

Locally Optimal Block Quantizer Design

Robert M. Gray; John C. Kieffer; Y. Linde

Several properties are developed for a recently proposed algorithm for the design of block quantizers based either on a probabilistic source model or on a long training sequence of data. Conditions on the source and general distortion measures under which the algorithm is well defined and converges to a local minimum are provided. A variation of the ergodic theorem is used to show that if the source is block stationary and ergodic, then in the limit as n → ∞, the algorithm run on a sample distribution of a training sequence of length n will produce the same result as if the algorithm were run on the “true” underlying distribution.
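The algorithm referred to is the generalized Lloyd (Linde–Buzo–Gray) design. A minimal one-dimensional Python sketch, assuming squared-error distortion and a scalar training sequence (the function name is illustrative): each pass alternates a nearest-codeword partition of the training data with a centroid update, and neither step can increase the sample distortion.

```python
def lbg_design(training, codebook, iters=50):
    """Block-quantizer design on a training sequence: alternately
    (1) partition the data by nearest codeword (squared error), and
    (2) replace each codeword with the centroid (mean) of its cell."""
    codebook = list(codebook)
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for t in training:
            i = min(range(len(codebook)), key=lambda j: (t - codebook[j]) ** 2)
            cells[i].append(t)
        # Empty cells keep their old codeword.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

training = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
print(lbg_design(training, [0.0, 1.0]))  # converges near [0.15, 0.95]
```

The paper's result says that, for a block stationary ergodic source, running this on a long enough training sequence is asymptotically the same as running it on the true source distribution.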


IEEE Transactions on Information Theory | 1978

A unified approach to weak universal source coding

John C. Kieffer

A new method of constructing a universal sequence of block codes for coding a class of ergodic sources is given. With this method, a weakly universal sequence of codes is constructed for variable-rate noiseless coding and for fixed- and variable-rate coding with respect to a fidelity criterion. In this way a unified approach to weak universal block source coding is obtained. For the noiseless variable-rate coding and the fixed-rate coding with respect to a fidelity criterion, the assumptions made on the alphabets, distortion measures, and class of sources are both necessary and sufficient. For fixed-rate coding with respect to a fidelity criterion, the sample distortion of the universal code sequence converges in L^{1} norm for each source to the optimum distortion for that source. For both variable-rate noiseless coding and variable-rate coding with respect to a fidelity criterion, the sample rate of the universal code sequence converges in L^{1} norm for each source to the optimum rate for that source. Using this fact, a universal sequence of codes for fixed-rate noiseless coding is obtained. Some applications to stationary nonergodic sources are also considered. The results of Davisson, Ziv, Neuhoff, Gray, Pursley, and Mackenthun are extended.


IEEE Transactions on Information Theory | 2000

Efficient universal lossless data compression algorithms based on a greedy sequential grammar transform. I. Without context models

En-hui Yang; John C. Kieffer

A grammar transform is a transformation that converts any data sequence to be compressed into a grammar from which the original data sequence can be fully reconstructed. In a grammar-based code, a data sequence is first converted into a grammar by a grammar transform and then losslessly encoded. In this paper, a greedy grammar transform is first presented; this grammar transform constructs sequentially a sequence of irreducible grammars from which the original data sequence can be recovered incrementally. Based on this grammar transform, three universal lossless data compression algorithms, a sequential algorithm, an improved sequential algorithm, and a hierarchical algorithm, are then developed. These algorithms combine the power of arithmetic coding with that of string matching. It is shown that these algorithms are all universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source. Moreover, it is proved that their worst case redundancies among all individual sequences of length n are upper-bounded by c log log n / log n, where c is a constant. Simulation results show that the proposed algorithms outperform the Unix Compress and Gzip algorithms, which are based on LZ78 and LZ77, respectively.
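A greatly simplified Python sketch of greedy sequential parsing (an LZ78-style stand-in for illustration, not the paper's irreducible-grammar transform, which additionally applies reduction rules and arithmetic-codes the result): the input is consumed left to right, each step matching the longest prefix against phrases seen so far and then growing the phrase set.

```python
def greedy_parse(x):
    """Greedy sequential parse: at each position, emit the longest prefix of
    the remaining input that is already a known phrase, then add that prefix
    extended by one symbol to the phrase set."""
    phrases = set(x)  # start from the single-symbol alphabet of x
    out, i = [], 0
    while i < len(x):
        j = i + 1
        while j < len(x) and x[i:j + 1] in phrases:
            j += 1  # extend the match one symbol at a time
        out.append(x[i:j])
        if j < len(x):
            phrases.add(x[i:j + 1])
        i = j
    return out

print(greedy_parse("abababab"))  # ['a', 'b', 'ab', 'aba', 'b']
```

The phrases grow as repetitions accumulate, which is the string-matching effect the paper's transform exploits before arithmetic coding.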


IEEE Transactions on Information Theory | 1998

On the performance of data compression algorithms based upon string matching

En-hui Yang; John C. Kieffer

Lossless and lossy data compression algorithms based on string matching are considered. In the lossless case, a result of Wyner and Ziv (1989) is extended. In the lossy case, a data compression algorithm based on approximate string matching is analyzed in the following two frameworks: (1) the database and the source together form a Markov chain of finite order; (2) the database and the source are independent with the database coming from a Markov model and the source from a general stationary, ergodic model. In either framework, it is shown that the resulting compression rate converges with probability one to a quantity computable as the infimum of an information theoretic functional over a set of auxiliary random variables; the quantity is strictly greater than the rate distortion function of the source except in some symmetric cases. In particular, this result implies that the lossy algorithm proposed by Steinberg and Gutman (1993) is not optimal, even for memoryless or Markov sources.


IEEE Transactions on Information Theory | 2000

Universal lossless compression via multilevel pattern matching

John C. Kieffer; En-hui Yang; Gregory J. Nelson; Pamela C. Cosman

A universal lossless data compression code called the multilevel pattern matching code (MPM code) is introduced. In processing a finite-alphabet data string of length n, the MPM code operates at O(log log n) levels sequentially. At each level, the MPM code detects matching patterns in the input data string (substrings of the data appearing in two or more nonoverlapping positions). The matching patterns detected at each level are of a fixed length which decreases by a constant factor from level to level, until this fixed length becomes one at the final level. The MPM code represents information about the matching patterns at each level as a string of tokens, with each token string encoded by an arithmetic encoder. From the concatenated encoded token strings, the decoder can reconstruct the data string via several rounds of parallel substitutions. An O(1/log n) maximal redundancy/sample upper bound is established for the MPM code with respect to any class of finite state sources of uniformly bounded complexity. We also show that the MPM code is of linear complexity in terms of time and space requirements. The results of some MPM code compression experiments are reported.
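The level structure can be sketched in Python as follows. This is a much-simplified illustration, assuming the block length halves at each level; the real MPM code descends only into blocks not matched at the level above and arithmetic-codes each token string.

```python
def mpm_levels(x, top_block):
    """Sketch of the multilevel decomposition: at each level, cut the string
    into fixed-length blocks, record the distinct blocks (the patterns) and
    a token sequence indexing them; the block length halves per level."""
    levels = []
    length = top_block
    while length >= 1:
        blocks = [x[i:i + length] for i in range(0, len(x), length)]
        distinct = sorted(set(blocks))
        tokens = [distinct.index(b) for b in blocks]
        levels.append((length, distinct, tokens))
        length //= 2
    return levels

for length, distinct, tokens in mpm_levels("abababab", 4):
    print(length, distinct, tokens)
```

Repeated blocks collapse to repeated tokens (e.g. the two length-4 halves of "abababab" share one pattern), which is where the compression gain arises.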


IEEE Transactions on Information Theory | 1996

Simple universal lossy data compression schemes derived from the Lempel-Ziv algorithm

En-hui Yang; John C. Kieffer

Two universal lossy data compression schemes, one with fixed rate and the other with fixed distortion, are presented, based on the well-known Lempel-Ziv algorithm. In the case of fixed rate R, the universal lossy data compression scheme works as follows: first pick a codebook B_n consisting of all reproduction sequences of length n whose Lempel-Ziv codeword length is ≤ nR, and then use B_n to encode the entire source sequence n-block by n-block. This fixed-rate data compression scheme is universal in the sense that for any stationary, ergodic source or for any individual sequence, the sample distortion performance as n → ∞ is given almost surely by the distortion rate function. A similar result is shown in the context of fixed distortion lossy source coding.
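A toy Python sketch of the fixed-rate scheme. The LZ78 phrase count is used as a stand-in for the Lempel-Ziv codeword length, and a phrase-count threshold plays the role of nR (both simplifications are for illustration only): the codebook B collects all short binary sequences that are "LZ-simple", and each source block is mapped to its nearest codeword under Hamming distortion.

```python
import itertools

def lz78_phrases(s):
    """Number of phrases in the LZ78 incremental parse of s (a proxy for
    the Lempel-Ziv codeword length: fewer phrases, fewer bits)."""
    seen, i, count = set(), 0, 0
    while i < len(s):
        j = i + 1
        while j <= len(s) and s[i:j] in seen:
            j += 1
        seen.add(s[i:j])
        count += 1
        i = j
    return count

# Codebook: all length-6 binary strings whose parse has at most 3 phrases
# (toy stand-in for "Lempel-Ziv codeword length <= nR").
n = 6
B = [s for s in ("".join(t) for t in itertools.product("01", repeat=n))
     if lz78_phrases(s) <= 3]

def encode_block(block):
    """Encode an n-block by its closest codeword under Hamming distortion."""
    return min(B, key=lambda c: sum(a != b for a, b in zip(block, c)))

print(len(B), encode_block("010101"))
```

Only the highly compressible sequences enter the codebook, so the codebook size (and hence the rate) is controlled by the threshold, while universality in the paper comes from letting n grow.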


IEEE Transactions on Information Theory | 1991

Sample converses in source coding theory

John C. Kieffer

The rate and distortion performance of a sequence of codes along a sample sequence of symbols generated by a stationary ergodic information source are studied. Two results are obtained: (1) the source sample sequence is encoded by an arbitrary sequence of block codes which operate at a fixed rate level R, and a sample converse is obtained which states that, with probability one, the lower limit of the code sample distortions is lower bounded by D(R), the value of the distortion rate function at R; (2) the source sample sequence is encoded by an arbitrary sequence of variable-rate codes which operate at a fixed distortion level D, and a sample converse is obtained which states that, with probability one, the lower limit of the code sample rates is lower bounded by R(D), the value of the rate distortion function at D. A novel ergodic theorem is used to obtain both sample converses.


IEEE Transactions on Wireless Communications | 2005

Exact BER computation for cross QAM constellations

Pavan Kumar Vitthaladevuni; Mohamed-Slim Alouini; John C. Kieffer

When the number of bits per symbol is odd, the peak and average power of transmission can be reduced by using cross quadrature amplitude modulations (QAMs) instead of rectangular QAMs. However, since perfect Gray coding is not possible for cross QAMs, this paper assumes Smith-style Gray coding and derives the exact bit error rate (BER) for cross QAM constellations over additive white Gaussian noise (AWGN) and Rayleigh fading channels.
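The power-reduction claim is easy to check numerically. A Python sketch comparing 32-cross QAM (the 6x6 grid of odd coordinates with its four corner points removed) against rectangular 8x4 32-QAM; this illustrates the motivation only, not the paper's BER derivation.

```python
import itertools

def constellation_stats(points):
    """Average and peak symbol energy of a constellation."""
    energies = [x * x + y * y for x, y in points]
    return sum(energies) / len(energies), max(energies)

# 32-cross QAM: 6x6 grid of odd coordinates minus the four corners.
grid6 = [-5, -3, -1, 1, 3, 5]
cross32 = [(x, y) for x, y in itertools.product(grid6, grid6)
           if not (abs(x) == 5 and abs(y) == 5)]

# Rectangular 32-QAM: an 8x4 grid of odd coordinates.
rect32 = [(x, y) for x, y in itertools.product(
    [-7, -5, -3, -1, 1, 3, 5, 7], [-3, -1, 1, 3])]

print(constellation_stats(cross32))  # (20.0, 34): lower average and peak power
print(constellation_stats(rect32))   # (26.0, 58)
```

With the same minimum distance between points, the cross shape cuts the average energy from 26 to 20 and the peak from 58 to 34, at the cost of losing a perfect Gray labeling.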


IEEE Transactions on Information Theory | 1982

Stochastic stability for feedback quantization schemes

John C. Kieffer

Feedback quantization schemes (such as delta modulation, adaptive quantization, differential pulse code modulation (DPCM), and adaptive differential pulse code modulation (ADPCM)) encode an information source by quantizing the source letter at each time i using a quantizer which is uniquely determined by examining some function of the past outputs and inputs called the state of the encoder at time i. The quantized output letter at time i is fed back to the encoder, which then moves to a new state at time i+1 which is a function of the state at time i and the encoder output at time i. In an earlier paper a stochastic stability result was obtained for a class of feedback quantization schemes which includes delta modulation and some adaptive quantization schemes. In this paper a similar result is obtained for a class of feedback quantization schemes which includes linear DPCM and some ADPCM encoding schemes. The type of stochastic stability obtained gives almost-sure convergence of time averages of functions of the joint input-state-output process. This is stronger than the type of stochastic stability obtained previously by Gersho, Goodman, Goldstein, and Liu, who showed convergence in distribution of the time i input-state-output as i → ∞.
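The feedback structure can be sketched with the simplest member of the class, delta modulation (an illustrative toy with a fixed step size; here the encoder state is the running reconstruction, and each output bit feeds back to move the state).

```python
def delta_modulate(source, step=0.5):
    """Minimal feedback quantizer (delta modulation): at each time i the
    state selects the 1-bit quantizer threshold; the quantized output is
    fed back, and the next state is a function of the current state and
    the output (state moves up or down by one step)."""
    state, bits, recon = 0.0, [], []
    for sample in source:
        b = 1 if sample >= state else 0   # quantizer determined by the state
        state += step if b else -step     # feedback: new state from old state + output
        bits.append(b)
        recon.append(state)
    return bits, recon

bits, recon = delta_modulate([0.4, 0.8, 1.0, 0.7])
print(bits, recon)
```

The joint input-state-output process of exactly this kind of loop is what the paper's stochastic stability result concerns: time averages of functions of (input, state, output) converge almost surely.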

Collaboration


Dive into John C. Kieffer's collaborations.

Top Co-Authors

En-hui Yang

University of Waterloo


Maurice Rahe

Missouri University of Science and Technology


John Marcos

University of Minnesota


Kewu Peng

University of Minnesota


Ross Stites

University of Minnesota
