Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oliver M. Collins is active.

Publication


Featured research published by Oliver M. Collins.


IEEE Transactions on Communications | 1993

Determinate state convolutional codes

Oliver M. Collins; Murad Hizlan

A determinate state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. This staged power transfer proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The authors analyze the decoding complexity and free distances of these new codes, determine some important statistical properties of the decoder output, and provide simulation results for performance at the low signal-to-noise ratios where a real communications system would operate. Several concise, practical examples are presented.
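
The pruning idea can be illustrated with a short sketch. The following is a minimal, hypothetical example (the trellis, the symbol mapping, and the choice of pruned transition are assumptions, not the construction from the paper): a Viterbi search over a trellis in which some state transitions have simply been removed.

```python
import math

def viterbi_pruned(received, transitions, n_states, start_state=0):
    """Viterbi decoding over a trellis given as a dict
    (state, input_bit) -> (next_state, expected_symbol).
    Pruned transitions are simply absent from the dict."""
    metric = [0.0 if s == start_state else math.inf for s in range(n_states)]
    paths = [[] for _ in range(n_states)]
    for r in received:
        new_metric = [math.inf] * n_states
        new_paths = [[] for _ in range(n_states)]
        for s in range(n_states):
            if metric[s] == math.inf:
                continue                      # state unreachable
            for bit in (0, 1):
                if (s, bit) not in transitions:
                    continue                  # transition pruned away
                nxt, expected = transitions[(s, bit)]
                m = metric[s] + (r - expected) ** 2   # Euclidean branch metric
                if m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best], metric[best]

# Toy 2-state trellis with antipodal symbols; transition (1, 1) is pruned.
trellis = {
    (0, 0): (0, -1.0), (0, 1): (1, +1.0),
    (1, 0): (0, +1.0),
}
print(viterbi_pruned([-0.9, 1.1, 0.8], trellis, n_states=2))
```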


IEEE Transactions on Communications | 2001

On the frame-error rate of concatenated turbo codes

Oscar Y. Takeshita; Oliver M. Collins; Peter C. Massey; Daniel J. Costello

Turbo codes with long frame lengths are usually constructed using a randomly chosen interleaver. Statistically, this guarantees excellent bit-error rate (BER) performance but also generates a certain number of low-weight codewords, resulting in the appearance of an error floor in the BER curve. Several methods, including using an outer code, have been proposed to improve the error floor region of the BER curve. We study the effect of an outer BCH code on the frame-error rate (FER) of turbo codes. We show that additional coding gain is possible not only in the error floor region but also in the waterfall region. Also, the outer code improves the iterative APP decoder by providing a stopping criterion and alleviating convergence problems. With this method, we obtain codes whose performance is within 0.6 dB of the sphere-packing bound at an FER of 10^-6.
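
The stopping-criterion role of the outer code can be sketched in a few lines. This is only an illustrative skeleton under assumed interfaces: `turbo_iteration` and `outer_check` are hypothetical hooks standing in for one APP pass of the inner turbo decoder and the outer BCH code's error-detection check.

```python
def iterative_decode(llrs, turbo_iteration, outer_check, max_iters=8):
    """Run inner iterations until the outer code's check passes or the
    iteration budget runs out; return hard decisions and iterations used."""
    state = None
    hard_bits = None
    for i in range(1, max_iters + 1):
        hard_bits, state = turbo_iteration(llrs, state)   # one APP pass
        if outer_check(hard_bits):       # outer code satisfied: stop early
            return hard_bits, i
    return hard_bits, max_iters

# Trivial stand-in hooks just to exercise the control flow.
demo_bits, iters = iterative_decode(
    llrs=[0.3, -1.2, 0.8],
    turbo_iteration=lambda llrs, st: ([int(l > 0) for l in llrs], st),
    outer_check=lambda bits: sum(bits) % 2 == 0,   # pretend parity check
)
print(demo_bits, iters)
```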


International Symposium on Information Theory | 1998

A comparison of known codes, random codes, and the best codes

Samuel J. MacMullan; Oliver M. Collins

This paper calculates new bounds on the size of the performance gap between random codes and the best possible codes. The first result shows that, for large block sizes, the ratio of the error probability of a random code to the sphere-packing lower bound on the error probability of every code on the binary symmetric channel (BSC) is small for a wide range of useful crossover probabilities. Thus even far from capacity, random codes have nearly the same error performance as the best possible long codes. The paper also demonstrates that a small reduction k - k̃ in the number of information bits conveyed by a codeword will make the error performance of an (n, k̃) random code better than the sphere-packing lower bound for an (n, k) code as long as the channel crossover probability is somewhat greater than a critical probability. For example, the sphere-packing lower bound for a long (n, k), rate-1/2 code will exceed the error probability of an (n, k̃) random code if k - k̃ > 10 and the crossover probability is between 0.035 and 0.11 = H^-1(1/2). Analogous results are presented for the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel. The paper also presents substantial numerical evaluation of the performance of random codes and existing standard lower bounds for the BEC, BSC, and the AWGN channel. These last results provide a useful standard against which to measure many popular codes including turbo codes, e.g., there exist turbo codes that perform within 0.6 dB of the bounds over a wide range of block lengths.
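
The crossover value quoted above can be checked numerically: for a rate-1/2 code, the upper end of the range is the inverse binary entropy H^-1(1/2). A small sketch, using bisection on the standard binary entropy function:

```python
from math import log2

def H(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def H_inv(h, lo=1e-12, hi=0.5, iters=60):
    """Invert H on (0, 1/2] by bisection (H is increasing there)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if H(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(H_inv(0.5), 3))   # ~0.110, the crossover probability cited above
```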


IEEE Transactions on Information Theory | 2007

A Successive Decoding Strategy for Channels With Memory

Teng Li; Oliver M. Collins

This paper presents a new technique for communication over channels with memory where the channel state is unknown at the transmitter and receiver. A deep interleaver combined with successive decoding decomposes a channel with memory into an array of parallel memoryless channels on which a conventional coding system can operate individually. The problems of joint channel estimation and decoding are thus separated without loss of capacity. This technique achieves channel capacity and so may be used to evaluate the capacities of different channels. A general information-theoretic framework is developed and applied to intersymbol interference (ISI), finite-state Markov, and Rayleigh-fading channels. A full system implementation, which performs within 1.1 dB of the channel capacity upper bound, is presented for the Rayleigh-fading channel.
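
The decomposition step can be illustrated simply. Below is a minimal sketch (the depth and data are purely illustrative assumptions) of a depth-M block interleaver that splits one symbol stream into M substreams; in the scheme described above each substream sees widely separated channel uses, so it behaves approximately as a memoryless channel for a conventional code.

```python
def interleave(symbols, depth):
    """Return `depth` substreams; substream k holds symbols k, k+depth, ..."""
    return [symbols[k::depth] for k in range(depth)]

def deinterleave(substreams):
    """Inverse of interleave (substream lengths differ by at most one)."""
    out = []
    for i in range(max(len(s) for s in substreams)):
        for s in substreams:
            if i < len(s):
                out.append(s[i])
    return out

# Round-trip check on a toy stream of 10 symbols split into 3 substreams.
assert deinterleave(interleave(list(range(10)), 3)) == list(range(10))
```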


IEEE Transactions on Communications | 1992

The subtleties and intricacies of building a constraint length 15 convolutional decoder

Oliver M. Collins

A series of algorithms, circuit designs, and analytical techniques, as well as a few tricks, are presented. Each is essential to the design of a constraint-length-15, 1-Mb/s Viterbi decoder. The focus is the maximum-likelihood decoding of very-long-constraint-length convolutional codes, but many of the concepts will find other applications ranging from extremely fast constraint-length-7 decoders to software simulations of codes with constraint lengths even longer than 15. The constraint-length-15 decoder is now working and will form the basis for the coding system used in the next generation of deep space probes.


IEEE Transactions on Communications | 1993

Quantization loss in convolutional decoding

Ivan M. Onyszchuk; Kar-Ming Cheung; Oliver M. Collins

The loss in quantizing coded symbols in the additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) or quadrature phase-shift keying (QPSK) modulation is discussed. A quantization scheme and branch metric calculation method are presented. For the uniformly quantized AWGN channel, cutoff rate is used to determine the step size and the smallest number of quantization bits needed for a given bit signal-to-noise ratio (Eb/N0) loss. A nine-level quantizer is presented, along with 3-b branch metrics for a rate-1/2 code, which causes an Eb/N0 loss of only 0.14 dB. These results also apply to soft-decision decoding of block codes. A tight upper bound is derived for the range of path metrics in a Viterbi decoder. The calculations are verified by simulations of several convolutional codes, including the memory-14, rate-1/4 or -1/6 codes used by the big Viterbi decoders at JPL.
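
A minimal sketch of the kind of uniform quantizer and integer branch metric discussed above; the step size here is a placeholder rather than the value derived from the cutoff-rate analysis in the paper.

```python
def uniform_quantize(y, delta=0.5, levels=9):
    """Map a real received sample to one of `levels` integer bins centred on 0."""
    half = (levels - 1) // 2
    q = round(y / delta)
    return max(-half, min(half, q))

def branch_metrics(q, levels=9):
    """Integer branch metrics: distance of the quantized sample from each
    hypothesized BPSK symbol (+1 maps to bin +half, -1 to bin -half)."""
    half = (levels - 1) // 2
    return {+1: abs(q - half), -1: abs(q + half)}

q = uniform_quantize(0.8)
print(q, branch_metrics(q))
```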


IEEE Transactions on Circuits and Systems | 2008

Measurement and Reduction of ISI in High-Dynamic-Range 1-bit Signal Generation

Ajay K. Gupta; Jagadish Venkataraman; Oliver M. Collins

This paper studies spurious signals produced by the nonlinear interaction of the previous output symbols of a digital-to-analog converter (DAC) with its current symbol. This effect, called nonlinear intersymbol interference (ISI), significantly degrades the spurious-free dynamic range of most high-speed DACs. Many papers have been devoted to suppressing level inaccuracies in multibit DACs. However, even when all levels are accurate, nonlinear ISI causes significant spurious output. This paper presents a simple and very general model for nonlinear ISI and uses it to design binary signals that can both measure and suppress the spurious tones that arise in a single-bit DAC. While the analysis in this paper is based on a 1-bit DAC, extension to multibit DACs is possible, since a multibit DAC is merely a collection of 1-bit DACs and exhibits the same nonlinear effects. Experimental verification is presented for three different hardware setups. Measurements first establish the presence of the spurious tones in the hardware, as predicted by the model, and then show how the spur level can be reduced by as much as 22 dB.
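
The "simple and very general model" is not reproduced here, but the basic effect can be illustrated with a hypothetical transition-dependent pulse table: the analog value contributed by the current bit depends on the previous bit, which is the simplest form of nonlinear ISI.

```python
def dac_output(bits, pulse_table=None):
    """bits: sequence of 0/1. pulse_table maps (prev, cur) -> analog value."""
    if pulse_table is None:
        # Ideal +-1 levels perturbed by small, transition-dependent errors
        # (illustrative numbers only).
        pulse_table = {(0, 0): -1.00, (0, 1): +0.98,
                       (1, 0): -0.97, (1, 1): +1.01}
    out, prev = [], 0
    for b in bits:
        out.append(pulse_table[(prev, b)])
        prev = b
    return out

print(dac_output([0, 1, 1, 0, 1]))
```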


International Symposium on Information Theory | 1997

The capacity of orthogonal and bi-orthogonal codes on the Gaussian channel

Samuel J. MacMullan; Oliver M. Collins

This correspondence analyzes the performance of concatenated coding systems and modulation schemes operating over the additive white Gaussian noise (AWGN) channel by examining the loss of capacity resulting from each of the processing steps. The techniques described in this correspondence allow the separate evaluation of codes and decoders and thus the identification of where loss of capacity occurs. Knowledge of this capacity loss is very useful for the overall design of a communications system, e.g., for evaluating the benefits of inner decoders that produce information beyond the maximum-likelihood (ML) estimate. The first two sections of this correspondence provide a general technique for calculating the composite capacity of an orthogonal or a bi-orthogonal code and the AWGN channel in isolation. The later sections examine the composite capacities of an orthogonal or a bi-orthogonal code, the AWGN channel, and various inner decoders including the decoder estimating the bit-by-bit probability of a one, as is used in turbo codes. The calculations in these examples show that the ML decoder introduces a large loss in capacity. Much of this capacity loss can be regained by using only slightly more complex inner decoders; e.g., a detector for M-ary frequency-shift keying (MFSK) that outputs the two most likely frequencies, together with the probability that the ML estimate is correct, introduces significantly less degradation than one that outputs only the most likely frequency.
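
As a rough, hedged illustration of what a composite-capacity computation looks like (assumed parameters and a Monte Carlo estimate, not the paper's exact evaluation): for M orthogonal signals on the AWGN channel followed by a hard ML detector, symmetry makes the composite channel an M-ary symmetric channel, so its capacity follows from the estimated symbol error rate.

```python
import math, random

def composite_capacity_mc(M=8, es_over_n0_db=6.0, trials=20000, seed=1):
    """Estimate the capacity (bits/symbol) of M-ary orthogonal signaling on
    AWGN followed by a hard ML detector, via the symbol error rate."""
    random.seed(seed)
    es_over_n0 = 10 ** (es_over_n0_db / 10)
    sigma = math.sqrt(1.0 / (2 * es_over_n0))   # noise std per dimension, Es = 1
    errors = 0
    for _ in range(trials):
        # Transmit signal 0: correlator 0 sees 1 + noise, the others pure noise.
        correct = 1.0 + random.gauss(0, sigma)
        if any(random.gauss(0, sigma) > correct for _ in range(M - 1)):
            errors += 1
    p = errors / trials
    if p == 0:
        return math.log2(M)
    # Capacity of the M-ary symmetric channel with error probability p.
    return math.log2(M) + (1 - p) * math.log2(1 - p) + p * math.log2(p / (M - 1))

print(composite_capacity_mc())
```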


IEEE Transactions on Communications | 2007

An All-Digital Transmitter With a 1-Bit DAC

Jagadish Venkataraman; Oliver M. Collins

This paper presents a practicable scheme for building a high-frequency, direct digital-to-RF transmitter. The transmitter uses a simple look-up table to generate a binary output stream which is then filtered to produce a radiated signal, so that there is no need for precise digital-to-analog converters. The look-up table entries are produced by a new constrained list decoding algorithm operating over the real alphabet. This all-digital transmitter can be a modulator only, or a combined encoder and modulator, and it supports the direct generation of RF signals using currently available high-speed CMOS. The paper concludes with spectra having carrier frequencies over 10 GHz and data rates up to 1200 Mbps.
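
A minimal sketch of the look-up-table front end described above. The table contents here are arbitrary placeholders; in the paper they are produced by the constrained list decoding algorithm, which is not reproduced.

```python
# Hypothetical 4-symbol table of 8-bit output patterns for a 1-bit DAC.
LUT = {
    0: [0, 0, 1, 1, 0, 0, 1, 1],
    1: [0, 1, 0, 1, 0, 1, 0, 1],
    2: [1, 1, 0, 0, 1, 1, 0, 0],
    3: [1, 0, 1, 0, 1, 0, 1, 0],
}

def modulate(symbols):
    """Concatenate the binary patterns for a sequence of data symbols;
    the resulting stream drives the 1-bit DAC directly."""
    stream = []
    for s in symbols:
        stream.extend(LUT[s])
    return stream

print(modulate([0, 3, 1]))
```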


IEEE Transactions on Circuits and Systems | 2011

The Sampling Theorem With Constant Amplitude Variable Width Pulses

Jing Huang; Krishnan Padmanabhan; Oliver M. Collins

This paper proves a novel sampling theorem with constant-amplitude, variable-width pulses. The theorem states that any bandlimited baseband signal within ±0.637 can be represented by a pulsewidth modulation (PWM) waveform with unit amplitude. The number of pulses in the waveform is equal to the number of Nyquist samples, and the peak constraint is independent of whether the waveform is two-level or three-level. The proof of the sampling theorem uses a simple iterative technique that is guaranteed to converge to the exact PWM representation whenever it exists. The paper goes on to develop a practical matrix-based iterative technique to generate the PWM waveform that is guaranteed to converge exponentially. The peak constraint in the theorem is only a sufficient condition. In fact, many signals with higher peaks, e.g., sinusoids below the Nyquist frequency, can be accurately represented by a PWM waveform.
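
For orientation, here is a naive PWM baseline, an assumption for illustration rather than the paper's construction: each sample in [-1, 1] becomes one two-level pulse whose duty cycle matches the sample's value. The paper's matrix-based iteration instead adjusts the pulse widths so that the bandlimited content of the waveform matches the Nyquist samples exactly.

```python
def pwm_waveform(samples, oversample=16):
    """Two-level PWM: one pulse per Nyquist sample, duty cycle set so the
    slot average equals the sample value (naive, non-iterative version)."""
    wave = []
    for x in samples:
        width = round((x + 1) / 2 * oversample)   # high-time within the slot
        wave.extend([+1] * width + [-1] * (oversample - width))
    return wave

print(pwm_waveform([0.0, 0.5, -0.25], oversample=8))
```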

Collaboration


Dive into Oliver M. Collins's collaborations.

Top Co-Authors

Ajay K. Gupta
Western Michigan University

Jing Huang
University of Notre Dame

Kar-Ming Cheung
Jet Propulsion Laboratory