G.D. Forney
Motorola
Publication
Featured research published by G.D. Forney.
IEEE Communications Letters | 2001
Sae-Young Chung; G.D. Forney; Thomas Richardson; Rüdiger L. Urbanke
We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a somewhat simpler code show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 using a block length of 10^7.
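For orientation, the rate-1/2 Shannon limit referred to above (about 0.19 dB Eb/N0) can be reproduced numerically by estimating the binary-input AWGN capacity and bisecting for the Eb/N0 at which it equals the code rate. The sketch below is illustrative only; the Monte Carlo sample size and search interval are arbitrary choices, not taken from the paper.

import numpy as np

def biawgn_capacity(ebn0_db, rate=0.5, n=200_000, seed=0):
    # Monte Carlo estimate of binary-input AWGN channel capacity (bits per use).
    rng = np.random.default_rng(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = 1.0 / np.sqrt(2.0 * rate * ebn0)   # noise std for unit-energy BPSK
    y = 1.0 + sigma * rng.standard_normal(n)   # condition on x = +1 transmitted
    llr = 2.0 * y / sigma**2                   # channel log-likelihood ratios
    # I(X;Y) = 1 - E[log2(1 + exp(-LLR))], computed with a stable log-sum-exp
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)

# Bisect for the Eb/N0 at which capacity drops to the code rate 1/2.
lo, hi = -2.0, 2.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if biawgn_capacity(mid) < 0.5 else (lo, mid)
print(f"Rate-1/2 Shannon limit on the BIAWGN channel: about {hi:.2f} dB Eb/N0")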
IEEE Transactions on Information Theory | 1970
G.D. Forney
A convolutional encoder is defined as any constant linear sequential circuit. The associated code is the set of all output sequences resulting from any set of input sequences beginning at any time. Encoders are called equivalent if they generate the same code. The invariant factor theorem is used to determine when a convolutional encoder has a feedback-free inverse, and the minimum delay of any inverse. All encoders are shown to be equivalent to minimal encoders, which are feedback-free encoders with feedback-free delay-free inverses, and which can be realized in the conventional manner with as few memory elements as any equivalent encoder. Minimal encoders are shown to be immune to catastrophic error propagation and, in fact, to lead in a certain sense to the shortest decoded error sequences possible per error event. In two appendices, we introduce dual codes and syndromes, and show that a minimal encoder for a dual code has exactly the complexity of the original encoder; we show that systematic encoders with feedback form a canonical class, and compare this class to the minimal class.
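As a concrete illustration of the feedback-free-inverse criterion above: for a rate-1/n feedforward encoder, a standard consequence of the invariant factor theorem is that a feedback-free inverse exists exactly when the greatest common divisor of the generator polynomials is a power of D (a pure delay); otherwise the encoder is catastrophic. The sketch below checks this over GF(2) for two illustrative generator pairs (the polynomial choices are textbook examples, not taken from the paper).

def gf2_poly_gcd(a, b):
    # gcd of binary polynomials, each given as an integer bitmask
    # (bit i is the coefficient of D^i).
    while b:
        # Reduce a modulo b by XOR-ing shifted copies of b (division over GF(2)).
        while a and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

# Rate-1/2 generator pairs as bitmasks:
g_good = (0b111, 0b101)  # (1+D+D^2, 1+D^2): gcd = 1, so a feedback-free inverse exists
g_bad  = (0b011, 0b101)  # (1+D, 1+D^2):     gcd = 1+D, the classic catastrophic encoder
for g in (g_good, g_bad):
    print(g, "-> gcd bitmask:", bin(gf2_poly_gcd(*g)))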
international symposium on information theory | 2000
G.D. Forney
A generalized state realization of the Wiberg (1996) type is called normal if symbol variables have degree 1 and state variables have degree 2. A natural graphical model of such a realization has leaf edges representing symbols, ordinary edges representing states, and vertices representing local constraints. Such a graph can be decoded by any version of the sum-product algorithm. Any state realization of a code can be put into normal form without essential change in the corresponding graph or in its decoding complexity. Group or linear codes are generated by group or linear state realizations. On a cycle-free graph, there exists a well-defined minimal canonical realization, and the sum-product algorithm is exact. However, the cut-set bound shows that graphs with cycles may have a superior performance-complexity tradeoff, although the sum-product algorithm is then inexact and iterative, and minimal realizations are not well-defined. Efficient cyclic and cycle-free realizations of Reed-Muller (RM) codes are given as examples. The dual of a normal group realization, appropriately defined, generates the dual group code. The dual realization has the same graph topology as the primal realization, replaces symbol and state variables by their character groups, and replaces primal local constraints by their duals. This fundamental result has many applications, including to dual state spaces, dual minimal trellises, duals to Tanner (1981) graphs, dual input/output (I/O) systems, and dual kernel and image representations. Finally a group code may be decoded using the dual graph, with appropriate Fourier transforms of the inputs and outputs; this can simplify decoding of high-rate codes.
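To make the cycle-free exactness claim concrete, the following sketch (illustrative only, not from the paper) applies the sum-product tanh rule at a single parity-check constraint, i.e., the simplest normal graph with one check vertex and n leaf symbol edges, and confirms that it reproduces the exact brute-force a posteriori LLRs.

import itertools, math

def spc_app_llrs_sum_product(llr):
    # Exact sum-product (tanh rule) at one parity-check constraint:
    # APP LLR of bit i = channel LLR + extrinsic LLR from the other bits.
    out = []
    for i, li in enumerate(llr):
        prod = 1.0
        for j, lj in enumerate(llr):
            if j != i:
                prod *= math.tanh(lj / 2.0)
        out.append(li + 2.0 * math.atanh(prod))
    return out

def spc_app_llrs_brute_force(llr):
    # Exact APP LLRs by summing likelihoods over all even-weight codewords.
    n = len(llr)
    mass0, mass1 = [0.0] * n, [0.0] * n
    for c in itertools.product((0, 1), repeat=n):
        if sum(c) % 2:
            continue
        w = math.exp(sum(l * (1 - 2 * b) / 2.0 for l, b in zip(llr, c)))
        for i, b in enumerate(c):
            (mass1 if b else mass0)[i] += w
    return [math.log(mass0[i] / mass1[i]) for i in range(n)]

llr = [1.2, -0.4, 2.5, 0.7]
print(spc_app_llrs_sum_product(llr))
print(spc_app_llrs_brute_force(llr))  # identical: the graph is cycle-free, so sum-product is exact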
IEEE Transactions on Information Theory | 1998
G.D. Forney; G. Ungerboeck
Shannon's determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimum-bandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.
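The benchmarks surveyed here follow from the standard capacity formulas: for the discrete-time AWGN channel,

  C = (1/2) log2(1 + SNR)  bits per dimension,

and reliable transmission at spectral efficiency rho (b/s/Hz) requires

  Eb/N0 >= (2^rho - 1) / rho,

which decreases to ln 2, about -1.59 dB, as rho -> 0 (the ultimate Shannon limit) and grows roughly exponentially with rho in the bandwidth-limited, high-SNR regime.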
IEEE Transactions on Information Theory | 1992
G.D. Forney
The author discusses trellis shaping, a method of selecting a minimum-weight sequence from an equivalence class of possible transmitted sequences by a search through the trellis diagram of a shaping convolutional code C_s. Shaping gains on the order of 1 dB may be obtained with simple four-state shaping codes and with moderate constellation expansion. The shaping gains obtained with more complicated codes approach the ultimate shaping gain of 1.53 dB. With a feedback-free syndrome-former for C_s, transmitted data can be recovered without catastrophic error propagation. Constellation expansion and peak-to-average energy ratio may be effectively limited by peak constraints. With lattice-theoretic constellations, the shaping operation may be characterized as a decoding of an initial sequence in a channel trellis code by a minimum-distance decoder for a shaping trellis code based on the shaping convolutional code, and the set of possible transmitted sequences is then the set of code sequences in the channel trellis code that lie in the Voronoi region of the trellis shaping code.
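The ultimate 1.53 dB figure quoted above is the energy advantage of an n-sphere over an n-cube of equal volume as n -> infinity:

  gamma_s(max) = pi*e/6 ≈ 1.423,  i.e.  10 log10(pi*e/6) ≈ 1.53 dB.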
IEEE Transactions on Information Theory | 1993
G.D. Forney; Mitchell D. Trott
A group code C over a group G is a set of sequences of group elements that itself forms a group under a component-wise group operation. A group code has a well-defined state space Σ_k at each time k. Each code sequence passes through a well-defined state sequence. The set of all state sequences is also a group code, the state code of C. The state code defines an essentially unique minimal realization of C. The trellis diagram of C is defined by the state code of C and by labels associated with each state transition. The set of all label sequences forms a group code, the label code of C, which is isomorphic to the state code of C. If C is complete and strongly controllable, then a minimal encoder in controller canonical (feedback-free) form may be constructed from certain sets of shortest possible code sequences, called granules. The size of the state space Σ_k is equal to the size of the state space of this canonical encoder, which is given by a decomposition of the input groups of C at each time k. If C is time-invariant and ν-controllable, then |Σ_k| = Π_{1 ...
IEEE Transactions on Information Theory | 2000
G.D. Forney; Mitchell D. Trott; Sae-Young Chung
A simple sphere bound gives the best possible tradeoff between the volume per point of an infinite array L and its error probability on an additive white Gaussian noise (AWGN) channel. It is shown that the sphere bound can be approached by a large class of coset codes or multilevel coset codes with multistage decoding, including certain binary lattices. These codes have structure of the kind that has been found to be useful in practice. Capacity curves and design guidance for practical codes are given. Exponential error bounds for coset codes are developed, generalizing Poltyrev's (1994) bounds for lattices. These results are based on the channel coding theorems of information theory, rather than the Minkowski-Hlawka theorem of lattice theory.
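In its usual formulation (assumed here, since the abstract does not restate it), the sphere bound says that for an infinite array L with volume V(L) per point on an n-dimensional AWGN channel with noise N, the error probability satisfies

  Pr(E) >= Pr(||N|| > r),  where r is defined by  V_n * r^n = V(L)

and V_n is the volume of the unit n-sphere; approaching the bound means making decision regions behave, in volume and error probability, essentially like spheres.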
IEEE Transactions on Information Theory | 1993
M.V. Eyuboglu; G.D. Forney
High-rate lattice and trellis quantizers for nonuniform sources are introduced and analyzed. The performance of these quantizers is determined by two separable quantities, the granular gain and the boundary gain, which are determined by the shapes of the granular cells and of the support region, respectively. The granular gain and boundary gain are the duals of shaping and coding gain in data transmission applications. Using this duality, it is shown for Gaussian sources that the ultimate achievable boundary gain with high-rate lattice-bounded lattice codebooks is the same as the ultimate gain that can be obtained from variable-rate entropy coding. It is observed that if lattice codebooks can achieve the ultimate granular gain of 0.255 bit per dimension, then lattice-bounded lattice codebooks can approach the rate-distortion limit. The performance of lattice quantizers is compared to that of optimum vector quantizers.
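The 0.255 bit/dimension figure is the rate equivalent of the same pi*e/6 factor that gives the ultimate 1.53 dB shaping gain on the transmission side of the duality:

  (1/2) log2(pi*e/6) ≈ 0.255 bit per dimension,  10 log10(pi*e/6) ≈ 1.53 dB.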
IEEE Transactions on Information Theory | 2007
G.D. Forney; Markus Grassl; S. Guha
Rate-(n-2)/n unrestricted and CSS-type quantum convolutional codes with up to 4096 states and minimum distances up to 10 are constructed as stabilizer codes from classical self-orthogonal rate-1/n F4-linear and binary linear convolutional codes, respectively. These codes generally have higher rate and less decoding complexity than comparable quantum block codes or previous quantum convolutional codes. Rate-(n-2)/n block stabilizer codes with the same rate and error-correction capability and essentially the same decoding complexity are derived from these convolutional codes via tail-biting.
IEEE Transactions on Information Theory | 1996
G.D. Forney; A. Vardy
It is shown that multistage generalized minimum-distance (GMD) decoding of Euclidean-space codes and lattices can provide an excellent tradeoff between performance and complexity. We introduce a reliability metric for Gaussian channels that is easily computed from an inner product, and prove that a multistage GMD decoder using this metric is a bounded-distance decoder up to the true packing radius. The effective error coefficient of multistage GMD decoding is determined. Two simple modifications in the GMD decoding algorithm that drastically reduce this error coefficient are proposed. It is shown that with these modifications GMD decoding achieves the error coefficient of maximum-likelihood decoding for block codes and for generalized construction A lattices. Multistage GMD decoding of the lattices D_4, E_8, K_12, BW_16, and Λ_24 is investigated in detail. For K_12, BW_16, and Λ_24, the GMD decoders have considerably lower complexity than the best known maximum-likelihood or bounded-distance decoding algorithms, and appear to be the most practically attractive decoders available. For high-dimensional codes and lattices (≥ 64 dimensions) maximum-likelihood decoding becomes infeasible, while GMD decoding algorithms remain quite practical. As an example, we devise a multistage GMD decoder for a 128-dimensional sphere packing with a nominal coding gain of 8.98 dB that attains an effective error coefficient of 1365760. This decoder requires only about 400 real operations, in addition to algebraic errors-and-erasures decoding of certain BCH and Hamming codes. It therefore appears to be practically feasible to implement algebraic multistage GMD decoders for high-dimensional sphere packings, and thus achieve high effective coding gains.
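Generically, one GMD stage starts from hard decisions plus per-symbol reliabilities, successively erases the least reliable positions, reruns an algebraic errors-and-erasures decoder, and keeps the candidate that scores best under the reliability metric. The sketch below is a generic outline under those assumptions; ee_decode is a hypothetical errors-and-erasures decoder callback and all names are illustrative, not taken from the paper.

def gmd_stage(reliabilities, hard_symbols, ee_decode, d_min):
    # Generalized minimum-distance decoding, one stage (generic sketch).
    #   reliabilities: per-symbol reliabilities (larger = more reliable)
    #   hard_symbols:  hard-decision symbols
    #   ee_decode:     hypothetical errors-and-erasures decoder; given the hard
    #                  word and an erasure set, returns a codeword or None
    #   d_min:         minimum distance of the component code
    order = sorted(range(len(reliabilities)), key=lambda i: reliabilities[i])
    best, best_score = None, float("-inf")
    for n_erase in range(0, d_min, 2):  # erase 0, 2, 4, ... least reliable symbols
        candidate = ee_decode(hard_symbols, set(order[:n_erase]))
        if candidate is None:
            continue
        # Correlation-style score: reward agreement with reliable hard decisions.
        score = sum(r if c == h else -r
                    for r, c, h in zip(reliabilities, candidate, hard_symbols))
        if score > best_score:
            best, best_score = candidate, score
    return best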