Publication


Featured research published by Elwyn R. Berlekamp.


Information & Computation | 1967

Lower bounds to error probability for coding on discrete memoryless channels. II.

Claude E. Shannon; Robert G. Gallager; Elwyn R. Berlekamp

New lower bounds are presented for the minimum error probability that can be achieved through the use of block coding on noisy discrete memoryless channels. Like previous upper bounds, these lower bounds decrease exponentially with the block length N. The coefficient of N in the exponent is a convex function of the rate. From a certain rate of transmission up to channel capacity, the exponents of the upper and lower bounds coincide. Below this particular rate, the exponents of the upper and lower bounds differ, although they approach the same limit as the rate approaches zero. Examples are given and various incidental results and techniques relating to coding theory are developed. The paper is presented in two parts: the first, appearing here, summarizes the major results and treats the case of high transmission rates in detail; the second, to appear in the subsequent issue, treats the case of low transmission rates.
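The exponential behavior described above can be sketched numerically for a binary symmetric channel. This is a minimal sketch, not from the paper: it assumes a Gallager-style E_0 function with uniform inputs, a grid maximization, and an illustrative crossover probability of 0.05. The exponent of the upper bound on error probability maximizes over 0 ≤ ρ ≤ 1; the exponent of the lower bound maximizes over all ρ ≥ 0 (truncated here).

```python
import math

def e0(rho, p):
    # Gallager-style E0 function for a BSC with crossover p, uniform inputs
    s = 1.0 / (1.0 + rho)
    return rho - (1.0 + rho) * math.log2(p ** s + (1.0 - p) ** s)

def upper_bound_exponent(rate, p, grid=2000):
    # exponent of the upper bound: maximize E0(rho) - rho*R over 0 <= rho <= 1
    return max(e0(i / grid, p) - (i / grid) * rate for i in range(grid + 1))

def lower_bound_exponent(rate, p, rho_max=50.0, grid=20000):
    # exponent of the lower bound: maximize over rho >= 0 (truncated grid)
    rhos = (rho_max * i / grid for i in range(grid + 1))
    return max(e0(r, p) - r * rate for r in rhos)

p = 0.05
for rate in (0.2, 0.5):
    eu, el = upper_bound_exponent(rate, p), lower_bound_exponent(rate, p)
    print(f"R={rate}: upper-bound exponent {eu:.4f}, lower-bound exponent {el:.4f}")
```

For this channel the exponents agree (up to grid error) at R = 0.5, which lies above the critical rate, and differ at R = 0.2 below it, matching the coincidence/divergence the abstract describes.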


IEEE Transactions on Information Theory | 1982

Bit-serial Reed-Solomon encoders

Elwyn R. Berlekamp

This paper presents new concepts and techniques for implementing encoders for Reed-Solomon codes, with or without interleaving. Reed-Solomon encoders based on these concepts and techniques often require substantially less hardware than encoders for linear cyclic binary codes of comparable redundancy. A codeword of a cyclic code is a sequence of characters which can be viewed as the coefficients of a polynomial c(x) = \sum_{i=0}^{n-1} c_i x^i. The characters c_{n-1}, c_{n-2}, c_{n-3}, …, c_1, c_0 are elements in a finite field. In this paper, we consider only fields of order 2^m, where m might be any integer. A sequence of n characters is a codeword if and only if its corresponding polynomial, c(x), is a multiple of the code's generator polynomial, g(x). Let deg g(x) = n - k. The common method of encoding a cyclic code is to regard c_{n-1}, c_{n-2}, …, c_{n-k} as message characters, and to divide the polynomial
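The common encoding method sketched above, division of the shifted message polynomial by the generator, can be illustrated over GF(2). This is a toy sketch, not the paper's bit-serial technique; the (7,4) Hamming code with g(x) = x^3 + x + 1 is an illustrative choice.

```python
def gf2_mod(dividend, divisor):
    # remainder of binary polynomial division (polynomials as int bitmasks)
    while dividend.bit_length() >= divisor.bit_length():
        dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
    return dividend

def encode_cyclic(msg, g, n, k):
    # systematic encoding: c(x) = x^(n-k) m(x) + (x^(n-k) m(x) mod g(x)),
    # which is divisible by g(x) over GF(2)
    shifted = msg << (n - k)
    return shifted | gf2_mod(shifted, g)

# (7,4) cyclic Hamming code, g(x) = x^3 + x + 1
cw = encode_cyclic(0b1101, 0b1011, 7, 4)
print(format(cw, "07b"))  # -> 1101001
assert gf2_mod(cw, 0b1011) == 0  # codeword is a multiple of g(x)
```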


Proceedings of the IEEE | 1980

The technology of error-correcting codes

Elwyn R. Berlekamp

This paper is a survey of error-correcting codes, with emphasis on the costs of encoders and decoders, and the relationship of these costs to various important system parameters such as speed and delay. Following an introductory overview, the remainder of this paper is divided into three sections corresponding to the three major types of channel noise: white Gaussian noise, interference, and digital errors of the sort which occur in secondary memories such as disks. Appendix A presents some of the more important facts about modern implementations of decoders for long high-rate Reed-Solomon codes, which play an important role throughout the paper. Appendix B investigates some important aspects of the tradeoffs between error correction and error detection.


Information & Computation | 1967

On the solution of algebraic equations over finite fields

Elwyn R. Berlekamp; H. Rumsey; G. Solomon

This article gives new fast methods for decoding certain error-correcting codes by solving certain algebraic equations. As described by Peterson (1961), the locations of a Bose-Chaudhuri-Hocquenghem code over a field of characteristic p are associated with the elements of an extension field, GF(p^k). The code is designed in such a way that the weighted power-sum symmetric functions of the error locations can be obtained directly by computing appropriately chosen parity checks on the received word. Good methods for computing the elementary symmetric functions from the weighted power-sum symmetric functions have been presented by Berlekamp (1967). The elementary symmetric functions σ_1, σ_2, …, σ_t are the coefficients of an algebraic equation whose roots are the error locations: x^t + σ_1 x^{t-1} + σ_2 x^{t-2} + ⋯ + σ_t = 0. Previous methods for finding the roots of this equation have searched all of the elements in GF(p^k) (Chien, 1964) or looked up the answer in a large table (Polkinghorn, 1966). We present here improved procedures for extracting the roots of algebraic equations of small degrees.
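The exhaustive search baseline that this paper improves upon is easy to sketch for a small field. The GF(16) construction via the primitive polynomial x^4 + x + 1 and the degree-2 locator polynomial below are illustrative assumptions, not taken from the paper.

```python
PRIM = 0b10011  # x^4 + x + 1, a primitive polynomial for GF(16)

def gf16_mul(a, b):
    # carry-less multiply with reduction modulo PRIM
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= PRIM
        b >>= 1
    return r

def gf16_eval(coeffs, x):
    # Horner evaluation; coeffs listed high-degree first
    acc = 0
    for c in coeffs:
        acc = gf16_mul(acc, x) ^ c
    return acc

def exhaustive_roots(coeffs):
    # Chien-style search: simply try every element of GF(16)
    return [x for x in range(16) if gf16_eval(coeffs, x) == 0]

# sigma(x) = (x - 3)(x - 7) = x^2 + (3+7)x + 3*7 over GF(16)
sigma = [1, 3 ^ 7, gf16_mul(3, 7)]
print(exhaustive_roots(sigma))  # -> [3, 7]
```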


IEEE Communications Magazine | 1987

The application of error control to communications

Elwyn R. Berlekamp; R. Peile; S. Pope

Error control is an area of increasing importance in communications. This is partly because the issue of data integrity is becoming increasingly important. There is downward pressure on the allowable error rates for communications and mass storage systems as bandwidths and volumes of data increase. Certain data cannot be wrong; for example, no one can be complacent about the effect of an undetected data error on a weapons control system. More generally, in any system which handles large amounts of data, uncorrected and undetected errors can degrade performance and response time, and can increase the need for intervention by human operators. Just as important as the data integrity issue is the increasing realization that error control is a system design technique that can fundamentally change the trade-offs in a communications system design. To take some examples:


IEEE Transactions on Information Theory | 1967

A lower bound to the distribution of computation for sequential decoding

Irwin M. Jacobs; Elwyn R. Berlekamp

In sequential decoding, the number of computations which the decoder must perform to decode the received digits is a random variable. In this paper, we derive a Paretian lower bound to the distribution of this random variable. We show that P[C > L] \gtrsim L^{-\rho} , where C is the number of computations which the sequential decoder must perform to decode a block of \Lambda transmitted bits, and \rho is a parameter which depends on the channel and the rate of the code. Our bound is valid for all sequential decoding schemes and all discrete memoryless channels. In Section II we give an example of a special channel for which a Paretian bound can be easily derived. In Sections III and IV we treat the general channel. In Section V we relate this bound to the memory buffer requirements of real-time sequential decoders. In Section VI, we show that this bound implies that certain moments of the distribution of the computation per digit are infinite, and we determine lower bounds to the rates above which these moments diverge. In most cases, our bounds coincide with previously known upper bounds to rates above which the moments converge. We conclude that the performance of systems using sequential decoding is limited by the computational and buffer capabilities of the decoder, not by the probability of making a decoding error. We further note that our bound applies only to sequential decoding, and that, in certain special cases (Section II), algebraic decoding methods prove superior.
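The divergence of moments under a Paretian tail can be checked in closed form. This sketch assumes a continuous Pareto density ρ x^{-ρ-1} on [1, ∞) and an illustrative ρ = 1; neither comes from the paper.

```python
import math

def partial_moment(k, rho, upper):
    # closed form of integral_1^upper x^k * rho * x^(-rho-1) dx,
    # i.e. the k-th moment truncated at `upper` for a Pareto tail
    # P[C > L] = L^(-rho), L >= 1
    if abs(k - rho) < 1e-12:
        return rho * math.log(upper)
    return rho * (upper ** (k - rho) - 1.0) / (k - rho)

rho = 1.0  # illustrative value of the channel/rate parameter
for upper in (1e2, 1e4, 1e6):
    print(upper, partial_moment(0.5, rho, upper), partial_moment(1.5, rho, upper))
```

As the truncation point grows, the half-moment converges to ρ/(ρ - 0.5) = 2, while the 1.5th moment grows without bound, matching the statement that moments of order at or above ρ are infinite.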


IEEE Transactions on Information Theory | 1972

Weight distributions of the cosets of the (32,6) Reed-Muller code

Elwyn R. Berlekamp; Lloyd R. Welch

In this paper we present the weight distribution of all 2^26 cosets of the (32,6) first-order Reed-Muller code. The code is invariant under the complete affine group, of order 32 \times 31 \times 30 \times 28 \times 24 \times 16. In the Appendix we show (by hand computations) that this group partitions the 2^26 cosets into only 48 equivalence classes, and we obtain the number of cosets in each class. A simple computer program then enumerated the weights of the 64 vectors in each of the 48 cosets. These coset enumerations also answer this equivalent problem: how well are the 2^32 Boolean functions of five variables approximated by the 2^5 linear functions and their complements?
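The same enumeration strategy can be run in full on a toy case. This sketch uses the much smaller (8,4) first-order Reed-Muller code RM(1,3), an illustrative choice not from the paper, and enumerates the weight distributions of its 16 cosets.

```python
from itertools import product

# first-order Reed-Muller code RM(1,3): length 8, dimension 4;
# generators are the all-ones word plus the three coordinate functions
pts = list(product((0, 1), repeat=3))
gens = [tuple(1 for _ in pts)] + [tuple(p[i] for p in pts) for i in range(3)]

code = set()
for coeffs in product((0, 1), repeat=4):
    code.add(tuple(sum(a * g[j] for a, g in zip(coeffs, gens)) % 2
                   for j in range(8)))

# enumerate the 2^8 / 2^4 = 16 cosets and their weight distributions
seen, dists = set(), set()
for v in product((0, 1), repeat=8):
    if v in seen:
        continue
    coset = {tuple(a ^ b for a, b in zip(v, c)) for c in code}
    seen |= coset
    dists.add(tuple(sorted(sum(w) for w in coset)))

print(len(seen) // len(code), "cosets,", len(dists), "distinct weight distributions")
```

For this toy code the 16 cosets collapse to a handful of distinct weight distributions, a small-scale analogue of the 48 equivalence classes found for the (32,6) code.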


IEEE Transactions on Information Theory | 1974

Some long cyclic linear binary codes are not so bad

Elwyn R. Berlekamp; Jørn Justesen

We show that when an inner linear cyclic binary code which has an irreducible check polynomial is concatenated with an appropriately chosen maximal-distance-separable outer code, then the overall code is cyclic over GF(2) . Using this theorem, we construct a number of linear cyclic binary codes which are better than any previously known. In particular, by taking the inner code to be a quadratic residue code, we obtain linear cyclic binary codes of length N , rate R , and distance D \geq (1 - 2R)N/ \sqrt{2 \log N} , which compares favorably with the BCH distance D \sim (2 \ln R^{-1})N/\log N , although it still fails to achieve the linear growth of distance with block length which is possible with noncyclic linear concatenated codes. While this construction yields many codes, including several with block lengths greater than 10^{10^5} , we have not been able to prove that there are arbitrarily long codes of this type without invoking the Riemann hypothesis or the revised Artin conjecture, as the existence of long codes of our type is equivalent to the existence of large primes p for which the index of 2 is (p - 1)/2 .
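The number-theoretic condition at the end is easy to test for small primes. This sketch reads "the index of 2 is (p - 1)/2" as: 2 has multiplicative order exactly (p - 1)/2 modulo p (so the corresponding check polynomial of degree (p - 1)/2 is irreducible); that reading and the prime list below are assumptions of this sketch.

```python
def order_of_two(p):
    # multiplicative order of 2 modulo an odd prime p
    k, x = 1, 2 % p
    while x != 1:
        x, k = (2 * x) % p, k + 1
    return k

def index_of_two_is_half(p):
    # 2 generates a subgroup of index 2, i.e. its order is exactly (p-1)/2
    return order_of_two(p) == (p - 1) // 2

primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
print([p for p in primes if index_of_two_is_half(p)])  # -> [7, 17, 23, 41, 47]
```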


IEEE Transactions on Information Theory | 1968

Nonbinary BCH decoding (Abstr.)

Elwyn R. Berlekamp

The decoding of BCH codes readily reduces to the solution of a certain key equation. An iterative algorithm is presented for solving this equation over any field. Following a heuristic derivation of the algorithm, a complete statement of the algorithm and proofs of its principal properties are given. The relationship of this algorithm to the classical matrix methods and the simplification which the algorithm takes in the special case of binary codes is then discussed. The generalization of the algorithm to BCH codes with a slightly different definition, the generalization of the algorithm to decode erasures as well as errors, and the extension of the algorithm to decode more than t errors in certain cases are also presented.
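The binary special case mentioned above is often stated, in Massey's LFSR formulation, as finding the shortest linear-feedback shift register that generates a given bit sequence. A minimal sketch of that binary form follows; the example sequence is an illustrative choice.

```python
def berlekamp_massey_gf2(s):
    # shortest LFSR (connection polynomial c, length L) generating bits s
    c, b = [1], [1]   # current and previous connection polynomials
    L, m = 0, 1       # current LFSR length; steps since b was current
    for n in range(len(s)):
        # discrepancy between s[n] and the LFSR's prediction
        d = s[n]
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d:
            t = c[:]
            c += [0] * (m + len(b) - len(c))   # make room for x^m * b(x)
            for i, bi in enumerate(b):
                c[i + m] ^= bi                 # c(x) += x^m * b(x)
            if 2 * L <= n:
                L, b, m = n + 1 - L, t, 1
                continue
        m += 1
    return c, L

# bits generated by s[n] = s[n-1] ^ s[n-4], i.e. c(x) = 1 + x + x^4
s = [1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0]
print(berlekamp_massey_gf2(s))  # -> ([1, 1, 0, 0, 1], 4)
```

In BCH decoding the input sequence would be the syndromes and the returned polynomial the error locator; here a plain bit sequence keeps the sketch over GF(2).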


IEEE Transactions on Information Theory | 1973

Extended double-error-correcting binary Goppa codes are cyclic (Corresp.)

Elwyn R. Berlekamp; Oscar Moreno

The class of codes introduced by Goppa [1]-[3] includes the BCH codes as a proper subset. It also includes a large subset of asymptotically good codes, each of which has an algebraic decoding algorithm for correcting some smaller number of errors. In Section 7 of [1], Goppa gives necessary and sufficient conditions for his codes to be isomorphic to cyclic codes under a certain correspondence. In this correspondence, we exhibit another correspondence which reveals that certain other Goppa codes (including the example of Goppa's Section 6) become cyclic when extended by an overall parity check. In particular, the extended Goppa codes with (n,k,d) = (2^m + 1, 2^m - 2m, 6) are isomorphic to the reversible cyclic codes with check polynomial (x + 1)f(x) , where f(x) is an irreducible polynomial of period 2^m + 1 .

Collaboration


Dive into Elwyn R. Berlekamp's collaboration.

Top Co-Authors

Po Tong
University of California

Robert J. McEliece
California Institute of Technology

Lloyd R. Welch
University of California

Teigo Nakamura
Kyushu Institute of Technology

Aaron N. Siegel
Mathematical Sciences Research Institute

G. Solomon
Jet Propulsion Laboratory