Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rüdiger L. Urbanke is active.

Publication


Featured research published by Rüdiger L. Urbanke.


IEEE Transactions on Information Theory | 2001

Design of capacity-approaching irregular low-density parity-check codes

Thomas Richardson; Mohammad Amin Shokrollahi; Rüdiger L. Urbanke

We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2000). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.
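The density-evolution analysis and the stability condition described in this abstract take their simplest form on the binary erasure channel, where each message density collapses to a single erasure probability. The sketch below illustrates that special case; the degree distributions lambda and rho are made-up examples, not ensembles from the paper.

```python
# Density evolution for an irregular LDPC ensemble on the binary erasure
# channel (BEC). On the BEC the message densities reduce to one number and
# the recursion is x_{l+1} = eps * lam(1 - rho(1 - x_l)). The ensemble
# (lam, rho) below is an illustrative choice, not one from the paper.

def lam(x):                              # edge-perspective variable degrees
    return 0.5 * x + 0.5 * x**2          # lambda(x) = 0.5 x + 0.5 x^2

def rho(x):                              # edge-perspective check degrees
    return x**5                          # rho(x) = x^5 (all checks degree 6)

def density_evolution(eps, iters=2000):
    """Erasure probability reached after iters rounds, starting at eps."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
    return x

# Stability condition of the paper, specialized to the BEC:
# eps * lambda'(0) * rho'(1) < 1 is necessary for convergence to zero.
stability_bound = 1.0 / (0.5 * 5.0)      # lambda'(0) = 0.5, rho'(1) = 5

print(stability_bound)                    # 0.4
print(density_evolution(0.30) < 1e-9)     # below threshold: decodes
print(density_evolution(0.45) > 0.1)      # above stability bound: stuck
```

The stability bound only upper-bounds the threshold; the actual threshold of this toy ensemble is found by running the recursion itself, as in the example.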


IEEE Transactions on Information Theory | 2001

The capacity of low-density parity-check codes under message-passing decoding

Thomas Richardson; Rüdiger L. Urbanke

We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
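For the binary erasure channel the "capacity" (threshold) of an ensemble under message passing can be computed to any accuracy exactly as the abstract promises, by bisection on the channel parameter. A minimal sketch for the (3,6)-regular ensemble, whose BP threshold is known to be about 0.4294 (iteration counts and tolerances are illustrative choices):

```python
# Belief-propagation threshold of the (3,6)-regular LDPC ensemble on the
# BEC, found by bisection on the erasure rate eps. lambda(x) = x^2 and
# rho(x) = x^5 give the one-dimensional density-evolution recursion.

def converges(eps, iters=5000, tol=1e-10):
    """Does density evolution drive the erasure probability to zero?"""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** 5) ** 2
        if x < tol:
            return True
    return False

def threshold(lo=0.0, hi=1.0, bits=40):
    for _ in range(bits):
        mid = 0.5 * (lo + hi)
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(threshold(), 4))   # about 0.429
```

Below the returned value, a random code from the ensemble decodes with probability approaching one in the block length; above it, the error probability stays bounded away from zero, exactly as stated in the abstract.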


IEEE Communications Letters | 2001

On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit

Sae-Young Chung; G.D. Forney; Thomas Richardson; Rüdiger L. Urbanke

We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a somewhat simpler code show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 using a block length of 10^7.
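The benchmark in this abstract, the Shannon limit of the rate-1/2 binary-input AWGN channel, is about 0.187 dB in Eb/N0 and can be reproduced numerically. A hedged sketch (integration ranges, point counts, and brackets are arbitrary numerical choices, not values from the paper):

```python
# Shannon limit of the binary-input AWGN channel at rate 1/2, via numeric
# integration of the capacity formula and bisection on the noise level.
import math

def biawgn_capacity(sigma, n=4000):
    """Capacity (bits/use) for inputs +-1 and noise std sigma:
    C = 1 - E[log2(1 + exp(-2Y/sigma^2))], Y ~ N(1, sigma^2)."""
    s2 = sigma * sigma
    lo, hi = 1.0 - 8.0 * sigma, 1.0 + 8.0 * sigma
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        y = lo + i * h
        w = math.exp(-(y - 1.0) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)
        f = w * math.log2(1.0 + math.exp(-2.0 * y / s2))
        total += f if 0 < i < n else 0.5 * f   # trapezoid rule
    return 1.0 - total * h

def shannon_limit_db(rate=0.5):
    lo, hi = 0.5, 2.0                       # bracket the critical sigma
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if biawgn_capacity(mid) > rate:
            lo = mid                        # capacity still above the rate
        else:
            hi = mid
    sigma = 0.5 * (lo + hi)
    return 10.0 * math.log10(1.0 / (2.0 * rate * sigma * sigma))  # Eb/N0, dB

print(round(shannon_limit_db(), 3))         # about 0.187
```

The paper's best rate-1/2 code has a threshold within 0.0045 dB of this number.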


IEEE Transactions on Information Theory | 2001

Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation

Sae-Young Chung; Thomas Richardson; Rüdiger L. Urbanke

Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under message-passing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densities under density evolution to simplify the analysis of the decoding algorithm. We convert the infinite-dimensional problem of iteratively calculating message densities, which is needed to find the exact threshold, to a one-dimensional problem of updating the means of the Gaussian densities. This simplification not only allows us to calculate the threshold quickly and to understand the behavior of the decoder better, but also makes it easier to design good irregular LDPC codes for AWGN channels. For various regular LDPC codes we have examined, thresholds can be estimated within 0.1 dB of the exact value. For rates between 0.5 and 0.9, codes designed using the Gaussian approximation perform within 0.02 dB of the best performing codes found so far by using density evolution when the maximum variable degree is 10. We show that by using the Gaussian approximation, we can visualize the sum-product decoding algorithm. We also show that the optimization of degree distributions can be understood and done graphically using the visualization.
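The one-dimensional mean recursion described in this abstract can be sketched for the (3,6)-regular ensemble: messages are modeled as symmetric Gaussians N(m, 2m), so only the mean m is tracked, with the check-node update expressed through the function phi(m) = 1 - E[tanh(U/2)], U ~ N(m, 2m). The integration grid, bisection ranges, and the divergence cutoff below are illustrative numerical choices, not values from the paper.

```python
# Gaussian-approximation density evolution for the (3,6) ensemble on the
# binary-input AWGN channel: one mean is iterated instead of a density.
import math

def phi(m, n=400):
    """E[2 / (1 + exp(U))] for U ~ N(m, 2m); equals 1 - E[tanh(U/2)]."""
    if m <= 0.0:
        return 1.0
    std = math.sqrt(2.0 * m)
    lo, hi = m - 8.0 * std, m + 8.0 * std
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        w = math.exp(-(u - m) ** 2 / (4.0 * m)) / math.sqrt(4.0 * math.pi * m)
        f = w * 2.0 / (1.0 + math.exp(u)) if u < 700 else 0.0
        total += f if 0 < i < n else 0.5 * f   # trapezoid rule
    return total * h

def phi_inv(t, lo=1e-9, hi=400.0):
    for _ in range(40):                  # phi is decreasing, so bisect
        mid = 0.5 * (lo + hi)
        if phi(mid) > t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ga_converges(sigma, dv=3, dc=6, iters=120):
    """Does the Gaussian-approximation mean iteration diverge (decode)?"""
    m_u = 0.0
    m0 = 2.0 / (sigma * sigma)           # mean of the channel LLR
    for _ in range(iters):
        m_v = m0 + (dv - 1) * m_u        # variable-node update
        m_u = phi_inv(1.0 - (1.0 - phi(m_v)) ** (dc - 1))
        if m_u > 25.0:                   # means blow up: decoding succeeds
            return True
    return False

print(ga_converges(0.8))    # below the approximate threshold: True
print(ga_converges(1.0))    # above it: False
```

Scanning sigma between these two values with such an iteration is how the approximate threshold is located, to within roughly 0.1 dB of the exact density-evolution value according to the abstract.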


IEEE Transactions on Information Theory | 2002

Finite-length analysis of low-density parity-check codes on the binary erasure channel

Changyan Di; D. Proietti; I.E. Telatar; T.J. Richardson; Rüdiger L. Urbanke

Note: an earlier version was presented at the 39th Allerton Conference on Communication, Control, and Computing (2001).


IEEE Transactions on Information Theory | 2000

Systematic design of unitary space-time constellations

Bertrand M. Hochwald; Thomas L. Marzetta; Thomas Richardson; Wim Sweldens; Rüdiger L. Urbanke

We propose a systematic method for creating constellations of unitary space-time signals for multiple-antenna communication links. Unitary space-time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transmitter nor the receiver knows the fading coefficients. The signals can achieve low probability of error by exploiting multiple-antenna diversity. Because the fading coefficients are not known, the criterion for creating and evaluating the constellation is nonstandard and differs markedly from the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation-an oblong complex-valued matrix whose columns are orthonormal-and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space. This construction easily produces large constellations of high-dimensional signals. We demonstrate its efficacy through examples involving one, two, and three transmitter antennas.
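The rotation construction described above can be sketched in a few lines: start from one T x M matrix with orthonormal columns and generate the rest of the constellation by repeated application of a fixed diagonal unitary whose entries are roots of unity. The parameters T, M, L and the frequency vector u below are toy choices for illustration, not parameters from the paper.

```python
# Generate a unitary space-time constellation by successive rotation of an
# initial signal, and verify each signal stays orthonormal in time.
import cmath, math

T, M, L = 4, 1, 8                 # block length, antennas, constellation size
u = [1, 3, 5, 7]                  # one rotation frequency per time slot

# First signal: a T x M matrix with orthonormal columns (a DFT column here).
phi0 = [[cmath.exp(2j * math.pi * t * m / T) / math.sqrt(T) for m in range(M)]
        for t in range(T)]

def rotate(phi, k):
    """Apply Theta^k, Theta = diag(exp(2*pi*i*u_t/L)), to signal phi."""
    return [[cmath.exp(2j * math.pi * u[t] * k / L) * phi[t][m]
             for m in range(M)] for t in range(T)]

constellation = [rotate(phi0, k) for k in range(L)]

def col_gram(phi):
    """M x M Gram matrix phi^H phi; identity iff columns are orthonormal."""
    return [[sum(phi[t][a].conjugate() * phi[t][b] for t in range(T))
             for b in range(M)] for a in range(M)]

# Every rotated signal is still unitary in time across the antennas.
ok = all(abs(col_gram(p)[0][0] - 1.0) < 1e-12 for p in constellation)
print(ok)   # True
```

Because the rotation is unitary, orthonormality of the first signal propagates to the whole constellation, which is what makes the construction systematic: only the initial matrix and the rotation frequencies need to be designed.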


IEEE Transactions on Information Theory | 2010

Polar Codes are Optimal for Lossy Source Coding

Satish Babu Korada; Rüdiger L. Urbanke

We consider lossy source compression of a binary symmetric source using polar codes and a low-complexity successive encoding algorithm. It was recently shown by Arikan that polar codes achieve the capacity of arbitrary symmetric binary-input discrete memoryless channels under a successive decoding strategy. We show the equivalent result for lossy source compression, i.e., we show that this combination achieves the rate-distortion bound for a binary symmetric source. We further show the optimality of polar codes for various multiterminal problems including the binary Wyner-Ziv and the binary Gelfand-Pinsker problems. Our results extend to general versions of these problems.
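The successive encoding in this abstract rests on Arikan's polarization phenomenon. On the binary erasure channel it is exactly computable: the Bhattacharyya parameter of the two synthesized channels evolves as z -> 2z - z^2 (worse) and z -> z^2 (better). A hedged sketch with illustrative depth and cutoffs:

```python
# Channel polarization on the BEC: track Bhattacharyya parameters of all
# 2**levels synthesized channels under the polar transform.

def polarize(eps, levels):
    """Bhattacharyya parameters after `levels` polarization steps."""
    zs = [eps]
    for _ in range(levels):
        zs = [f(z) for z in zs for f in (lambda z: 2*z - z*z, lambda z: z*z)]
    return zs

zs = polarize(0.5, 10)                    # 1024 synthesized channels
mean = sum(zs) / len(zs)
extreme = sum(1 for z in zs if z < 0.05 or z > 0.95) / len(zs)

print(abs(mean - 0.5) < 1e-9)   # the average erasure rate is conserved
print(min(zs) < 1e-6, max(zs) > 1 - 1e-6)
print(round(extreme, 2))        # most channels already near 0 or 1
```

The channels driven toward 0 carry information (or, in the source-coding role of this paper, are frozen or encoded accordingly); conservation of the mean is the martingale property behind the polarization argument.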


IEEE Communications Magazine | 2003

The renaissance of Gallager's low-density parity-check codes

Thomas Richardson; Rüdiger L. Urbanke

LDPC codes were invented in 1960 by R. Gallager. They were largely ignored until the discovery of turbo codes in 1993. Since then, LDPC codes have experienced a renaissance and are now one of the most intensely studied areas in coding. In this article we review the basic structure of LDPC codes and the iterative algorithms that are used to decode them. We also briefly consider the state of the art of LDPC design.


International Symposium on Information Theory | 2010

Threshold saturation via spatial coupling: Why convolutional LDPC ensembles perform so well over the BEC

Shrinivas Kudekar; Thomas Richardson; Rüdiger L. Urbanke

Convolutional LDPC ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing as a function of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism which explains why “convolutional-like” or “spatially coupled” codes perform so well. In essence, the spatial coupling of the individual code structure has the effect of increasing the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum-a-posteriori (MAP) threshold of the underlying ensemble. For this reason we call this phenomenon “threshold saturation”. This gives an entirely new way of approaching capacity. One significant advantage of such a construction is that one can create capacity-approaching ensembles with an error correcting radius which is increasing in the blocklength. Our proof makes use of the area theorem of the BP-EXIT curve and the connection between the MAP and BP threshold recently pointed out by Measson, Montanari, Richardson, and Urbanke. Although we prove the connection between the MAP and the BP threshold only for a very specific ensemble and only for the binary erasure channel, empirically the same statement holds for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar collapse of thresholds occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms as well as to new techniques for analysis.
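Threshold saturation on the BEC can be reproduced with a small density-evolution experiment: couple a chain of (3,6) ensembles with known (perfectly decoded) boundaries and run the coupled recursion above the uncoupled BP threshold of about 0.4294 but below the MAP threshold of about 0.4881. The chain length L, coupling width w, erasure rate, and iteration counts below are illustrative choices, not parameters from the paper.

```python
# Spatially coupled density evolution for a chain of (3,6) BEC ensembles.

def coupled_de(eps, L=32, w=3, iters=3000):
    """Max residual erasure probability along the coupled chain."""
    x = [eps] * L                                    # positions 0 .. L-1
    for _ in range(iters):
        def xs(m):                                   # outside chain: known
            return x[m] if 0 <= m < L else 0.0
        # check at position p averages variables at p-w+1 .. p (rho: x^5)
        y = {p: 1.0 - (sum(1.0 - xs(p - j) for j in range(w)) / w) ** 5
             for p in range(L + w - 1)}
        # variable at i averages checks at i .. i+w-1 (lambda: x^2)
        x = [eps * (sum(y[i + k] for k in range(w)) / w) ** 2
             for i in range(L)]
    return max(x)

def uncoupled_de(eps, iters=3000):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** 5) ** 2
    return x

print(coupled_de(0.45) < 1e-6)     # coupled chain decodes at eps = 0.45
print(uncoupled_de(0.45) > 0.2)    # uncoupled (3,6) ensemble is stuck
```

The decoding wave starts at the seeded boundaries and sweeps inward, which is the mechanism the abstract identifies: coupling lifts the BP threshold of the chain toward the MAP threshold of the underlying ensemble.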


International Symposium on Information Theory | 2000

Multiple-antenna signal constellations for fading channels

Dakshi Agrawal; Thomas Richardson; Rüdiger L. Urbanke

In this correspondence, we show that the problem of designing efficient multiple-antenna signal constellations for fading channels can be related to the problem of finding packings with large minimum distance in the complex Grassmannian space. We describe a numerical optimization procedure for finding good packings in the complex Grassmannian space and report the best signal constellations found by this procedure. These constellations improve significantly upon previously known results.
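The packing problem in this correspondence can be tried at toy scale: place N lines (one-dimensional subspaces) in C^2 so that the minimum pairwise chordal distance sqrt(1 - |<a, b>|^2) is large. A plain best-of-many random search stands in here for the paper's optimization procedure; the trial count and random seed are arbitrary.

```python
# Random search for a packing of 4 lines in the complex Grassmannian G(1, 2),
# maximizing the minimum pairwise chordal distance.
import math, random

random.seed(1)

def random_line():
    """A random unit vector in C^2 representing a line."""
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    n = math.sqrt(sum(abs(z) ** 2 for z in v))
    return [z / n for z in v]

def min_chordal(lines):
    """Minimum pairwise chordal distance sqrt(1 - |<a, b>|^2)."""
    d = float("inf")
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            ip = abs(sum(a.conjugate() * b
                         for a, b in zip(lines[i], lines[j])))
            d = min(d, math.sqrt(max(0.0, 1.0 - ip * ip)))
    return d

best = max((tuple(map(tuple, [random_line() for _ in range(4)]))
            for _ in range(3000)), key=min_chordal)
print(min_chordal(best) > 0.6)   # far better than a typical single draw
```

The constellations reported in the paper come from a far more careful numerical optimization than this sketch, but the objective, large minimum distance in Grassmannian space rather than Euclidean distance, is the same.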

Collaboration


Dive into Rüdiger L. Urbanke's collaborations.

Top Co-Authors

Marco Mondelli (École Polytechnique Fédérale de Lausanne)
Nicolas Macris (École Polytechnique Fédérale de Lausanne)
Igal Sason (Technion – Israel Institute of Technology)
Bixio Rimoldi (École Polytechnique Fédérale de Lausanne)
Abdelaziz Amraoui (École Polytechnique Fédérale de Lausanne)
Satish Babu Korada (École Polytechnique Fédérale de Lausanne)