Publication


Featured research published by Hans-Andrea Loeliger.


IEEE Signal Processing Magazine | 2004

An introduction to factor graphs

Hans-Andrea Loeliger

Graphical models such as factor graphs allow a unified approach to a number of key topics in coding and signal processing, such as the iterative decoding of turbo codes, LDPC codes, and similar codes, joint decoding, equalization, parameter estimation, hidden-Markov models, Kalman filtering, and recursive least squares. Graphical models can represent complex real-world systems, and such representations help to derive practical detection/estimation algorithms in a wide area of applications. Most known signal processing techniques, including gradient methods, Kalman filtering, and particle methods, can be used as components of such algorithms. Unlike most of the previous literature, we use Forney-style factor graphs, which support hierarchical modeling and are compatible with standard block diagrams.
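
As a toy illustration of the sum-product rule that factor graphs support (a minimal sketch, not from the paper; the factors and variable alphabets below are arbitrary assumptions), the following computes a marginal on a small chain-structured factor graph by message passing and checks it against brute-force summation:

```python
# A minimal sketch of sum-product message passing on a chain-structured
# factor graph p(x1, x2, x3) proportional to fA(x1, x2) * fB(x2, x3),
# with all variables binary and the factor values chosen arbitrarily.
# The marginal of x2 is computed from the two incoming messages and
# checked against direct summation over the joint.

import numpy as np

fA = np.array([[1.0, 2.0],   # fA[x1, x2]
               [3.0, 1.0]])
fB = np.array([[2.0, 1.0],   # fB[x2, x3]
               [1.0, 4.0]])

# Message from factor fA to variable x2: sum over x1 of fA(x1, x2).
msg_A_to_x2 = fA.sum(axis=0)
# Message from factor fB to variable x2: sum over x3 of fB(x2, x3).
msg_B_to_x2 = fB.sum(axis=1)

# The marginal of x2 is the normalized product of the incoming messages.
p_x2 = msg_A_to_x2 * msg_B_to_x2
p_x2 /= p_x2.sum()

# Brute-force check: sum the unnormalized joint over x1 and x3.
joint = np.einsum('ij,jk->ijk', fA, fB)      # joint[x1, x2, x3]
p_x2_brute = joint.sum(axis=(0, 2))
p_x2_brute /= p_x2_brute.sum()

assert np.allclose(p_x2, p_x2_brute)
print("p(x2):", p_x2)
```

On a cycle-free graph such as this chain, message passing gives exact marginals; on graphs with cycles, as in turbo and LDPC decoding, the same local rules are applied iteratively and yield approximate marginals.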


IEEE Transactions on Information Theory | 2006

Simulation-Based Computation of Information Rates for Channels With Memory

Dieter-Michael Arnold; Hans-Andrea Loeliger; Pascal O. Vontobel; Aleksandar Kavcic; Wei Zeng

The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods.
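
A minimal sketch of the estimation idea in a simple special case, assuming a toy binary-input ISI channel y_k = x_k + 0.8·x_{k-1} + Gaussian noise with i.i.d. uniform inputs in {-1, +1} (the channel coefficients, noise level, and sequence length are illustrative assumptions, not taken from the paper):

```python
# Simulation-based estimate of the information rate of a binary-input ISI
# channel (toy parameters). I(X;Y) is estimated as
#   (1/n) * [ log2 p(y | x) - log2 p(y) ],
# where log2 p(y) comes from a scaled forward sum-product recursion over the
# channel trellis whose state is the previous channel input.

import numpy as np

rng = np.random.default_rng(0)
n, a, sigma = 100_000, 0.8, 0.5

x = rng.choice([-1.0, 1.0], size=n + 1)      # x[0] initializes the channel state
noise = rng.normal(0.0, sigma, size=n)
y = x[1:] + a * x[:-1] + noise               # simulated channel output

def gauss(y_k, mean):
    return np.exp(-(y_k - mean) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# log2 p(y | x): given the inputs, the outputs are independent Gaussians.
log_p_y_given_x = np.sum(np.log2(gauss(y, x[1:] + a * x[:-1])))

# log2 p(y): scaled forward recursion; alpha[s] is the forward message over
# the state s (the previous input), states in {-1, +1}.
states = np.array([-1.0, 1.0])
alpha = np.array([0.5, 0.5])
log_p_y = 0.0
for y_k in y:
    new_alpha = np.zeros(2)
    for i, s in enumerate(states):           # previous input (state)
        for j, u in enumerate(states):       # current input, probability 1/2
            new_alpha[j] += alpha[i] * 0.5 * gauss(y_k, u + a * s)
    scale = new_alpha.sum()                  # p(y_k | y_1..k-1); scaling avoids underflow
    log_p_y += np.log2(scale)
    alpha = new_alpha / scale

info_rate = (log_p_y_given_x - log_p_y) / n
print(f"Estimated information rate: {info_rate:.3f} bits/use")
```

The key point is that log p(y) for the long simulated sequence is obtained by a single scaled forward recursion over the trellis, so the estimate converges to the information rate as the sequence length grows.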


Proceedings of the IEEE | 2007

The Factor Graph Approach to Model-Based Signal Processing

Hans-Andrea Loeliger; Justin Dauwels; Junli Hu; Sascha Korl; Li Ping; Frank R. Kschischang

The message-passing approach to model-based signal processing is developed with a focus on Gaussian message passing in linear state-space models, which includes recursive least squares, linear minimum-mean-squared-error estimation, and Kalman filtering algorithms. Tabulated message computation rules for the building blocks of linear models allow us to compose a variety of such algorithms without additional derivations or computations. Beyond the Gaussian case, it is emphasized that the message-passing approach encourages us to mix and match different algorithmic techniques, which is exemplified by two different approaches - steepest descent and expectation maximization - to message passing through a multiplier node.
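
As a concrete instance of Gaussian message passing, here is a minimal sketch of the forward (filtering) recursion in a scalar linear state-space model; the model and its parameters are illustrative assumptions, and in this special case the message updates reduce to the familiar Kalman filter equations:

```python
# Forward Gaussian message passing (scalar Kalman filter) in the toy model
#   x_k = a * x_{k-1} + w_k,   y_k = c * x_k + v_k,
# where each forward message is a Gaussian summarized by its mean m and
# variance V. Parameters below are assumed for illustration.

import numpy as np

rng = np.random.default_rng(1)
a, c, q, r, n = 0.9, 1.0, 0.1, 0.5, 200      # q, r: process / observation noise variances

# Simulate the model.
x = np.zeros(n)
y = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))
    y[k] = c * x[k] + rng.normal(0.0, np.sqrt(r))

# Forward message passing: propagate through the state-transition node,
# then combine with the observation at each time step.
m, V = 0.0, 1.0                              # prior message on x_0 (assumed)
for k in range(1, n):
    m_pred, V_pred = a * m, a * a * V + q    # message through x_k = a*x_{k-1} + w_k
    K = V_pred * c / (c * c * V_pred + r)    # Kalman gain
    m = m_pred + K * (y[k] - c * m_pred)     # filtered mean given y_1..k
    V = (1.0 - K * c) * V_pred               # filtered variance

print(f"final filtered estimate: {m:.3f}, true state: {x[-1]:.3f}")
```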


European Transactions on Telecommunications | 1995

Codes and iterative decoding on general graphs

Niclas Wiberg; Hans-Andrea Loeliger; Ralf Kotter

A general framework, based on ideas of Tanner, for the description of codes and iterative decoding (“turbo coding”) is developed. Just as trellis-based code descriptions are naturally matched to Viterbi decoding, code descriptions based on Tanner graphs (which may be viewed as generalized trellises) are naturally matched to iterative decoding. Two basic iterative decoding algorithms (which are versions of the algorithms of Berrou et al. and of Hagenauer, respectively) are shown to be natural generalizations of the forward-backward algorithm (Bahl et al.) and the Viterbi algorithm, respectively, to arbitrary Tanner graphs. The careful derivation of these algorithms clarifies, in particular, which a priori probabilities are admissible and how they are properly dealt with. For cycle codes (a class of binary linear block codes), a complete characterization is given of the error patterns that are corrected by the generalized Viterbi algorithm after infinitely many iterations.
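
To make iterative decoding on a Tanner graph concrete, here is a minimal sketch of log-likelihood-ratio sum-product decoding for the (7,4) Hamming code over a BPSK/AWGN channel; the code, channel parameters, and fixed iteration count are toy choices for illustration, not taken from the paper:

```python
# Iterative sum-product decoding on the Tanner graph of the (7,4) Hamming
# code, in the LLR domain. The all-zeros codeword is transmitted over an
# assumed BPSK/AWGN channel (bit 0 -> +1).

import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
m_checks, n_vars = H.shape

rng = np.random.default_rng(2)
sigma = 0.5
y = 1.0 + rng.normal(0.0, sigma, n_vars)     # received values, all-zeros codeword sent
L_ch = 2.0 * y / sigma**2                    # channel LLRs (positive favors bit 0)

# Messages indexed by (check, variable); only entries with H[c, v] = 1 are used.
msg_vc = H * L_ch                            # variable-to-check, initialized to channel LLRs
msg_cv = np.zeros_like(msg_vc, dtype=float)

for _ in range(10):                          # fixed number of iterations
    # Check-to-variable update: tanh rule over the other neighboring variables.
    for c in range(m_checks):
        vs = np.flatnonzero(H[c])
        t = np.tanh(msg_vc[c, vs] / 2.0)
        for i, v in enumerate(vs):
            prod = np.prod(np.delete(t, i))
            msg_cv[c, v] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    # Variable-to-check update: channel LLR plus the other incoming check messages.
    for v in range(n_vars):
        cs = np.flatnonzero(H[:, v])
        total = L_ch[v] + msg_cv[cs, v].sum()
        for c in cs:
            msg_vc[c, v] = total - msg_cv[c, v]

L_post = L_ch + msg_cv.sum(axis=0)           # a-posteriori LLRs
decoded = (L_post < 0).astype(int)           # decide bit 1 where the LLR is negative
print("decoded bits:", decoded)              # compare with the transmitted all-zeros codeword
```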


international conference on communications | 2001

On the information rate of binary-input channels with memory

Dieter-Michael Arnold; Hans-Andrea Loeliger

The entropy rate of a finite-state hidden Markov model can be estimated by forward sum-product trellis processing (i.e., the forward recursion of the Baum-Welch/BCJR algorithm) of simulated model output data. This can be used to compute information rates of binary-input AWGN channels with memory.


international symposium on information theory | 1998

Probability propagation and decoding in analog VLSI

Hans-Andrea Loeliger; Felix Lustenberger; Markus Helfenstein; Felix Tarköy

The sum-product algorithm (belief/probability propagation) can be naturally mapped into analog transistor circuits. These circuits enable the construction of analog-VLSI decoders for turbo codes, low-density parity-check codes, and similar codes.


IEEE Transactions on Information Theory | 1991

Signal sets matched to groups

Hans-Andrea Loeliger

Recently, linear codes over Z_M (the ring of integers mod M) have been presented that are matched to M-ary phase modulation. The general problem of matching signal sets to generalized linear algebraic codes is addressed based on these codes. A definition is given for the notion of matching. It is shown that any signal set in N-dimensional Euclidean space that is matched to an abstract group is essentially what D. Slepian (1968) called a group code for the Gaussian channel. If the group is commutative, this further implies that any such signal set is equivalent to coded phase modulation with linear codes over Z_M. Some further results on such signal sets are presented, and the signal sets matched to noncommutative groups and the linear codes over such groups are discussed.
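
A small numerical illustration (not from the paper) of the matching notion in the commutative case: the M-PSK signal set is matched to Z_M because the Euclidean distance between two signal points depends only on the difference of their indices mod M, which is what allows linear codes over Z_M to be mapped componentwise to coded phase modulation:

```python
# Check the matching property of 8-PSK with respect to the group Z_8:
# the distance |s(i) - s(j)| depends only on (i - j) mod 8.

import numpy as np

M = 8
s = np.exp(2j * np.pi * np.arange(M) / M)    # 8-PSK points on the unit circle

for i in range(M):
    for j in range(M):
        d_ij = abs(s[i] - s[j])              # distance between s(i) and s(j)
        d_diff = abs(s[(i - j) % M] - s[0])  # distance determined by i - j mod M alone
        assert np.isclose(d_ij, d_diff)

print("8-PSK distances depend only on the index difference mod", M)
```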


international symposium on information theory | 2008

A Generalization of the Blahut–Arimoto Algorithm to Finite-State Channels

Pascal O. Vontobel; Aleksandar Kavcic; Dieter-Michael Arnold; Hans-Andrea Loeliger

The classical Blahut-Arimoto algorithm (BAA) is a well-known algorithm that optimizes a discrete memoryless source (DMS) at the input of a discrete memoryless channel (DMC) in order to maximize the mutual information between channel input and output. This paper considers the problem of optimizing finite-state machine sources (FSMSs) at the input of finite-state machine channels (FSMCs) in order to maximize the mutual information rate between channel input and output. Our main result is an algorithm that efficiently solves this problem numerically; thus, we call the proposed procedure the generalized BAA. It includes as special cases not only the classical BAA but also an algorithm that solves the problem of finding the capacity-achieving input distribution for finite-state channels with no noise. While we present theorems that characterize the local behavior of the generalized BAA, there are still open questions concerning its global behavior; these open questions are addressed by some conjectures at the end of the paper. Apart from these algorithmic issues, our results lead to insights regarding the local conditions that the information-rate-maximizing FSMSs fulfill; these observations naturally generalize the well-known Kuhn-Tucker conditions that are fulfilled by capacity-achieving DMSs at the input of DMCs.
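
For reference, here is a minimal sketch of the classical Blahut-Arimoto algorithm for a discrete memoryless channel, i.e., the special case that the paper generalizes to finite-state sources and channels; the binary symmetric channel below and the fixed iteration count are assumed toy choices:

```python
# Classical Blahut-Arimoto iteration for a DMC (toy example: a binary
# symmetric channel with crossover probability 0.1). The input distribution
# is updated multiplicatively until it approaches the capacity-achieving one.

import numpy as np

W = np.array([[0.9, 0.1],                    # W[x, y] = P(y | x)
              [0.1, 0.9]])
p = np.array([0.3, 0.7])                     # arbitrary initial input distribution

for _ in range(200):
    q = p @ W                                # induced output distribution q(y)
    # D[x] = exp( KL divergence of W(.|x) from q ), natural logs used consistently
    D = np.exp(np.sum(W * np.log(W / q), axis=1))
    p = p * D / np.sum(p * D)                # multiplicative Blahut-Arimoto update

q = p @ W
capacity = np.sum(p[:, None] * W * np.log2(W / q))
print(f"optimal input distribution: {p}, capacity: {capacity:.4f} bits/use")
```

For this symmetric channel the iteration converges to the uniform input distribution, and the computed value agrees with the known capacity 1 - h2(0.1) of roughly 0.531 bits per use.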


Linear Algebra and its Applications | 1994

Minimality and observability of group systems

Hans-Andrea Loeliger; G. David Forney; Thomas Mittelholzer; Mitchell D. Trott

Group systems are a generalization of Willems-type linear systems that are useful in error control coding. It is shown that the basic ideas of Willems's treatment of linear systems are easily generalized to linear systems over arbitrary rings and to group systems. The interplay between systems (behaviors) and trellises (evolution laws) is discussed with respect to completeness, minimality, controllability, and observability. It is pointed out that, for trellises of group systems and Willems-type linear systems, minimality is essentially the same as observability. The development is universal-algebraic in nature and holds unconditionally for linear systems over the real numbers.


IEEE Communications Magazine | 1999

Decoding in analog VLSI

Hans-Andrea Loeliger; Felix Tarköy; Felix Lustenberger; Markus Helfenstein

The iterative decoding of state-of-the-art error correcting codes such as turbo codes is computationally demanding. It is argued that analog implementations of such decoders can be much more efficient than digital implementations. This article gives a tutorial introduction to research on this topic. It is estimated that analog decoders can outperform digital decoders by two orders of magnitude in speed and/or power consumption.

Collaboration


Dive into Hans-Andrea Loeliger's collaborations.

Top Co-Authors

Justin Dauwels
Nanyang Technological University

Felix Lustenberger
École Polytechnique Fédérale de Lausanne

Felix Tarköy
École Polytechnique Fédérale de Lausanne

Pascal O. Vontobel
The Chinese University of Hong Kong

Matthias Frey
Tokyo Institute of Technology