Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Gerhard Kramer is active.

Publication


Featured research published by Gerhard Kramer.


Foundations and Trends in Communications and Information Theory | 2007

Topics in Multi-User Information Theory

Gerhard Kramer

This survey reviews fundamental concepts of multi-user information theory. Starting with typical sequences, the survey builds up knowledge on random coding, binning, superposition coding, and capacity converses by introducing progressively more sophisticated tools for a selection of source and channel models. The problems addressed include: Source Coding; Rate-Distortion and Multiple Descriptions; Capacity-Cost; The Slepian–Wolf Problem; The Wyner–Ziv Problem; The Gelfand–Pinsker Problem; The Broadcast Channel; The Multiaccess Channel; The Relay Channel; The Multiple Relay Channel; and The Multiaccess Channel with Generalized Feedback. The survey also includes a review of basic probability and information theory.
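
The survey's starting point, typical sequences, can be illustrated with a small numerical sketch (a toy example of my own, not taken from the survey): for an i.i.d. Bernoulli(p) source, almost all probability mass concentrates on strings whose per-symbol surprisal is close to the entropy H(p), and there are roughly 2^(nH(p)) such strings.

```python
import math

def h2(p):
    # Binary entropy in bits; h2(0) = h2(1) = 0.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def typical_mass(n, p, eps):
    # Probability that an i.i.d. Bernoulli(p) string of length n is weakly
    # typical, i.e. its per-symbol surprisal -log2(P(x^n))/n lies within
    # eps of H(p). For Bernoulli strings this depends only on the number
    # of ones k, so we sum over k instead of over all 2^n strings.
    mass, count = 0.0, 0
    for k in range(n + 1):
        prob_seq = p ** k * (1 - p) ** (n - k)  # probability of one string with k ones
        surprisal = -math.log2(prob_seq) / n
        if abs(surprisal - h2(p)) <= eps:
            mass += math.comb(n, k) * prob_seq
            count += math.comb(n, k)
    return mass, count

for n in (25, 100, 400):
    mass, count = typical_mass(n, 0.3, 0.1)
    print(f"n={n:3d}: typical mass {mass:.3f}, "
          f"~2^{math.log2(count):.1f} typical strings (nH = {n * h2(0.3):.1f})")
```

As n grows, the typical mass approaches 1 while the typical set stays exponentially smaller than 2^n, which is the concentration that random coding and binning arguments exploit.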


Asilomar Conference on Signals, Systems and Computers | 2005

The Capacity Region of the Strong Interference Channel with Common Information

Ivana Maric; Roy D. Yates; Gerhard Kramer

Transmitter cooperation enabled by dedicated links allows for a partial message exchange between encoders. After cooperation, each encoder knows a common message partially describing the two original messages, and its own private message containing the information that the encoders were not able to exchange. We consider the interference channel with both private and common messages at the encoders. A private message at an encoder is intended for a corresponding decoder whereas the common message is to be received at both decoders. We derive conditions under which the capacity region of this channel coincides with the capacity region of the channel in which both private messages are required at both receivers. We show that the obtained conditions are equivalent to the strong interference conditions determined by Costa and El Gamal for the interference channel with independent messages.


IEEE Transactions on Information Theory | 2016

Short Message Noisy Network Coding With a Decode–Forward Option

Jie Hou; Gerhard Kramer

Short message noisy network coding (SNNC) differs from long message noisy network coding (LNNC) in that one transmits many short messages in blocks rather than using one long message with repetitive encoding. Two properties of SNNC are developed. First, SNNC with backward decoding achieves the same rates as SNNC with offset encoding and sliding window decoding for memoryless networks where each node transmits a multicast message. The rates are the same as LNNC with joint decoding. Second, SNNC enables early decoding if the channel quality happens to be good. This leads to mixed strategies that unify the advantages of decode-forward and noisy network coding.


IEEE Transactions on Communications | 2012

Low-Precision A/D Conversion for Maximum Information Rate in Channels with Memory

Georg Zeitler; Andrew C. Singer; Gerhard Kramer

Analog-to-digital converters that maximize the information rate between the quantized channel output sequence and the channel input sequence are designed for discrete-time channels with intersymbol-interference, additive noise, and for independent and identically distributed signaling. Optimized scalar quantizers with Λ regions achieve the full information rate of log2(Λ) bits per channel use with a transmit alphabet of size Λ at infinite signal-to-noise ratio; these quantizers, however, are not necessarily uniform quantizers. Low-precision scalar and two-dimensional analog-to-digital converters are designed at finite signal-to-noise ratio, and an upper bound on the information rate is derived. Simulation results demonstrate the effectiveness of the designed quantizers over conventional quantizers. The advantage of the new quantizers is further emphasized by an example of a channel for which a slicer (with a single threshold at zero) and a carefully optimized channel input with memory fail to achieve a rate of one bit per channel use at high signal-to-noise ratio, in contrast to memoryless binary signaling and an optimized quantizer.
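
The closing comparison involving a slicer can be sketched for the memoryless baseline (a textbook calculation, not the paper's optimized quantizers): with BPSK over AWGN, a single threshold at zero turns the channel into a binary symmetric channel with crossover probability Q(sqrt(SNR)), so the information rate 1 - H2(Q(sqrt(SNR))) approaches one bit per channel use at high SNR.

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def h2(p):
    # Binary entropy in bits; h2(0) = h2(1) = 0.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def slicer_rate(snr_db):
    # BPSK (+/- sqrt(P)) over AWGN with a single threshold at zero is a
    # binary symmetric channel with crossover Q(sqrt(SNR)); its
    # information rate is 1 - h2(Q(sqrt(SNR))) bits per channel use.
    snr = 10 ** (snr_db / 10)
    return 1.0 - h2(q_func(math.sqrt(snr)))

for snr_db in (0, 5, 10, 15, 20):
    print(f"SNR = {snr_db:2d} dB -> rate {slicer_rate(snr_db):.4f} bits/use")
```

The paper's point is that this memoryless picture can break down for channels with memory, where a zero-threshold slicer may fail to reach one bit per channel use even at high SNR.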


Information Theory Workshop | 2011

On message lengths for noisy network coding

Gerhard Kramer; Jie Hou

Quantize-map-forward (QMF) and noisy network coding (NNC) differ primarily from compress-forward relaying in that relays do not hash their quantization bits. Two further differences are that source nodes use “long”-message repetitive encoding and destination nodes use simultaneous joint decoding. Recent work has shown that classic “short”-message encoding combined with backward decoding achieves the same rates as QMF and NNC. A simplified proof of this result is given.


Information Theory Workshop | 2015

Upper bound on the capacity of a cascade of nonlinear and noisy channels

Gerhard Kramer; Mansoor I. Yousefi; Frank R. Kschischang

An upper bound on the capacity of a cascade of nonlinear and noisy channels is presented. The cascade mimics the split-step Fourier method for computing waveform propagation governed by the stochastic generalized nonlinear Schrödinger equation. It is shown that the spectral efficiency of the cascade is at most log(1+SNR), where SNR is the receiver signal-to-noise ratio. The results may be applied to optical fiber channels. However, the definition of bandwidth is subtle and leaves open interpretations of the bound. Some of these interpretations are discussed.


Global Communications Conference | 2013

Multi-sample receivers increase information rates for Wiener phase noise channels

Hassan Ghozlan; Gerhard Kramer

A waveform channel is considered where the transmitted signal is corrupted by Wiener phase noise and additive white Gaussian noise (AWGN). A discrete-time channel model is introduced that is based on a multi-sample receiver. Tight lower bounds on the information rates achieved by the multi-sample receiver are computed by means of numerical simulations. The results show that oversampling at the receiver is beneficial for both strong and weak phase noise at high signal-to-noise ratios. The results are compared with results obtained when using other discrete-time models.


International Symposium on Information Theory | 2012

Short message noisy network coding for multiple sources

Jie Hou; Gerhard Kramer

Short message noisy network coding (SNNC) transmits independent short messages in blocks rather than using long message repetitive encoding. SNNC is shown to achieve the same rates as noisy network coding (NNC) for discrete memoryless networks where each node transmits a multicast message. One advantage of SNNC is that backward decoding may be used which simplifies the analysis and understanding of the achievability proof. The analysis reveals that each decoder may ignore certain other nodes rather than including their message in the decoding procedure. Additionally, SNNC enables early decoding at nodes if the channel quality happens to be good.


IEEE Transactions on Information Theory | 2017

Capacity Bounds for Discrete-Time, Amplitude-Constrained, Additive White Gaussian Noise Channels

Andrew Thangaraj; Gerhard Kramer; Georg Böcherer

The capacity-achieving input distribution of the discrete-time, additive white Gaussian noise (AWGN) channel with an amplitude constraint is discrete and seems difficult to characterize explicitly. A dual capacity expression is used to derive analytic capacity upper bounds for scalar and vector AWGN channels. The scalar bound improves on McKellips’ bound and is within 0.1 bit of capacity for all signal-to-noise ratios (SNRs). The 2-D bound is within 0.15 bits of capacity provably up to 4.5 dB; numerical evidence suggests a similar gap for all SNRs. As the SNR tends to infinity, these bounds are accurate and match with a volume-based lower bound. For the 2-D complex case, an analytic lower bound is derived by using a concentric constellation and is shown to be within 1 bit of capacity.
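
The flavor of such bounds can be sketched with two elementary ones (a standard sandwich under the stated assumptions, not the paper's dual-expression bounds): relaxing the peak constraint |X| <= A to the average-power constraint E[X^2] <= A^2 gives an upper bound, and the entropy power inequality with a uniform input on [-A, A] gives a lower bound; the high-SNR gap between them is about 0.5*log2(pi*e/2), roughly 1.05 bits.

```python
import math

def awgn_upper(a, sigma):
    # Peak constraint |X| <= A implies E[X^2] <= A^2, so the
    # average-power AWGN capacity upper-bounds the peak-limited one.
    return 0.5 * math.log2(1 + (a / sigma) ** 2)

def epi_lower(a, sigma):
    # Entropy-power-inequality lower bound with X ~ Uniform[-A, A]:
    # I(X; X+N) >= 0.5*log2(1 + 2^(2h(X)) / (2*pi*e*sigma^2)),
    # where the differential entropy is h(X) = log2(2A) bits.
    return 0.5 * math.log2(1 + (2 * a) ** 2 / (2 * math.pi * math.e * sigma ** 2))

for a in (1.0, 3.0, 10.0):
    lo, hi = epi_lower(a, 1.0), awgn_upper(a, 1.0)
    print(f"A = {a:5.1f}: {lo:.3f} <= C <= {hi:.3f} bits/use")
```

The paper's contribution is to shrink this roughly 1-bit gap to within 0.1 bit of capacity in the scalar case via a dual capacity expression.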


arXiv: Information Theory | 2015

Upper bound on the capacity of the nonlinear Schrödinger channel

Mansoor I. Yousefi; Gerhard Kramer; Frank R. Kschischang

It is shown that the capacity of the channel modeled by (a discretized version of) the stochastic nonlinear Schrödinger (NLS) equation is upper-bounded by log(1 + SNR) with SNR = P₀/σ²(z), where P₀ is the average input signal power and σ²(z) is the total noise power up to distance z. The result is a consequence of the fact that the deterministic NLS equation is a Hamiltonian energy-preserving dynamical system.
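
A minimal numeric sketch of the stated bound follows; the linear noise-accumulation profile σ²(z) = N₀·z used below is an illustrative assumption, not taken from the paper.

```python
import math

def nls_capacity_bound(p0, sigma2_z):
    # Upper bound from the abstract: C <= log2(1 + SNR) bits per symbol,
    # with SNR = P0 / sigma^2(z) (average input power over the total
    # noise power accumulated up to distance z).
    return math.log2(1 + p0 / sigma2_z)

# Illustrative (assumed) linear noise accumulation sigma^2(z) = N0 * z:
n0 = 1e-3
for z in (1, 10, 100):
    print(f"z = {z:4d}: bound {nls_capacity_bound(1.0, n0 * z):.2f} bits")
```

Under this assumed profile the bound decays with distance simply because σ²(z) grows, matching the intuition that accumulated noise lowers the receiver SNR.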

Collaboration


Dive into Gerhard Kramer's collaborations.

Top Co-Authors

Haim H. Permuter

Ben-Gurion University of the Negev

Ziv Goldfeld

Ben-Gurion University of the Negev

Erik Agrell

Chalmers University of Technology

Johnny Karout

Chalmers University of Technology
