Featured Research

Information Theory

Dictionary-Sparse Recovery From Heavy-Tailed Measurements

The recovery of signals that are sparse not in a basis, but rather with respect to an over-complete dictionary, is one of the most flexible settings in the field of compressed sensing, with numerous applications. As in the standard compressed sensing setting, it is possible that the signal can be reconstructed efficiently from few linear measurements, for example by the so-called ℓ1-synthesis method. However, it has been less well understood which measurement matrices provably work for this setting. Whereas in the standard setting it has been shown that even certain heavy-tailed measurement matrices can be used in the same sample complexity regime as Gaussian matrices, comparable results are only available for the restrictive class of sub-Gaussian measurement vectors as far as the recovery of dictionary-sparse signals via ℓ1-synthesis is concerned. In this work, we fill this gap and establish optimal guarantees for the recovery of vectors that are (approximately) sparse with respect to a dictionary via the ℓ1-synthesis method from linear, potentially noisy measurements, for a large class of random measurement matrices. In particular, we show that random measurements that fulfill only a small-ball assumption and a weak moment assumption, such as random vectors with i.i.d. Student-t entries with a logarithmic number of degrees of freedom, lead to guarantees comparable to those for (sub-)Gaussian measurements. Our results apply to a large class of both random and deterministic dictionaries. As a corollary of our results, we also obtain a slight improvement on the weakest assumption on a measurement matrix with i.i.d. rows sufficient for uniform recovery in standard compressed sensing, improving on results by Mendelson and Lecué, and by Dirksen, Lecué, and Rauhut.

Read more
Information Theory

Differential Entropy Rate Characterisations of Long Range Dependent Processes

A quantity of interest for characterising continuous-valued stochastic processes is the differential entropy rate. The rate of convergence of many properties of long range dependent (LRD) processes is slower than might be expected based on the intuition for conventional processes, e.g. Markov processes. Is this also true of the entropy rate? In this paper we consider the properties of the differential entropy rate of stochastic processes whose autocorrelation function decays as a power law. We show that power law decaying processes with similar autocorrelation and spectral density functions, Fractional Gaussian Noise and ARFIMA(0,d,0), have different entropic properties, particularly for negatively correlated parameterisations. We then provide an equivalence between the mutual information between past and future and the differential excess entropy for stationary Gaussian processes, showing that the finiteness of this quantity marks the boundary between long and short range dependence. Finally, we analyse the convergence of the conditional entropy to the differential entropy rate and show that for short range dependent processes the rate of convergence is of the order O(n^{-1}), but it is slower for LRD processes and depends on the Hurst parameter.
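The convergence in question can be observed numerically for a stationary Gaussian process: the conditional differential entropy h(X_{m+1} | X_1..X_m) equals 0.5·log(2πe·σ_m²), where σ_m² is the one-step prediction error variance obtained from the Levinson-Durbin recursion on the autocovariances. A minimal sketch for Fractional Gaussian Noise (the Hurst values 0.5 and 0.8 are illustrative choices):

```python
import numpy as np

def fgn_autocov(H, n):
    """Autocovariances gamma(0..n-1) of unit-variance fractional Gaussian noise."""
    k = np.arange(n, dtype=float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def innovation_variances(gamma):
    """Levinson-Durbin: one-step prediction error variances sigma_m^2, m=0..n-1."""
    sig, phi, sigs = gamma[0], np.zeros(0), [gamma[0]]
    for m in range(1, len(gamma)):
        k = (gamma[m] - phi @ gamma[m - 1:0:-1]) / sig   # reflection coefficient
        phi = np.concatenate([phi - k * phi[::-1], [k]])
        sig *= (1 - k ** 2)
        sigs.append(sig)
    return np.array(sigs)

n = 60
sig_srd = innovation_variances(fgn_autocov(0.5, n))  # H=0.5: fGn is white noise
sig_lrd = innovation_variances(fgn_autocov(0.8, n))  # H=0.8: long range dependent

# Conditional differential entropy h(X_{m+1} | X_1..X_m) = 0.5*log(2*pi*e*sigma_m^2);
# its limit as m grows is the differential entropy rate.
h_lrd = 0.5 * np.log(2 * np.pi * np.e * sig_lrd)
print(h_lrd[:5], h_lrd[-1])
```

For H = 0.5 the reflection coefficients vanish and the conditional entropy is constant, while for H = 0.8 the innovation variances keep decreasing, consistent with the slower convergence the abstract describes.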

Read more
Information Theory

Differential Privacy for Binary Functions via Randomized Graph Colorings

We present a framework for designing differentially private (DP) mechanisms for binary functions via a graph representation of datasets. Datasets are nodes in the graph and any two neighboring datasets are connected by an edge. The true binary function we want to approximate assigns a value (or true color) to a dataset. Randomized DP mechanisms are then equivalent to randomized colorings of the graph. A key notion we use is that of the boundary of the graph: any two neighboring datasets assigned a different true color belong to the boundary. Under this framework, we show that fixing the mechanism behavior at the boundary induces a unique optimal mechanism. Moreover, if the mechanism is to have a homogeneous behavior at the boundary, we present a closed-form expression for the optimal mechanism, obtained by means of a pullback operation on the optimal mechanism of a line graph. For balanced mechanisms, not favoring one binary value over the other, the optimal (ϵ,δ)-DP mechanism takes a particularly simple form, depending only on the minimum distance to the boundary, on ϵ, and on δ.
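The (ϵ,δ)-DP condition for a binary mechanism on such a graph can be checked numerically. The sketch below is not the paper's optimal mechanism: it places datasets on a path graph, picks a hypothetical balanced mechanism whose probability of outputting color 1 is logistic in the signed distance to the boundary, and verifies the DP inequality over all neighboring pairs.

```python
import numpy as np

eps = 1.0
nodes = np.arange(10)                 # datasets on a path (line) graph
boundary = 4.5                        # boundary sits between nodes 4 and 5
true_color = (nodes > boundary).astype(int)

# Hypothetical balanced mechanism: Pr[M(x) = 1] is logistic in the signed
# distance to the boundary (a candidate profile, not the paper's closed form).
d = nodes - boundary
p1 = 1.0 / (1.0 + np.exp(-eps * d))
P = np.stack([1 - p1, p1], axis=1)    # P[x, y] = Pr[M(x) = y]

def is_dp(P, eps, delta=0.0):
    """Check (eps, delta)-DP over all neighboring dataset pairs on the path."""
    for x in range(len(P) - 1):
        for y in (0, 1):
            if P[x, y] > np.exp(eps) * P[x + 1, y] + delta + 1e-12:
                return False
            if P[x + 1, y] > np.exp(eps) * P[x, y] + delta + 1e-12:
                return False
    return True

print(is_dp(P, eps))        # the logistic profile satisfies pure eps-DP
print(is_dp(P, eps / 2))    # the same profile fails under a tighter budget
```

Note that the probability of reporting the true color grows with the distance to the boundary, which matches the qualitative structure the abstract describes for balanced mechanisms.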

Read more
Information Theory

Distance Enumerators for Number-Theoretic Codes

The number-theoretic codes are a class of codes defined by single or multiple congruences and are mainly used for correcting insertion and deletion errors. Since number-theoretic codes are generally non-linear, analysis methods for such codes are not well established. The distance enumerator of a code is a univariate polynomial whose i-th coefficient gives the number of pairs of codewords at distance i. The distance enumerator yields the maximum likelihood decoding error probability of the code. This paper presents an identity for the distance enumerators of number-theoretic codes. Moreover, as an example, we derive the Hamming distance enumerator for the Varshamov-Tenengolts (VT) codes.
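For small lengths the Hamming distance enumerator of a VT code can be computed by brute force. The sketch below builds VT_a(n) from its defining congruence and counts ordered codeword pairs at each distance (counting ordered pairs, with the i = 0 coefficient giving the code size, is a convention choice here):

```python
from itertools import product

def vt_code(n, a=0):
    """Varshamov-Tenengolts code VT_a(n): binary words x with sum(i*x_i) = a (mod n+1)."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == a]

def hamming_distance_enumerator(code):
    """c[i] = number of ordered codeword pairs at Hamming distance i."""
    n = len(code[0])
    c = [0] * (n + 1)
    for x in code:
        for y in code:
            c[sum(a != b for a, b in zip(x, y))] += 1
    return c

C = vt_code(5)
enum = hamming_distance_enumerator(C)
print(len(C), enum)
```

For n = 5 the code has 6 codewords, and the coefficient at distance 1 is zero: flipping a single bit in position i changes the checksum by i mod 6, which is never 0, so no two codewords are at Hamming distance 1.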

Read more
Information Theory

Distributed Arithmetic Coding for Sources with Hidden Markov Correlation

Distributed arithmetic coding (DAC) has been shown to be effective for Slepian-Wolf coding, especially for short data blocks. In this letter, we propose to use DAC to compress memory-correlated sources. More specifically, the correlation between the sources is modeled as a hidden Markov process. Experimental results show that the performance is close to the theoretical Slepian-Wolf limit.
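The correlation model can be sketched without the coder itself: below, the side information is Y = X XOR Z, where the noise Z is emitted by a two-state Markov chain (a good state that rarely flips bits and a bad state that flips often). All transition and emission probabilities are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

X = rng.integers(0, 2, n)          # source: i.i.d. uniform bits

# Correlation noise Z from a two-state Markov chain (the hidden memory):
# state 0 = "good" (Z=1 w.p. 0.01), state 1 = "bad" (Z=1 w.p. 0.4).
p_stay, p_flip_bad, p_flip_good = 0.95, 0.4, 0.01
state = 0
Z = np.empty(n, dtype=int)
for i in range(n):
    Z[i] = rng.random() < (p_flip_bad if state == 1 else p_flip_good)
    if rng.random() > p_stay:      # occasionally switch state
        state ^= 1

Y = X ^ Z                          # memory-correlated side information

# Since X is uniform and independent of Z, H(X|Y) equals the entropy rate of Z,
# which the Markov memory makes strictly smaller than the i.i.d. value h(q).
q = Z.mean()
h = lambda p: -p * np.log2(p) - (1 - p) * np.log2(1 - p)
print(f"crossover rate {q:.3f}, i.i.d. upper bound h(q) = {h(q):.3f} bits")
```

A Slepian-Wolf decoder that exploits the hidden Markov structure can therefore operate below the rate h(q) that a memoryless correlation model would require.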

Read more
Information Theory

Distributed Conditional Generative Adversarial Networks (GANs) for Data-Driven Millimeter Wave Communications in UAV Networks

In this paper, a novel framework is proposed to perform data-driven air-to-ground (A2G) channel estimation for millimeter wave (mmWave) communications in an unmanned aerial vehicle (UAV) wireless network. First, an effective channel estimation approach is developed to collect mmWave channel information, allowing each UAV to train a stand-alone channel model via a conditional generative adversarial network (CGAN) along each beamforming direction. Next, in order to expand the application scenarios of the trained channel model into a broader spatial-temporal domain, a cooperative framework, based on a distributed CGAN architecture, is developed, allowing each UAV to collaboratively learn the mmWave channel distribution in a fully-distributed manner. To guarantee an efficient learning process, necessary and sufficient conditions for the optimal UAV network topology that maximizes the learning rate for cooperative channel modeling are derived, and the optimal CGAN learning solution per UAV is subsequently characterized, based on the distributed network structure. Simulation results show that the proposed distributed CGAN approach is robust to the local training error at each UAV. Meanwhile, a larger airborne network size requires more communication resources per UAV to guarantee an efficient learning rate. The results also show that, compared with a stand-alone CGAN without information sharing and two other distributed schemes, namely a multi-discriminator CGAN and a federated CGAN method, the proposed distributed CGAN approach yields a higher modeling accuracy while learning the environment, and it achieves a larger average data rate in the online performance of UAV downlink mmWave communications.

Read more
Information Theory

Distributed Generative Adversarial Networks for mmWave Channel Modeling in Wireless UAV Networks

In this paper, a novel framework is proposed to enable air-to-ground channel modeling over millimeter wave (mmWave) frequencies in an unmanned aerial vehicle (UAV) wireless network. First, an effective channel estimation approach is developed to collect mmWave channel information, allowing each UAV to train a local channel model via a generative adversarial network (GAN). Next, in order to share the channel information between UAVs in a privacy-preserving manner, a cooperative framework, based on a distributed GAN architecture, is developed to enable each UAV to learn the mmWave channel distribution from the entire dataset in a fully distributed manner. The necessary and sufficient conditions for the optimal network structure that maximizes the learning rate for information sharing in the distributed network are derived. Simulation results show that the learning rate of the proposed GAN approach increases when more generated channel samples are shared at each learning iteration, but decreases given more UAVs in the network. The results also show that the proposed GAN method yields a higher learning accuracy, compared with a stand-alone GAN, and improves the average rate for UAV downlink communications by over 10%, compared with a baseline real-time channel estimation scheme.

Read more
Information Theory

Distributed Quantum Faithful Simulation and Function Computation Using Algebraic Structured Measurements

In this work, we consider the task of faithfully simulating a distributed quantum measurement and function computation, and demonstrate a new achievable information-theoretic rate-region. For this, we develop the technique of randomly generating structured POVMs using algebraic codes. To overcome the challenges caused by algebraic construction, we develop a Pruning Trace inequality which is a tighter version of the known operator Markov inequality. In addition, we develop a covering lemma which is independent of the operator Chernoff inequality so as to be applicable for pairwise-independent codewords. We demonstrate rate gains for this problem over traditional coding schemes. Combining these techniques, we provide a multi-party distributed faithful simulation and function computation protocol.

Read more
Information Theory

Distributed Source Coding with Encryption Using Correlated Keys

We pose and investigate the distributed secure source coding problem based on a common key cryptosystem. This cryptosystem includes the secrecy amplification problem for distributed encrypted sources with correlated keys using post-encryption compression, which was posed and investigated by Santoso and Oohama. In this paper we propose a new security criterion which is in general stricter than the commonly used criterion based on an upper bound on the mutual information between the plaintext and the ciphertext. Under this criterion, we establish the necessary and sufficient condition for secure transmission of correlated sources.
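The commonly used criterion mentioned above bounds I(plaintext; ciphertext). As a toy illustration of that baseline (not the paper's stricter criterion), the sketch below computes this mutual information for a one-bit XOR cipher: a uniform key gives exactly zero leakage, while a biased key does not.

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint pmf given as a 2-D array."""
    p_xy = np.asarray(p_xy, dtype=float)
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

def xor_cipher_joint(p_x1, p_k1):
    """Joint pmf of (plaintext X, ciphertext C = X xor K), with X independent of K."""
    p = np.zeros((2, 2))
    for x in (0, 1):
        for k in (0, 1):
            p[x, x ^ k] += (p_x1 if x else 1 - p_x1) * (p_k1 if k else 1 - p_k1)
    return p

i_uniform = mutual_information(xor_cipher_joint(0.3, 0.5))  # one-time-pad key
i_biased = mutual_information(xor_cipher_joint(0.3, 0.2))   # biased key leaks
print(i_uniform, i_biased)
```

A criterion that only bounds this average quantity can still admit rare but large pointwise leakage, which is one motivation for adopting stricter notions of security.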

Read more
Information Theory

Distributed Spectrum and Power Allocation for D2D-U Networks: A Scheme based on NN and Federated Learning

In this paper, a Device-to-Device communication on unlicensed bands (D2D-U) enabled network is studied. To improve the spectrum efficiency (SE) on the unlicensed bands and fit their distributed structure, while ensuring fairness among D2D-U links and harmonious coexistence with WiFi networks, a distributed joint power and spectrum scheme is proposed. In particular, a parameter, termed the price, is defined; it is updated at each D2D-U pair by an online-trained neural network (NN) according to the channel state and traffic load. In addition, the parameters of the NN are updated in two ways, unsupervised self-iteration and federated learning, to guarantee fairness and harmonious coexistence. Then, a non-convex optimization problem with respect to spectrum and power is formulated and solved on each D2D-U link to maximize its own data rate. Numerical simulation results verify the effectiveness of the proposed scheme.
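The federated-learning step for the per-link NN parameters can be sketched as a FedAvg-style aggregation: each D2D-U pair contributes its locally updated weights, and a shared model is formed as a sample-count-weighted average. The weight vectors and sample counts below are stand-ins for the trained price-network parameters, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)
n_links, dim = 5, 8               # hypothetical: 5 D2D-U pairs, 8 NN weights each

# Local price-network weights after one round of on-device training
# (random stand-ins; the actual unsupervised self-iteration is omitted).
local_w = [rng.standard_normal(dim) for _ in range(n_links)]
samples = np.array([120, 80, 200, 150, 50])   # local data volume per link

# FedAvg: aggregate the local weights, weighted by each link's sample count.
global_w = np.average(np.stack(local_w), axis=0, weights=samples)

# Each link then resumes its local updates from the shared global model,
# which keeps the price updates consistent across links.
print(global_w.round(3))
```

Averaging weighted by data volume lets links with more traffic observations influence the shared model more, while every link still benefits from the pooled experience.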

Read more
