Featured Research

Information Theory

New Singly and Doubly Even Binary [72,36,12] Self-Dual Codes from M_2(R)G Group Matrix Rings

In this work, we present a number of generator matrices of the form [I_{kn} | τ_k(v)], where I_{kn} is the kn × kn identity matrix, v is an element in the group matrix ring M_2(R)G, R is a finite commutative Frobenius ring, and G is a finite group of order 18. We employ these generator matrices to search for binary [72,36,12] self-dual codes directly over the finite field F_2. As a result, we find 134 Type I and 1 Type II codes of this length whose weight enumerators have parameters not previously known in the literature. We tabulate all of our findings.
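
A rough illustration of this family of constructions: the sketch below assumes the plain double-circulant special case over F_2 for a cyclic group, not the paper's M_2(R)G group matrix ring construction, and builds a generator matrix [I_n | sigma(v)] from a group ring element v before checking the self-duality condition over F_2.

```python
# Minimal sketch, assuming the double-circulant special case: for a cyclic group of
# order n, sigma(v) is the circulant matrix of v in F_2[C_n], and [I_n | sigma(v)]
# generates a binary self-dual [2n, n] code exactly when G G^T = 0 over F_2.
import numpy as np

def circulant(first_row):
    """Circulant matrix whose i-th row is first_row cyclically shifted by i."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))])

def generates_self_dual(v):
    """Does [I_n | circulant(v)] generate a binary self-dual code of length 2n?"""
    n = len(v)
    G = np.hstack([np.eye(n, dtype=int), circulant(v)])
    return not ((G @ G.T) % 2).any()   # self-orthogonal + rank n  =>  self-dual

# v = (1, 1, 1, 0) yields a self-dual [8, 4, 4] code (equivalent to the extended Hamming code).
print(generates_self_dual(np.array([1, 1, 1, 0])))   # True
print(generates_self_dual(np.array([1, 1, 0, 0])))   # False
```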

Read more
Information Theory

New Type I Binary [72, 36, 12] Self-Dual Codes from Composite Matrices and R1 Lifts

In this work, we define three composite matrices derived from group rings. We employ these composite matrices to create generator matrices of the form [I_n | Ω(v)], where I_n is the identity matrix and Ω(v) is a composite matrix, and search for binary self-dual codes with parameters [36, 18, 6 or 8]. We next lift these codes over the ring R_1 = F_2 + uF_2 to obtain codes whose binary images are self-dual codes with parameters [72, 36, 12]. Many of these codes turn out to have weight enumerators with parameters that were not known in the literature before. In particular, we find 30 new Type I binary self-dual codes with parameters [72, 36, 12].
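
The lifting step relies on the binary images of codes over R_1. A minimal sketch of the Gray map commonly used for R_1 = F_2 + uF_2 (with u^2 = 0), which sends a + ub to the pair (b, a + b) and therefore doubles the length, is shown below; this is only the projection to binary images, not the paper's composite-matrix search, and the exact map convention is an assumption.

```python
# Sketch of the Gray map for R_1 = F_2 + uF_2 (u^2 = 0) commonly used in this line of
# work (the exact convention is an assumption): a + u*b  ->  (b, a + b), applied
# coordinate-wise, so a length-n code over R_1 has a binary image of length 2n.
def gray_image(codeword):
    """codeword: list of (a, b) pairs, each representing a + u*b over F_2 + uF_2."""
    left = [b for (a, b) in codeword]               # the 'b' parts
    right = [(a + b) % 2 for (a, b) in codeword]    # the 'a + b' parts
    return left + right

# Example: the all-(1 + u) word of length 4 maps to a binary word of length 8.
print(gray_image([(1, 1)] * 4))   # [1, 1, 1, 1, 0, 0, 0, 0]
```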

Read more
Information Theory

New Upper Bounds in the Hypothesis Testing Problem with Information Constraints

We consider a hypothesis testing problem in which part of the data cannot be observed. A helper observes the missing data and can send us a limited amount of information about it. What kind of limited information allows us to make the best statistical inference? In particular, what is the minimum amount of information sufficient to obtain the same results as if we observed all the data directly? We derive estimates for this minimum information, along with some related results.

Read more
Information Theory

New identities for the Shannon function and applications

We show how the Shannon entropy function H(p,q) is expressible as a linear combination of other Shannon entropy functions involving quotients of polynomials in p, q of degree n, for any given positive integer n. An application to cryptographic keys is presented.
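
As a reminder of the general shape such identities take, the snippet below numerically checks the classical grouping identity H(p, q, r) = H(p, q + r) + (q + r) H(q/(q + r), r/(q + r)), in which the arguments of the inner entropy are quotients; this is a textbook identity, not one of the paper's new ones.

```python
# Numerical check of the classical grouping identity for the Shannon entropy function
# (a textbook identity, used here only to illustrate the form of such identities).
from math import log2

def H(*probs):
    """Shannon entropy of a finite probability vector, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

p, q, r = 0.5, 0.3, 0.2
lhs = H(p, q, r)
rhs = H(p, q + r) + (q + r) * H(q / (q + r), r / (q + r))
print(round(lhs, 6), round(rhs, 6))   # both ~1.485475
```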

Read more
Information Theory

New upper bounds for (b,k)-hashing

For fixed integers b ≥ k, the problem of perfect (b,k)-hashing asks for the asymptotic growth of the largest subsets of {1, 2, …, b}^n such that for any k distinct elements in the set, there is a coordinate where they all differ. An important asymptotic upper bound for general b, k was derived by Fredman and Komlós in the '80s and improved for certain b ≥ k by Körner and Marton and by Arikan. Only very recently were better bounds derived for the general b, k case by Guruswami and Riazanov, while stronger results for small values of b = k were obtained by Arikan, by Dalai, Guruswami and Radhakrishnan, and by Costa and Dalai. In this paper, we both show how some of the latter results extend to b ≥ k and further strengthen the bounds for some specific small values of b and k. The method we use, which depends on the reduction of an optimization problem to a finite number of cases, shows that further results might be obtained by refined arguments at the expense of higher complexity.
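
The combinatorial property itself is easy to state in code; the brute-force checker below verifies whether a given set of words over a b-ary alphabet is perfectly k-hashed (the paper concerns asymptotic upper bounds on the size of such sets, not testing fixed ones).

```python
# Brute-force check of the perfect (b, k)-hashing property: every k distinct words in
# the set must have some coordinate in which all k symbols are pairwise different.
from itertools import combinations

def is_k_hashed(code, k):
    n = len(code[0])
    return all(
        any(len({word[i] for word in group}) == k for i in range(n))
        for group in combinations(code, k)
    )

# Example with b = 3, k = 3, n = 3: these four ternary words are perfectly 3-hashed.
C = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (0, 1, 2)]
print(is_k_hashed(C, 3))                   # True
print(is_k_hashed(C + [(0, 0, 1)], 3))     # False
```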

Read more
Information Theory

Noise Is Useful: Exploiting Data Diversity for Edge Intelligence

Edge intelligence requires fast access to distributed data samples generated by edge devices. The challenge is to use limited radio resources to acquire a massive number of data samples for training machine learning models at an edge server. In this article, we propose a new communication-efficient edge intelligence scheme in which the most useful data samples are selected to train the model. Here the usefulness, or value, of data samples is measured by data diversity, defined as the difference between data samples. We derive a closed-form expression for data diversity that combines data informativeness and channel quality. A joint data-and-channel diversity aware multiuser scheduling algorithm is then proposed. We find that noise is useful for enhancing data diversity under some conditions.
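
A hypothetical sketch of what a joint data-and-channel aware scheduling rule can look like is given below; the scoring rule (distance to already-collected samples weighted by channel gain) is an illustrative assumption and is not the closed-form diversity expression derived in the article.

```python
# Hypothetical scheduler sketch (not the article's metric): greedily pick the device
# whose sample differs most from what the edge server has already collected, weighted
# by its channel gain, so that both data diversity and channel quality matter.
import numpy as np

def schedule_next(samples, channel_gains, collected):
    """Return the index of the device to schedule next (illustrative rule only)."""
    scores = []
    for x, h in zip(samples, channel_gains):
        diversity = min(np.linalg.norm(x - c) for c in collected)  # distance to nearest collected sample
        scores.append(diversity * h)                               # weight diversity by channel quality
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
samples = [rng.normal(size=4) for _ in range(5)]
gains = rng.uniform(0.2, 1.0, size=5)
print(schedule_next(samples, gains, collected=[samples[0]]))
```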

Read more
Information Theory

Non-Asymptotic Converse Bounds Via Auxiliary Channels

This paper presents a new method for deriving converse bounds on the non-asymptotic achievable rate of memoryless discrete channels. It is based on the finite-blocklength statistics of the channel, with an auxiliary channel used to produce the converse bound. The methodology is general and is first presented for an arbitrary channel. The main result is then specialized to the q-ary erasure channel (QEC), the binary symmetric channel (BSC), and the Z channel.

Read more
Information Theory

Non-Symmetric Coded Caching for Location-Dependent Content Delivery

Immersive viewing is emerging as the next interface evolution for human-computer interaction. A truly wireless immersive application necessitates immense data delivery with ultra-low latency, raising stringent requirements for next-generation wireless networks. A potential solution for addressing these requirements is the efficient use of in-device storage and computation capabilities. This paper proposes a novel location-based coded cache placement and delivery scheme, which leverages nested code modulation (NCM) to enable multi-rate multicast transmission. To provide a uniform quality of experience across network locations, we formulate a linear programming cache allocation problem. Next, based on the users' spatial realizations, we adopt an NCM-based coded delivery algorithm to efficiently serve a distinct group of users during each transmission. Numerical results demonstrate that the proposed location-based delivery method significantly increases transmission efficiency compared to the state of the art.
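
A toy version of a uniform-quality cache allocation LP is sketched below under made-up assumptions (linear per-location quality in the cache size, arbitrary coefficients); it is not the paper's formulation, but it shows the max-min structure such a problem can take: maximize the worst-case quality t subject to a_l * c_l + b_l >= t at every location l and a total cache budget.

```python
# Toy max-min cache allocation LP (hypothetical coefficients, illustrative only),
# solved with scipy.optimize.linprog: variables are [c_1, ..., c_L, t].
import numpy as np
from scipy.optimize import linprog

a = np.array([1.0, 0.5, 2.0])   # per-location quality gain per unit of cache (made up)
b = np.array([0.2, 0.4, 0.1])   # baseline quality with no cache (made up)
C_total = 3.0                   # total cache budget
L = len(a)

c_obj = np.zeros(L + 1)
c_obj[-1] = -1.0                                                # minimize -t  <=>  maximize t
A_ub = np.zeros((L + 1, L + 1))
b_ub = np.zeros(L + 1)
A_ub[:L, :L] = -np.diag(a); A_ub[:L, -1] = 1.0; b_ub[:L] = b    # t - a_l * c_l <= b_l
A_ub[L, :L] = 1.0; b_ub[L] = C_total                            # sum_l c_l <= C_total
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * L + [(None, None)])
print(res.x[:L], -res.fun)      # cache per location and the achieved worst-case quality
```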

Read more
Information Theory

Nonconvex Regularized Gradient Projection Sparse Reconstruction for Massive MIMO Channel Estimation

Novel sparse reconstruction algorithms are proposed for beamspace channel estimation in massive multiple-input multiple-output systems. The proposed algorithms minimize a least-squares objective with a nonconvex regularizer. This regularizer removes the penalties on a few large-magnitude elements of the conventional l1-norm regularizer, so it penalizes only the remaining elements, which are expected to be zero. Accurate and fast reconstructions are achieved by performing gradient projection updates within the framework of difference-of-convex-functions (DC) programming. A double-loop algorithm and a single-loop algorithm are proposed via different DC decompositions; the two algorithms have different computational complexities and convergence rates. An extension algorithm is then proposed by designing the step sizes of the single-loop algorithm. The extension algorithm has a faster convergence rate and achieves approximately the same accuracy as the proposed double-loop algorithm. Numerical results show significant advantages of the proposed algorithms over existing reconstruction algorithms in terms of reconstruction accuracy and runtime. Compared to benchmark channel estimation techniques, the proposed algorithms also achieve a smaller mean squared error and higher achievable spectral efficiency.
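
A generic sketch of the kind of regularizer and iteration described here is shown below: a DC linearization of a trimmed l1 penalty combined with a gradient step and soft thresholding. It is one plausible instantiation under stated assumptions, not the paper's double-loop, single-loop, or extension algorithms.

```python
# Sketch, assuming the regularizer lam * (||x||_1 - sum of the K largest |x_i|), i.e.
# an l1 penalty with the K largest entries exempted. DC step: linearize the subtracted
# (concave) part at the current iterate; convex step: LS gradient step + soft threshold.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def trimmed_l1_dc(A, y, lam=0.1, K=3, iters=300):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2               # 1 / Lipschitz constant of the LS gradient
    for _ in range(iters):
        top = np.argsort(np.abs(x))[-K:]                  # K largest-magnitude entries
        s = np.zeros_like(x)
        s[top] = lam * np.sign(x[top])                    # subgradient of the subtracted part
        grad = A.T @ (A @ x - y) - s
        x = soft(x - step * grad, step * lam)             # prox of lam * ||x||_1
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = trimmed_l1_dc(A, y)
print(np.argsort(np.abs(x_hat))[-3:])                     # indices of the largest recovered entries
```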

Read more
Information Theory

Off-grid Channel Estimation with Sparse Bayesian Learning for OTFS Systems

This paper proposes an off-grid channel estimation scheme for orthogonal time-frequency space (OTFS) systems adopting the sparse Bayesian learning (SBL) framework. To avoid channel spreading caused by fractional delay and Doppler shifts and to fully exploit the channel sparsity in the delay-Doppler (DD) domain, we estimate the original DD domain channel response rather than the effective DD domain channel response commonly adopted in the literature. OTFS channel estimation is first formulated as a one-dimensional (1D) off-grid sparse signal recovery (SSR) problem based on a virtual sampling grid defined in the DD space, where the on-grid and off-grid components of the delay and Doppler shifts are separated for estimation. In particular, the on-grid components of the delay and Doppler shifts are jointly determined by the entry indices with significant values in the recovered sparse vector. The corresponding off-grid components are then modeled as hyper-parameters in the proposed SBL framework, which can be estimated via the expectation-maximization method. To strike a balance between channel estimation performance and computational complexity, we further propose a two-dimensional (2D) off-grid SSR problem by decoupling the delay and Doppler shift estimations. In our 1D and 2D off-grid SBL-based channel estimation algorithms, the hyper-parameters are updated alternately to compute the conditional posterior distribution of the channel, which can be exploited to reconstruct the effective DD domain channel. Compared with the 1D method, the proposed 2D method enjoys a much lower computational complexity while suffering only a slight performance degradation. Simulation results verify the superior performance of the proposed channel estimation schemes over state-of-the-art schemes.
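
For readers new to the framework, the snippet below runs a standard on-grid sparse Bayesian learning iteration with EM hyper-parameter updates for a generic linear model y = Φh + noise; the paper's off-grid delay/Doppler hyper-parameters and the OTFS measurement matrix are not modeled here.

```python
# Standard on-grid SBL with EM hyper-parameter updates (illustrative baseline only;
# no off-grid refinement and no OTFS-specific measurement matrix).
import numpy as np

def sbl(Phi, y, noise_var=1e-2, iters=50):
    M, N = Phi.shape
    alpha = np.ones(N)                                   # per-coefficient precision hyper-parameters
    for _ in range(iters):
        Sigma = np.linalg.inv(Phi.conj().T @ Phi / noise_var + np.diag(alpha))
        mu = Sigma @ Phi.conj().T @ y / noise_var        # posterior mean of the sparse channel
        alpha = 1.0 / (np.abs(mu) ** 2 + np.real(np.diag(Sigma)))   # EM update of the precisions
    return mu

rng = np.random.default_rng(2)
Phi = (rng.normal(size=(30, 64)) + 1j * rng.normal(size=(30, 64))) / np.sqrt(2)
h = np.zeros(64, dtype=complex); h[[5, 20]] = [1 + 1j, -0.7]
y = Phi @ h + 0.05 * (rng.normal(size=30) + 1j * rng.normal(size=30))
mu = sbl(Phi, y)
print(np.argsort(np.abs(mu))[-2:])                       # should highlight taps 5 and 20
```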

Read more
