Featured Researches

Information Theory

Let's Share VMs: Optimal Placement and Pricing across Base Stations in MEC Systems

In mobile edge computing (MEC) systems, users offload computationally intensive tasks to edge servers at base stations. However, with unequal demand across the network, there may be excess demand at some locations and underutilized resources at others. To address this load-imbalance problem in MEC systems, in this paper we propose sharing virtual machines (VMs) across base stations. Specifically, we consider the joint VM placement and pricing problem across base stations to match demand with supply and maximize revenue at the network level. To make this problem tractable, we decompose it into master and slave problems. For the placement master problem, we propose MAP, a Markov approximation algorithm based on the design of a continuous-time Markov chain. For the pricing slave problem, we propose OPA, an optimal VM pricing auction in which all users are truthful. Furthermore, given users' potential untruthful behavior, we propose an incentive-compatible auction iCAT along with a partitioning mechanism PUFF, for which we prove incentive compatibility and revenue guarantees. Finally, we combine MAP with OPA or PUFF to solve the original problem, and analyze the optimality gap. Simulation results show that collaboration across base stations increases revenue by up to 50%.
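The abstract does not specify OPA's mechanics, but the idea of an auction where truthful bidding is optimal can be illustrated with a classic uniform-price (VCG-style) auction for identical VMs; the function below is a generic sketch, not the paper's OPA:

```python
def uniform_price_auction(bids, num_vms):
    """Allocate num_vms identical VMs to the highest bidders.

    Each winner pays the (num_vms + 1)-th highest bid (0 if there are
    fewer bidders than VMs). For unit-demand bidders this is the VCG
    price, which makes truthful bidding a dominant strategy.
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winners = order[:num_vms]
    price = bids[order[num_vms]] if len(bids) > num_vms else 0.0
    return winners, price
```

For bids [5, 3, 8, 1] and 2 VMs, bidders 2 and 0 win and each pays the third-highest bid, 3; no winner can lower the price by shading their own bid.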

Information Theory

Linear Computation Coding

We introduce the new concept of computation coding. Similar to how rate-distortion theory is concerned with the lossy compression of data, computation coding deals with the lossy computation of functions. Particularizing to linear functions, we present an algorithm to reduce the computational cost of multiplying an arbitrary given matrix by an unknown column vector. The algorithm decomposes the given matrix into a product of codebook and wiring matrices whose entries are either zero or signed integer powers of two. For a typical implementation of deep neural networks, the proposed algorithm reduces the number of required addition units several-fold. To achieve the accuracy of 16-bit signed integer arithmetic for 4k-vectors, no multipliers and only 1.5 adders per matrix entry are needed.
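The reason such factor matrices need no multipliers is that scaling by a signed power of two is just a bit shift. A minimal sketch (the sparse row format is illustrative, not the paper's decomposition algorithm):

```python
def apply_p2_matrix(rows, x):
    """Multiply a matrix whose entries are 0 or signed powers of two
    by an integer vector x using only shifts and additions.

    Each row is a list of (column, sign, k) triples meaning sign * 2**k
    at that column; all omitted entries are zero.
    """
    return [sum(sign * (x[c] << k) for c, sign, k in row) for row in rows]
```

Applying a chain of such factors in sequence reproduces (approximately) the original matrix-vector product, which is where the lossy "computation coding" trade-off enters.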

Information Theory

List-Decodable Coded Computing: Breaking the Adversarial Toleration Barrier

We consider the problem of coded computing, where a computational task is performed in a distributed fashion in the presence of adversarial workers. We propose techniques to break the adversarial toleration threshold barrier previously known in coded computing. More specifically, we leverage list-decoding techniques for folded Reed-Solomon (FRS) codes and propose novel algorithms to recover the correct codeword using side information. In the coded computing setting, we show how the master node can perform certain carefully designed extra computations in order to obtain the side information. This side information is then utilized to prune the output of the list decoder in order to uniquely recover the true outcome. We further propose folded Lagrange coded computing, referred to as folded LCC or FLCC, to incorporate the developed techniques into a specific coded computing setting. Our results show that FLCC outperforms LCC by breaking the barrier on the number of adversaries that can be tolerated. In particular, the corresponding threshold in FLCC is improved by a factor of two compared to that of LCC.
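The pruning step can be illustrated with a toy example: the master evaluates the encoded polynomial at one extra point itself and discards any list candidate that disagrees. This sketch works over the integers for readability; the actual scheme list-decodes folded Reed-Solomon codes over finite fields:

```python
def prune_list(candidates, probe_point, probe_value):
    """Keep only candidate polynomials consistent with one extra
    evaluation (the 'side information' computed by the master).

    Each candidate is a coefficient list, highest degree first.
    """
    def horner(coeffs, x):
        acc = 0
        for c in coeffs:
            acc = acc * x + c
        return acc
    return [p for p in candidates if horner(p, probe_point) == probe_value]
```

If exactly one candidate survives, the true outcome is recovered uniquely even though the adversaries forced the decoder to output a list.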

Information Theory

Local Differential Privacy Is Equivalent to Contraction of the E_γ-Divergence

We investigate the local differential privacy (LDP) guarantees of a randomized privacy mechanism via its contraction properties. We first show that LDP constraints can be equivalently cast in terms of the contraction coefficient of the E_γ-divergence. We then use this equivalent formulation to express LDP guarantees of privacy mechanisms in terms of contraction coefficients of arbitrary f-divergences. When combined with standard estimation-theoretic tools (such as Le Cam's and Fano's converse methods), this result allows us to study the trade-off between privacy and utility in several hypothesis testing problems and in minimax and Bayesian estimation problems.
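The connection is easy to check numerically for a simple mechanism. For discrete distributions, E_γ(P||Q) = Σ_y max(P(y) − γQ(y), 0), and a mechanism is ε-LDP exactly when E_γ with γ = e^ε vanishes between any two of its output rows. A sketch using binary randomized response (a standard ε-LDP mechanism, chosen here only as an example):

```python
import math

def e_gamma(p, q, gamma):
    """E_gamma divergence between two discrete distributions."""
    return sum(max(pi - gamma * qi, 0.0) for pi, qi in zip(p, q))

eps = 1.0
keep = math.exp(eps) / (math.exp(eps) + 1)  # P(report true bit)
row0, row1 = [keep, 1 - keep], [1 - keep, keep]

# eps-LDP  <=>  E_{e^eps}(K(.|x) || K(.|x')) == 0 for all input pairs
assert e_gamma(row0, row1, math.exp(eps)) < 1e-12
# gamma = 1 recovers total variation, which is strictly positive here
assert e_gamma(row0, row1, 1.0) > 0.4
```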

Information Theory

Low Latency Scheduling Algorithms for Full-Duplex V2X Networks

Vehicular communication systems have been an active subject of research for many years and are an important technology in the 5G and post-5G era. One important use case is platooning, which is seemingly the first step towards fully autonomous driving systems. A key performance parameter in all vehicular communication systems is the end-to-end packet latency, and full-duplex (FD) transceivers can potentially be an efficient and practical means of reducing the delay in platooning networks. In this paper, we study the delay performance of dynamic and TDMA-based scheduling algorithms and assess the effect of FD-enabled vehicles with imperfect self-interference cancellation (SIC). Through simulations, we demonstrate the performance-complexity trade-off of these algorithms and show that a low-complexity, low-overhead centralized TDMA scheme can achieve performance comparable to a fully dynamic, centralized algorithm.
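The appeal of the static TDMA scheme is its simplicity: slots are assigned once, with no per-packet signaling, at the cost of a worst-case access delay of one full frame. A minimal sketch of such a round-robin assignment (the slot layout is an assumption for illustration, not the paper's scheduler):

```python
def tdma_round_robin(num_vehicles, slot_ms):
    """Static TDMA schedule: one slot per vehicle per frame.

    Returns each vehicle's slot index and the worst-case channel
    access delay (a full frame), the price paid for the
    low-complexity, low-overhead centralized scheme.
    """
    schedule = {v: v for v in range(num_vehicles)}
    worst_case_delay_ms = num_vehicles * slot_ms
    return schedule, worst_case_delay_ms
```

A dynamic scheduler can beat this worst case by reassigning idle slots, which is exactly the performance-complexity trade-off the paper quantifies.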

Information Theory

Low-Complexity Interference Cancellation Algorithms for Detection in Media-based Modulated Uplink Massive-MIMO Systems

Media-based modulation (MBM) is a novel modulation technique that can improve the spectral efficiency of existing wireless systems. In MBM, multiple radio frequency (RF) mirrors are placed near the transmit antenna(s) and are switched ON/OFF to create different channel fade realizations. In such systems, additional information is conveyed through the ON/OFF status of the RF mirrors along with conventional modulation symbols. A challenging task at the receiver is to detect the transmitted information symbols and extract the additional information from the channel fade realization used for transmission. In this paper, we consider a massive MIMO (mMIMO) system where each user relies on MBM for transmitting information to the base station, and investigate the problem of symbol detection at the base station. First, we propose a mirror activation pattern (MAP) selection based modified iterative sequential detection algorithm. With the proposed algorithm, the most favorable MAP is selected, followed by the detection of the symbol corresponding to the selected MAP. Each solution is subjected to a reliability check before being updated. Next, we introduce a K favorable MAP search based iterative interference cancellation (KMAP-IIC) algorithm. In particular, a selection rule is introduced in KMAP-IIC for deciding the set of favorable MAPs over which iterative interference cancellation is performed, followed by a greedy update scheme for detecting the MBM symbols corresponding to each user. Simulation results show that the proposed detection algorithms exhibit a superior performance-complexity trade-off over existing detection techniques in MBM-mMIMO systems.
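To see why detection must recover both the symbol and the mirror pattern, consider a single-user, single-observation toy: the receiver knows the channel realization each MAP would create and searches exhaustively over (pattern, symbol) pairs. This brute-force maximum-likelihood sketch is the baseline the paper's low-complexity algorithms are designed to avoid:

```python
def mbm_ml_detect(y, channels, constellation):
    """Exhaustive ML detection for a single MBM transmission.

    channels[m] is the known complex channel realization created by
    mirror-activation pattern m; the transmitter conveys extra bits by
    choosing m, plus a conventional symbol s from the constellation.
    Returns the (pattern, symbol) pair minimizing |y - h_m * s|^2.
    """
    return min(((m, s) for m in range(len(channels)) for s in constellation),
               key=lambda ms: abs(y - channels[ms[0]] * ms[1]) ** 2)
```

With M mirror patterns and a Q-ary constellation, each use conveys log2(M) + log2(Q) bits, but the search space is M * Q per user, which motivates the MAP-selection and interference-cancellation shortcuts above.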

Information Theory

Low-Power Status Updates via Sleep-Wake Scheduling

We consider the problem of optimizing the freshness of status updates that are sent from a large number of low-power sources to a common access point. The source nodes utilize carrier sensing to reduce collisions and adopt an asynchronized sleep-wake scheduling strategy to achieve a target network lifetime (e.g., 10 years). We use age of information (AoI) to measure the freshness of status updates, and design sleep-wake parameters for minimizing the weighted-sum peak AoI of the sources, subject to per-source battery lifetime constraints. When the sensing time (i.e., the time duration of carrier sensing) is zero, this sleep-wake design problem can be solved by resorting to a two-layer nested convex optimization procedure; however, for positive sensing times, the problem is non-convex. We devise a low-complexity solution to this problem and prove that, for practically short sensing times, the solution is within a small gap from the optimum AoI performance. When the mean transmission time of status-update packets is unknown, we devise a reinforcement learning algorithm that adaptively performs the following two tasks in an "efficient way": a) it learns the unknown parameter, and b) it generates efficient controls that make channel-access decisions. We analyze its performance by quantifying its "regret", i.e., the sub-optimality gap between its average performance and the average performance of a controller that knows the mean transmission time. Our numerical and NS-3 simulation results show that our solution can indeed extend the battery lifetime of information sources while providing competitive AoI performance.
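The objective can be made concrete with a deliberately simplified model: if a source wakes and transmits once every T time units and a transmission takes S time units, its peak AoI is roughly T + S. The weighted-sum objective over sources is then (this toy ignores carrier sensing, collisions, and the random sleep-wake timing the paper actually optimizes):

```python
def weighted_peak_aoi(weights, periods, service_time):
    """Weighted-sum peak AoI for sources updating once per period.

    Each source's peak AoI is approximated as its inter-update gap
    plus the transmission time; weights reflect source importance.
    """
    return sum(w * (T + service_time) for w, T in zip(weights, periods))
```

Longer periods save battery but inflate the weighted-sum peak AoI, which is the freshness-lifetime trade-off the sleep-wake parameters navigate.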

Information Theory

Lower Bound on the Optimal Access Bandwidth of (K+2,K,2)-MDS Array Code with Degraded Read Friendly

Accessing the data of a failed disk (degraded read) with low latency is crucial for an erasure-coded storage system. In this work, we discuss maximum distance separable (MDS) array codes with the degraded-read friendly (DRF) property. For DRF MDS array codes with 2 redundant nodes and a sub-packetization level of 2, we derive a lower bound on the access bandwidth.

Information Theory

Lower Bounds on Information Requirements for Causal Network Inference

Recovery of the causal structure of dynamic networks from noisy measurements has long been a problem of intense interest across many areas of science and engineering. Many algorithms have been proposed, but there is no work that compares the performance of the algorithms to converse bounds in a non-asymptotic setting. As a step towards addressing this problem, this paper gives lower bounds on the error probability for causal network support recovery in a linear Gaussian setting. The bounds are based on the use of the Bhattacharyya coefficient for binary hypothesis testing problems with mixture probability distributions. A comparison of the bounds and the performance achieved by two representative recovery algorithms is given for sparse random networks based on the Erdős-Rényi model.
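The Bhattacharyya machinery is easy to demonstrate in the simplest case. For equal-prior binary hypothesis testing, the error probability satisfies P_e ≥ (1 − sqrt(1 − ρ²))/2, where ρ is the Bhattacharyya coefficient; for two equal-variance Gaussians, ρ has a closed form. The paper works with mixture distributions, so the sketch below is only the single-Gaussian special case:

```python
import math

def bhattacharyya_gaussian(mu0, mu1, sigma):
    """Bhattacharyya coefficient between N(mu0, s^2) and N(mu1, s^2)."""
    return math.exp(-((mu1 - mu0) ** 2) / (8 * sigma ** 2))

def error_lower_bound(rho):
    """Equal-prior binary test: P_e >= (1 - sqrt(1 - rho^2)) / 2."""
    return (1 - math.sqrt(1 - rho ** 2)) / 2
```

As the means separate, ρ decays and the bound vanishes; identical hypotheses give ρ = 1 and the trivial bound P_e ≥ 1/2.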

Information Theory

Lower Bound on Wyner's Common Information

An important notion of common information between two random variables is due to Wyner. In this paper, we derive a lower bound on Wyner's common information for continuous random variables. The new bound improves on the only other general lower bound on Wyner's common information, which is the mutual information. We also show that the new lower bound is tight for the so-called "Gaussian channels" case, namely, when the joint distribution of the random variables can be written as the sum of a single underlying random variable and Gaussian noises. We motivate this work by recent variations of Wyner's common information and by applications to network data compression problems such as the Gray-Wyner network.
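The gap between the two quantities is concrete in the bivariate Gaussian case with correlation ρ, where both have well-known closed forms: Wyner's common information is (1/2)log2((1+ρ)/(1−ρ)) bits, the mutual information is −(1/2)log2(1−ρ²) bits, and their difference simplifies to log2(1+ρ) ≥ 0, showing the mutual-information lower bound is loose whenever ρ > 0:

```python
import math

def wyner_ci_gaussian(rho):
    """Wyner common information of a bivariate Gaussian pair, in bits."""
    return 0.5 * math.log2((1 + rho) / (1 - rho))

def mutual_info_gaussian(rho):
    """Mutual information of the same pair, in bits."""
    return -0.5 * math.log2(1 - rho ** 2)
```

At ρ = 0.5, the common information exceeds the mutual information by log2(1.5) ≈ 0.585 bits.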

