Qiwen Wang
Royal Institute of Technology
Publication
Featured research published by Qiwen Wang.
international conference on communications | 2017
Qiwen Wang; Mikael Skoglund
A user wants to retrieve a file from a database without revealing to the database which file is retrieved, which is known as the problem of private information retrieval (PIR). If it is further required that the user obtains no information about the database other than the desired file, the concept of symmetric private information retrieval (SPIR) is introduced to guarantee privacy for both parties. In this paper, the problem of SPIR is studied for a database stored in a distributed way among N nodes, using an (N, M)-MDS storage code. The information-theoretic capacity of SPIR for the coded database, defined as the maximum number of information bits of the desired file retrieved per downloaded bit, is derived. It is shown that the SPIR capacity for the coded database is 1 - M/N, provided the amount of common randomness shared by the distributed nodes (and unavailable to the user) is at least M/(N - M) times the file size. Otherwise, the SPIR capacity for the coded database equals zero.
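As a quick numerical illustration of the capacity expression above (a hypothetical example, not taken from the paper), the following sketch evaluates 1 - M/N against the common-randomness requirement M/(N - M):

```python
# Hypothetical numerical illustration of the SPIR capacity result stated above:
# capacity = 1 - M/N when the shared common randomness is at least M/(N - M)
# times the file size; otherwise the capacity is zero.

def spir_capacity(n_nodes: int, m: int, randomness_ratio: float) -> float:
    """SPIR capacity for an (N, M)-MDS coded database.

    randomness_ratio: shared common randomness divided by the file size.
    """
    if randomness_ratio >= m / (n_nodes - m):
        return 1 - m / n_nodes
    return 0.0

# Example: N = 4 storage nodes, (4, 2)-MDS code.
print(spir_capacity(4, 2, 1.0))   # 0.5 -> randomness meets the M/(N - M) = 1 threshold
print(spir_capacity(4, 2, 0.5))   # 0.0 -> insufficient common randomness
```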
information theory workshop | 2015
Qiwen Wang; Viveck R. Cadambe; Sidharth Jaggi; Moshe Schwartz; Muriel Médard
The problem of one-way file synchronization, henceforth called “file updates”, is studied in this paper. Specifically, a client edits a file, where the edits are modeled by insertions and deletions (InDels). An old copy of the file is stored remotely at a data-centre and is also available to the client. We consider the problem of throughput- and computationally-efficient communication from the client to the data-centre, enabling the data-centre to update its old copy to the newly edited file. Two models for the source files and edit patterns are studied: the random pre-edit sequence left-to-right random InDel (RPES-LtRRID) process, and the arbitrary pre-edit sequence arbitrary InDel (APES-AID) process. In both models, we consider the regime in which the number of insertions and deletions is a small (but constant) fraction of the length of the original file. For both models, information-theoretic lower bounds on the best possible compression rates that enable file updates are derived (up to first-order terms). On the achievability side, a simple compression algorithm using dynamic programming (DP) and entropy coding (EC), henceforth called the DP-EC algorithm, achieves rates within a constant additive gap (which diminishes as the alphabet size increases) of the information-theoretic lower bounds for both models. For the RPES-LtRRID model, a dynamic-programming-run-length-compression (DP-RLC) algorithm is proposed, which achieves a compression rate matching the information-theoretic lower bound up to first-order terms. Therefore, when the insertion and deletion probabilities are small (such that first-order terms dominate), the rate achieved by DP-RLC is nearly optimal for the RPES-LtRRID model.
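The following is a minimal sketch of the dynamic-programming step that such schemes build on: a textbook InDel edit-distance alignment that recovers an edit script, which a subsequent entropy or run-length coder would compress. It illustrates the generic technique only, not the authors' DP-EC or DP-RLC implementation; the function name and example strings are hypothetical.

```python
# Minimal sketch of an insertion/deletion (InDel) edit-script computation via
# dynamic programming.  This is a textbook edit-distance DP, not the DP-EC or
# DP-RLC algorithm from the paper; it only shows the kind of edit script that
# a subsequent entropy/run-length coder would compress.

def indel_script(old: str, new: str):
    n, m = len(old), len(new)
    # dp[i][j] = minimum number of InDels turning old[:i] into new[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if old[i - 1] == new[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],   # delete old[i-1]
                                   dp[i][j - 1])   # insert new[j-1]
    # Trace back one optimal script.
    script, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and old[i - 1] == new[j - 1] and dp[i][j] == dp[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            script.append(("del", i - 1))
            i -= 1
        else:
            script.append(("ins", j - 1, new[j - 1]))
            j -= 1
    return script[::-1]

print(indel_script("synchronization", "synchronisation"))
```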
information theory workshop | 2011
Qiwen Wang; Sidharth Jaggi; Shuo-Yen Robert Li
We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes ([10], [11]) can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes [6]. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.
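For readers unfamiliar with GV-type arguments, the sketch below shows the classical Gilbert–Varshamov greedy construction under the ordinary Hamming metric; the paper applies the analogous argument under its new metric, so this is only an illustration of the generic technique, with the code parameters chosen arbitrarily.

```python
# Illustration of the classical Gilbert-Varshamov greedy construction under
# the ordinary Hamming metric (the paper's GV-type bound uses its new metric
# instead).  Keep adding words as long as they stay at distance >= d from all
# codewords chosen so far; the resulting code always meets the GV bound.
from itertools import product
from math import comb

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_gv_code(n: int, d: int):
    """Greedily pick binary length-n words with pairwise Hamming distance >= d."""
    code = []
    for word in product((0, 1), repeat=n):
        if all(hamming(word, c) >= d for c in code):
            code.append(word)
    return code

code = greedy_gv_code(n=7, d=3)
gv_guarantee = 2**7 / sum(comb(7, i) for i in range(3))   # 2^n / V(n, d-1)
print(len(code), "codewords; GV bound guarantees at least", gv_guarantee)
```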
IEEE Transactions on Information Theory | 2018
Qiwen Wang; Sidharth Jaggi
In highly dynamic wireless networks, communications face several challenges. First, noise levels between nodes might be difficult to predict a priori. Moreover, a Byzantine attacker hidden in the network, with knowledge of the network topology and observation of all transmissions, can choose arbitrary locations to inject corrupted packets. Since transmissions are usually in bits and hardware in wireless networks typically uses modulation schemes whose alphabet sizes are powers of two (e.g., BPSK, QPSK, 16-QAM, 64-QAM), we address the above problem by studying coding for networks experiencing worst-case bit errors, with network codes over binary extension fields. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. A new transform metric for errors under the considered model is proposed. Using this metric, we replicate many of the classical results from coding theory. Specifically, new Hamming-type, Plotkin-type, and Elias-Bassalygo-type upper bounds on the network capacity are derived. A commensurate lower bound is shown based on Gilbert–Varshamov (GV)-type codes for error-correction. The GV codes used to attain the lower bound can be non-coherent, that is, they require neither prior knowledge of the network topology nor of the network coding kernels. We also propose a computationally efficient concatenation scheme. The rate achieved by our concatenated codes is characterized by a Zyablov-type lower bound. We provide a generalized minimum-distance decoding algorithm which decodes up to half the minimum distance of the concatenated codes. The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. A further advantage of the end-to-end strategy over link-by-link error-correction is that it reduces the computational cost of error-correction at the internal nodes.
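As background on the Zyablov-type bound mentioned above, the sketch below numerically evaluates the classical binary Zyablov trade-off for concatenated codes (outer code with relative distance 1 - R_out, inner code on the GV bound). The paper's bound is the analogue in its transform metric, so this serves only as the classical reference point; the grid resolution and example distance are arbitrary.

```python
# Numerical sketch of the classical binary Zyablov trade-off for concatenated
# codes: an outer code of rate R_out with relative distance 1 - R_out and an
# inner code on the Gilbert-Varshamov bound give overall rate
#   R_Z(delta) = max over inner rate r of  r * (1 - delta / H^{-1}(1 - r)).
from math import log2

def h2(p: float) -> float:
    """Binary entropy function."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def h2_inv(y: float) -> float:
    """Inverse of the binary entropy on [0, 1/2], via bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if h2(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def zyablov_rate(delta: float, steps: int = 1000) -> float:
    best = 0.0
    for k in range(1, steps):
        r = k / steps                # candidate inner-code rate
        d_in = h2_inv(1 - r)         # GV-bound relative distance of the inner code
        if d_in > delta:
            best = max(best, r * (1 - delta / d_in))
    return best

print(round(zyablov_rate(0.1), 4))   # achievable overall rate at relative distance 0.1
```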
data compression conference | 2017
Hanwei Wu; Qiwen Wang; Markus Flierl
This paper considers the problem of compression for similarity queries and discusses tree-structured vector quantizers. Here, the focus is on the trade-off between the rate of the compressed data and the reliability of the answers to a given query. This problem is different from classical quantization as there is no need to reconstruct the original data. Instead, compression is determined by the reliability of answering given queries. We consider compression schemes that do not allow false negatives when answering queries. Hence, classical vector quantization needs to be modified. We propose quantizers that hierarchically cluster the data into sphere-shaped quantization cells. The query process is guided by decision rules that avoid false negatives. In particular, we discuss two classic clustering methods, namely k-means and k-center. We use P(maybe), a probability that is related to the occurrence of false positives, and the computational cost of queries to assess our scheme. Our experiments show that k-center clustering generally performs better than k-means clustering, while tree-structured clustering reduces the computational cost of queries for both methods.
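A minimal single-level sketch of the sphere-shaped-cell idea, assuming the query asks whether any database point lies within distance D of the query point: each cell is summarized by a center and a covering radius, and the triangle inequality gives a decision rule that never produces false negatives. This is illustrative only, not the authors' tree-structured k-means/k-center construction; the data, parameters, and helper names are hypothetical.

```python
# Sketch of sphere-shaped quantization cells with a no-false-negative decision
# rule.  A cell can be answered "no" only if dist(query, center) > radius + D;
# by the triangle inequality, no point of that cell can then lie within D of
# the query, so points within D are never discarded (only "maybe" answers may
# turn out to be false positives).  Single-level k-means here for illustration.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 8))
K = 16

def kmeans(points, k, iters=20):
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

centers, labels = kmeans(data, K)
# Covering radius of each cell (0 for an empty cell).
radii = np.array([np.linalg.norm(data[labels == j] - centers[j], axis=1).max()
                  if np.any(labels == j) else 0.0
                  for j in range(K)])

def query(q, D):
    """Return 'maybe' if some cell may contain a point within D of q, else 'no'."""
    dists = np.linalg.norm(centers - q, axis=1)
    return "maybe" if np.any(dists <= radii + D) else "no"

print(query(rng.normal(size=8), D=0.5))
```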
information theory workshop | 2016
Qiwen Wang; Muriel Médard; Mikael Skoglund
The problem of one-way file synchronization from the client to the data-center, namely file updates, is studied. In particular, an old version of a file, available at both the client and the data-center, is edited by the client into a new version. The edits are modeled as random insertions and deletions (InDels). Based on the updated and the previous versions of the file, the client transmits a message to the data-center over a noiseless link so that the data-center can update its copy. A dynamic-programming-run-length-coding (DP-RLC) scheme is proposed in this paper for encoding this message. The lower-order terms of the achievable rate are computed explicitly, and it is worth noting that these terms match the information-theoretic lower bound derived in our previous work [1]. Therefore, when the insertion and deletion probabilities are small, the achievable rate is nearly optimal.
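To illustrate the run-length component in isolation (a toy sketch, not the DP-RLC encoder analyzed in the paper), the snippet below converts a hypothetical list of edit positions into the lengths of the unedited runs between them, which is what a run-length coder would then compress:

```python
# Toy illustration of the run-length idea: instead of absolute edit positions,
# transmit the lengths of the unedited runs between consecutive edits.  When
# edits are sparse, the message is dominated by a few long run lengths.  This
# is only a sketch of the idea, not the DP-RLC encoder from the paper.

def run_lengths(edit_positions, file_length):
    """Lengths of unedited stretches between consecutive edit positions."""
    runs, prev = [], -1
    for pos in sorted(edit_positions):
        runs.append(pos - prev - 1)
        prev = pos
    runs.append(file_length - prev - 1)
    return runs

# Edits at positions 4, 5 and 90 in a 100-symbol file -> runs [4, 0, 84, 9].
print(run_lengths([4, 5, 90], 100))
```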
allerton conference on communication, control, and computing | 2017
Qiwen Wang; Mikael Skoglund
information theory workshop | 2017
Qiwen Wang; Mikael Skoglund
international symposium on information theory | 2018
Qiwen Wang; Mikael Skoglund
arXiv: Information Theory | 2018
Qiwen Wang; Hua Sun; Mikael Skoglund