

Publication


Featured research published by Chien-Yi Wang.


IEEE Transactions on Information Theory | 2014

Approximate Ergodic Capacity of a Class of Fading Two-User Two-Hop Networks

Sang-Woon Jeon; Chien-Yi Wang; Michael Gastpar

The fading AWGN two-user two-hop network is considered where the channel coefficients are independent and identically distributed (i.i.d.) according to a continuous distribution and vary over time. For a broad class of channel distributions, the ergodic sum capacity is characterized to within a constant number of bits/sec/Hz, independent of the signal-to-noise ratio. The achievability follows from the analysis of an interference neutralization scheme where the relays are partitioned into M pairs, and interference is neutralized separately by each pair of relays. When M = 1, the proposed ergodic interference neutralization characterizes the ergodic sum capacity to within 4 bits/sec/Hz for i.i.d. uniform phase fading and approximately 4.7 bits/sec/Hz for i.i.d. Rayleigh fading. It is further shown that this gap can be tightened to 4 log π - 4 bits/sec/Hz (approximately 2.6) for i.i.d. uniform phase fading and 4 - 4 log(3π/8) bits/sec/Hz (approximately 3.1) for i.i.d. Rayleigh fading in the limit of large M.
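The large-M gap constants quoted in the abstract can be checked numerically; a quick sketch, assuming base-2 logarithms (rates are in bits/sec/Hz):

```python
import math

# Numerical check of the large-M capacity-gap constants (base-2 logs,
# since the gaps are expressed in bits/sec/Hz).
gap_uniform_phase = 4 * math.log2(math.pi) - 4          # approx 2.6
gap_rayleigh = 4 - 4 * math.log2(3 * math.pi / 8)       # approx 3.1
```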


IEEE Transactions on Information Theory | 2014

Computation Over Gaussian Networks With Orthogonal Components

Sang-Woon Jeon; Chien-Yi Wang; Michael Gastpar

Function computation over Gaussian networks with orthogonal components is studied for arbitrarily correlated discrete memoryless sources. Two classes of functions are considered: 1) the arithmetic sum function and 2) the type function. The arithmetic sum function in this paper is defined as a set of multiple weighted arithmetic sums, which includes averaging of the sources and estimating each of the sources as special cases. The type or frequency histogram function counts the number of occurrences of each argument, which yields various fundamental statistics, such as mean, variance, maximum, minimum, median, and so on. The proposed computation coding scheme first abstracts Gaussian networks into the corresponding modulo sum multiple-access channels via nested lattice codes and linear network coding and then computes the desired function using linear Slepian-Wolf source coding. For orthogonal Gaussian networks (with no broadcast and multiple-access components), the computation capacity is characterized for a class of networks. For Gaussian networks with multiple-access components (but no broadcast), an approximate computation capacity is characterized for a class of networks.
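The two function classes above are easy to illustrate outside the coding scheme itself; a toy sketch with hypothetical source symbols, showing how order statistics fall out of the type (frequency histogram) function:

```python
from collections import Counter
from statistics import median

# Toy illustration (not the paper's coding scheme): the type of a set of
# observations is their frequency histogram, from which statistics such as
# max, min, and median can be recovered.
observations = [2, 0, 1, 2, 2, 1]           # hypothetical source symbols

# Type / frequency histogram function: occurrences of each argument value.
type_fn = Counter(observations)             # counts per symbol

# Statistics recoverable from the type alone:
maximum = max(type_fn)                      # largest symbol with count > 0
minimum = min(type_fn)
med = median(sorted(type_fn.elements()))

# Arithmetic sum function: a set of weighted sums of the sources; the plain
# sum and the average are special cases.
weights = [1] * len(observations)
arithmetic_sum = sum(w * x for w, x in zip(weights, observations))
average = arithmetic_sum / len(observations)
```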


international symposium on information theory | 2012

Approximate ergodic capacity of a class of fading 2-user 2-hop networks

Sang-Woon Jeon; Chien-Yi Wang; Michael Gastpar

We study a 2-user 2-hop network with 2 relays in which channel coefficients are independently drawn from continuous distributions and vary over time. For a broad class of channel distributions, we characterize the ergodic sum capacity within a constant number of bits/sec/Hz, independent of signal-to-noise ratio. Specifically, we characterize the ergodic sum capacity within 4 bits/sec/Hz for independent and identically distributed (i.i.d.) uniform phase fading and approximately 4.7 bits/sec/Hz for i.i.d. Rayleigh fading. For achievability, we propose ergodic interference neutralization in which the relays amplify and forward their received signals with appropriate delays such that interference can be neutralized at each destination.


international symposium on information theory | 2015

Information-theoretic caching

Chien-Yi Wang; Sung Hoon Lim; Michael Gastpar

Motivated by the caching problem introduced by Maddah-Ali and Niesen, a problem of distributed source coding with side information is formulated, which captures a distinct and interesting aspect of caching. For the single-user case, a single-letter characterization of the optimal rate region is presented. For the cases where the source is composed of either independent or nested components, the exact optimal rate regions are found and some intuitive caching strategies are confirmed to be optimal. When the components are arbitrarily correlated with uniform requests, the optimal caching strategy is found to be closely related to total correlation and Wyner's common information. For the two-user case, some subproblems are solved which draw connections to the Gray-Wyner system and distributed successive refinement. Finally, inner and outer bounds are given for the case of two private caches with a common update.
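Total correlation, one of the quantities the optimal strategy connects to, is simple to evaluate for a toy joint distribution; a minimal sketch (the joint pmf is hypothetical):

```python
import math

# Total correlation of two sources, in bits:
# C(X1; X2) = H(X1) + H(X2) - H(X1, X2).

def entropy(probs):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint pmf p(x1, x2) of two correlated bits.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

h_joint = entropy(joint.values())
h_x1 = entropy([joint[(0, 0)] + joint[(0, 1)], joint[(1, 0)] + joint[(1, 1)]])
h_x2 = entropy([joint[(0, 0)] + joint[(1, 0)], joint[(0, 1)] + joint[(1, 1)]])

total_correlation = h_x1 + h_x2 - h_joint   # zero iff X1 and X2 independent
```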


international symposium on information theory | 2013

Multi-round computation of type-threshold functions in collocated Gaussian networks

Chien-Yi Wang; Sang-Woon Jeon; Michael Gastpar

In wireless sensor networks, various applications involve learning one or multiple functions of the measurements observed by sensors, rather than the measurements themselves. This paper focuses on the computation of type-threshold functions which include the maximum, minimum, and indicator functions as special cases. Previous work studied this problem under the collocated collision network model and showed that under many probabilistic models for the measurements, the achievable computation rates tend to zero as the number of sensors increases. In this paper, wireless sensor networks are modeled as fully connected Gaussian networks with equal channel gains, which are termed collocated Gaussian networks. A general multi-round coding scheme exploiting not only the broadcast property but also the superposition property of Gaussian networks is developed. Through careful scheduling of concurrent transmissions to reduce redundancy, it is shown that given any independent measurement distribution, all type-threshold functions can be computed reliably with a non-vanishing rate even if the number of sensors tends to infinity.
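A type-threshold function depends on the measurement histogram only through counts clipped at a threshold; a toy sketch (alphabet, readings, and threshold all hypothetical, unrelated to the coding scheme) showing the maximum and an indicator function as special cases:

```python
# A type-threshold function is determined by the histogram with each count
# clipped at a threshold theta; max and indicator functions are special cases.

def clipped_type(measurements, alphabet, theta):
    """Frequency histogram with each count clipped at theta."""
    return {a: min(theta, sum(1 for m in measurements if m == a))
            for a in alphabet}

alphabet = [0, 1, 2, 3]
measurements = [1, 3, 1, 0, 3, 3]           # hypothetical sensor readings

t = clipped_type(measurements, alphabet, theta=1)

# Maximum: the largest symbol whose (clipped) count is at least 1.
maximum = max(a for a, c in t.items() if c >= 1)

# Indicator function: does the value 2 occur among the measurements?
occurs_2 = t[2] >= 1
```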


IEEE Transactions on Information Theory | 2015

Interactive Computation of Type-Threshold Functions in Collocated Gaussian Networks

Chien-Yi Wang; Sang-Woon Jeon; Michael Gastpar

In wireless sensor networks, various applications involve learning one or multiple functions of the measurements observed by sensors, rather than the measurements themselves. This paper focuses on the class of type-threshold functions, e.g., the maximum and the indicator functions. A simple network model capturing both the broadcast and superposition properties of wireless channels is considered: the collocated Gaussian network. A general multiround coding scheme exploiting superposition and interaction (through broadcast) is developed. Through careful scheduling of concurrent transmissions to reduce redundancy, it is shown that given any independent measurement distribution, all type-threshold functions can be computed reliably with a nonvanishing rate in the collocated Gaussian network, even if the number of sensors tends to infinity.


information theory and applications | 2012

Approximate ergodic capacity of a class of fading 2 × 2 × 2 Networks

Sang-Woon Jeon; Chien-Yi Wang; Michael Gastpar

We study a 2-user 2-hop network with 2 relays in which channel coefficients are independently drawn from continuous distributions and vary over time. For a broad class of channel distributions, we characterize the ergodic sum capacity within a constant number of bits/sec/Hz, independent of signal-to-noise ratio. Specifically, we characterize the ergodic sum capacity within 4 bits/sec/Hz for independent and identically distributed (i.i.d.) uniform phase fading and approximately 4.7 bits/sec/Hz for i.i.d. Rayleigh fading. For achievability, we propose ergodic interference neutralization in which the relays amplify and forward their received signals with appropriate delays such that interference can be neutralized at each destination.


international symposium on information theory | 2014

On Distributed Successive Refinement with Lossless Recovery

Chien-Yi Wang; Michael Gastpar

The problem of successive refinement in distributed source coding and in joint source-channel coding is considered. The emphasis is placed on the case where the sources have to be recovered losslessly in the second stage. In distributed source coding, it is shown that all sources are successively refinable in sum rate, with respect to any (joint) distortion measure in the first stage. In joint source-channel coding, the sources are assumed independent and only a (per-letter) function is to be recovered losslessly in the first stage. For a class of multiple access channels, it is shown that all sources are successively refinable with respect to a class of linear functions. Finally, when the sources have equal entropy, a simple sufficient condition for successive refinability is provided for partially invertible functions.


international symposium on information theory | 2013

Computation over Gaussian networks with orthogonal components

Sang-Woon Jeon; Chien-Yi Wang; Michael Gastpar

Function computation of arbitrarily correlated discrete sources over Gaussian networks with multiple access components but no broadcast is studied. Two classes of functions are considered: the arithmetic sum function and the frequency histogram function. The arithmetic sum function in this paper is defined as a set of multiple weighted arithmetic sums, which includes averaging of sources and estimating each of the sources as special cases. The frequency histogram function counts the number of occurrences of each argument, which yields many important statistics such as mean, variance, maximum, minimum, median, and so on. For a class of networks, an approximate computation capacity is characterized. The proposed approach first abstracts Gaussian networks into the corresponding modulo-sum multiple-access channels via lattice codes and linear network coding and then computes the desired function by using linear Slepian-Wolf source coding.
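The abstraction step can be sketched in isolation; a toy illustration (idealizing the multiple-access component as a perfect modulo-sum channel, with hypothetical symbols and weights, and omitting the lattice and Slepian-Wolf machinery entirely):

```python
# Toy sketch of the abstraction step only: the Gaussian multiple-access
# component is idealized as a channel delivering the modulo-q sum of the
# transmitted symbols, from which a weighted arithmetic sum is read off.

q = 31                                   # modulus, chosen > any possible sum
sources = [3, 5, 6]                      # one symbol per transmitter
weights = [2, 1, 1]                      # target function: 2*x1 + x2 + x3

# The receiver's observation from the idealized modulo-sum MAC:
mod_sum = sum(w * x for w, x in zip(weights, sources)) % q

# Since the true weighted sum lies in [0, q), it is recovered exactly.
arithmetic_sum = mod_sum
```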


IEEE Transactions on Information Theory | 2017

Information-Theoretic Caching: The Multi-User Case

Sung Hoon Lim; Chien-Yi Wang; Michael Gastpar

In this paper, we consider a cache-aided network in which each user has an individual cache and, upon the users' requests, an update message is sent through a common link to all users. First, we formulate a general information-theoretic setting that represents the database as a discrete memoryless source and the users' requests as side information that is available everywhere except at the cache encoder. The decoders' objective is to recover a function of the source and the side information. By viewing cache-aided networks in terms of a general distributed source coding problem and through information-theoretic arguments, we present inner and outer bounds on the fundamental tradeoff between cache memory size and update rate. Then, we specialize our general inner and outer bounds to a specific model of content delivery networks: file selection networks, in which the database is a collection of independent equal-size files and each user requests one of the files independently. For file selection networks, we provide an outer bound and two inner bounds (for centralized and decentralized caching strategies). For the case when the user request information is uniformly distributed, we characterize the rate versus cache size tradeoff to within a multiplicative gap of 4. By further extending our arguments to the framework of Maddah-Ali and Niesen, we also establish a new outer bound and two new inner bounds, which are shown to recover the centralized and decentralized strategies previously established by Maddah-Ali and Niesen. Finally, in terms of the rate versus cache size tradeoff, we improve the previous multiplicative gap of 72 to 4.7 for the average case with uniform requests.
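The classic two-user, two-file instance of the Maddah-Ali and Niesen framework gives a feel for the coded update; a minimal sketch with hypothetical file contents (cache size of one file, each file split into halves):

```python
# Toy sketch of centralized coded caching for two users and two equal-size
# files, with cache size equal to one file; file contents are hypothetical.

def xor(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

A, B = b"AAAAaaaa", b"BBBBbbbb"          # database: two 8-byte files
A1, A2 = A[:4], A[4:]                    # split each file into two halves
B1, B2 = B[:4], B[4:]

cache1 = {"A1": A1, "B1": B1}            # user 1 caches the first halves
cache2 = {"A2": A2, "B2": B2}            # user 2 caches the second halves

# Requests: user 1 wants file A, user 2 wants file B.
# A single coded update of half a file serves both users at once.
update = xor(A2, B1)

file_at_user1 = cache1["A1"] + xor(update, cache1["B1"])   # recovers A
file_at_user2 = xor(update, cache2["A2"]) + cache2["B2"]   # recovers B
```

Uncoded delivery with the same caches would need a full file (A2 plus B1); the XOR halves that, which is the kind of gain the bounds above quantify.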

Collaboration

Chien-Yi Wang's top co-authors:

Michael Gastpar (École Polytechnique Fédérale de Lausanne)
Sang-Woon Jeon (Andong National University)
Sung Hoon Lim (École Polytechnique Fédérale de Lausanne)