Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Anoosheh Heidarzadeh is active.

Publication


Featured research published by Anoosheh Heidarzadeh.


Information Theory Workshop | 2010

Overlapped Chunked Network Coding

Anoosheh Heidarzadeh; Amir H. Banihashemi

Network coding is known to improve the throughput and the resilience to losses in most network scenarios. In a practical network scenario, however, accurate modeling of the traffic is often too complex or infeasible. The goal is thus to design codes that efficiently perform close to the capacity of any network (with arbitrary traffic). In this context, random linear network codes are known to be capacity-achieving while requiring a decoding complexity quadratic in the message length. Chunked Codes (CC) were proposed by Maymounkov et al. to improve the computational efficiency of random codes by partitioning the message into a number of non-overlapping chunks. CC can also be capacity-achieving and have a lower encoding/decoding complexity, at the expense of a slower convergence to the capacity. In this paper, we propose and analyze a generalized version of CC called Overlapped Chunked Codes (OCC), in which chunks are allowed to overlap. Our theoretical analysis and simulation results show that, compared to CC, OCC can approach the capacity faster while maintaining almost the same advantage in computational efficiency.
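
For a feel of the construction, here is a minimal Python sketch, an illustration rather than the authors' construction: the message is split into fixed-size chunks with a configurable overlap, and each coded packet is a random linear combination over GF(2) of a randomly chosen chunk. The chunk size, overlap, and binary field are illustrative choices, and per-chunk decoding (Gaussian elimination) is omitted.

```python
import random

def make_chunks(num_packets, chunk_size, overlap):
    """Partition packet indices 0..num_packets-1 into chunks of size
    `chunk_size` in which consecutive chunks share `overlap` indices;
    requires 0 <= overlap < chunk_size. overlap = 0 recovers plain
    (non-overlapping) chunked codes."""
    step = chunk_size - overlap
    chunks, start = [], 0
    while True:
        chunks.append(list(range(start, min(start + chunk_size, num_packets))))
        if start + chunk_size >= num_packets:
            return chunks
        start += step

def encode_once(packets, chunks):
    """Emit one coded packet: pick a chunk uniformly at random and XOR a
    random nonempty subset of its packets (a random linear combination
    of the chunk over GF(2))."""
    chunk = random.choice(chunks)
    coeffs = [random.randint(0, 1) for _ in chunk]
    if not any(coeffs):
        coeffs[random.randrange(len(coeffs))] = 1  # skip the useless all-zero combination
    coded = 0
    for bit, idx in zip(coeffs, chunk):
        if bit:
            coded ^= packets[idx]
    return chunk, coeffs, coded

# toy run: 12 packets, chunks of size 4 that overlap in 2 packets
packets = [random.getrandbits(32) for _ in range(12)]
chunks = make_chunks(len(packets), chunk_size=4, overlap=2)
print(chunks)
print(encode_once(packets, chunks))
```

Setting overlap = 0 in this sketch gives the non-overlapping chunking of CC that the paper generalizes.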


IEEE Transactions on Information Theory | 2012

Density Evolution Analysis of Node-Based Verification-Based Algorithms in Compressed Sensing

Yaser Eftekhari; Anoosheh Heidarzadeh; Amir H. Banihashemi; Ioannis Lambadaris

In this paper, we present a new approach for the analysis of iterative node-based verification-based (NB-VB) recovery algorithms in the context of compressed sensing. These algorithms are particularly interesting due to their low complexity (linear in the signal dimension n). The asymptotic analysis predicts the fraction of unverified signal elements at each iteration l in the asymptotic regime where n→∞. The analysis is similar in nature to the well-known density evolution technique commonly used to analyze iterative decoding algorithms. To perform the analysis, a message-passing interpretation of NB-VB algorithms is provided. This interpretation lacks the extrinsic nature of standard message-passing algorithms to which density evolution is usually applied. This requires a number of nontrivial modifications in the analysis. The analysis tracks the average performance of the recovery algorithms over the ensembles of input signals and sensing matrices as a function of l. Concentration results are devised to demonstrate that the performance of the recovery algorithms applied to any choice of the input signal over any realization of the sensing matrix follows the deterministic results of the analysis closely. Simulation results are also provided which demonstrate that the proposed asymptotic analysis matches the performance of recovery algorithms for large but finite values of n. Compared to the existing technique for the analysis of NB-VB algorithms, which is based on numerically solving a large system of coupled differential equations, the proposed method is more accurate and simpler to implement.
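
The verification idea itself is easy to make concrete. The toy Python sketch below applies the two classical verification rules on a sparse sensing graph: a measurement whose residual is zero verifies all of its unresolved neighbors as zero, and a measurement with exactly one unresolved neighbor pins that neighbor to its residual. It is a generic verification-based decoder under the usual assumption that nonzero signal values do not cancel (almost surely true for continuous-valued signals, and guaranteed here by using positive integer entries); it is not the specific NB-VB algorithms or the density evolution analysis of the paper.

```python
import random

def vb_recover(supports, y, n, max_iters=100):
    """Toy verification-based (VB) recovery.
    supports[i]: indices of the signal entries measured by measurement i,
    y[i]: the corresponding (noiseless) sum, n: signal dimension.
    Returns the estimate and the per-entry verification flags."""
    x_hat = [0] * n
    verified = [False] * n
    for _ in range(max_iters):
        progress = False
        for support, value in zip(supports, y):
            unresolved = [j for j in support if not verified[j]]
            if not unresolved:
                continue
            residual = value - sum(x_hat[j] for j in support if verified[j])
            if residual == 0:
                # zero-check rule: all unresolved neighbors must be zero
                for j in unresolved:
                    x_hat[j], verified[j] = 0, True
                progress = True
            elif len(unresolved) == 1:
                # degree-one rule: the lone unresolved neighbor equals the residual
                j = unresolved[0]
                x_hat[j], verified[j] = residual, True
                progress = True
        if not progress:
            break
    return x_hat, verified

# toy run: length-40 signal with 4 nonzeros, 25 measurements of degree 5
random.seed(0)
n, k, m, d = 40, 4, 25, 5
x = [0] * n
for j in random.sample(range(n), k):
    x[j] = random.randint(1, 1000)
supports = [random.sample(range(n), d) for _ in range(m)]
y = [sum(x[j] for j in s) for s in supports]
x_hat, verified = vb_recover(supports, y, n)
print(sum(verified), "of", n, "entries verified; exact recovery:", x_hat == x)
```

The quantity tracked by the paper's analysis, the fraction of unverified signal elements per iteration, would correspond here to the value of 1 - sum(verified)/n after each outer sweep.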


Allerton Conference on Communication, Control, and Computing | 2015

Cooperative data exchange with unreliable clients

Anoosheh Heidarzadeh; Alex Sprintson

Consider a set of clients in a broadcast network, each of which holds a subset of packets in the ground set X. In the (coded) cooperative data exchange problem, the clients need to recover all packets in X by exchanging coded packets over a lossless broadcast channel. Several previous works analyzed this problem under the assumption that each client initially holds a random subset of packets in X. In this paper, we consider a generalization of this problem for settings in which an unknown subset of clients of a given size is unreliable, and the packet transmissions of these clients are subject to arbitrary erasures. For the special case of one unreliable client, we derive a closed-form expression for the minimum number of transmissions required for each reliable client to obtain all packets held by other reliable clients (with probability approaching 1 as the number of packets tends to infinity). Furthermore, for the cases with more than one unreliable client, we provide an approximate solution in which the number of transmissions per packet is within an arbitrarily small additive gap of the optimal value.


International Symposium on Information Theory | 2016

Cooperative data exchange with priority classes

Anoosheh Heidarzadeh; Muxi Yan; Alex Sprintson

This paper considers the problem of cooperative data exchange with different client priority classes. In this problem, each client initially knows a subset of packets in the ground set X of size K, and all clients wish to learn all packets in X. The clients exchange packets by broadcasting coded combinations of their packets. The primary objective is to satisfy all high-priority clients in the first round of transmissions with minimum sum-rate, and the secondary objective is to satisfy low-priority clients in the second round of transmissions with minimum sum-rate, subject to minimizing the sum-rate in the first round. For any arbitrary problem instance, we provide a linear programming-based approach to find the minimum sum-rate in each round. Moreover, for the case in which the packets are randomly distributed among clients, we derive a closed-form expression for the minimum sum-rate in each round, which holds with probability approaching 1 as K tends to infinity.
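
The linear-programming approach is easy to sketch for the plain single-round version of the problem (all clients in one priority class): minimize the total rate subject to the standard cut-set constraints. The Python sketch below, using scipy, is a generic illustration of that LP on an invented three-client instance; it does not model the two-round priority structure or reproduce the closed-form results of the paper.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

def min_sum_rate(has, num_packets):
    """Minimum sum-rate for single-round, fractional-rate cooperative data
    exchange via the standard cut-set LP: for every nonempty proper subset S
    of clients, the clients outside S must send at least as many coded
    symbols as there are packets that no client in S holds."""
    n = len(has)
    ground = set(range(num_packets))
    A_ub, b_ub = [], []
    for size in range(1, n):
        for S in combinations(range(n), size):
            missing = ground - set().union(*(has[i] for i in S))
            # constraint: -sum_{i not in S} r_i <= -|missing|
            A_ub.append([0.0 if i in S else -1.0 for i in range(n)])
            b_ub.append(-float(len(missing)))
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
    return res.x, res.fun

# toy run: 3 clients holding subsets of 4 packets; client i transmits at rate rates[i]
has = [{0, 1}, {1, 2}, {0, 2, 3}]
rates, total = min_sum_rate(has, 4)
print(rates, total)  # the optimal sum-rate for this instance is 2.5
```

In the two-round setting of the paper, one would solve an LP of this kind for the high-priority clients' first-round requirements and then a second LP for the low-priority clients with the first-round optimum fixed; the sketch above only shows the single-round building block.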


International Symposium on Information Theory | 2011

Analysis of overlapped chunked codes with small chunks over line networks

Anoosheh Heidarzadeh; Amir H. Banihashemi

To lower the complexity of network codes over packet line networks with arbitrary schedules, chunked codes (CC) and overlapped chunked codes (OCC) were proposed in earlier works. These codes have previously been analyzed for relatively large chunks. In this paper, we prove that for smaller chunks, CC and OCC asymptotically approach the capacity with an arbitrarily small but non-zero constant gap. We also show that, unlike the case for large chunks, the larger the overlap size, the better the tradeoff between the speed of convergence and the message or packet error rate. This implies that OCC are superior to CC for shorter chunks. Simulations consistent with the theoretical results are also presented, suggesting great potential for the application of OCC to multimedia transmission over packet networks.


International Symposium on Information Theory | 2017

An algebraic-combinatorial proof technique for the GM-MDS conjecture

Anoosheh Heidarzadeh; Alex Sprintson

This paper considers the problem of designing maximum distance separable (MDS) codes over small fields with constraints on the support of their generator matrices. For any given m × n binary matrix M, the GM-MDS conjecture, due to Dau et al., states that if M satisfies the so-called MDS condition, then for any field F of size q ≥ n + m − 1, there exists an [n, m]_q MDS code whose generator matrix G, with entries in F, fits M (i.e., M is the support matrix of G). Despite all the attempts by the coding theory community, this conjecture remains open in general. It was shown, independently by Yan et al. and Dau et al., that the GM-MDS conjecture holds if the following conjecture, referred to as the TM-MDS conjecture, holds: if M satisfies the MDS condition, then the determinant of a transformation matrix T, such that TV fits M, is not identically zero, where V is a Vandermonde matrix with distinct parameters. In this work, we generalize the TM-MDS conjecture and present an algebraic-combinatorial approach based on polynomial-degree reduction for proving this conjecture. The strength of our proof technique lies primarily in reducing the inherent combinatorics in the proof. We demonstrate the strength of our technique by proving the TM-MDS conjecture for the cases where the number of rows (m) of M is upper bounded by 5. For this class of special cases of M, where the only additional constraint is on m, only cases with m < 4 had previously been proven theoretically, and the previously used proof techniques are not applicable to cases with m > 4.
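
As background, the MDS condition on M can be stated combinatorially (following Dau et al., as we read it) through the zero sets of its rows: writing Z_i for the set of columns in which row i of M is zero, it requires |Z_{i_1} ∩ … ∩ Z_{i_s}| ≤ m − s for every choice of s distinct rows. The brute-force Python check below only makes that condition concrete on a small example; it has nothing to do with the algebraic proof technique of the paper.

```python
from itertools import combinations

def satisfies_mds_condition(M):
    """Check the MDS condition on an m-by-n binary support matrix M:
    for every nonempty subset S of rows, the columns that are zero in all
    rows of S must number at most m - |S|."""
    m, n = len(M), len(M[0])
    zero_sets = [{j for j in range(n) if M[i][j] == 0} for i in range(m)]
    for size in range(1, m + 1):
        for S in combinations(range(m), size):
            common_zeros = set.intersection(*(zero_sets[i] for i in S))
            if len(common_zeros) > m - size:
                return False
    return True

# toy run: a 3-by-5 support matrix (1 = entry may be nonzero, 0 = forced zero)
M = [
    [1, 1, 0, 0, 1],
    [0, 1, 1, 1, 1],
    [1, 0, 1, 1, 0],
]
print(satisfies_mds_condition(M))  # True for this example
```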


International Symposium on Information Theory | 2012

How fast can dense codes achieve the min-cut capacity of line networks?

Anoosheh Heidarzadeh; Amir H. Banihashemi

In this paper, we study the coding delay and the average coding delay of random linear network codes (dense codes) over line networks with deterministic regular and Poisson transmission schedules. We consider both lossless networks and networks with Bernoulli losses. The upper bounds derived in this paper, which are in some cases more general, and in other cases tighter, than the existing bounds, provide a clearer picture of the speed of convergence of dense codes to the min-cut capacity of line networks.
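
To make the setting concrete, the Python sketch below runs a small Monte Carlo simulation of dense random linear coding over GF(2) on a two-hop line network with a regular schedule (one transmission per node per slot) and independent Bernoulli erasures, counting the slots until the sink can decode. It is an illustration only; the topology, loss probability, binary field, and re-encoding rule are assumptions made for the example, not the schedules or bounds analyzed in the paper.

```python
import random

class Gf2Basis:
    """Incremental GF(2) basis; coded packets are represented only by their
    coefficient vectors over the k source packets, stored as bitmask integers."""
    def __init__(self):
        self.pivots = {}  # leading-bit position -> reduced basis vector

    def insert(self, vec):
        while vec:
            lead = vec.bit_length() - 1
            if lead not in self.pivots:
                self.pivots[lead] = vec
                return
            vec ^= self.pivots[lead]  # reduce by the pivot with the same leading bit

    def rank(self):
        return len(self.pivots)

def random_combination(vectors):
    """Dense re-encoding: XOR each stored vector in with probability 1/2."""
    out = 0
    for v in vectors:
        if random.random() < 0.5:
            out ^= v
    return out

def delay_two_hop(k=64, loss=0.2):
    """One run of source -> relay -> sink with one transmission per node per
    slot and independent Bernoulli(loss) erasures per link; returns the slot
    at which the sink's received combinations reach rank k."""
    source = [1 << i for i in range(k)]  # unit coefficient vectors
    relay, sink, slot = [], Gf2Basis(), 0
    while sink.rank() < k:
        slot += 1
        pkt = random_combination(source)  # source sends a dense combination
        if pkt and random.random() > loss:
            relay.append(pkt)
        if relay:  # relay re-encodes whatever it has received so far
            pkt = random_combination(relay)
            if pkt and random.random() > loss:
                sink.insert(pkt)
    return slot

random.seed(3)
runs = [delay_two_hop(k=64, loss=0.2) for _ in range(10)]
print("average decoding delay over 10 runs:", sum(runs) / len(runs), "slots")
```

Plotting such empirical delays against the paper's upper bounds as k grows is a natural way to visualize the speed of convergence to the min-cut capacity.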


arXiv: Information Theory | 2012

Coding delay analysis of chunked codes over line networks

Anoosheh Heidarzadeh; Amir H. Banihashemi

In this paper, we analyze the coding delay and the average coding delay of chunked network codes (CC) over line networks with Bernoulli losses and deterministic regular or Poisson transmissions. Chunked codes are an attractive alternative to random linear network codes due to their lower complexity. Our results, which include upper bounds on the delay and the average delay, are the first of their kind for CC over networks with such probabilistic traffic. These results demonstrate that a stand-alone CC or a precoded CC provides a better tradeoff between the computational complexity and the convergence speed to the network capacity over probabilistic traffic than over arbitrary deterministic traffic; the performance of CC over the latter has already been studied in the literature.


Information Theory and Applications | 2016

Optimal exchange of data over broadcast networks with adversaries

Anoosheh Heidarzadeh; Alex Sprintson

In the cooperative data exchange problem, a set of clients share a lossless broadcast channel. Each client initially has a subset of packets in the ground set X, and wishes to learn all packets in X. The clients exchange their packets with each other by broadcasting coded or uncoded packets. In this paper, we consider a generalization of this problem for settings in which an unknown subset of clients of bounded size is adversarial. The adversarial clients can introduce erasures or errors in their packet transmissions in an arbitrary manner. The problem is to find the minimum total number of transmissions required such that, regardless of the configuration of adversarial clients, all non-adversarial clients can learn the maximally recoverable subset of packets in X. For arbitrary problem instances (i.e., arbitrary sets of packets available at the clients), this problem is NP-hard. In this work, focusing on settings where the packets are distributed randomly among the clients, we propose a linear-time algorithm that solves (with high probability) the special case of the problem with one adversarial client. This result can also be extended to more general cases with an arbitrary number of adversarial clients.


International Symposium on Information Theory | 2017

Successive local and successive global omniscience

Anoosheh Heidarzadeh; Alex Sprintson

This paper considers two generalizations of the cooperative data exchange problem, referred to as successive local omniscience (SLO) and successive global omniscience (SGO). The users are divided into ℓ nested sub-groups. Each user initially knows a subset of packets in a ground set X of size k, and all users wish to learn all packets in X. The users exchange their packets by broadcasting coded or uncoded packets. In SLO or SGO, in the lth (1 ≤ l ≤ ℓ) round of transmissions, the lth smallest sub-group of users needs to learn all packets its users collectively hold or all packets in X, respectively. The problem is to find the minimum sum-rate (i.e., the total transmission rate by all users) for each round, subject to minimizing the sum-rate for the previous round. To solve this problem, we use a linear-programming approach. For the cases in which the packets are randomly distributed among users, we construct a system of linear equations whose solution characterizes the minimum sum-rate for each round with high probability as k tends to infinity. Moreover, for the special case of two nested groups, we derive closed-form expressions, which hold with high probability as k tends to infinity, for the minimum sum-rate for each round.

Collaboration


Dive into Anoosheh Heidarzadeh's collaborations.

Top Co-Authors

Salim El Rouayheb

Illinois Institute of Technology

Tracey Ho

California Institute of Technology
