Publication


Featured research published by Satyajit Thakor.


International Symposium on Information Theory | 2009

Network coding capacity: A functional dependence bound

Satyajit Thakor; Alex J. Grant; Terence Chan

Explicit characterization and computation of the multi-source network coding capacity region (or even bounds on it) is a long-standing open problem. In fact, finding the capacity region requires determination of the set of all entropic vectors Γ*, which is known to be an extremely hard problem. On the other hand, calculating the explicitly known linear programming bound is very hard in practice due to an exponential growth in complexity as a function of network size. We give a new, easily computable outer bound based on a characterization of all functional dependencies in networks. We also show that the proposed bound is tighter than some known bounds.
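
To give a rough sense of the functional dependence reasoning the bound is built on, the sketch below computes a functional-dependence closure on a toy coded network: starting from a candidate set of variables, it repeatedly adds any variable that is a deterministic function of variables already in the set. This is only an illustration of the underlying idea, not the authors' algorithm; the variable names and the example dependences are made up.

```python
# Illustrative sketch only: checks which variables a candidate set
# functionally determines in a toy coded network. Variable names and
# the example dependences are hypothetical.

def fd_closure(known, dependences):
    """Return all variables implied by `known`, where `dependences`
    maps each variable to the set of variables it is a deterministic
    function of (sources map to the empty set)."""
    closure = set(known)
    changed = True
    while changed:
        changed = False
        for var, inputs in dependences.items():
            if var not in closure and inputs and inputs <= closure:
                closure.add(var)
                changed = True
    return closure

# Toy butterfly-like example: sources a, b; coded edge messages e1..e3.
dependences = {
    "a": set(), "b": set(),          # sources: not functions of anything
    "e1": {"a"}, "e2": {"b"},
    "e3": {"e1", "e2"},              # coded (e.g. XOR) message
    "sink1": {"e2", "e3"},           # sink output recoverable from e2, e3
}

# {"e1", "e2"} functionally determines e3 and sink1 but not the sources.
print(fd_closure({"e1", "e2"}, dependences))
```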


2011 International Symposium on Network Coding (NetCod) | 2011

On Complexity Reduction of the LP Bound Computation and Related Problems

Satyajit Thakor; Alex J. Grant; Terence Chan

Computing the LP bound for network coding capacity and proving a basic information inequality are linear optimization problems. The numbers of dimensions and constraints of these problems increase exponentially with the number of random variables involved. First, generating the exponentially large constraint set exhausts computational memory as the number of random variables grows. Second, the well-known simplex algorithm for solving linear programming problems has exponential worst-case complexity in the problem size, making it doubly exponential in the number of random variables. In this correspondence, we focus on generating a set of constraints of significantly reduced size that nevertheless characterizes the same feasible region for these optimization problems. As a result, it is now possible to produce constraint sets for problems with a larger number of random variables, which was previously impractical due to limited memory resources. Moreover, the reduction in problem size also means the problems can be solved faster.
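
The exponential growth referred to above is easy to quantify: the entropy vector of n random variables has 2^n − 1 coordinates, and the standard (unreduced) elemental Shannon-type inequalities number n + C(n,2)·2^(n−2). The sketch below tabulates these counts; it uses the standard formulas only and says nothing about the reduced constraint set proposed in the paper.

```python
from math import comb

def lp_bound_sizes(n):
    """Dimension of the entropy space and count of elemental
    Shannon-type inequalities for n random variables."""
    dimension = 2**n - 1                      # one coordinate per non-empty subset
    elemental = n + comb(n, 2) * 2**(n - 2)   # H(Xi|rest)>=0 and I(Xi;Xj|XK)>=0
    return dimension, elemental

for n in range(2, 11):
    dim, elem = lp_bound_sizes(n)
    print(f"n={n:2d}  dimension={dim:5d}  elemental inequalities={elem:6d}")
```

Already at n = 10 there are more than ten thousand elemental inequalities over a 1023-dimensional space, which is why reducing the constraint set matters in practice.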


Information Theory Workshop | 2013

Characterising correlation via entropy functions

Satyajit Thakor; Terence Chan; Alexander James Grant

Characterising the capacity region for a network can be extremely difficult. Even with independent sources, determining the capacity region can be as hard as the open problem of characterising all information inequalities. The majority of computable outer bounds in the literature are relaxations of the linear programming bound, which involves entropy functions of random variables related to the sources and link messages. When sources are not independent, the problem is even more complicated. Extension of linear programming bounds to networks with correlated sources is largely open. Source dependence is usually specified via a joint probability distribution, and one of the main challenges in extending linear programming bounds is the difficulty (or impossibility) of characterising arbitrary dependencies via entropy functions. This paper tackles the problem by answering the question of how well entropy functions can characterise correlation among sources. We show that by using carefully chosen auxiliary random variables, the characterisation can be fairly “accurate”.
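
To make "entropy functions of random variables" concrete, the sketch below computes the entropy function (the entropic vector) of a small joint distribution, i.e. H(X_S) for every non-empty subset S. The two-source binary distribution used here is an assumption chosen purely for illustration.

```python
from itertools import combinations
from math import log2

def entropy_function(pmf, variables):
    """Return h(S) = H(X_S) for every non-empty subset S of `variables`,
    where `pmf` maps full outcomes (tuples ordered like `variables`) to
    probabilities."""
    h = {}
    for r in range(1, len(variables) + 1):
        for subset in combinations(range(len(variables)), r):
            marginal = {}
            for outcome, p in pmf.items():
                key = tuple(outcome[i] for i in subset)
                marginal[key] = marginal.get(key, 0.0) + p
            h[tuple(variables[i] for i in subset)] = -sum(
                p * log2(p) for p in marginal.values() if p > 0
            )
    return h

# Two correlated binary sources (illustrative distribution).
pmf = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
for subset, value in entropy_function(pmf, ["X", "Y"]).items():
    print(subset, round(value, 4))
```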


Australian Communications Theory Workshop | 2011

Bounds for network information flow with correlated sources

Satyajit Thakor; Terence Chan; Alex J. Grant

In [1], the authors derived an outer bound, called the functional dependence bound, for network information flow with independent sources. In this work, we derive outer bounds for network information flow with correlated sources and establish that the functional dependence bound is an outer bound on the achievable region for networks with correlated sources. We also show that the bounds are loose and can be tightened by introducing auxiliary random variables describing structural correlation between the source random variables. Finally, we discuss the important practical problem of constructing such auxiliary random variables from given correlated source random variables.
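
A toy example of an auxiliary random variable capturing structural correlation (this specific construction is only an illustration, not the construction discussed in the paper): let Z, A, B be independent fair bits and set X = (Z, A), Y = (Z, B), so the two sources share the common part Z. The sketch below verifies numerically that I(X; Y) = H(Z), i.e. the auxiliary variable Z accounts exactly for the dependence between the sources.

```python
from itertools import product
from math import log2

def H(pmf):
    """Shannon entropy (bits) of a distribution {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

# Hypothetical structural correlation: Z, A, B are independent fair bits,
# X = (Z, A) and Y = (Z, B), so X and Y share the common part Z.
joint = {}                      # joint pmf of (X, Y)
for z, a, b in product((0, 1), repeat=3):
    joint[((z, a), (z, b))] = joint.get(((z, a), (z, b)), 0.0) + 0.125

px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

# I(X;Y) = H(X) + H(Y) - H(X,Y) equals H(Z) = 1 bit: the auxiliary
# variable Z captures the dependence between the sources exactly.
print("I(X;Y) =", H(px) + H(py) - H(joint), "bits")
```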


International Symposium on Information Theory | 2013

Symmetry in distributed storage systems

Satyajit Thakor; Terence Chan; Kenneth W. Shum

The max-flow outer bound is achievable by regenerating codes for functional-repair distributed storage systems. However, the capacity of exact-repair distributed storage systems remains an open problem. In this paper, the linear programming bound for exact-repair distributed storage systems is formulated. A notion of symmetrical sets for a set of random variables is given, and equalities of joint entropies for certain subsets of random variables in a symmetrical set are established. A concatenation coding scheme for exact-repair distributed storage systems is proposed, and it is shown to be sufficient to achieve any admissible rate for any exact-repair distributed storage system. Equalities of certain joint entropies of random variables induced by the concatenation scheme are shown. These equalities of joint entropies are new tools to simplify the linear programming bound and to obtain stronger converse results for exact-repair distributed storage systems.
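
For context on the max-flow outer bound mentioned above: for an (n, k, d) regenerating code with per-node storage α and per-helper repair bandwidth β, the functional-repair cut-set bound states that the stored file size B satisfies B ≤ Σ_{i=0}^{k−1} min(α, (d − i)β). The sketch below simply evaluates this bound; the parameter values are illustrative and the function name is ours.

```python
def functional_repair_bound(k, d, alpha, beta):
    """Cut-set (max-flow) outer bound on the file size B for an
    (n, k, d) functional-repair regenerating code with per-node
    storage `alpha` and per-helper repair bandwidth `beta`."""
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# Illustrative parameters: k = 3, d = 4, alpha = 2, beta = 1.
# B <= min(2,4) + min(2,3) + min(2,2) = 6 units.
print(functional_repair_bound(k=3, d=4, alpha=2, beta=1))
```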


2012 International Symposium on Network Coding (NetCod) | 2012

Reduced functional dependence graphs and their applications

Xiaoli Xu; Satyajit Thakor; Yong Liang Guan

Functional dependence graphs (FDGs) are an important class of directed graphs that capture the functional dependence relationships among a set of random variables. FDGs are frequently used in characterizing and calculating network coding capacity bounds. However, the order of an FDG is usually much larger than that of the original network, and the complexity of computing bounds grows exponentially with the order of the FDG. In this paper, we introduce graph pre-processing techniques that deliver reduced FDGs. These reduced FDGs are obtained from the original FDG by removing nodes that are not “essential”. We show that the reduced FDGs give the same capacity region/bounds as those obtained using the original FDGs, but require much less computation. The application of reduced FDGs to the algebraic formulation of scalar linear network coding is also discussed.
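
As a minimal illustration of how an FDG can be represented and pre-processed (the example graph and node names are made up, and the pruning rule shown, dropping non-sink nodes that nothing else depends on, is only a simple step in the spirit of removing non-essential nodes, not necessarily the reduction proposed in the paper):

```python
# Illustrative sketch: an FDG stored as parent sets, plus one very simple
# pruning pass. Node names are hypothetical.

def prune_unused(parents, sinks):
    """`parents` maps each node to the set of nodes it depends on.
    Repeatedly remove nodes that are not sinks and have no dependants."""
    graph = {v: set(ps) for v, ps in parents.items()}
    while True:
        used = set(sinks)
        for ps in graph.values():
            used |= ps
        removable = [v for v in graph if v not in used]
        if not removable:
            return graph
        for v in removable:
            del graph[v]

fdg = {
    "s1": set(), "s2": set(),
    "e1": {"s1"}, "e2": {"s2"}, "e3": {"e1", "e2"},
    "e4": {"e1"},                      # dead-end edge no sink uses
    "t1": {"e2", "e3"},
}
reduced = prune_unused(fdg, sinks={"t1"})
print(sorted(reduced))                 # 'e4' is gone; remaining dependences intact
```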


IEEE Transactions on Information Theory | 2016

Cut-Set Bounds on Network Information Flow

Satyajit Thakor; Alex J. Grant; Terence Chan

Explicit characterization of the capacity region of communication networks is a long-standing problem. While it is known that network coding can outperform routing and replication, the set of feasible rates is not known in general. Characterizing the network coding capacity region requires the determination of the set of all entropic vectors. Furthermore, computing the explicitly known linear programming bound is infeasible in practice due to an exponential growth in complexity as a function of network size. This paper focuses on the fundamental problems of characterization and computation of outer bounds for multi-source multi-sink networks. Starting from the known local functional dependence induced by the communication network, we introduce the notion of irreducible sets, which characterize implied functional dependence. We provide recursions for the computation of all maximal irreducible sets. These sets act as information-theoretic bottlenecks and provide an easily computable outer bound for networks with correlated sources. We extend the notion of irreducible sets (and the resulting outer bound) to networks with independent sources. We compare our bounds with existing bounds in the literature. We find that our new bounds are the best among the known graph-theoretic bounds for networks with correlated sources and for networks with independent sources.
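
As a rough, brute-force illustration of the idea (assuming the reading that a set is irreducible when no variable in it is functionally implied by the remaining variables; this interpretation and the toy dependences are ours, and the paper's recursions are certainly more efficient than the exhaustive search below):

```python
from itertools import combinations

# Brute-force illustration only, not the recursions from the paper.

def fd_closure(known, dependences):
    """Variables functionally implied by `known` under `dependences`."""
    closure = set(known)
    changed = True
    while changed:
        changed = False
        for var, inputs in dependences.items():
            if var not in closure and inputs and inputs <= closure:
                closure.add(var)
                changed = True
    return closure

def maximal_irreducible_sets(dependences):
    """List maximal sets in which no variable is implied by the others."""
    variables = list(dependences)
    irreducible = []
    for r in range(1, len(variables) + 1):
        for cand in combinations(variables, r):
            cand = set(cand)
            if all(v not in fd_closure(cand - {v}, dependences) for v in cand):
                irreducible.append(cand)
    return [s for s in irreducible if not any(s < t for t in irreducible)]

# Toy dependences: sources a, b and coded messages e1..e3.
dependences = {"a": set(), "b": set(),
               "e1": {"a"}, "e2": {"b"}, "e3": {"e1", "e2"}}
print(maximal_irreducible_sets(dependences))
```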


Information Theory Workshop | 2013

On the mutual information between random variables in networks

Xiaoli Xu; Satyajit Thakor; Yong Liang Guan

This paper presents a lower bound on the mutual information between any two sets of source/edge random variables in a general multi-source multi-sink network. This bound is useful for deriving a new class of improved information-theoretic upper bounds on the network coding capacity from existing edge-cut based bounds. Using this lower bound, a refined functional dependence bound is derived from the functional dependence bound. It is demonstrated that the refined versions of the existing edge-cut based outer bounds obtained using the mutual information lower bound are stronger.
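
Concretely, the mutual information between two sets of random variables can be computed from joint entropies as I(A; B) = H(A) + H(B) − H(A, B). A minimal sketch with an illustrative three-variable distribution (the distribution is ours, and this is just the definition, not the lower bound derived in the paper):

```python
from math import log2

def H(pmf, coords):
    """Entropy (bits) of the marginal on coordinate indices `coords`,
    where `pmf` maps full outcomes (tuples) to probabilities."""
    marginal = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in coords)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marginal.values() if p > 0)

def mutual_information(pmf, group_a, group_b):
    """I(A;B) = H(A) + H(B) - H(A,B) between two groups of coordinates."""
    return H(pmf, group_a) + H(pmf, group_b) - H(pmf, group_a + group_b)

# Illustrative joint pmf on three binary variables (X0, X1, X2 = X0 xor X1).
pmf = {(x0, x1, x0 ^ x1): 0.25 for x0 in (0, 1) for x1 in (0, 1)}
print(mutual_information(pmf, group_a=[0], group_b=[1, 2]))   # 1.0 bit
```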


International Symposium on Information Theory | 2017

A minimal set of Shannon-type inequalities for functional dependence structures

Satyajit Thakor; Terence Chan; Alex J. Grant

The minimal set of Shannon-type inequalities (referred to as elemental inequalities) plays a central role in determining whether a given inequality is of Shannon type. Often, there arises a situation where one needs to check whether a given inequality is a constrained Shannon-type inequality. Another important application of elemental inequalities is to formulate and compute the Shannon outer bound for multi-source multi-sink network coding capacity. Under this formulation, it is the region of feasible source rates subject to the elemental inequalities and network coding constraints that is of interest. Hence it is of fundamental interest to identify the redundancies induced amongst the elemental inequalities by a given set of functional dependence constraints. In this paper, we characterize a minimal set of Shannon-type inequalities when functional dependence constraints are present.
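
For reference, the unconstrained elemental inequalities for n random variables are H(X_i | X_rest) ≥ 0 and I(X_i; X_j | X_K) ≥ 0 for all K not containing i, j; under functional dependence constraints many of them become redundant, which is what the paper characterizes. The sketch below merely enumerates the unconstrained set (the string formatting is ours):

```python
from itertools import combinations

def _vars(indices):
    """Format a set of variable indices, e.g. [1, 2] -> 'X1,X2'."""
    return ",".join(f"X{k}" for k in sorted(indices)) or "empty"

def elemental_inequalities(n):
    """Yield the elemental Shannon-type inequalities for n random variables:
    H(Xi | rest) >= 0 and I(Xi; Xj | XK) >= 0 for all K not containing i, j."""
    idx = range(n)
    for i in idx:
        rest = [j for j in idx if j != i]
        yield f"H(X{i} | {_vars(rest)}) >= 0"
    for i, j in combinations(idx, 2):
        others = [k for k in idx if k not in (i, j)]
        for r in range(len(others) + 1):
            for K in combinations(others, r):
                yield f"I(X{i}; X{j} | {_vars(K)}) >= 0"

ineqs = list(elemental_inequalities(3))
print(len(ineqs))        # 3 + C(3,2) * 2^(3-2) = 9
print("\n".join(ineqs))
```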


IEEE Transactions on Information Theory | 2017

Capacity Bounds for Networks With Correlated Sources and Characterisation of Distributions by Entropies

Satyajit Thakor; Terence Chan; Alex J. Grant

Characterising the capacity region for a network can be extremely difficult. Even with independent sources, determining the capacity region can be as hard as the open problem of characterising all information inequalities. The majority of computable outer bounds in the literature are relaxations of the linear programming bound, which involves entropy functions of random variables related to the sources and link messages. When sources are not independent, the problem is even more complicated. Extension of linear programming bounds to networks with correlated sources is largely open. Source dependence is usually specified through a joint probability distribution, and one of the main challenges in extending linear program bounds is the difficulty (or impossibility) of characterising arbitrary dependences via entropy functions. This paper tackles the problem by answering the question of how well entropy functions can characterise correlation among sources. We show that by using carefully chosen auxiliary random variables, the characterisation can be fairly “accurate”. Using such auxiliary random variables, we also give implicit and explicit outer bounds on the capacity of networks with correlated sources. The characterisation of correlation or joint distribution via Shannon entropy functions is also applicable to other information measures, such as Rényi entropy and Tsallis entropy.
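
Since the abstract notes that the characterisation also applies to other information measures, here is a small sketch of the Rényi and Tsallis entropies of a single distribution (standard textbook definitions; the example distribution is illustrative):

```python
from math import log2

def renyi_entropy(pmf, alpha):
    """Rényi entropy (bits): H_a(p) = log2(sum p_i^a) / (1 - a), a != 1."""
    assert alpha != 1
    return log2(sum(p ** alpha for p in pmf if p > 0)) / (1 - alpha)

def tsallis_entropy(pmf, q):
    """Tsallis entropy: S_q(p) = (1 - sum p_i^q) / (q - 1), q != 1."""
    assert q != 1
    return (1 - sum(p ** q for p in pmf if p > 0)) / (q - 1)

p = [0.5, 0.25, 0.25]              # illustrative distribution
print(renyi_entropy(p, alpha=2))   # collision entropy: -log2(0.375) ~ 1.415
print(tsallis_entropy(p, q=2))     # (1 - 0.375) / 1 = 0.625
```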

Collaboration


Satyajit Thakor's co-authors and their affiliations.

Top Co-Authors

Terence Chan
University of South Australia

Alex J. Grant
University of South Australia

Xiaoli Xu
Nanyang Technological University

Yong Liang Guan
Nanyang Technological University

Syed Abbas
Indian Institute of Technology Mandi

Sultan Alam
Indian Institute of Technology Mandi

Alexander James Grant
University of South Australia

Kenneth W. Shum
The Chinese University of Hong Kong