Chenyang Zhou
Arizona State University
Publications
Featured research published by Chenyang Zhou.
Critical Information Infrastructures Security | 2014
Anisha Mazumder; Chenyang Zhou; Arun Das; Arunabha Sen
A number of models have been proposed to analyze interdependent networks in recent years. However, most of these models are unable to capture the complex interdependencies between such networks. To overcome these limitations, we have recently proposed a new model. Utilizing this model, we provide techniques for progressive recovery from failure. The goal of the progressive recovery problem is to maximize the system utility over the entire duration of the recovery process. We show that the problem can be solved in polynomial time in some special cases, whereas for others it is NP-complete. We provide two approximation algorithms with performance bounds of 2 and 4, respectively, as well as an optimal solution utilizing Integer Linear Programming and a heuristic. We evaluate the efficacy of our heuristic with both synthetic data and real data collected from the Phoenix metropolitan area. The experiments show that our heuristic almost always produces near-optimal solutions.
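The progressive-recovery idea above can be illustrated with a toy greedy heuristic (our own sketch, not the paper's algorithm; the entity names and the `deps` dependency map are hypothetical): at each time step one failed entity is repaired, chosen to maximize the number of operational entities, and system utility is accumulated over the whole recovery horizon.

```python
# Hypothetical sketch of a greedy progressive-recovery heuristic (not the
# authors' algorithm): repair one failed entity per step, always the one
# whose restoration maximizes the operational count, and sum the utility
# (here: number of operational entities) over the recovery horizon.

def operational(repaired, deps):
    """An entity is operational if repaired and all its dependencies are."""
    ok = set()
    changed = True
    while changed:
        changed = False
        for e in repaired:
            if e not in ok and all(d in ok for d in deps.get(e, [])):
                ok.add(e)
                changed = True
    return ok

def greedy_recovery(failed, alive, deps, steps):
    repaired = set(alive)
    failed = set(failed)
    total_utility = 0
    for _ in range(steps):
        if failed:
            # pick the repair that maximizes the resulting operational count
            best = max(failed, key=lambda e: len(operational(repaired | {e}, deps)))
            failed.remove(best)
            repaired.add(best)
        total_utility += len(operational(repaired, deps))
    return total_utility
```

With the hypothetical chain `deps = {"load1": ["sub1"], "sub1": ["gen1"]}`, repairing `sub1` before `load1` yields higher cumulative utility, which is exactly the ordering the greedy rule picks.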
Conference on Computer Communications Workshops | 2015
Joydeep Banerjee; Arun Das; Chenyang Zhou; Anisha Mazumder; Arunabha Sen
The power grid and the communication network are highly interdependent on each other for their well-being. In recent times the research community has shown significant interest in modeling such interdependent networks and studying the impact of failures on them. Although a number of models have been proposed, many of them are simplistic in nature and fail to capture the complex interdependencies that exist between the entities of these networks. To overcome these limitations, an Implicative Interdependency Model that utilizes Boolean logic was recently proposed, and a number of problems were studied with it. In this paper we study the “entity hardening” problem, where by “entity hardening” we mean the ability of the network operator to ensure that an adversary (be it Nature or human) cannot take a network entity from the operative to the inoperative state. Given that the network operator, with a limited budget, can only harden k entities, the goal of the entity hardening problem is to identify the set of k entities whose hardening will ensure maximum benefit for the operator, i.e., maximally reduce the ability of the adversary to degrade the network. We classify the problem into four cases and show that it is solvable in polynomial time in the first case, whereas for the others it is NP-complete. We provide an inapproximability result for the second case, an approximation algorithm for the third case, and a heuristic for the fourth (general) case. We evaluate the efficacy of our heuristic using power and communication network data of Maricopa County, Arizona. The experiments show that our heuristic almost always produces near-optimal results.
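The Implicative Interdependency Model referenced above expresses each entity's survival as a Boolean formula over other entities. A minimal sketch of how failures cascade under such implications (our own illustration with invented entity names; the paper's model and hardening algorithms are more involved):

```python
# Illustrative sketch (ours, not the paper's code) of the Implicative
# Interdependency Model: each entity's survival condition is a Boolean
# formula in DNF over other entities, e.g. b1 <- a1 OR (a2 AND a3).
# Starting from an initially failed set, failures cascade to a fixpoint.

def cascade(initial_failed, dnf):
    """dnf maps an entity to a list of minterms (lists of entities);
    the entity stays alive if at least one minterm is fully alive.
    Entities absent from dnf have no dependencies."""
    failed = set(initial_failed)
    changed = True
    while changed:
        changed = False
        for entity, minterms in dnf.items():
            if entity not in failed and not any(
                all(e not in failed for e in term) for term in minterms
            ):
                failed.add(entity)
                changed = True
    return failed
```

Hardening an entity would amount to exempting it from the failure test above; the benefit of hardening a set of k entities is then the reduction in the final failed set.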
Networks | 2015
Arunabha Sen; Anisha Mazumder; Sujogya Banerjee; Arun Das; Chenyang Zhou; Shahrzad Shirazipourazad
Distributed storage of data files in different nodes of a network enhances its fault tolerance capability by offering protection against node and link failures. Reliability is often achieved through redundancy in one of two ways: (i) storage of multiple copies of the entire file at different locations (nodes), or (ii) storage of file segments (not entire files) at different node locations. In the (N, K) file distribution scheme, N file segments from a file F are created in such a way that it is possible to reconstruct the entire file by accessing any K ≤ N segments. For the reconstruction scheme to work, it is essential that the K segments of the file are stored in nodes that are connected in the network. However, in the event of node/link failures, the network might become disconnected, i.e., split into several connected components. We focus on node failures that are spatially correlated, or region based. Such failures are often encountered in disaster situations or natural calamities, where only the nodes in the disaster zone are affected. The first goal of this research is to design a least-cost file storage scheme ensuring that, no matter which region is destroyed, resulting in fragmentation of the network, a largest connected component of the residual network will have enough file segments with which to reconstruct the entire file. If the least cost to achieve this objective is within the allocated budget, the storage design will be all region fault-tolerant (ARFT). If the least cost exceeds the allocated budget, an ARFT file storage system design is impossible. The second goal of this research is to design file storage schemes that will be maximum region fault-tolerant within the allocated budget. The third goal of this research is to investigate the impact of the coding parameters N and K on the storage requirements for ensuring an all region or maximum region fault-tolerant design.
We provide approximation algorithms for these problems and evaluate their performance through simulation using two real networks, comparing their results to the optimal solutions obtained using an Integer Linear Program. The simulation results demonstrate that the approximation algorithms almost always produce near-optimal results in a fraction of the time needed to find the optimal solution.
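The (N, K) segment scheme described above can be sketched with a small Reed–Solomon-style construction over a prime field (an illustration under our own assumptions, not necessarily the coding scheme used in the paper): K data bytes become the coefficients of a degree-(K-1) polynomial, N evaluations of that polynomial become the segments, and any K segments recover the data.

```python
# Minimal (N, K) erasure-coding sketch in the Reed–Solomon style: a block
# of K data bytes defines a polynomial over GF(257); evaluating it at N
# distinct nonzero points yields N segments, any K of which reconstruct
# the original block by solving the resulting Vandermonde system.

P = 257  # prime field size (one more than a byte's range)

def encode(data, n):
    """data: list of K ints in [0, 256); returns n (x, y) segments."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(segments, k):
    """Recover the K coefficients from any k segments by Gauss–Jordan
    elimination over GF(P) on the Vandermonde matrix of the sample points."""
    pts = segments[:k]
    mat = [[pow(x, i, P) for i in range(k)] + [y] for x, y in pts]
    for col in range(k):
        piv = next(r for r in range(col, k) if mat[r][col] % P)
        mat[col], mat[piv] = mat[piv], mat[col]
        inv = pow(mat[col][col], P - 2, P)  # Fermat inverse in GF(P)
        mat[col] = [v * inv % P for v in mat[col]]
        for r in range(k):
            if r != col and mat[r][col]:
                f = mat[r][col]
                mat[r] = [(a - f * b) % P for a, b in zip(mat[r], mat[col])]
    return [row[k] for row in mat]
```

For example, encoding three bytes into five segments lets any three surviving segments, wherever they land after fragmentation, rebuild the block.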
High Performance Switching and Routing | 2014
Chenyang Zhou; Anisha Mazumder; Arunabha Sen; Martin Reisslein; Andréa W. Richa
Fiber-Wireless (FiWi) networks have received considerable attention in the research community in the last few years, as they offer an attractive way of integrating optical and wireless technology. As in every other type of network, routing plays a major role in FiWi networks. Accordingly, a number of routing algorithms for FiWi networks have been proposed. Most of these algorithms attempt to find the “shortest path” from the source to the destination. A recent paper proposed a novel path length metric, in which the contribution of a link towards the path length depends not only on that link but also on every other link that constitutes the path from the source to the destination. In this paper we address the problem of computing the shortest path under this path length metric. Moreover, we consider a variation of the metric and provide an algorithm to compute the shortest path under this variation as well. As multipath routing provides a number of advantages over single path routing, we also consider disjoint path routing with the new path length metric. We show that while the single path computation problem can be solved in polynomial time in both cases, the disjoint path computation problem is NP-complete. We provide an optimal solution for the NP-complete problem using integer linear programming, as well as two approximation algorithms with performance bounds of 4 and 2, respectively. In our experimental evaluation, the approximation algorithms produced near-optimal solutions in a fraction of a second.
2014 6th International Workshop on Reliable Networks Design and Modeling (RNDM) | 2014
Anisha Mazumder; Arun Das; Chenyang Zhou; Arunabha Sen
Two independent lines of research, (i) erasure code based file storage system design, and (ii) fault-tolerant network design for spatially correlated (or region-based) failures, have received considerable attention in the networking research community in recent times. A recently proposed (N, K)-coding based distributed file storage scheme ensures complete reconstruction of a file after network fragmentation due to any single region-based fault. For every region of the network, it stores K distinct file segments in one of the largest connected components that result from the fragmentation of the network due to the failure of that region. This distribution scheme provides an all-region fault-tolerant storage system, in the sense that no matter which region of the network fails, a largest connected component of the fragmented network will still have enough distinct file segments with which to reconstruct the file. However, the storage requirement and the associated cost for such an all-region fault-tolerant storage system may be quite high, and with a limited budget it may not be possible to realize one. We consider a budget-constrained distributed file system design problem and provide solutions that maximize the number of regions that can be made fault-tolerant within the specified budget. We show that the problem is NP-complete and provide an approximation algorithm for it. The performance of the approximation algorithm is evaluated through simulation on two real networks. The simulation results demonstrate that the worst-case experimental performance is significantly better than the worst-case theoretical bound. Moreover, the approximation algorithm almost always produces near-optimal solutions in a fraction of the time needed to find the optimal solution.
Military Communications Conference | 2016
Anisha Mazumder; Chenyang Zhou; Arun Das; Arunabha Sen
The relay node placement problem in wireless sensor networks has been studied extensively in the last few years. The goal of most of these problems is to place the fewest number of relay nodes in the deployment area so that the network formed by the sensor nodes and the relay nodes is connected. Most of these studies assume an unconstrained budget, in the sense that however many relay nodes are needed to make the network connected, they can be procured and deployed. In a fixed budget scenario, however, the expense involved in procuring the minimum number of relay nodes to make the network connected may exceed the budget. Although in this scenario one has to give up the idea of having a network connecting all the sensor nodes, one would still like a network with a high level of “connectedness”. In this paper we introduce two metrics for measuring the “connectedness” of a disconnected graph and study the problem of designing a network with maximal “connectedness” subject to a fixed budget constraint. We show that both versions of the problem are NP-complete and provide heuristics for their solution. We show that the problem is non-trivial even when the number of sensor nodes is as few as three. We evaluate the performance of the heuristics through simulation.
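For intuition, one plausible metric of “connectedness” for a disconnected graph (our own illustration; the paper defines its own two metrics) is the fraction of node pairs that remain mutually reachable:

```python
# Sketch of a "connectedness" metric for a possibly disconnected graph
# (our illustration, not one of the paper's two metrics): the fraction of
# node pairs that lie in the same connected component, computed by BFS.

def pair_connectivity(adj):
    """adj: vertex -> set of neighbors; returns a value in [0, 1]."""
    n = len(adj)
    total_pairs = n * (n - 1) // 2
    seen, reachable = set(), 0
    for v in adj:
        if v in seen:
            continue
        # BFS/DFS to collect the component containing v
        comp, frontier = {v}, [v]
        while frontier:
            u = frontier.pop()
            for w in adj[u]:
                if w not in comp:
                    comp.add(w)
                    frontier.append(w)
        seen |= comp
        reachable += len(comp) * (len(comp) - 1) // 2
    return reachable / total_pairs if total_pairs else 1.0
```

A connected graph scores 1; adding a relay that merges two components raises the score by the number of newly reachable pairs, which is the kind of marginal gain a budgeted placement heuristic can greedily chase.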
International Conference on Network of the Future | 2016
Arunabha Sen; Arun Das; Chenyang Zhou; Anisha Mazumder; Nathalie Mitton; Abdoul Aziz Mbacké
“Reader” and “Tag” type devices are utilized in Radio-Frequency IDentification (RFID) technology for the identification and tracking of objects. A tag can be “read” by a reader when the tag is within the reader's sensing range. However, when tags are present in the intersection of the sensing ranges of two or more readers, simultaneous activation of those readers may cause “reader collision”. To ensure collision-free reading, a scheduling scheme is needed that reads the tags in the shortest possible time. We study this scheduling problem in a stationary setting and the reader minimization problem in a mobile setting. We show that the optimal schedule construction problem is NP-complete, and we provide an approximation algorithm and techniques that we evaluate through simulation.
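The stationary scheduling problem above is naturally viewed as coloring a reader-conflict graph: readers whose sensing disks overlap cannot be activated in the same round. A hedged sketch (ours, not the paper's algorithm, assuming circular sensing ranges of a common radius):

```python
# Illustrative reduction (our sketch): two readers conflict if their
# sensing disks overlap, so a collision-free schedule is a proper coloring
# of the conflict graph -- each color class is one round in which all its
# readers can be activated together.

import math

def conflict_graph(readers, radius):
    """readers: dict name -> (x, y); edges join readers whose disks overlap."""
    adj = {r: set() for r in readers}
    names = list(readers)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if math.dist(readers[a], readers[b]) < 2 * radius:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def greedy_schedule(adj):
    """Greedy coloring, high-degree readers first: assign each reader the
    first round not used by a conflicting neighbor."""
    rounds = {}
    for r in sorted(adj, key=lambda v: -len(adj[v])):
        used = {rounds[n] for n in adj[r] if n in rounds}
        rounds[r] = next(c for c in range(len(adj)) if c not in used)
    return rounds
```

Greedy coloring gives a valid but not necessarily minimum-length schedule, which is consistent with the NP-completeness of optimal schedule construction noted above.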
International Conference on Communications | 2015
Ran Wang; Chenyang Zhou; Anisha Mazumder; Arun Das; Hal A. Kierstead; Arunabha Sen
A cellular network is often modeled as a graph, and the channel assignment problem is formulated as a coloring problem on that graph. Cellular graphs are used to model the hexagonal cell structure of a cellular network. Assuming a 2-band buffering system, where interference does not extend beyond two cells away from the call-originating cell, we study a version of the channel assignment problem in cellular graphs that has received only minimal attention. In this version, each node has a fixed set of frequency channels, of which only a subset may be available at a given time for communication (as the other channels may be busy). Under this assumption, we determine the size of the smallest set of free channels per node that guarantees that each node of the cellular graph can be assigned a channel (from its own set of free channels) that will be interference-free in a two-band buffering system. The mathematical abstraction of this problem is known as the Choice Number computation problem and is closely related to the List Coloring problem in graph theory. In this paper we establish lower and upper bounds on the distance-2 Choice Number of cellular graphs. In addition, we conduct extensive experimentation to study the impact of the number of free channels available at a node on the percentage of nodes in the network that can be assigned an interference-free channel in a two-band buffering system.
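The list-coloring question behind the Choice Number can be made concrete with a small backtracking search (our own sketch on an arbitrary graph; the paper derives analytical bounds for cellular graphs rather than running such a search):

```python
# Sketch (ours) of the list-coloring feasibility check behind the choice
# number: given each node's list of free channels and a distance-2
# interference constraint (2-band buffering), backtracking decides whether
# every node can get a channel from its own list that no node within two
# hops uses.

def dist2_neighbors(adj, v):
    """All vertices within two hops of v, excluding v itself."""
    near = set(adj[v])
    for u in adj[v]:
        near |= adj[u]
    near.discard(v)
    return near

def list_assign(adj, lists, order=None, assigned=None):
    """Return a feasible node -> channel map, or None if none exists."""
    assigned = assigned or {}
    order = order or list(adj)
    if len(assigned) == len(order):
        return assigned
    v = order[len(assigned)]
    blocked = {assigned[u] for u in dist2_neighbors(adj, v) if u in assigned}
    for ch in lists[v]:
        if ch not in blocked:
            result = list_assign(adj, lists, order, {**assigned, v: ch})
            if result:
                return result
    return None  # backtrack
```

The choice number question asks: how long must every node's list be so that `list_assign` succeeds no matter which channels appear in the lists?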
Critical Information Infrastructures Security | 2015
Joydeep Banerjee; Chenyang Zhou; Arun Das; Arunabha Sen
Critical infrastructures such as power and communication networks are highly interdependent on each other for their full functionality. Significant research has been pursued to model the interdependency and failure analysis of these interdependent networks. However, most of these models fail to capture the complex interdependencies that might actually exist between the infrastructures. The Implicative Interdependency Model, which utilizes Boolean logic to capture complex interdependencies, was recently proposed and overcomes the limitations of the existing models. A number of problems have been studied based on this model. In this paper we study the Robustness problem in interdependent power and communication networks. Robustness is defined with respect to two parameters, K ∈ ℤ⁺ ∪ {0} and ρ ∈ (0, 1]. We utilize the Implicative Interdependency Model to capture the complex interdependencies between the two networks. The problem is solved using an Integer Linear Program, and the solution is used to study the robustness of the interdependent power and communication network of Maricopa County, Arizona, USA.
International Conference on Social Computing | 2018
Arunabha Sen; Victoria Horan Goliber; Chenyang Zhou; Kaustav Basu
In multiple incidences of terrorist attacks across Europe in recent times, it has been observed that the perpetrators were in the suspect databases of the law enforcement authorities but were not under active surveillance at the time of the attack, due to resource limitations on the part of the authorities. As the suspect databases in various European countries are very large, and it takes a significant amount of technical and human resources to monitor a suspect, monitoring all the suspects in a database may be an impossible task. In this paper, we propose a scheme utilizing Identifying Codes that significantly reduces the resource requirement of law enforcement authorities while retaining the capability of uniquely identifying a suspect in case the suspect becomes active in planning a terrorist attack. The scheme relies on the assumption that, when an individual becomes active in planning a terrorist attack, his/her friends/associates will have some inkling of the individual's plan. Accordingly, even if the individual is not under active surveillance but the individual's friends/associates are, the individual planning the attack can be uniquely identified. We applied our technique to two terrorist networks: one involved in an attack in Paris and the other involved in the 9/11 attack. We show that, in the Paris network, if 5 of the 10 individuals had been monitored, the attackers most likely would have been exposed. If only 15 of the 37 individuals involved in the 9/11 attack had been under surveillance, specific individuals involved in the planning of the 9/11 attack would have been exposed.
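The Identifying Code property underlying the scheme can be checked directly: a monitored set works if every individual's closed neighborhood intersects it in a distinct, non-empty "signature", so an active individual is pinpointed by exactly which monitored friends notice something. A small verification sketch (ours, with toy vertex names, not the paper's construction for the Paris or 9/11 networks):

```python
# Sketch (ours) of the identifying-code check: a candidate monitored set
# `code` identifies every vertex v iff the signature N[v] & code (closed
# neighborhood intersected with the code) is non-empty and unique.

def is_identifying_code(adj, code):
    """adj: vertex -> set of neighbors; code: candidate monitored set."""
    signatures = {}
    for v in adj:
        sig = frozenset((set(adj[v]) | {v}) & set(code))
        if not sig or sig in signatures.values():
            return False  # v is invisible, or indistinguishable from another
        signatures[v] = sig
    return True
```

On the path a-b-c-d, monitoring {a, b, c} identifies everyone, while {a, c} fails because c and d produce the same signature {c}.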