Publication


Featured research published by Alexandros G. Dimakis.


IEEE Transactions on Information Theory | 2010

Network Coding for Distributed Storage Systems

Alexandros G. Dimakis; P. Brighten Godfrey; Yunnan Wu; Martin J. Wainwright; Kannan Ramchandran

Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.
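
For reference, the storage-bandwidth tradeoff described above is usually written in the following notation (supplied here for context, not quoted from the abstract): a file of M symbols is stored on n nodes holding α symbols each, and a replacement node contacts d surviving nodes, downloading β symbols from each, for a total repair bandwidth γ = dβ. The flow/cut-set argument on the information-flow graph then gives

```latex
M \;\le\; \sum_{i=0}^{k-1} \min\{\alpha,\,(d-i)\beta\}
```

The optimal tradeoff is the set of (α, γ) pairs meeting this bound with equality; its two extreme points correspond to the minimum-storage (MSR) and minimum-bandwidth (MBR) regenerating codes.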


arXiv: Distributed, Parallel, and Cluster Computing | 2010

Gossip Algorithms for Distributed Signal Processing

Alexandros G. Dimakis; Soummya Kar; José M. F. Moura; Michael G. Rabbat; Anna Scaglione

Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This paper presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
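
As a concrete illustration of the primitive these algorithms build on, below is a minimal sketch of pairwise randomized gossip averaging; the ring topology, node count, and round budget are illustrative assumptions, not parameters from the paper.

```python
import random

def randomized_gossip(values, graph, rounds=10_000, seed=0):
    """Pairwise randomized gossip averaging (a minimal sketch).

    values: dict node -> initial scalar measurement
    graph:  dict node -> list of neighboring nodes
    Each round an active node wakes up, picks a random neighbor, and both
    replace their values with the pairwise average; every value converges
    to the global average of the initial measurements.
    """
    rng = random.Random(seed)
    x = dict(values)
    nodes = list(x)
    for _ in range(rounds):
        i = rng.choice(nodes)
        j = rng.choice(graph[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Example: a ring of 8 sensors, each holding one measurement (true average 3.5).
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
readings = {i: float(i) for i in range(8)}
print(randomized_gossip(readings, ring))   # every entry ends up close to 3.5
```

Each node ends up holding (approximately) the network-wide average of the initial measurements, using only local exchanges with neighbors and no routing.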


International Conference on Computer Communications | 2012

FemtoCaching: Wireless video content delivery through distributed caching helpers

Negin Golrezaei; Karthikeyan Shanmugam; Alexandros G. Dimakis; Andreas F. Molisch; Giuseppe Caire

We suggest a novel approach to handle the ongoing explosive increase in the demand for video content on wireless/mobile devices. We envision femtocell-like base stations, which we call helpers, with weak backhaul links but large storage capacity. These helpers form a wireless distributed caching network that assists the macro base station by handling requests for popular files that have been cached. Due to the short distances between helpers and requesting devices, the transmission of cached files can be done very efficiently. A key question for such a system is the wireless distributed caching problem, i.e., which files should be cached by which helpers. If every mobile device has access to exactly one helper, then clearly each helper should cache the same files, namely the most popular ones. However, when each mobile device can access multiple caches, the assignment of files to helpers becomes nontrivial. The theoretical contribution of our paper lies in (i) formalizing the distributed caching problem, (ii) showing that this problem is NP-hard, and (iii) presenting approximation algorithms that lie within a constant factor of the theoretical optimum. On the practical side, we present a detailed simulation of a university campus scenario covered by a single 3GPP LTE R8 cell and several helpers using a simplified 802.11n protocol. We use a real campus trace of video requests and show how distributed caching can increase the number of served users by as much as 400-500%.
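
To make the distributed caching problem concrete, here is a hedged sketch of a greedy placement in the spirit of the approximation algorithms mentioned above: it repeatedly adds the (helper, file) pair that most increases the expected fraction of requests served from a reachable cache. The helper/user/file names, popularity values, and cache capacity are invented for illustration; this is not the authors' algorithm or code.

```python
from itertools import product

def greedy_placement(users, helpers, popularity, capacity):
    """Greedy (helper, file) placement: a hedged sketch, not the authors' code.

    users:      dict user -> set of helpers that user can reach
    helpers:    list of helper ids
    popularity: dict file -> request probability
    capacity:   maximum number of files each helper can cache
    Repeatedly caches the (helper, file) pair with the largest marginal gain
    in the expected fraction of requests served from some reachable cache.
    """
    cache = {h: set() for h in helpers}

    def hit_rate(c):
        total = 0.0
        for reachable in users.values():
            for f, p in popularity.items():
                if any(f in c[h] for h in reachable):
                    total += p
        return total

    current = hit_rate(cache)
    while True:
        best, best_gain = None, 0.0
        for h, f in product(helpers, popularity):
            if f in cache[h] or len(cache[h]) >= capacity:
                continue
            cache[h].add(f)                      # tentatively place the file
            gain = hit_rate(cache) - current
            cache[h].remove(f)
            if gain > best_gain:
                best, best_gain = (h, f), gain
        if best is None:                         # no placement improves coverage
            break
        h, f = best
        cache[h].add(f)
        current += best_gain
    return cache

# Toy example: 3 helpers, 4 users, 5 files with Zipf-like popularity (all invented).
users = {"u1": {"h1"}, "u2": {"h1", "h2"}, "u3": {"h2", "h3"}, "u4": {"h3"}}
pop = {f"f{i}": w for i, w in enumerate([0.40, 0.25, 0.15, 0.12, 0.08], start=1)}
print(greedy_placement(users, ["h1", "h2", "h3"], pop, capacity=2))
```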


arXiv: Information Theory | 2011

A Survey on Network Codes for Distributed Storage

Alexandros G. Dimakis; Kannan Ramchandran; Yunnan Wu; Changho Suh

Distributed storage systems often introduce redundancy to increase reliability. When coding is used, the repair problem arises: if a node storing encoded information fails, in order to maintain the same level of reliability we need to create encoded information at a new node. This amounts to a partial recovery of the code, whereas conventional erasure coding focuses on the complete recovery of the information from a subset of encoded packets. The consideration of the repair network traffic gives rise to new design challenges. Recently, network coding techniques have been instrumental in addressing these challenges, establishing that maintenance bandwidth can be reduced by orders of magnitude compared to standard erasure codes. This paper provides an overview of the research results on this topic.


Very Large Data Bases | 2013

XORing elephants: novel erasure codes for big data

Maheswaran Sathiamoorthy; Megasthenis Asteris; Dimitris S. Papailiopoulos; Alexandros G. Dimakis; Ramkumar Venkat Vadali; Scott Shaobing Chen; Dhruba Borthakur

Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability. This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance. We implement our new codes in Hadoop HDFS and compare them to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2× in repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information-theoretically optimal for obtaining locality. Because the new codes repair failures faster, they provide reliability that is orders of magnitude higher than that of replication.
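
The locality idea behind such codes can be illustrated in a few lines. The sketch below shows only the local-XOR-parity repair path; the codes in the paper additionally keep Reed-Solomon parities for protection against multiple failures, and the block count, block size, and group size here are illustrative assumptions.

```python
import secrets

def xor(blocks):
    """Bytewise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Ten data blocks split into local groups of five; each group gets an XOR parity.
GROUP = 5
data = [secrets.token_bytes(64) for _ in range(10)]
groups = [data[i:i + GROUP] for i in range(0, len(data), GROUP)]
local_parity = [xor(g) for g in groups]

# Repairing one lost block reads only its local group (GROUP blocks) instead of
# the k blocks a plain Reed-Solomon repair would have to read.
lost = 7                                   # pretend block 7 failed
g = lost // GROUP
survivors = [b for j, b in enumerate(groups[g]) if j != lost % GROUP]
repaired = xor(survivors + [local_parity[g]])
assert repaired == data[lost]
```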


IEEE Transactions on Information Theory | 2013

FemtoCaching: Wireless Content Delivery Through Distributed Caching Helpers

Karthikeyan Shanmugam; Negin Golrezaei; Alexandros G. Dimakis; Andreas F. Molisch; Giuseppe Caire

Video on-demand streaming from Internet-based servers is becoming one of the most important services offered by wireless networks today. In order to improve the area spectral efficiency of video transmission in cellular systems, small-cell heterogeneous architectures (e.g., femtocells, WiFi off-loading) are being proposed, such that video traffic to nomadic users can be handled by short-range links to the nearest small-cell access points (referred to as “helpers”). As the helper deployment density increases, the backhaul capacity becomes the system bottleneck. In order to alleviate this bottleneck, we propose a system where helpers with low-rate backhaul but high storage capacity cache popular video files. Files not available from helpers are transmitted by the cellular base station. We analyze the optimum way of assigning files to the helpers in order to minimize the expected downloading time for files. We distinguish between the uncoded case (where only complete files are stored) and the coded case, where segments of Fountain-encoded versions of the video files are stored at helpers. We show that the uncoded optimum file assignment is NP-hard, and develop a greedy strategy that is provably within a factor of 2 of the optimum. Further, for a special case we provide an efficient algorithm achieving a provably better approximation ratio of 1-(1-1/d)^d, where d is the maximum number of helpers a user can be connected to. We also show that the coded optimum cache assignment problem is convex and can be further reduced to a linear program. We present numerical results comparing the proposed schemes.
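
A small step not spelled out in the abstract connects the 1-(1-1/d)^d guarantee to the familiar submodular-maximization constant: the quantity decreases in d toward 1-1/e, so for every d it beats the 1/2 guarantee of the general factor-2 greedy strategy.

```latex
1-\left(1-\tfrac{1}{d}\right)^{d}
  \;\ge\;
\lim_{d\to\infty}\left[\,1-\left(1-\tfrac{1}{d}\right)^{d}\right]
  \;=\; 1-\tfrac{1}{e}
  \;\approx\; 0.632
  \;>\; \tfrac{1}{2}
```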


IEEE Communications Magazine | 2013

Femtocaching and device-to-device collaboration: A new architecture for wireless video distribution

Negin Golrezaei; Andreas F. Molisch; Alexandros G. Dimakis; Giuseppe Caire

We present a new architecture to handle the ongoing explosive increase in the demand for video content in wireless networks. It is based on distributed caching of the content in femto base stations with small or nonexistent backhaul capacity but considerable storage space, called helper nodes. We also consider using the wireless terminals themselves as caching helpers, which can distribute video through device-to-device communications. This approach allows an improvement in video throughput without deployment of any additional infrastructure. The new architecture can improve video throughput by one to two orders of magnitude.


IEEE Transactions on Information Theory | 2006

Decentralized erasure codes for distributed networked storage

Alexandros G. Dimakis; Vinod M. Prabhakaran; Kannan Ramchandran

In this correspondence, we consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k<n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce decentralized erasure codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. We show that decentralized erasure codes are optimally sparse, and lead to reduced communication, storage and computation cost over random linear coding.
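
A minimal sketch of the random-linear-coding mechanism described above, over a prime field chosen purely for illustration (the field size, parameters, and code below are assumptions for this sketch, not the paper's construction, which additionally enforces sparsity in the coefficients):

```python
import random

P = 2_147_483_647  # a large prime; the field choice is an assumption for this sketch

def store(data, n, rng):
    """Each of n storage nodes keeps random coefficients and the matching
    random linear combination of the k source symbols (arithmetic mod P)."""
    k = len(data)
    nodes = []
    for _ in range(n):
        coeffs = [rng.randrange(1, P) for _ in range(k)]
        value = sum(c * d for c, d in zip(coeffs, data)) % P
        nodes.append((coeffs, value))
    return nodes

def collect(nodes_subset):
    """Recover the k source symbols from any k queried nodes by Gaussian
    elimination mod P (assumes the random coefficient matrix is invertible,
    which holds with high probability over a large field)."""
    rows = [list(c) + [v] for c, v in nodes_subset]
    k = len(rows)
    for col in range(k):
        pivot = next(r for r in range(col, k) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)          # modular inverse, P prime
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

rng = random.Random(0)
source = [42, 7, 19, 1001]                  # k = 4 source symbols
nodes = store(source, n=10, rng=rng)        # n = 10 storage nodes
any_k = rng.sample(nodes, len(source))      # a data collector queries any k nodes
assert collect(any_k) == source
```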


IEEE Transactions on Wireless Communications | 2014

Base-Station Assisted Device-to-Device Communications for High-Throughput Wireless Video Networks

Negin Golrezaei; Parisa Mansourifard; Andreas F. Molisch; Alexandros G. Dimakis

We propose a new scheme for increasing the throughput of video files in cellular communications systems. This scheme exploits (1) the redundancy of user requests as well as (2) the considerable storage capacity of smartphones and tablets. Users cache popular video files and, after receiving requests from other users, serve these requests via localized device-to-device transmissions. The file placement is optimal when a central controller knows a priori the locations of the wireless devices when file requests occur. However, even a purely random caching scheme shows only a minor performance loss compared to such a “genie-aided” scheme. We then analyze the optimal collaboration distance, trading off frequency reuse against the probability of finding a requested file within the collaboration distance. We show that an improvement in spectral efficiency of one to two orders of magnitude is possible, even if there is not very high redundancy in video requests.


International Symposium on Information Theory | 2012

Locally repairable codes

Dimitris S. Papailiopoulos; Alexandros G. Dimakis

Distributed storage systems for large-scale applications typically use replication for reliability. Recently, erasure codes were used to reduce the large storage overhead while increasing data reliability. A main limitation of off-the-shelf erasure codes is their high repair cost during single-node failure events. A major open problem in this area has been the design of codes that 1) are repair-efficient and 2) achieve arbitrarily high data rates. In this paper, we explore the repair metric of locality, which corresponds to the number of disk accesses required during a single-node repair. Under this metric, we characterize an information-theoretic tradeoff that binds together the locality, code distance, and storage capacity of each node. We show the existence of optimal locally repairable codes (LRCs) that achieve this tradeoff. The achievability proof uses a locality-aware flow-graph gadget, which leads to a randomized code construction. Finally, we present an optimal and explicit LRC that achieves arbitrarily high data rates. Our locality-optimal construction is based on simple combinations of Reed-Solomon blocks.
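
For reference, the locality-distance part of this tradeoff is usually stated as follows for a code with n symbols, k information symbols, and locality r (this statement is supplied here for context rather than quoted from the paper, whose version also involves the per-node storage):

```latex
d_{\min} \;\le\; n - k - \left\lceil \tfrac{k}{r} \right\rceil + 2
```

Setting r = k (no locality constraint) recovers the Singleton bound d_min ≤ n - k + 1.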

Collaboration


Top co-authors of Alexandros G. Dimakis.

Karthikeyan Shanmugam, University of Texas at Austin
Arash Saber Tehrani, University of Southern California
Sriram Vishwanath, University of Texas at Austin
Giuseppe Caire, Technical University of Berlin
Andreas F. Molisch, University of Southern California
Negin Golrezaei, University of Southern California
Megasthenis Asteris, University of Texas at Austin