Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sergey Yekhanin is active.

Publication


Featured research published by Sergey Yekhanin.


IEEE Transactions on Information Theory | 2012

On the Locality of Codeword Symbols

Parikshit Gopalan; Cheng Huang; Huseyin Simitci; Sergey Yekhanin

Consider a linear [n, k, d]_q code C. We say that the ith coordinate of C has locality r if the value at this coordinate can be recovered by accessing some other r coordinates of C. Data storage applications require codes with small redundancy, low locality for information coordinates, large distance, and low locality for parity coordinates. In this paper, we carry out an in-depth study of the relations between these parameters. We establish a tight bound for the redundancy n - k in terms of the message length, the distance, and the locality of information coordinates. We refer to codes attaining the bound as optimal. We prove some structure theorems about optimal codes, which are particularly strong for small distances. This gives a fairly complete picture of the tradeoffs between codeword length, worst-case distance, and locality of information symbols. We then consider the locality of parity check symbols and erasure correction beyond the worst-case distance for optimal codes. Using our structure theorem, we obtain a tight bound on the locality of parity symbols possible in such codes for a broad class of parameter settings. We prove that there is a tradeoff between having good locality and the ability to correct erasures beyond the minimum distance.
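
To make the locality notion concrete, here is a minimal toy sketch (not one of the optimal codes constructed in the paper): a [6, 4] binary code whose four information symbols each have locality r = 2, because every data bit can be repaired from the two other symbols in its local group.

# Toy example of information-symbol locality, in Python.
# Four data bits are split into two groups of two; each group gets an
# XOR parity, so any single erased symbol in a group is recoverable by
# reading the r = 2 remaining symbols of that group.

def encode(data):                  # data: list of 4 bits
    g1, g2 = data[:2], data[2:]
    return g1 + [g1[0] ^ g1[1]] + g2 + [g2[0] ^ g2[1]]

def recover(codeword, i):
    """Recover erased coordinate i (0..5) from its local group."""
    group, j = (codeword[:3], i) if i < 3 else (codeword[3:], i - 3)
    others = [b for t, b in enumerate(group) if t != j]
    return others[0] ^ others[1]   # XOR of the two surviving symbols

cw = encode([1, 0, 1, 1])
assert all(recover(cw, i) == cw[i] for i in range(6))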


Communications of the ACM | 2010

Private information retrieval

Sergey Yekhanin

Cryptographic protocols safeguard the privacy of user queries to public databases.


IEEE International Conference on Computer Communications | 2007

Non-Adaptive Fault Diagnosis for All-Optical Networks via Combinatorial Group Testing on Graphs

Nicholas J. A. Harvey; Mihai Patrascu; Yonggang Wen; Sergey Yekhanin; Vincent W. S. Chan

We consider the problem of detecting failures in all-optical networks, with the objective of keeping the diagnosis cost low. In contrast to the passive paradigm based on parity checks in SONET, optical probing signals are sent proactively along lightpaths to probe their state of health, and the failure pattern is identified through the set of test results (i.e., probe syndromes). As an alternative to our previous adaptive approach, where all the probes are sent sequentially, we consider in this work a non-adaptive approach where all the probes are sent in parallel. The design objective is to minimize the number of parallel probes, so as to keep the network cost low. The non-adaptive fault diagnosis approach motivates a new technical framework that we introduce: combinatorial group testing with graph-based constraints. Using this framework, we develop several new probing schemes to detect network faults in all-optical networks with different topologies. The efficiency of our schemes often depends on the network topology; in many cases we can show that our schemes are optimal in minimizing the number of probes.
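
As a toy illustration of the non-adaptive idea (a generic group-testing sketch, ignoring the paper's graph constraints on which probes are realizable as lightpaths): all probes are fixed in advance and run in parallel, and the resulting probe syndrome identifies a single failed link as long as every link sees a distinct subset of probes.

# Non-adaptive group testing with a binary test matrix, in Python.
# Probe i covers link j iff bit i of (j + 1) is 1, so the columns are the
# binary expansions of 1..7 and 3 parallel probes locate 1 fault among 7 links.

links = list(range(7))
probes = [[(j + 1) >> i & 1 for j in links] for i in range(3)]

def syndrome(failed):              # outcomes of all probes, run in parallel
    return tuple(bool(row[failed]) for row in probes)

lookup = {syndrome(j): j for j in links}   # distinct columns -> unique syndromes
assert lookup[syndrome(4)] == 4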


Foundations and Trends in Theoretical Computer Science | 2011

Locally Decodable Codes

Sergey Yekhanin

Over 60 years of research in coding theory, starting with the works of Shannon and Hamming, have given us nearly optimal ways to add redundancy to messages: encoding bit strings representing messages into longer bit strings called codewords, in such a way that the message can still be recovered even if a certain fraction of the codeword bits is corrupted. Classical error-correcting codes, however, do not work well when messages are modern massive datasets, because their decoding time increases (at least) linearly with the length of the message. As a result, in typical applications, large datasets are first partitioned into small blocks, each of which is then encoded separately. Such encoding allows efficient random-access retrieval of the data, but yields poor noise resilience. Locally decodable codes are codes intended to address this seeming conflict between efficient retrievability and reliability. They are codes that simultaneously provide efficient random-access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary data bit from looking at only a small number of randomly chosen codeword bits. Apart from the natural applications to data transmission and storage, such codes have important applications in cryptography and computational complexity theory. This review introduces and motivates locally decodable codes, and discusses the central results of the subject. Locally Decodable Codes assumes basic familiarity with the properties of finite fields and is otherwise self-contained. It will benefit computer scientists, electrical engineers, and mathematicians with an interest in coding theory.
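
The classical Hadamard code gives the simplest concrete instance of local decoding (a standard textbook example, not one of the survey's new constructions); a minimal sketch, assuming an uncorrupted codeword for brevity:

# 2-query local decoding of the Hadamard code, in Python.
# A k-bit message x is encoded as all 2^k inner products <x, a> mod 2.
# To read x_i, query two positions a and a + e_i; their XOR equals x_i.
# Because each query is individually uniform, a constant fraction of
# corrupted positions would only be hit with small probability.

import itertools, random

k = 8
x = [random.randint(0, 1) for _ in range(k)]
code = {a: sum(xi * ai for xi, ai in zip(x, a)) % 2
        for a in itertools.product([0, 1], repeat=k)}

def decode_bit(i):
    a = [random.randint(0, 1) for _ in range(k)]
    b = list(a); b[i] ^= 1                    # flip only coordinate i
    return (code[tuple(a)] + code[tuple(b)]) % 2

assert all(decode_bit(i) == x[i] for i in range(k))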


International Cryptology Conference | 2008

New Efficient Attacks on Statistical Disclosure Control Mechanisms

Cynthia Dwork; Sergey Yekhanin

The goal of a statistical database is to provide statistics about a population while simultaneously protecting the privacy of the individual records in the database. The tension between privacy and usability of statistical databases has attracted much attention in the statistics, theoretical computer science, security, and database communities in recent years. A line of research initiated by Dinur and Nissim investigates, for a particular type of queries, lower bounds on the distortion needed in order to prevent gross violations of privacy. The first result in the current paper simplifies and sharpens the Dinur and Nissim result. The Dinur-Nissim style results are strong because they demonstrate insecurity of all low-distortion privacy mechanisms. The attacks have an all-or-nothing flavor: letting n denote the size of the database, Θ(n) queries are made before anything is learned, at which point Θ(n) secret bits are revealed. Restricting attention to a wide and realistic subset of possible low-distortion mechanisms, our second result is a more acute attack, requiring only a fixed number of queries for each bit revealed.
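
The flavor of such attacks can be shown with a tiny brute-force sketch (illustrative only; the attacks in the paper are vastly more efficient): if a curator answers subset-sum queries with distortion at most E, the attacker keeps every candidate database consistent with all answers, and for small E little besides the true database survives.

# Brute-force reconstruction against a low-distortion mechanism, in Python.
import itertools, random

n, E = 8, 0                                   # tiny database, zero noise here
secret = [random.randint(0, 1) for _ in range(n)]
queries = [random.sample(range(n), n // 2) for _ in range(4 * n)]
answers = [sum(secret[i] for i in q) for q in queries]   # noise would add <= E

consistent = [c for c in itertools.product([0, 1], repeat=n)
              if all(abs(sum(c[i] for i in q) - a) <= E
                     for q, a in zip(queries, answers))]
assert tuple(secret) in consistent            # typically the only survivor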


Symposium on the Theory of Computing | 2007

Towards 3-query locally decodable codes of subexponential length

Sergey Yekhanin

A q-query Locally Decodable Code (LDC) encodes an n-bit message x as an N-bit codeword C(x), such that one can probabilistically recover any bit x_i of the message by querying only q bits of the codeword C(x), even after some constant fraction of codeword bits has been corrupted. We give new constructions of three-query LDCs of vastly shorter length than that of previous constructions. Specifically, given any Mersenne prime p = 2^t - 1, we design three-query LDCs of length N = exp(n^(1/t)), for every n. Based on the largest known Mersenne prime, this translates to a length of less than exp(n^(10^-7)), compared to exp(n^(1/2)) in the previous constructions. It has often been conjectured that there are infinitely many Mersenne primes. Under this conjecture, our constructions yield three-query locally decodable codes of length N = exp(n^(O(1/log log n))) for infinitely many n. We also obtain analogous improvements for Private Information Retrieval (PIR) schemes. We give 3-server PIR schemes with communication complexity of O(n^(10^-7)) to access an n-bit database, compared to the previous best scheme with complexity O(n^(1/5.25)). Assuming again that there are infinitely many Mersenne primes, we get 3-server PIR schemes of communication complexity n^(O(1/log log n)) for infinitely many n. Previous families of LDCs and PIR schemes were based on the properties of low-degree multivariate polynomials over finite fields. Our constructions are completely different and are obtained by constructing a large number of vectors in a small-dimensional vector space whose inner products are restricted to lie in an algebraically nice set.
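
Since the construction's length is governed by which Mersenne primes p = 2^t - 1 exist, here is the standard Lucas-Lehmer test (a classical primality check, unrelated to the coding construction itself) for finding usable exponents t:

# Lucas-Lehmer test for Mersenne primes, in Python.
def lucas_lehmer(t):
    """True iff 2^t - 1 is prime, for prime t >= 3 (t = 2 special-cased)."""
    if t == 2:
        return True
    m = (1 << t) - 1                 # the candidate Mersenne number
    s = 4
    for _ in range(t - 2):
        s = (s * s - 2) % m
    return s == 0

for t in [2, 3, 5, 7, 11, 13, 17, 19]:        # only prime exponents matter
    print(t, lucas_lehmer(t))                 # 11 fails: 2^11 - 1 = 23 * 89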


Symposium on the Theory of Computing | 2011

High-rate codes with sublinear-time decoding

Swastik Kopparty; Shubhangi Saraf; Sergey Yekhanin

Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its decoding algorithms has been well studied, and it has widely been suspected that nontrivial locality must come at the price of low rate. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality O(k^ε) were known only for codes of rate exp(-1/ε), where k is the length of the message. Furthermore, for codes of rate > 1/2, no nontrivial locality has been achieved. In this paper we construct a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. We show that for every ε > 0 and α > 0, for infinitely many k, there exists a code C which encodes messages of length k with rate 1 - α, and is locally decodable from a constant fraction of errors using O(k^ε) queries and time. The high rate and local decodability are evident even in concrete settings (and not just in asymptotic behavior), giving hope that local decoding techniques may have practical implications. These codes, which we call multiplicity codes, are based on evaluating high-degree multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate polynomial based codes; they inherit the local decodability of these codes, and at the same time achieve better tradeoffs and flexibility in their rate and distance.
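
A univariate toy version conveys the encoding shape (the actual multiplicity codes evaluate multivariate polynomials; this sketch only shows how storing derivatives packs more information per codeword position):

# Order-2 multiplicity-style encoding over GF(101), in Python.
# Each codeword position stores both f(a) and the formal derivative f'(a),
# so a position carries two field symbols instead of one, which is the
# lever that lets multiplicity codes push the rate up.

p = 101                                       # a small prime field
msg = [3, 1, 4, 1, 5, 9]                      # coefficients of f, deg(f) < 6

def f(a):
    return sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p

def df(a):                                    # formal derivative f'(a)
    return sum(i * c * pow(a, i - 1, p) for i, c in enumerate(msg) if i) % p

codeword = [(f(a), df(a)) for a in range(p)]  # one pair per field element
print(codeword[:3])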


Conference on Computational Complexity | 2005

A geometric approach to information-theoretic private information retrieval

David P. Woodruff; Sergey Yekhanin

A t-private private information retrieval (PIR) scheme allows a user to retrieve the ith bit of an n-bit string x replicated among k servers, while any coalition of up to t servers learns no information about i. We present a new geometric approach to PIR, and obtain: 1) a t-private k-server protocol with communication O((k^2/t) log k · n^(1/⌊(2k-1)/t⌋)), removing the t^(Θ(t)) term of previous schemes. This answers an open question of Ishai and Kushilevitz (1999). 2) A 2-server protocol with O(n^(1/3)) communication, polynomial preprocessing, and online work O(n/log^r n) for any constant r. This improves the O(n/log^2 n) work of Beimel et al. (2000). 3) Smaller communication for instance hiding, PIR with a polylogarithmic number of servers, robust PIR, and PIR with fixed answer sizes. To illustrate the power of our approach, we also give alternative, geometric proofs of some of the best 1-private upper bounds.
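
For contrast with these upper bounds, the basic information-theoretic 2-server scheme is easy to state (the folklore XOR scheme with O(n) communication, not one of the paper's protocols); a minimal sketch:

# Folklore 2-server PIR, in Python: each server sees a uniformly random
# subset of indices, so neither learns anything about the target index i,
# yet the XOR of the two answers is exactly the desired bit x[i].

import random

x = [1, 0, 1, 1, 0, 0, 1, 0]                  # database, replicated on both servers
n, i = len(x), 5                              # the user wants bit x[5]

S1 = {j for j in range(n) if random.random() < 0.5}
S2 = S1 ^ {i}                                 # symmetric difference with {i}

def answer(S):                                # a server XORs its requested subset
    a = 0
    for j in S:
        a ^= x[j]
    return a

assert answer(S1) ^ answer(S2) == x[i]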


IEEE Transactions on Information Theory | 2014

Explicit Maximally Recoverable Codes With Locality

Parikshit Gopalan; Cheng Huang; Bob Jenkins; Sergey Yekhanin

Consider a systematic linear code where some (local) parity symbols depend on few prescribed symbols, whereas other (heavy) parity symbols may depend on all data symbols. Such codes have been studied recently in the context of erasure coding for data storage, where the local parities facilitate fast recovery of any single symbol when it is erased, whereas the heavy parities provide tolerance to a large number of simultaneous erasures. A code as above is maximally recoverable if it corrects all erasure patterns that are information-theoretically correctable given the prescribed dependence relations between data symbols and parity symbols. In this paper, we present explicit families of maximally recoverable codes with locality. We also initiate the general study of the tradeoff between maximal recoverability and alphabet size.
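
A minimal numeric sketch of the local/heavy parity layout (illustrative only, not the paper's maximally recoverable construction): four data symbols over GF(11), two local parities for cheap single-erasure repair, and one heavy parity depending on all data symbols.

# Local vs. heavy parities over GF(11), in Python.
p = 11
data = [4, 7, 1, 9]

local1 = (data[0] + data[1]) % p              # covers group {d0, d1}
local2 = (data[2] + data[3]) % p              # covers group {d2, d3}
heavy  = sum((j + 1) * d for j, d in enumerate(data)) % p   # covers everything

# A single erasure of d1 is repaired by reading only 2 symbols, not all 4;
# the heavy parity is held in reserve for larger simultaneous erasure patterns.
d1 = (local1 - data[0]) % p
assert d1 == data[1]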


Electronic Colloquium on Computational Complexity | 2012

Locally decodable codes and private information retrieval schemes

Sergey Yekhanin

Locally decodable codes (LDCs) are codes that simultaneously provide efficient random-access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency: specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve records from public databases while completely hiding the identity of the retrieved records from database owners. In this book the author provides a fresh algebraic look at the theory of locally decodable codes and private information retrieval schemes, obtaining new families of each with much better parameters than those of previously known constructions, and he also proves limitations of two-server PIRs in a restricted setting that covers all currently known schemes. The author's related thesis won the ACM Doctoral Dissertation Award in 2007, and this book includes some expanded sections and proofs, and notes on recent developments.

Collaboration


Dive into Sergey Yekhanin's collaborations.

Top Co-Authors

Alex Samorodnitsky

Hebrew University of Jerusalem
