Publication


Featured research published by Clayton Schoeny.


IEEE Transactions on Information Theory | 2017

Exact Reconstruction From Insertions in Synchronization Codes

Frederic Sala; Ryan Gabrys; Clayton Schoeny; Lara Dolecek

This paper studies problems in data reconstruction, an important area with numerous applications. In particular, we examine the reconstruction of binary and nonbinary sequences from synchronization (insertion/deletion-correcting) codes. These sequences have been corrupted by a fixed number of symbol insertions (larger than the minimum edit distance of the code), yielding a number of distinct traces to be used for reconstruction. We wish to know the minimum number of traces needed for exact reconstruction. This is a general version of a problem tackled by Levenshtein for uncoded sequences. We introduce an exact formula for the maximum number of common supersequences shared by sequences at a certain edit distance, yielding an upper bound on the number of distinct traces necessary to guarantee exact reconstruction. Without specific knowledge of the codewords, this upper bound is tight. We apply our results to the famous single deletion/insertion-correcting Varshamov–Tenengolts (VT) codes and show that a significant number of VT codeword pairs achieve the worst-case number of outputs needed for exact reconstruction. We also consider extensions to other channels, such as adversarial deletion and insertion/deletion channels and probabilistic channels.


International Symposium on Information Theory | 2016

Codes correcting a burst of deletions or insertions

Clayton Schoeny; Antonia Wachter-Zeh; Ryan Gabrys; Eitan Yaakobi

This paper studies codes that correct bursts of deletions. Namely, a code will be called a b-burst-correcting code if it can correct a deletion of any b consecutive bits. While the lower bound on the redundancy of such codes was shown by Levenshtein to be asymptotically log(n) + b − 1, the redundancy of the best code construction by Cheng et al. is b(log(n/b + 1)). In this paper we close this gap and provide codes with redundancy at most log(n) + (b − 1) log(log(n)) + b − log(b). We also extend the burst deletion model to two more cases: (1) a deletion burst of at most b consecutive bits, and (2) a deletion burst of size at most b (not necessarily consecutive). We extend our code construction for the first case and study the second case for b = 3, 4. The equivalent models for insertions are also studied and are shown to be equivalent to correcting the corresponding burst of deletions.
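The b = 1 baseline behind these constructions is the classical Varshamov–Tenengolts code with Levenshtein's single-deletion decoder. The sketch below shows that textbook algorithm (not the paper's burst construction); function and variable names are ours:

```python
def vt_syndrome(x, n):
    """VT checksum of binary sequence x: sum of (i+1)*x_i mod (n+1)."""
    return sum((i + 1) * b for i, b in enumerate(x)) % (n + 1)

def vt_decode(y, n, a):
    """Recover the length-n codeword of VT_a(n) from y after one deletion.
    VT_a(n) = {x in {0,1}^n : sum (i+1)*x_i = a (mod n+1)}."""
    d = (a - vt_syndrome(y, n)) % (n + 1)
    w = sum(y)  # weight of the received word
    if d <= w:
        # A 0 was deleted: reinsert it with exactly d ones to its right.
        p, ones = len(y), 0
        while ones < d:
            p -= 1
            ones += y[p]
        return y[:p] + [0] + y[p:]
    # A 1 was deleted: reinsert it with exactly d - w - 1 zeros to its left.
    p, zeros = 0, 0
    while zeros < d - w - 1:
        zeros += 1 - y[p]
        p += 1
    return y[:p] + [1] + y[p:]
```

Deleting any single bit and decoding returns the original codeword, which is the sense in which VT codes are single-deletion-correcting with redundancy roughly log(n + 1).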


International Symposium on Information Theory | 2015

Analysis and coding schemes for the flash normal-Laplace mixture channel

Clayton Schoeny; Frederic Sala; Lara Dolecek

Error-correcting codes are a critical need for modern flash memories. Such codes are typically designed under the assumption that the voltage threshold distributions in flash cells are Gaussian. This assumption, however, is not realistic. This is particularly the case late in the lifetime of flash devices. A recent work by Parnell et al. provides a parameterized model of MLC (2-bit cell) flash which accurately represents the voltage threshold distributions for an operating period up to 10 times longer than the device's specified lifetime. We analyze this model from an information-theoretic perspective and compute the capacity of the resulting channel. We extrapolate the channel from an MLC to a TLC (3-bit cell) model and we characterize the resulting errors. We show that errors under the improved model are highly asymmetric. We introduce a code construction explicitly designed to exploit the asymmetric nature of these errors, and measure its improvement against existing codes at large P/E cycle counts.
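Computing the capacity of a discrete memoryless channel, as done above for the flash channel model, can be illustrated with the standard Blahut–Arimoto algorithm. This is a generic sketch under our own naming, with a placeholder transition matrix rather than the paper's NLM channel:

```python
from math import exp, log, log2

def blahut_arimoto(W, iters=200):
    """Capacity (bits per use) of a discrete memoryless channel.
    W[x][y] = P(y | x); each row of W must sum to 1."""
    m, n = len(W), len(W[0])
    p = [1.0 / m] * m  # input distribution, start uniform
    for _ in range(iters):
        # Induced output distribution q(y) = sum_x p(x) W(y|x)
        q = [sum(p[x] * W[x][y] for x in range(m)) for y in range(n)]
        # D[x] = exp(KL(W[x] || q)), the row-wise divergence from the output
        D = [exp(sum(W[x][y] * log(W[x][y] / q[y])
                     for y in range(n) if W[x][y] > 0)) for x in range(m)]
        s = sum(p[x] * D[x] for x in range(m))
        p = [p[x] * D[x] / s for x in range(m)]
    # Mutual information of the final input distribution, in bits
    q = [sum(p[x] * W[x][y] for x in range(m)) for y in range(n)]
    return sum(p[x] * W[x][y] * log2(W[x][y] / q[y])
               for x in range(m) for y in range(n) if W[x][y] > 0)
```

For a binary symmetric channel with crossover 0.1 this converges to 1 − H(0.1) ≈ 0.531 bits; asymmetric channels like the Z-channel show how the optimal input distribution shifts away from uniform.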


International Symposium on Information Theory | 2015

Three novel combinatorial theorems for the insertion/deletion channel

Frederic Sala; Ryan Gabrys; Clayton Schoeny; Lara Dolecek

Although the insertion/deletion problem has been studied for more than fifty years, many results still remain elusive. The goal of this work is to present three novel theorems with a combinatorial flavor that shed further light on the structure and nature of insertions/deletions. In particular, we give an exact result for the maximum number of common supersequences between two sequences, extending older work by Levenshtein. We then generalize this result for sequences that have different lengths. Finally, we compute the exact neighborhood size for the binary circular (alternating) string C_n = 0101...01. In addition to furthering our understanding of the insertion/deletion channel, these theorems can be used as building blocks in other applications. One such application is developing improved lower bounds on the sizes of insertion/deletion-correcting codes.
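Levenshtein's older single-sequence result, which these theorems extend, is easy to check concretely: the number of length-(n+t) supersequences of a q-ary sequence of length n depends only on n and t, not on the sequence itself. A brute-force verification sketch (names are ours):

```python
from itertools import product
from math import comb

def num_supersequences(n, t, q=2):
    """Levenshtein's insertion-ball size: number of length-(n+t)
    supersequences of any length-n q-ary sequence."""
    return sum(comb(n + t, i) * (q - 1) ** i for i in range(t + 1))

def is_subsequence(x, y):
    """True if x can be obtained from y by deleting symbols."""
    it = iter(y)
    return all(s in it for s in x)

def count_supersequences(x, t, q=2):
    """Brute-force count of length-(len(x)+t) supersequences of x."""
    n = len(x)
    return sum(is_subsequence(x, y) for y in product(range(q), repeat=n + t))
```

For example, every binary sequence of length 3 has exactly C(4,0) + C(4,1) = 5 supersequences of length 4, regardless of its contents; the paper's contribution is the analogous exact count for *common* supersequences of two sequences.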


International Symposium on Information Theory | 2016

The weight consistency matrix framework for general non-binary LDPC code optimization: Applications in flash memories

Ahmed Hareedy; Chinmayi Lanka; Clayton Schoeny; Lara Dolecek

Transmission channels underlying modern memory systems, e.g., Flash memories, possess a significant amount of asymmetry. While existing LDPC codes optimized for symmetric, AWGN-like channels are being actively considered for Flash applications, we demonstrate that, due to channel asymmetry, such approaches are fairly inadequate. We propose a new, general, combinatorial framework for the analysis and design of non-binary LDPC (NB-LDPC) codes for asymmetric channels. We introduce a refined definition of absorbing sets, which we call general absorbing sets (GASs), and an important subclass of GASs, which we refer to as general absorbing sets of type two (GASTs). Additionally, we study the combinatorial properties of GASTs. We then present the weight consistency matrix (WCM), which succinctly captures key properties in a GAST. Based on these new concepts, we then develop a general code optimization framework, and demonstrate its effectiveness on the realistic highly-asymmetric normal-Laplace mixture (NLM) Flash channel. Our optimized codes enjoy over one order (resp., half of an order) of magnitude performance gain in the uncorrectable BER (UBER) relative to the unoptimized codes (resp., the codes optimized for symmetric channels).


Dependable Systems and Networks | 2016

Software-Defined Error-Correcting Codes

Mark Gottscho; Clayton Schoeny; Lara Dolecek; Puneet Gupta

Conventional error-correcting codes (ECCs) and system-level fault-tolerance mechanisms are currently treated as separate abstraction layers. This can reduce the overall efficacy of error detection and correction (EDAC) capabilities, impacting the reliability of memories by causing crashes or silent data corruption. To address this shortcoming, we propose Software-Defined ECC (SWD-ECC), a new class of heuristic techniques to recover from detected but uncorrectable errors (DUEs) in memory. It uses available side information to estimate the original message by first filtering and then ranking the possible candidate codewords for a DUE. SWD-ECC does not incur any hardware or software overheads in the cases where DUEs do not occur. As an exemplar for SWD-ECC, we show through offline analysis on SPEC CPU2006 benchmarks how to heuristically recover from 2-bit DUEs in MIPS instruction memory using a common (39,32) single-error-correcting, double-error-detecting (SECDED) code. We first apply coding theory to compute all of the candidate codewords for a given DUE. Second, we filter out the candidates that are not legal MIPS instructions, increasing the chance of successful recovery. Finally, we choose a valid candidate whose logical operation (e.g., add or load) occurs most frequently in the application binary image. Our results show that, on average, 34% of all possible 2-bit DUEs in the evaluated set of instructions can be successfully recovered using this heuristic recovery strategy. If a DUE affects the bit fields used for instruction decoding, we are able to recover correctly up to 99% of the time. We believe these results are a significant achievement compared to an otherwise-guaranteed crash, which is undesirable in many systems and applications. Moreover, there is room for future improvement of this result with more sophisticated uses of side information. We look forward to future work in this area.
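The recovery flow described above (enumerate the distance-2 candidate codewords for a DUE, filter by side information, rank) can be sketched at toy scale. The sketch below substitutes an extended Hamming (8,4) SECDED code for the paper's (39,32) code, and the "legal instruction" filter and frequency ranking are stand-in predicates of our own:

```python
def encode84(d):
    """Extended Hamming (8,4): Hamming(7,4) plus an overall parity bit."""
    d1, d2, d3, d4 = d
    p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]
    return tuple(word + [sum(word) % 2])

# All 16 codewords, mapped back to their 4-bit messages
CODEBOOK = {encode84((a, b, c, d)): (a, b, c, d)
            for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)}

def due_candidates(received):
    """All codewords at Hamming distance 2 from a word with a 2-bit DUE."""
    dist = lambda u, v: sum(a != b for a, b in zip(u, v))
    return [c for c in CODEBOOK if dist(received, c) == 2]

def recover(received, is_legal, freq):
    """SWD-ECC-style heuristic: filter candidates by a side-information
    predicate, then pick the one whose message is most frequent."""
    legal = [c for c in due_candidates(received) if is_legal(CODEBOOK[c])]
    return max(legal, key=lambda c: freq(CODEBOOK[c])) if legal else None
```

Because the code's minimum distance is 4, a 2-bit error lands equidistant from several codewords; the heuristic's job is to break that tie with side information instead of crashing.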


International Symposium on Information Theory | 2015

Asymmetric error-correcting codes for Flash memories in high-radiation environments

Frederic Sala; Clayton Schoeny; Dariush Divsalar; Lara Dolecek

Prior work on coding for Flash memories typically seeks to correct errors that take place during normal device operation. In this paper, we study the design of codes that protect Flash devices from the unusual class of errors caused by exposure to large radiation dosages. Significant radiation exposure can take place, for example, when Flash is used as on-board memory in satellites and space probes. We introduce an error model that captures the effects of radiation exposure. Such errors are asymmetric, with the additional feature that the degree (and direction) of asymmetry depends on the stored sequence. We develop an appropriate distance and an upper bound on the sizes of codes which correct such errors. We introduce and analyze several simple code constructions.


Design, Automation, and Test in Europe | 2016

Error resilience and energy efficiency: An LDPC decoder design study

Philipp Schläfer; Chu-Hsiang Huang; Clayton Schoeny; Christian Weis; Yao Li; Norbert Wehn; Lara Dolecek

Iterative decoding algorithms for low-density parity check (LDPC) codes have an inherent fault tolerance. In this paper, we exploit this robustness and optimize an LDPC decoder for high energy efficiency: we reduce energy consumption by opportunistically increasing error rates in decoder memories, while still achieving successful decoding in the final iteration. We develop a theory-guided unequal error protection (UEP) technique. UEP is implemented using dynamic voltage scaling that controls the error probability in the decoder memories on a per iteration basis. Specifically, via a density evolution analysis of an LDPC decoder, we first formulate the optimization problem of choosing an appropriate error rate for the decoder memories to achieve successful decoding under minimal energy consumption. We then propose a low complexity greedy algorithm to solve this optimization problem and map the resulting error rates to the corresponding supply voltage levels of the decoder memories in each iteration of the decoding algorithm. We demonstrate the effectiveness of our approach via ASIC synthesis results of a decoder for the LDPC code in the IEEE 802.11ad standard, implemented in 28nm FD-SOI technology. The proposed scheme achieves an increase in energy efficiency of up to 40% compared to the state-of-the-art solution.


IEEE Transactions on Communications | 2016

Synchronizing Files From a Large Number of Insertions and Deletions

Frederic Sala; Clayton Schoeny; Nicolas Bitouzé; Lara Dolecek

Developing efficient algorithms to synchronize between different versions of files is an important problem with numerous applications. We consider the interactive synchronization protocol introduced by Yazdi and Dolecek, based on an earlier synchronization algorithm by Venkataramanan et al. Unlike preceding synchronization algorithms, Yazdi and Dolecek's algorithm is specifically designed to handle a number of deletions linear in the length of the file. We extend this algorithm in three ways. First, we handle nonbinary files. Second, these files contain symbols chosen according to nonuniform distributions. Finally, the files are modified by both insertions and deletions. We take into consideration the collision entropy of the source and refine the matching graph developed by Yazdi and Dolecek by appropriately placing weights on the matching graph edges. We compare our protocol with the widely used synchronization software rsync, and with the synchronization protocol by Venkataramanan et al. In addition, we provide tradeoffs between the number of rounds of communication and the total amount of bandwidth required to synchronize the two files under various implementation choices of the baseline algorithm. Finally, we show the robustness of the protocol under imperfect knowledge of the properties of the edit channel, which is the expected scenario in practice.
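The collision entropy used above to weight the matching-graph edges has a one-line definition: H2(X) = −log2 Σ p(x)², the Rényi entropy of order 2, which equals the Shannon entropy only for uniform sources. A minimal sketch (name is ours):

```python
from math import log2

def collision_entropy(probs):
    """Renyi entropy of order 2, in bits: -log2 of the probability
    that two independent draws from the source collide."""
    return -log2(sum(p * p for p in probs))
```

A uniform source over 8 symbols gives exactly 3 bits; skewed sources give less, which is why weighting by collision entropy rather than assuming uniform symbols tightens the matching.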


Asilomar Conference on Signals, Systems and Computers | 2014

Efficient file synchronization: Extensions and simulations

Clayton Schoeny; Nicolas Bitouze; Frederic Sala; Lara Dolecek

We study the problem of synchronizing two files X and Y at two distant nodes A and B that are connected through a two-way communication channel. In our setup, Y is an edited version of X; edits are insertions and deletions that are potentially numerous. We previously proposed an order-wise optimal synchronization protocol for reconstructing file X at node B with an exponentially low probability of error. In this paper, we introduce an adaptive algebraic code to the synchronization protocol in order to increase efficiency in the presence of substitution errors. In addition, we expand on our previous results by presenting experimental results from several scenarios including different types of files and a variety of realistic error patterns.

Collaboration


An overview of Clayton Schoeny's collaborations.

Top Co-Authors

Lara Dolecek, University of California
Frederic Sala, University of California
Puneet Gupta, University of California
Mark Gottscho, University of California
Dariush Divsalar, California Institute of Technology
Irina Alam, University of California
Shahroze Kabir, University of California
Zehui Chen, University of California
Antonia Wachter-Zeh, Technion – Israel Institute of Technology
Eitan Yaakobi, Technion – Israel Institute of Technology