
Publication


Featured research published by Jack K. Wolf.


IEEE Transactions on Information Theory | 1973

Noiseless coding of correlated information sources

David Slepian; Jack K. Wolf

Correlated information sequences $\cdots, X_{-1}, X_0, X_1, \cdots$ and $\cdots, Y_{-1}, Y_0, Y_1, \cdots$ are generated by repeated independent drawings of a pair of discrete random variables $X, Y$ from a given bivariate distribution $P_{XY}(x,y)$. We determine the minimum number of bits per character $R_X$ and $R_Y$ needed to encode these sequences so that they can be faithfully reproduced under a variety of assumptions regarding the encoders and decoders. The results, some of which are not at all obvious, are presented as an admissible rate region $\mathcal{R}$ in the $R_X$-$R_Y$ plane. They generalize a similar and well-known result for a single information sequence, namely $R_X \geq H(X)$ for faithful reproduction.
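As a concrete illustration, the sketch below computes the entropies that bound the admissible region for a small joint distribution (the distribution itself is a hypothetical example): the region is $R_X \geq H(X|Y)$, $R_Y \geq H(Y|X)$, $R_X + R_Y \geq H(X,Y)$, with corner points $(H(X|Y), H(Y))$ and $(H(X), H(Y|X))$.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution P_XY over {0,1} x {0,1}
P = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

H_XY = entropy(P.values())
H_X = entropy([P[(0, 0)] + P[(0, 1)], P[(1, 0)] + P[(1, 1)]])
H_Y = entropy([P[(0, 0)] + P[(1, 0)], P[(0, 1)] + P[(1, 1)]])
H_X_given_Y = H_XY - H_Y   # chain rule
H_Y_given_X = H_XY - H_X

# Admissible region: R_X >= H(X|Y), R_Y >= H(Y|X), R_X + R_Y >= H(X,Y)
print(f"corners of R: ({H_X_given_Y:.3f}, {H_Y:.3f}) and ({H_X:.3f}, {H_Y_given_X:.3f})")
```

The interesting corner is the first one: the $X$ sequence can be compressed below $H(X)$, down to $H(X|Y)$, even though the encoder never sees $Y$.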


IEEE Transactions on Information Theory | 1978

Efficient maximum likelihood decoding of linear block codes using a trellis

Jack K. Wolf

It is shown that soft-decision maximum likelihood decoding of any $(n,k)$ linear block code over $GF(q)$ can be accomplished using the Viterbi algorithm applied to a trellis with no more than $q^{n-k}$ states. For cyclic codes, the trellis is periodic. When this technique is applied to the decoding of product codes, the number of states in the trellis can be far smaller than $q^{n-k}$. For a binary $(n, n-1)$ single parity check code, the Viterbi algorithm is equivalent to the Wagner decoding algorithm.
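A minimal sketch of the construction for the smallest interesting case, the binary $(3,2)$ single parity check code: trellis states are the $q^{n-k} = 2$ possible partial syndromes, and each received symbol extends paths with bit 0 (syndrome unchanged) or bit 1 (syndrome XORed with the corresponding column of $H$). The code, received values, and BPSK mapping here are illustrative assumptions.

```python
# Hypothetical example: (3,2) single parity check code, H = [1 1 1]
H_cols = [0b1, 0b1, 0b1]     # each column of H packed into an n-k = 1 bit integer
n_states = 2 ** 1            # q^(n-k) trellis states

def viterbi_decode(received):
    """Soft-decision ML decoding on the syndrome trellis.

    received: real channel outputs, BPSK mapping bit b -> (1 - 2b).
    Returns the ML codeword as a list of bits.
    """
    INF = float("inf")
    metric = [0.0 if s == 0 else INF for s in range(n_states)]
    path = [[] for _ in range(n_states)]
    for r, col in zip(received, H_cols):
        new_metric = [INF] * n_states
        new_path = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                ns = s ^ (col if bit else 0)            # syndrome update
                m = metric[s] + (r - (1 - 2 * bit)) ** 2  # Euclidean branch metric
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_path[ns] = path[s] + [bit]
        metric, path = new_metric, new_path
    return path[0]   # codewords are the paths ending in the zero syndrome

print(viterbi_decode([0.9, -1.1, -0.2]))   # agrees with the hard decisions (valid parity)
print(viterbi_decode([0.9, 0.8, -0.1]))    # flips the least reliable position, as Wagner would
```

On the second input the hard decisions (0, 0, 1) violate parity, and the trellis search flips the position with the smallest $|r|$, exactly the Wagner rule the abstract refers to.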


International Symposium on Microarchitecture | 2009

Characterizing flash memory: anomalies, observations, and applications

Laura M. Grupp; Adrian M. Caulfield; Joel Coburn; Steven Swanson; Eitan Yaakobi; Paul H. Siegel; Jack K. Wolf

Despite flash memory's promise, it suffers from many idiosyncrasies such as limited durability, data integrity problems, and asymmetry in operation granularity. As architects, we aim to find ways to overcome these idiosyncrasies while exploiting flash memory's useful characteristics. To be successful, we must understand the trade-offs between the performance, cost (in both power and dollars), and reliability of flash memory. In addition, we must understand how different usage patterns affect these characteristics. Flash manufacturers provide only conservative guidelines about these metrics, and this lack of detail makes it difficult to design systems that fully exploit flash memory's capabilities. We have empirically characterized flash memory technology from five manufacturers by directly measuring the performance, power, and reliability. We demonstrate that performance varies significantly across vendors and devices, and deviates from publicly available datasheets. We also demonstrate and quantify some unexpected device characteristics and show how we can use them to improve the responsiveness and energy consumption of solid state disks by 44% and 13%, respectively, as well as increase flash device lifetime by 5.2x.


IEEE Transactions on Information Theory | 1967

On linear unequal error protection codes

Burt Masnick; Jack K. Wolf

The class of codes discussed in this paper has the property that its error-correction capability is described in terms of correcting errors in specific digits of a code word even though other digits in the code may be decoded incorrectly. To each digit of the code words is assigned an error protection level $f_i$. Then, if $f$ errors occur in the reception of a code word, all digits which have protection $f_i$ greater than or equal to $f$ will be decoded correctly even though the entire code word may not be decoded correctly. Methods for synthesizing these codes are described and illustrated by examples. One method of synthesis involves combining the parity check matrices of two or more ordinary random error-correcting codes to form the parity check matrix of the new code. A decoding algorithm based upon the decoding algorithms of the component codes is presented. A second method of code generation is described which follows from the observation that for a linear code, the columns of the parity check matrix corresponding to the check positions must span the column space of the matrix. Upper and lower bounds are derived for the number of check digits required for such codes. The lower bound is based upon counting the number of unique syndromes required for a specified error-correction capability. The upper bound is the result of a constructive procedure for forming the parity check matrices of these codes. Tables of numerical values for the upper and lower bounds are presented.
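The syndrome-counting idea behind the lower bound is easiest to see in its ordinary-code analogue, where it reduces to the familiar sphere-packing (Hamming) bound: every correctable error pattern must map to a distinct syndrome, so $2^r$ must be at least the number of patterns of weight up to $t$. (The paper's UEP version counts syndromes per protection level; this sketch shows only the uniform case.)

```python
from math import comb

def min_check_digits(n, t):
    """Smallest r with 2**r >= number of binary error patterns of weight <= t:
    the sphere-packing lower bound on check digits for a t-error-correcting code."""
    patterns = sum(comb(n, i) for i in range(t + 1))
    r = 0
    while 2 ** r < patterns:
        r += 1
    return r

print(min_check_digits(7, 1))    # 3, achieved with equality by the (7,4) Hamming code
print(min_check_digits(15, 2))   # a lower bound; actual double-error-correcting codes may need more
```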


IEEE Communications Magazine | 1989

A pragmatic approach to trellis-coded modulation

Andrew J. Viterbi; Jack K. Wolf; Ephraim Zehavi; Roberto Padovani

Since the early 1970s, for power-limited applications, the convolutional code of constraint length $K = 7$ and rate 1/2, optimum in the sense of maximum free distance and minimum number of bit errors caused by remerging paths at the free distance, has become the de facto standard for coded digital communication. This was reinforced when punctured versions of this code became the standard for rate 3/4 and 7/8 codes for moderately bandlimited channels. Methods are described for using the same $K = 7$, rate 1/2 convolutional code with signal phase constellations of 8-PSK and 16-PSK and quadrature amplitude constellations of 16-QASK, 64-QASK, and 256-QASK to achieve, respectively, 2 and 3, and 2, 4, and 6 b/s/Hz bandwidth efficiencies while providing power efficiency that in most cases is virtually equivalent to that of the best Ungerboeck codes of constraint length 7 (64 states). This pragmatic approach to all coding applications permits the use of a single basic coder and decoder to achieve respectable coding (power) gains for bandwidth efficiencies from 1 b/s/Hz to 6 b/s/Hz.
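A sketch of the pragmatic idea for 8-PSK at 2 b/s/Hz: per symbol, one input bit passes through the standard rate-1/2 encoder (generators 171 and 133 octal for the $K = 7$ code; the shift-register convention below is one common choice) and a second bit is left uncoded; the three resulting bits select a phase. The mapping shown is illustrative, not necessarily the exact labeling of the paper.

```python
import cmath

G1, G2 = 0o171, 0o133          # generators of the de facto standard K=7, rate-1/2 code

def conv_encode(bits):
    """Rate-1/2, K=7 convolutional encoder; returns one coded bit pair per input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0x7F          # 7-bit shift register
        c1 = bin(state & G1).count("1") & 1        # parity of tapped register bits
        c2 = bin(state & G2).count("1") & 1
        out.append((c1, c2))
    return out

def map_8psk(coded_pair, uncoded_bit):
    """Pragmatic 8-PSK mapping: 2 coded bits + 1 uncoded bit pick one of 8 phases."""
    idx = (uncoded_bit << 2) | (coded_pair[0] << 1) | coded_pair[1]
    return cmath.exp(2j * cmath.pi * idx / 8)

# The impulse response has weight 10, the free distance of this code
print(conv_encode([1, 0, 0, 0, 0, 0, 0]))
```

The point of the scheme is that the same encoder (and the same 64-state Viterbi decoder) is reused unchanged across all the constellations listed above; only the mapper and the number of uncoded bits change.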


IEEE Transactions on Communications | 1986

On Tail Biting Convolutional Codes

Howard H. Ma; Jack K. Wolf

In this paper, we introduce generalized tail biting encoding as a means to ameliorate the rate deficiency caused by zero-tail convolutional encoding. This technique provides an important link between quasi-cyclic block and convolutional codes. Optimum and suboptimum decoding algorithms for these codes are described and their performance determined by analytical and simulation techniques.
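For a feedforward encoder, tail biting amounts to circular convolution of the message with the generator taps: the register is initialized with the last $m$ message bits, so the start and end states coincide and the $m$-bit zero tail (the source of the rate deficiency) disappears. A toy sketch with a hypothetical memory-2, rate-1/2 code:

```python
def tb_encode(bits, g=(0b111, 0b101)):
    """Tail-biting rate-1/2 encoding for a memory-2 feedforward code:
    circular convolution of the message with the generator taps."""
    n = len(bits)
    out = []
    for i in range(n):
        # window of the current bit and the two previous bits, wrapping around
        window = (bits[i] << 2) | (bits[(i - 1) % n] << 1) | bits[(i - 2) % n]
        out.append(tuple(bin(window & gi).count("1") & 1 for gi in g))
    return out

# Quasi-cyclic property: cyclically shifting the message cyclically shifts the codeword
print(tb_encode([1, 0, 0, 0]))
print(tb_encode([0, 1, 0, 0]))
```

The two printed codewords are cyclic shifts of one another, which is exactly the link between tail-biting convolutional codes and quasi-cyclic block codes that the abstract mentions.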


IEEE Journal on Selected Areas in Communications | 1992

Finite-state modulation codes for data storage

Brian Marcus; Paul H. Siegel; Jack K. Wolf

The authors provide a self-contained exposition of modulation code design methods based upon the state splitting algorithm. They review the necessary background on finite state transition diagrams, constrained systems, and Shannon (1948) capacity. The state splitting algorithm for constructing finite state encoders is presented and summarized in a step-by-step fashion. These encoders automatically have state-dependent decoders. It is shown that for the class of finite-type constrained systems, the encoders constructed can be made to have sliding-block decoders. The authors consider practical techniques for reducing the number of encoder states as well as the size of the sliding-block decoder window. They discuss the class of almost-finite-type systems and state the general results which yield noncatastrophic encoders. The techniques are applied to the design of several codes of interest in digital data recording.
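The Shannon capacity that any such encoder's rate must stay below is $\log_2 \lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of the adjacency matrix of the constraint's state transition diagram. A sketch for the $(d,k) = (1,3)$ run-length constraint (parameters chosen as an example), with the eigenvalue found by power iteration:

```python
import math

# States track how many 0s have been emitted since the last 1; (d,k) = (1,3)
d, k = 1, 3
A = [[0] * (k + 1) for _ in range(k + 1)]
for s in range(k + 1):
    if s + 1 <= k:
        A[s][s + 1] = 1      # emitting a 0 lengthens the current run
    if s >= d:
        A[s][0] = 1          # emitting a 1 is legal once the run is at least d

def capacity(adj, iters=300):
    """Shannon capacity log2(lambda_max) of the constraint, via power iteration."""
    v = [1.0] * len(adj)
    lam = 1.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in adj]
        lam = max(w)
        v = [x / lam for x in w]
    return math.log2(lam)

print(round(capacity(A), 4))
```

The result, about 0.5515, is why a rate-1/2 code for the (1,3) constraint (such as MFM) is nearly optimal: the state splitting algorithm can construct an encoder at any rational rate below this capacity.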


IEEE Transactions on Information Theory | 1998

Codes for digital recorders

K.E. Schouhamer Immink; Paul H. Siegel; Jack K. Wolf

Constrained codes are a key component in digital recording devices that have become ubiquitous in computer data storage and electronic entertainment applications. This paper surveys the theory and practice of constrained coding, tracing the evolution of the subject from its origins in Shannon's classic 1948 paper to present-day applications in high-density digital recorders. Open problems and future research directions are also addressed.


IEEE Transactions on Communications | 1983

Redundancy, the Discrete Fourier Transform, and Impulse Noise Cancellation

Jack K. Wolf

The relationship between the discrete Fourier transform and error-control codes is examined. We show that, under certain conditions, discrete-time sequences carry redundant information which allows for the detection and correction of errors. An application of this technique to impulse noise cancellation for pulse amplitude modulation transmission is described.
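A toy sketch of the idea (the construction is entirely hypothetical): a real sequence whose DFT happens to vanish at two known bins carries redundancy, and a single additive impulse can be located and cancelled from those two "syndrome" bins alone, exactly as an error-correcting code uses its syndromes.

```python
import cmath, math

N = 16
# A real sequence with energy only at DFT bins 3 and 13, so bins 1 and 2 are zero:
# those forced-zero bins play the role of parity checks.
x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]

def dft_bin(seq, k):
    return sum(v * cmath.exp(-2j * cmath.pi * k * n / N) for n, v in enumerate(seq))

# Channel adds one impulse of unknown amplitude at an unknown position
y = list(x)
p_true, amp = 11, 2.5
y[p_true] += amp

# Syndromes: the received DFT at the bins known to be zero.
# An impulse e at position p gives S_k = e * exp(-2*pi*i*k*p/N).
S1, S2 = dft_bin(y, 1), dft_bin(y, 2)
ratio = S2 / S1                                          # = exp(-2*pi*i*p/N)
p_hat = round(-cmath.phase(ratio) * N / (2 * math.pi)) % N
amp_hat = (S1 * cmath.exp(2j * math.pi * p_hat / N)).real
y[p_hat] -= amp_hat                                      # cancel the impulse
print(p_hat, round(amp_hat, 3))
```

The two zero bins locate and size one impulse; more forced-zero bins would allow more impulses to be cancelled, mirroring the distance properties of the corresponding code.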


IEEE Communications Magazine | 1991

Modulation and coding for information storage

Paul H. Siegel; Jack K. Wolf

Many of the types of modulation codes designed for use in storage devices based on magnetic recording are discussed. The codes are intended to minimize the negative effects of intersymbol interference. The channel model is first presented. The peak detection systems used in most commercial disk drives are described, as are the run-length-limited $(d,k)$ codes they use. Recently introduced recording channel technology based on partial response with maximum-likelihood sampling detection (PRML) is then considered. Several examples are given to illustrate that the introduction of partial-response equalization, sampling detection, and digital signal processing has set the stage for the invention and application of advanced modulation and coding techniques in future storage products.
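To make the $(d,k)$ constraint concrete (for peak detection, $d$ zeros between ones limit intersymbol interference, while at most $k$ zeros guarantee frequent transitions for timing recovery), here is a small hypothetical checker:

```python
def satisfies_dk(bits, d, k):
    """Check the (d,k) run-length constraint: at least d and at most k zeros
    between consecutive 1s, and no run of more than k zeros anywhere."""
    run = 0
    seen_one = False
    for b in bits:
        if b == 1:
            if seen_one and run < d:
                return False      # ones too close together
            seen_one = True
            run = 0
        else:
            run += 1
            if run > k:
                return False      # too long without a transition
    return True

print(satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], 2, 3))   # runs of 2 and 3 zeros: legal
print(satisfies_dk([1, 1, 0], 2, 3))                  # adjacent ones violate d
```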

Collaboration


Jack K. Wolf's most frequent co-authors.

Top Co-Authors

Paul H. Siegel
University of California

Eitan Yaakobi
Technion – Israel Institute of Technology

H.N. Bertram
University of California

Donald F. Towsley
University of Massachusetts Amherst