Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paul H. Siegel is active.

Publication


Featured research published by Paul H. Siegel.


International Symposium on Microarchitecture | 2009

Characterizing flash memory: anomalies, observations, and applications

Laura M. Grupp; Adrian M. Caulfield; Joel Coburn; Steven Swanson; Eitan Yaakobi; Paul H. Siegel; Jack K. Wolf

Despite flash memory's promise, it suffers from many idiosyncrasies such as limited durability, data integrity problems, and asymmetry in operation granularity. As architects, we aim to find ways to overcome these idiosyncrasies while exploiting flash memory's useful characteristics. To be successful, we must understand the trade-offs between the performance, cost (in both power and dollars), and reliability of flash memory. In addition, we must understand how different usage patterns affect these characteristics. Flash manufacturers provide only conservative guidelines about these metrics, and this lack of detail makes it difficult to design systems that fully exploit flash memory's capabilities. We have empirically characterized flash memory technology from five manufacturers by directly measuring the performance, power, and reliability. We demonstrate that performance varies significantly between vendors, devices, and from publicly available datasheets. We also demonstrate and quantify some unexpected device characteristics and show how we can use them to improve responsiveness and energy consumption of solid state disks by 44% and 13%, respectively, as well as increase flash device lifetime by 5.2x.
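
The characterization rests on timing individual flash operations directly. Below is a minimal sketch of such a latency-measurement harness, not the authors' tooling; `program_page` is a hypothetical callback standing in for whatever raw-flash interface is available.

```python
# Minimal sketch of a per-operation latency harness (illustrative only).
# `program_page` is a hypothetical callback that issues one raw page-program
# command to the device under test.
import statistics
import time

def characterize_program_latency(program_page, pages, repeats=3):
    """Time page-program operations and return summary statistics in microseconds."""
    samples = []
    for page in pages:
        for _ in range(repeats):
            start = time.perf_counter()
            program_page(page)                     # hypothetical raw-flash call
            samples.append((time.perf_counter() - start) * 1e6)
    return {
        "mean_us": statistics.mean(samples),
        "stdev_us": statistics.stdev(samples) if len(samples) > 1 else 0.0,
        "max_us": max(samples),
    }
```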


IEEE Journal on Selected Areas in Communications | 2001

Performance analysis and code optimization of low density parity-check codes on Rayleigh fading channels

Jilei Hou; Paul H. Siegel; Laurence B. Milstein

A numerical method has previously been presented for determining the noise thresholds of low-density parity-check (LDPC) codes under message-passing decoding on the additive white Gaussian noise (AWGN) channel. In this paper, we apply the technique to the uncorrelated flat Rayleigh fading channel. Using a nonlinear code optimization technique, we optimize irregular LDPC codes for such a channel. The thresholds of the optimized irregular LDPC codes are very close to the Shannon limit for this channel. For example, at rate one-half, the optimized irregular LDPC code has a threshold only 0.07 dB away from the capacity of the channel. Furthermore, we compare the simulated performance of the optimized irregular LDPC codes and turbo codes on a land mobile channel, and the results indicate that, at a block size of 3072, irregular LDPC codes can outperform turbo codes over a wide range of mobile speeds.
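
For reference, a sketch of the channel model commonly assumed in this setting (not quoted from the paper): flat Rayleigh fading with perfect channel state information at the receiver, so the decoder-input LLRs scale with the fading amplitude and density evolution tracks the distribution of such messages to locate the noise threshold.

```latex
% Assumed flat Rayleigh fading model with ideal channel state information.
\[
  y_i = a_i x_i + n_i, \qquad x_i \in \{+1,-1\}, \quad
  a_i \sim \mathrm{Rayleigh},\ \mathbb{E}[a_i^2] = 1, \quad
  n_i \sim \mathcal{N}(0, N_0/2),
\]
% The channel LLR fed to the message-passing decoder is then
\[
  L(y_i \mid a_i)
  = \ln \frac{p(y_i \mid x_i = +1, a_i)}{p(y_i \mid x_i = -1, a_i)}
  = \frac{4\, a_i\, y_i}{N_0}.
\]
```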


IEEE Journal on Selected Areas in Communications | 1992

Finite-state modulation codes for data storage

Brian Marcus; Paul H. Siegel; Jack K. Wolf

The authors provide a self-contained exposition of modulation code design methods based upon the state splitting algorithm. They review the necessary background on finite state transition diagrams, constrained systems, and Shannon (1948) capacity. The state splitting algorithm for constructing finite state encoders is presented and summarized in a step-by-step fashion. These encoders automatically have state-dependent decoders. It is shown that for the class of finite-type constrained systems, the encoders constructed can be made to have sliding-block decoders. The authors consider practical techniques for reducing the number of encoder states as well as the size of the sliding-block decoder window. They discuss the class of almost-finite-type systems and state the general results which yield noncatastrophic encoders. The techniques are applied to the design of several codes of interest in digital data recording.
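
A small illustrative sketch (not from the paper), using the standard fact that the Shannon capacity of a constrained system presented by a finite state transition diagram is log2 of the largest eigenvalue of the diagram's adjacency matrix; shown here for the (d, k) = (1, 3) run-length-limited constraint.

```python
# Capacity of the (1,3)-RLL constraint from its state transition diagram.
# States track the number of 0s since the last 1; an edge labeled 1 returns to
# state 0, an edge labeled 0 advances the run length (up to k = 3).
import numpy as np

A = np.array([
    [0, 1, 0, 0],   # state 0: a 0 must follow (d = 1)
    [1, 0, 1, 0],   # state 1: emit 1 -> state 0, or 0 -> state 2
    [1, 0, 0, 1],   # state 2: emit 1 -> state 0, or 0 -> state 3
    [1, 0, 0, 0],   # state 3: run of k = 3 zeros reached, a 1 must follow
], dtype=float)

lam = max(abs(np.linalg.eigvals(A)))
print(f"capacity of the (1,3)-RLL constraint ≈ {np.log2(lam):.4f} bits/symbol")
# ≈ 0.5515, which is why rate-1/2 codes such as MFM fit inside this constraint.
```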


IEEE Transactions on Information Theory | 2003

Capacity-approaching bandwidth-efficient coded modulation schemes based on low-density parity-check codes

Jilei Hou; Paul H. Siegel; Laurence B. Milstein; Henry D. Pfister

We design multilevel coding (MLC) and bit-interleaved coded modulation (BICM) schemes based on low-density parity-check (LDPC) codes. The analysis and optimization of the LDPC component codes for the MLC and BICM schemes are complicated because, in general, the equivalent binary-input component channels are not necessarily symmetric. To overcome this obstacle, we deploy two different approaches: one based on independent and identically distributed (i.i.d.) channel adapters and the other based on coset codes. By incorporating i.i.d. channel adapters, we can force the symmetry of each binary-input component channel. By considering coset codes, we extend the concentration theorem based on previous work by Richardson et al. (see ibid., vol. 47, p. 599-618, Feb. 2001) and Kavčić et al. (see ibid., vol. 49, p. 1636-52, July 2003). We also discuss the relation between the systems based on the two approaches and show that they indeed have the same expected decoder behavior. Next, we jointly optimize the code rates and degree distribution pairs of the LDPC component codes for the MLC scheme. The optimized irregular LDPC codes at each level of MLC with multistage decoding (MSD) are able to perform well at signal-to-noise ratios (SNR) very close to the capacity of the additive white Gaussian noise (AWGN) channel. We also show that the optimized BICM scheme can approach the parallel independent decoding (PID) capacity as closely as does the MLC/PID scheme. Simulations with very large codeword length verify the accuracy of the analytical results. Finally, we compare the simulated performance of these coded modulation schemes at finite codeword lengths, and consider the results from the perspective of a random coding exponent analysis.
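
A minimal sketch (an illustration under assumed conventions, not the paper's code) of the i.i.d. channel adapter idea: each code bit is XORed with an i.i.d. uniform bit shared with the receiver, and the corresponding LLR sign is flipped before decoding, which forces the effective binary-input component channel to be symmetric.

```python
# i.i.d. channel adapter sketch: scramble before the channel, unscramble in the
# LLR domain at the receiver. The "component channel" here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)

def adapt_and_transmit(code_bits, channel_llr_fn):
    """code_bits: 0/1 array; channel_llr_fn: maps transmitted bits to LLRs."""
    scramble = rng.integers(0, 2, size=code_bits.shape)     # i.i.d. adapter bits
    tx = code_bits ^ scramble                                # XOR before the channel
    llr = channel_llr_fn(tx)                                 # LLRs for the scrambled bits
    return np.where(scramble == 1, -llr, llr)                # undo the XOR in the LLR domain

# Example with a toy BPSK/AWGN component channel (assumed mapping 0 -> +1, 1 -> -1):
sigma2 = 0.5
toy_channel = lambda b: 2 * (1 - 2 * b + rng.normal(0, np.sqrt(sigma2), b.shape)) / sigma2
print(adapt_and_transmit(np.array([0, 1, 1, 0, 1]), toy_channel))
```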


Global Communications Conference | 2001

On the achievable information rates of finite state ISI channels

Henry D. Pfister; Joseph Binamira Soriaga; Paul H. Siegel

In this paper, we present two simple Monte Carlo methods for estimating the achievable information rates of general finite state channels. Both methods require only the ability to simulate the channel with an a posteriori probability (APP) detector matched to the channel. The first method estimates the mutual information rate between the input random process and the output random process, provided that both processes are stationary and ergodic. When the inputs are i.i.d. equiprobable, this rate is known as the Symmetric Information Rate (SIR). The second method estimates the achievable information rate of an explicit coding system which interleaves m independent codes onto the channel and employs multistage decoding. For practical values of m, numerical results show that this system nearly achieves the SIR. Both methods are applied to the class of partial response channels commonly used in magnetic recording.
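
A minimal sketch of the first method for a concrete example, the dicode partial-response channel 1 - D with i.i.d. equiprobable inputs: the forward recursion of the APP detector yields log p(y^n), from which the mutual information rate is estimated. Assumptions not stated in the abstract: unit-energy inputs and SNR defined as 1/sigma^2.

```python
# Monte Carlo SIR estimate for the dicode channel y_k = x_k - x_{k-1} + n_k
# with i.i.d. equiprobable inputs (illustrative sketch, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

def sir_estimate(snr_db, n=200_000):
    """Estimate the symmetric information rate in bits per channel use."""
    sigma2 = 10 ** (-snr_db / 10)                  # assumed convention: SNR = 1 / sigma^2
    x = rng.choice([-1.0, 1.0], size=n)            # i.i.d. equiprobable inputs
    prev = np.concatenate(([1.0], x[:-1]))         # channel state: previous input (arbitrary start)
    y = x - prev + rng.normal(0.0, np.sqrt(sigma2), size=n)

    # Forward (APP detector) recursion over the two channel states, accumulating
    # log2 p(y^n) with per-step normalization to avoid underflow.
    states = np.array([-1.0, 1.0])
    alpha = np.array([0.5, 0.5])                   # forward state probabilities
    log_py = 0.0
    norm = 1.0 / np.sqrt(2 * np.pi * sigma2)
    for k in range(n):
        # p(y_k | input = states[i], state = states[j]); noiseless output is input - state
        mean = states[:, None] - states[None, :]
        lik = norm * np.exp(-(y[k] - mean) ** 2 / (2 * sigma2))
        alpha = 0.5 * (lik @ alpha)                # next state equals the current input
        scale = alpha.sum()
        log_py += np.log2(scale)
        alpha /= scale

    h_y = -log_py / n                              # output (differential) entropy rate estimate
    h_y_given_x = 0.5 * np.log2(2 * np.pi * np.e * sigma2)
    return h_y - h_y_given_x                       # I(X;Y) per channel use

print(f"SIR at 5 dB ≈ {sir_estimate(5.0):.3f} bits/channel use")
```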


Information Theory Workshop | 1989

Matched Spectral Null Codes for Partial Response Channels

Razmik Karabed; Paul H. Siegel

A new family of codes that improve the reliability of digital communication over noisy, partial-response channels is described. The codes are intended for use on channels where the input alphabet size is limited. These channels arise in the context of digital data recording and certain data transmission applications. The codes-called matched-spectral-null codes-satisfy the property that the frequencies at which the code power spectral density vanishes correspond precisely to the frequencies at which the channel transfer function is zero. It is shown that matched-spectral-null sequences provide a distance gain on the order of 3 dB and higher for a broad class of partial-response channels. The embodiment of the system incorporates a sliding-block code and a Viterbi detector based upon a reduced-complexity trellis structure. The detectors are shown to achieve the same asymptotic average performance as maximum-likelihood sequence detectors, and the sliding-block codes exclude quasi-catastrophic trellis sequences in order to reduce the required path memory length and improve worst-case detector performance. Several examples are described in detail.
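
For a concrete illustration (a standard example, not a quotation from the paper): the dicode partial-response channel 1 - D has a spectral null at DC, so a matched-spectral-null code constrains its sequences to place a null of the code power spectrum at DC as well, for example by keeping the running digital sum of the ±1 code symbols bounded.

```latex
% Spectral-null matching for the dicode channel (illustrative).
\[
  h(D) = 1 - D, \qquad
  \bigl| H(e^{j 2\pi f}) \bigr|^2 = \bigl| 1 - e^{-j 2\pi f} \bigr|^2 = 4 \sin^2(\pi f),
\]
% which vanishes at f = 0; the matched code therefore has zero power spectral
% density at f = 0.
```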


IEEE Transactions on Information Theory | 1998

Codes for digital recorders

K.A. Schouhamer Immink; Paul H. Siegel; Jack K. Wolf

Constrained codes are a key component in digital recording devices that have become ubiquitous in computer data storage and electronic entertainment applications. This paper surveys the theory and practice of constrained coding, tracing the evolution of the subject from its origins in Shannon's classic 1948 paper to present-day applications in high-density digital recorders. Open problems and future research directions are also addressed.


IEEE Transactions on Magnetics | 1985

Recording codes for digital magnetic storage

Paul H. Siegel

This paper provides a tutorial introduction to recording codes for magnetic disk storage devices and a review of progress in code construction algorithms. Topics covered include: a brief description of typical magnetic recording channels; motivation for use of recording codes; methods of selecting codes to maximize data density and reliability; and techniques for code design and implementation.


IEEE Communications Magazine | 1991

Modulation and coding for information storage

Paul H. Siegel; Jack K. Wolf

Many of the types of modulation codes designed for use in storage devices using magnetic recording are discussed. The codes are intended to minimize the negative effects of intersymbol interference. The channel model is first presented. The peak detection systems used in most commercial disk drives are described, as are the run-length-limited (d,k) codes they use. Recently introduced recording channel technology based on sampling detection, partial-response maximum-likelihood (PRML), is then considered. Several examples are given to illustrate that the introduction of partial-response equalization, sampling detection, and digital signal processing has set the stage for the invention and application of advanced modulation and coding techniques in future storage products.
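
For readers unfamiliar with the run-length-limited (d,k) constraint mentioned above: between consecutive 1s a sequence must contain at least d and no more than k 0s, and no run of more than k 0s may appear anywhere. A small illustrative checker (not from the paper):

```python
# Check whether a binary sequence satisfies the (d, k) run-length constraint.
def satisfies_dk(bits, d, k):
    run = 0                      # current run of 0s
    since_one = False            # whether a 1 has been seen yet
    for b in bits:
        if b == 0:
            run += 1
            if run > k:          # run of 0s too long
                return False
        else:
            if since_one and run < d:   # 1s too close together
                return False
            run = 0
            since_one = True
    return True

print(satisfies_dk([0, 1, 0, 0, 1, 0, 0, 0, 1], d=1, k=3))  # True
print(satisfies_dk([1, 1, 0, 0, 1], d=1, k=3))              # False: adjacent 1s violate d = 1
```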


International Conference on Communications | 1990

VLSI architectures for metric normalization in the Viterbi algorithm

C.B. Shung; Paul H. Siegel; G. Ungerboeck; Hemant K. Thapar

In the realization of Viterbi decoders with finite precision arithmetic, the values of the survivor metrics computed by the add-compare-select (ACS) recursion must remain within a finite numerical range to avoid catastrophic overflow (or underflow) situations. The authors compare several metric normalization techniques which are suitable for VLSI implementations with fixed-point arithmetic. The modulo normalization technique is found to be the most local and uniform approach. An efficient VLSI design of ACS units based on this technique is discussed. The modified comparison rule is found to produce a more efficient ACS architecture than previous results based on subtraction.
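
A minimal sketch of the modulo normalization idea (assumed parameters: W-bit metric registers and a true metric spread below 2^(W-1)): survivor metrics are allowed to wrap around modulo 2^W, and the ACS comparison is performed on the two's-complement difference, so no explicit renormalization step is needed.

```python
# Modulo-normalized add-compare-select sketch with W-bit wraparound metrics.
W = 8
MOD = 1 << W

def mod_add(metric, branch):
    """Add a branch metric, letting the register wrap modulo 2^W."""
    return (metric + branch) % MOD

def mod_less_equal(a, b):
    """Compare two wrapped metrics via the signed (two's-complement) difference;
    correct as long as the true metric spread is below 2^(W-1)."""
    diff = (a - b) % MOD
    if diff >= MOD // 2:          # interpret the difference as a signed W-bit value
        diff -= MOD
    return diff <= 0              # True if metric a is smaller or equal

# Example ACS step for one state with two incoming branches (hypothetical values):
m0, m1 = 250, 3                   # m1 has wrapped past 255 back to a small value
b0, b1 = 4, 7
c0, c1 = mod_add(m0, b0), mod_add(m1, b1)
survivor = c0 if mod_less_equal(c0, c1) else c1
print(survivor)                   # 254: the comparison remains correct despite wraparound
```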

Collaboration


Dive into Paul H. Siegel's collaborations.

Top Co-Authors

Jack K. Wolf

University of California

Eitan Yaakobi

Technion – Israel Institute of Technology

Minghai Qin

University of California

Pengfei Huang

University of California
