Publications


Featured research published by Evangelos Eleftheriou.


IEEE Transactions on Information Theory | 2005

Regular and irregular progressive edge-growth Tanner graphs

Xiao-Yu Hu; Evangelos Eleftheriou; Dieter-Michael Arnold

We propose a general method for constructing Tanner graphs with large girth by establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called the progressive edge-growth (PEG) algorithm. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Simple variations of the PEG algorithm can also be applied to generate linear-time encodable LDPC codes. Regular and irregular LDPC codes using PEG Tanner graphs and allowing symbol nodes to take values over GF(q) (q>2) are investigated. Simulation results show that the PEG algorithm is a powerful method for generating good short-block-length LDPC codes.
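
As a purely illustrative companion to the abstract, the sketch below implements the edge-by-edge placement idea in Python: each new edge from a symbol node goes to a check node that is still unreachable from it, or otherwise as far away as possible in the current subgraph, with ties broken by lowest check-node degree. The adjacency-list representation, function names, and the regular symbol-degree assumption are choices made for this example, not details taken from the paper.

    def peg_construct(n_sym, n_chk, sym_degree):
        """Toy PEG-style construction: place each edge so that the local girth
        seen from the symbol node stays as large as possible."""
        sym_adj = [[] for _ in range(n_sym)]   # symbol node -> connected check nodes
        chk_adj = [[] for _ in range(n_chk)]   # check node  -> connected symbol nodes

        def check_depths(s):
            """Depth at which each check node is first reached from symbol node s
            in the current subgraph (None if unreachable)."""
            depth = [None] * n_chk
            seen_sym = {s}
            frontier = set(sym_adj[s])
            d = 0
            while frontier:
                for c in frontier:
                    depth[c] = d
                next_sym = {v for c in frontier for v in chk_adj[c]} - seen_sym
                seen_sym |= next_sym
                frontier = {c for v in next_sym for c in sym_adj[v] if depth[c] is None}
                d += 1
            return depth

        for s in range(n_sym):
            for k in range(sym_degree):
                if k == 0:
                    # First edge of this symbol node: lowest-degree check node.
                    c = min(range(n_chk), key=lambda x: len(chk_adj[x]))
                else:
                    depth = check_depths(s)
                    unreachable = [c for c in range(n_chk) if depth[c] is None]
                    if unreachable:
                        candidates = unreachable          # adding this edge closes no cycle
                    else:
                        far = max(depth)                  # deepest reachable check nodes
                        candidates = [c for c in range(n_chk) if depth[c] == far]
                    c = min(candidates, key=lambda x: len(chk_adj[x]))
                sym_adj[s].append(c)
                chk_adj[c].append(s)
        return sym_adj

    # Tiny example: 8 symbol nodes of degree 2 connected to 4 check nodes.
    print(peg_construct(n_sym=8, n_chk=4, sym_degree=2))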


IEEE Transactions on Control Systems Technology | 2007

A Survey of Control Issues in Nanopositioning

Santosh Devasia; Evangelos Eleftheriou; S.O.R. Moheimani

Nanotechnology is the science of understanding and controlling matter at dimensions of 100 nm or less. Encompassing nanoscale science, engineering, and technology, nanotechnology involves imaging, measuring, modeling, and manipulating matter at this level of precision. An important aspect of research in nanotechnology involves precision control and manipulation of devices and materials at the nanoscale, i.e., nanopositioning. Nanopositioners are precision mechatronic systems designed to move objects over a small range with a resolution down to a fraction of an atomic diameter. The desired attributes of a nanopositioner are extremely high resolution, accuracy, stability, and fast response. The key to successful nanopositioning is accurate position sensing and feedback control of the motion. This paper presents an overview of nanopositioning technologies and devices, emphasizing the key role of advanced control techniques in improving precision, accuracy, and speed of operation of these systems.
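
As a purely illustrative aside on the last point about position sensing and feedback control, the toy sketch below closes a discrete-time PI loop around a first-order actuator model; the plant model, gains, and step count are assumptions made for this example and are not taken from the survey.

    # Toy discrete-time PI position loop around a first-order actuator model.
    # All plant and controller values are illustrative, not from the survey.
    dt, a, b = 1e-4, 0.98, 0.02        # sample time; plant x[k+1] = a*x[k] + b*u[k]
    kp, ki = 5.0, 200.0                # PI gains, hand-tuned for this toy plant

    x, integ, target = 0.0, 0.0, 1.0   # position (arbitrary units), integrator state, setpoint
    for _ in range(200):
        error = target - x             # position sensing (assumed noise-free here)
        integ += error * dt
        u = kp * error + ki * integ    # PI control law
        x = a * x + b * u              # plant update
    print(f"position after 200 steps: {x:.4f} (setpoint {target})")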


IEEE Transactions on Communications | 2005

Reduced-complexity decoding of LDPC codes

Jinghu Chen; Ajay Dholakia; Evangelos Eleftheriou; Marc P. C. Fossorier; Xiao-Yu Hu

Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
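
The two simplified check-node updates mentioned above can be sketched for a single check node operating on log-likelihood ratios as follows; the normalization factor and offset value used here are illustrative placeholders rather than the optimized parameters the paper obtains via density evolution.

    import math

    def check_node_update(llrs, method="normalized", alpha=0.8, beta=0.15):
        """Outgoing check-node LLR on each edge from the incoming LLRs on the
        other edges, using the min-sum approximation with a normalization
        factor (alpha) or an additive offset (beta)."""
        out = []
        for i in range(len(llrs)):
            others = [l for j, l in enumerate(llrs) if j != i]
            sign = 1.0
            for l in others:
                sign *= 1.0 if l >= 0 else -1.0
            mag = min(abs(l) for l in others)        # min-sum approximation
            if method == "normalized":
                mag *= alpha                         # normalized min-sum
            elif method == "offset":
                mag = max(mag - beta, 0.0)           # offset min-sum
            out.append(sign * mag)
        return out

    def boxplus(a, b):
        """Exact pairwise sum-product (belief-propagation) check-node operation."""
        return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

    incoming = [1.2, -0.7, 2.5, 0.4]
    print("normalized min-sum:", check_node_update(incoming, "normalized"))
    print("offset min-sum:    ", check_node_update(incoming, "offset"))
    print("exact BP, edge 0:  ", boxplus(boxplus(incoming[1], incoming[2]), incoming[3]))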


Global Communications Conference | 2001

Progressive edge-growth Tanner graphs

Xiao-Yu Hu; Evangelos Eleftheriou; Dieter-Michael Arnold

We propose a general method for constructing Tanner (1981) graphs with large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called the progressive edge-growth (PEG) construction. Lower bounds on the girth and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Encoding of LDPC codes based on the PEG principle is also investigated. We show how to exploit the PEG graph construction to obtain LDPC codes that allow linear-time encoding. The advantages of PEG Tanner graphs over randomly constructed graphs are demonstrated by extensive simulation results on code performance.
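
Since girth is the quantity the PEG construction tries to keep large, a small helper for measuring it can make the idea concrete. The sketch below computes the girth of a Tanner graph given as symbol-node adjacency lists by breadth-first search from every node; the representation and function name are illustrative, not taken from the paper.

    from collections import deque

    def tanner_girth(sym_adj, n_chk):
        """Length of the shortest cycle in a bipartite Tanner graph
        (returns infinity if the graph is cycle-free)."""
        chk_adj = [[] for _ in range(n_chk)]
        for s, checks in enumerate(sym_adj):
            for c in checks:
                chk_adj[c].append(s)

        best = float("inf")
        for start in range(len(sym_adj)):
            # BFS over (kind, index) pairs; the first edge that closes back onto
            # an already-visited node yields a cycle through `start`.
            dist = {("s", start): 0}
            parent = {("s", start): None}
            queue = deque([("s", start)])
            while queue:
                kind, u = queue.popleft()
                nbrs = sym_adj[u] if kind == "s" else chk_adj[u]
                for v in nbrs:
                    nv = ("c", v) if kind == "s" else ("s", v)
                    if nv == parent[(kind, u)]:
                        continue                      # do not step straight back
                    if nv in dist:
                        best = min(best, dist[(kind, u)] + dist[nv] + 1)
                    else:
                        dist[nv] = dist[(kind, u)] + 1
                        parent[nv] = (kind, u)
                        queue.append(nv)
        return best

    # Symbols 0 and 1 both connect to checks 0 and 1, giving a length-4 cycle.
    print(tanner_girth([[0, 1], [0, 1], [2]], n_chk=3))   # -> 4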


Global Communications Conference | 2001

Efficient implementations of the sum-product algorithm for decoding LDPC codes

Xiao-Yu Hu; Evangelos Eleftheriou; Dieter-Michael Arnold; Ajay Dholakia

Efficient implementations of the sum-product algorithm (SPA) are presented for decoding low-density parity-check (LDPC) codes using log-likelihood ratios (LLR) as messages between symbol and parity-check nodes. Various reduced-complexity derivatives of the LLR-SPA are proposed. Both serial and parallel implementations are investigated, leading to trellis and tree topologies, respectively. Furthermore, by exploiting the inherent robustness of LLRs, it is shown, via simulations, that coarse quantization tables are sufficient to implement complex core operations with negligible or no loss in performance. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate design point in high-speed applications from a performance, latency and computational complexity perspective.
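
The pairwise core operation of LLR-based sum-product decoding can be written as a sign-min term plus two correction terms involving f(x) = ln(1 + exp(-x)); the sketch below compares the exact correction with a coarse lookup table, in the spirit of the quantization observation above. The table breakpoints and values are illustrative and are not the tables used in the paper.

    import math

    def f_exact(x):
        """Correction term f(x) = ln(1 + exp(-x)) used in the exact core operation."""
        return math.log1p(math.exp(-x))

    # Coarse, monotonically decreasing lookup table for f(x) (illustrative values).
    TABLE = [(0.5, 0.55), (1.0, 0.40), (1.5, 0.28), (2.0, 0.19),
             (2.5, 0.13), (3.5, 0.07), (4.5, 0.03)]

    def f_table(x):
        for threshold, value in TABLE:
            if x < threshold:
                return value
        return 0.0

    def core_op(a, b, f):
        """a [+] b = sign(a)sign(b)*min(|a|,|b|) + f(|a+b|) - f(|a-b|)."""
        s = (1.0 if a >= 0 else -1.0) * (1.0 if b >= 0 else -1.0)
        return s * min(abs(a), abs(b)) + f(abs(a + b)) - f(abs(a - b))

    a, b = 1.8, -0.6
    print("exact core op :", core_op(a, b, f_exact))
    print("table core op :", core_op(a, b, f_table))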


IEEE Transactions on Communications | 1989

Decoding of trellis-encoded signals in the presence of intersymbol interference and noise

Pierre R. Chevillat; Evangelos Eleftheriou

A novel receiver for data-transmission systems using trellis-coded modulation is investigated. It comprises a whitened-matched filter and a trellis decoder which combines the previously separated functions of equalization and trellis-coded modulation (TCM) decoding. TCM encoder, transmission channel, and whitened-matched filter are modeled by a single finite-state machine with combined intersymbol interference and code states. Using ISI-state truncation techniques and the set-partitioning principles inherent in TCM, a systematic method is then developed for reducing the state complexity of the corresponding ISI and code trellis. A modified branch metric is used for canceling those ISI terms which are not represented by the trellis states. The approach leads to a family of Viterbi decoders which offer a tradeoff between decoding complexity and performance. An adaptive version of the proposed receiver is discussed, and an efficient structure for reduced-state decoding is given. Simulation results are presented for channels with severe amplitude and phase distortion. It is shown that the proposed receiver achieves a significant gain in noise margin over a conventional receiver which uses separate linear equalization and TCM decoding.
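
As a rough illustration of the modified branch metric, the sketch below computes a squared-error metric in which only the most recent past symbols are carried in the trellis state, while the remaining ISI terms are cancelled using symbols taken from that state's survivor path (per-survivor decision feedback). The channel taps, symbol values, and function signature are assumptions made for this example, not values from the paper.

    def branch_metric(r, a_new, state_syms, survivor_syms, h):
        """Squared-error metric for one trellis branch.
        r             received sample after the whitened-matched filter
        a_new         candidate new symbol on this branch
        state_syms    K most recent past symbols kept in the trellis state
        survivor_syms older symbols read from the state's survivor path
        h             channel impulse response h[0..L], current tap first
        """
        past = list(state_syms) + list(survivor_syms)          # most recent first
        est = h[0] * a_new + sum(h[i + 1] * past[i] for i in range(len(h) - 1))
        return (r - est) ** 2

    h = [1.0, 0.5, 0.2, 0.1]       # example channel with memory L = 3
    state = [+1, -1]               # K = 2 past symbols carried in the trellis state
    survivor = [+1]                # remaining ISI symbol taken from the survivor path
    print(branch_metric(r=1.15, a_new=+1, state_syms=state, survivor_syms=survivor, h=h))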


IEEE Journal on Selected Areas in Communications | 2002

Filtered multitone modulation for very high-speed digital subscriber lines

Giovanni Cherubini; Evangelos Eleftheriou; Sedat Ölçer

A filter-bank modulation technique called filtered multitone (FMT) and its application to data transmission for very high-speed digital subscriber line technology are described. The proposed scheme leads to significantly lower spectral overlap between adjacent subchannels than known multicarrier techniques such as discrete multitone (DMT) or discrete wavelet multitone. FMT modulation mitigates interference due to echo and near-end crosstalk signals, and increases the system throughput and reach. Signal equalization in an FMT receiver is accomplished in the form of per-subchannel symbol-spaced or fractionally spaced linear or decision-feedback equalization. The problem of channel coding for this type of modulation is also addressed, and an approach that allows combined removal of intersymbol interference via precoding and trellis coding is described. Furthermore, practical design aspects regarding filter-bank realization, initial transceiver training, adaptive equalization, and timing recovery are discussed. Finally, simulation results of the performance achieved by FMT modulation for very high-speed digital subscriber line systems, where upstream and downstream signals are separated by frequency-division duplexing, are presented and compared with DMT modulation.
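
A minimal, non-polyphase sketch of FMT synthesis can make the filter-bank idea concrete: each subchannel's symbol stream is upsampled and filtered by a frequency-shifted copy of a common prototype filter, and the subchannel signals are summed. The prototype filter, subchannel count, and symbol alphabet below are illustrative choices, not design parameters from the paper.

    import numpy as np

    M = 8                                   # number of subchannels / upsampling factor
    n_sym = 16                              # symbols per subchannel
    proto = np.hanning(4 * M)               # toy prototype lowpass filter
    proto /= np.sum(proto)

    rng = np.random.default_rng(0)
    symbols = (rng.choice([-1, 1], size=(M, n_sym)) +
               1j * rng.choice([-1, 1], size=(M, n_sym)))    # QPSK-like symbols

    signal = np.zeros(n_sym * M + len(proto) - 1, dtype=complex)
    n = np.arange(len(proto))
    for k in range(M):
        up = np.zeros(n_sym * M, dtype=complex)
        up[::M] = symbols[k]                              # upsample by M
        g_k = proto * np.exp(2j * np.pi * k * n / M)      # shift prototype to subchannel k
        signal += np.convolve(up, g_k)                    # add the k-th subchannel signal

    print(signal.shape, np.round(signal[:4], 3))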


Proceedings of SYSTOR 2009: The Israeli Experimental Systems Conference | 2009

Write amplification analysis in flash-based solid state drives

Xiao-Yu Hu; Evangelos Eleftheriou; Robert Haas; Ilias Iliadis; Roman A. Pletka

Write amplification is a critical factor limiting the random write performance and write endurance in storage devices based on NAND-flash memories such as solid-state drives (SSD). The impact of garbage collection on write amplification is influenced by the level of over-provisioning and the choice of reclaiming policy. In this paper, we present a novel probabilistic model of write amplification for log-structured flash-based SSDs. Specifically, we quantify the impact of over-provisioning on write amplification analytically and by simulation, assuming workloads of uniformly distributed random short writes. Moreover, we propose modified versions of the greedy garbage-collection reclaiming policy and compare their performance. Finally, we analytically evaluate the benefits of separating static and dynamic data in reducing write amplification, and discuss how to address endurance with proper wear leveling.
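
The effect described above can be reproduced qualitatively with a toy log-structured SSD simulation: uniformly random single-page host writes, one open block, and a greedy garbage collector that always reclaims the closed block with the fewest valid pages. The block and page counts, the over-provisioning level, and the write-amplification value printed at the end are illustrative, not results from the paper.

    import random

    BLOCKS, PAGES = 64, 32                              # physical blocks x pages per block
    user_pages = int(BLOCKS * PAGES * 0.80)             # logical space: 20% over-provisioning

    stored = [[None] * PAGES for _ in range(BLOCKS)]    # physical page -> logical page
    loc = {}                                            # logical page  -> (block, page)
    free = list(range(BLOCKS))
    open_blk, fill = free.pop(), 0
    flash_writes = 0

    def program(lp):
        """Place logical page lp at the current write point (one flash write)."""
        global open_blk, fill, flash_writes
        if fill == PAGES:                               # open block is full: take a fresh one
            open_blk, fill = free.pop(), 0
        if lp in loc:                                   # invalidate the stale copy
            b, p = loc[lp]
            stored[b][p] = None
        stored[open_blk][fill] = lp
        loc[lp] = (open_blk, fill)
        fill += 1
        flash_writes += 1

    def gc_if_needed():
        """Greedy GC: reclaim the closed block with the fewest valid pages."""
        while len(free) < 2:                            # keep one block in reserve
            closed = [b for b in range(BLOCKS) if b != open_blk and b not in free]
            victim = min(closed, key=lambda b: sum(p is not None for p in stored[b]))
            for p in range(PAGES):
                if stored[victim][p] is not None:
                    program(stored[victim][p])          # relocate still-valid page
            stored[victim] = [None] * PAGES             # erase the victim block
            free.append(victim)

    random.seed(0)
    host_writes = 10 * user_pages                       # each logical page rewritten ~10x
    for _ in range(host_writes):
        gc_if_needed()
        program(random.randrange(user_pages))

    print("write amplification:", round(flash_writes / host_writes, 2))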


Asia-Pacific Magnetic Recording Conference | 2002

Millipede: a MEMS-based scanning-probe data-storage system

Evangelos Eleftheriou; Theodore Antonakopoulos; G. Binnig; Giovanni Cherubini; Michel Despont; Ajay Dholakia; U. Dürig; H. Pozidis; Hugo E. Rothuizen; Peter Vettiger

Ultrahigh storage densities of 1 Tbit/in² or more can be achieved by using local-probe techniques to write, read back, and erase data in very thin polymer films. The thermomechanical scanning-probe-based data-storage concept called Millipede combines ultrahigh density, small form factor, and high data rate. After illustrating the principles of operation of the Millipede, we introduce system aspects related to the read-back process, multiplexing, and position-error-signal generation for tracking.


International Conference on Communications | 2002

Low-density parity-check codes for digital subscriber lines

Evangelos Eleftheriou; Sedat Ölçer

Rate-compatible low-density parity-check (LDPC) codes obtained from the class of array LDPC codes are presented. The design methodology described herein retains practical advantages of array LDPC codes such as excellent performance and efficient encodability across all the codes in a rate-compatible family. Different codes in the rate-compatible family can be specified by a small number of parameters and constructed algebraically with a small amount of preprocessing. The rate-compatible codes can be decoded using a generic decoder architecture, leading to efficient implementations. These properties make the codes attractive for use in DSL systems that need to support a large number of code parameters to cope with channel variability.
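
For reference, the parity-check matrix of an array LDPC code is a grid of circulant permutation matrices built from powers of the cyclic-shift matrix; the sketch below constructs such a matrix. It does not reproduce the paper's rate-compatible puncturing and extension rules, and the parameters p, j, and k are illustrative.

    import numpy as np

    def array_ldpc_H(p, j, k):
        """j x k array of p x p circulants P^(i*l), with P the one-step cyclic shift."""
        P = np.roll(np.eye(p, dtype=int), 1, axis=1)
        blocks = [[np.linalg.matrix_power(P, (i * l) % p) for l in range(k)]
                  for i in range(j)]
        return np.block(blocks)

    H = array_ldpc_H(p=7, j=3, k=5)
    print(H.shape)          # (21, 35): a 3 x 5 grid of 7 x 7 circulants
    print(H.sum(axis=0))    # every column has weight j = 3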
