Yunus Emre
Arizona State University
Publications
Featured research published by Yunus Emre.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2012
Chengen Yang; Yunus Emre; Chaitali Chakrabarti
Error control coding (ECC) is essential for correcting soft errors in Flash memories. In this paper we propose the use of product-code-based schemes to support higher error correction capability. Specifically, we propose product codes that use Reed-Solomon (RS) codes along rows and Hamming codes along columns and have reduced hardware overhead. Simulation results show that the product codes achieve better performance than both Bose-Chaudhuri-Hocquenghem (BCH) codes and plain RS codes, with less area and lower latency. We also propose a flexible product-code-based ECC scheme that migrates to a stronger ECC scheme when the number of errors increases with program/erase cycling. While these schemes have slightly larger latency and require additional parity-bit storage, they provide an easy mechanism to increase the lifetime of Flash memory devices.
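To make the row/column structure concrete, here is a minimal, hypothetical Python sketch (not from the paper): a single parity symbol stands in for the RS row code, Hamming(7,4) stands in for the column code, and the 4×8 block size is purely illustrative.

```python
# Toy illustration of a row/column product-code layout. Stand-ins only:
# a single parity symbol replaces the RS row code, Hamming(7,4) replaces the
# column code, and all dimensions are hypothetical.
import numpy as np

def hamming74_encode(d):
    """Encode 4 data bits as a (7,4) Hamming codeword: p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = (int(b) for b in d)
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return np.array([p1, p2, d1, p3, d2, d3, d4], dtype=np.uint8)

def product_encode(data):
    """data: 4 x W bit matrix. Rows get a parity symbol (RS stand-in),
    columns get Hamming(7,4) parity, mimicking the product-code structure."""
    assert data.shape[0] == 4
    row_parity = data.sum(axis=1) % 2                     # stand-in for the RS row code
    col_coded = np.column_stack([hamming74_encode(data[:, c])
                                 for c in range(data.shape[1])])
    return col_coded, row_parity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)
    coded, row_par = product_encode(block)
    print(coded.shape, row_par)                           # (7, 8) column codewords + 4 row parities
```

In a product code, errors that escape one constituent decoder can still be caught by the other, which is what lets each constituent code stay relatively weak and cheap while the combination provides the higher correction capability.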
Journal of the Acoustical Society of America | 2008
Yunus Emre; Vinod Kandasamy; Tolga M. Duman; Paul Hursky; Subhadeep Roy
We investigate the performance of turbo-coded multiple-input multiple-output (MIMO) OFDM systems with layered space-time (LST) architectures for underwater acoustic (UWA) channels, using simulations and results from the AUVFest experiment performed in June 2007. MIMO systems are promising in that they increase reliable transmission rates significantly without consuming additional bandwidth or power. The robustness of OFDM systems with cyclic prefix or zero padding to ISI channels is also well known, so the combination of MIMO techniques and OFDM is a promising technology for shallow-water UWA communications, which are characterized by severe bandwidth limitations and long intersymbol interference (ISI) spans. The paper reviews the necessary components of a MIMO-OFDM communication system, including time and frequency synchronization, channel estimation, and tracking of the time-varying channel parameters, and summarizes the modifications needed to make the system suitable for UWA channels. Results of the AUVFest 2007 experiment are very promising; for instance, 2×2 MIMO-OFDM can reach up to 60 kbps over a bandwidth of 16 kHz with simple receiver structures at a range of 2000 m. In addition to the coherent system, differential and unitary space-time coded MIMO-OFDM scenarios are also considered.
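As a quick illustrative check of the reported figures (an arithmetic aside, not a claim from the paper), the bandwidth efficiency implied by 60 kbps over 16 kHz is

$$\eta = \frac{R}{B} \approx \frac{60\ \text{kb/s}}{16\ \text{kHz}} \approx 3.75\ \text{bits/s/Hz},$$

i.e., roughly 1.9 bits/s/Hz per spatial stream in the 2×2 configuration.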
Signal Processing Systems | 2012
Yunus Emre; Chengen Yang; Ketul B. Sutaria; Yu Cao; Chaitali Chakrabarti
Spin-transfer torque random access memory (STT-RAM) is a promising memory technology because of its fast read access, high storage density, and very low standby power. These memories have reliability issues that need to be better understood before they can be adopted as a mainstream memory technology. In this paper, we first study the causes of errors in a single STT memory cell. We find that process variations and variations in the device geometry affect the failure rate, and we develop error models to capture these effects. Next, we propose a joint technique based on tuning of circuit-level parameters and error control coding (ECC) to achieve very high reliability. Such a combination allows the use of a weaker ECC with smaller overhead. For instance, we show that by applying voltage boosting and write pulse width adjustment, the error correction capability (t) of the ECC can be reduced from t=11 to t=3 while still achieving a block failure rate (BFR) of 10−9.
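The trade-off between circuit-level tuning and ECC strength can be sketched with the standard block-failure-rate expression for a t-error-correcting code over an n-bit block with independent bit errors; the n, t, and p values below are hypothetical, chosen only to show how lowering the raw cell error rate lets a much weaker code reach the same BFR.

```python
# Illustrative only: block failure rate of a t-error-correcting code over an
# n-bit block, assuming independent bit errors with probability p.
from math import comb

def block_failure_rate(n, t, p):
    """P(more than t bit errors in an n-bit block)."""
    ok = sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(t + 1))
    return 1.0 - ok

if __name__ == "__main__":
    n = 512                               # hypothetical block size in bits
    for p in (1e-3, 1e-4, 1e-5):          # raw cell error rate before/after circuit tuning
        print(f"p={p:.0e}  t=11: {block_failure_rate(n, 11, p):.2e}"
              f"  t=3: {block_failure_rate(n, 3, p):.2e}")
```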
Signal Processing Systems | 2010
Yunus Emre; Chaitali Chakrabarti
This paper presents novel algorithm-specific techniques to mitigate the effects of failures in SRAM memory caused by voltage scaling and random dopant fluctuations in scaled technologies, using JPEG2000 as a case study. We propose three techniques that exploit the fact that the high-frequency subband outputs of the discrete wavelet transform (DWT) have a small dynamic range, so errors in the most significant bits can be identified and corrected easily. These schemes do not require any additional memory and have low circuit overhead. We also study several error control coding schemes that are effective in combating errors when the memory failure rates are low. We compare the PSNR versus compression rate performance of the proposed schemes for different memory failure rates. Simulation results show that at high bit error rates (10−2) the error control coding techniques are not effective, while the algorithm-specific techniques improve PSNR by up to 10 dB compared to the no-correction case.
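A minimal, hypothetical sketch of the underlying idea (not the paper's implementation): because a high-frequency subband coefficient is known to be small, a stored value that exceeds the expected dynamic range most likely has a flipped high-order bit, which can be detected and cleared. The threshold and word length below are illustrative.

```python
# Hypothetical sketch: clear high-order bits that push a DWT high-frequency
# coefficient outside its expected dynamic range. Threshold and word width are
# illustrative, not taken from the paper.
import numpy as np

def repair_subband(coeffs, max_expected, wordlen=16):
    fixed = coeffs.copy()
    for idx, v in np.ndenumerate(coeffs):
        mag = abs(int(v))
        for b in range(wordlen - 1, -1, -1):      # scan from MSB down
            if mag > max_expected and (mag >> b) & 1:
                mag &= ~(1 << b)                  # likely-erroneous MSB: clear it
        fixed[idx] = mag if v >= 0 else -mag
    return fixed

if __name__ == "__main__":
    hf = np.array([3, -7, 12, 5 + (1 << 14)], dtype=np.int32)  # last value hit by a bit flip
    print(repair_subband(hf, max_expected=255))                # -> [ 3 -7 12  5]
```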
EURASIP Journal on Advances in Signal Processing | 2012
Chengen Yang; Yunus Emre; Yu Cao; Chaitali Chakrabarti
Non-volatile resistive memories, such as phase-change RAM (PRAM) and spin-transfer torque RAM (STT-RAM), have emerged as promising candidates because of their fast read access, high storage density, and very low standby power. Unfortunately, in scaled technologies, high storage density comes at the price of lower reliability. In this article, we first study in detail the causes of errors in PRAM and STT-RAM. We see that while in multi-level cell (MLC) PRAM the errors are due to resistance drift, in STT-RAM they are due to process variations and variations in the device geometry. We develop error models to capture these effects and propose techniques based on tuning of circuit-level parameters to mitigate some of these errors. Circuit-level techniques alone are not sufficient for reliable memory operation, so we propose error control coding (ECC) techniques that can be used on top of them. We show that for STT-RAM, a combination of voltage boosting and write pulse width adjustment at the circuit level followed by a BCH-based ECC scheme can reduce the block failure rate (BFR) to 10−8. For MLC PRAM, a combination of threshold resistance tuning and a BCH-based product code ECC scheme can achieve the same target BFR of 10−8. The product code scheme is flexible; it allows migration to a stronger code to guarantee the same target BFR as the raw bit error rate increases with the number of programming cycles.
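The resistance drift mentioned above is commonly modeled in the PRAM literature (a standard model, not a result of this article) as a power law,

$$R(t) = R_0 \left(\frac{t}{t_0}\right)^{\nu},$$

where $R_0$ is the resistance at reference time $t_0$ and $\nu$ is the drift exponent; as the resistance of an intermediate MLC state grows over time, it can cross the read threshold of the neighboring state, producing exactly the kind of soft error that the circuit tuning and ECC above are meant to absorb.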
Signal Processing Systems | 2011
Chengen Yang; Yunus Emre; Chaitali Chakrabarti; Trevor N. Mudge
Error control coding (ECC) is essential for correcting soft errors in Flash memories, where the number of errors grows as the number of erase/program cycles increases over time. In this paper we propose a flexible product-code-based ECC scheme that can support a stronger ECC when needed. Specifically, we propose product codes that use Reed-Solomon (RS) codes along rows and Hamming codes along columns. When a stronger ECC is needed, the Hamming code along the columns is replaced by two shorter Hamming codes. For instance, when the raw bit error rate increases from 2.2×10−3 to 4.0×10−3, the proposed scheme migrates from RS(127,121) along rows and Hamming(72,64) along columns to RS(127,121) along rows and two Hamming(39,32) codes along columns to achieve the same BER of 10−6. While the resulting implementation has 12% higher decoding latency, it increases the lifetime of the device significantly.
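The storage cost of the migration can be read off directly from the code parameters (simple arithmetic on the numbers above): the column-code overhead grows from

$$\frac{72-64}{64} = 12.5\% \quad\text{to}\quad \frac{2\times 39 - 64}{64} \approx 21.9\%,$$

i.e., six extra parity bits per 64 data bits in the column direction, which is the price paid, along with the 12% latency increase, for extending the device lifetime.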
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2011
Yunus Emre; Chaitali Chakrabarti
This paper presents a novel technique to mitigate the effects of data-path and memory errors in JPEG implementations. These errors are mainly caused by voltage scaling and process variation in scaled technologies. We characterize the data-path and memory errors and derive a probability distribution of the total number of errors. We propose an algorithm-specific technique that corrects most errors after quantization in the JPEG encoder by exploiting the characteristics of the quantized coefficients. The technique achieves high performance with small circuit overhead. Simulation results show that the proposed technique degrades PSNR by only about 1.5 dB compared to the error-free case and improves it by 4 dB over the no-correction case at a compression rate of 0.75 bpp when the BER is 10−4.
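One way to see why post-quantization correction is possible (an illustrative bound, not the paper's exact formulation): if $B(u,v)$ is the maximum possible unquantized DCT magnitude at frequency $(u,v)$ and $Q(u,v)$ is the corresponding quantization step, any stored coefficient with

$$|\hat{C}(u,v)| > \left\lceil \frac{B(u,v)}{Q(u,v)} \right\rceil$$

cannot be a legitimate encoder output, so its high-order bits can be flagged as erroneous and corrected.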
IEEE Transactions on Multimedia | 2013
Yunus Emre; Chaitali Chakrabarti
This paper presents techniques to reduce energy with minimal degradation in system performance for multimedia signal processing algorithms. It first surveys energy-saving techniques such as voltage scaling, reducing the number of computations, and reducing dynamic range. While these techniques reduce energy, they also introduce errors that affect output quality. To compensate for these errors, techniques that exploit algorithm characteristics are presented. Next, several hybrid energy-saving techniques that further reduce energy consumption with low performance degradation are presented. For instance, a combination of voltage scaling and dynamic range reduction is shown to achieve 85% energy savings in a low-pass FIR filter at a fairly low noise level. A combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, a 33% to 46% reduction in energy consumption while incurring a 0.5 dB to 1.5 dB loss in PSNR. Both of these techniques have very little overhead and achieve significant energy reduction with little quality degradation.
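The voltage-scaling part of these savings follows directly from the usual dynamic-energy relation (a textbook relation, with the 0.7 factor chosen only as an example):

$$E_{\text{dyn}} \propto C_{\text{eff}} V_{dd}^{2} \quad\Rightarrow\quad \frac{E(0.7\,V_{dd})}{E(V_{dd})} = 0.7^{2} \approx 0.49,$$

so scaling the supply to 70% of nominal roughly halves switching energy on its own; dynamic range reduction then lowers the switched capacitance $C_{\text{eff}}$, and the algorithm-aware compensation limits the quality loss these two knobs would otherwise cause.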
Signal Processing Systems | 2012
Chengen Yang; Yunus Emre; Yu Cao; Chaitali Chakrabarti
Phase-change RAM (PRAM) is a promising memory technology because of its fast read access time, high storage density, and very low standby power. Multi-level cell (MLC) PRAM, which has been introduced to further improve storage density, comes at the price of lower reliability. This paper focuses on a cost-effective solution for improving the reliability of MLC PRAM. As a first step, we study in detail the causes of hard and soft errors and develop error models to capture these effects. Next, we propose a multi-tiered approach that spans the architecture, circuit, and system levels to increase reliability. At the architecture level, we use a combination of Gray code encoding and 2-bit interleaving to partition the errors so that lower-strength error control coding (ECC) can be used for the half of the bits that fall in the odd block. We use sub-block flipping and threshold resistance tuning to reduce the number of errors in the even block. For even higher reliability, we apply a simple BCH-based ECC on top of these techniques. We show that the proposed multi-tiered approach enables us to use a low-cost ECC with 2-error correction capability (t=2) instead of one with t=8 to achieve a block failure rate of 10−8.
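A hypothetical sketch of the Gray-coding and 2-bit-interleaving step (the mapping and the block naming below are illustrative, not the paper's exact layout): each MLC cell stores two bits Gray-mapped onto four resistance levels, so a drift to an adjacent level flips exactly one of the two bits, and steering the two bit positions into separate blocks lets each block be protected with an ECC matched to its own error rate.

```python
# Illustrative Gray mapping + bit interleaving for 2-bit MLC PRAM cells.
GRAY = [0b00, 0b01, 0b11, 0b10]                 # resistance level 0..3 -> 2-bit Gray symbol

def write_cells(bit_pairs):
    """Map (msb, lsb) pairs onto resistance levels via the Gray mapping."""
    return [GRAY.index((m << 1) | l) for m, l in bit_pairs]

def read_and_deinterleave(levels):
    """Read levels back and split the two bit positions into separate blocks."""
    symbols = [GRAY[lv] for lv in levels]
    block_a = [(s >> 1) & 1 for s in symbols]    # one bit of every cell
    block_b = [s & 1 for s in symbols]           # the other bit of every cell
    return block_a, block_b

if __name__ == "__main__":
    levels = write_cells([(0, 0), (0, 1), (1, 1), (1, 0)])
    levels[2] += 1                               # adjacent-level drift on the third cell
    print(read_and_deinterleave(levels))         # only one of that cell's two bits flips
```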
IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2013
Yunus Emre; Chaitali Chakrabarti
This paper presents novel techniques to mitigate the effects of SRAM failures caused by low-voltage operation in JPEG2000 implementations. We investigate error control coding schemes, specifically schemes based on single-error-correction, double-error-detection (SEC-DED) codes, and propose an unequal error protection scheme tailored for JPEG2000 that reduces memory overhead with minimal effect on performance. Furthermore, we propose algorithm-specific techniques that exploit the characteristics of the discrete wavelet transform coefficients to identify and remove SRAM errors. These techniques do not require any additional memory, have low circuit overhead, and, more importantly, reduce memory power consumption significantly with only a small reduction in image quality.
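As a rough illustration of why unequal protection saves memory (generic arithmetic, not the paper's numbers): if a SEC-DED code spends a parity fraction $r = (n-k)/k$ per protected word, e.g. $8/64 = 12.5\%$ for a standard (72,64) code, but only a fraction $\alpha$ of the stored bit-planes (those that dominate image quality) are protected, the overall memory overhead drops to $\alpha r$, so protecting half the bit-planes costs about 6.25% instead of 12.5%.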