
Publication


Featured research published by Cemil Turan.


Circuits, Systems, and Signal Processing | 2015

Zero-Attracting Function Controlled VSSLMS Algorithm with Analysis

Cemil Turan; Mohammad Shukri Salman

The recently proposed function controlled variable step-size least-mean-square (FCVSSLMS) algorithm has shown high performance in different noise environments. Its performance can be improved further if the system is sparse. In this paper, we propose a new algorithm that imposes an approximate l0-norm penalty on the cost function of the FCVSSLMS algorithm. We also present the convergence analysis of the proposed algorithm and derive its stability criterion. The performance of the proposed algorithm is compared to those of the variable step-size LMS, more robust variable step-size LMS, FCVSSLMS, and reweighted zero-attracting LMS algorithms in a system identification setting under additive white Gaussian noise and additive correlated Gaussian noise. The proposed algorithm shows superior performance in terms of convergence rate and mean-square deviation.
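The zero-attraction mechanism described in this abstract can be illustrated with a short sketch. Below is a minimal, generic zero-attracting LMS-type update in Python with a simple error-driven variable step-size and an exponential approximation of the l0 norm; it is not the exact FCVSSLMS recursion, and the function name za_vss_lms and all parameter values are illustrative assumptions.

import numpy as np

# Generic sketch only: the exponential l0 approximation and the
# error-driven step-size rule below stand in for the paper's FCVSSLMS
# recursion; parameter values are illustrative, not from the paper.
def za_vss_lms(x, d, filter_len, mu_min=1e-4, mu_max=0.05,
               alpha=0.97, gamma=1e-3, rho=5e-4, beta=5.0):
    w = np.zeros(filter_len)              # adaptive filter coefficients
    mu = mu_max                           # time-varying step size
    for n in range(filter_len, len(x)):
        u = x[n - filter_len + 1:n + 1][::-1]   # regressor [x[n], ..., x[n-L+1]]
        e = d[n] - w @ u                        # a priori estimation error
        # error-controlled variable step size (illustrative rule)
        mu = np.clip(alpha * mu + gamma * e**2, mu_min, mu_max)
        # gradient of sum(1 - exp(-beta*|w|)), an approximate l0 norm
        zero_attractor = beta * np.sign(w) * np.exp(-beta * np.abs(w))
        w = w + mu * e * u - rho * zero_attractor
    return w

# Example: identify a sparse 16-tap system under AWGN
rng = np.random.default_rng(0)
h = np.zeros(16); h[[2, 9]] = [0.8, -0.5]       # sparse unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = za_vss_lms(x, d, filter_len=16)

The rho term pulls small coefficients toward zero while leaving large ones essentially untouched, which is what gives zero-attracting variants their advantage on sparse systems.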


Signal Processing and Communications Applications Conference | 2014

A sparse function controlled variable step-size LMS algorithm for system identification

Cemil Turan; Mohammad Shukri Salman

The recently proposed function controlled variable step-size least-mean-square (FCVSSLMS) algorithm has shown high performance. Its performance can be improved further if the system is sparse. In this paper, we propose a new algorithm based on the FCVSSLMS algorithm that imposes an approximate l0-norm penalty on its cost function. The performance of the proposed algorithm is compared to those of the variable step-size LMS (VSSLMS) and FCVSSLMS algorithms in a system identification setting with additive white Gaussian noise (AWGN). The proposed algorithm shows superior performance in terms of convergence rate and mean-square deviation (MSD).


International Symposium ELMAR | 2015

A block LMS-type algorithm with a function controlled variable step-size for sparse system identification

Cemil Turan; Mohammad Shukri Salman; Alaa Eleyan

The block least-mean-square (BLMS) algorithm has a much faster processing time than the conventional LMS algorithm because of the way the filter coefficients are updated: in LMS the coefficients are updated for every input sample, whereas in BLMS they are updated once per block of the input sequence. The BLMS algorithm can also be improved in the same ways as LMS, for example with a variable step-size and/or by exploiting sparsity. This paper proposes a new BLMS-type algorithm with a function controlled variable step-size for sparse system identification. The performance of the proposed algorithm is compared to that of the BLMS algorithm in terms of convergence rate and mean-square deviation (MSD). The effects of the filter length, sparsity degree and signal-to-noise ratio (SNR) on the MSD are also investigated. Simulations show that the proposed algorithm consistently outperforms the BLMS algorithm.
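As a concrete illustration of the per-block update mechanism described above, here is a minimal generic BLMS sketch in Python; the function name block_lms, the block length and the step size are illustrative assumptions, not values from the paper.

import numpy as np

# Generic block LMS sketch: the coefficients are updated once per block
# of block_len samples instead of once per sample as in conventional LMS.
def block_lms(x, d, filter_len, block_len=32, mu=0.01):
    w = np.zeros(filter_len)
    n_blocks = (len(x) - filter_len) // block_len
    for b in range(n_blocks):
        grad = np.zeros(filter_len)
        start = filter_len + b * block_len
        for n in range(start, start + block_len):
            u = x[n - filter_len + 1:n + 1][::-1]   # regressor for sample n
            e = d[n] - w @ u                        # error with w frozen over the block
            grad += e * u                           # accumulate the gradient
        w = w + (mu / block_len) * grad             # one update per block
    return w

In practical implementations the per-block filtering and gradient accumulation are usually carried out with the FFT (the fast-BLMS form), which is where most of the processing-time advantage over sample-by-sample LMS comes from.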


International Siberian Conference on Control and Communications | 2017

A block LMS-type algorithm for sparse system identification with analysis

Cemil Turan; Mohammad Shukri Salman

LMS-type algorithms are used effectively in diverse adaptive filtering applications such as blind system identification and channel equalization. For applications such as echo cancellation, which require a large filter length, estimating the coefficients with the conventional least-mean-square (LMS) algorithm takes a very long time. By updating the filter coefficients block by block of the input data rather than for each input sample, the processing time can be reduced substantially, depending on the block length. Additionally, the performance of the block-LMS (BLMS) algorithm can be improved by using a variable step-size in the update equation. In system identification, if the unknown system is sparse, the performance can be improved further by adding a penalty term to the cost function of the BLMS algorithm, which helps exploit the sparsity of the unknown system. In this paper, we investigate the performance of a recently proposed BLMS-type algorithm for sparse systems. The convergence analysis of the proposed algorithm is presented for the first time. The effects of the zero-attracting coefficient, ρ, and the sparsity degree on the MSD are also investigated. Finally, the performance of the algorithm is compared to those of the BLMS and other previously proposed adaptive algorithms in terms of convergence rate and mean-square deviation (MSD). Simulations show that the proposed algorithm outperforms the other algorithms.


International Conference on Discrete Optimization and Operations Research | 2016

A Robust Leaky-LMS Algorithm for Sparse System Identification

Cemil Turan; Yedilkhan Amirgaliev

In this paper, a new Leaky-LMS (LLMS) algorithm that modifies and improves the Zero-Attracting Leaky-LMS (ZA-LLMS) algorithm for sparse system identification is proposed. The proposed algorithm exploits the sparsity of the system and combines the advantages of a variable step-size and an l0-norm penalty. We compare the performance of the proposed algorithm with those of the conventional LLMS and ZA-LLMS algorithms in terms of convergence rate and mean-square deviation (MSD). Additionally, the computational complexity of the proposed algorithm is derived. Simulations performed in MATLAB show that the proposed algorithm outperforms the other algorithms for both additive white Gaussian noise (AWGN) and additive correlated Gaussian noise (ACGN) inputs.
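A rough sketch of the kind of update involved may be helpful. The following generic zero-attracting leaky-LMS iteration in Python combines coefficient leakage with an approximate l0-norm attractor; the variable step-size rule of the proposed algorithm is omitted, and the function name and all parameter values are illustrative assumptions.

import numpy as np

# Generic ZA-LLMS-style sketch: the leakage factor lambda_ shrinks all
# coefficients slightly each iteration, and the zero-attractor (from an
# exponential l0 approximation) drives near-zero taps toward exactly zero.
# A fixed step size mu is used here for brevity.
def za_leaky_lms(x, d, filter_len, mu=0.01, lambda_=1e-3, rho=5e-4, beta=5.0):
    w = np.zeros(filter_len)
    for n in range(filter_len, len(x)):
        u = x[n - filter_len + 1:n + 1][::-1]
        e = d[n] - w @ u
        zero_attractor = beta * np.sign(w) * np.exp(-beta * np.abs(w))
        w = (1.0 - mu * lambda_) * w + mu * e * u - rho * zero_attractor
    return w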


Signal Processing and Communications Applications Conference | 2015

A new sparse convex combination of ZA-LLMS and RZA-LLMS algorithms

Mohammad Shukri Salman; Alaa Ali Hameed; Cemil Turan; Bekir Karlik

In the last decade, several algorithms have been proposed to improve the performance of adaptive filters in sparse system identification. In this paper, we propose a new convex combination of two algorithms, the zero-attracting leaky least-mean-square (ZA-LLMS) and the reweighted zero-attracting leaky least-mean-square (RZA-LLMS) algorithms, in a sparse system identification setting. The performances of these algorithms have been tested and compared to that of the new combination. Simulations show that the proposed algorithm tracks the MSD curves of the component algorithms well in both additive white Gaussian noise (AWGN) and additive correlated Gaussian noise (ACGN) environments.
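The standard convex-combination scheme referred to here can be sketched as follows; for brevity the two component filters are plain LMS filters with different step sizes rather than ZA-LLMS and RZA-LLMS, and the function name and parameter values are illustrative assumptions.

import numpy as np

# Generic convex combination of two adaptive filters: the mixing weight
# lam = sigmoid(a) is adapted by stochastic gradient descent on the
# combined squared error, so the combination follows whichever component
# is currently performing better.
def convex_combination(x, d, filter_len, mu1=0.05, mu2=0.005, mu_a=1.0):
    w1 = np.zeros(filter_len)
    w2 = np.zeros(filter_len)
    a = 0.0                                     # mixing parameter
    y = np.zeros(len(x))
    for n in range(filter_len, len(x)):
        u = x[n - filter_len + 1:n + 1][::-1]
        y1, y2 = w1 @ u, w2 @ u
        lam = 1.0 / (1.0 + np.exp(-a))          # mixing weight in (0, 1)
        y[n] = lam * y1 + (1.0 - lam) * y2
        e = d[n] - y[n]
        w1 += mu1 * (d[n] - y1) * u             # each component uses its own error
        w2 += mu2 * (d[n] - y2) * u
        a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)
        a = float(np.clip(a, -4.0, 4.0))        # keep lam away from 0 and 1
    return y, w1, w2

This is why the combined filter can track the MSD curve of the better component in each noise environment: when one component has lower error, the gradient update pushes lam toward that component.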


International Conference on Electronics, Computer and Computation | 2015

A new variable step-size block LMS algorithm for non-stationary sparse systems

Cemil Turan; Mohammad Shukri Salman; Alaa Eleyan

The conventional LMS algorithm has been used successfully in adaptive filtering for the system identification (SI) problem. In telecommunications, acoustic echo SI problems usually involve relatively large filter lengths that take a long time to estimate. To overcome this problem, the block least-mean-square (BLMS) algorithm has been proposed: the filter coefficients are updated for blocks of the input instead of for each input sample. Using this advantage, we propose a new block-LMS algorithm with a function controlled variable step-size (FC-VSSLMS) for non-stationary sparse system identification. The performance of the proposed algorithm is compared to those of the original BLMS and the reweighted zero-attracting block LMS (RZA-BLMS) algorithms in terms of convergence rate and mean-square deviation (MSD) under additive white Gaussian noise (AWGN) and additive uniformly distributed noise (AUDN). Simulations show that the proposed algorithm performs better than the other algorithms.


IEEE International Conference on Electronics and Nanotechnology | 2015

A transform domain sparse LMS-type algorithm for highly correlated biomedical signals in sparse system identification

Cemil Turan; Mohammad Shukri Salman; Hatem Haddad

The convergence behavior of the least-mean-square (LMS) algorithm is highly dependent on the correlation of the input data and, consequently, on the eigenvalue spread of its correlation matrix. To overcome this issue, the LMS algorithm has been studied in different transform domains that decrease this eigenvalue spread. In this paper, we propose a new transform domain LMS algorithm with a function controlled variable step-size for sparse system identification. The proposed algorithm applies a transform to the input signal and adds an approximate l0-norm penalty term to the cost function of the function controlled variable step-size LMS (FC-VSSLMS) algorithm. The algorithm has been tested on highly correlated signals, namely electrocardiography (ECG) and electromyography (EMG) signals, and shows remarkable performance compared to those of the sparse FC-VSSLMS (SFCVSSLMS) and transform domain reweighted zero-attracting LMS (TD-RZALMS) algorithms.
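The transform-domain idea can be sketched generically: rotate each input regressor with an orthogonal transform (a DCT below) and normalize every transformed tap by a running estimate of its power, which whitens correlated inputs and reduces the eigenvalue spread mentioned above. This is only an illustrative DCT-LMS skeleton without the sparsity penalty and variable step-size of the proposed algorithm; the function name and parameter values are assumptions.

import numpy as np
from scipy.fft import dct

# Generic transform-domain LMS sketch with per-bin power normalization.
def dct_lms(x, d, filter_len, mu=0.5, gamma=0.95, eps=1e-6):
    w = np.zeros(filter_len)                  # coefficients in the DCT domain
    p = np.full(filter_len, eps)              # running power of each DCT bin
    for n in range(filter_len, len(x)):
        u = x[n - filter_len + 1:n + 1][::-1]
        v = dct(u, type=2, norm="ortho")      # orthogonal DCT of the regressor
        p = gamma * p + (1.0 - gamma) * v**2  # per-bin power estimate
        e = d[n] - w @ v
        w = w + mu * e * v / (p + eps)        # power-normalized update
    return w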


International Conference on Computing and Network Communications | 2018

An Improved Face Recognition Algorithm Based on Sparse Representation

Cemil Turan; Shirali Kadyrov; Diana Burissova


International Conference on Electronics, Computer and Computation | 2017

Robust face recognition via sparse reconstruction vector

Cemil Turan

Collaboration


Dive into Cemil Turan's collaborations.

Top Co-Authors

Mohammad Shukri Salman
American University of the Middle East

Alaa Eleyan
Eastern Mediterranean University

Shirali Kadyrov
Süleyman Demirel University

Yedilkhan Amirgaliev
Süleyman Demirel University