
Publications


Featured research published by Atalay Barkana.


IEEE Transactions on Speech and Audio Processing | 1999

A novel approach to isolated word recognition

M. Bilginer Gülmezoğlu; Vakif Dzhafarov; Mustafa Keskin; Atalay Barkana

A voice signal contains the psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word uttered by different speakers can be very different. In this paper, two theories are developed by considering two optimization criteria applied to both the training set and the test set. The first theory is well known and uses what is called Criterion 1 here, and ends up with the average of all vectors belonging to the words in the training set. The second theory is a novel approach that uses what is called Criterion 2 here, and it is used to extract the common properties of all vectors belonging to the words in the training set. It is shown that Criterion 2 is superior to Criterion 1 when the training set is of concern. In Criterion 2, the individual differences are obtained by subtracting a reference vector from the other vectors, and these individual difference vectors are used to obtain an orthogonal vector basis via the Gram-Schmidt orthogonalization method. The common vector is obtained by subtracting the projections of any vector of the training set on the orthogonal basis vectors from that same vector. It is proved that this common vector is unique for any word class in the training set and independent of the chosen reference vector. This common vector is used in isolated word recognition, and it is also shown that Criterion 2 is superior to Criterion 1 for the test set. From the theoretical and experimental study, it is seen that the recognition rates increase as the number of speakers in the training set increases. This means that the common vector obtained from Criterion 2 represents the common properties of a spoken word better than the common or average vector obtained from Criterion 1.
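The common-vector construction described above can be sketched in a few lines of numpy. This is a minimal illustration with assumed names: a QR factorization stands in for explicit Gram-Schmidt orthogonalization, since both yield an orthonormal basis of the same difference subspace.

```python
import numpy as np

def common_vector(vectors):
    """Common vector of one word class (Criterion 2), sketched.

    vectors: (m, n) array of m training vectors of dimension n, m <= n.
    """
    ref = vectors[0]                  # any reference vector works
    diffs = vectors[1:] - ref         # individual difference vectors
    # Orthonormal basis of the difference subspace (QR plays the role of
    # Gram-Schmidt orthogonalization here).
    q, _ = np.linalg.qr(diffs.T)
    # Subtract the projections of a training vector onto that basis.
    return ref - q @ (q.T @ ref)
```

As the abstract states, the result is independent of the chosen reference vector, which is easy to check numerically by permuting the training vectors.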


Soft Computing | 2009

Speeding up the scaled conjugate gradient algorithm and its application in neuro-fuzzy classifier training

Bayram Cetisli; Atalay Barkana

The aim of this study is to speed up the scaled conjugate gradient (SCG) algorithm by shortening the training time per iteration. The SCG algorithm, a supervised learning algorithm for network-based methods, is generally used to solve large-scale problems. It is well known that SCG computes second-order information from two first-order gradients of the parameters by using the entire training dataset, so its per-iteration computation cost becomes expensive for large-scale problems. In this study, one of the first-order gradients is estimated from the previously calculated gradients without using the training dataset, by applying a least-squares error estimator. The estimation complexity of the gradient is much smaller than its computation complexity for large-scale problems, because the gradient estimation is independent of the size of the dataset. The proposed algorithm is applied to neuro-fuzzy classifier and neural network training. The theoretical basis for the algorithm is provided, and its performance is illustrated on several well-known datasets, where it is compared with several training algorithms. The empirical results indicate that the proposed algorithm is faster per iteration than SCG, decreasing the training time by 20-50%; moreover, its convergence rate is similar to that of SCG.
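As a rough illustration of estimating one first-order gradient from past gradients without touching the dataset, the toy sketch below fits a least-squares line to each gradient component over recent iterations and extrapolates one step ahead. This is a stand-in for the idea only; the paper's actual estimator may be formulated differently.

```python
import numpy as np

def estimate_gradient(grad_history):
    """Extrapolate the next gradient from past ones by a per-component
    least-squares line fit (illustrative, not the paper's exact estimator).

    grad_history: sequence of k past gradient vectors, each of dimension n.
    """
    g = np.asarray(grad_history, dtype=float)   # (k, n)
    k = g.shape[0]
    t = np.arange(k)
    A = np.vstack([t, np.ones(k)]).T            # (k, 2) design matrix
    # Fit g_i(t) ~ a_i * t + b_i for every component i at once.
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coef[0] * k + coef[1]                # evaluate the fit at t = k
```

The cost of this estimate depends only on the history length and the parameter dimension, not on the number of training samples, which is the source of the per-iteration savings.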


Applied Mathematics and Computation | 2011

A new solution to one sample problem in face recognition using FLDA

Mehmet Koç; Atalay Barkana

Fisher linear discriminant analysis (FLDA) is a very popular method in face recognition, but it fails when only one image per person is available, because the within-class scatter matrices cannot be calculated. An image decomposition method that uses QR-decomposition with column pivoting (QRCP) is proposed in this paper to overcome the one-sample-per-person problem. First, the image and its two approximations computed with QRCP are all placed in the training set; the 2D-FLDA method then becomes applicable with these new data. The performance of the proposed image decomposition algorithm is tested with 2D-FLDA on five different face databases, namely ORL, FERET, YALE, UMIST, and PolyU-NIR. Our image decomposition algorithm performs better than the SVD-based method of Gao et al. (2008) [1] in terms of recognition rate and training time on all of the above databases.
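The QRCP-based approximations can be sketched with scipy's pivoted QR. The rank choices and function name below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.linalg import qr

def qrcp_approximations(img, ranks=(1, 2)):
    """Low-rank approximations of an image via QR with column pivoting,
    usable to augment a one-sample-per-person training set (sketch)."""
    Q, R, piv = qr(img, pivoting=True)    # img[:, piv] == Q @ R
    inv_piv = np.argsort(piv)             # undoes the column permutation
    approxs = []
    for k in ranks:
        Ak = Q[:, :k] @ R[:k, :]          # rank-k approximation of img[:, piv]
        approxs.append(Ak[:, inv_piv])    # restore original column order
    return approxs
```

Each approximation is then added to the training set alongside the original image, giving 2D-FLDA more than one sample per person to work with.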


Neurocomputing | 2010

Large margin classifiers based on affine hulls

Hakan Cevikalp; Bill Triggs; Hasan Serhan Yavuz; Yalçın Küçük; Mahide Küçük; Atalay Barkana

This paper introduces a geometrically inspired large margin classifier that can be a better alternative to support vector machines (SVMs) for classification problems with a limited number of training samples. In contrast to the SVM classifier, we approximate classes with affine hulls of their class samples rather than convex hulls. For any pair of classes approximated with affine hulls, we introduce two solutions to find the best separating hyperplane between them. In the first proposed formulation, we compute the closest points on the affine hulls of the classes and connect these two points with a line segment. The optimal separating hyperplane between the two classes is chosen to be the hyperplane that is orthogonal to the line segment and bisects it. The second formulation is derived by modifying the ν-SVM formulation. Both formulations are extended to the nonlinear case by using the kernel trick. Based on our findings, we also develop a geometric interpretation of the least squares SVM classifier and show that it is a special case of the proposed method. Multi-class classification problems are handled by constructing and combining several binary classifiers, as in SVM. Experiments on several databases show that the proposed methods perform as well as the SVM classifier, if not better.
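The first formulation (closest points on the two affine hulls, hyperplane bisecting the connecting segment) reduces to a least-squares problem. Below is a minimal numpy sketch with illustrative names; rows of `X1`/`X2` are the class samples.

```python
import numpy as np

def affine_hull_separator(X1, X2):
    """Closest points on two affine hulls and the bisecting hyperplane
    (sketch of the geometric formulation; not the paper's exact code)."""
    mu1, mu2 = X1.mean(0), X2.mean(0)
    U1 = (X1 - mu1).T                     # directions spanning hull 1
    U2 = (X2 - mu2).T                     # directions spanning hull 2
    # Minimize ||(mu1 + U1 a) - (mu2 + U2 b)|| as a least-squares problem.
    A = np.hstack([U1, -U2])
    coef, *_ = np.linalg.lstsq(A, mu2 - mu1, rcond=None)
    a, b = coef[:U1.shape[1]], coef[U1.shape[1]:]
    p1, p2 = mu1 + U1 @ a, mu2 + U2 @ b   # closest points on the hulls
    w = p1 - p2                           # normal along the connecting segment
    bias = -w @ (p1 + p2) / 2             # hyperplane bisects the segment
    return w, bias
```

The sign of `w @ x + bias` then decides the class; positive values fall on the side of `X1`.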


IEEE Transactions on Systems, Man, and Cybernetics | 2007

The Kernel Common Vector Method: A Novel Nonlinear Subspace Classifier for Pattern Recognition

Hakan Cevikalp; Marian Neamtu; Atalay Barkana

The common vector (CV) method is a linear subspace classifier method which allows one to discriminate between classes of data sets, such as those arising in image and word recognition. This method utilizes subspaces that represent classes during classification. Each subspace is modeled such that common features of all samples in the corresponding class are extracted. To accomplish this goal, the method eliminates features that are in the direction of the eigenvectors corresponding to the nonzero eigenvalues of the covariance matrix of each class. In this paper, we introduce a variation of the CV method, which will be referred to as the modified CV (MCV) method. Then, a novel approach is proposed to apply the MCV method in a nonlinearly mapped higher dimensional feature space. In this approach, all samples are mapped into a higher dimensional feature space using a kernel mapping function, and then the MCV method is applied in the mapped space. Under certain conditions, each class gives rise to a unique CV, and the method guarantees a 100% recognition rate on the training set data. Moreover, experiments with several test cases show that the generalization performance of the proposed kernel method is comparable to that of other linear subspace classifier methods as well as the kernel-based nonlinear subspace method. Although neither the MCV method nor its kernel counterpart outperformed the support vector machine (SVM) classifier in most of the reported experiments, the application of our proposed methods is simpler than that of the multiclass SVM classifier, and it is not necessary to adjust any parameters in our approach.


IEEE Transactions on Power Delivery | 2006

Covariance analysis of voltage waveform signature for power-quality event classification

Ömer Nezih Gerek; Dogan Gökhan Ece; Atalay Barkana

In this paper, the covariance behavior of several features (signature identifiers) determined from the voltage waveform within a time window is analyzed for power-quality (PQ) event detection and classification. A feature vector is constructed from selected signature identifiers such as local wavelet transform extrema at various decomposition levels, spectral harmonic ratios, and local extrema of higher-order statistical parameters. It is observed that the feature vectors corresponding to PQ event instances can be efficiently classified according to the event type using a covariance-based classifier known as the common vector classifier. Arcing fault (high impedance fault) events are successfully classified and distinguished from motor startup events under various load conditions. It is also observed that the proposed approach is even able to discriminate the loading conditions within the same class of events at a success rate of 70%. In addition, the common vector approach provides redundancy and usefulness information about the feature vector elements. The implication of this information is experimentally justified by the fact that some of the signature identifiers are more important than others for the discrimination of PQ event types.


Computer Speech & Language | 2007

The common vector approach and its comparison with other subspace methods in case of sufficient data

M. Bilginer Gülmezoğlu; Vakif Dzhafarov; Rifat Edizkan; Atalay Barkana

This paper presents an application of the common vector approach (CVA), an approach mainly used for speech recognition problems when the number of data items exceeds the dimension of the feature vectors. The calculation of a unique common vector for each class involves the use of principal component analysis. CVA and other subspace methods are compared both theoretically and experimentally. The TI-digit database is used in the experimental study to show the practical use of CVA for isolated word recognition problems. It can be concluded that CVA achieves higher recognition rates than the other subspace methods on both training and test sets. It is also seen that considering only the within-class scatter in CVA gives better performance than considering both within- and between-class scatters in Fisher's linear discriminant analysis. The recognition rates obtained with CVA are also better than those obtained with the HMM method.


Neurocomputing | 2014

Application of Linear Regression Classification to low-dimensional datasets

Mehmet Koc; Atalay Barkana

The traditional Linear Regression Classification (LRC) method fails when the number of samples in the training set is greater than their dimensionality. In this work, we propose a new implementation of LRC to overcome this problem in pattern recognition. The new form of LRC works even with an excessive number of low-dimensional data. In order to explain the new form of LRC, the relation between the predictor and the correlation matrix of a class is shown first. Then, for the derivation of LRC, the null space of the correlation matrix is generated using the eigenvectors corresponding to the smallest eigenvalues; these eigenvectors are used to calculate the projection matrix in LRC. The equivalence of LRC and the method called Class-Featuring Information Compression (CLAFIC) is also shown theoretically. The TI Digit database and the Multiple Features dataset are used to illustrate the proposed improvement on LRC and CLAFIC.
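A minimal sketch of the range/null-space view of LRC described above: for each class, project the query onto the span of the eigenvectors of the class correlation matrix with non-negligible eigenvalues, which equals the span of the class samples (the CLAFIC-style projection). Names and the tolerance are illustrative assumptions.

```python
import numpy as np

def lrc_classify(classes, y, tol=1e-10):
    """Classify y by the smallest residual to each class subspace.

    classes: list of (m_i, n) arrays of class samples; y: (n,) query.
    """
    dists = []
    for X in classes:
        C = X.T @ X                        # (n, n) class correlation matrix
        w, V = np.linalg.eigh(C)
        U = V[:, w > tol * w.max()]        # basis of the range of C
        proj = U @ (U.T @ y)               # CLAFIC-style predicted response
        dists.append(np.linalg.norm(y - proj))
    return int(np.argmin(dists))
```

Because the range of `C` is the span of the class samples regardless of how many samples there are, this form works even when samples outnumber dimensions, which is where the hat-matrix form of LRC breaks down.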


Neurocomputing | 2009

Two-dimensional subspace classifiers for face recognition

Hakan Cevikalp; Hasan Serhan Yavuz; Mehmet Atıf Çay; Atalay Barkana

Subspace classifiers are pattern classification methods in which linear subspaces are used to represent classes. In order to use the classical subspace classifiers for face recognition tasks, two-dimensional (2D) image matrices must be transformed into one-dimensional (1D) vectors. In this paper, we propose new methods to apply the conventional subspace classifier methods directly to the image matrices. The proposed methods yield easier evaluation of correlation and covariance matrices, which in turn speeds up the training and testing phases of the classification process. Utilizing 2D image matrices also enables us to apply 2D versions of some subspace classifiers to face recognition tasks in which the corresponding classical subspace classifiers cannot be used due to high dimensionality. Moreover, the proposed methods are generalized so that they can be used with higher-order image tensors. We tested the proposed 2D methods on three different face databases. Experimental results show that the performances of the proposed 2D methods are typically better than those of the classical subspace classifiers in terms of recognition accuracy and real-time efficiency.
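To see why working directly on image matrices eases covariance evaluation, consider a 2DPCA-style image covariance: for h x w images it is a small w x w matrix rather than the hw x hw matrix the vectorized formulation requires. The sketch below illustrates that construction; it is a generic 2D covariance, not necessarily the paper's exact formulation.

```python
import numpy as np

def image_covariance_2d(images):
    """2D image covariance computed directly on image matrices:
    C = (1/m) * sum_i (A_i - mean)^T (A_i - mean), a (w, w) matrix."""
    A = np.stack(images).astype(float)    # (m, h, w)
    D = A - A.mean(0)                     # centered images
    # Sum D_i^T D_i over all images in one einsum call.
    return np.einsum('mhw,mhv->wv', D, D) / len(A)
```

Eigenvectors of this small matrix then play the role the full image-vector covariance eigenvectors would play in a classical subspace classifier.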


Signal Processing and Communications Applications Conference | 2004

A novel method for face recognition

Hakan Cevikalp; Marian Neamtu; M. Wilkes; Atalay Barkana

In this paper we propose an efficient method called the discriminative common vector method for face recognition. The discriminative common vectors representing each person in the training set of the face database are obtained by using the projection directions found in the null space of the within-class scatter matrix. Then, these vectors are used for classification of new faces. Also, an alternative algorithm based on the subspace methods is given to obtain the common vectors. Our test results show that the discriminative common vector method is superior to other methods in terms of recognition accuracy and efficiency.
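The core step above, projecting samples onto the null space of the within-class scatter matrix, can be sketched in a few lines of numpy. The tolerance and names are assumptions, not from the paper.

```python
import numpy as np

def discriminative_common_vectors(classes, tol=1e-10):
    """Project each class onto the null space of the within-class scatter
    matrix; every sample of a class maps to the same common vector there.

    classes: list of (m_i, n) arrays, one per person.
    """
    Sw = sum((X - X.mean(0)).T @ (X - X.mean(0)) for X in classes)
    w, V = np.linalg.eigh(Sw)               # eigenvalues in ascending order
    N = V[:, w <= tol * max(w.max(), 1.0)]  # basis of the null space of Sw
    # One discriminative common vector per class (any sample gives the same).
    return [N.T @ X[0] for X in classes], N
```

New faces are then classified by projecting them with `N.T` and finding the nearest discriminative common vector.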

Collaboration


Dive into Atalay Barkana's collaborations.

Top Co-Authors

M. Bilginer Gülmezoğlu (Eskişehir Osmangazi University)
Rifat Edizkan (Eskişehir Osmangazi University)
Semih Ergin (Eskişehir Osmangazi University)
Hakan Cevikalp (Eskişehir Osmangazi University)
Hasan Serhan Yavuz (Eskişehir Osmangazi University)
Mehmet Koc (Bilecik Şeyh Edebali University)