Aydın Kaya
Hacettepe University
Publication
Featured research published by Aydın Kaya.
Journal of Biomedical Informatics | 2015
Aydın Kaya; Ahmet Burak Can
Predicting malignancy of solitary pulmonary nodules from computed tomography scans is a difficult and important problem in the diagnosis of lung cancer. This paper investigates the contribution of nodule characteristics to the prediction of malignancy. Using data from the Lung Image Database Consortium (LIDC) database, we propose a weighted rule-based classification approach for predicting malignancy of pulmonary nodules. The LIDC database contains CT scans of nodules and information about nodule characteristics evaluated by multiple annotators. In the first step of our method, votes for nodule characteristics are obtained from ensemble classifiers using image features. In the second step, these votes and rules obtained from radiologist evaluations are used by a weighted rule-based method to predict malignancy. The rule-based method is constructed from radiologist evaluations of previous cases. Correlations between malignancy and other nodule characteristics, as well as the agreement ratio of radiologists, are considered in rule evaluation. To handle the imbalanced nature of the LIDC data, ensemble classifiers and data balancing methods are used. The proposed approach is compared with classification methods trained on image features alone. Classification accuracy, specificity, and sensitivity of the classifiers are measured. The experimental results show that using nodule characteristics for malignancy prediction can improve classification results.
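As a rough illustration, the second step's weighted combination might look like the following sketch. The characteristic names and weights are illustrative assumptions, not values from the paper; in the paper, weights are derived from correlations with malignancy and radiologist agreement ratios.

```python
import numpy as np

# Hypothetical rule weights derived from radiologist evaluations:
# each characteristic's weight would reflect its correlation with
# malignancy and the annotator agreement ratio (illustrative values).
RULE_WEIGHTS = {
    "spiculation": 0.30,
    "texture":     0.25,
    "margin":      0.25,
    "sphericity":  0.20,
}

def predict_malignancy(char_votes: dict) -> float:
    """Combine per-characteristic ensemble votes (scores in 0..1)
    into a single weighted malignancy score."""
    total = sum(RULE_WEIGHTS.values())
    return sum(RULE_WEIGHTS[c] * char_votes[c] for c in RULE_WEIGHTS) / total

# Example: votes produced by ensemble classifiers on image features.
votes = {"spiculation": 0.8, "texture": 0.6, "margin": 0.7, "sphericity": 0.4}
print(predict_malignancy(votes))  # ~0.65 -> leans malignant
```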
Signal Processing and Communications Applications Conference | 2017
Ali Seydi Keçeli; Aydın Kaya; Ahmet Burak Can
The use of depth sensors in activity recognition is an emerging technology in human-computer interaction and motion recognition. In this study, an approach to identifying single-person activities using deep learning on depth image sequences is presented. First, a 3D volumetric template is generated using skeletal information obtained from a depth video. Features are then extracted from the generated 3D volume by taking images of it from different viewing angles. Actions are recognized by extracting deep features using the AlexNet model [1] and Histogram of Oriented Gradients (HOG) features from these images. The proposed method was tested on the MSRAction3D [2] and UTKinect-Action3D [2] datasets. The obtained results are comparable to similar studies in the literature.
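A minimal sketch of the HOG part of this pipeline is shown below, assuming (as a simplification of the paper's multi-angle rendering) that the 3D volume is projected onto three orthogonal planes before descriptor extraction:

```python
import numpy as np
from skimage.feature import hog

def hog_from_projections(volume: np.ndarray) -> np.ndarray:
    """Project a binary 3D volume (built from skeletal depth data)
    onto three orthogonal planes and concatenate HOG descriptors.
    Orthogonal projections stand in for the paper's view rendering."""
    feats = []
    for axis in range(3):                      # front, side, top views
        proj = volume.max(axis=axis).astype(float)
        feats.append(hog(proj, orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.concatenate(feats)

# Toy volume: a cube of occupied voxels.
volume = np.zeros((64, 64, 64))
volume[20:40, 20:40, 20:40] = 1
print(hog_from_projections(volume).shape)  # concatenated HOG vector
```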
International Conference on Pattern Recognition | 2010
Aydın Kaya; Ahmet Burak Can; Hasan Basri Çakmak
In laser eye surgery, the accuracy of the operation depends on coherent eye tracking and registration techniques. The main approach used in image-processing-based eye trackers is the extraction and tracking of the pupil and limbus regions. In the eye registration step, iris-region features extracted from infrared images are generally used. The registration step determines the angular shift of the eye origin by comparing the eye position on the operating table with the eye topology obtained before the operation. Registration is applied only at the beginning, but the patient's movements do not stop during the operation. Hence, we present a method for pattern stabilization that can be repeated during the operation at regular intervals. We use scleral blood vessels as features due to their rich texture and their resistance to errors caused by pupil center shift and ablation of the cornea region.
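For intuition, estimating the angular shift from matched vessel features could look like the sketch below. ORB is used here only as a generic off-the-shelf detector; the paper's own scleral-vessel features and matching scheme may differ.

```python
import cv2
import numpy as np

def estimate_eye_rotation(ref_img, live_img) -> float:
    """Match feature points between a pre-operative reference image and
    a live frame, fit a similarity transform, and return the in-plane
    rotation angle (degrees) as the angular shift of the eye."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(ref_img, None)
    k2, d2 = orb.detectAndCompute(live_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:50]
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst)   # rotation + translation
    return float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
```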
Signal, Image and Video Processing | 2018
Ali Seydi Keçeli; Aydın Kaya; Ahmet Burak Can
In activity recognition, the use of depth data is a rapidly growing research area. This paper presents a method for recognizing single-person activities and dyadic interactions by using deep features extracted from both 3D and 2D representations constructed from depth sequences. First, a 3D volume representation is generated by considering spatiotemporal information in the depth frames of an action sequence. Then, a 3D-CNN is trained to learn features from these 3D volume representations. In addition, a 2D representation is constructed from the weighted sum of the depth sequences. This 2D representation is used with a pre-trained CNN model. Features learned from this model and from the 3D-CNN model are used to train the final approach after a feature selection step. Among the various classifiers, an SVM-based model produced the best results. The proposed method was tested on the MSR-Action3D dataset for single-person activities, the SBU dataset for dyadic interactions, and the NTU RGB+D dataset for both types of actions. Experimental results show that the proposed 3D and 2D representations and the deep features extracted from them are robust and efficient. The proposed method achieves results comparable to state-of-the-art methods in the literature.
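The 2D representation admits a compact sketch, assuming a simple linear weighting that emphasizes later frames so the collapsed image encodes motion history; the paper's exact weighting scheme is not reproduced here.

```python
import numpy as np

def weighted_depth_image(frames: np.ndarray) -> np.ndarray:
    """Collapse a depth sequence (T, H, W) into one 2D image via a
    weighted sum of frames. Linear weights are an assumption; later
    frames count more, so recent motion dominates the image."""
    t = frames.shape[0]
    w = np.arange(1, t + 1, dtype=float)
    w /= w.sum()
    img = np.tensordot(w, frames, axes=1)          # (H, W)
    img -= img.min()
    return (255 * img / max(img.max(), 1e-8)).astype(np.uint8)

seq = np.random.rand(30, 240, 320)                 # toy depth sequence
print(weighted_depth_image(seq).shape)  # (240, 320), fed to a pre-trained CNN
```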
IETE Journal of Research | 2017
Ali Seydi Keçeli; Ahmet Burak Can; Aydın Kaya
White matter lesions (WMLs) in the human brain are generally diagnosed using magnetic resonance (MR) images. Doctors working on WMLs generally need to calculate the volume of lesions for each patient at regular intervals in order to observe the course of the disease and manage the treatment process. This paper introduces an unsupervised automatic approach for segmentation of WMLs in the human brain. The approach consists of skull stripping, preprocessing, and lesion detection steps. Three skull stripping methods are proposed to increase the probability of successful stripping on MR image data of various qualities. After preprocessing and segmenting the lesions, the system performs volumetric calculation and 3D visualization of the lesions. This volumetric information can be used by doctors to observe changes in the lesions across regularly scanned MR images of patients. GPU-based parallel image processing techniques are utilized in the Nvidia CUDA environment, improving performance by 40-50 times. The developed system thus saves doctors' time by providing a fast automatic segmentation method for WMLs.
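A heavily simplified sketch of an unsupervised lesion detection step of this kind appears below; the threshold rule, size filter, and voxel volume are illustrative assumptions, and the paper's pipeline (and its CUDA port) is considerably more involved.

```python
import numpy as np
from scipy import ndimage

def segment_wml(brain: np.ndarray, k: float = 2.0):
    """Detect lesion candidates on a skull-stripped MR volume:
    threshold hyperintense voxels at mean + k*std of brain tissue,
    then keep connected components above a minimum size."""
    tissue = brain[brain > 0]                    # nonzero = inside brain
    mask = brain > tissue.mean() + k * tissue.std()
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.isin(labels, np.nonzero(sizes >= 10)[0] + 1)
    voxel_volume_mm3 = 1.0                       # depends on scan resolution
    return keep, keep.sum() * voxel_volume_mm3   # lesion mask + volume
```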
Computers & Geosciences | 2017
Ali Seydi Keçeli; Aydın Kaya; Seda Uzunçimen Keçeli
Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology remains a low-cost and one of the most convenient ways to date deep ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies; automated image classification allows these analyses to be performed promptly. In this study, a method for automatic radiolarian image classification on Scanning Electron Microscope (SEM) images is proposed to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features, such as invariant moments, wavelet moments, Gabor features, and basic morphological features, and deep features obtained from a pre-trained Convolutional Neural Network (CNN). Feature selection is applied over the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. Results show that the deep features obtained from a pre-trained CNN are more discriminative compared to the hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms and has no negative effect on classification accuracy. Highlights: a classification method for radiolarian images is proposed; a radiolarian image dataset is prepared; combinations of different features are compared with different base classifiers; the results show that deep features are more successful than hand-crafted features; feature selection over deep features has a positive effect on computation time.
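The feature fusion and selection steps could be sketched as follows, with Hu invariant moments standing in for the hand-crafted descriptors and univariate selection standing in for the paper's feature selection method, both as illustrative choices:

```python
import numpy as np
import cv2
from sklearn.feature_selection import SelectKBest, f_classif

def hand_crafted(img: np.ndarray) -> np.ndarray:
    """Hu invariant moments of a grayscale SEM image, one of several
    possible hand-crafted descriptors."""
    return cv2.HuMoments(cv2.moments(img)).ravel()   # 7 values

def fuse_and_select(hu_feats, deep_feats, labels, k=256):
    """Concatenate hand-crafted and CNN features, then reduce the
    high-dimensional combined vector with univariate selection."""
    X = np.hstack([hu_feats, deep_feats])
    return SelectKBest(f_classif, k=k).fit_transform(X, labels)
```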
Archive | 2015
Aydın Kaya; Ahmet Burak Can
Classification of small pulmonary nodules is an important task in lung cancer diagnosis. Studies on the classification of these nodules generally concentrate on determining nodule malignancy using image features. In recent years, publicly available databases have offered researchers various types of data beyond image features. The LIDC database includes such information in the form of radiologists' annotations of nodule characteristics. In this paper, a cascaded classification method is studied to classify the malignancy of small pulmonary nodules using both nodule characteristics and image features. Results are compared with single classifiers based on nodule characteristics and image features separately.
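One plausible reading of the cascade is sketched below: a first stage predicts nodule characteristics from image features, and a second stage predicts malignancy from the image features plus the first stage's outputs. The classifier choice and wiring are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

stage1 = RandomForestClassifier(n_estimators=100)  # multi-output: characteristics
stage2 = RandomForestClassifier(n_estimators=100)  # binary: malignancy

def fit_cascade(X_img, y_characteristics, y_malignancy):
    """Train stage 1 on characteristics, then stage 2 on image
    features augmented with stage-1 predictions."""
    stage1.fit(X_img, y_characteristics)
    X2 = np.hstack([X_img, stage1.predict(X_img)])
    stage2.fit(X2, y_malignancy)

def predict_cascade(X_img):
    X2 = np.hstack([X_img, stage1.predict(X_img)])
    return stage2.predict(X2)
```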
International Conference on Image Analysis and Recognition | 2014
Aydın Kaya; Ahmet Burak Can
Predicting malignancy of small pulmonary nodules from computed tomography scans is a difficult and important problem in diagnosing lung cancer. This paper presents a rule-based fuzzy inference method for predicting the malignancy rating of small pulmonary nodules. We use the nodule characteristics provided by the Lung Image Database Consortium dataset to determine the malignancy rating. The proposed fuzzy inference method uses the outputs of ensemble classifiers and rules derived from radiologist agreement on the nodules. The results are evaluated in terms of classification accuracy and compared with single-classifier methods. We observed that the preliminary results are promising and the system is open to further development.
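A toy one-input fuzzy rule base gives the flavor of such an inference step; the membership functions, the single rule set, and the weighted-average defuzzification below are all illustrative assumptions, whereas the paper's rules come from radiologist agreement and use several characteristics.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function over [a, c] with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_malignancy(spiculation: float) -> float:
    """Rules: low spiculation -> low malignancy (1), medium -> 3,
    high -> 5; defuzzify by weighted average of rule consequents."""
    low  = tri(spiculation, -1, 1, 3)
    med  = tri(spiculation,  1, 3, 5)
    high = tri(spiculation,  3, 5, 7)
    return (low * 1 + med * 3 + high * 5) / max(low + med + high, 1e-8)

print(fuzzy_malignancy(4.2))  # 4.2 -> between moderate and high malignancy
```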
Signal Processing and Communications Applications Conference | 2013
Aydın Kaya; Ahmet Burak Can
Recognition of lung nodules and their classification as benign or malignant are very important in the diagnosis of lung cancer. Present methods for nodule classification generally concentrate on labeling a nodule as either benign or malignant but do not consider the radiographic descriptors that play an important role in the classification of small lung nodules. In this paper, features extracted from nodule images to denote radiographic descriptors are studied. Using the results of classification and dimensionality reduction approaches, we analyze which image features truly denote radiographic descriptors.
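One way to run such an analysis is to score each image-feature family by how well it predicts a given descriptor, as in the sketch below; the feature-group names and the SVM choice are illustrative, not taken from the paper.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def score_feature_groups(feature_groups: dict, descriptor_labels):
    """Cross-validate one classifier per image-feature family to see
    which family best predicts a radiographic descriptor such as
    margin or texture."""
    return {name: cross_val_score(SVC(), X, descriptor_labels, cv=5).mean()
            for name, X in feature_groups.items()}
    # e.g. {'shape': 0.71, 'intensity': 0.64, 'texture': 0.78}
```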
Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications | 2018
Huseyin Temucin; Ali Seydi Keçeli; Aydın Kaya; Hamdi Yalin Yalic; Bedir Tekinerdogan
In society, visual impairment is one of the important health issues that severely impede the daily life and welfare of many people. According to the 2014 World Health Organization (WHO) report, there are 285 million visually impaired people worldwide, and more than 400 thousand in Turkey. To support visually impaired people and help them integrate into society, several challenges need to be solved. In this study, we focus on two important issues: reading normal, non-Braille text, and face recognition. Reading normal text beyond Braille is an important activity required in people's daily personal and professional lives, while face recognition is important for social interaction and communication. To address both problems, we propose a system that can help visually impaired people recognize human faces and read normal text. The tool is based on a cloud-based architecture in which services are provided for text and face recognition. The services are based on big data analytics together with deep learning algorithms. In this chapter, we discuss the overall architecture of such a text and face recognition system, the design decisions, the key challenges, the presented analytics approaches, and the lessons learned, which could be of value to both practitioners and researchers.
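From the client's perspective, such a cloud architecture might be consumed as in the sketch below. The endpoint URL, route names, and response fields are placeholders invented for illustration, not the chapter's actual API.

```python
import requests  # hypothetical client for the cloud recognition services

SERVICE = "https://example.org/api"  # placeholder endpoint

def read_text(image_bytes: bytes) -> str:
    """Send a camera frame to the text-recognition service."""
    r = requests.post(f"{SERVICE}/ocr", files={"image": image_bytes})
    r.raise_for_status()
    return r.json()["text"]

def identify_face(image_bytes: bytes) -> str:
    """Send a camera frame to the face-recognition service."""
    r = requests.post(f"{SERVICE}/face", files={"image": image_bytes})
    r.raise_for_status()
    return r.json()["person"]
```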