Ali Seydi Keçeli
Hacettepe University
Publications
Featured research published by Ali Seydi Keçeli.
Computers & Geosciences | 2012
N. Yesiloglu-Gultekin; Ali Seydi Keçeli; Ebru Akcapinar Sezer; Ahmet Burak Can; Candan Gokceoglu; Hasan Bayhan
The geomechanical behavior of rocks is controlled mainly by their mineral content and texture. For this reason, determining the mineral content of rocks is highly important for their genetic classification and for understanding their geomechanical behavior. Conventionally, the mineral content of rocks has been determined by point counting on thin sections. However, this process is exhausting and time-consuming. This study presents a computer program, TSecSoft, that determines the mineral content of rocks. TSecSoft is developed with MATLAB R2010a, and the MATLAB scripts (m-files) are converted to a standalone application using the MATLAB Deployment Toolbox. After an initial segmentation is obtained automatically, the user can correct segments to produce a perfect segmentation. When correcting segments, the user can merge segments or divide one segment into several using the TSecSoft pen function. To assess TSecSoft's performance, point counting and TSecSoft are applied to six different thin sections prepared from granitic rock specimens, and their results are compared. The correlation coefficients between the mineral percentage values obtained from point counting and those from TSecSoft are considerably high. All results indicate that a useful and time-saving tool has been produced for determining the mineral percentages of rocks.
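As an illustration of the comparison step described above, the minimal sketch below computes the correlation coefficient between mineral percentages from point counting and from an automated tool. The numbers are hypothetical placeholders, not data from the paper.

```python
# Illustrative sketch: correlating mineral percentages from point counting
# with those from an automated segmentation tool. Values are hypothetical.
import numpy as np

point_counting = np.array([34.2, 28.5, 22.1, 10.7, 4.5])   # percentages per mineral
automated_tool = np.array([33.8, 29.1, 21.4, 11.0, 4.7])

r = np.corrcoef(point_counting, automated_tool)[0, 1]
print(f"Correlation coefficient: {r:.3f}")
```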
International Journal of Pattern Recognition and Artificial Intelligence | 2014
Ali Seydi Keçeli; Ahmet Burak Can
Human action recognition using depth sensors is an emerging technology, especially in the game console industry. Depth information can provide robust features about 3D environments and increase the accuracy of action recognition at short ranges. This paper presents an approach to recognize basic human actions using depth information obtained from the Kinect sensor. To recognize actions, features extracted from the angle and displacement information of joints are used. Actions are classified using support vector machines (SVM) and the random forest (RF) algorithm. The model is tested on the HUN-3D, MSRC-12, and MSR Action 3D datasets with various testing approaches and obtains promising results, especially with the RF algorithm. The proposed approach produces robust results independent of the dataset, using simple and computationally cheap features.
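A minimal sketch of this kind of pipeline is given below: angle and displacement features computed from skeleton joint positions, classified with the SVM and random forest models named in the abstract. The feature definitions and the synthetic data are placeholders, not the authors' implementation.

```python
# Sketch, assuming skeleton sequences shaped (frames, joints, 3).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def joint_angle(a, b, c):
    """Angle at joint b formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sequence_features(seq):
    """Per-joint displacement between first and last frame plus one example angle."""
    displacement = np.linalg.norm(seq[-1] - seq[0], axis=1)        # (joints,)
    elbow_angles = [joint_angle(f[4], f[5], f[6]) for f in seq]    # joints 4-5-6 (arbitrary)
    return np.concatenate([displacement, [np.mean(elbow_angles)]])

rng = np.random.default_rng(0)
sequences = [rng.normal(size=(30, 20, 3)) for _ in range(40)]      # 40 synthetic clips
labels = np.repeat(np.arange(4), 10)                               # 4 action classes

X = np.vstack([sequence_features(s) for s in sequences])
rf = RandomForestClassifier(n_estimators=100).fit(X, labels)
svm = SVC(kernel="rbf").fit(X, labels)
```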
international conference on pattern recognition | 2014
Ali Seydi Keçeli; Ahmet Burak Can
Human action recognition using depth information is a trending technology, especially in human-computer interaction. Depth information may provide more robust features and increase the accuracy of action recognition. This paper presents an approach to recognize basic human actions using depth information from RGB-D sensors. Features obtained from a trained skeletal model and from raw depth data are studied. Angle and displacement features derived from the skeletal model were the most useful in classification. However, HOG descriptors of gradient and depth history images derived from the depth data also improved classification performance when used together with the skeletal model features. Actions are classified with the random forest algorithm. The model is tested on the MSR Action 3D dataset and compared with some of the recent methods in the literature. According to the experiments, the proposed model produces promising results.
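As a rough sketch of combining HOG descriptors of a depth-derived image with skeletal features, the snippet below builds a simple weighted temporal sum of depth frames as a stand-in for a depth history image; this approximates the idea in the abstract but is not the authors' exact construction, and the skeletal vector is a random placeholder.

```python
# Sketch, assuming a depth sequence shaped (frames, H, W).
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
depth_seq = rng.random((30, 120, 160))                 # synthetic depth frames

weights = np.linspace(0.1, 1.0, depth_seq.shape[0])    # later frames weighted more
history_image = np.tensordot(weights, depth_seq, axes=1)
history_image /= history_image.max()

hog_features = hog(history_image, orientations=9,
                   pixels_per_cell=(16, 16), cells_per_block=(2, 2))

skeletal_features = rng.random(60)                     # placeholder skeletal vector
combined = np.concatenate([hog_features, skeletal_features])
print(combined.shape)
```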
signal processing and communications applications conference | 2017
Ali Seydi Keçeli; Aydın Kaya; Ahmet Burak Can
The use of depth sensors in activity recognition is an emerging technology in human-computer interaction and motion recognition. In this study, an approach to identify single-person activities using deep learning on depth image sequences is presented. First, a 3D volumetric template is generated using skeletal information obtained from a depth video. Features are then extracted from the generated 3D volume by taking images of it from different viewing angles. Actions are recognized by extracting deep features using the AlexNet model [1] and Histogram of Oriented Gradients (HOG) features from these images. The proposed method has been tested on the MSRAction3D [2] and UTKinect-Action3D [2] datasets. The obtained results are comparable to those of similar studies in the literature.
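A hypothetical sketch of the "views of a volume plus deep and HOG features" idea is shown below: it renders a voxel volume from a few angles and extracts features with a pre-trained AlexNet from torchvision together with HOG. The volume is random; constructing it from skeletal data is not shown, and the angles and layer choice are assumptions.

```python
# Sketch only; downloads pre-trained AlexNet weights on first run.
import numpy as np
import torch
from torchvision.models import alexnet
from scipy.ndimage import rotate
from skimage.feature import hog
from skimage.transform import resize

volume = np.random.default_rng(0).random((64, 64, 64))    # placeholder voxel volume

def project(vol, angle):
    """Rotate the volume in the x-y plane and project it to a 2D image."""
    rotated = rotate(vol, angle, axes=(0, 1), reshape=False)
    return rotated.sum(axis=2)

model = alexnet(weights="IMAGENET1K_V1").eval()
features = []
for angle in (0, 45, 90):                                  # arbitrary viewing angles
    img = project(volume, angle)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    hog_vec = hog(img, pixels_per_cell=(16, 16))
    img224 = resize(img, (224, 224))
    tensor = torch.tensor(np.stack([img224] * 3)[None], dtype=torch.float32)
    with torch.no_grad():
        deep_vec = model.features(tensor).flatten().numpy()
    features.append(np.concatenate([deep_vec, hog_vec]))
```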
signal processing and communications applications conference | 2011
Ali Seydi Keçeli; Ahmet Burak Can
As graphics processing units (GPUs) develop rapidly and become suitable for general-purpose computing, they are increasingly used to speed up the processing and analysis of medical images. With the highly parallel computation capabilities of GPUs, a large number of pixel computations can be performed in parallel. Volumetric MR or CT scans in particular may contain more than 40 slices; for this type of data, parallel processing of image slices speeds up medical workflows. In this paper, we propose a brain segmentation method that uses our parallel implementation of active contours and the K-means clustering algorithm in the CUDA environment. GPU and CPU implementations of the method are compared, and the advantages and disadvantages of using CUDA are explained.
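To illustrate the kind of per-pixel parallelism this exploits, the sketch below runs the K-means assignment and update steps on the GPU with CuPy; the original work used a custom CUDA implementation rather than CuPy, and this snippet requires a CUDA-capable GPU.

```python
# Simplified intensity-based K-means on the GPU (CuPy); not the paper's code.
import cupy as cp

def kmeans_labels(pixels, centroids):
    """pixels: (N,) intensities, centroids: (K,) -> nearest-centroid label per pixel."""
    dist = cp.abs(pixels[:, None] - centroids[None, :])   # (N, K) distances in parallel
    return cp.argmin(dist, axis=1)

def kmeans(pixels, k=3, iters=20):
    centroids = cp.linspace(float(pixels.min()), float(pixels.max()), k)
    for _ in range(iters):
        labels = kmeans_labels(pixels, centroids)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centroids[j] = pixels[mask].mean()
    return labels, centroids

slice_intensities = cp.random.random(512 * 512)           # one MR slice, flattened
labels, centroids = kmeans(slice_intensities, k=3)
```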
Signal, Image and Video Processing | 2018
Ali Seydi Keçeli; Aydın Kaya; Ahmet Burak Can
The use of depth data in activity recognition is a rapidly growing research area. This paper presents a method for recognizing single-person activities and dyadic interactions by using deep features extracted from both 3D and 2D representations constructed from depth sequences. First, a 3D volume representation is generated by considering spatiotemporal information in the depth frames of an action sequence. Then, a 3D-CNN is trained to learn features from these 3D volume representations. In addition, a 2D representation is constructed from the weighted sum of the depth sequences. This 2D representation is used with a pre-trained CNN model. Features learned from this model and from the 3D-CNN are used to train the final approach after a feature selection step. Among the various classifiers, an SVM-based model produced the best results. The proposed method was tested on the MSR-Action3D dataset for single-person activities, the SBU dataset for dyadic interactions, and the NTU RGB+D dataset for both types of actions. Experimental results show that the proposed 3D and 2D representations and the deep features extracted from them are robust and efficient. The proposed method achieves results comparable to state-of-the-art methods in the literature.
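The sketch below shows a small 3D-CNN over depth volumes and a weighted-sum 2D image, in the spirit of the two branches described above; the layer sizes, the weighting scheme, and the class count are illustrative guesses, not the published architecture.

```python
# Sketch of the two representations; the 2D image would feed a pre-trained
# 2D CNN (not shown), and the volume feeds a small 3D-CNN.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, n_classes)

    def forward(self, x):                        # x: (batch, 1, 32, 32, 32)
        x = self.features(x).flatten(1)
        return self.classifier(x)

depth_seq = torch.rand(16, 64, 64)               # (frames, H, W), synthetic
weights = torch.linspace(0.1, 1.0, depth_seq.shape[0])
image2d = (weights[:, None, None] * depth_seq).sum(0)   # weighted-sum 2D representation

volume = torch.rand(1, 1, 32, 32, 32)            # synthetic 3D volume representation
logits = Small3DCNN()(volume)
print(image2d.shape, logits.shape)
```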
Iete Journal of Research | 2017
Ali Seydi Keçeli; Ahmet Burak Can; Aydın Kaya
White matter lesions (WMLs) in the human brain are generally diagnosed using magnetic resonance (MR) images. Doctors working on WMLs generally need to calculate the volume of lesions for each patient at regular intervals in order to observe the course of the disease and manage the treatment process. This paper introduces an unsupervised automatic approach for segmentation of WMLs in the human brain. The approach consists of skull stripping, preprocessing, and lesion detection steps. Three skull stripping methods are proposed to increase the probability of successful stripping on MR image data of varying quality. After preprocessing and segmenting the lesions, the system performs volumetric calculation and 3D visualization of the lesions. This volumetric information can be used by doctors to observe changes in the lesions across regularly scanned MR images of patients. GPU-based parallel image processing techniques are utilized in the Nvidia CUDA environment to improve performance by 40–50 times. Therefore, the developed system saves doctors' time by providing them with a fast, automatic segmentation method for WMLs.
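A minimal sketch of the volumetric-calculation step is shown below: lesion volume from a binary segmentation mask and the MR voxel spacing. The mask and spacing values are hypothetical.

```python
# Lesion volume from a binary mask and voxel spacing (hypothetical values).
import numpy as np

lesion_mask = np.zeros((40, 256, 256), dtype=bool)   # slices x rows x cols
lesion_mask[18:22, 100:120, 100:118] = True          # fake segmented lesion

voxel_spacing_mm = (3.0, 0.9, 0.9)                   # slice thickness, row, col spacing
voxel_volume_mm3 = np.prod(voxel_spacing_mm)
lesion_volume_ml = lesion_mask.sum() * voxel_volume_mm3 / 1000.0
print(f"Lesion volume: {lesion_volume_ml:.2f} ml")
```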
Computers & Geosciences | 2017
Ali Seydi Keçeli; Aydın Kaya; Seda Uzunçimen Keçeli
Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology remains a low-cost and one of the most convenient ways to date deep ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies; automated image classification allows these analyses to be performed promptly. In this study, a method for automatic radiolarian image classification on Scanning Electron Microscope (SEM) images is proposed to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features, such as invariant moments, wavelet moments, Gabor features, and basic morphological features, and deep features obtained from a pre-trained Convolutional Neural Network (CNN). Feature selection is applied over the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. Results show that the deep features obtained from a pre-trained CNN are more discriminative than the hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms and has no negative effect on classification accuracy.
Highlights: A classification method for radiolarian images is proposed. A radiolarian image dataset is prepared. Combinations of different features are compared with different base classifiers. The results show that deep features are more successful than hand-crafted features. Feature selection over deep features has a positive effect on computation time.
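The sketch below combines one hand-crafted descriptor (Hu invariant moments) with deep features and a simple feature-selection step, to illustrate the kind of pipeline described above; the "deep features" here are random placeholders standing in for activations of a pre-trained CNN, and the images are synthetic.

```python
# Illustrative combination of hand-crafted and (placeholder) deep features.
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def hu_features(image):
    mu = moments_central(image)
    nu = moments_normalized(mu)
    return moments_hu(nu)                       # 7 invariant moments

images = rng.random((60, 128, 128))             # synthetic SEM-like images
labels = np.repeat(np.arange(5), 12)            # 5 placeholder species

hand_crafted = np.vstack([hu_features(im) for im in images])
deep = rng.random((60, 4096))                   # stand-in for CNN activations
X = np.hstack([hand_crafted, deep])

selector = SelectKBest(f_classif, k=100)        # reduce dimensionality
X_sel = selector.fit_transform(X, labels)
clf = SVC().fit(X_sel, labels)
```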
signal processing and communications applications conference | 2014
Ali Seydi Keçeli; Ahmet Burak Can
Usage of third-dimension (depth) information obtained from depth sensors in human action recognition has gained importance in recent years. In this study, basic human actions are recognized using a human model derived from an RGB-D sensor. Joint angles and joint displacements are used as time series, and features extracted from these time series are used to recognize actions. Actions are classified with random forest and support vector machine approaches, and classification accuracy is measured on the MSRAction-3D and MSRC-12 datasets.
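A minimal sketch of this time-series view is given below: each joint-angle or joint-displacement series is summarized with simple statistics and classified with random forest and SVM. The statistics, data shapes, and cross-validation setup are assumptions, and the synthetic data yields only chance-level accuracy.

```python
# Time-series summary features for synthetic angle/displacement series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
series = rng.random((60, 12, 30))          # 60 clips, 12 series, 30 frames each
labels = np.repeat(np.arange(4), 15)       # 4 placeholder action classes

def summarize(ts):
    """Mean, std, min, max, and range per series, concatenated."""
    return np.concatenate([ts.mean(1), ts.std(1), ts.min(1), ts.max(1),
                           ts.max(1) - ts.min(1)])

X = np.vstack([summarize(s) for s in series])
print(cross_val_score(RandomForestClassifier(n_estimators=100), X, labels, cv=3).mean())
print(cross_val_score(SVC(), X, labels, cv=3).mean())
```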
International Journal of Computer Theory and Engineering | 2014
Sinan O. Altinuc; Ali Seydi Keçeli; Ebru Akcapinar Sezer
Semi-Automated Shoreline Extraction in Satellite Imagery and Usage of Fractals as Performance Evaluator