Caglar Aytekin
Tampere University of Technology
Publication
Featured research published by Caglar Aytekin.
International Conference on Image Processing | 2013
Caglar Aytekin; Serkan Kiranyaz; Moncef Gabbouj
An automatic object extraction method is proposed that exploits the rich mathematical structure of quantum mechanics. First, a novel segmentation method based on the solutions of Schrödinger's equation is proposed. This powerful segmentation method allows us to model complex objects and the inherent structure of edge, shape, and texture information, along with grey-level intensity uniformity, all in a single equation. Because the proposed method extracts a large number of segments, the object segment is selected by maximizing a regularization energy function based on a recently proposed sub-segment analysis that indicates the object boundaries. The proposed automatic object extraction method achieves a promising accuracy that pushes the frontier in this field toward purely input-driven processing, without the use of “object knowledge” aided by long-term human memory and intelligence.
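As an illustration of the general idea only (not the authors' exact formulation), a discrete time-independent Schrödinger equation can be solved on a 1-D intensity profile, using the intensity itself as the potential; the ground-state density then behaves like a soft segmentation map that concentrates in low-potential regions. The function name, the `coupling` parameter, and the toy signal below are all illustrative assumptions:

```python
import numpy as np

def ground_state_map(intensity, coupling=1.0):
    """Sketch: solve a discrete Schrodinger eigenproblem H = -coupling*L + V
    on a 1-D intensity profile, where L is the second-difference (Laplacian)
    matrix and V = diag(intensity). The squared ground-state eigenvector is
    a probability density that concentrates inside low-potential regions."""
    n = len(intensity)
    # 1-D discrete Laplacian (second-difference matrix)
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    H = -coupling * L + np.diag(intensity)  # discrete Hamiltonian
    vals, vecs = np.linalg.eigh(H)          # eigenvalues in ascending order
    psi = vecs[:, 0]                        # ground state (lowest energy)
    return psi ** 2                         # probability density, sums to 1

# A dark "object" (a low potential well) inside a bright background:
signal = np.array([9, 9, 9, 1, 1, 1, 9, 9, 9], dtype=float)
density = ground_state_map(signal)
print(int(np.argmax(density)))  # the density peaks inside the well
```

The same construction extends to 2-D images by replacing the 1-D Laplacian with a grid Laplacian.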
Systems, Man, and Cybernetics | 2015
Caglar Aytekin; Yousef Rezaeitabar; Sedat Dogru; Ilkay Ulusoy
In this paper, a real-time railway fastener detection system using a high-speed laser range finder camera is presented. First, an extensive analysis of various methods based on pixel-wise and histogram similarities is conducted on a specific railway route. Then, a fusion stage is introduced that combines the least correlated approaches, also taking into account the performance gain after fusion. The resulting method is then tested on a larger database collected from a different railway route. After observing repeated successes, the method is implemented in NI LabVIEW and run in real time with a high-speed 3-D camera placed under a railway carriage designed for railway quality inspection.
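The fusion step can be sketched as follows; the detector names, score series, and the simple averaging rule are hypothetical placeholders, shown only to illustrate the idea of combining the least correlated approaches:

```python
def pearson(a, b):
    """Pearson correlation between two equal-length score series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def fuse_least_correlated(scores):
    """Among candidate detectors, pick the pair whose score series are least
    correlated (most complementary evidence) and average their scores."""
    best, best_pair = 2.0, None
    names = list(scores)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = abs(pearson(scores[names[i]], scores[names[j]]))
            if r < best:
                best, best_pair = r, (names[i], names[j])
    a, b = best_pair
    fused = [(x + y) / 2 for x, y in zip(scores[a], scores[b])]
    return fused, best_pair

# Hypothetical per-frame detection scores from three candidate methods:
scores = {
    "pixelwise": [1.0, 2.0, 3.0, 4.0],
    "hist_a":    [2.0, 4.0, 6.0, 8.0],  # perfectly correlated with pixelwise
    "hist_b":    [4.0, 1.0, 3.0, 2.0],
}
fused, pair = fuse_least_correlated(scores)
```

Here `pixelwise` and `hist_b` are selected, since the two histogram-based series would otherwise duplicate the same evidence.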
International Conference on Image Processing | 2015
Caglar Aytekin; Ezgi Can Ozan; Serkan Kiranyaz; Moncef Gabbouj
In this study, we propose an unsupervised, state-of-the-art saliency map generation algorithm based on Quantum Cuts, a recently proposed link between quantum mechanics and spectral graph clustering. The proposed algorithm forms a graph among superpixels extracted from an image and optimizes a criterion related to image boundary, local contrast, and area information. Furthermore, the effects of graph connectivity, superpixel shape irregularity, and superpixel size, as well as how to determine the affinity between superpixels, are analyzed in detail. In addition, we introduce a novel approach for proposing several saliency maps. The resulting saliency maps consistently achieve state-of-the-art performance on a large number of publicly available benchmark datasets in this domain, containing around 18k images in total.
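A minimal sketch of a Quantum-Cuts-style spectral step, assuming a graph Laplacian over superpixels plus a large potential on boundary superpixels (this is illustrative, not the published implementation; the feature vectors, adjacency, and potential value are toy assumptions):

```python
import numpy as np

def quantum_cuts_sketch(features, adjacency, on_boundary, v_boundary=100.0):
    """Nodes are superpixels; affinities come from feature similarity;
    boundary superpixels receive a large potential so the ground state
    (the saliency map) is pushed away from the image border."""
    n = len(features)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i][j]:
                W[i, j] = np.exp(-np.linalg.norm(features[i] - features[j]) ** 2)
    L = np.diag(W.sum(axis=1)) - W                           # graph Laplacian
    V = np.diag([v_boundary if b else 0.0 for b in on_boundary])
    vals, vecs = np.linalg.eigh(L + V)                       # Hamiltonian L + V
    psi = vecs[:, 0]                                         # ground state
    return psi ** 2                                          # saliency per node

# Four superpixels in a chain; the two end ones touch the image boundary:
feats = np.array([[0.0], [0.1], [0.9], [1.0]])
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
sal = quantum_cuts_sketch(feats, adj, on_boundary=[True, False, False, True])
```

The interior superpixels receive far higher saliency than the boundary-touching ones, which is the intended effect of the boundary potential.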
Pattern Recognition Letters | 2016
Caglar Aytekin; Serkan Kiranyaz; Moncef Gabbouj
A state-of-the-art multi-resolution saliency detection method is presented. A set of saliency maps is generated via multispectral analysis. The saliency maps are converted to segments by post-processing. The proposed multiple segments are then ranked via a learned ranking algorithm. In this paper, a learn-to-rank algorithm is proposed and applied over the segment pool of salient objects generated by an extension of the unsupervised Quantum Cuts algorithm. Quantum Cuts is extended in a multi-resolution approach as follows. First, superpixels are extracted from the input image using the simple linear iterative clustering (SLIC) algorithm; second, a scale-space decomposition is applied prior to Quantum Cuts in order to capture salient details at different scales; and third, a multispectral approach is followed to generate multiple proposals instead of a single proposal as in Quantum Cuts. The proposed learn-to-rank algorithm is then applied to these multiple proposals in order to select the most appropriate one. Shape and appearance features are extracted from the proposed segments and regressed with respect to a given confidence measure, resulting in a ranked list of proposals. This ranking yields consistent improvements on an extensive collection of benchmark datasets containing around 18k images. Our analysis of the random forest regression models trained on different datasets shows that, although these datasets have quite different characteristics, a model trained on the most complex dataset consistently provides performance improvements on all the other datasets, hence yielding robust salient object segmentation with a significant performance gap compared to the competing methods.
Pattern Recognition | 2018
Caglar Aytekin; Alexandros Iosifidis; Moncef Gabbouj
In this paper, we model the salient object detection problem under a probabilistic framework, encoding the boundary connectivity saliency cue and smoothness constraints into an optimization problem. We show that this problem has a closed-form globally optimal solution, which estimates the salient object. We further show that, along with the probabilistic framework, the proposed method also enjoys a wide range of interpretations, i.e. graph cut, diffusion maps, and one-class classification. Through an analysis according to these interpretations, we also find that our proposed method approximates the global optimum of another criterion that integrates local/global contrast and large-area saliency cues. The proposed unsupervised approach achieves leading performance in most cases compared to state-of-the-art unsupervised algorithms over a large set of salient object detection datasets, including around 17k images, for several evaluation metrics. Furthermore, the computational complexity of the proposed method is favorable or comparable to that of many state-of-the-art unsupervised techniques.
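The closed-form flavor of such objectives can be illustrated with a small, hypothetical quadratic problem (not the paper's exact model): minimizing x^T (L + diag(v)) x subject to sum(x) = 1, where L enforces smoothness and v penalizes boundary-connected nodes, has the Lagrangian optimum x* = A^{-1}1 / (1^T A^{-1} 1) with A = L + diag(v). The graph and penalties below are toy assumptions:

```python
import numpy as np

def closed_form_saliency(W, v):
    """Solve min_x x^T (L + diag(v)) x  s.t.  sum(x) = 1 in closed form.
    L = D - W is the graph Laplacian; v penalizes boundary-connected nodes.
    The stationarity condition A x = const * 1 gives x = A^{-1} 1, rescaled
    so the entries sum to one."""
    L = np.diag(W.sum(axis=1)) - W
    A = L + np.diag(v)
    x = np.linalg.solve(A, np.ones(len(v)))
    return x / x.sum()

# Three nodes in a chain; the two ends carry a heavy boundary penalty:
W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = closed_form_saliency(W, v=[10.0, 0.1, 10.0])
```

The lightly penalized middle node ends up with the highest saliency value, with no iterative optimization required.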
IEEE Transactions on Multimedia | 2018
Caglar Aytekin; Thomas Mauthner; Serkan Kiranyaz; Horst Bischof; Moncef Gabbouj
We present a novel approach for spatiotemporal saliency detection by optimizing a unified criterion of color contrast, motion contrast, appearance, and background cues. To this end, we first abstract the video by temporal superpixels. Second, we propose a novel graph structure exploiting the saliency cues to assign the edge weights. The salient segments are then extracted by applying a spectral foreground detection method, Quantum Cuts, on this graph. We evaluate our approach on several public datasets for video saliency and activity localization to demonstrate the favorable performance of the proposed video quantum cuts compared to the state of the art.
Pattern Recognition | 2017
Caglar Aytekin; Alexandros Iosifidis; Serkan Kiranyaz; Moncef Gabbouj
In this paper, we propose a novel method for learning graph affinities for salient object detection. First, we assume that a graph representation of an image is given, with a predetermined connectivity rule and representative features for each of its nodes. Then, we learn to predict affinities over this graph that ensure decent salient object detection performance when used with a spectral-graph-based foreground detection method. To accomplish this task, we modify convolutional kernel networks (CKNs), originally proposed to predict similarities between images, for graph affinity calculation. Subsequently, we employ a spectral-graph-based salient object detection method, Extended Quantum Cuts (EQCut), using these graph affinities. We show that the salient object detection error of such a system is differentiable with respect to the parameters of the CKN. Therefore, the proposed system can be trained end-to-end by applying error backpropagation, and the CKN parameters can be learned for the salient object detection task. Comparative evaluations over a large set of benchmark datasets indicate that the proposed method imposes an insignificant computational burden on, but significantly outperforms, the baseline EQCut, which uses color affinities, and achieves a performance level comparable to the state-of-the-art in some performance measures. We propose a method for learning graph affinities for salient object detection. We employ CKNs with a global inference layer, EQCut, for salient object detection. We provide backpropagation rules for both the CKN and EQCut for parameter learning. The proposed system is trained end-to-end for performance enhancement.
Proceedings of SPIE | 2011
Caglar Aytekin; A. Aydin Alatan
A novel bag-of-visual-words algorithm is presented with two extensions over its classical version: exploiting scale information and weighting visual words. The scale information already extracted by the SIFT detector is included as an additional element of the SIFT key-point descriptor, while the visual words are weighted during histogram assignment in proportion to their importance, measured by the ratio of their occurrences on the object to their occurrences in the background. The algorithm is tested on different geo-spatial object classes, and the performance of the proposed algorithm is compared against the classical bag-of-visual-words approach. Based on these results, a significant improvement in detection performance is observed.
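The word-weighting extension can be sketched as follows; the function name, the count dictionaries, and the toy numbers are illustrative assumptions, not the paper's exact implementation:

```python
def weighted_bovw_histogram(word_ids, object_counts, background_counts, num_words):
    """Each visual word's vote is scaled by how often it occurs on the object
    class relative to the background, so words that are discriminative for
    the object dominate the final (normalized) histogram."""
    hist = [0.0] * num_words
    for w in word_ids:
        # importance = object occurrences / background occurrences
        weight = object_counts.get(w, 0) / max(background_counts.get(w, 0), 1)
        hist[w] += weight
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Word 0 appears mostly on objects, word 1 mostly in the background:
h = weighted_bovw_histogram([0, 1], {0: 9, 1: 1}, {0: 1, 1: 9}, num_words=2)
```

Even though both words occur once in the test image, the object-discriminative word dominates the histogram after weighting.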
Signal Processing and Communications Applications Conference | 2010
Caglar Aytekin; A. Aydin Alatan
Shadows, caused by the occlusion of sunlight, degrade aerial images and can strongly hinder algorithms such as segmentation and object recognition. This paper describes a novel shadow restoration algorithm for aerial images based on atmospheric effects and the characteristics of sunlight. In this work, shadow regions are first detected by exploiting the Rayleigh scattering phenomenon and the fact that shadows have low illumination intensity. After detection, shadow restoration is achieved by first restoring partially occluded shadow areas, modeling these transition regions with a sigmoid function. Fully occluded shadow regions are then restored by first segmenting the image into uniformly illuminated regions and then multiplying the intensity in each region by a constant, determined by the ratio of intensities between the segment and its non-shadow neighborhood. The experimental results compare favorably with those of other methods in the literature.
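The restoration constant for a fully occluded segment can be sketched as below; the function name and toy intensity values are illustrative assumptions:

```python
def restore_shadow_segment(shadow_pixels, neighbor_pixels):
    """Scale a uniformly illuminated shadow segment by the ratio between the
    mean intensity of its non-shadow neighborhood and its own mean intensity,
    as in the fully-occluded restoration step described above."""
    mean_shadow = sum(shadow_pixels) / len(shadow_pixels)
    mean_neighbor = sum(neighbor_pixels) / len(neighbor_pixels)
    k = mean_neighbor / mean_shadow   # per-segment restoration constant
    return [p * k for p in shadow_pixels]

# A dark shadow segment next to a bright non-shadow neighborhood:
restored = restore_shadow_segment([20, 22, 18], [120, 130, 110])
```

Note that the scaling preserves the relative intensity variation inside the segment while lifting its mean to match the neighborhood.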
IEEE Transactions on Image Processing | 2018
Caglar Aytekin; Jarno Nikkanen; Moncef Gabbouj
In this paper, we provide a novel data set designed for Camera-independent color constancy research. Camera independence corresponds to the robustness of an algorithm’s performance when it runs on images of the same scene taken by different cameras. Accordingly, the images in our database correspond to several laboratory and field scenes each of which is captured by three different cameras with minimal registration errors. The laboratory scenes are also captured under five different illuminations. The spectral responses of cameras and the spectral power distributions of the laboratory light sources are also provided, as they may prove beneficial for training future algorithms to achieve color constancy. For a fair evaluation of future methods, we provide guidelines for supervised methods with indicated training, validation, and testing partitions. Accordingly, we evaluate two recently proposed convolutional neural network-based color constancy algorithms as baselines for future research. As a side contribution, this data set also includes images taken by a mobile camera with color shading corrected and uncorrected results. This allows research on the effect of color shading as well.
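For context on evaluation, color constancy methods are commonly scored with the recovery angular error between the estimated and ground-truth illuminant vectors; a minimal helper in that spirit (not part of the dataset's own tooling) might look like:

```python
import math

def angular_error_deg(est_rgb, gt_rgb):
    """Angle, in degrees, between the estimated and ground-truth illuminant
    vectors in RGB space; 0 means a perfect illuminant estimate."""
    dot = sum(e * g for e, g in zip(est_rgb, gt_rgb))
    ne = math.sqrt(sum(e * e for e in est_rgb))
    ng = math.sqrt(sum(g * g for g in gt_rgb))
    cos = max(-1.0, min(1.0, dot / (ne * ng)))  # clamp for float safety
    return math.degrees(math.acos(cos))

# Example: a grey-world estimate versus a yellowish ground-truth illuminant:
err = angular_error_deg([1.0, 1.0, 1.0], [1.0, 1.0, 0.0])
```

The error is invariant to the overall brightness of either vector, which is why it is the standard metric for comparing illuminant estimates across cameras.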