Cem Direkoglu
University of Southampton
Publications
Featured research published by Cem Direkoglu.
Pattern Recognition | 2011
Cem Direkoglu; Mark S. Nixon
We introduce a new multiscale Fourier-based object description in 2-D space using a low-pass Gaussian filter (LPGF) and a high-pass Gaussian filter (HPGF), separately. Using the LPGF at different scales (standard deviations) represents the inner and central part of an object more than the boundary. On the other hand, using the HPGF at different scales represents the boundary and exterior parts of an object more than the central part. Our algorithms are also organized to achieve size, translation and rotation invariance. Evaluation indicates that representing the boundary and exterior parts more than the central part using the HPGF performs better than the LPGF-based multiscale representation, and better than Zernike moments and elliptic Fourier descriptors, with respect to increasing noise. Multiscale description using the HPGF in 2-D also outperforms wavelet transform-based multiscale contour Fourier descriptors and performs similarly to the perimeter descriptors in the absence of noise.
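The sketch below is a minimal illustration (not the authors' implementation) of the general idea: a binary shape's 2-D spectrum is weighted by low-pass and high-pass Gaussian transfer functions at several scales, and magnitudes of the dominant coefficients are collected into a descriptor. The function names, the choice of coefficients and the crude normalisation are assumptions for illustration only.

```python
# A minimal sketch of a multiscale Fourier descriptor built with low-pass and
# high-pass Gaussian filters (illustrative only, not the paper's algorithm).
import numpy as np

def gaussian_filters(shape, sigma):
    """Return centred low-pass and high-pass Gaussian transfer functions."""
    rows, cols = shape
    u = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    v = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    lpgf = np.exp(-(u**2 + v**2) / (2.0 * sigma**2))   # low-pass Gaussian
    hpgf = 1.0 - lpgf                                   # high-pass Gaussian
    return lpgf, hpgf

def multiscale_descriptor(shape_img, sigmas, use_highpass=True, n_coeffs=32):
    """Concatenate dominant magnitude coefficients across scales.

    Fourier magnitudes give translation invariance; the division by the
    largest coefficient is a crude stand-in for size normalisation.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(shape_img.astype(float)))
    descriptor = []
    for sigma in sigmas:
        lpgf, hpgf = gaussian_filters(shape_img.shape, sigma)
        weighted = spectrum * (hpgf if use_highpass else lpgf)
        mags = np.sort(np.abs(weighted).ravel())[::-1][:n_coeffs]
        descriptor.append(mags / (mags[0] + 1e-12))
    return np.concatenate(descriptor)

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[20:44, 16:48] = 1.0   # toy rectangular shape
    d = multiscale_descriptor(img, sigmas=[0.05, 0.1, 0.2])
    print(d.shape)   # one 32-element block per scale
```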
International Conference on Acoustics, Speech, and Signal Processing | 2005
Turgay Celik; Cem Direkoglu; Huseyin Ozkaramanli; Hasan Demirel; Mustafa Uyguroglu
Facial feature extraction is a fundamental problem in image processing. Correct extraction of features is essential for the success of many applications. Typical feature extraction algorithms fail for low-resolution images that do not contain sufficient facial detail. A region-based, super-resolution-aided facial feature extraction method for low-resolution video sequences is described. The region-based approach uses the segmented face as the region of interest, whereby a significant reduction in the computational burden of the super-resolution algorithm is achieved. The results indicate that the region-based, super-resolution-aided extraction algorithm provides a significant performance improvement in terms of correctly and accurately locating the facial feature points.
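As a rough illustration of the region-based idea (under stated assumptions, not the paper's algorithm), the sketch below processes only a segmented face region of interest: several low-resolution frames are registered with phase correlation and their upsampled patches are fused, which keeps the cost well below full-frame super-resolution. The shift-and-average fusion and the ROI box are illustrative stand-ins.

```python
# Region-of-interest super-resolution sketch: register low-res face patches by
# phase correlation, upsample and average them (illustrative stand-in only).
import numpy as np
from scipy.ndimage import shift, zoom

def roi_super_resolve(frames, roi, factor=2):
    """frames: list of 2-D grayscale arrays; roi: (r0, r1, c0, c1) face box."""
    r0, r1, c0, c1 = roi
    reference = frames[0][r0:r1, c0:c1].astype(float)
    fused = zoom(reference, factor, order=3)
    f_ref = np.fft.fft2(reference)
    for frame in frames[1:]:
        patch = frame[r0:r1, c0:c1].astype(float)
        # Phase correlation gives the integer translation that aligns the patch.
        cross_power = f_ref * np.conj(np.fft.fft2(patch))
        cross_power /= np.abs(cross_power) + 1e-12
        corr = np.abs(np.fft.ifft2(cross_power))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        dy -= corr.shape[0] if dy > corr.shape[0] // 2 else 0
        dx -= corr.shape[1] if dx > corr.shape[1] // 2 else 0
        fused += zoom(shift(patch, (dy, dx)), factor, order=3)
    return fused / len(frames)   # fused higher-resolution estimate of the face ROI

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((60, 80))
    frames = [base + 0.05 * rng.standard_normal(base.shape) for _ in range(4)]
    hi_res_face = roi_super_resolve(frames, roi=(10, 50, 20, 70), factor=2)
    print(hi_res_face.shape)   # (80, 100)
```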
International Conference on Multimedia Retrieval | 2013
Suzanne Little; Iveel Jargalsaikhan; Kathy Clawson; Marcos Nieto; Hao Li; Cem Direkoglu; Noel E. O'Connor; Alan F. Smeaton; Bryan W. Scotney; Hui Wang; Jun Liu
This paper presents work on integrating multiple computer vision-based approaches to surveillance video analysis to support user retrieval of video segments showing human activities. Applied computer vision using real-world surveillance video data is an extremely challenging research problem, independently of any information retrieval (IR) issues. Here we describe the issues faced in developing both generic and specific analysis tools and how they were integrated for use in the new TRECVid interactive surveillance event detection task. We present an interaction paradigm and discuss the outcomes from face-to-face end-user trials and the resulting feedback on the system from both professionals who manage surveillance video and computer vision or machine learning experts. We propose an information retrieval approach to finding events in surveillance video rather than relying solely on traditional annotation using specifically trained classifiers.
The Computer Journal | 2011
Mark S. Nixon; Xin U. Liu; Cem Direkoglu; David J. Hurley
There is a rich literature of approaches to image feature extraction in computer vision. Many sophisticated approaches exist for low- and high-level feature extraction, but they can be complex to implement, with parameter choice guided by experimentation and with performance analysis and optimization impeded by speed of computation. We have developed new feature extraction techniques based on notional use of physical paradigms, with parametrization intended to be more familiar to a scientifically trained user and aiming to make best use of computational resources. This paper is the first unified description of these new approaches, outlining their basis and the results that can be achieved. We describe how gravitational force can be used for low-level analysis, while analogies of water flow and heat can be deployed to achieve high-level smooth shape detection, determining features and shapes in a selection of images and comparing results with those of stock approaches from the literature. We also aim to show that the implementation is consistent with the original motivations for these techniques and so contend that the exploration of physical paradigms offers a promising avenue for new approaches to feature extraction in computer vision.
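A minimal sketch of the gravitational analogy for low-level analysis is given below, under the assumption (not taken from the paper) that each pixel acts as a mass proportional to its intensity and contributes an inverse-square force to every other pixel; the field is accumulated by convolution and its magnitude used as a low-level feature map. The kernel radius and the use of the magnitude alone are illustrative choices.

```python
# Gravitational-style force-field sketch: accumulate inverse-square attraction
# from intensity "masses" by convolution (illustrative, not the authors' exact form).
import numpy as np
from scipy.signal import fftconvolve

def force_field(image, radius=15):
    """Return the magnitude of a gravitational-style force field."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    r2 = x**2 + y**2
    r2[radius, radius] = np.inf            # a pixel exerts no force on itself
    kernel_x = x / r2**1.5                 # unit vector / r^2, x component
    kernel_y = y / r2**1.5                 # unit vector / r^2, y component
    fx = fftconvolve(image, kernel_x, mode="same")
    fy = fftconvolve(image, kernel_y, mode="same")
    return np.hypot(fx, fy)                # field magnitude as a low-level feature

if __name__ == "__main__":
    img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
    field = force_field(img)
    print(field.max())
```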
International Conference on Signal Processing | 2008
Cem Direkoglu; Mark S. Nixon
In shape recognition, the boundary and exterior parts are amongst the most discriminative features. In this paper, we propose new multiscale Fourier-based object descriptors in 2-D space, which represent the boundary and exterior parts of an object more than the central part. This representation is based on using a high-pass Gaussian filter at different scales. The proposed algorithm makes the descriptors size, translation and rotation invariant, while also increasing discriminative power and immunity to noise. In comparison, the new algorithm performs better than elliptic Fourier descriptors and Zernike moments with respect to increasing noise.
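The short numerical check below illustrates why translation invariance comes essentially for free in Fourier-magnitude descriptors of this kind: shifting the shape only changes the phase of its 2-D spectrum. The normalisations mentioned in the comments are plausible assumptions, not the authors' specific construction.

```python
# Translation invariance of 2-D Fourier magnitudes (illustrative check).
import numpy as np

shape = np.zeros((64, 64)); shape[10:30, 10:40] = 1.0
shifted = np.roll(np.roll(shape, 15, axis=0), 8, axis=1)

mag_a = np.abs(np.fft.fft2(shape))
mag_b = np.abs(np.fft.fft2(shifted))
print(np.allclose(mag_a, mag_b))   # True: translation leaves magnitudes intact
# Size invariance can then be approximated by dividing by the DC term, and
# rotation invariance by resampling the magnitude spectrum on polar rings
# (assumptions about one plausible normalisation, not the paper's method).
```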
International Conference on Image Processing | 2013
Iveel Jargalsaikhan; Suzanne Little; Cem Direkoglu; Noel E. O'Connor
We present a method that extracts effective features from videos for human action recognition. The proposed method analyses the 3D volumes along the sparse motion trajectories of a set of interest points from the video scene. To represent human actions, we generate a Bag-of-Features (BoF) model based on the extracted features, and finally a support vector machine is used to classify human activities. Evaluation shows that the proposed features are discriminative and computationally efficient. Our method achieves state-of-the-art performance on the standard human action recognition benchmarks, namely the KTH and Weizmann datasets.
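A minimal Bag-of-Features plus SVM pipeline is sketched below using scikit-learn. The local descriptors are random stand-ins for the trajectory-aligned 3-D volume features described above; only the vocabulary construction, histogram quantisation and classification stages are shown, and all names and parameters are illustrative.

```python
# Bag-of-Features action classification sketch (stand-in descriptors).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def video_descriptors(label, n=200, dim=64):
    """Placeholder local descriptors for one video (not real trajectory features)."""
    return rng.standard_normal((n, dim)) + label   # class-dependent offset

train_videos = [(video_descriptors(c), c) for c in (0, 1) for _ in range(10)]

# 1. Build the visual vocabulary by clustering all training descriptors.
codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
codebook.fit(np.vstack([d for d, _ in train_videos]))

def bof_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()                        # normalised word histogram

# 2. Train an SVM on the per-video histograms.
X = np.array([bof_histogram(d) for d, _ in train_videos])
y = np.array([c for _, c in train_videos])
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# 3. Classify a new video.
print(clf.predict([bof_histogram(video_descriptors(1))]))   # expect [1]
```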
International Journal of Computer Vision | 2012
Cem Direkoglu; Rozenn Dahyot; Michael Manzke
We present a novel and effective skeletonization algorithm for binary and gray-scale images, based on the anisotropic heat diffusion analogy. We diffuse the image in the direction normal to the feature boundaries and also allow tangential (curvature-decreasing) diffusion to contribute slightly. The proposed anisotropic diffusion provides a high-quality medial function in the image: it removes noise and preserves the prominent curvatures of the shape along the level sets (skeleton features). The skeleton strength map, which provides the likelihood that a point is part of the skeleton, is defined by the mean curvature measure. Finally, a thin, binary skeleton is obtained by non-maxima suppression and hysteresis thresholding of the skeleton strength map. Our method outperforms the most closely related and the most popular skeleton extraction methods, especially in noisy conditions. Results show that the proposed approach is better at handling noise in images and at preserving the skeleton features along the centerline of the shape.
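The sketch below conveys the skeleton-strength-map idea in its simplest form, with plain isotropic heat diffusion standing in for the paper's anisotropic normal/tangential scheme, and a crude threshold standing in for non-maxima suppression and hysteresis: the diffused shape becomes a smooth hill, and the mean (isophote) curvature of its level sets peaks along the medial axis.

```python
# Skeleton strength map via diffusion + level-set mean curvature (approximate
# stand-in for the paper's anisotropic scheme; illustrative only).
import numpy as np

def diffuse(img, n_iter=200, dt=0.2):
    """Isotropic heat diffusion (stand-in for the anisotropic diffusion)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap
    return u

def skeleton_strength(img):
    u = diffuse(img)
    uy, ux = np.gradient(u)
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)[0], np.gradient(ux)[1]
    # Mean (isophote) curvature of the level sets of the diffused image.
    num = uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2
    return np.abs(num) / (ux**2 + uy**2 + 1e-8) ** 1.5

if __name__ == "__main__":
    shape = np.zeros((80, 80)); shape[30:50, 15:65] = 1.0   # elongated rectangle
    ssm = skeleton_strength(shape)
    skeleton = ssm > 0.5 * ssm.max()   # crude threshold instead of NMS + hysteresis
    print(np.argwhere(skeleton).mean(axis=0))   # lies near the shape centre
```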
Indian Conference on Computer Vision, Graphics and Image Processing | 2008
Cem Direkoglu; Mark S. Nixon
In shape recognition, a multiscale description provides more information about the object, increasing discrimination power and immunity to noise. In this paper, we develop a new multiscale Fourier-based object description in 2-D space using a low-pass Gaussian filter (LPGF) and a high-pass Gaussian filter (HPGF), separately. Using the LPGF at different scales represents the inner and central part of an object more than the boundary. On the other hand, using the HPGF at different scales represents the boundary and exterior parts of an object more than the central part. Our algorithms are also organized to achieve size, translation and rotation invariance. Evaluation indicates that representing the boundary and exterior parts more than the central part using the HPGF performs better than the LPGF-based multiscale representation, and better than Zernike moments and elliptic Fourier descriptors, with respect to increasing noise.
Pattern Recognition Letters | 2011
Cem Direkoglu; Mark S. Nixon
In this paper, a new and automatic moving-edge detection algorithm is proposed, based on the heat flow analogy. The algorithm starts with anisotropic heat diffusion in the spatial domain to remove noise and sharpen region boundaries for the purpose of obtaining high-quality edge data. Then, isotropic and linear heat diffusion is applied in the temporal domain to calculate the total amount of heat flow. The moving edges are represented as the total amount of heat flowing out from the reference frame. The overall process is completed by non-maxima suppression and hysteresis thresholding to obtain binary moving edges. Evaluation on a variety of data indicates that this approach can handle noise in the temporal domain because of the averaging inherent in isotropic heat flow. Results also show that this technique can detect moving edges in image sequences without background image subtraction.
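The rough sketch below is a stand-in for this pipeline, not the authors' formulation: an isotropic Gaussian replaces the spatial anisotropic diffusion, the temporal heat flowing out of the reference frame is approximated by Gaussian-weighted temporal differences, and a simple threshold replaces non-maxima suppression and hysteresis.

```python
# Moving-edge map via a temporal "heat outflow" approximation (illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter

def moving_edge_map(frames, ref_index=0, spatial_sigma=1.5, temporal_sigma=1.0):
    smoothed = [gaussian_filter(f.astype(float), spatial_sigma) for f in frames]
    ref = smoothed[ref_index]
    outflow = np.zeros_like(ref)
    for t, frame in enumerate(smoothed):
        if t == ref_index:
            continue
        weight = np.exp(-((t - ref_index) ** 2) / (2.0 * temporal_sigma**2))
        outflow += weight * np.abs(ref - frame)      # heat leaving the reference
    return outflow

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = [rng.random((64, 64)) * 0.05 for _ in range(5)]
    for t, f in enumerate(frames):
        f[20:40, 10 + 3 * t:30 + 3 * t] += 1.0       # a block moving to the right
    edges = moving_edge_map(frames)
    binary = edges > 0.5 * edges.max()               # crude stand-in for NMS + hysteresis
    print(binary.sum())
```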
International Symposium on Visual Computing | 2006
Cem Direkoglu; Mark S. Nixon
In this paper, an intelligent and automatic moving-object edge detection algorithm is proposed, based on the heat flow analogy. The algorithm starts with anisotropic heat diffusion in the spatial domain to remove noise and sharpen region boundaries for the purpose of obtaining high-quality edge data. Then, isotropic heat diffusion is applied in the temporal domain to calculate the total amount of heat flow. The moving edges are represented as the total amount of heat flowing out from the reference frame. The overall process is completed by non-maxima suppression and hysteresis thresholding to obtain binary moving edges. Evaluation results indicate that this approach has advantages in handling noise in the temporal domain because of the averaging inherent in isotropic heat flow. Results also show that this technique can detect moving edges in image sequences.