
Publications


Featured research published by Ilkay Ulusoy.


International Conference on Machine Learning | 2005

The 2005 PASCAL Visual Object Classes Challenge

Mark Everingham; Andrew Zisserman; Christopher K. I. Williams; Luc Van Gool; Moray Allan; Christopher M. Bishop; Olivier Chapelle; Navneet Dalal; Thomas Deselaers; Gyuri Dorkó; Stefan Duffner; Jan Eichhorn; Jason Farquhar; Mario Fritz; Christophe Garcia; Thomas L. Griffiths; Frédéric Jurie; Daniel Keysers; Markus Koskela; Jorma Laaksonen; Diane Larlus; Bastian Leibe; Hongying Meng; Hermann Ney; Bernt Schiele; Cordelia Schmid; Edgar Seemann; John Shawe-Taylor; Amos J. Storkey; Sandor Szedmak

The PASCAL Visual Object Classes Challenge ran from February to March 2005. The goal of the challenge was to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). Four object classes were selected: motorbikes, bicycles, cars and people. Twelve teams entered the challenge. In this chapter we provide details of the datasets, algorithms used by the teams, evaluation criteria, and results achieved.


Computer Vision and Pattern Recognition | 2005

Generative versus discriminative methods for object recognition

Ilkay Ulusoy; Christopher M. Bishop

Many approaches to object recognition are founded on probability theory, and can be broadly characterized as either generative or discriminative according to whether or not the distribution of the image features is modelled. Generative and discriminative methods have very different characteristics, as well as complementary strengths and weaknesses. In this paper we introduce new generative and discriminative models for object detection and classification based on weakly labelled training data. We use these models to illustrate the relative merits of the two approaches in the context of a data set of widely varying images of non-rigid objects (animals). Our results support the assertion that neither approach alone will be sufficient for large scale object recognition, and we discuss techniques for combining them.
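To make the generative/discriminative distinction concrete, here is a minimal sketch on synthetic data: a Gaussian naive Bayes classifier models the class-conditional feature distribution (generative), while logistic regression models the class posterior directly (discriminative). This is a generic illustration of the dichotomy, not the patch-based models or weakly labelled training scheme used in the paper.

```python
# Minimal illustration of the generative vs. discriminative dichotomy.
# Gaussian naive Bayes models p(x | class) and applies Bayes' rule;
# logistic regression models p(class | x) directly.
# Synthetic data only -- not the weakly labelled image features of the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB            # generative
from sklearn.linear_model import LogisticRegression   # discriminative

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gen = GaussianNB().fit(X_tr, y_tr)
dis = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("generative (GaussianNB) accuracy:", gen.score(X_te, y_te))
print("discriminative (LogisticRegression) accuracy:", dis.score(X_te, y_te))
```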


International Conference on Computer Vision | 2007

3D Object Representation Using Transform and Scale Invariant 3D Features

Erdem Akagunduz; Ilkay Ulusoy

An algorithm is proposed for 3D object representation using generic 3D features which are transformation and scale invariant. Descriptive 3D features and their relations are used to construct a graphical model for the object, which is later trained and then used for detection. Descriptive 3D features are the fundamental structures extracted from the surface of the 3D scanner output. This surface is described by mean and Gaussian curvature values at every data point at various scales, and a scale-space search is performed to extract the fundamental structures and to estimate the location and scale of each of them.
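As a single-scale illustration of the surface description (not the authors' implementation), the sketch below computes mean and Gaussian curvature for a range image treated as a Monge patch z = f(x, y) and flags peak-like points from the signs of H and K; the paper repeats this over multiple scales and runs a scale-space search over the responses.

```python
# Mean (H) and Gaussian (K) curvature of a range image z(x, y), treated as a
# Monge patch. A single-scale sketch; the paper evaluates curvatures over many
# scales and searches the resulting scale space for the fundamental structures.
import numpy as np

def mean_gaussian_curvature(z, spacing=1.0):
    zy, zx = np.gradient(z, spacing)        # first derivatives
    zxy, zxx = np.gradient(zx, spacing)     # second derivatives
    zyy, _ = np.gradient(zy, spacing)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return H, K

# Synthetic surface: a Gaussian bump standing in for scanner output.
x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
z = np.exp(-(x**2 + y**2))
H, K = mean_gaussian_curvature(z, spacing=x[0, 1] - x[0, 0])

# HK classification (with this sign convention): K > 0 and H < 0 -> peak,
# K > 0 and H > 0 -> pit, K < 0 -> saddle, both near zero -> flat.
peaks = (K > 1e-3) & (H < 0)
print("peak-like points:", int(peaks.sum()))
```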


International Journal of Remote Sensing | 2012

Unsupervised building detection in complex urban environments from multispectral satellite imagery

Örsan Aytekin; Arzu Erener; Ilkay Ulusoy; Şebnem Düzgün

A generic algorithm is presented for the automatic extraction of buildings and roads from complex urban environments in high-resolution satellite images, where the extraction of both object types at the same time enhances the performance. The proposed approach exploits spectral properties in conjunction with spatial properties, both of which provide complementary information to each other. First, a high-resolution pansharpened colour image is obtained by merging the high-resolution panchromatic (PAN) and the low-resolution multispectral images, yielding a colour image at the resolution of the PAN band. Natural and man-made regions are classified and segmented by the Normalized Difference Vegetation Index (NDVI). Shadow regions are detected by the chromaticity-to-intensity ratio in the YIQ colour space. After the classification of the vegetation and the shadow areas, the rest of the image consists of man-made areas only. The man-made areas are partitioned by mean shift segmentation, where some of the resulting segments are irrelevant to buildings in terms of shape. These artefacts are eliminated in two steps. First, each segment is thinned using morphological operations and its length is compared to a threshold determined according to the empirical length of the buildings; as a result, long segments which most probably represent roads are masked out. Second, the erroneous thin artefacts identified by principal component analysis (PCA) are removed. In parallel with PCA, small artefacts are also wiped out by morphological processing. The resultant man-made mask image is overlaid on the ground-truth image, where the buildings are previously labelled, for the accuracy assessment of the methodology. The method is applied to QuickBird images (2.4 m multispectral R, G, B and near-infrared (NIR) bands and 0.6 m PAN band) of eight different urban regions, each of which includes different properties of surface objects. The images range from simple to complex urban areas: the simple type is a regular urban area with low density and a regular building pattern, while the complex type involves almost all kinds of challenges, such as small and large buildings, regions with bare soil, vegetation areas and shadows. Although the performance of the algorithm changes slightly for various urban complexity levels, it performs well for all types of urban areas.
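A minimal sketch of the first two masking steps, assuming the pan-sharpened bands are already available as float arrays; the thresholds and the exact form of the chromaticity-to-intensity ratio are illustrative assumptions, not values from the paper.

```python
# Sketch of the vegetation and shadow masking steps, assuming the pan-sharpened
# bands are available as float arrays in [0, 1]. Thresholds and the ratio form
# are illustrative placeholders, not the values used in the paper.
import numpy as np

def vegetation_and_shadow_masks(r, g, b, nir, ndvi_thr=0.3, shadow_thr=1.0):
    # NDVI: vegetation reflects strongly in NIR and absorbs red.
    ndvi = (nir - r) / (nir + r + 1e-6)
    vegetation = ndvi > ndvi_thr

    # YIQ transform; shadow cue as a chromaticity-to-intensity ratio
    # (one common form of such a ratio, high where intensity is low).
    yy = 0.299 * r + 0.587 * g + 0.114 * b        # intensity (Y)
    qq = 0.211 * r - 0.523 * g + 0.312 * b        # chromaticity (Q)
    shadow = (qq + 1.0) / (yy + 1.0) > shadow_thr

    # Remaining pixels are treated as candidate man-made areas.
    man_made = ~(vegetation | shadow)
    return vegetation, shadow, man_made

# Example with random data standing in for a pan-sharpened QuickBird tile.
rng = np.random.default_rng(0)
r, g, b, nir = (rng.random((128, 128)) for _ in range(4))
veg, shd, mm = vegetation_and_shadow_masks(r, g, b, nir)
print(veg.mean(), shd.mean(), mm.mean())
```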


Systems, Man, and Cybernetics | 2015

Railway Fastener Inspection by Real-Time Machine Vision

Caglar Aytekin; Yousef Rezaeitabar; Sedat Dogru; Ilkay Ulusoy

In this paper, a real-time railway fastener detection system using a high-speed laser range finder camera is presented. First, an extensive analysis of various methods based on pixel-wise and histogram similarities is conducted on a specific railway route. Then, a fusion stage is introduced that combines the least correlated approaches, also taking into account the performance gain after fusion. The resulting method is then tested on a larger database collected from a different railway route. After observing repeated success, the method is implemented in NI LabVIEW and run in real time with a high-speed 3-D camera placed under a railway carriage designed for railway quality inspection.
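As a rough illustration of the histogram-similarity component (one of the measures analysed before fusion), the sketch below compares the intensity histogram of a candidate fastener region against a reference template using the correlation coefficient; the patch data and threshold are hypothetical, and the fusion stage and LabVIEW implementation are not reproduced.

```python
# Sketch of a histogram-similarity test for fastener presence: compare the
# intensity histogram of a candidate region against a reference fastener
# template. Names, data, and the threshold are illustrative assumptions.
import numpy as np

def histogram_similarity(region, template, bins=64):
    h1, _ = np.histogram(region, bins=bins, range=(0.0, 1.0), density=True)
    h2, _ = np.histogram(template, bins=bins, range=(0.0, 1.0), density=True)
    return np.corrcoef(h1, h2)[0, 1]   # 1.0 means identical distributions

def fastener_present(region, template, threshold=0.8):
    return histogram_similarity(region, template) > threshold

# Toy example with random patches standing in for range-camera crops.
rng = np.random.default_rng(1)
template = rng.normal(0.5, 0.1, (40, 80)).clip(0, 1)
good = (template + rng.normal(0, 0.02, template.shape)).clip(0, 1)  # fastener in place
missing = rng.uniform(0, 1, template.shape)                          # fastener missing
print(fastener_present(good, template), fastener_present(missing, template))
```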


International Conference on Recent Advances in Space Technologies | 2009

Automatic and unsupervised building extraction in complex urban environments from multispectral satellite imagery

Örsan Aytekin; Ilkay Ulusoy; A. Erener; H.S.B. Duzgun

This paper presents an approach for building extraction in remotely sensed images composed of low-resolution multispectral and high-resolution panchromatic bands. The proposed approach exploits spectral properties in conjunction with spatial properties, both of which provide complementary information to each other. First, a high-resolution pan-sharpened color image is obtained by merging the high-resolution panchromatic and low-resolution multispectral imagery, yielding a color image at the resolution of the panchromatic band. Natural and man-made regions are classified using the Normalized Difference Vegetation Index (NDVI). Shadow is then detected using the chromaticity-to-intensity ratio in the YIQ color space. After the classification of the vegetation and the shadow areas, the rest of the image consists of man-made areas only. The man-made areas are then partitioned by mean shift segmentation. However, some resulting segments are irrelevant to buildings in terms of shape. These artifacts are eliminated in two steps. First, each segment is thinned using morphological operations and its length is compared to a threshold specified according to the empirical length of buildings; as a result, long segments which most probably represent roads are masked out. Second, the erroneous thin artifacts are removed via principal component analysis (PCA). In parallel with PCA, small artifacts are also wiped out based on morphological processes. The resultant man-made mask image is overlaid on the ground-truth image, where the buildings are manually labeled, for the assessment of the methodology. The proposed methodology is applied to various QuickBird images. The experiments show that the methodology performs well in extracting buildings in complex environments.
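The elongated-segment elimination step described in this and the preceding abstract can be sketched as follows, using scikit-image skeletonization as a stand-in for the paper's morphological thinning; the length threshold is an arbitrary placeholder, not an empirically derived value.

```python
# Sketch of the elongated-segment elimination step: thin each segment to a
# skeleton and treat the skeleton length as an elongation cue, masking out
# long segments that most likely correspond to roads. The threshold is a
# placeholder; skeletonize() stands in for the paper's morphological thinning.
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

def remove_elongated_segments(segment_mask, max_skeleton_length=100):
    """Keep only connected components whose skeleton is shorter than the threshold."""
    kept = np.zeros_like(segment_mask, dtype=bool)
    labels = label(segment_mask)
    for region in regionprops(labels):
        component = labels == region.label
        skeleton_length = skeletonize(component).sum()  # pixel count as a length proxy
        if skeleton_length < max_skeleton_length:
            kept |= component
    return kept

# Toy example: a compact blob (kept) and a long thin strip (removed).
mask = np.zeros((200, 200), dtype=bool)
mask[20:50, 20:50] = True        # building-like blob
mask[100:103, 10:190] = True     # road-like strip
print(remove_elongated_segments(mask).sum(), mask.sum())
```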


Lecture Notes in Computer Science | 2006

Comparison of Generative and Discriminative Techniques for Object Detection and Classification

Ilkay Ulusoy; Christopher M. Bishop

Many approaches to object recognition are founded on probability theory, and can be broadly characterized as either generative or discriminative according to whether or not the distribution of the image features is modelled. Generative and discriminative methods have very different characteristics, as well as complementary strengths and weaknesses. In this chapter we introduce new generative and discriminative models for object detection and classification based on weakly labelled training data. We use these models to illustrate the relative merits of the two approaches in the context of a data set of widely varying images of non-rigid objects (animals). Our results support the assertion that neither approach alone will be sufficient for large scale object recognition, and we discuss techniques for combining the strengths of generative and discriminative approaches.


Pattern Recognition Letters | 2011

Automatic segmentation of VHR images using type information of local structures acquired by mathematical morphology

Örsan Aytekin; Ilkay Ulusoy

The morphological profile (MP) and differential morphological profile (DMP) have been used extensively to acquire spatial information for the segmentation of very high resolution (VHR) remotely sensed images. In most previous approaches, the maxima of the MP and DMP were investigated to estimate the best representative scale in the spatial domain for the pixel under consideration. Then, the object type (i.e. dark, bright or flat) was estimated based on the location of the maximum. Finally, the image segmentation was performed using the scale and type information as features. This approach usually causes over-segmentation. In this study, we also investigate the relevance of the DMP and the meaningful object types underlying the pixel of interest; however, instead of the maxima of the DMP, the type information is estimated from the whole DMP weighted by a weighting function. Thus, the scale is not estimated directly but is used indirectly in the estimation of the characteristic type of the object to which the pixel belongs. The pixels are then clustered based on their types only. The method has been applied to panchromatic high-resolution QuickBird satellite images of the city of Ankara, Turkey. The results were compared with previous studies, and the proposed method appears to segment the images more precisely and semantically than the previous approaches.
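For illustration, a simplified morphological profile and DMP can be built with plain openings and closings over increasing structuring-element radii, as below; the reconstruction-based operators typically used for MPs and the paper's weighting function over the whole DMP are not reproduced here.

```python
# Sketch of a morphological profile (MP) and differential morphological
# profile (DMP) on a panchromatic image, using plain openings/closings with
# disks of increasing radius. The paper's type estimation weights the whole
# DMP rather than taking its maximum; that weighting is not reproduced here.
import numpy as np
from skimage.morphology import opening, closing, disk

def differential_morphological_profile(image, radii=(1, 2, 4, 8, 16)):
    img = image.astype(float)
    openings = [opening(image, disk(r)).astype(float) for r in radii]
    closings = [closing(image, disk(r)).astype(float) for r in radii]
    # DMP: absolute differences between consecutive levels of each profile.
    dmp_bright = [np.abs(a - b) for a, b in zip([img] + openings[:-1], openings)]
    dmp_dark = [np.abs(a - b) for a, b in zip([img] + closings[:-1], closings)]
    # One response per scale, for bright (opening) and dark (closing) structures.
    return np.stack(dmp_bright), np.stack(dmp_dark)

# Toy panchromatic tile: a bright square on a dark background.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:30, 20:30] = 200
dmp_bright, dmp_dark = differential_morphological_profile(img)
# Scale index of the strongest response at the square's centre pixel.
print(dmp_bright.shape, dmp_bright.argmax(axis=0)[25, 25])
```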


International Conference on Recent Advances in Space Technologies | 2009

Building detection in high resolution remotely sensed images based on morphological operators

Örsan Aytekin; Ilkay Ulusoy; Esra Zeynep Abacioglu; Erhan Gokcay

Information retrieval from high-resolution remotely sensed images is a challenging issue due to the inherent complexity and the curse of dimensionality of the data under study. This paper presents an approach for building detection in high-resolution remotely sensed images that incorporates the structural information of spatial data into spectral information. The proposed approach proceeds by eliminating irrelevant areas in a hierarchical manner. As a first step, a pan-sharpened image is obtained from the multispectral and panchromatic bands of a QuickBird image. Vegetation and shadow regions are masked out using the Normalized Difference Vegetation Index (NDVI) and the ratio of hue to intensity in the YIQ model, respectively. Then, the panchromatic band is filtered by mean shift filtering to smooth structures while preserving the discontinuities near boundaries. Next, the differential morphological profile (DMP) is calculated for each pixel, and a relative measure of structure size is recorded as the first maximum of the DMP, which generates a labeled image representing connected components according to the sizes of structures. However, some connected components are irrelevant to buildings in terms of shape. To eliminate those connected components, their skeletons are obtained via thinning to derive a relative length measure, along with the areas of the connected components. These measures are individually compared to thresholds, which provides a cue for candidate building structures.
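The size-labelling step (recording the first maximum of the DMP as a relative structure-size measure) can be sketched on top of a precomputed DMP stack such as the one built in the sketch for the previous publication; the `dmp` array here is an assumed input, not data from the paper.

```python
# Sketch of the structure-size labelling step: given a per-pixel DMP stack
# (scales x rows x cols), record the scale index of the first maximum as a
# relative size measure. `dmp` is an assumed precomputed array.
import numpy as np

def first_maximum_scale(dmp, eps=1e-6):
    # argmax returns the first index attaining the maximum, matching the
    # "first maximum of the DMP"; pixels with no response are labelled -1.
    labels = dmp.argmax(axis=0)
    labels[dmp.max(axis=0) < eps] = -1
    return labels

# Toy DMP stack: 5 scales, strongest response at scale index 2 for every pixel.
dmp = np.zeros((5, 32, 32))
dmp[2] = 1.0
print(np.unique(first_maximum_scale(dmp)))   # -> [2]
```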


2008 IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS 2008) | 2008

Performance evaluation of building detection and digital surface model extraction algorithms: Outcomes of the PRRS 2008 Algorithm Performance Contest

Selim Aksoy; Bahadir Ozdemir; Sandra Eckert; Francois Kayitakire; Martino Pesaresi; Örsan Aytekin; Christoph C. Borel; Jan Cech; Emmanuel Christophe; Sebnem Duzgun; Arzu Erener; Kivanc Ertugay; Ejaz Hussain; Jordi Inglada; Sébastien Lefèvre; Ozgun Ok; Dilek Koc San; Radim Šára; Jie Shan; Jyothish Soman; Ilkay Ulusoy; Regis Witz

This paper presents the initial results of the algorithm performance contest that was organized as part of the 5th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS 2008). The focus of the 2008 contest was automatic building detection and digital surface model (DSM) extraction. A QuickBird data set with manual ground truth was used for building detection evaluation, and a stereo Ikonos data set with a highly accurate reference DSM was used for DSM extraction evaluation. Nine submissions were received for the building detection task, and three submissions were received for the DSM extraction task. We provide an overview of the data sets, the summaries of the methods used for the submissions, the details of the evaluation criteria, and the results of the initial evaluation.

Collaboration


Dive into Ilkay Ulusoy's collaborations.

Top Co-Authors

Erdem Akagunduz, Middle East Technical University
Ugur Halici, Middle East Technical University
Örsan Aytekin, Middle East Technical University
Nihan Kesim Cicekli, Middle East Technical University
Yousef Rezaeitabar, Middle East Technical University
Kemal Leblebicioglu, Middle East Technical University
Mehmet Altan Toksöz, Middle East Technical University
Murat Yirci, Middle East Technical University