Publications

Featured research published by Yalin Bastanlar.


International Journal of Computer Vision | 2012

Joint Optimization for Object Class Segmentation and Dense Stereo Reconstruction

Lubor Ladický; Paul Sturgess; Chris Russell; Sunando Sengupta; Yalin Bastanlar; William Clocksin; Philip H. S. Torr

The problems of dense stereo reconstruction and object class segmentation can both be formulated as Random Field labeling problems, in which every pixel in the image is assigned a label corresponding to either its disparity or an object class such as road or building. While these two problems are mutually informative, no attempt has been made to jointly optimize their labelings. In this work we provide a flexible framework, configured via cross-validation, that unifies the two problems and demonstrate that, by resolving ambiguities that would be present in real-world data if the two problems were considered separately, joint optimization of the two problems substantially improves performance. To evaluate our method, we augment the Leuven data set (http://cms.brookes.ac.uk/research/visiongroup/files/Leuven.zip), a stereo video shot from a car driving around the streets of Leuven, with 70 hand-labeled object class and disparity maps. We hope that the release of these annotations will stimulate further work in the challenging domain of street-view analysis. Complete source code is publicly available (http://cms.brookes.ac.uk/staff/Philip-Torr/ale.htm).
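As a rough sketch of the kind of joint random-field energy the abstract describes (the notation below is ours, not the paper's), the two labeling problems can be coupled as:

$$
E(\mathbf{o},\mathbf{d}) \;=\; \sum_i \psi_i(o_i) \;+\; \sum_i \phi_i(d_i) \;+\; \sum_{(i,j)} \psi_{ij}(o_i,o_j) \;+\; \sum_{(i,j)} \phi_{ij}(d_i,d_j) \;+\; \sum_i \chi_i(o_i,d_i)
$$

Here $o_i$ is the object-class label and $d_i$ the disparity of pixel $i$; the pairwise terms encourage smooth labelings, and the joint term $\chi_i$ is what lets each problem resolve the other's ambiguities (for example, a "road" label constraining the plausible disparities at that pixel). Minimizing $E$ yields both labelings at once.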


International Journal of Computer Vision | 2011

Calibration of Central Catadioptric Cameras Using a DLT-Like Approach

Luis Puig; Yalin Bastanlar; Peter F. Sturm; José Jesús Guerrero; João Pedro Barreto

In this study, we present a calibration technique that is valid for all single-viewpoint catadioptric cameras. We are able to represent the projection of 3D points onto a catadioptric image linearly with a 6×10 projection matrix, which uses lifted coordinates for image and 3D points. This projection matrix can be computed from 3D–2D correspondences (minimum 20 points distributed in three different planes). We show how to decompose it to obtain intrinsic and extrinsic parameters. Moreover, we use this parameter estimation followed by a non-linear optimization to calibrate various types of cameras. Our results are based on the sphere camera model, which considers that every central catadioptric system can be modeled using two projections: one from 3D points to a unit sphere, followed by a perspective projection from the sphere to the image plane. We test our method with both simulations and real images, and we analyze the results by performing a 3D reconstruction from two omnidirectional images.
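One consistent way to read the lifted-coordinate projection in this abstract (a sketch in our notation; the monomial ordering is our assumption, not necessarily the paper's) is:

$$
\hat{\mathbf{q}} \;\sim\; \mathbf{P}_{6\times 10}\,\hat{\mathbf{Q}}, \qquad
\hat{\mathbf{Q}} = \left(X^2, XY, XZ, XW, Y^2, YZ, YW, Z^2, ZW, W^2\right)^{\!\top}, \qquad
\hat{\mathbf{q}} = \left(u^2, uv, uw, v^2, vw, w^2\right)^{\!\top}
$$

where $\hat{\mathbf{Q}}$ lifts the homogeneous 3D point $(X,Y,Z,W)^\top$ and $\hat{\mathbf{q}}$ lifts the homogeneous image point $(u,v,w)^\top$ to their second-order monomials. Each 3D–2D correspondence then gives linear constraints on the 60 entries of $\mathbf{P}_{6\times 10}$, so the matrix can be estimated DLT-style by homogeneous least squares and subsequently decomposed into intrinsic and extrinsic parameters.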


Methods in Molecular Biology | 2014

Introduction to Machine Learning

Yalin Bastanlar; Mustafa Özuysal

The machine learning field, which can be briefly defined as enabling computers to make successful predictions using past experiences, has recently seen impressive development, helped by rapid increases in the storage capacity and processing power of computers. Together with many other disciplines, machine learning methods have been widely employed in bioinformatics. The difficulty and cost of biological analyses have led to the development of sophisticated machine learning approaches for this application area. In this chapter, we first review the fundamental concepts of machine learning, such as feature assessment, unsupervised versus supervised learning, and types of classification. Then, we point out the main issues in designing machine learning experiments and evaluating their performance. Finally, we introduce some supervised learning methods.
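As a minimal sketch of the supervised workflow the chapter reviews (split labeled data, train on one portion, evaluate predictions on the held-out rest), here is a toy example; scikit-learn, the iris dataset, and the kNN classifier are our choices for illustration, not prescribed by the chapter:

```python
# Toy supervised-learning workflow: fit a classifier on labeled training
# data ("past experiences") and evaluate its predictions on held-out data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)            # features and class labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)    # any supervised classifier works here
clf.fit(X_tr, y_tr)                          # learn from the training split
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```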


Signal, Image and Video Processing | 2016

A direct approach for object detection with catadioptric omnidirectional cameras

Ibrahim Cinaroglu; Yalin Bastanlar

In this paper, we present an omnidirectional vision-based method for object detection. We first adopt the conventional camera approach that uses sliding windows and histogram of oriented gradients (HOG) features. Then, we describe how the feature extraction step of the conventional approach should be modified for theoretically correct and effective use with omnidirectional cameras. The main steps are the modification of gradient magnitudes using the Riemannian metric and the conversion of gradient orientations to form an omnidirectional sliding window. In this way, we perform object detection directly on the omnidirectional images without converting them to panoramic or perspective images. Our experiments, with synthetic and real images, compare the proposed approach with regular (unmodified) HOG computation on both omnidirectional and panoramic images. Results show that the proposed approach should be preferred.
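A hypothetical sketch of the gradient-modification step described above; the per-pixel weight map standing in for the Riemannian-metric correction is a placeholder, since the actual weights in the paper depend on the mirror geometry and are not reproduced here:

```python
import numpy as np

def omni_hog_gradients(img, metric_weight):
    """Gradient magnitudes and orientations for HOG on an omnidirectional image.

    `metric_weight` is a per-pixel map standing in for the correction the
    paper derives from the camera's Riemannian metric; its exact,
    mirror-dependent form is NOT reproduced here.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy) * metric_weight   # modified gradient magnitudes
    ori = np.arctan2(gy, gx)                 # orientations, later binned along
    return mag, ori                          # the omnidirectional sliding window

# With uniform weights this reduces to ordinary (unmodified) HOG gradients.
mag, ori = omni_hog_gradients(np.random.rand(64, 64), np.ones((64, 64)))
```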


Computer Vision and Image Understanding | 2008

Corner validation based on extracted corner properties

Yalin Bastanlar; Yasemin Yardimci

We developed a method to validate and filter a large set of previously obtained corner points. We derived the necessary relationships between image derivatives and estimates of corner angle, orientation and contrast. Commonly used cornerness measures of the auto-correlation matrix estimates of image derivatives are expressed in terms of these estimated corner properties. A candidate corner is validated if the cornerness score directly obtained from the image is sufficiently close to the cornerness score for an ideal corner with the estimated orientation, angle and contrast. We tested this algorithm on both real and synthetic images and observed that this procedure significantly improves the corner detection rates based on human evaluations. We tested the accuracy of our corner property estimates under various noise conditions. Extracted corner properties can also be used for tasks like feature point matching, object recognition and pose estimation.
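For context, the auto-correlation (structure) matrix referenced in the abstract, together with the Harris measure as one commonly used cornerness score of this family (standard definitions; not necessarily the paper's exact validation formula), is:

$$
\mathbf{M} \;=\; \sum_{(x,y)\in W}\begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix},
\qquad
R \;=\; \det\mathbf{M} - k\,(\operatorname{tr}\mathbf{M})^2, \quad k \approx 0.04\text{--}0.06
$$

The validation idea is then to compare the score $R$ measured at a candidate corner against the score an ideal corner with the estimated angle, orientation, and contrast would produce, and to accept the candidate only when the two are sufficiently close.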


Signal Processing and Communications Applications Conference | 2014

A direct approach for human detection with catadioptric omnidirectional cameras

Ibrahim Cinaroglu; Yalin Bastanlar

This paper presents an omnidirectional vision-based solution for detecting human beings. We first go through the conventional sliding-window approaches for human detection. Then, we describe how the feature extraction step of the conventional approaches should be modified for theoretically correct and effective use with omnidirectional cameras. In this way, we perform human detection directly on the omnidirectional images without converting them to panoramic or perspective images. Our experiments, with both synthetic and real images, show that the proposed approach produces successful results.
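For reference, a bare-bones version of the conventional sliding-window stage the paper starts from (our sketch; `score_fn` is a hypothetical stand-in for HOG feature extraction plus a trained classifier, and the paper replaces these rectangular windows with omnidirectional ones):

```python
import numpy as np

def sliding_window_detect(img, score_fn, win=(128, 64), stride=8, thresh=0.0):
    """Conventional rectangular sliding-window detection loop (illustrative).

    Returns (x, y, w, h, score) for every window whose classifier score
    exceeds the threshold; non-maximum suppression would normally follow.
    """
    H, W = img.shape[:2]
    h, w = win
    hits = []
    for y in range(0, H - h + 1, stride):
        for x in range(0, W - w + 1, stride):
            s = score_fn(img[y:y + h, x:x + w])
            if s > thresh:
                hits.append((x, y, w, h, s))
    return hits

# Toy usage with a dummy scorer on a random image.
hits = sliding_window_detect(np.random.rand(240, 320), lambda p: p.mean() - 0.5)
```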


Image and Vision Computing | 2012

Multi-view structure-from-motion for hybrid camera scenarios

Yalin Bastanlar; Alptekin Temizel; Yasemin Yardimci; Peter F. Sturm

We describe a pipeline for structure-from-motion (SfM) with mixed camera types, namely omnidirectional and perspective cameras. For the steps of this pipeline, we propose new approaches or adapt existing perspective camera methods to make the pipeline effective and automatic. We model our cameras of different types with the sphere camera model. To match feature points, we describe a preprocessing algorithm which significantly increases scale invariant feature transform (SIFT) matching performance for hybrid image pairs. With this approach, automatic point matching between omnidirectional and perspective images is achieved. We robustly estimate the hybrid fundamental matrix with the obtained point correspondences. We introduce normalization matrices for lifted coordinates so that normalization and denormalization can be performed linearly for omnidirectional images. We evaluate the alternatives for estimating camera poses in hybrid pairs. A weighting strategy is proposed for iterative linear triangulation which improves the structure estimation accuracy. Following the addition of multiple perspective and omnidirectional images to the structure, we perform sparse bundle adjustment on the estimated structure by adapting it to use the sphere camera model. Demonstrations of the end-to-end multi-view SfM pipeline with real images of mixed camera types are presented.

Highlights:
- We describe a pipeline to perform structure-from-motion with mixed camera types.
- The sphere camera model is used throughout the pipeline for different camera types.
- Demonstrations of the proposed approach in real-world scenarios are presented.
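One consistent reading of the hybrid epipolar geometry sketched in the abstract above (our notation, and a sketch only; the matrix dimensions are an assumption): lifting the omnidirectional image point to its second-order monomials makes the constraint linear,

$$
\mathbf{q}_p^{\top}\,\mathbf{F}_{3\times 6}\,\hat{\mathbf{q}}_o = 0, \qquad
\hat{\mathbf{q}}_o = \left(u^2,\, uv,\, uw,\, v^2,\, vw,\, w^2\right)^{\!\top}
$$

where $\mathbf{q}_p$ is a perspective image point, $\hat{\mathbf{q}}_o$ the lifted omnidirectional point $(u,v,w)^\top$, and $\mathbf{F}_{3\times 6}$ the hybrid fundamental matrix. The normalization matrices mentioned in the abstract act directly on the lifted coordinates, so the usual normalized-DLT estimation from point correspondences remains linear.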


Information Technology Interfaces | 2007

User Behaviour in Web-Based Interactive Virtual Tours

Yalin Bastanlar

In this work, user behaviour characteristics were investigated for a Web-based virtual tour application in which 360° panoramic images were used. The user has several options for navigating the museum (an interactive floor plan, links in the images, and a pull-down menu). Written and audio information about the sections visited, detailed information for some artworks, and several control functions are provided on the webpage. Fifteen participants undertook the usability test and filled out a post-experiment questionnaire. The main research questions were: Which navigation option is preferred? At what rate are the written information area, the audio information option, and the extra artwork information used? Which means of control (mouse, keyboard, panel buttons) is preferred? Results showed that the floor plan is the most preferred way of changing location and the pull-down menu the least preferred. Another finding is that the mouse is the most preferred means of control.


International Conference on Pattern Recognition | 2010

Effective Structure-from-Motion for Hybrid Camera Systems

Yalin Bastanlar; Alptekin Temizel; Yasemin Yardimci; Peter F. Sturm

We describe a pipeline for structure-from-motion with mixed camera types, namely omnidirectional and perspective cameras. The steps of the pipeline can be summarized as calibration, point matching, pose estimation, triangulation, and bundle adjustment. For these steps, we either propose improved methods or modify existing perspective camera methods to make the pipeline more effective and automatic when employed for hybrid camera systems.


International Conference on Intelligent Transportation Systems | 2015

Combining Shape-Based and Gradient-Based Classifiers for Vehicle Classification

Hakki Can Karaimer; Ibrahim Cinaroglu; Yalin Bastanlar

In this paper, we present our work on vehicle classification with omnidirectional cameras. In particular, we investigate whether the combined use of shape-based and gradient-based classifiers outperforms the individual classifiers. For shape-based classification, we extract features from the silhouettes in the omnidirectional video frames, obtained after background subtraction. Classification is performed with the kNN (k-Nearest Neighbors) method, which has commonly been used in shape-based vehicle classification studies in the past. For gradient-based classification, we employ HOG (Histogram of Oriented Gradients) features. Instead of searching the whole video frame, we extract the features in the region located by the foreground silhouette. We use SVM (Support Vector Machines) as the classifier, since HOG+SVM is a commonly used pair in visual object detection. The vehicle types we work on are motorcycle, car, and van (minibus). In the experiments, we first analyze the performances of the shape-based and HOG-based classifiers separately. Then, we analyze the performance of the combined classifier, where the two classifiers are fused at the decision level. Results show that the combined classifier is superior to the individual classifiers.
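A minimal sketch of decision-level fusion of the two classifiers (our illustration; the mixing weight `w` is hypothetical, and the paper's exact fusion rule is not reproduced here):

```python
import numpy as np

def fuse_decisions(p_shape, p_hog, w=0.5):
    """Fuse per-class scores from the shape-based (kNN) and gradient-based
    (HOG+SVM) classifiers by a weighted sum, then pick the best class.

    `w` is a hypothetical mixing weight; the paper's actual decision-level
    fusion rule may differ.
    """
    fused = w * np.asarray(p_shape, float) + (1 - w) * np.asarray(p_hog, float)
    return int(np.argmax(fused))

# Toy usage with classes 0=motorcycle, 1=car, 2=van (minibus).
print(fuse_decisions([0.2, 0.5, 0.3], [0.1, 0.3, 0.6]))  # -> 2 (van)
```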

Collaboration


Yalin Bastanlar's top co-authors and their affiliations.

Top Co-Authors

Yasemin Yardimci

Middle East Technical University

Alptekin Temizel

Middle East Technical University

Ipek Baris

İzmir Institute of Technology

Peter F. Sturm

INRIA Grenoble Rhône-Alpes

Ibrahim Cinaroglu

İzmir Institute of Technology

Chris Russell

Queen Mary University of London

Paul Sturgess

Oxford Brookes University