
Publication


Featured research published by Fatih Porikli.


IEEE Transactions on Intelligent Transportation Systems | 2016

Fast Detection of Multiple Objects in Traffic Scenes With a Common Detection Framework

Qichang Hu; Sakrapee Paisitkriangkrai; Chunhua Shen; Anton van den Hengel; Fatih Porikli

Traffic scene perception (TSP) aims to extract accurate real-time on-road environment information, which involves three phases: detection of objects of interest, recognition of detected objects, and tracking of objects in motion. Since recognition and tracking often rely on the results from detection, the ability to detect objects of interest effectively plays a crucial role in TSP. In this paper, we focus on three important classes of objects: traffic signs, cars, and cyclists. We propose to detect all three object classes in a single learning-based detection framework. The proposed framework consists of a dense feature extractor and detectors for the three classes. Once the dense features have been extracted, they are shared by all detectors. The advantage of using one common framework is that detection is much faster, since the dense features need to be evaluated only once in the testing phase. In contrast, most previous works have designed specific detectors using different features for each of these three classes. To enhance the robustness of the features to noise and image deformations, we introduce spatially pooled features as part of the aggregated channel features. To further improve generalization performance, we propose an object subcategorization method as a means of capturing the intraclass variation of objects. We experimentally demonstrate the effectiveness and efficiency of the proposed framework in three detection applications: traffic sign detection, car detection, and cyclist detection. The proposed framework achieves competitive performance with state-of-the-art approaches on several benchmark datasets.
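The shared-feature idea can be illustrated in a few lines: dense channel features are computed once per image, and every class-specific detector then scores the same feature map. The Python sketch below is a minimal illustration under that reading; the toy feature channels, the linear window scorer, and all function names are simplifications standing in for the paper's aggregated channel features and boosted detectors, not the authors' implementation.

    # Minimal sketch: compute dense features once, share them across
    # all class-specific detectors. Illustrative only.
    import numpy as np

    def dense_channel_features(image):
        """Toy stand-in for aggregated channel features: intensity plus
        gradient magnitude, mean-pooled over 4x4 cells."""
        gy, gx = np.gradient(image.astype(np.float64))
        mag = np.sqrt(gx ** 2 + gy ** 2)
        channels = np.stack([image, mag], axis=-1)          # H x W x C
        h, w, c = channels.shape
        h4, w4 = h // 4, w // 4
        pooled = channels[:h4 * 4, :w4 * 4].reshape(h4, 4, w4, 4, c)
        return pooled.mean(axis=(1, 3))                     # spatial pooling

    def sliding_scores(features, weights, win=8):
        """Score every window with one detector (a linear scorer here,
        where the paper uses boosted decision trees)."""
        h, w, c = features.shape
        scores = np.full((h - win + 1, w - win + 1), -np.inf)
        for y in range(h - win + 1):
            for x in range(w - win + 1):
                patch = features[y:y + win, x:x + win].ravel()
                scores[y, x] = patch @ weights
        return scores

    rng = np.random.default_rng(0)
    image = rng.random((128, 160))
    shared = dense_channel_features(image)   # evaluated once ...
    detectors = {name: rng.standard_normal(8 * 8 * shared.shape[2])
                 for name in ("traffic_sign", "car", "cyclist")}
    for name, w in detectors.items():        # ... shared by all detectors
        print(name, sliding_scores(shared, w).max())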


Workshop on Applications of Computer Vision | 2015

Material Classification on Symmetric Positive Definite Manifolds

Masoud Faraki; Mehrtash Tafazzoli Harandi; Fatih Porikli

This paper tackles the problem of categorizing materials and textures by exploiting second-order statistics. To this end, we introduce the Extrinsic Vector of Locally Aggregated Descriptors (E-VLAD), a method to combine local and structured descriptors into a unified vector representation where each local descriptor is a Covariance Descriptor (CovD). In doing so, we make use of an accelerated method of obtaining a visual codebook where each atom is itself a CovD. We then introduce an efficient way of aggregating local CovDs into a vector representation. Our method can be understood as an extrinsic extension of the highly acclaimed Vector of Locally Aggregated Descriptors [17] (or VLAD) to CovDs. We show that the proposed method is extremely powerful in classifying materials/textures and can outperform complex machinery even with simple classifiers.
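One common extrinsic route from SPD matrices to vectors is the log-Euclidean map (matrix logarithm followed by vectorization), after which standard VLAD aggregation applies. The Python sketch below illustrates that pipeline on synthetic covariance descriptors; it is a simplified reading of the extrinsic idea, not the paper's exact E-VLAD formulation (which, for instance, builds its codebook directly from CovDs).

    # Simplified VLAD over covariance descriptors via a log-Euclidean
    # embedding. Data and codebook construction are illustrative.
    import numpy as np
    from scipy.linalg import logm
    from sklearn.cluster import KMeans

    def cov_descriptor(patch_features):
        """CovD of a patch: covariance of its per-pixel feature vectors,
        ridge-regularized to stay SPD."""
        d = patch_features.shape[1]
        return np.cov(patch_features, rowvar=False) + 1e-6 * np.eye(d)

    def log_vec(spd):
        """Log-Euclidean embedding: matrix log, then the upper triangle."""
        L = logm(spd).real
        return L[np.triu_indices_from(L)]

    rng = np.random.default_rng(0)
    # 200 local patches, each with 50 pixels of 5-dim features
    covs = [cov_descriptor(rng.standard_normal((50, 5))) for _ in range(200)]
    X = np.stack([log_vec(c) for c in covs])        # embedded descriptors

    K = 8
    codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)
    assign = codebook.predict(X)

    # VLAD: per-cluster sum of residuals to the assigned centroid
    vlad = np.zeros((K, X.shape[1]))
    for k in range(K):
        members = X[assign == k]
        if len(members):
            vlad[k] = (members - codebook.cluster_centers_[k]).sum(axis=0)
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))    # power normalization
    vlad /= np.linalg.norm(vlad) + 1e-12            # L2 normalization
    print(vlad.shape)                                # (K * embedding dim,)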


International Conference on Image Processing | 2016

Semantic context and depth-aware object proposal generation

Haoyang Zhang; Xuming He; Fatih Porikli; Laurent Kneip

This paper presents a context-aware object proposal generation method for stereo images. Unlike existing methods, which mostly rely on image-based or depth features to generate object candidates, we propose to incorporate additional geometric and high-level semantic context information into the proposal generation. Our method starts from an initial object proposal set and encodes objectness for each proposal using three types of features: a CNN feature, a geometric feature computed from the dense depth map, and a semantic context feature from pixel-wise scene labeling. We then train an efficient random forest classifier to re-rank the initial proposals and a set of linear regressors to fine-tune the location of each proposal. Experiments on the KITTI dataset show that our approach significantly improves the quality of the initial proposals and achieves state-of-the-art performance using only a fraction of the original object candidates.
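A hedged sketch of the re-rank-and-refine stage described above: a random forest scores each proposal's objectness from a concatenated feature vector, and one linear regressor per box coordinate adjusts the location. The features, labels and regression targets below are synthetic placeholders standing in for the CNN, geometric and semantic context features; the regressor choice (ridge) is an assumption, not the paper's exact model.

    # Re-rank proposals with a random forest, refine boxes with
    # per-coordinate linear regressors. Synthetic data throughout.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n, d = 1000, 64
    feats = rng.standard_normal((n, d))       # per-proposal feature vector
    is_object = rng.integers(0, 2, n)         # 1 if proposal covers an object
    box_offsets = rng.standard_normal((n, 4)) # target (dx, dy, dw, dh)

    ranker = RandomForestClassifier(n_estimators=100, random_state=0)
    ranker.fit(feats, is_object)

    # One linear regressor per box coordinate, trained on positives only
    pos = is_object == 1
    regressors = [Ridge(alpha=1.0).fit(feats[pos], box_offsets[pos, i])
                  for i in range(4)]

    # Re-rank new proposals by objectness, then fine-tune their locations
    new_feats = rng.standard_normal((10, d))
    scores = ranker.predict_proba(new_feats)[:, 1]
    order = np.argsort(-scores)               # best proposals first
    offsets = np.stack([r.predict(new_feats) for r in regressors], axis=1)
    print(order[:5], offsets.shape)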


Archive | 2018

Museum Exhibit Identification Challenge for the Supervised Domain Adaptation and Beyond

Piotr Koniusz; Yusuf Tas; Hongguang Zhang; Mehrtash Harandi; Fatih Porikli; Rui Zhang

We study an open problem of artwork identification and propose a new dataset dubbed Open Museum Identification Challenge (Open MIC). It contains photos of exhibits captured in 10 distinct exhibition spaces of several museums, which showcase paintings, timepieces, sculptures, glassware, relics, science exhibits, natural history pieces, ceramics, pottery, tools and indigenous crafts. The goal of Open MIC is to stimulate research in domain adaptation, egocentric recognition and few-shot learning by providing a testbed complementary to the famous Office dataset, which reaches ∼90% accuracy. To form our dataset, we captured a number of images per art piece with a mobile phone and wearable cameras to form the source and target data splits, respectively. To achieve robust baselines, we build on a recent approach that aligns per-class scatter matrices of the source and target CNN streams. Moreover, we exploit the positive definite nature of such representations by using end-to-end Bregman divergences and the Riemannian metric. We present baselines such as training/evaluation per exhibition and training/evaluation on the combined set covering 866 exhibit identities. As each exhibition poses distinct challenges, e.g., quality of lighting, motion blur, occlusions, clutter, viewpoint and scale variations, rotations, glares, transparency, non-planarity and clipping, we break down results w.r.t. these factors.
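The scatter-alignment baseline can be sketched compactly: compute a per-class scatter matrix for the source and the target feature stream, then penalize their discrepancy. The Python sketch below uses a squared Frobenius norm for that discrepancy, whereas the paper additionally explores Bregman divergences and the Riemannian metric on SPD matrices; all shapes and data here are illustrative placeholders.

    # Per-class scatter alignment between source and target streams,
    # with a Frobenius-norm discrepancy. Illustrative data only.
    import numpy as np

    def class_scatter(features, labels, cls):
        """Scatter matrix of one class: centered second-order statistic."""
        F = features[labels == cls]
        F = F - F.mean(axis=0, keepdims=True)
        return F.T @ F / max(len(F) - 1, 1)

    rng = np.random.default_rng(0)
    d, n_cls = 32, 5
    src_feats = rng.standard_normal((300, d))        # source-stream features
    tgt_feats = rng.standard_normal((300, d)) * 1.5  # shifted target stream
    src_lbls = rng.integers(0, n_cls, 300)
    tgt_lbls = rng.integers(0, n_cls, 300)

    loss = sum(np.linalg.norm(class_scatter(src_feats, src_lbls, c)
                              - class_scatter(tgt_feats, tgt_lbls, c),
                              'fro') ** 2
               for c in range(n_cls))
    print(f"scatter alignment loss: {loss:.2f}")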


International Conference on Pattern Recognition | 2016

Finetuning Convolutional Neural Networks for visual aesthetics

Yeqing Wang; Yi Li; Fatih Porikli

Inferring the aesthetic quality of images is a challenging computer vision task due to its subjective and conceptual nature. Most image aesthetics evaluation approaches have focused on designing handcrafted features, and only a few have adopted learning of relevant and imperative characteristics in a data-driven manner. In this paper, we propose to attune Convolutional Neural Networks (CNNs) for image aesthetics. Unlike previous deep learning based techniques, we employ pretrained models, namely AlexNet [12] and the 16-layer VGGNet [20], and calibrate them to estimate visual aesthetic quality. This enables automatically exploiting the information inherent in much larger and more diversified image datasets. We tested our methods on the AVA and CUHKPQ image aesthetics datasets with two different training-testing partitions, and compared the performance using both local and contextual information. Experimental results suggest that our strategy is robust, effective and superior to state-of-the-art approaches.
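A minimal PyTorch sketch of this calibration step, assuming torchvision's pretrained VGG16: the final ImageNet layer is swapped for a two-way aesthetic head (high vs. low quality) and the network is fine-tuned. The hyperparameters and the random batch standing in for AVA data are placeholders, not the paper's settings.

    # Fine-tune an ImageNet-pretrained VGG16 for aesthetic quality.
    # A sketch with placeholder data and hyperparameters.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    model.classifier[6] = nn.Linear(4096, 2)   # two-way aesthetic head

    # Fine-tune the whole network with a small learning rate; freezing
    # the convolutional layers instead is a common, cheaper alternative.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    model.train()
    images = torch.randn(4, 3, 224, 224)       # stand-in for an AVA batch
    labels = torch.randint(0, 2, (4,))         # 1 = high aesthetic quality
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"batch loss: {loss.item():.3f}")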


Archive | 2008

Computer-implemented method for constructing a classifier from training data for detecting moving objects

Fatih Porikli; Oncel Tuzel


Archive | 2012

More About VLAD: A Leap from Euclidean to Riemannian Manifolds Supplementary Material

Masoud Faraki; Mehrtash Harandi; Fatih Porikli


Archive | 2008

Computer-implemented method for constructing a classifier from training data for the detection of moving objects

Fatih Porikli; Oncel Tuzel


Archive | 2008

Computer-implemented method for constructing a classifier from training data and detecting moving objects in test data using the classifier

Fatih Porikli; Oncel Tuzel


Archive | 2007

Computerized method for object tracking in a frame sequence

Fatih Porikli; Oncel Tuzel

Collaboration


Dive into Fatih Porikli's collaborations.

Top Co-Authors

Oncel Tuzel
Mitsubishi Electric Research Laboratories

Mehrtash Harandi
Australian National University

Hongguang Zhang
Commonwealth Scientific and Industrial Research Organisation

Laurent Kneip
Australian National University

Masoud Faraki
Isfahan University of Technology

Qichang Hu
University of Adelaide