Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Berkan Solmaz is active.

Publication


Featured research published by Berkan Solmaz.


Computer Vision and Pattern Recognition | 2016

Evaluation of Feature Channels for Correlation-Filter-Based Visual Object Tracking in Infrared Spectrum

Erhan Gundogdu; Aykut Koç; Berkan Solmaz; Riad I. Hammoud; A. Aydin Alatan

Correlation filters for visual object tracking in visible imagery have been well studied. Most correlation-filter-based methods use either raw image intensities or feature maps of gradient orientations or color channels. However, well-known features designed for the visible spectrum may not be ideal for infrared object tracking, since the infrared and visible spectra have dissimilar characteristics in general. We assess the performance of two state-of-the-art correlation-filter-based object tracking methods on the Linköping Thermal InfraRed (LTIR) dataset of medium-wave and long-wave infrared videos, using deep convolutional neural network (CNN) features as well as traditional hand-crafted descriptors. The deep CNN features are trained on an infrared dataset consisting of 16K objects for a supervised classification task. The highest performance in terms of the overlap metric is achieved when these deep CNN features are utilized in a correlation-filter-based tracker.
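The core of any correlation-filter-based tracker, regardless of the feature channel fed into it, fits in a few lines. The following is a minimal single-channel MOSSE-style sketch in NumPy, not the specific trackers evaluated in the paper; the feature extraction step (raw intensities, gradient orientations, or deep CNN activations) is assumed to happen elsewhere, and the regularization weight lam is an assumed hyperparameter.

import numpy as np

def train_filter(feat, response, lam=1e-3):
    # feat: 2-D feature channel cropped around the target.
    # response: desired Gaussian-shaped output of the same size.
    F = np.fft.fft2(feat)
    G = np.fft.fft2(response)
    # Closed-form ridge-regression solution in the Fourier domain.
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(feat, H):
    # Correlate a new feature patch with the learned filter and locate
    # the response peak; its offset from the response centre gives the
    # estimated target displacement.
    resp = np.real(np.fft.ifft2(np.fft.fft2(feat) * H))
    return np.unravel_index(np.argmax(resp), resp.shape)

Multi-channel variants sum the numerator and denominator over channels before the division; deep CNN features simply supply more such channels.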


Asian Conference on Computer Vision | 2016

MARVEL: A Large-Scale Image Dataset for Maritime Vessels

Erhan Gundogdu; Berkan Solmaz; Veysel Yucesoy; Aykut Koç

Fine-grained visual categorization has recently received great attention as the volumes of labelled datasets for classification of specific objects, such as cars, bird species, and aircraft, have been increasing. The collection of large datasets has helped vision-based classification approaches and led to significant improvements in the performance of state-of-the-art methods. Visual classification of maritime vessels is another important task, assisting naval security and surveillance applications. In this work, we introduce a large-scale image dataset for maritime vessels, consisting of 2 million user-uploaded images and their attributes, including vessel identity, type, photograph category, and year built, collected from a community website. We categorize the images into 109 vessel type classes and construct 26 superclasses by combining heavily populated classes with a semi-automatic clustering scheme. For the analysis of our dataset, extensive experiments have been performed, involving four potentially useful applications: vessel classification, verification, retrieval, and recognition. We report encouraging results for each application. The introduced dataset is publicly available.
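One way to realize the semi-automatic superclass construction described above is to cluster per-class feature prototypes and let an annotator review the resulting groups. The sketch below only illustrates that idea with scikit-learn's AgglomerativeClustering on hypothetical class-mean feature vectors; the actual grouping criteria used for MARVEL are not reproduced here.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def propose_superclasses(class_means, n_superclasses=26):
    # class_means: (n_classes, feat_dim) mean feature vector per fine class.
    # Cluster the prototypes; an annotator then reviews/merges the proposals.
    clusterer = AgglomerativeClustering(n_clusters=n_superclasses)
    return clusterer.fit_predict(class_means)  # superclass id per fine class

# Hypothetical usage: 109 vessel-type classes with 512-d mean features.
assignments = propose_superclasses(np.random.randn(109, 512))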


IET Computer Vision | 2018

Fine-grained recognition of maritime vessels and land vehicles by deep feature embedding

Berkan Solmaz; Erhan Gundogdu; Veysel Yucesoy; Aykut Koç; A. Aydin Alatan

Recent advances in large-scale image and video analysis have empowered the potential capabilities of visual surveillance systems. In particular, deep learning-based approaches bring substantial benefits in solving certain computer vision problems such as fine-grained object recognition. Here, the authors mainly concentrate on classification and identification of maritime vessels and land vehicles, which are the key constituents of visual surveillance systems. Employing publicly available data sets for maritime vessels and land vehicles, the authors aim to improve visual recognition. Specifically, the authors focus on five tasks regarding visual recognition: coarse-grained classification, fine-grained classification, coarse-grained retrieval, fine-grained retrieval, and verification. To increase the performance in these tasks, the authors utilise a multi-task learning framework and present a novel loss function which simultaneously considers deep feature learning and classification by exploiting the available hierarchical labels of individual samples and the global statistics of distances between the data pairs. The authors observe that the proposed multi-task learning model improves the fine-grained recognition performance on the MARVEL and Stanford Cars data sets, compared to training a model targeting a single recognition task.
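The exact loss proposed in the paper is not reproduced here; the PyTorch sketch below only illustrates the general shape of such a multi-task objective: fine-grained and coarse-grained classification terms derived from the hierarchical labels, plus a contrastive-style distance term over pairs of embeddings. All layer sizes, the margin, and the weighting factor are assumed values.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    # Shared embedding feeding coarse and fine classifiers (illustrative).
    def __init__(self, feat_dim=512, n_fine=109, n_coarse=26):
        super().__init__()
        self.embed = nn.Linear(feat_dim, 256)
        self.fine = nn.Linear(256, n_fine)
        self.coarse = nn.Linear(256, n_coarse)

    def forward(self, x):
        z = F.normalize(self.embed(x), dim=1)
        return z, self.fine(z), self.coarse(z)

def multitask_loss(z, fine_logits, coarse_logits, y_fine, y_coarse,
                   margin=0.5, w_pair=0.1):
    # Classification terms exploit the hierarchical (coarse + fine) labels.
    loss = F.cross_entropy(fine_logits, y_fine) \
         + F.cross_entropy(coarse_logits, y_coarse)
    # Contrastive-style term over all pairs in the batch: pull same-class
    # pairs together, push different-class pairs beyond a margin.
    d = torch.cdist(z, z)
    same = (y_fine[:, None] == y_fine[None, :]).float()
    pair = same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)
    return loss + w_pair * pair.mean()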


IPSJ Transactions on Computer Vision and Applications | 2017

Generic and attribute-specific deep representations for maritime vessels

Berkan Solmaz; Erhan Gundogdu; Veysel Yucesoy; Aykut Koç

Fine-grained visual categorization has recently received great attention as the volumes of labeled datasets for classification of specific objects, such as cars, bird species, and aircraft, have been increasing. The availability of large datasets led to significant performance improvements in several vision-based classification tasks. Visual classification of maritime vessels is another important task, assisting naval security and surveillance applications. We introduced MARVEL, a large-scale image dataset for maritime vessels, consisting of 2 million user-uploaded images and their various attributes, including vessel identity, type, category, year built, length, and tonnage, collected from a community website. The images were categorized into vessel type classes and also into superclasses defined by combining semantically similar classes, following a semi-automatic clustering scheme. For the analysis of the presented dataset, extensive experiments have been performed, involving several potentially useful applications: vessel type classification, identity verification, retrieval, and identity recognition with and without prior vessel type knowledge. Furthermore, we attempted interesting problems of visual marine surveillance, such as predicting and classifying maritime vessel attributes, namely length, summer deadweight, draught, and gross tonnage, by solely interpreting the visual content in the wild, where no additional cues such as scale, orientation, or location are provided. By utilizing generic and attribute-specific deep representations for maritime vessels, we obtained promising results for the aforementioned applications.
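Predicting continuous vessel attributes such as length or gross tonnage from an image amounts to attaching attribute-specific heads to a shared visual backbone. The PyTorch sketch below is a hypothetical illustration of that setup, not the architecture used in the paper; the feature dimension, head design, and the choice of an L1 loss are assumptions.

import torch
import torch.nn as nn

class AttributeHeads(nn.Module):
    # Shared image features feeding one regressor per attribute.
    def __init__(self, feat_dim=2048,
                 attributes=("length", "deadweight", "draught", "gross_tonnage")):
        super().__init__()
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, 1) for name in attributes})

    def forward(self, features):
        # features: (batch, feat_dim) activations from a CNN backbone.
        return {name: head(features).squeeze(1)
                for name, head in self.heads.items()}

# Training would minimize, e.g., an L1 loss per attribute:
# loss = sum(nn.functional.l1_loss(pred[a], target[a]) for a in pred)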


Electro-Optical Remote Sensing XI | 2017

Fine-grained visual marine vessel classification for coastal surveillance and defense applications

Aykut Koç; Berkan Solmaz; Erhan Gundogdu; Veysel Yucesoy; Kaan Karaman

The need for automated visual content analysis has substantially increased due to the presence of a large number of images captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging problem than generic image categorization due to the high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food, and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial, and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates, and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune the deep representations on marine vessel images. Experiments address two scenarios: classification and verification of naval marine vessels. The classification experiment targets coarse categorization as well as learning models of the fine categories. The verification experiment involves identification of specific naval vessels by revealing whether a pair of images belongs to the same vessel with the help of the learned deep representations. Having obtained promising performance, we believe the presented capabilities would be essential components of future coastal and on-board surveillance systems.
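The pipeline described above, off-the-shelf deep representations followed by support vector machines, can be summarized compactly. The sketch below uses scikit-learn's LinearSVC on pre-extracted feature vectors; feature extraction (e.g., activations from a pretrained CNN) is assumed to happen elsewhere, and the C value is an assumed hyperparameter.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_vessel_classifier(deep_features, labels, C=1.0):
    # deep_features: (n_images, feat_dim) CNN activations.
    # labels: coarse or fine vessel categories.
    clf = make_pipeline(StandardScaler(), LinearSVC(C=C))
    clf.fit(deep_features, labels)
    return clf

# Hypothetical usage with 4096-d features and the four coarse categories.
X = np.random.randn(200, 4096)
y = np.random.choice(["naval", "civil", "commercial", "service"], size=200)
model = train_vessel_classifier(X, y)
print(model.predict(X[:5]))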


Signal Processing and Communications Applications Conference | 2017

Deep distance metric learning for maritime vessel identification

Erhan Gundogdu; Berkan Solmaz; Aykut Koç; Veysel Yucesoy; A. Aydin Alatan

This paper addresses the problem of maritime vessel identification by exploiting state-of-the-art techniques of distance metric learning and deep convolutional neural networks, since vessels are the key constituents of marine surveillance. In order to increase the performance of visual vessel identification, we propose a joint learning framework which considers both a classification and a distance metric learning cost function. The proposed method utilizes quadruplet samples from a diverse image dataset to learn the ranking of distances for hierarchical levels of labeling. It performs favorably on the vessel identification task against the conventional use of neuron activations toward the final layers of classification networks, achieving 60 percent vessel identification accuracy for 3965 different vessels without sacrificing vessel type classification accuracy.
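The abstract describes ranking distances over quadruplets drawn from hierarchical labels (e.g., same vessel, same type but different vessel, different type). The PyTorch sketch below is only an illustrative quadruplet ranking loss under that reading, with assumed margins; it is not the exact formulation of the paper.

import torch
import torch.nn.functional as F

def quadruplet_ranking_loss(anchor, positive, same_type, other_type,
                            m1=0.2, m2=0.4):
    # Encourage d(anchor, positive) < d(anchor, same_type) < d(anchor, other_type).
    # anchor/positive: embeddings of the same vessel identity.
    # same_type: a different vessel of the same type.
    # other_type: a vessel of a different type.
    d_pos = F.pairwise_distance(anchor, positive)
    d_type = F.pairwise_distance(anchor, same_type)
    d_neg = F.pairwise_distance(anchor, other_type)
    # Two hinge terms impose the desired ordering of the distances.
    loss = F.relu(d_pos - d_type + m1) + F.relu(d_type - d_neg + m2)
    return loss.mean()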


Signal Processing and Communications Applications Conference | 2017

Automated visual classification of indoor scenes and architectural styles

Berkan Solmaz; Veysel Yucesoy; Aykut Koç

The ability to automatically categorize the large number of new images being uploaded to real estate, furniture, and decoration websites, together with personalized search functionality, would be a great convenience for users. In this study, types and architectural styles of indoor scenes are modeled using visual descriptors of different structures. The performance of the learned models is quantitatively measured on useful applications such as image classification and retrieval.
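For the retrieval application mentioned above, a simple nearest-neighbour search over visual descriptors already conveys the idea. The sketch below ranks a gallery of indoor-scene descriptors by cosine similarity to a query; the descriptors themselves, whichever structure they come from, are assumed to be extracted beforehand.

import numpy as np

def retrieve(query_desc, gallery_descs, top_k=5):
    # query_desc: (feat_dim,) descriptor of the query image.
    # gallery_descs: (n_images, feat_dim) descriptors of the gallery.
    q = query_desc / np.linalg.norm(query_desc)
    g = gallery_descs / np.linalg.norm(gallery_descs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarities
    return np.argsort(-sims)[:top_k]  # most similar first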


Emerging Imaging and Sensing Technologies for Security and Defence II | 2017

Single-particle imaging for biosensor applications

M. Selim Ünlü; Berkan Solmaz; Mustafa Yorulmaz; Elif Ç. Seymour; Aykut Koç; Çağatay Işıl; Celalettin Yurdakul

Current state-of-the-art technology for in-vitro diagnostics employs laboratory tests such as ELISA, which consist of a multi-step test procedure and give results in an analog format. Results of these tests are interpreted by the color change in a set of diluted samples in a multi-well plate. However, detection of minute changes in the color poses challenges and can lead to false interpretations. Instead, a technique that allows individual counting of specific binding events would be useful to overcome such challenges. Digital imaging has recently been applied to diagnostics applications. Surface plasmon resonance (SPR) is one of the techniques allowing quantitative measurements; however, its limit of detection is on the order of nM, whereas the detection limit currently required, which is already achieved with analog techniques, is around pM. Optical techniques that are simple to implement and can offer better sensitivities have great potential to be used in medical diagnostics. Interference microscopy is one of the tools that has been investigated over the years in the optics field. Most of these studies have been performed in a confocal geometry, observing each individual nanoparticle separately. Here, we achieve wide-field imaging of individual nanoparticles over a large field of view (~166 μm × 250 μm) on a micro-array based sensor chip in a fraction of a second. We tested the sensitivity of our technique on dielectric nanoparticles because they exhibit optical properties similar to viruses and cells. We can detect non-resonant dielectric polystyrene nanoparticles of 100 nm. Moreover, we apply post-processing to further enhance visibility.


Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies | 2017

Deep learning-based fine-grained car make/model classification for visual surveillance

Aykut Koç; Erhan Gundogdu; Berkan Solmaz; Enes Sinan Parildi; Veysel Yucesoy

Fine-grained object recognition is a challenging computer vision problem that has recently been addressed by utilizing deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for a considerably large amount of data. In addition, relatively little annotated data exists for real-world applications such as the recognition of car models in a traffic surveillance system. To this end, we concentrate on the classification of fine-grained car makes and/or models for visual surveillance scenarios with the help of two different domains. First, a large-scale dataset including approximately 900K images is constructed from a website which covers fine-grained car models. Using these labels, a state-of-the-art CNN model is trained on the constructed dataset. The second domain is the set of images collected from a camera integrated into a traffic surveillance system. These images, over 260K in total, are gathered by a license plate detection method running on top of a motion detection algorithm. An appropriately sized image region is cropped around the region of interest provided by the detected license plate location. These images and their labels, covering more than 30 classes, are employed to fine-tune the CNN model already trained on the large-scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this work, the transfer of a model learned on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic domain and a large-scale dataset with available annotations. Our experimental results, both on the validation dataset and in the field, show that the proposed methodology performs favorably against training the CNN model from scratch.
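The abstract spells out the transfer recipe: re-initialize the last two fully-connected layers and then fine-tune the whole network on the surveillance data. A minimal PyTorch sketch of that recipe is given below; the torchvision VGG backbone, the learning rates, and the class count default are stand-ins, not the exact configuration of the paper.

import torch
import torch.nn as nn
from torchvision import models

def prepare_for_finetuning(n_surveillance_classes=30,
                           lr_backbone=1e-4, lr_head=1e-3):
    # The backbone pretrained on the large web-collected car dataset would be
    # loaded here; a torchvision VGG-16 is used purely as a stand-in.
    net = models.vgg16(weights=None)
    # Randomly re-initialize the last two fully-connected layers.
    net.classifier[3] = nn.Linear(4096, 4096)
    net.classifier[6] = nn.Linear(4096, n_surveillance_classes)
    # Fine-tune everything, with a smaller learning rate on the earlier layers.
    params = [
        {"params": net.features.parameters(), "lr": lr_backbone},
        {"params": net.classifier.parameters(), "lr": lr_head},
    ]
    optimizer = torch.optim.SGD(params, lr=lr_backbone, momentum=0.9)
    return net, optimizer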


Signal Processing and Communications Applications Conference | 2018

Variational autoencoders with triplet loss for representation learning

Çağatay Işıl; Berkan Solmaz; Aykut Koç
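No abstract is available for this entry, so the sketch below is only a generic reading of the title: a standard VAE objective (reconstruction plus KL divergence) combined with a triplet margin loss on the latent codes. All architectural choices, dimensions, and weights are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    # Small fully-connected VAE (illustrative dimensions).
    def __init__(self, in_dim=784, z_dim=32):
        super().__init__()
        self.enc = nn.Linear(in_dim, 256)
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def vae_triplet_loss(model, anchor, positive, negative, beta=1.0, w_trip=1.0):
    recon, mu_a, logvar = model(anchor)
    # Standard ELBO terms: reconstruction + KL divergence.
    rec = F.mse_loss(recon, anchor, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu_a.pow(2) - logvar.exp())
    # Triplet margin loss on the latent means of the three inputs.
    mu_p, mu_n = model(positive)[1], model(negative)[1]
    trip = F.triplet_margin_loss(mu_a, mu_p, mu_n, margin=1.0)
    return rec + beta * kl + w_trip * trip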

Collaboration


Dive into Berkan Solmaz's collaborations.

Top Co-Authors

A. Aydin Alatan

Middle East Technical University
