Publication


Featured research published by Neslihan Bayramoglu.


International Conference on Pattern Recognition | 2010

Shape Index SIFT: Range Image Recognition Using Local Features

Neslihan Bayramoglu; A. Aydin Alatan

Range image recognition has gained importance in recent years due to developments in acquiring, displaying, and storing such data. In this paper, we present a novel method for matching range surfaces. Our method utilizes local surface properties and represents the geometry of local regions efficiently. Integrating the Scale Invariant Feature Transform (SIFT) with the shape index (SI) representation of the range images allows matching of surfaces with different scales and orientations. We apply the method to scaled, rotated, and occluded range images and demonstrate its effectiveness by comparison with previous studies.
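
The shape index has a closed form in the principal curvatures; a minimal sketch below computes a shape-index map from a depth image and runs a standard SIFT detector on it. The library choices (scikit-image, OpenCV) and the synthetic surface are illustrative assumptions, not the paper's implementation.

```python
# Sketch: shape-index map of a range image, followed by SIFT keypoints on it.
# Assumes a depth map given as a 2D NumPy array.
import numpy as np
import cv2
from skimage.feature import shape_index

def shape_index_sift(depth_map, sigma=1.0):
    # Shape index in [-1, 1]: a curvature-based description of local surface
    # type (cup, rut, saddle, ridge, cap) that is tolerant to pose changes.
    si = shape_index(depth_map.astype(np.float64), sigma=sigma)
    si = np.nan_to_num(si, nan=0.0)          # flat regions yield NaN

    # Rescale to 8-bit so the standard SIFT detector/descriptor can run on it.
    si_u8 = cv2.normalize(si, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(si_u8, None)
    return keypoints, descriptors

# Toy usage with a synthetic bumpy surface standing in for a range image.
y, x = np.mgrid[0:256, 0:256]
depth = 20 * np.sin(x / 15.0) * np.cos(y / 15.0)
kps, descs = shape_index_sift(depth)
print(len(kps), "keypoints")
```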


International Conference on Biometrics | 2013

CS-3DLBP and geometry based person independent 3D facial action unit detection

Neslihan Bayramoglu; Guoying Zhao; Matti Pietikäinen

The face is a key component in understanding emotions, which play significant roles in many areas from security and entertainment to psychology and education. In this paper, we propose a method to detect facial action units in 3D face data by combining novel geometric properties with a new descriptor based on the Local Binary Pattern (LBP) methodology. The proposed method enables person- and gender-independent facial action unit detection. Decision-level fusion with Random Forest classifiers is used to combine the geometric and LBP-based features. Unlike previous methods, which suffer from the diversity among different persons and normalize features using neutral faces, our method extracts features from a single 3D face scan. We also show that the orientation-based 3D LBP descriptor can be implemented efficiently in terms of size and time without degrading performance. We tested our method on the Bosphorus database and present comparative results with existing methods. Our results outperform those of existing methods, achieving a mean receiver operating characteristic area under the curve of 97.7%.
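
A minimal sketch of the decision-level fusion step described above: one Random Forest per feature type, with the per-class posteriors averaged. The feature arrays are synthetic placeholders; the real geometric and CS-3DLBP features are not reproduced here, and for brevity there is no train/test split.

```python
# Decision-level fusion of two Random Forests, one per feature type,
# for detecting a single facial action unit (present / absent).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400
X_geom = rng.normal(size=(n, 24))    # stand-in for geometric features
X_lbp = rng.normal(size=(n, 59))     # stand-in for CS-3DLBP histograms
y = rng.integers(0, 2, size=n)       # presence/absence of one action unit

rf_geom = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_geom, y)
rf_lbp = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_lbp, y)

# Fuse at the decision level: average the two posterior estimates, then threshold.
p = 0.5 * (rf_geom.predict_proba(X_geom)[:, 1] + rf_lbp.predict_proba(X_lbp)[:, 1])
au_detected = p > 0.5
print(au_detected[:10])
```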


Oncotarget | 2015

Automated tracking of tumor-stroma morphology in microtissues identifies functional targets within the tumor microenvironment for therapeutic intervention

Malin Åkerfelt; Neslihan Bayramoglu; Sean Robinson; Mervi Toriseva; Hannu-Pekka Schukov; Ville Härmä; Johannes Virtanen; Raija Sormunen; Mika Kaakinen; Juho Kannala; Lauri Eklund; Janne Heikkilä

Cancer-associated fibroblasts (CAFs) constitute an important part of the tumor microenvironment and promote invasion via paracrine functions and physical impact on the tumor. Although the importance of including CAFs into three-dimensional (3D) cell cultures has been acknowledged, computational support for quantitative live-cell measurements of complex cell cultures has been lacking. Here, we have developed a novel automated pipeline to model tumor-stroma interplay, track motility and quantify morphological changes of 3D co-cultures, in real-time live-cell settings. The platform consists of microtissues from prostate cancer cells, combined with CAFs in extracellular matrix that allows biochemical perturbation. Tracking of fibroblast dynamics revealed that CAFs guided the way for tumor cells to invade and increased the growth and invasiveness of tumor organoids. We utilized the platform to determine the efficacy of inhibitors in prostate cancer and the associated tumor microenvironment as a functional unit. Interestingly, certain inhibitors selectively disrupted tumor-CAF interactions, e.g. focal adhesion kinase (FAK) inhibitors specifically blocked tumor growth and invasion concurrently with fibroblast spreading and motility. This complex phenotype was not detected in other standard in vitro models. These results highlight the advantage of our approach, which recapitulates tumor histology and can significantly improve cancer target validation in vitro.


International Conference on Pattern Recognition | 2016

Deep learning for magnification independent breast cancer histopathology image classification

Neslihan Bayramoglu; Juho Kannala; Janne Heikkilä

Microscopic analysis of breast tissues is necessary for a definitive diagnosis of breast cancer which is the most common cancer among women. Pathology examination requires time consuming scanning through tissue images under different magnification levels to find clinical assessment clues to produce correct diagnoses. Advances in digital imaging techniques offers assessment of pathology images using computer vision and machine learning methods which could automate some of the tasks in the diagnostic pathology workflow. Such automation could be beneficial to obtain fast and precise quantification, reduce observer variability, and increase objectivity. In this work, we propose to classify breast cancer histopathology images independent of their magnifications using convolutional neural networks (CNNs). We propose two different architectures; single task CNN is used to predict malignancy and multi-task CNN is used to predict both malignancy and image magnification level simultaneously. Evaluations and comparisons with previous results are carried out on BreaKHis dataset. Experimental results show that our magnification independent CNN approach improved the performance of magnification specific model. Our results in this limited set of training data are comparable with previous state-of-the-art results obtained by hand-crafted features. However, unlike previous methods, our approach has potential to directly benefit from additional training data, and such additional data could be captured with same or different magnification levels than previous data.
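
The multi-task idea can be sketched as a shared convolutional trunk with two classification heads trained jointly. The PyTorch snippet below is an illustrative assumption about the setup; the layer sizes and the four magnification classes (40x/100x/200x/400x in BreaKHis) are stand-ins, not the paper's exact architecture.

```python
# Shared trunk with a malignancy head and a magnification head, trained with
# the sum of the two cross-entropy losses.
import torch
import torch.nn as nn

class MultiTaskCNN(nn.Module):
    def __init__(self, n_magnifications=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.malignancy_head = nn.Linear(64, 2)              # benign / malignant
        self.magnification_head = nn.Linear(64, n_magnifications)

    def forward(self, x):
        feats = self.trunk(x)
        return self.malignancy_head(feats), self.magnification_head(feats)

model = MultiTaskCNN()
images = torch.randn(8, 3, 224, 224)
mal_logits, mag_logits = model(images)

# Joint training signal: one loss per task, summed.
mal_labels = torch.randint(0, 2, (8,))
mag_labels = torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(mal_logits, mal_labels) + \
       nn.functional.cross_entropy(mag_logits, mag_labels)
loss.backward()
```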


Bioinformatics and Bioengineering | 2015

Human Epithelial Type 2 cell classification with convolutional neural networks

Neslihan Bayramoglu; Juho Kannala; Janne Heikkilä

Automated cell classification in Indirect Immunofluorescence (IIF) images has the potential to be an important tool in clinical practice and research. This paper presents a framework for classifying Human Epithelial Type 2 cell IIF images using convolutional neural networks (CNNs). Previous state-of-the-art methods report a classification accuracy of 75.6% on a benchmark dataset. We explore different strategies for enhancing, augmenting, and processing training data in a CNN framework for image classification. Our proposed strategy for preparing the training data and for pre-training and fine-tuning the CNN leads to a significant increase in performance over previous approaches; specifically, our method achieves a classification accuracy of 80.25%. Source code and models to reproduce the experiments in the paper are made publicly available.
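
A generic pre-train/fine-tune sketch of the kind of pipeline described above: start from ImageNet weights, replace the classifier for the HEp-2 cell classes, and fine-tune end to end at a small learning rate. The paper's actual framework and network are not specified here; torchvision's ResNet-18 and the six-class assumption are stand-ins.

```python
# Pre-train on natural images (weights shipped with torchvision), then
# fine-tune the whole network on the cell-classification task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CELL_CLASSES = 6          # assumption: six HEp-2 staining-pattern classes

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CELL_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a synthetic batch.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CELL_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```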


European Conference on Computer Vision | 2016

Transfer learning for cell nuclei classification in histopathology images

Neslihan Bayramoglu; Janne Heikkilä

In histopathological image assessment, there is a high demand for fast and precise automatic quantification. Such automation could help find the clinical assessment clues needed to produce correct diagnoses, reduce observer variability, and increase objectivity. Due to its success in other areas, deep learning could be the key method for obtaining clinical acceptance. However, the major bottleneck is training a deep CNN model with a limited amount of training data. One question is of critical importance: can transfer learning and fine-tuning in biomedical image analysis reduce the effort of manual data labeling and still obtain a full deep representation for the target task? In this study, we address this question quantitatively by comparing the performance of transfer learning and learning from scratch for cell nuclei classification. We evaluate four different CNN architectures trained on natural images and facial images.
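
A complementary transfer-learning variant, sketched under assumed names and class counts: freeze a backbone pretrained on natural images and train only a new classifier head on the nuclei classes. This is one end of the transfer-versus-from-scratch spectrum the study compares; the VGG-16 backbone and the four-class count below are hypothetical choices.

```python
# Frozen-backbone transfer learning: only the new final layer is trained.
import torch
import torch.nn as nn
from torchvision import models

NUM_NUCLEI_CLASSES = 4                    # hypothetical number of nucleus types

backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False               # freeze pretrained weights

# Replace only the last classifier layer; it is the sole trainable part.
backbone.classifier[6] = nn.Linear(backbone.classifier[6].in_features,
                                   NUM_NUCLEI_CLASSES)

optimizer = torch.optim.Adam(backbone.classifier[6].parameters(), lr=1e-4)
images = torch.randn(2, 3, 224, 224)
labels = torch.randint(0, NUM_NUCLEI_CLASSES, (2,))
loss = nn.functional.cross_entropy(backbone(images), labels)
loss.backward()
optimizer.step()
```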


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2010

Utilization of spatial information for point cloud segmentation

Oytun Akman; Neslihan Bayramoglu; A. Aydin Alatan; Pieter P. Jonker

Object segmentation plays an important role in computer vision for inferring semantic information. Many applications, such as 3DTV archive systems, 3D/2D model fitting, object recognition, and shape retrieval, depend strongly on the performance of the segmentation process. In this paper we present a new algorithm for object localization and segmentation based on spatial information obtained via a Time-of-Flight (TOF) camera. 3D points obtained with the TOF camera are projected onto the major plane representing the planar surface on which the objects are placed. Afterwards, the most probable regions where an object can be placed are extracted using kernel density estimation, and the 3D points are segmented into objects. Several well-known segmentation algorithms are also tested on the 3D (depth) images.
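
A rough sketch of this pipeline for an (N, 3) point cloud: fit the dominant supporting plane, keep points clearly above it, and locate likely object regions with kernel density estimation. The least-squares plane fit and thresholds are illustrative choices, not the paper's exact procedure.

```python
# Plane-based background removal followed by KDE over the planar footprint.
import numpy as np
from scipy.stats import gaussian_kde

def segment_on_plane(points, height_thresh=0.02):
    # Least-squares plane z = a*x + b*y + c through all points
    # (reasonable when the supporting surface dominates the scene).
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)

    # Height of each point above the fitted plane; keep clear outliers (objects).
    height = points[:, 2] - A @ coeffs
    objects = points[height > height_thresh]

    # 2D kernel density over the (x, y) footprint of the object points;
    # density peaks mark the most probable object locations.
    kde = gaussian_kde(objects[:, :2].T)
    density = kde(objects[:, :2].T)
    return objects, density

# Synthetic scene: a flat table plus one raised blob of points.
rng = np.random.default_rng(1)
table = np.c_[rng.uniform(0, 1, (500, 2)), rng.normal(0, 0.002, 500)]
blob = np.c_[rng.normal(0.5, 0.03, (80, 2)), rng.uniform(0.05, 0.12, 80)]
objects, density = segment_on_plane(np.vstack([table, blob]))
print(objects.shape, density.max())
```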


International Conference on Pattern Recognition | 2014

Detection of Tumor Cell Spheroids from Co-cultures Using Phase Contrast Images and Machine Learning Approach

Neslihan Bayramoglu; Mika Kaakinen; Lauri Eklund; Malin Åkerfelt; Juho Kannala; Janne Heikkilä

Automated image analysis is in demand in cell biology and drug development research. The choice of microscopy is one of the considerations in the trade-off between experimental setup, image acquisition speed, molecular labelling, resolution, and image quality. In many cases phase contrast imaging is favoured in this optimization, but this comes at the price of reduced image quality when imaging 3D cell cultures. For such data, existing state-of-the-art computer vision methods perform poorly at segmenting a specific cell type; low SNR, clutter, and occlusions are fundamental challenges for blind segmentation approaches. In this study we propose an automated method, based on a learning framework, for detecting a particular cell type in cluttered 2D phase contrast images of 3D cell cultures that overcomes these challenges. It relies on local features defined over superpixels. The method learns appearance-based features, statistical features, textural features, and their combinations, and the importance of each feature is measured with a Random Forest classifier. Experiments show that our approach is not sensitive to the particular choice of training data and parameters.
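
The superpixel-plus-classifier idea can be sketched as follows. The per-superpixel features and the labels are placeholders rather than the paper's actual feature set, and the Random Forest's feature_importances_ stand in for the feature-importance measurements mentioned above.

```python
# Oversegment an image into superpixels, compute simple per-superpixel
# statistics, and classify each superpixel with a Random Forest.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.random((256, 256))                        # stand-in for a phase-contrast frame
segments = slic(image, n_segments=200, compactness=10, channel_axis=None)

feats, ids = [], np.unique(segments)
for s in ids:
    pix = image[segments == s]
    feats.append([pix.mean(), pix.std(), pix.min(), pix.max()])
X = np.array(feats)
y = rng.integers(0, 2, size=len(ids))                 # placeholder spheroid / background labels

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(dict(zip(["mean", "std", "min", "max"], rf.feature_importances_.round(3))))
```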


Proceedings of the Twelfth International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines | 2009

Integration of 2D Images and Range Data for Object Segmentation and Recognition

Neslihan Bayramoglu; Oytun Akman; A. Aydin Alatan; Pieter P. Jonker

In the field of vision-based robot actuation, background separation and object selection are fundamental tasks that must be carried out quickly and efficiently in order to manipulate objects in an environment. In this paper, we propose a method to segment possible object locations in the scene and recognize them via a local-point-based representation. Exploiting the 3D structure of the scene obtained with a time-of-flight camera, background regions are eliminated under the assumption that the objects are placed on planar surfaces. Next, object recognition is performed using scale-invariant features in the high-resolution images captured with a standard camera. Preliminary experimental results show that the proposed system gives promising results for background segmentation and object recognition, especially in service robot environments, and it could also be utilized as a pre-processing step in path planning and 3D scene map generation.
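
The recognition half of the pipeline might look like the sketch below: scale-invariant (SIFT) features from the high-resolution 2D image matched against a stored object model with a ratio test, with the range-based background elimination assumed to have already cropped the candidate region. The OpenCV calls and the toy data are illustrative, not the paper's implementation.

```python
# SIFT matching between an object model image and a scene crop.
import cv2
import numpy as np

def match_object(model_gray, scene_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(model_gray, None)
    kp2, des2 = sift.detectAndCompute(scene_gray, None)
    if des1 is None or des2 is None:
        return 0
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good)          # more surviving matches => more likely the same object

# Toy usage with a random texture and a shifted copy of it.
model = (np.random.rand(200, 200) * 255).astype(np.uint8)
scene = np.roll(model, 30, axis=1)
print(match_object(model, scene), "ratio-test matches")
```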


Neurocomputing | 2016

Comparison of 3D local and global descriptors for similarity retrieval of range data

Neslihan Bayramoglu; A. Aydin Alatan

Recent improvements in scanning technologies, such as the consumer penetration of RGB-D cameras, make obtaining and managing range image databases practical; hence, the need to describe and index such data arises. In this study, we focus on similarity indexing of range data within a database of range objects (range-to-range retrieval) by employing only single-view depth information. We utilize feature-based approaches at both local and global scales; however, the emphasis is on local descriptors with their global representations. A comparative study with extensive experimental results is presented. In addition, we introduce a publicly available range object database that is large and diverse, making it suitable for similarity retrieval applications. The simulation results indicate competitive performance between local and global methods: while a better complexity trade-off can be achieved with global techniques, local methods perform better at distinguishing different parts of incomplete depth data.
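
Once descriptors are extracted (the part the paper actually compares, omitted here), range-to-range retrieval reduces to nearest-neighbour search in descriptor space. The descriptor dimension and distance metric below are illustrative assumptions.

```python
# Retrieval over per-object global descriptors with nearest-neighbour search.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))      # global descriptors of database objects
query = rng.normal(size=(1, 128))            # descriptor of the query range view

index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(database)
distances, ranked_ids = index.kneighbors(query)
print(ranked_ids[0], distances[0].round(2))  # top-5 most similar database objects
```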

Collaboration


Dive into Neslihan Bayramoglu's collaborations.

Top Co-Authors

A. Aydin Alatan

Middle East Technical University

Oytun Akman

Delft University of Technology

Pieter P. Jonker

Delft University of Technology
