Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Melih Kandemir is active.

Publication


Featured research published by Melih Kandemir.


Medical Image Analysis | 2010

Automatic segmentation of colon glands using object-graphs

Cigdem Gunduz-Demir; Melih Kandemir; Akif Burak Tosun; Cenk Sokmensuer

Gland segmentation is an important step to automate the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as the variation in staining, fixation, and sectioning procedures leads to a considerable number of artifacts and variations in tissue sections, which may result in large variations in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands making use of the organizational properties of these objects, which are quantified with the definition of object-graphs. As opposed to the previous literature, the proposed approach employs object-based information for the gland segmentation problem, instead of using pixel-based information alone. Working with images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variations in tissue.
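
The object-graph idea lends itself to a compact illustration. Below is a minimal Python sketch, assuming primitive objects have already been extracted from the tissue image; the k-nearest-neighbor graph and the two per-object features are illustrative placeholders, not the paper's exact definitions.

```python
# A minimal sketch of the object-graph idea, assuming primitive objects have
# already been extracted (e.g., by clustering pixels into tissue components).
# The graph construction and features here are illustrative, not the paper's
# exact definitions.
import numpy as np
from scipy.spatial import cKDTree

def object_graph_features(centroids, k=5):
    """Build a k-nearest-neighbor graph over object centroids and return
    simple per-object organizational features (mean/std of edge lengths)."""
    tree = cKDTree(centroids)
    # query k+1 neighbors because the nearest neighbor of a point is itself
    dists, _ = tree.query(centroids, k=k + 1)
    neighbor_dists = dists[:, 1:]             # drop the self-distance
    return np.column_stack([
        neighbor_dists.mean(axis=1),          # local object density proxy
        neighbor_dists.std(axis=1),           # regularity of the arrangement
    ])

# toy usage: 200 random "objects" in a 512x512 tissue image
rng = np.random.default_rng(0)
feats = object_graph_features(rng.uniform(0, 512, size=(200, 2)))
print(feats.shape)  # (200, 2)
```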


Pattern Recognition | 2009

Object-oriented texture analysis for the unsupervised segmentation of biopsy images for cancer detection

Akif Burak Tosun; Melih Kandemir; Cenk Sokmensuer; Cigdem Gunduz-Demir

Staining methods routinely used in pathology lead to similar color distributions in biologically different regions of histopathological images. This causes problems in image segmentation for the quantitative analysis and detection of cancer. To overcome this problem, unlike previous methods that use pixel distributions, we propose a new homogeneity measure based on the distribution of the objects that we define to represent tissue components. Using this measure, we demonstrate a new object-oriented segmentation algorithm. Working with colon biopsy images, we show that this algorithm segments the cancerous and normal regions with 94.89% accuracy on average and significantly improves the segmentation accuracy compared to its pixel-based counterpart.
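
As a rough illustration of an object-distribution homogeneity measure, the sketch below scores a region by the entropy of its object-type histogram; this stand-in definition is ours, not the paper's.

```python
# A minimal sketch of an object-distribution homogeneity measure: the entropy
# of the object-type counts inside a candidate region. Lower entropy means a
# more homogeneous region. An illustrative stand-in, not the paper's measure.
import numpy as np

def region_homogeneity(object_types, n_types):
    """Entropy of the object-type histogram of one region."""
    counts = np.bincount(object_types, minlength=n_types).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# toy usage: a region dominated by one object type vs. a mixed region
print(region_homogeneity(np.array([0, 0, 0, 0, 1]), n_types=3))     # low entropy
print(region_homogeneity(np.array([0, 1, 2, 0, 1, 2]), n_types=3))  # high entropy
```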


Virtual Reality | 2011

An augmented reality interface to contextual information

Antti Ajanki; Mark Billinghurst; Hannes Gamper; Toni Järvenpää; Melih Kandemir; Samuel Kaski; Markus Koskela; Mikko Kurimo; Jorma Laaksonen; Kai Puolamäki; Teemu Ruokolainen; Timo Tossavainen

In this paper, we report on a prototype augmented reality (AR) platform for accessing abstract information in real-world pervasive computing environments. Using this platform, objects, people, and the environment serve as contextual channels to more information. The user’s interest with respect to the environment is inferred from eye movement patterns, speech, and other implicit feedback signals, and these data are used for information filtering. The results of proactive context-sensitive information retrieval are augmented onto the view of a handheld or head-mounted display or uttered as synthetic speech. The augmented information becomes part of the user’s context, and if the user shows interest in the AR content, the system detects this and provides progressively more information. In this paper, we describe the first use of the platform to develop a pilot application, Virtual Laboratory Guide, and early evaluation results of this application.


Computerized Medical Imaging and Graphics | 2015

Computer-aided diagnosis from weak supervision: A benchmarking study

Melih Kandemir; Fred A. Hamprecht

Supervised machine learning is a powerful tool frequently used in computer-aided diagnosis (CAD) applications. The bottleneck of this technique is its demand for fine-grained expert annotations, which are tedious to produce for medical image analysis applications. Furthermore, information is typically localized in diagnostic images, which makes representation of an entire image by a single feature set problematic. The multiple instance learning framework serves as a remedy to these two problems by allowing labels to be provided for groups of observations, called bags, and assuming the group label to be the maximum of the instance labels within the bag. This setup can effectively be applied to CAD by splitting a given diagnostic image into a Cartesian grid, treating each grid element (patch) as an instance by representing it with a feature set, and grouping instances belonging to the same image into a bag. We quantify the power of existing multiple instance learning methods by evaluating their performance on two distinct CAD applications: (i) Barrett's cancer diagnosis and (ii) diabetic retinopathy screening. In the experiments, mi-Graph emerges as the best-performing method in bag-level prediction (i.e. diagnosis) for both of these applications, which have drastically different visual characteristics. For instance-level prediction (i.e. disease localization), mi-SVM ranks as the most accurate method.
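
The bag construction described above is straightforward to sketch. The following Python is a minimal illustration, assuming grayscale images; the two patch features are placeholders for a real feature set.

```python
# A minimal sketch of the MIL setup described above: split an image into a
# Cartesian grid of patches, represent each patch by a feature vector (here
# two placeholder statistics), and group the patches of one image into a bag
# whose label is the maximum of its instance labels.
import numpy as np

def image_to_bag(image, patch=32):
    h, w = image.shape[:2]
    instances = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = image[y:y + patch, x:x + patch]
            instances.append([tile.mean(), tile.std()])  # placeholder features
    return np.asarray(instances)

def bag_label(instance_labels):
    # MIL assumption: a bag is positive if any of its instances is positive
    return int(np.max(instance_labels))

# toy usage
img = np.random.rand(256, 256)
bag = image_to_bag(img)
print(bag.shape)             # (64, 2): 64 patches, 2 features each
print(bag_label([0, 0, 1]))  # 1
```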


Medical Image Computing and Computer-Assisted Intervention | 2014

Empowering multiple instance histopathology cancer diagnosis by cell graphs

Melih Kandemir; Chong Zhang; Fred A. Hamprecht

We introduce a probabilistic classifier that combines multiple instance learning and relational learning. While multiple instance learning allows automated cancer diagnosis from only image-level annotations, relational learning allows exploiting changes in cell formations due to cancer. Our method extends Gaussian process multiple instance learning with a relational likelihood that brings improved diagnostic performance on two tissue microarray data sets (breast and Barrett's cancer) when similarity of cell layouts in different tissue regions is used as relational side information.
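
The relational idea can be approximated by adding a side-information similarity matrix to a feature kernel. The sketch below does this on random placeholder data and uses an SVM with a precomputed kernel as a stand-in, since the paper's Gaussian process MIL model is not available in standard libraries.

```python
# A minimal sketch of the relational idea: add a side-information similarity
# matrix (e.g., similarity of cell layouts between tissue regions) to a
# feature-based kernel before classification. An SVM stands in for the
# paper's GP-MIL model; all data here are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))              # region features
y = rng.integers(0, 2, size=60)            # toy labels
S = rbf_kernel(rng.normal(size=(60, 4)))   # hypothetical cell-layout similarity

beta = 0.5                                 # weight of the relational term
K = rbf_kernel(X) + beta * S               # composite kernel

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                     # training accuracy on the toy data
```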


International Workshop on Machine Learning for Signal Processing | 2010

Contextual information access with Augmented Reality

Antti Ajanki; Mark Billinghurst; Toni Järvenpää; Melih Kandemir; Samuel Kaski; Markus Koskela; Mikko Kurimo; Jorma Laaksonen; Kai Puolamäki; Teemu Ruokolainen; Timo Tossavainen

We have developed a prototype platform for contextual information access in mobile settings. Objects, people, and the environment are considered as contextual channels or cues to more information. The system infers, based on gaze, speech and other implicit feedback signals, which of the contextual cues are relevant, retrieves more information relevant to the cues, and presents the information with Augmented Reality (AR) techniques on a handheld or head-mounted display. The augmented information becomes potential contextual cues as well, and its relevance is assessed to provide more information. In essence, the platform turns the real world into an information browser which focuses proactively on the information inferred to be the most relevant for the user. We present the first pilot application, a Virtual Laboratory Guide, and its early evaluation results.
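
The feedback loop this platform implements, in which implicit signals update cue relevance and retrieved information becomes a new cue, can be caricatured in a few lines. Everything in the sketch below (the decay rule, the cue names) is invented for illustration.

```python
# A hypothetical illustration of the feedback loop described above: implicit
# signals (here, gaze fixation counts) update per-cue relevance scores, the
# top cue triggers retrieval, and the retrieved item joins the cue set.
def update_relevance(scores, fixations, decay=0.9):
    """Exponentially decay old relevance and add new fixation evidence."""
    return {cue: decay * scores.get(cue, 0.0) + fixations.get(cue, 0)
            for cue in set(scores) | set(fixations)}

scores = {}
for fixations in [{"poster": 2}, {"poster": 1, "person": 3}]:
    scores = update_relevance(scores, fixations)
    top = max(scores, key=scores.get)
    scores.setdefault(f"info_about_{top}", 0.0)  # retrieved item becomes a cue
print(scores)
```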


International Symposium on Biomedical Imaging | 2014

Digital pathology: Multiple instance learning can detect Barrett's cancer

Melih Kandemir; Annette Feuchtinger; Axel Walch; Fred A. Hamprecht

We study diagnosis of Barrett's cancer from hematoxylin & eosin (H&E) stained histopathological biopsy images using multiple instance learning (MIL). We partition tissue cores into rectangular patches, and construct a feature vector consisting of a large set of cell-level and patch-level features for each patch. In MIL terms, we treat each tissue core as a bag (a group of instances with a single group-level ground-truth label) and each patch as an instance. After a benchmarking study on several MIL approaches, we find that a graph-based MIL algorithm, mi-Graph [1], gives the best performance (87% accuracy, 0.93 AUC), due to its inherent suitability to bags with spatially correlated instances. In patch-level diagnosis, we reach 82% accuracy and 0.89 AUC using Bayesian logistic regression. We also pursue a study on feature importance, which shows that patch-level color and texture features and cell-level features all contribute significantly to prediction.
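
The instance-to-bag aggregation used for diagnosis is easy to sketch: patch-level probabilities are combined into a core-level prediction with the max rule, then scored with AUC. The classifier below is a plain logistic regression stand-in on random data, not the paper's models.

```python
# A minimal sketch of instance-to-bag aggregation for diagnosis: patch-level
# probabilities from any classifier are combined into a tissue-core-level
# prediction via the max rule. Logistic regression is a stand-in classifier;
# features and labels are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cores, patches_per_core = 40, 16
X = rng.normal(size=(n_cores * patches_per_core, 8))   # patch features
y_patch = (rng.random(len(X)) < 0.05).astype(int)      # sparse toy patch labels

clf = LogisticRegression().fit(X, y_patch)
p_patch = clf.predict_proba(X)[:, 1].reshape(n_cores, patches_per_core)

p_core = p_patch.max(axis=1)              # a core is positive if any patch is
y_core = y_patch.reshape(n_cores, -1).max(axis=1)
print(roc_auc_score(y_core, p_core))
```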


NeuroImage | 2015

Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals

Jukka-Pekka Kauppi; Melih Kandemir; Veli-Matti Saarinen; Lotta Hirvenkari; Lauri Parkkonen; Arto Klami; Riitta Hari; Samuel Kaski

We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements.
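
Finding (ii), fusing gaze-based and MEG-based classifiers, corresponds to a simple late-fusion scheme. The sketch below averages the predicted relevance probabilities of a nonlinear Gaussian process classifier (MEG) and a linear model (gaze); the features are random placeholders, and the real pipeline's preprocessing is omitted.

```python
# A minimal sketch of the fusion idea: a nonlinear Gaussian process classifier
# on MEG features, a linear model on gaze features, fused by averaging their
# predicted probabilities of relevance. All features are random placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_meg = rng.normal(size=(100, 20))    # placeholder MEG features
X_gaze = rng.normal(size=(100, 5))    # placeholder gaze features
y = rng.integers(0, 2, size=100)      # relevant / non-relevant

gp = GaussianProcessClassifier(kernel=RBF()).fit(X_meg, y)
lr = LogisticRegression().fit(X_gaze, y)

# late fusion: average the two predicted probabilities of relevance
p_fused = 0.5 * (gp.predict_proba(X_meg)[:, 1] + lr.predict_proba(X_gaze)[:, 1])
print(p_fused[:5])
```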


Eye Tracking Research & Applications | 2010

Inferring object relevance from gaze in dynamic scenes

Melih Kandemir; Veli-Matti Saarinen; Samuel Kaski

As prototypes of data glasses with both data augmentation and gaze tracking capabilities are becoming available, it is now possible to develop proactive gaze-controlled user interfaces that display information about objects, people, and other entities in real-world setups. In order to decide which objects the augmented information should be about, and how saliently to augment, the system needs an estimate of the importance or relevance of the objects in the scene for the user at a given time. The estimates are used to minimize distraction of the user and to provide efficient spatial management of the augmented items. This work is a feasibility study on inferring the relevance of objects in dynamic scenes from gaze. We collected gaze data from subjects watching a video for a pre-defined task. The results show that a simple ordinal logistic regression model gives relevance rankings of scene objects with promising accuracy.
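
An ordinal logistic regression of the kind used here can be fit with off-the-shelf tools. The sketch below uses statsmodels' proportional-odds OrderedModel on two hypothetical gaze features (fixation count and dwell time); the feature set is an assumption, not the paper's.

```python
# A minimal sketch of ordinal logistic regression for relevance ranking, using
# statsmodels' OrderedModel (a proportional-odds model). The two gaze features
# are plausible stand-ins, not the paper's exact feature set.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2))       # e.g., fixation count, total dwell time
y = rng.integers(0, 3, size=120)    # ordinal relevance levels 0 < 1 < 2

model = OrderedModel(y, X, distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.params)                   # feature weights followed by thresholds
```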


Neurocomputing | 2014

Multi-task and multi-view learning of user state

Melih Kandemir; Akos Vetek; Arto Klami; Samuel Kaski

Several computational approaches have been proposed for inferring the affective state of the user, motivated for example by the goal of building improved interfaces that can adapt to the user's needs and internal state. While fairly good results have been obtained for inferring the user state under highly controlled conditions, a considerable amount of work remains to be done for learning high-quality estimates of subjective evaluations of the state in more natural conditions. In this work, we discuss how two recent machine learning concepts, multi-view learning and multi-task learning, can be adapted for user state recognition, and demonstrate them on two data collections of varying quality. Multi-view learning enables combining multiple measurement sensors in a justified way while automatically learning the importance of each sensor. Multi-task learning, in turn, shows how multiple learning tasks can be learned together to improve accuracy. We demonstrate two types of multi-task learning: learning multiple state indicators together and learning models for multiple users together. We also illustrate how the benefits of multi-task learning and multi-view learning can be effectively combined in a unified model by introducing a novel algorithm.
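
Both ideas have simple kernel-based caricatures: multi-view learning as a weighted sum of per-sensor kernels, and multi-task learning as one classifier per user sharing that kernel. The sketch below fixes the view weights that the paper's algorithm would learn, and all data are random placeholders.

```python
# A minimal sketch of the two ideas: multi-view learning as a weighted sum of
# per-view (per-sensor) kernels, and multi-task learning as one classifier per
# user (task) sharing the combined kernel. View weights are fixed here; the
# paper learns them. All data are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
views = [rng.normal(size=(80, d)) for d in (6, 3)]   # e.g., EEG, heart rate
weights = [0.7, 0.3]                                  # fixed view importances
K = sum(w * rbf_kernel(V) for w, V in zip(weights, views))

# one binary state label per user (task); tasks share the multi-view kernel
labels = {u: rng.integers(0, 2, size=80) for u in ("user_a", "user_b")}
models = {u: SVC(kernel="precomputed").fit(K, y) for u, y in labels.items()}
print({u: m.score(K, labels[u]) for u, m in models.items()})
```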

Collaboration


Dive into Melih Kandemir's collaborations.

Top Co-Authors

Mark Billinghurst

University of South Australia


Arto Klami

Helsinki Institute for Information Technology
