
Publication


Featured research published by Caglar Senaras.


Cancer Research | 2017

An Image Analysis Resource for Cancer Research: PIIP—Pathology Image Informatics Platform for Visualization, Analysis, and Management

Anne L. Martel; Dan Hosseinzadeh; Caglar Senaras; Yu Zhou; Azadeh Yazdanpanah; Rushin Shojaii; Emily S. Patterson; Anant Madabhushi; Metin N. Gurcan

Pathology Image Informatics Platform (PIIP) is an NCI/NIH-sponsored project intended for managing, annotating, sharing, and quantitatively analyzing digital pathology imaging data. It expands on an existing, freely available pathology image viewer, Sedeen. The goal of this project is to develop and embed some commonly used image analysis applications into the Sedeen viewer to create a freely available resource for the digital pathology and cancer research communities. Thus far, new plugins have been developed and incorporated into the platform for out-of-focus detection, region-of-interest transformation, and IHC slide analysis. Our biomarker quantification and nuclear segmentation algorithms, written in MATLAB, have also been integrated into the viewer. This article describes the viewing software and the mechanism for extending its functionality through plugins, brief descriptions of which are provided as examples to guide users who want to use this platform. PIIP project materials, including a video describing its usage and applications, and links to the Sedeen Viewer, plugins, and user manuals, are freely available through the project web page: http://pathiip.org. Cancer Res; 77(21); e83-86. ©2017 AACR.


PLOS ONE | 2018

Nuclear IHC enumeration: A digital phantom to evaluate the performance of automated algorithms in digital pathology

Muhammad Khalid Khan Niazi; Fazly Salleh Abas; Caglar Senaras; Michael L. Pennell; Berkman Sahiner; Weijie Chen; John Opfer; Robert P. Hasserjian; Abner Louissaint; Arwa Shana'ah; Gerard Lozanski; Metin N. Gurcan

Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps, both positive and negative, from real whole slide images (WSIs), and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists' estimates and the true ratio ranged from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers showed substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
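The idea of estimating the positive-nuclei ratio from stained areas rather than individual detections can be sketched as below. The mean nucleus areas and the functional form are illustrative assumptions, not the function fitted in the paper.

```python
# Hedged sketch: approximate the positive/total nuclei ratio from total
# stained areas, assuming (hypothetically) that positive and negative
# nuclei have roughly constant mean areas that a training set would supply.

def approx_positive_ratio(pos_area, neg_area,
                          mean_pos_nucleus_area=55.0,
                          mean_neg_nucleus_area=50.0):
    """Estimate (#positive / #total) from total stained areas in pixels.

    The mean nucleus areas are illustrative constants standing in for
    values that would be fitted on training images.
    """
    n_pos = pos_area / mean_pos_nucleus_area   # implied positive count
    n_neg = neg_area / mean_neg_nucleus_area   # implied negative count
    return n_pos / (n_pos + n_neg)

ratio = approx_positive_ratio(pos_area=5500.0, neg_area=20000.0)
print(round(ratio, 3))  # 0.2
```

Avoiding per-nucleus detection makes the estimate robust to touching and overlapping nuclei, which is exactly where detectors fail.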


Proceedings of SPIE | 2017

Autoscope: automated otoscopy image analysis to diagnose ear pathology and use of clinically motivated eardrum features

Caglar Senaras; Aaron C. Moberly; Theodoros N. Teknos; Garth Essig; Charles A. Elmaraghy; Nazhat Taj-Schaal; Lianbo Yu; Metin N. Gurcan

In this study, we propose an automated otoscopy image analysis system called Autoscope. To the best of our knowledge, Autoscope is the first system designed to detect a wide range of eardrum abnormalities by using high-resolution otoscope images and report the condition of the eardrum as “normal” or “abnormal.” In order to achieve this goal, we first developed a preprocessing step to reduce camera-specific problems, detect the region of interest in the image, and prepare the image for further analysis. Subsequently, we designed a new set of clinically motivated eardrum features (CMEF). Furthermore, we evaluated the potential of the visual MPEG-7 descriptors for the task of tympanic membrane image classification. Then, we fused the information extracted from the CMEF and state-of-the-art computer vision features (CVF), which included the MPEG-7 descriptors and two additional features, using a state-of-the-art classifier. In our experiments, 247 tympanic membrane images with 14 different types of abnormality were used, and Autoscope was able to classify the given tympanic membrane images as normal or abnormal with 84.6% accuracy.
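The fusion step described above can be sketched minimally: concatenate the CMEF and CVF vectors for an image and classify the result. A nearest-centroid rule stands in for the paper's unnamed state-of-the-art classifier, and all feature values below are invented for illustration.

```python
# Hedged sketch of feature-level fusion followed by classification.
# Nearest-centroid is a stand-in, NOT the classifier used in Autoscope.

def fuse(cmef, cvf):
    """Concatenate the two feature sets for one image."""
    return cmef + cvf

def nearest_centroid_predict(x, centroids):
    """Assign the label of the closest class centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical class centroids learned from training images.
centroids = {
    "normal":   [0.2, 0.1, 0.3, 0.2],
    "abnormal": [0.8, 0.7, 0.6, 0.9],
}
sample = fuse([0.75, 0.68], [0.55, 0.85])  # CMEF + CVF for one eardrum image
print(nearest_centroid_predict(sample, centroids))  # abnormal
```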


Proceedings of SPIE | 2017

FOXP3-stained image analysis for follicular lymphoma: optimal adaptive thresholding with maximal nucleus coverage

Caglar Senaras; Michael L. Pennell; Weijie Chen; Berkman Sahiner; Arwa Shana'ah; Abner Louissaint; Robert P. Hasserjian; Gerard Lozanski; Metin N. Gurcan

Immunohistochemical detection of the FOXP3 antigen is a useful marker for detection of regulatory T lymphocytes (TR) in formalin-fixed and paraffin-embedded sections of different types of tumor tissue. TRs play a major role in the homeostasis of the normal immune system, where they prevent autoreactivity of the immune system towards the host. This beneficial effect of TRs is frequently “hijacked” by malignant cells, where tumor-infiltrating regulatory T cells are recruited by the malignant cells to inhibit the beneficial immune response of the host against the tumor cells. In the majority of human solid tumors, an increased number of tumor-infiltrating FOXP3-positive TRs is associated with worse outcome. However, in follicular lymphoma (FL) the impact of the number and distribution of TRs on the outcome still remains controversial. In this study, we present a novel method to detect and enumerate nuclei from FOXP3-stained images of FL biopsies. The proposed method defines a new adaptive thresholding procedure, namely the optimal adaptive thresholding (OAT) method, which aims to minimize under-segmented and over-segmented nuclei during coarse segmentation. Next, we integrate a parameter-free elliptical arc and line segment detector (ELSD) as additional information to refine the segmentation results and split most of the merged nuclei. Finally, we utilize a state-of-the-art superpixel method, Simple Linear Iterative Clustering (SLIC), to split the rest of the merged nuclei. Our dataset consists of 13 region-of-interest images containing 769 negative and 88 positive nuclei. Three expert pathologists evaluated the method and reported sensitivity values in detecting negative and positive nuclei of 83-100% and 90-95%, and precision values of 98-100% and 99-100%, respectively. The proposed solution can be used to investigate the impact of FOXP3-positive nuclei on the outcome and prognosis in FL.
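To make the coarse-segmentation step concrete, here is a minimal adaptive-thresholding sketch: each pixel is compared against its local neighbourhood mean. This is NOT the paper's OAT procedure, which additionally optimizes the threshold to balance under- and over-segmentation; it only shows the basic mechanism that OAT refines.

```python
# Minimal local-mean adaptive threshold for dark nuclei on a bright
# background; img is a list of lists of grayscale values.

def adaptive_threshold(img, radius=1, offset=0):
    """Return a binary mask: 1 where pixel < local mean - offset."""
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            local_mean = sum(vals) / len(vals)
            mask[y][x] = 1 if img[y][x] < local_mean - offset else 0
    return mask

mask = adaptive_threshold([[200, 200, 200],
                           [200, 40, 200],
                           [200, 200, 200]])
print(mask)  # only the dark nucleus pixel is flagged: [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```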


PLOS ONE | 2018

DeepFocus: Detection of out-of-focus regions in whole slide digital images using deep learning

Caglar Senaras; M. Khalid Khan Niazi; Gerard Lozanski; Metin N. Gurcan

The development of whole slide scanners has revolutionized the field of digital pathology. Unfortunately, whole slide scanners often produce images with out-of-focus/blurry areas that limit the amount of tissue available for a pathologist to make an accurate diagnosis/prognosis. Moreover, these artifacts hamper the performance of computerized image analysis systems. These areas are typically identified by visual inspection, which leads to a subjective evaluation causing high intra- and inter-observer variability. Moreover, this process is both tedious and time-consuming. The aim of this study is to develop deep learning-based software, called DeepFocus, which can automatically detect and segment blurry areas in digital whole slide images to address these problems. DeepFocus is built on TensorFlow, an open source library that exploits data flow graphs for efficient numerical computation. DeepFocus was trained by using 16 different H&E- and IHC-stained slides that were systematically scanned on nine different focal planes, generating 216,000 samples with varying amounts of blurriness. When trained and tested on two independent datasets, DeepFocus resulted in an average accuracy of 93.2% (± 9.6%), which is a 23.8% improvement over an existing method. DeepFocus has the potential to be integrated with whole slide scanners to automatically re-scan problematic areas, hence improving the overall image quality for pathologists and image analysis algorithms.
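For intuition about what such a detector learns, a classical (non-deep-learning) proxy for focus quality is the variance of the Laplacian response, which collapses in blurry regions. The sketch below is illustrative only and is not DeepFocus's CNN.

```python
# Variance-of-Laplacian focus measure on a tiny grayscale tile
# (list of lists). Higher values indicate sharper content.

def laplacian_variance(tile):
    """Variance of the 4-neighbour Laplacian over interior pixels."""
    h, w = len(tile), len(tile[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (tile[y - 1][x] + tile[y + 1][x]
                   + tile[y][x - 1] + tile[y][x + 1]
                   - 4 * tile[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

sharp = [[0, 0, 255, 255]] * 4      # crisp vertical edge
blurry = [[0, 85, 170, 255]] * 4    # the same edge, smoothed out
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

Thresholding such a score per tile gives a crude blur map; DeepFocus replaces the hand-picked measure and threshold with features learned from the multi-focal-plane training data.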


PLOS ONE | 2018

Optimized generation of high-resolution phantom images using cGAN: Application to quantification of Ki67 breast cancer images

Caglar Senaras; Muhammad Khalid Khan Niazi; Berkman Sahiner; Michael Pennell; Gary Tozbikian; Gerard Lozanski; Metin N. Gurcan

In pathology, immunohistochemical (IHC) staining of tissue sections is regularly used to diagnose and grade malignant tumors. Typically, IHC stain interpretation is rendered by a trained pathologist using a manual method, which consists of counting each positively and negatively stained cell under a microscope. This manual enumeration suffers from poor reproducibility even in the hands of expert pathologists. To facilitate this process, we propose a novel method to create artificial datasets with known ground truth, which allows us to analyze recall, precision, accuracy, and intra- and inter-observer variability in a systematic manner, enabling us to compare different computer analysis approaches. Our method employs a conditional Generative Adversarial Network that uses a database of Ki67-stained tissues of breast cancer patients to generate synthetic digital slides. Our experiments show that the synthetic images are indistinguishable from real images. Six readers (three pathologists and three image analysts) tried to differentiate 15 real from 15 synthetic images, and the probability that the average reader would be able to correctly classify an image as synthetic or real more than 50% of the time was only 44.7%.


Medical Imaging 2018: Digital Pathology | 2018

Tumor microenvironment for follicular lymphoma: structural analysis for outcome prediction.

Caglar Senaras; Michael Pennell; Weijie Chen; Berkman Sahiner; Arwa Shana'ah; Abner Louissaint; Robert P. Hasserjian; Gerard Lozanski; Metin N. Gurcan

Follicular Lymphoma (FL) is the second most common subtype of lymphoma in the Western world. In 2009, about 15,000 new cases of FL were diagnosed in the U.S., and approximately 120,000 patients were affected. Both the clinical course and prognosis of FL are variable, and at present, oncologists do not have evidence-based systems to assess risk and make individualized treatment choices. Our goal is to develop a clinically relevant, pathology-based prognostic model in FL utilizing a computer-assisted image analysis (CaIA) system to incorporate grade, tumor microenvironment, and immunohistochemical markers, thereby improving upon the existing prognostic models. Therefore, we developed an approach to estimate the outcome of follicular lymphoma patients by analyzing the tumor microenvironment as represented by quantification of CD4, CD8, FoxP3, and Ki67 stains in intra- and inter-follicular regions. In our experiments, we analyzed 15 patients, and we were able to correctly predict the outcome for 87.5% of the patients with no evidence of disease after the therapy/operation.


Medical Imaging 2018: Computer-Aided Diagnosis | 2018

Detection of eardrum abnormalities using ensemble deep learning approaches.

Caglar Senaras; Aaron C. Moberly; Theodoros N. Teknos; Garth Essig; Charles A. Elmaraghy; Nazhat Taj-Schaal; Lianbo Yu; Metin N. Gurcan

In this study, we proposed an approach to report the condition of the eardrum as “normal” or “abnormal” by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network by using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network to take advantage of auto-encoders by using an additional 673 unlabeled eardrum samples. The individual classification accuracies of Network 1 and Network 2 were calculated as 84.4% (± 12.1%) and 82.6% (± 11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine the two approaches to achieve better classification accuracy. The proposed ensemble method allows us to achieve robust classification because it has high accuracy (84.4%) with the lowest standard deviation (± 10.3%).
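One simple way to combine two networks with largely disjoint errors, as described above, is to average their predicted abnormality probabilities and threshold the result. The paper does not spell out its exact fusion rule, so averaging here is an illustrative assumption, and the probabilities below are invented.

```python
# Hedged sketch of probability-averaging ensembling for two classifiers.

def ensemble_predict(p_net1, p_net2, threshold=0.5):
    """Combine two networks' per-image abnormality probabilities."""
    labels = []
    for p1, p2 in zip(p_net1, p_net2):
        avg = (p1 + p2) / 2.0
        labels.append("abnormal" if avg >= threshold else "normal")
    return labels

# The networks disagree on the second image; averaging arbitrates.
print(ensemble_predict([0.9, 0.4, 0.2], [0.8, 0.7, 0.1]))
# ['abnormal', 'abnormal', 'normal']
```

Because only 32% of the two networks' errors overlap, one network's confident correct score can outvote the other's marginal mistake, which is the effect the ensemble exploits.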


BMC Cancer | 2018

Relationship between the Ki67 index and its area based approximation in breast cancer

Muhammad Khalid Khan Niazi; Caglar Senaras; Michael L. Pennell; Vidya Arole; Gary Tozbikian; Metin N. Gurcan

Background: The Ki67 Index has been extensively studied as a prognostic biomarker in breast cancer. However, its clinical adoption is largely hampered by the lack of a standardized method to assess Ki67 that limits inter-laboratory reproducibility. It is important to standardize the computation of the Ki67 Index before it can be effectively used in clinical practice.

Method: In this study, we develop a systematic approach towards standardization of the Ki67 Index. We first create the ground truth, consisting of tumor-positive and tumor-negative nuclei, by registering adjacent breast tissue sections stained with Ki67 and H&E. The registration is followed by segmentation of positive and negative nuclei within tumor regions from the Ki67 images. The true Ki67 Index is then approximated with a linear model of the area of positive nuclei to the total area of tumor nuclei.

Results: When tested on 75 images of Ki67-stained breast cancer biopsies, the proposed method resulted in an average root mean square error of 3.34. In comparison, an expert pathologist resulted in an average root mean square error of 9.98, and an existing automated approach produced an average root mean square error of 5.64.

Conclusions: We show that it is possible to approximate the true Ki67 Index accurately without detecting individual nuclei, and we also statistically demonstrate the weaknesses of commonly adopted approaches that use both tumor and non-tumor regions together while compensating for the latter with higher-order approximations.
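A linear model mapping the positive-area fraction to the true Ki67 Index, as described above, can be sketched with ordinary least squares. The training pairs below are hypothetical stand-ins, not the paper's registered ground-truth data.

```python
# Hedged sketch: fit y = a*x + b where x is the positive-area fraction
# and y is the true Ki67 Index from registered ground truth.

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical training pairs: positive area / total tumor-nucleus area
# versus the true Ki67 Index for four images.
area_fraction = [0.10, 0.25, 0.40, 0.55]
true_index = [0.12, 0.27, 0.41, 0.58]
a, b = fit_line(area_fraction, true_index)
estimated = a * 0.30 + b  # approximate Ki67 Index for a new image
```

The slope and intercept absorb systematic differences between area fraction and true count ratio (e.g. positive nuclei being slightly larger on average), which is why no per-nucleus detection is needed at inference time.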


Journal of Telemedicine and Telecare | 2017

Digital otoscopy versus microscopy: How correct and confident are ear experts in their diagnoses?

Aaron C. Moberly; Margaret Zhang; Lianbo Yu; Metin N. Gurcan; Caglar Senaras; Theodoros N. Teknos; Charles A. Elmaraghy; Nazhat Taj-Schaal; Garth F. Essig

Collaboration


Caglar Senaras's top co-authors:

Berkman Sahiner (Food and Drug Administration)
Aaron C. Moberly (The Ohio State University Wexner Medical Center)
Charles A. Elmaraghy (Nationwide Children's Hospital)
Nazhat Taj-Schaal (The Ohio State University Wexner Medical Center)