Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Tahsin M. Kurç is active.

Publication


Featured research published by Tahsin M. Kurç.


international parallel and distributed processing symposium | 2014

Comparative Performance Analysis of Intel (R) Xeon Phi (TM), GPU, and CPU: A Case Study from Microscopy Image Analysis

George Teodoro; Tahsin M. Kurç; Jun Kong; Lee A. D. Cooper; Joel H. Saltz

We study and characterize the performance of operations in an important class of applications on GPUs and Many Integrated Core (MIC) architectures. Our work is motivated by applications that analyze low-dimensional spatial datasets captured by high resolution sensors, such as image datasets obtained from whole slide tissue specimens using microscopy scanners. Common operations in these applications involve the detection and extraction of objects (object segmentation), the computation of features of each extracted object (feature computation), and characterization of objects based on these features (object classification). In this work, we identify the data access and computation patterns of operations in the object segmentation and feature computation categories. We systematically implement and evaluate the performance of these operations on modern CPUs, GPUs, and MIC systems for a microscopy image analysis application. Our results show that the performance on a MIC of operations that perform regular data access is comparable to, and sometimes better than, that on a GPU. On the other hand, GPUs are significantly more efficient than MICs for operations that access data irregularly. This is a result of the low performance of MICs when it comes to random data access. We have also examined the coordinated use of MICs and CPUs. Our experiments show that using a performance-aware task scheduling strategy for application operations improves performance by about 1.29× over a first-come-first-served strategy. This allows applications to obtain high performance efficiency on CPU-MIC systems - the example application attained an efficiency of 84% on 192 nodes (3072 CPU cores and 192 MICs).
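The performance-aware scheduling idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's scheduler: the operation names, per-device times, and the greedy heuristic below are all assumptions chosen to show why device-aware assignment beats first-come-first-served on a CPU-MIC node.

```python
def schedule(ops, device_times, performance_aware=True):
    """Greedy list scheduler over two devices.  FCFS sends each operation
    to whichever device becomes free first; the performance-aware variant
    also accounts for how fast the operation runs on each device."""
    ready = {dev: 0.0 for dev in ("cpu", "mic")}
    if performance_aware:
        # Handle operations with the largest cross-device gap first, so
        # they can claim their preferred device.
        ops = sorted(ops, key=lambda op: -abs(device_times[op]["cpu"]
                                              - device_times[op]["mic"]))
    for op in ops:
        if performance_aware:
            dev = min(ready, key=lambda d: ready[d] + device_times[op][d])
        else:
            dev = min(ready, key=lambda d: ready[d])  # FCFS: earliest-free device
        ready[dev] += device_times[op][dev]
    return max(ready.values())  # makespan

# Illustrative per-device execution times (seconds) for three operation types.
times = {
    "segmentation":   {"cpu": 4.0, "mic": 1.0},   # regular access: fast on MIC
    "feature_irreg":  {"cpu": 2.0, "mic": 6.0},   # irregular access: slow on MIC
    "classification": {"cpu": 3.0, "mic": 3.0},
}
ops = list(times)
fcfs = schedule(ops, times, performance_aware=False)
aware = schedule(ops, times, performance_aware=True)
print(fcfs, aware)  # the aware makespan is shorter
```

With these toy numbers the aware schedule keeps the irregular operation off the MIC, which is exactly the effect the paper measures at scale.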


BMC Bioinformatics | 2015

Scalable analysis of Big pathology image data cohorts using efficient methods and high-performance computing strategies.

Tahsin M. Kurç; Xin Qi; Daihou Wang; Fusheng Wang; George Teodoro; Lee A. D. Cooper; Michael Nalisnik; Lin Yang; Joel H. Saltz; David J. Foran

Background: We describe a suite of tools and methods that form a core set of capabilities for researchers and clinical investigators to evaluate multiple analytical pipelines and quantify sensitivity and variability of the results while conducting large-scale studies in investigative pathology and oncology. The overarching objective of the current investigation is to address the challenges of large data sizes and high computational demands.

Results: The proposed tools and methods take advantage of state-of-the-art parallel machines and efficient content-based image searching strategies. The content-based image retrieval (CBIR) algorithms can quickly detect and retrieve image patches similar to a query patch using a hierarchical analysis approach. The analysis component based on high performance computing can carry out consensus clustering on 500,000 data points using a large shared memory system.

Conclusions: Our work demonstrates that efficient CBIR algorithms and high performance computing can be leveraged for efficient analysis of large microscopy images to meet the challenges of clinically salient applications in pathology. These technologies enable researchers and clinical investigators to make more effective use of the rich informational content contained within digitized microscopy specimens.
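The consensus-clustering step mentioned above can be illustrated with a minimal sketch. This is not the paper's shared-memory implementation: the co-association matrix, majority threshold, and union-find grouping below are one common way to combine clusterings, shown here on five points instead of 500,000.

```python
from itertools import combinations

def consensus(labelings, threshold=0.5):
    """Combine several clusterings of the same points: count how often each
    pair lands in the same cluster, then merge pairs that agree in a
    majority of runs (single-link grouping via union-find)."""
    n = len(labelings[0])
    together = {(i, j): sum(l[i] == l[j] for l in labelings) / len(labelings)
                for i, j in combinations(range(n), 2)}
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for (i, j), frac in together.items():
        if frac > threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three noisy clusterings of five points: the runs mostly agree that
# {0, 1, 2} belong together and {3, 4} belong together.
runs = [[0, 0, 0, 1, 1],
        [0, 0, 1, 1, 1],
        [2, 2, 2, 3, 3]]
labels = consensus(runs)
print(labels)
```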


very large data bases | 2015

SparkGIS: Efficient Comparison and Evaluation of Algorithm Results in Tissue Image Analysis Studies

Furqan Baig; Mudit Mehrotra; Hoang Vo; Fusheng Wang; Joel H. Saltz; Tahsin M. Kurç

Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison of multiple results, and facilitate algorithm sensitivity studies. The sizes of images and analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present SparkGIS, a distributed, in-memory spatial data processing framework to query, retrieve, and compare large volumes of analytical image result data for algorithm evaluation. Our approach combines the in-memory distributed processing capabilities of Apache Spark and the efficient spatial query processing of Hadoop-GIS. The experimental evaluation of SparkGIS for heatmap computations used to compare nucleus segmentation results from multiple images and analysis runs shows that SparkGIS is efficient and scalable, enabling algorithm evaluation and algorithm sensitivity studies on large datasets.
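As a rough illustration of the per-tile comparison such a framework computes, here is a pure-Python stand-in. The tile names, pixel coordinates, and choice of the Jaccard index are illustrative assumptions; the real system evaluates polygon-valued results with spatial joins on Apache Spark and Hadoop-GIS.

```python
def jaccard(a, b):
    """Jaccard index of two sets of segmented pixel coordinates."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def heatmap(result_a, result_b, tiles):
    """One similarity score per tile; result_* map tile id -> pixel set."""
    return {t: jaccard(result_a.get(t, set()), result_b.get(t, set()))
            for t in tiles}

# Two hypothetical analysis runs over two tiles of an image.
run_a = {"tile0": {(0, 0), (0, 1), (1, 1)}, "tile1": {(5, 5)}}
run_b = {"tile0": {(0, 1), (1, 1)},         "tile1": {(5, 5), (5, 6)}}
hm = heatmap(run_a, run_b, ["tile0", "tile1"])
print(hm)
```

Each tile score becomes one cell of the heatmap used to compare segmentation runs; the distributed version simply evaluates many such tiles in parallel.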


Bioinformatics | 2014

Parallel content-based sub-image retrieval using hierarchical searching

Lin Yang; Xin Qi; Fuyong Xing; Tahsin M. Kurç; Joel H. Saltz; David J. Foran

MOTIVATION: The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative search of similar prostate image patches.

METHOD: Given a representative image patch (sub-image), the algorithm returns a ranked ensemble of image patches throughout the entire whole-slide histology section that exhibit the most similar morphologic characteristics. This is accomplished by first performing hierarchical searching based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in the second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the searching procedure. Using this strategy, the query patch is broadcast to all worker processes. Each worker process is dynamically assigned an image by the master process to search for and return a ranked list of similar patches in the image.

RESULTS: The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens. We achieved excellent image retrieval performance: the recall rate within the first 40 rank retrieved image patches is ∼90%.

AVAILABILITY AND IMPLEMENTATION: Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
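The annular-histogram idea can be sketched on a toy patch. This is a simplified stand-in, not the paper's HAH: the ring geometry (concentric square rings indexed by distance from the border), bin count, and grayscale values are all assumptions made for illustration.

```python
def annular_histogram(patch, rings=2, bins=4):
    """Split a square patch into concentric square rings and histogram
    pixel values per ring, so the descriptor keeps coarse spatial layout."""
    n = len(patch)                      # patch is an n x n grid of 0..255 values
    hists = [[0] * bins for _ in range(rings)]
    for y in range(n):
        for x in range(n):
            # Ring index from the distance to the nearest patch border.
            d = min(x, y, n - 1 - x, n - 1 - y)
            ring = min(d * rings * 2 // n, rings - 1)
            hists[ring][patch[y][x] * bins // 256] += 1
    return hists

# Dark border ring around a bright center: the two rings separate cleanly.
patch = [[0, 0, 0, 0],
         [0, 255, 255, 0],
         [0, 255, 255, 0],
         [0, 0, 0, 0]]
h = annular_histogram(patch)
print(h)
```

Comparing such ring-wise histograms cheaply prunes candidate patches before the finer second-stage color comparison described above.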


Proceedings of SPIE | 2017

Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis

Naiyun Zhou; Xiaxia Yu; Tianhao Zhao; Si Wen; Fusheng Wang; Wei Zhu; Tahsin M. Kurç; Allen R. Tannenbaum; Joel H. Saltz; Yi Gao

Digital histopathology images with more than 1 gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the multiple observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading. As a result, it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report a human-verified segmentation with thousands of nuclei, whereas a single whole slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large scale image synthesis. This could facilitate a more quantitatively validated study for the current and future histopathology image analysis field.
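The evaluation idea rests on a simple observation: when nuclei are synthesized, the ground truth is known exactly, so any segmentation can be scored automatically at any scale. A minimal sketch, with a toy synthetic mask and the Dice coefficient as the (assumed) score:

```python
def dice(gt, pred):
    """Dice coefficient between two sets of foreground pixel coordinates."""
    if not gt and not pred:
        return 1.0
    return 2 * len(gt & pred) / (len(gt) + len(pred))

# Synthetic ground-truth nucleus (known exactly, no pathologist needed)
# and a slightly shifted segmentation of it.
truth = {(x, y) for x in range(3) for y in range(3)}      # 3x3 nucleus
seg   = {(x, y) for x in range(1, 4) for y in range(3)}   # shifted by one pixel
score = dice(truth, seg)
print(score)
```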


Bioinformatics | 2017

Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

George Teodoro; Tahsin M. Kurç; Luis F. R. Taveira; Alba Cristina Magalhaes Alves de Melo; Yi Gao; Jun Kong; Joel H. Saltz

Motivation: Sensitivity analysis and parameter tuning are important processes in large‐scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non‐influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto‐tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU‐BMI/region‐templates/. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
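The prune-then-tune idea in the abstract can be sketched with a stand-in pipeline. Everything here is hypothetical: the parameter names, the toy quality function, and the one-at-a-time screening rule are illustrative assumptions, not the paper's methodology or segmentation code.

```python
def pipeline(threshold, blur, seed):
    # Toy quality metric: depends strongly on threshold, weakly on blur,
    # and not at all on seed (a deliberately non-influential parameter).
    return 1.0 - abs(threshold - 0.6) - 0.01 * abs(blur - 2)

defaults = {"threshold": 0.3, "blur": 2, "seed": 7}
grid = {"threshold": [0.2, 0.4, 0.6, 0.8], "blur": [1, 2, 3], "seed": [1, 7, 42]}

# Stage 1: one-at-a-time screening; prune parameters whose sweep barely
# moves the output metric.
influential = []
for name, values in grid.items():
    scores = [pipeline(**{**defaults, name: v}) for v in values]
    if max(scores) - min(scores) > 0.05:
        influential.append(name)

# Stage 2: tune only the influential parameters, holding the rest at defaults,
# so the search space shrinks from the full grid to a small fraction of it.
best = dict(defaults)
for name in influential:
    best[name] = max(grid[name], key=lambda v: pipeline(**{**best, name: v}))

print(influential, pipeline(**defaults), pipeline(**best))
```

The real framework replaces the toy metric with Dice/Jaccard scores of full segmentation runs and executes the sweeps on a cluster, but the search-space reduction works the same way.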


Proceedings of SPIE | 2016

Hierarchical nucleus segmentation in digital pathology images

Yi Gao; Vadim Ratner; Liangjia Zhu; Tammy Diprima; Tahsin M. Kurç; Allen R. Tannenbaum; Joel H. Saltz

Extracting nuclei is one of the most actively studied topics in digital pathology research. Most studies directly search for the nuclei (or seeds for the nuclei) at the finest resolution available. While such approaches utilize the richest information, it is sometimes difficult to address the heterogeneity of nuclei in different tissues. In this work, we propose a hierarchical approach which starts from a lower resolution level and adaptively adjusts the parameters while progressing into finer and finer resolutions. The algorithm is tested on brain and lung cancer images from The Cancer Genome Atlas data set.
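The coarse-to-fine strategy can be sketched on a toy image. This is an illustrative simplification, not the paper's algorithm: the downsampling, threshold detector, and window refinement below only show the control flow of detecting at low resolution and refining at full resolution.

```python
def downsample(img, f):
    """Max-pool an n x n grid by factor f."""
    n = len(img)
    return [[max(img[y + dy][x + dx] for dy in range(f) for dx in range(f))
             for x in range(0, n, f)] for y in range(0, n, f)]

def detect(img, thresh):
    """Coarse candidates: grid cells at or above a brightness threshold."""
    return [(y, x) for y, row in enumerate(img)
            for x, v in enumerate(row) if v >= thresh]

def refine(img, coarse_yx, f):
    """Refine one coarse candidate at full resolution, searching only the
    small window the coarse cell covers."""
    cy, cx = coarse_yx[0] * f, coarse_yx[1] * f
    window = [(cy + dy, cx + dx) for dy in range(f) for dx in range(f)]
    return max(window, key=lambda p: img[p[0]][p[1]])

# 4x4 toy image with one bright "nucleus" at (2, 1).
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0]]
coarse = detect(downsample(img, 2), thresh=5)   # candidates at half resolution
nuclei = [refine(img, c, 2) for c in coarse]
print(nuclei)
```

Only one of the four coarse cells triggers full-resolution work, which is the cost saving the hierarchical approach exploits on gigapixel slides.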


ieee international conference on high performance computing data and analytics | 2017

Application performance analysis and efficient execution on systems with multi-core CPUs, GPUs and MICs

George Teodoro; Tahsin M. Kurç; Guilherme Andrade; Jun Kong; Renato Ferreira; Joel H. Saltz

We carry out a comparative performance study of multi-core CPUs, GPUs and Intel Xeon Phi (Many Integrated Core (MIC)) with a microscopy image analysis application. We experimentally evaluate the performance of computing devices on core operations of the application. We correlate the observed performance with the characteristics of computing devices and data access patterns, computation complexities, and parallelization forms of the operations. The results show a significant variability in the performance of operations with respect to the device used. The performance of operations with regular data access on a MIC is comparable to, and sometimes better than, that on a GPU. GPUs are more efficient than MICs for operations that access data irregularly, because of the lower bandwidth of the MIC for random data accesses. We propose new performance-aware scheduling strategies that consider variabilities in operation speedups. Our scheduling strategies significantly improve application performance compared with classic strategies in hybrid configurations.


2016 New York Scientific Data Summit (NYSDS) | 2016

Automatic histopathology image analysis with CNNs

Le Hou; Kunal Singh; Dimitris Samaras; Tahsin M. Kurç; Yi Gao; Roberta J. Seidman; Joel H. Saltz

We define Pathomics as the process of high throughput generation, interrogation, and mining of quantitative features from high-resolution histopathology tissue images. Analysis and mining of large volumes of imaging features has great potential to enhance our understanding of tumors. The basic Pathomics workflow consists of several steps: segmentation of tissue images to delineate the boundaries of nuclei, cells, and other structures; computation of size, shape, intensity, and texture features for each segmented object; classification of images and patients based on imaging features; and correlation of classification results with genomic signatures and clinical outcome. Executing a Pathomics workflow on a dataset of thousands of very high resolution (gigapixels) and heterogeneous histopathology images is a computationally challenging problem. In this paper, we use Convolutional Neural Networks (CNN) for automatic recognition of nuclear morphological attributes in histopathology images of glioma, the most common malignant brain tumor. We constructed a comprehensive multi-label dataset of glioma nuclei and applied two CNN based methods on this dataset. Both methods perform well in recognizing some but not all morphological attributes and are complementary to each other.
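The multi-label setup explains how a method can "perform well on some but not all" attributes: each nucleus carries several binary attributes, and accuracy is reported per attribute. A small sketch of that scoring (the attribute names and labels are hypothetical, not from the paper's dataset):

```python
def per_attribute_accuracy(truth, pred, attributes):
    """Multi-label scoring: one accuracy per binary attribute, across nuclei."""
    return {a: sum(t[a] == p[a] for t, p in zip(truth, pred)) / len(truth)
            for a in attributes}

attrs = ["elongated", "hyperchromatic", "mitotic"]
truth = [{"elongated": 1, "hyperchromatic": 0, "mitotic": 0},
         {"elongated": 0, "hyperchromatic": 1, "mitotic": 0},
         {"elongated": 1, "hyperchromatic": 1, "mitotic": 1}]
pred  = [{"elongated": 1, "hyperchromatic": 0, "mitotic": 1},
         {"elongated": 0, "hyperchromatic": 1, "mitotic": 0},
         {"elongated": 1, "hyperchromatic": 0, "mitotic": 1}]
acc = per_attribute_accuracy(truth, pred, attrs)
print(acc)  # strong on one attribute, weaker on the others
```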


symposium on computer architecture and high performance computing | 2014

Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems

Guilherme Andrade; Renato Ferreira; George Teodoro; Leonardo C. da Rocha; Joel H. Saltz; Tahsin M. Kurç

High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and the Intel Xeon Phi (MIC). These processors have made tremendous computing power available at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver a very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical dataflow tasks which are allocated to nodes of a distributed memory machine at coarse grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also experimentally show that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales.

Collaboration


Dive into Tahsin M. Kurç's collaborations.

Top Co-Authors

Yi Gao
Stony Brook University

Jong Youl Choi
Oak Ridge National Laboratory

Norbert Podhorszki
Oak Ridge National Laboratory