Le Hou
Stony Brook University
Publication
Featured research published by Le Hou.
computer vision and pattern recognition | 2016
Le Hou; Dimitris Samaras; Tahsin M. Kurç; Yi Gao; James Davis; Joel H. Saltz
Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel-resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed at the image-patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or comparably to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that robustly and automatically locates discriminative patches by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN.
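The patch-aggregation idea can be sketched in a few lines. This is a minimal illustration only, not the trained decision fusion model or the EM procedure described in the paper: here "discriminative" patches are simply those whose patch-level CNN output is confident, and fusion is a majority vote over their class histogram. All names are illustrative.

```python
import numpy as np

def fuse_patch_predictions(patch_probs, threshold=0.5):
    """Aggregate per-patch class probabilities into an image-level label.

    patch_probs: (n_patches, n_classes) softmax outputs of a patch-level CNN.
    Patches whose top probability falls below `threshold` are treated as
    non-discriminative and ignored; the rest vote by their argmax class.
    """
    confident = patch_probs[patch_probs.max(axis=1) >= threshold]
    if len(confident) == 0:          # fall back to all patches
        confident = patch_probs
    votes = confident.argmax(axis=1)
    counts = np.bincount(votes, minlength=patch_probs.shape[1])
    return int(counts.argmax())

# toy example: 4 patches, 3 cancer subtypes; the last patch is ambiguous
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.4, 0.3, 0.3]])
print(fuse_patch_predictions(probs))  # -> 0 (majority of confident patches)
```

In the paper, the fusion step is itself a learned model over patch-level prediction histograms rather than a fixed vote; the sketch only shows the data flow.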
european conference on computer vision | 2016
Tomas F. Yago Vicente; Le Hou; Chen-Ping Yu; Minh Hoai; Dimitris Samaras
This paper introduces training of shadow detectors under the large-scale dataset paradigm. This was previously impossible due to the high cost of precise shadow annotation. Instead, we advocate the use of quickly but imperfectly labeled images. Our novel label recovery method automatically corrects a portion of the erroneous annotations, such that the trained classifiers perform at state-of-the-art level. We apply our method to improve the accuracy of the labels of a new dataset that is 20 times larger than existing datasets and contains a large variety of scenes and image types. Naturally, such a large dataset is appropriate for training deep learning methods. Thus, we propose a semantic-aware patch-level Convolutional Neural Network architecture that efficiently trains on patch-level shadow examples while incorporating image-level semantic information. This means that the detected shadow patches are refined based on image semantics. Our proposed pipeline can be a useful baseline for future advances in shadow detection.
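The label-recovery idea can be illustrated with a toy rule: flip a noisy binary annotation when the current classifier contradicts it with high confidence. This is a hypothetical stand-in for the paper's actual recovery algorithm, intended only to show the shape of the computation; the function name and threshold are assumptions.

```python
import numpy as np

def recover_labels(noisy_labels, model_probs, confidence=0.9):
    """Flip noisy binary labels that a classifier contradicts confidently.

    noisy_labels: (n,) array of 0/1 quick annotations (1 = shadow).
    model_probs:  (n,) array of P(shadow) from the current classifier.
    """
    recovered = noisy_labels.copy()
    flip = ((model_probs >= confidence) & (noisy_labels == 0)) | \
           ((model_probs <= 1 - confidence) & (noisy_labels == 1))
    recovered[flip] = 1 - recovered[flip]
    return recovered

labels = np.array([0, 1, 0, 1])
probs  = np.array([0.95, 0.97, 0.40, 0.02])
print(recover_labels(labels, probs))  # -> [1 1 0 0]
```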
2016 New York Scientific Data Summit (NYSDS) | 2016
Le Hou; Kunal Singh; Dimitris Samaras; Tahsin M. Kurç; Yi Gao; Roberta J. Seidman; Joel H. Saltz
We define Pathomics as the process of high-throughput generation, interrogation, and mining of quantitative features from high-resolution histopathology tissue images. Analysis and mining of large volumes of imaging features have great potential to enhance our understanding of tumors. The basic Pathomics workflow consists of several steps: segmentation of tissue images to delineate the boundaries of nuclei, cells, and other structures; computation of size, shape, intensity, and texture features for each segmented object; classification of images and patients based on imaging features; and correlation of classification results with genomic signatures and clinical outcome. Executing a Pathomics workflow on a dataset of thousands of very high-resolution (gigapixel) and heterogeneous histopathology images is a computationally challenging problem. In this paper, we use Convolutional Neural Networks (CNN) for automatic recognition of nuclear morphological attributes in histopathology images of glioma, the most common malignant brain tumor. We constructed a comprehensive multi-label dataset of glioma nuclei and applied two CNN-based methods on this dataset. Both methods perform well on some, but not all, morphological attributes, and the two are complementary to each other.
Pattern Recognition | 2019
Le Hou; Vu Nguyen; Ariel B. Kanevsky; Dimitris Samaras; Tahsin M. Kurç; Tianhao Zhao; Rajarsi Gupta; Yi Gao; Wenjin Chen; David J. Foran; Joel H. Saltz
We propose a sparse Convolutional Autoencoder (CAE) for simultaneous nucleus detection and feature extraction in histopathology tissue images. Our CAE detects nuclei in tissue image patches and encodes them into sparse feature maps that capture both the location and appearance of each nucleus. A primary contribution of our work is the development of an unsupervised detection network that exploits the characteristics of histopathology image patches. The pretrained nucleus detection and feature extraction modules in our CAE can be fine-tuned for supervised learning in an end-to-end fashion. We evaluate our method on four datasets and achieve state-of-the-art results. In addition, we are able to achieve comparable performance with only 5% of the fully-supervised annotation cost.
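The key property of such a sparse feature map (most spatial locations zeroed, the few surviving activations acting as detections) can be sketched with a simple top-k sparsification. This is a crude illustration of the representation, not the CAE's learned sparsity mechanism; the function name and `k` are assumptions.

```python
import numpy as np

def sparsify_feature_map(fmap, k=3):
    """Keep only the k strongest activations in a 2-D feature map,
    zeroing the rest, so that each surviving activation marks a
    candidate nucleus location (its value encoding appearance)."""
    flat = fmap.ravel()
    keep = np.argsort(flat)[-k:]       # indices of the k largest activations
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(fmap.shape)

rng = np.random.default_rng(0)
fmap = rng.random((8, 8))              # stand-in for an encoder output
sparse = sparsify_feature_map(fmap, k=3)
print(int((sparse > 0).sum()))         # -> 3 nonzero "detections" remain
```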
workshop on applications of computer vision | 2017
Veda Murthy; Le Hou; Dimitris Samaras; Tahsin M. Kurç; Joel H. Saltz
Classifying the various shapes and attributes of a glioma cell nucleus is crucial for diagnosis and understanding of the disease. We investigate the automated classification of the nuclear shapes and visual attributes of glioma cells, using Convolutional Neural Networks (CNNs) on pathology images of automatically segmented nuclei. We propose three methods that improve the performance of a previously developed semi-supervised CNN. First, we propose a method that allows the CNN to focus on the most important part of an image: the image's center, which contains the nucleus. Second, we inject (concatenate) pre-extracted VGG features into an intermediate layer of our semi-supervised CNN so that during training, the CNN can learn a set of additional features. Third, we separate the losses of the two groups of target classes (nuclear shapes and attributes) into a single-label loss and a multi-label loss in order to incorporate prior knowledge of inter-label exclusiveness. On a dataset of 2078 images, the combination of the proposed methods reduces the error rate of attribute and shape classification by 21.54% and 15.07% respectively, compared to the existing state-of-the-art method on the same dataset.
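The first of the three methods amounts to emphasizing the central region of each nucleus image. A minimal sketch of such a center crop is shown below; the paper's actual mechanism for focusing the CNN may differ, and the function name is illustrative.

```python
import numpy as np

def center_crop(image, crop_size):
    """Crop the central crop_size x crop_size region of an image,
    so the classifier attends to the nucleus at the image center."""
    h, w = image.shape[:2]
    top = (h - crop_size) // 2
    left = (w - crop_size) // 2
    return image[top:top + crop_size, left:left + crop_size]

img = np.arange(100).reshape(10, 10)   # toy 10x10 "nucleus image"
crop = center_crop(img, 4)
print(crop.shape)                      # -> (4, 4)
```

The second method (feature injection) is simply a channel-wise concatenation of pre-extracted VGG features with an intermediate CNN activation before the subsequent layers.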
Archive | 2015
Le Hou; Dimitris Samaras; Tahsin M. Kurç; Yi Gao; James Davis; Joel H. Saltz
arXiv: Computer Vision and Pattern Recognition | 2016
Le Hou; Chen-Ping Yu; Dimitris Samaras
international conference on artificial intelligence and statistics | 2017
Le Hou; Dimitris Samaras; Tahsin M. Kurç; Yi Gao; Joel H. Saltz
arXiv: Computer Vision and Pattern Recognition | 2017
Le Hou; Ayush Agarwal; Dimitris Samaras; Tahsin M. Kurç; Rajarsi Gupta; Joel H. Saltz
arXiv: Computer Vision and Pattern Recognition | 2016
Le Hou; Dimitris Samaras; Tahsin M. Kurç; Yi Gao; Joel H. Saltz