Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Aïcha BenTaieb is active.

Publication


Featured research published by Aïcha BenTaieb.


Medical Image Computing and Computer-Assisted Intervention | 2016

Topology Aware Fully Convolutional Networks for Histology Gland Segmentation

Aïcha BenTaieb; Ghassan Hamarneh

The recent success of deep learning techniques in classification and object detection tasks has been leveraged for segmentation tasks. However, a weakness of these deep segmentation models is their limited ability to encode high level shape priors, such as smoothness and preservation of complex interactions between object regions, which can result in implausible segmentations. In this work, by formulating and optimizing a new loss, we introduce the first deep network trained to encode geometric and topological priors of containment and detachment. Our results on the segmentation of histology glands from a dataset of 165 images demonstrate the advantage of our novel loss terms and show how our topology aware architecture outperforms competing methods by up to 10 % in both pixel-level accuracy and object-level Dice.
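The containment prior described above can be made differentiable so that a network can be trained with it. As a hedged illustration only (this is not the paper's actual loss formulation), one simple way to penalize an inner structure (e.g., a gland lumen) predicted outside its enclosing structure is to integrate the inner probability mass over regions where the outer probability is low:

```python
# Hedged sketch: encoding a "containment" prior (e.g., lumen must lie inside
# gland) as a differentiable penalty on two probability maps. This is an
# illustration of the general idea, not the exact loss from the paper.

def containment_penalty(p_inner, p_outer):
    """Average probability mass of the inner structure falling outside
    the outer structure; zero when containment is perfectly respected."""
    total = 0.0
    n = 0
    for row_in, row_out in zip(p_inner, p_outer):
        for pi, po in zip(row_in, row_out):
            total += pi * (1.0 - po)  # inner present where outer is absent
            n += 1
    return total / n

# Lumen predicted exactly inside the gland -> penalty 0
gland = [[1.0, 1.0], [0.0, 0.0]]
lumen = [[1.0, 0.0], [0.0, 0.0]]
print(containment_penalty(lumen, gland))  # 0.0
```

Because the penalty is a smooth function of the two probability maps, it can be added as an extra term to a standard segmentation loss and minimized by gradient descent.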


International Symposium on Biomedical Imaging | 2016

Deep features to classify skin lesions

Jeremy Kawahara; Aïcha BenTaieb; Ghassan Hamarneh

Diagnosing an unknown skin lesion is the first step in determining appropriate treatment. We demonstrate that a linear classifier, trained on features extracted from a convolutional neural network pretrained on natural images, distinguishes among up to ten skin lesions with a higher accuracy than previously published state-of-the-art results on the same dataset. Further, in contrast to competing works, our approach requires neither lesion segmentations nor complex preprocessing. We gain consistent additional improvements in accuracy using a per-image normalization, a fully convolutional network to extract multi-scale features, and by pooling over an augmented feature space. Compared to the state-of-the-art, our proposed approach achieves a favourable accuracy of 85.8% over 5 classes (compared to 75.1%) with noticeable improvements in accuracy for underrepresented classes (e.g., 60% compared to 15.6%). Over the entire 10-class dataset of 1300 images captured with a standard (non-dermoscopic) camera, our method achieves an accuracy of 81.8%, outperforming the 67% accuracy previously reported.
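Two of the ingredients the abstract credits with accuracy gains are easy to sketch in isolation: per-image normalization and pooling features over augmented copies of an image. The sketch below is a minimal, hedged illustration; the feature extractor itself (a pretrained CNN) is stubbed out, and the values are purely illustrative:

```python
import math

# Hedged sketch of two ingredients the paper reports gains from: per-image
# normalization and pooling features over an augmented set. The pretrained
# CNN feature extractor is not shown; inputs here are toy values.

def per_image_normalize(pixels):
    """Zero-mean, unit-variance normalization of one image's pixel values."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    std = math.sqrt(var) or 1.0  # guard against constant images
    return [(p - mean) / std for p in pixels]

def pool_augmented_features(feature_sets):
    """Average-pool feature vectors extracted from augmented image copies."""
    n = len(feature_sets)
    return [sum(f[i] for f in feature_sets) / n for i in range(len(feature_sets[0]))]

img = [10.0, 20.0, 30.0]
print(per_image_normalize(img))                           # zero-mean, unit-std values
print(pool_augmented_features([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```

The normalized features would then feed a linear classifier, as in the paper's pipeline.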


IEEE Transactions on Medical Imaging | 2018

Adversarial Stain Transfer for Histopathology Image Analysis

Aïcha BenTaieb; Ghassan Hamarneh

It is generally recognized that color information is central to the automatic and visual analysis of histopathology tissue slides. In practice, pathologists rely on color, which reflects the presence of specific tissue components, to establish a diagnosis. Similarly, automatic histopathology image analysis algorithms rely on color or intensity measures to extract tissue features. With the increasing access to digitized histopathology images, color variation and its implications have become a critical issue. These variations are the result of not only a variety of factors involved in the preparation of tissue slides but also in the digitization process itself. Consequently, different strategies have been proposed to alleviate stain-related tissue inconsistencies in automatic image analysis systems. Such techniques generally rely on collecting color statistics to perform color matching across images. In this work, we propose a different approach for stain normalization that we refer to as stain transfer. We design a discriminative image analysis model equipped with a stain normalization component that transfers stains across datasets. Our model comprises a generative network that learns data set-specific staining properties and image-specific color transformations as well as a task-specific network (e.g., classifier or segmentation network). The model is trained end-to-end using a multi-objective cost function. We evaluate the proposed approach in the context of automatic histopathology image analysis on three data sets and two different analysis tasks: tissue segmentation and classification. The proposed method achieves superior results in terms of accuracy and quality of normalized images compared to various baselines.
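The classical baseline family the abstract contrasts against ("collecting color statistics to perform color matching across images") can be sketched compactly. The following is a hedged, toy illustration of per-channel statistics matching in the spirit of Reinhard-style normalization, not the adversarial stain-transfer model the paper proposes:

```python
import math

# Hedged sketch of the classical baseline the abstract mentions: stain
# normalization by matching per-channel color statistics (mean/std) of a
# source image to those of a reference image. Values are illustrative.

def match_channel_stats(source, ref_mean, ref_std):
    """Shift and scale one channel of a source image to reference statistics."""
    mean = sum(source) / len(source)
    std = math.sqrt(sum((v - mean) ** 2 for v in source) / len(source)) or 1.0
    return [(v - mean) / std * ref_std + ref_mean for v in source]

channel = [50.0, 100.0, 150.0]
matched = match_channel_stats(channel, ref_mean=120.0, ref_std=10.0)
print(round(sum(matched) / len(matched), 6))  # 120.0
```

A key limitation of such global statistics matching, which motivates learned approaches like the paper's, is that it ignores which tissue component produced each color.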


Medical Image Analysis | 2017

A structured latent model for ovarian carcinoma subtyping from histopathology slides

Aïcha BenTaieb; Hector Li-Chang; David Huntsman; Ghassan Hamarneh

Highlights:
- A method for automatic subtyping of ovarian carcinomas from whole-slide histopathology images is proposed.
- The method features a context-aware classification model using a multi-resolution patch-based representation.
- An efficient and scalable feature learning strategy is used to process large-scale histopathology images.
- Salient discriminative regions are automatically highlighted to the user on the whole-slide images.

Abstract: Accurate subtyping of ovarian carcinomas is an increasingly critical and often challenging diagnostic process. This work focuses on the development of an automatic classification model for ovarian carcinoma subtyping. Specifically, we present a novel clinically inspired contextual model for histopathology image subtyping of ovarian carcinomas. A whole slide image is modelled using a collection of tissue patches extracted at multiple magnifications. An efficient and effective feature learning strategy is used for feature representation of a tissue patch. The locations of salient, discriminative tissue regions are treated as latent variables, allowing the model to explicitly ignore portions of the large tissue section that are unimportant for classification. These latent variables are considered in a structured formulation to model the contextual information represented by the multi-magnification analysis of tissues. A novel structured latent support vector machine formulation is defined and used to combine information from multiple magnifications while simultaneously operating within the latent variable framework. The structural and contextual nature of our method addresses the challenges of intra-class variation and pathologists' workload, which are prevalent in histopathology image classification. Extensive experiments on a dataset of 133 patients demonstrate the efficacy and accuracy of the proposed method against state-of-the-art approaches for histopathology image classification. We achieve an average multi-class classification accuracy of 90%, outperforming existing works while obtaining substantial agreement with six clinicians tested on the same dataset.


Journal of Pathology Informatics | 2016

Clinically-inspired automatic classification of ovarian carcinoma subtypes.

Aïcha BenTaieb; Masoud Nosrati; Hector Li-Chang; David Huntsman; Ghassan Hamarneh

Context: It has been shown that ovarian carcinoma subtypes are distinct pathologic entities with differing prognostic and therapeutic implications. Histotyping by pathologists has good reproducibility, but occasional cases are challenging and require immunohistochemistry and subspecialty consultation. Motivated by the need for more accurate and reproducible diagnoses and to facilitate pathologists' workflow, we propose an automatic framework for ovarian carcinoma classification. Materials and Methods: Our method is inspired by pathologists' workflow. We analyse imaged tissues at two magnification levels and extract clinically-inspired color, texture, and segmentation-based shape descriptors using image-processing methods. We propose a carefully designed machine learning technique composed of four modules: a dissimilarity matrix, dimensionality reduction, feature selection, and a support vector machine classifier to separate the five ovarian carcinoma subtypes using the extracted features. Results: This paper presents the details of our implementation and its validation on a clinically derived dataset of eighty high-resolution histopathology images. The proposed system achieved a multiclass classification accuracy of 95.0% when classifying unseen tissues. Assessment of the classifier's confusion (confusion matrix) between the five different ovarian carcinoma subtypes agrees with clinicians' confusion and reflects the difficulty in diagnosing endometrioid and serous carcinomas. Conclusions: Our results from this first study highlight the difficulty of ovarian carcinoma diagnosis, which originates from the intrinsic class imbalance observed among subtypes, and suggest that the automatic analysis of ovarian carcinoma subtypes could be a valuable addition to clinicians' diagnostic procedures by providing a second opinion.


CVII-STENT/LABELS@MICCAI | 2017

Uncertainty Driven Multi-loss Fully Convolutional Networks for Histopathology

Aïcha BenTaieb; Ghassan Hamarneh

Different works have shown that the combination of multiple loss functions is beneficial when training deep neural networks for a variety of prediction tasks. Generally, such multi-loss approaches are implemented via a weighted multi-loss objective function in which each term encodes a different desired inference criterion. The importance of each term is often set using empirically tuned hyper-parameters. In this work, we analyze the importance of the relative weighting between the different terms of a multi-loss function and propose to leverage the model’s uncertainty with respect to each loss as an automatically learned weighting parameter. We consider the application of colon gland analysis from histopathology images for which various multi-loss functions have been proposed. We show improvements in classification and segmentation accuracy when using the proposed uncertainty driven multi-loss function.
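The weighting idea above can be sketched concretely. In the well-known homoscedastic-uncertainty formulation (cf. Kendall and Gal), each task loss L_i is scaled by exp(-s_i), where s_i is a learned log-variance, plus a regularizer s_i that stops the model from shrinking every weight to zero. The exact terms below are an assumption for illustration, not necessarily the paper's formulation:

```python
import math

# Hedged sketch of uncertainty-driven loss weighting: each task loss is
# down-weighted by the model's learned uncertainty for that task. In
# training, the s_i would be free parameters optimized jointly with the
# network weights; here they are plain numbers for illustration.

def uncertainty_weighted_loss(losses, log_vars):
    """Combine task losses L_i as sum_i exp(-s_i) * L_i + s_i."""
    return sum(math.exp(-s) * L + s for L, s in zip(losses, log_vars))

# Equal confidence in both tasks: reduces to the plain sum of losses.
print(uncertainty_weighted_loss([1.0, 2.0], [0.0, 0.0]))  # 3.0
```

Raising s_i for a noisy task lowers that task's effective weight exp(-s_i), which is what replaces the empirically tuned hyper-parameters mentioned in the abstract.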


Medical Image Computing and Computer-Assisted Intervention | 2018

Predicting Cancer with a Recurrent Visual Attention Model for Histopathology Images

Aïcha BenTaieb; Ghassan Hamarneh

Automatically recognizing cancers from multi-gigapixel whole slide histopathology images is one of the challenges facing machine and deep learning based solutions for digital pathology. Currently, most automatic systems for histopathology are not scalable to large images and hence require a patch-based representation; a sub-optimal solution as it results in important additional computational costs but more importantly in the loss of contextual information. We present a novel attention-based model for predicting cancer from histopathology whole slide images. The proposed model is capable of attending to the most discriminative regions of an image by adaptively selecting a limited sequence of locations and only processing the selected areas of tissues. We demonstrate the utility of the proposed model on the slide-based prediction of macro and micro metastases in sentinel lymph nodes of breast cancer patients. We achieve competitive results with state-of-the-art convolutional networks while automatically identifying discriminative areas of tissues.
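The core computational trick, processing only a short sequence of glimpse locations instead of every patch of a multi-gigapixel slide, can be sketched with a much simpler stand-in policy. The paper learns where to look with a recurrent model; the hedged toy below just greedily picks the highest-scoring unvisited cell of a saliency grid:

```python
# Hedged sketch of the attention idea: attend to a limited sequence of
# locations rather than tiling the whole slide. Location selection here is
# a greedy argmax over a saliency grid with visited cells masked out -- a
# stand-in for the learned recurrent attention policy in the paper.

def select_glimpses(saliency, num_glimpses):
    """Return (row, col) of the top `num_glimpses` cells, greedily, no repeats."""
    visited = set()
    sequence = []
    for _ in range(num_glimpses):
        best, best_val = None, float("-inf")
        for r, row in enumerate(saliency):
            for c, v in enumerate(row):
                if (r, c) not in visited and v > best_val:
                    best, best_val = (r, c), v
        sequence.append(best)
        visited.add(best)
    return sequence

grid = [[0.1, 0.9], [0.4, 0.2]]
print(select_glimpses(grid, 2))  # [(0, 1), (1, 0)]
```

Only the patches at the selected locations would be run through the expensive feature extractor, which is the source of the scalability gain the abstract describes.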


International Workshop on Machine Learning for Medical Image Reconstruction | 2018

Deep Learning Based Image Reconstruction for Diffuse Optical Tomography

Hanene Ben Yedder; Aïcha BenTaieb; Majid Shokoufi; Amir Zahiremami; Farid Golnaraghi; Ghassan Hamarneh

Diffuse optical tomography (DOT) is a relatively new imaging modality that has demonstrated its clinical potential of probing tumors in a non-invasive and affordable way. Image reconstruction is an ill-posed challenging task because knowledge of the exact analytic inverse transform does not exist a priori, especially in the presence of sensor non-idealities and noise. Standard reconstruction approaches involve approximating the inverse function and often require expert parameters tuning to optimize reconstruction performance. In this work, we evaluate the use of a deep learning model to reconstruct images directly from their corresponding DOT projection data. The inverse problem is solved by training the model via training pairs created using physics-based simulation. Both quantitative and qualitative results indicate the superiority of the proposed network compared to an analytic technique.
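The training-data strategy described above, fitting a model on (measurement, image) pairs produced by a physics-based simulator because the analytic inverse is unknown, can be shown end-to-end on a toy scale. In this hedged sketch, the "physics" is a one-parameter linear forward operator and the "model" is a single least-squares coefficient, standing in for the simulator and deep network of the paper:

```python
import random

# Hedged sketch of learning an inverse from simulated pairs: generate
# ground-truth "images", push them through a known forward model to get
# "measurements", then fit a model mapping measurements back to images.

def simulate_pairs(forward, n, seed=0):
    """Create (measurement, image) training pairs from a forward simulator."""
    rng = random.Random(seed)
    images = [rng.uniform(0.0, 1.0) for _ in range(n)]
    return [(forward(x), x) for x in images]

def fit_linear_inverse(pairs):
    """Least-squares fit of x ~= w * y over simulated (y, x) pairs."""
    num = sum(x * y for y, x in pairs)
    den = sum(y * y for y, x in pairs)
    return num / den

pairs = simulate_pairs(lambda x: 3.0 * x, 100)  # toy, noise-free forward model
w = fit_linear_inverse(pairs)
print(round(w * 3.0 * 0.5, 6))  # recovers the original 0.5
```

In the noise-free linear case the fit recovers the exact inverse (w = 1/3 here); the paper's contribution is doing the analogous fit with a deep network when no closed-form inverse exists and the sensors are noisy.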


Computerized Medical Imaging and Graphics | 2018

Automatic localization of normal active organs in 3D PET scans

Saeedeh Afshari; Aïcha BenTaieb; Ghassan Hamarneh

PET imaging captures the metabolic activity of tissues and is commonly visually interpreted by clinicians for detecting cancer, assessing tumor progression, and evaluating response to treatment. To automate accomplishing these tasks, it is important to distinguish between normal active organs and activity due to abnormal tumor growth. In this paper, we propose a deep learning method to localize and detect normal active organs visible in a 3D PET scan field-of-view. Our method adapts the deep network architecture of YOLO to detect multiple organs in 2D slices and aggregates the results to produce semantically labeled 3D bounding boxes. We evaluate our method on 479 18F-FDG PET scans of 156 patients achieving an average organ detection precision of 75-98%, recall of 94-100%, average bounding box centroid localization error of less than 14 mm, wall localization error of less than 24 mm and a mean IOU of up to 72%.
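The aggregation step the abstract describes, merging per-slice 2D detections into semantically labeled 3D bounding boxes, can be sketched as a union of 2D extents across the slice axis. The detection tuple format below is an assumption for illustration:

```python
# Hedged sketch of the 2D-to-3D aggregation step: per-slice 2D detections
# for one organ are merged into a single 3D bounding box by taking the
# union of 2D extents, with the z-range given by the detected slices.
# Assumed detection format: (slice_index, x_min, y_min, x_max, y_max).

def merge_to_3d(detections):
    """Union of per-slice 2D boxes -> (x_min, y_min, z_min, x_max, y_max, z_max)."""
    x_min = min(d[1] for d in detections)
    y_min = min(d[2] for d in detections)
    z_min = min(d[0] for d in detections)
    x_max = max(d[3] for d in detections)
    y_max = max(d[4] for d in detections)
    z_max = max(d[0] for d in detections)
    return (x_min, y_min, z_min, x_max, y_max, z_max)

slices = [(10, 5, 5, 20, 18), (11, 4, 6, 21, 19), (12, 5, 5, 19, 17)]
print(merge_to_3d(slices))  # (4, 5, 10, 21, 19, 12)
```

Running this per organ label turns a stack of slice-wise detector outputs into the labeled 3D boxes that the reported centroid and wall localization errors are measured against.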


International Symposium on Biomedical Imaging | 2016

Multi-loss convolutional networks for gland analysis in microscopy

Aïcha BenTaieb; Jeremy Kawahara; Ghassan Hamarneh

Collaboration


Dive into Aïcha BenTaieb's collaborations.

Top Co-Authors

David Huntsman

University of British Columbia


Hector Li-Chang

University of British Columbia
