Featured Research

Image And Video Processing

COVID-Net CT-2: Enhanced Deep Neural Networks for Detection of COVID-19 from Chest CT Images Through Bigger, More Diverse Learning

The COVID-19 pandemic continues to rage on, with multiple waves causing substantial harm to health and economies around the world. Motivated by the use of CT imaging at clinical institutes around the world as an effective complementary screening method to RT-PCR testing, we introduced COVID-Net CT, a neural network tailored for detection of COVID-19 cases from chest CT images, as part of the open-source COVID-Net initiative. However, one potential limiting factor is the restricted quantity and diversity of its training data, given the single-nation patient cohort used. In this study, we introduce COVID-Net CT-2, enhanced deep neural networks for COVID-19 detection from chest CT images trained on the largest quantity and diversity of multinational patient cases in the research literature. We introduce two new CT benchmark datasets, the larger comprising a multinational cohort of 4,501 patients from at least 15 countries. We leverage explainability to investigate the decision-making behaviour of COVID-Net CT-2, with the results for select cases reviewed and reported on by two board-certified radiologists with over 10 and 30 years of experience, respectively. The COVID-Net CT-2 neural networks achieved accuracy, COVID-19 sensitivity, PPV, specificity, and NPV of 98.1%/96.2%/96.7%/99.0%/98.8% and 97.9%/95.7%/96.4%/98.9%/98.7%, respectively. Explainability-driven performance validation shows that the decision-making behaviour of COVID-Net CT-2 is consistent with radiologist interpretation, as it leverages correct, clinically relevant critical factors. The results are promising and suggest the strong potential of deep neural networks as an effective tool for computer-aided COVID-19 assessment. While not a production-ready solution, we hope the open-source, open-access release of COVID-Net CT-2 and the benchmark datasets will continue to enable researchers, clinicians, and citizen data scientists alike to build upon them.
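
As a worked example of how these reported figures relate to the underlying confusion matrix, the following minimal Python sketch computes accuracy, sensitivity, PPV, specificity, and NPV from hypothetical counts; the numbers are illustrative placeholders, not the paper's data.

# Minimal sketch: computing the reported screening metrics from a
# binary confusion matrix. The counts below are illustrative only.
def screening_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall for the COVID-19-positive class
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)           # negative predictive value
    return accuracy, sensitivity, ppv, specificity, npv

# Hypothetical counts for illustration:
print(screening_metrics(tp=480, fp=16, tn=1480, fn=19))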

Image And Video Processing

COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest CT Images

The coronavirus disease 2019 (COVID-19) pandemic continues to have a tremendous impact on patients and healthcare systems around the world. In the fight against this novel disease, there is a pressing need for rapid and effective screening tools to identify patients infected with COVID-19, and to this end CT imaging has been proposed as one of the key screening methods to complement RT-PCR testing, particularly for patients who undergo routine CT scans for non-COVID-19-related reasons, patients with worsening respiratory status or developing complications that require expedited care, and patients suspected to be COVID-19-positive whose RT-PCR test results are negative. Motivated by this, in this study we introduce COVIDNet-CT, a deep convolutional neural network architecture tailored for detection of COVID-19 cases from chest CT images via a machine-driven design exploration approach. Additionally, we introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation, comprising 104,009 images across 1,489 patient cases. Furthermore, in the interest of reliability and transparency, we leverage an explainability-driven performance validation strategy to investigate the decision-making behaviour of COVIDNet-CT and ensure that COVIDNet-CT makes predictions based on relevant indicators in CT images. Both COVIDNet-CT and the COVIDx-CT dataset are available to the general public in an open-source and open-access manner as part of the COVID-Net initiative. While COVIDNet-CT is not yet a production-ready screening solution, we hope that releasing the model and dataset will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
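
The abstract does not detail the explainability-driven validation itself; the following is a minimal Python sketch of one common approach in that spirit, a Grad-CAM-style heatmap, using a generic torchvision ResNet-18 stand-in (not the actual COVIDNet-CT architecture) and a random tensor in place of a real CT slice.

# Minimal Grad-CAM-style sketch: check which image regions drive a prediction.
import torch
import torchvision.models as models

model = models.resnet18().eval()   # stand-in backbone with random weights
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))

x = torch.randn(1, 3, 224, 224)    # stand-in for a preprocessed CT slice
score = model(x)[0].max()          # logit of the top predicted class
grads = torch.autograd.grad(score, feats["a"])[0]

w = grads.mean(dim=(2, 3), keepdim=True)       # channel importance weights
cam = torch.relu((w * feats["a"]).sum(dim=1))  # coarse class activation map
cam = cam / (cam.max() + 1e-8)                 # normalize to [0, 1]
print(cam.shape)  # (1, 7, 7) heatmap to upsample and overlay on the slice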

Image And Video Processing

Calibration based Minimalistic Multi-Exposure Digital Sensor Camera Robust Linear High Dynamic Range Enhancement Technique Demonstration

We demonstrate a calibration-target-optimized method for finding the Camera Response Function (CRF) of a digital image sensor based camera. The proposed method uses spatial averaging of pixel outputs in localized known target zones and histogram analysis for saturated-pixel detection. Using the proposed CRF generation method with an 87 dB High Dynamic Range (HDR) silicon CMOS image sensor camera viewing a 90 dB HDR calibration target, we experimentally produce a non-linear CRF with a limited 40 dB linear zone. Next, a 78 dB test target is deployed to test the camera with this measured CRF and its restricted 40 dB linear zone. By engaging the proposed minimal-exposure, weighting-free, multi-exposure imaging method with 2 images, we demonstrate highly robust recovery of the test target. In addition, the recovery of the 78 dB test target, which contains 16 patches of individual DR values, stays robust over a factor-of-20 change in test target illumination. In comparison, 5 leading prior-art multi-exposure HDR recovery algorithms produce non-robust recoveries of the test target using 16 images with 16 different exposure times, each consecutive image doubling the sensor dwell time. Further validation of the proposed HDR image recovery method is provided by two additional experiments: the first uses a 78 dB calibrated target combined with a natural indoor scene to form a hybrid target, and the second uses an uncalibrated natural indoor scene. The proposed technique applies to all digital image sensor based cameras having exposure time and illumination controls, and the proposed methods apply across sensor technologies, spectral bands, and imaging applications.
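
As a rough illustration of the two-exposure, weighting-free recovery idea, here is a minimal Python sketch under stated assumptions: a known inverse CRF maps pixel values back to relative irradiance (a linear placeholder below, not the measured CRF), and saturation is detected against a simple threshold.

# Minimal sketch of a weighting-free two-exposure HDR merge.
import numpy as np

def inverse_crf(pixels):
    # Placeholder: assume the camera is linear over its usable range.
    return pixels.astype(np.float64)

def merge_two_exposures(img_short, img_long, t_short, t_long, sat_level=250):
    # Irradiance estimates: inverse CRF output divided by exposure time.
    e_short = inverse_crf(img_short) / t_short
    e_long = inverse_crf(img_long) / t_long
    # Use the long exposure where it is unsaturated (better SNR),
    # and fall back to the short exposure where the long one clips.
    return np.where(img_long < sat_level, e_long, e_short)

rng = np.random.default_rng(0)
short = rng.integers(0, 256, (4, 4), dtype=np.uint8)
long_ = np.clip(short.astype(int) * 8, 0, 255).astype(np.uint8)
print(merge_two_exposures(short, long_, t_short=1e-3, t_long=8e-3))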

Image And Video Processing

Cell abundance aware deep learning for cell detection on highly imbalanced pathological data

Automated analysis of tissue sections allows a better understanding of disease biology and may reveal biomarkers that could guide prognosis or treatment selection. In digital pathology, less abundant cell types can be of biological significance, but their scarcity can result in biased and sub-optimal cell detection models. To minimize the effect of cell imbalance on cell detection, we propose a deep learning pipeline that considers the abundance of cell types during model training. Cell weight images were generated, which assign larger weights to less abundant cells, and these weights were used to regularize the Dice overlap loss function. The model was trained and evaluated on myeloma bone marrow trephine samples. Our model obtained a cell detection F1-score of 0.78, a 2% increase over baseline models, and it outperformed baseline models at detecting rare cell types. We found that scaling the deep learning loss function by the abundance of cells improves cell detection performance. Our results demonstrate the importance of incorporating domain knowledge into deep learning methods for pathological data with class imbalance.
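
The exact weighting scheme is not given in the abstract; the following minimal PyTorch sketch shows one plausible form of an abundance-weighted soft Dice loss, with a hypothetical weight map that up-weights the rarer class.

# Minimal sketch of an abundance-weighted soft Dice loss.
import torch

def weighted_dice_loss(pred, target, weights, eps=1e-6):
    # pred, target, weights: (N, C, H, W); pred holds per-class probabilities.
    inter = (weights * pred * target).sum(dim=(2, 3))
    denom = (weights * (pred + target)).sum(dim=(2, 3))
    dice = (2 * inter + eps) / (denom + eps)
    return 1 - dice.mean()

# Toy example: weight the assumed-rare class 10x more than the abundant one.
pred = torch.softmax(torch.randn(1, 2, 8, 8), dim=1)
target = torch.nn.functional.one_hot(
    torch.randint(0, 2, (1, 8, 8)), num_classes=2
).permute(0, 3, 1, 2).float()
weights = (target[:, 1:2] * 9.0 + 1.0).expand_as(pred)
print(weighted_dice_loss(pred, target, weights))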

Image And Video Processing

Cell image segmentation by Feature Random Enhancement Module

Extracting good features with an encoder is important for realizing semantic segmentation with high accuracy. Although the loss function is optimized during training of a deep neural network, layers far from the layer at which the loss is computed are difficult to train. Skip connections are effective for this problem, but some layers still remain far from the loss function. In this paper, we propose the Feature Random Enhancement Module, which randomly enhances features during training only. By emphasizing the features at layers far from the loss function, we can train those layers well and improve accuracy. In experiments, we evaluated the proposed module on two kinds of cell image datasets, and our module improved segmentation accuracy without increasing computational cost in the test phase.
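
The abstract does not specify the enhancement operation itself; the sketch below illustrates the general idea, with random per-channel amplification as an assumed stand-in, active during training and reduced to the identity at test time so inference cost is unchanged.

# Minimal sketch of a train-time-only random feature enhancement module.
import torch
import torch.nn as nn

class FeatureRandomEnhancement(nn.Module):
    def __init__(self, p=0.5, max_gain=2.0):
        super().__init__()
        self.p, self.max_gain = p, max_gain

    def forward(self, x):
        if not self.training:
            return x  # identity at test time: no extra inference cost
        n, c, _, _ = x.shape
        # Randomly pick channels and amplify them by a random gain.
        mask = (torch.rand(n, c, 1, 1, device=x.device) < self.p).float()
        gain = 1.0 + (self.max_gain - 1.0) * torch.rand(n, c, 1, 1, device=x.device)
        return x * (1.0 + mask * (gain - 1.0))

fre = FeatureRandomEnhancement()
feat = torch.randn(2, 16, 32, 32)
print(fre(feat).shape)                           # training: randomly enhanced
fre.eval(); print(torch.equal(fre(feat), feat))  # test: unchanged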

Image And Video Processing

Cephalogram Synthesis and Landmark Detection in Dental Cone-Beam CT Systems

Due to the lack of a standardized 3D cephalometric analytic methodology, 2D cephalograms synthesized from 3D cone-beam computed tomography (CBCT) volumes are widely used for cephalometric analysis in dental CBCT systems. However, compared with conventional X-ray-film-based cephalograms, such synthetic cephalograms lack image contrast and resolution. In addition, the radiation dose during the scan for 3D reconstruction poses potential health risks. In this work, we propose a sigmoid-based intensity transform that uses the nonlinear optical property of X-ray films to increase the image contrast of synthetic cephalograms. To improve image resolution, super-resolution deep learning techniques are investigated. For low-dose purposes, the pixel-to-pixel generative adversarial network (pix2pixGAN) is proposed for 2D cephalogram synthesis directly from two CBCT projections. For landmark detection in the synthetic cephalograms, an efficient automatic landmark detection method combining LeNet-5 and ResNet50 is proposed. Our experiments demonstrate the efficacy of pix2pixGAN in 2D cephalogram synthesis, achieving an average peak signal-to-noise ratio (PSNR) of 33.8 with reference to the cephalograms synthesized from 3D CBCT volumes. Pix2pixGAN also achieves the best performance in super resolution, with an average PSNR of 32.5 and no checkerboard or jagging artifacts. Our proposed automatic landmark detection method achieves an 86.7% successful detection rate within the clinically acceptable 2 mm range on the ISBI Test1 data, which is comparable to state-of-the-art methods. The method trained on conventional cephalograms can be directly applied to landmark detection in the synthetic cephalograms, achieving 93.0% and 80.7% successful detection rates within the 4 mm precision range for synthetic cephalograms from 3D volumes and 2D projections, respectively.
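
As a rough illustration of a sigmoid-based intensity transform, here is a minimal Python sketch; the midpoint and slope values are illustrative assumptions, not the parameters fitted in the paper.

# Minimal sketch: sigmoid intensity transform mimicking X-ray film response,
# boosting mid-range contrast while compressing the extremes.
import numpy as np

def sigmoid_contrast(image, midpoint=0.5, slope=10.0):
    # image: float array normalized to [0, 1]
    out = 1.0 / (1.0 + np.exp(-slope * (image - midpoint)))
    # Rescale so the output still spans [0, 1].
    lo = 1.0 / (1.0 + np.exp(slope * midpoint))
    hi = 1.0 / (1.0 + np.exp(-slope * (1.0 - midpoint)))
    return (out - lo) / (hi - lo)

ramp = np.linspace(0.0, 1.0, 5)
print(sigmoid_contrast(ramp))  # mid-tones stretched, extremes compressed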

Image And Video Processing

Characterization of a COVID-19 Dataset using Complex Networks and Image Processing

This paper explores the structure of the patterns behind a COVID-19 dataset. The dataset includes medical images of positive and negative cases. A sample of 100 images is chosen, 50 per class. A frequency histogram is calculated to obtain features using statistical measurements, alongside feature extraction using the Grey Level Co-occurrence Matrix (GLCM). From both feature sets, complex networks are built to analyze the adjacency matrices and check for the presence of patterns. Initial experiments provide evidence of hidden patterns in the dataset for each class, which are visible using the complex network representation.
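
For the GLCM step, a minimal Python sketch using scikit-image is shown below; the distances, angles, and texture properties chosen are common defaults, not necessarily the paper's configuration, and the input is a random stand-in image.

# Minimal sketch: GLCM texture features with scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in medical image

glcm = graycomatrix(image, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {p: graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")}
print(features)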

Image And Video Processing

CheXternal: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays and External Clinical Settings

Recent advances in training deep learning models have demonstrated the potential to provide accurate chest X-ray interpretation and increase access to radiology expertise. However, poor generalization due to data distribution shifts in clinical settings is a key barrier to implementation. In this study, we measured the diagnostic performance of 8 different chest X-ray models when applied to (1) smartphone photos of chest X-rays and (2) external datasets, without any fine-tuning. All models were developed by different groups, submitted to the CheXpert challenge, and re-applied to the test datasets without further tuning. We found that (1) on photos of chest X-rays, all 8 models experienced a statistically significant drop in task performance, but only 3 performed significantly worse than radiologists on average, and (2) on the external set, none of the models performed statistically significantly worse than radiologists, and 5 models performed statistically significantly better than radiologists. Our results demonstrate that some chest X-ray models, under clinically relevant distribution shifts, were comparable to radiologists while other models were not. Future work should investigate aspects of model training procedures and dataset collection that influence generalization in the presence of data distribution shifts.
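
The paper's exact statistical procedure is not described here; the following minimal Python sketch shows one common way such comparisons are made, a paired bootstrap confidence interval on the accuracy difference between two prediction sets, using synthetic placeholder data.

# Minimal sketch: paired bootstrap CI for a score difference on the same cases.
import numpy as np

def paired_bootstrap_diff(y_true, pred_a, pred_b, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_true)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        acc_a = np.mean(pred_a[idx] == y_true[idx])
        acc_b = np.mean(pred_b[idx] == y_true[idx])
        diffs[b] = acc_a - acc_b
    return np.percentile(diffs, [2.5, 97.5])  # 95% CI for the difference

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)                               # synthetic labels
model_on_photos = np.where(rng.random(500) < 0.85, y, 1 - y)
radiologist = np.where(rng.random(500) < 0.88, y, 1 - y)
lo, hi = paired_bootstrap_diff(y, model_on_photos, radiologist)
print(f"95% CI for accuracy difference: [{lo:.3f}, {hi:.3f}]")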

Image And Video Processing

Chest X-ray lung and heart segmentation based on minimal training sets

As the COVID-19 pandemic aggravated the excessive workload of doctors globally, the demand for computer-aided methods in medical image analysis increased even further. Such tools can result in more robust diagnostic pipelines which are less prone to human error. In our paper, we present a deep neural network which we refer to as Attention BCDU-Net, and apply it to the task of lung and heart segmentation from chest X-ray (CXR) images, a basic but arduous step in the diagnostic pipeline, for instance for the detection of cardiomegaly. We show that the fine-tuned model exceeds previous state-of-the-art results, reaching a 98.1±0.1% Dice score and a 95.2±0.1% IoU score on the dataset of the Japanese Society of Radiological Technology (JSRT). Besides that, we demonstrate the relative simplicity of the task by attaining surprisingly strong results with training sets of size 10 and 20: in terms of Dice score, 97.0±0.8% and 97.3±0.5%, respectively, and in terms of IoU score, 92.2±1.2% and 93.3±0.4%, respectively. To achieve these scores, we capitalize on the mixup augmentation technique, which yields a remarkable gain of over 4% in IoU score in the size-10 setup.
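
For readers unfamiliar with mixup, the following minimal Python sketch shows the core operation, convex combinations of image pairs and their targets; applying it to dense segmentation masks, and the alpha value used, are assumptions here rather than the paper's exact setup.

# Minimal sketch: mixup augmentation for image/mask pairs.
import numpy as np

def mixup_pair(img1, mask1, img2, mask2, alpha=0.2, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)               # mixing coefficient in [0, 1]
    img = lam * img1 + (1.0 - lam) * img2
    mask = lam * mask1 + (1.0 - lam) * mask2   # soft target mask
    return img, mask, lam

rng = np.random.default_rng(0)
x1, x2 = rng.random((2, 256, 256))                           # stand-in CXRs
m1, m2 = (rng.random((2, 256, 256)) > 0.5).astype(np.float64)  # stand-in masks
xm, mm, lam = mixup_pair(x1, m1, x2, m2, rng=rng)
print(lam, xm.shape, mm.min(), mm.max())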

Image And Video Processing

Chronological age estimation of lateral cephalometric radiographs with deep learning

Traditional manual age estimation is crucial but laborious work based on many kinds of X-ray images. Some current studies have shown that lateral cephalometric (LC) images can be used to estimate age. However, these methods rely on manually measuring image features and making age estimates based on experience or scoring. Therefore, these methods are time-consuming and labor-intensive, and their results are affected by subjective opinion. In this work, we propose a saliency-map-enhanced age estimation method which can automatically estimate age from LC images. Meanwhile, it can also show the importance of each region in the image for age estimation, which increases the method's interpretability. Our method was tested on 3,014 LC images from subjects aged 4 to 40 years. The MAE of the experimental result is 1.250, which is lower than the state-of-the-art benchmark because the method performs significantly better in age groups with less data. Besides, our model is trained on each area with a high contribution to age estimation in LC images, so the effect of these different areas on the age estimation task was verified. Consequently, we conclude that the proposed saliency-map-enhanced method works well for chronological age estimation from lateral cephalometric radiographs, especially when the amount of data is small. Moreover, compared with traditional deep learning, our method is also interpretable.
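
As a rough illustration of a gradient-based saliency map for age regression, here is a minimal PyTorch sketch; the tiny CNN is a stand-in, not the paper's architecture, and a random tensor replaces a real LC image.

# Minimal sketch: input-gradient saliency for an age-regression network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

x = torch.randn(1, 1, 128, 128, requires_grad=True)  # stand-in LC image
age = model(x).squeeze()   # predicted age (scalar)
age.backward()

saliency = x.grad.abs().squeeze()            # per-pixel contribution
saliency = saliency / (saliency.max() + 1e-8)
print(age.item(), saliency.shape)            # heatmap to overlay on the image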
