Featured Research

Image And Video Processing

Deep-Learning Driven Noise Reduction for Reduced Flux Computed Tomography

Deep neural networks have received considerable attention in clinical imaging, particularly with respect to the reduction of radiation risk. Lowering the radiation dose by reducing the photon flux inevitably degrades the quality of the scanned image. Researchers have therefore sought to exploit deep convolutional neural networks (DCNNs) to map low-dose, low-quality images to higher-dose, higher-quality images, thereby minimizing the associated radiation hazard. Computed tomography (CT) measurements of geomaterials, by contrast, are not limited by radiation dose. Unlike the human body, however, geomaterials may contain high-density constituents that strongly attenuate the X-rays, so higher-dose scans are required to obtain acceptable image quality. The problem of prolonged acquisition times is particularly severe for micro-CT-based scanning technologies: depending on the sample size and exposure-time settings, a single scan may require several hours to complete. This is of particular concern when phenomena with an exponential temperature dependency are to be elucidated, as such processes may occur too quickly to be adequately captured by CT scanning. To address these issues, we apply DCNNs to improve the quality of rock CT images while simultaneously reducing exposure times by more than 60%. We highlight current results based on micro-CT derived datasets and apply transfer learning to improve DCNN results without increasing training time. The approach is applicable to any computed tomography technology. Furthermore, we contrast the performance of DCNNs trained by minimizing different loss functions, such as mean squared error and the structural similarity index.
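
A minimal sketch of the loss-function contrast the abstract mentions: the same small denoising CNN is trained against either a mean-squared-error loss or a simplified SSIM-based loss. The network shape, uniform-window SSIM, and random patches are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2, win=7):
    # Local statistics via a uniform window (a simplification of the usual
    # Gaussian-weighted SSIM).
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim.mean()            # lower is better, like MSE

denoiser = nn.Sequential(               # toy stand-in for the paper's DCNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))

low_dose = torch.rand(4, 1, 64, 64)     # placeholder low-flux patches
high_dose = torch.rand(4, 1, 64, 64)    # placeholder high-flux targets
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for loss_fn in (F.mse_loss, ssim_loss): # contrast the two training objectives
    loss = loss_fn(denoiser(low_dose), high_dose)
    opt.zero_grad(); loss.backward(); opt.step()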

Image And Video Processing

DeepCervix: A Deep Learning-based Framework for the Classification of Cervical Cells Using Hybrid Deep Feature Fusion Techniques

Cervical cancer, one of the most common fatal cancers among women, can be prevented by regular screening that detects precancerous lesions at an early stage so they can be treated. The Pap smear test is a widely performed screening technique for early detection of cervical cancer, but this manual screening method suffers from a high false-positive rate because of human error. To improve on manual screening, machine learning (ML) and deep learning (DL) based computer-aided diagnostic (CAD) systems have been investigated widely to classify cervical Pap cells. Most existing studies require pre-segmented images to obtain good classification results, yet accurate cervical cell segmentation is challenging because of cell clustering. Some studies rely on handcrafted features, which cannot guarantee optimal performance at the classification stage. Moreover, DL models perform poorly on multiclass classification when the class distribution is uneven, as is common in cervical cell datasets. This investigation addresses those limitations by proposing DeepCervix, a hybrid deep feature fusion (HDFF) technique based on DL to classify cervical cells accurately. Our proposed method uses various DL models to capture richer information and enhance classification performance. The HDFF method is tested on the publicly available SIPAKMED dataset and compared against base DL models and the LF method. For the SIPAKMED dataset, we obtain state-of-the-art classification accuracies of 99.85%, 99.38%, and 99.14% for 2-class, 3-class, and 5-class classification, respectively. Our method is also tested on the Herlev dataset and achieves an accuracy of 98.32% for binary and 90.32% for 7-class classification.
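
A minimal sketch of the hybrid deep feature fusion idea: features from several backbones are concatenated and fed to a single classifier head. The backbones (ResNet-18 and VGG-11 from torchvision), feature sizes, and class count are illustrative assumptions, not necessarily the models used in DeepCervix.

import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        resnet = models.resnet18(weights=None)
        vgg = models.vgg11(weights=None)
        self.backbone_a = nn.Sequential(*list(resnet.children())[:-1])  # 512-d features
        self.backbone_b = vgg.features                                   # conv feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512 + 512, n_classes)

    def forward(self, x):
        fa = torch.flatten(self.backbone_a(x), 1)              # features from model A
        fb = torch.flatten(self.pool(self.backbone_b(x)), 1)   # features from model B
        fused = torch.cat([fa, fb], dim=1)                     # hybrid feature fusion
        return self.head(fused)

logits = FusionClassifier()(torch.rand(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 5])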

Image And Video Processing

DeepRegularizer: Rapid Resolution Enhancement of Tomographic Imaging using Deep Learning

Optical diffraction tomography measures the three-dimensional refractive index map of a specimen and visualizes biochemical phenomena at the nanoscale in a non-destructive manner. One major drawback of optical diffraction tomography is poor axial resolution due to limited access to the three-dimensional optical transfer function. This missing-cone problem has been addressed with regularization algorithms that use a priori information, such as non-negativity and sample smoothness. However, the iterative nature of these algorithms and their parameter dependency make real-time visualization impossible. In this article, we propose and experimentally demonstrate a deep neural network, termed DeepRegularizer, that rapidly improves the resolution of a three-dimensional refractive index map. Trained on paired data (a raw refractive index tomogram and a resolution-enhanced tomogram obtained via the iterative total variation algorithm), the three-dimensional U-Net-based convolutional neural network learns a transformation between the two tomogram domains. The feasibility and generalizability of our network are demonstrated using bacterial cells and a human leukaemic cell line, and by validating the model across different samples. DeepRegularizer offers more than an order of magnitude faster regularization than the conventional iterative method. We envision that the proposed data-driven approach can bypass the high time complexity of image reconstruction in other imaging modalities as well.
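
A minimal sketch of the training setup the abstract describes: a network learns to map a raw refractive-index tomogram to its TV-regularized counterpart. A tiny 3D CNN stands in for the paper's 3D U-Net, and the random volumes are placeholders for real (raw, TV-regularized) tomogram pairs.

import torch
import torch.nn as nn

net = nn.Sequential(                       # stand-in for a 3D U-Net
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1))

raw = torch.rand(2, 1, 32, 32, 32)         # raw RI tomograms (placeholder)
tv_target = torch.rand(2, 1, 32, 32, 32)   # iterative-TV outputs (placeholder)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3):                          # a few illustrative training steps
    loss = nn.functional.l1_loss(net(raw), tv_target)
    opt.zero_grad(); loss.backward(); opt.step()

# At inference time, net(raw) replaces the slow iterative TV regularization.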

Image And Video Processing

Democratizing Artificial Intelligence in Healthcare: A Study of Model Development Across Two Institutions Incorporating Transfer Learning

Training deep learning models typically requires extensive data, yet large, well-curated medical-image datasets for developing artificial intelligence (AI) models in radiology are not readily available. Recognizing the potential for transfer learning (TL) to allow a fully trained model from one institution to be fine-tuned by another institution using a much smaller local dataset, this report describes the challenges, methodology, and benefits of TL in the context of developing an AI model for a basic use case: segmentation of the left ventricular myocardium (LVM) on images from 4-dimensional coronary computed tomography angiography. Ultimately, comparisons of LVM segmentation predicted by a model trained locally from random initialization versus one whose training was enhanced by TL showed that a use-case model initiated by TL can be developed from sparse labels with acceptable performance. This process reduces the time required to build a new model in the clinical environment at a different institution.
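
A minimal sketch of the transfer-learning step: weights trained at one institution are loaded, the early layers are frozen, and the rest is fine-tuned on a small local dataset. The generic torchvision FCN, the hypothetical checkpoint path, and the freezing choice are illustrative assumptions, not the authors' LVM model.

import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)  # background vs. LVM
# state = torch.load("institution_a_weights.pt")   # hypothetical checkpoint from institution A
# model.load_state_dict(state)

for p in model.backbone.parameters():               # freeze the shared encoder
    p.requires_grad = False

opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

images = torch.rand(2, 3, 128, 128)                  # small local dataset (placeholder)
masks = torch.randint(0, 2, (2, 128, 128))           # sparse local labels (placeholder)
loss = nn.functional.cross_entropy(model(images)["out"], masks)
opt.zero_grad(); loss.backward(); opt.step()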

Image And Video Processing

Denoising convolutional neural networks for photoacoustic microscopy

Photoacoustic imaging is an emerging imaging technology that combines the high resolution and rich contrast of optical imaging with the high penetration depth of acoustic imaging. It has been widely used in biomedical fields such as brain imaging and tumor detection. The signal-to-noise ratio (SNR) of photoacoustic images is generally low due to limited laser pulse energy, electromagnetic interference from the external environment, and system noise. To address this low SNR, we apply a feedforward denoising convolutional neural network to post-process the acquired images and thereby obtain higher-SNR images of improved quality. The network is implemented in Python, with external libraries managed through Anaconda and development carried out on the PyCharm platform. We first processed and segmented a training set of 400 images and used it for network training, and finally tested the network on a series of cerebrovascular photoacoustic microscopy images. The results show that the peak signal-to-noise ratio (PSNR) of the images increases significantly after denoising. These experimental results verify that the feedforward denoising convolutional neural network can effectively improve the quality of photoacoustic microscopy images, providing a good foundation for subsequent biomedical research.
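
A minimal sketch of the PSNR comparison the abstract uses to quantify improvement. The "denoiser" here is a simple 3x3 mean filter standing in for the paper's feedforward denoising CNN, and the synthetic image and noise level are illustrative assumptions.

import numpy as np

def psnr(reference, test, peak=1.0):
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = np.clip(rng.random((128, 128)), 0, 1)                     # placeholder PAM image
noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0, 1)

# Stand-in denoiser: 3x3 mean filter (the paper uses a DnCNN-style network instead).
pad = np.pad(noisy, 1, mode="edge")
denoised = sum(pad[i:i+128, j:j+128] for i in range(3) for j in range(3)) / 9.0

print("PSNR noisy   :", round(psnr(clean, noisy), 2))
print("PSNR denoised:", round(psnr(clean, denoised), 2))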

Image And Video Processing

DenseNet for Breast Tumor Classification in Mammographic Images

Breast cancer is the most common invasive cancer in women and the second leading cause of death among them. Breast cancer screening is an efficient method for detecting indeterminate breast lesions early; the common screening approaches are tomosynthesis and mammography. However, traditional manual diagnosis imposes an intense workload on pathologists, who are prone to diagnostic errors. The aim of this study is therefore to build a deep convolutional neural network method for automatic detection, segmentation, and classification of breast lesions in mammography images. A deep learning-based Mask R-CNN (RoIAlign) method was developed for feature selection and extraction, and classification was carried out with a DenseNet architecture. Finally, the precision and accuracy of the model are evaluated using the confusion matrix and the AUC curve. In summary, the findings of this study may help improve diagnostic accuracy and efficiency in automatic tumor localization through medical image classification.
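
A minimal sketch of the two-stage pipeline the abstract describes: an instance-segmentation model proposes lesion regions, and a DenseNet classifies the cropped regions. torchvision's Mask R-CNN and DenseNet-121 are generic stand-ins; the score threshold, image size, and class counts are illustrative assumptions.

import torch
from torchvision.models import densenet121
from torchvision.models.detection import maskrcnn_resnet50_fpn

detector = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                 num_classes=2).eval()            # lesion vs. background
classifier = densenet121(weights=None, num_classes=2).eval()      # benign vs. malignant

mammogram = torch.rand(3, 512, 512)                    # placeholder image in [0, 1]
with torch.no_grad():
    det = detector([mammogram])[0]                     # dict with boxes, scores, masks
    for box, score in zip(det["boxes"], det["scores"]):
        if score < 0.5:
            continue                                   # keep confident lesion proposals only
        x0, y0, x1, y1 = box.int().tolist()
        if x1 <= x0 or y1 <= y0:
            continue                                   # skip degenerate boxes
        crop = mammogram[:, y0:y1, x0:x1].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        print(classifier(crop).softmax(dim=1))         # class probabilities for the lesion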

Image And Video Processing

Densely Connected Recurrent Residual (Dense R2UNet) Convolutional Neural Network for Segmentation of Lung CT Images

Deep learning networks have established themselves as providing state-of-the-art performance for semantic segmentation. These techniques are widely applied to medical detection, segmentation, and classification, and U-Net-based architectures have become particularly popular for such applications. In this paper we present the Dense Recurrent Residual Convolutional Neural Network (Dense R2U CNN), a synthesis of recurrent CNNs, residual networks, and densely connected convolutional networks built on the U-Net model architecture. The residual units help train deeper networks, while the dense recurrent layers enhance the feature propagation needed for segmentation. The proposed model, tested on the benchmark Lung Lesion dataset, showed better performance on segmentation tasks than its equivalent models.
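
A minimal sketch of a recurrent residual convolutional unit, the kind of building block the abstract combines with dense connectivity in a U-Net layout. The number of recurrent steps, channel counts, and block depth are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    # Applies the same convolution t times, re-injecting the block input each step.
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, x):
        h = self.conv(x)
        for _ in range(self.t):
            h = self.conv(x + h)        # recurrent refinement of the features
        return h

class R2Block(nn.Module):
    # Two recurrent conv units wrapped in a residual (skip) connection.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1)
        self.body = nn.Sequential(RecurrentConv(out_ch), RecurrentConv(out_ch))

    def forward(self, x):
        x = self.proj(x)
        return x + self.body(x)          # residual unit eases training of deeper networks

out = R2Block(1, 32)(torch.rand(1, 1, 64, 64))
print(out.shape)                         # torch.Size([1, 32, 64, 64])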

Image And Video Processing

Density Compensated Unrolled Networks for Non-Cartesian MRI Reconstruction

Deep neural networks have recently been thoroughly investigated as a powerful tool for MRI reconstruction. There is a lack of research, however, regarding their use in a specific setting of MRI, namely non-Cartesian acquisitions. In this work, we introduce a novel kind of deep neural network to tackle this problem: density compensated unrolled neural networks, which rely on density compensation to correct for the uneven weighting of k-space. We assess their efficiency on the publicly available fastMRI dataset and perform a small ablation study. Our results show that the density-compensated unrolled neural networks outperform the different baselines and that all parts of the design are needed. We also open-source our code, in particular a non-uniform fast Fourier transform for TensorFlow.
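
A minimal sketch of density compensation for a radial (non-Cartesian) trajectory: samples near the k-space center are over-represented, so each measurement is weighted roughly in proportion to its distance from the center before the adjoint transform. The analytic ramp weights below are a common approximation, not necessarily the scheme used inside the paper's unrolled networks.

import numpy as np

n_spokes, n_readout = 64, 128
angles = np.pi * np.arange(n_spokes) / n_spokes
radii = np.linspace(-0.5, 0.5, n_readout)                  # normalized |k| along one spoke

# k-space coordinates of all samples, flattened to shape (n_spokes * n_readout,)
kx = np.outer(np.cos(angles), radii).ravel()
ky = np.outer(np.sin(angles), radii).ravel()

# Ramp density compensation: weight ~ |k|, floored so the DC sample is not discarded.
dcomp = np.maximum(np.hypot(kx, ky), 1.0 / n_readout)
dcomp /= dcomp.sum()

kdata = np.random.randn(kx.size) + 1j * np.random.randn(kx.size)   # placeholder samples
compensated = dcomp * kdata   # this weighted data feeds the adjoint NUFFT in each unrolled iteration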

Image And Video Processing

Depth Range Reduction for 3D Range Geometry Compression

Three-dimensional (3D) shape measurement devices and techniques are being rapidly adopted across a variety of industries and applications. As acquiring 3D range data becomes faster and more accurate, it becomes more challenging to efficiently store, transmit, or stream these data. One prevailing approach to compressing 3D range data is to encode it within the color channels of regular 2D images. This paper presents a novel method for reducing the depth range of a 3D geometry such that it can be stored within a 2D image using lower encoding frequencies (or a smaller number of encoding periods). This allows smaller compressed file sizes to be achieved without a proportional increase in reconstruction error. Further, as the proposed method occurs prior to encoding, it is readily compatible with a variety of existing image-based 3D range geometry compression methods.
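
A minimal sketch of image-based range encoding: a depth map is rescaled to a reduced range and encoded into the color channels of a 2D image using a small number of sinusoidal periods. The sine/cosine two-channel encoding, the coarse third channel, and the period count are illustrative assumptions; the paper's specific depth-range reduction and encoding differ in detail.

import numpy as np

rng = np.random.default_rng(1)
depth = 800.0 + 200.0 * rng.random((240, 320))            # placeholder depth map (mm)

# Step 1: reduce the depth range so fewer encoding periods are needed.
z = (depth - depth.min()) / (depth.max() - depth.min())   # normalized to [0, 1]

# Step 2: encode with a few fringe periods into the image channels.
periods = 4                                               # fewer periods -> lower encoding frequency
phase = 2.0 * np.pi * periods * z
img = np.stack([
    0.5 + 0.5 * np.sin(phase),                            # e.g. red channel
    0.5 + 0.5 * np.cos(phase),                            # e.g. green channel
    z,                                                     # coarse channel used to unwrap the phase
], axis=-1)

# Decoding: recover the wrapped phase, unwrap it using the coarse channel,
# then invert the normalization to recover depth.
wrapped = np.arctan2(img[..., 0] - 0.5, img[..., 1] - 0.5)
order = np.round((2 * np.pi * periods * img[..., 2] - wrapped) / (2 * np.pi))
z_rec = (wrapped + 2 * np.pi * order) / (2 * np.pi * periods)
print(np.abs(z - z_rec).max())                            # ~0 before any image quantization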

Image And Video Processing

Depth extraction from a single compressive hologram

We propose a novel method that records a single compressive hologram in a short time and extracts the depth of a scene from that hologram using a stereo disparity technique. The method is verified with numerical simulations, although nothing prevents adapting it to an optical experiment. In the simulations, a computer-generated hologram is first sampled with random binary patterns, and the measurements are used in a recovery algorithm to form a compressive hologram. The compressive hologram is then divided into two parts (two apertures), and these parts are reconstructed separately to form a stereo image pair. The pair is finally used in a stereo disparity method to extract a depth map. The depth maps of compressive holograms with sampling rates of 2%, 25%, and 50% are compared with the depth map extracted from the original hologram, to which compressed sensing is not applied. It is demonstrated that the depth profiles obtained from the compressive holograms are in very good agreement with the depth profile obtained from the original hologram despite the data reduction.
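
A minimal sketch of the final step the abstract describes: once two sub-aperture reconstructions form a stereo pair, depth follows from block-matching disparity and the standard triangulation relation depth = focal * baseline / disparity. The synthetic shifted pair, block size, and focal/baseline values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
left = rng.random((64, 96))
true_shift = 5
right = np.roll(left, true_shift, axis=1)     # right view: left view shifted by 5 px (border wrap ignored)

def block_disparity(left, right, block=8, max_d=10):
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            y, x = bi * block, bj * block
            ref = left[y:y + block, x:x + block]
            errs = []
            for d in range(max_d + 1):        # search along the epipolar line
                xr = min(x + d, w - block)
                errs.append(np.sum((ref - right[y:y + block, xr:xr + block]) ** 2))
            disp[bi, bj] = int(np.argmin(errs))
    return disp

disparity = block_disparity(left, right)
focal_px, baseline_mm = 1000.0, 4.0            # hypothetical camera geometry
depth_mm = focal_px * baseline_mm / np.maximum(disparity, 1e-6)
print(np.median(disparity))                    # ~5 for this synthetic pair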

