Featured Research

Image And Video Processing

Deep Convolutional Neural Network based Classification of Alzheimer's Disease using MRI data

Alzheimer's disease (AD) is a progressive and incurable neurodegenerative disease that destroys brain cells and causes memory loss. Early detection can prevent further damage to brain cells and hence avoid permanent memory loss. In the past few years, various automatic tools and techniques have been proposed for the diagnosis of AD. Several methods focus on fast, accurate, and early detection of the disease to minimize the damage to patients' mental health. Although machine learning and deep learning techniques have significantly improved medical imaging systems for AD, providing diagnostic performance close to the human level, the main problem in multi-class classification is the presence of highly correlated features in the brain structure. In this paper, we propose a smart and accurate way of diagnosing AD based on a two-dimensional deep convolutional neural network (2D-DCNN) using an imbalanced three-dimensional MRI dataset. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) magnetic resonance imaging (MRI) dataset confirm that the proposed 2D-DCNN model is superior in terms of accuracy, efficiency, and robustness. The model classifies MRI scans into three categories: AD, mild cognitive impairment, and normal control; it achieves 99.89% classification accuracy with imbalanced classes. The proposed model exhibits a noticeable improvement in accuracy compared to state-of-the-art methods.
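
The abstract does not specify the network configuration, so the following is only a minimal PyTorch sketch of a slice-level 2D CNN with a class-weighted loss for the imbalanced AD/MCI/NC setting; the layer sizes and class weights are illustrative assumptions, not the authors' 2D-DCNN.

```python
import torch
import torch.nn as nn

class Slice2DCNN(nn.Module):
    """Illustrative 2D CNN for 3-class MRI slice classification (AD / MCI / NC)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):              # x: (batch, 1, H, W) MRI slices
        return self.classifier(self.features(x).flatten(1))

# Class-weighted cross-entropy is one common way to handle imbalanced classes.
class_weights = torch.tensor([1.0, 2.5, 1.2])    # hypothetical weights
criterion = nn.CrossEntropyLoss(weight=class_weights)
model = Slice2DCNN()
logits = model(torch.randn(4, 1, 176, 176))      # dummy batch of slices
loss = criterion(logits, torch.tensor([0, 1, 2, 0]))
```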

Image And Video Processing

Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution

Recent deep learning based single image super-resolution (SISR) methods mostly train their models in a clean data domain where the low-resolution (LR) and high-resolution (HR) images come from noise-free settings (the same domain) due to the bicubic down-sampling assumption. However, such a degradation process does not hold in real-world settings. We consider a deep cyclic network structure to maintain the domain consistency between the LR and HR data distributions, inspired by the recent success of CycleGAN in image-to-image translation applications. We propose the Super-Resolution Residual Cyclic Generative Adversarial Network (SRResCycGAN), trained with a generative adversarial network (GAN) framework for LR to HR domain translation in an end-to-end manner. We demonstrate in quantitative and qualitative experiments that the proposed approach generalizes well to real image super-resolution and is easy to deploy on mobile/embedded devices. In addition, our SR results on the AIM 2020 Real Image SR Challenge datasets demonstrate that the proposed SR approach achieves results comparable to other state-of-the-art methods.
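
As a rough illustration of the cyclic consistency idea (not the authors' SRResCycGAN networks), the sketch below pairs a toy LR-to-HR generator with a toy HR-to-LR generator and computes the cycle loss that ties the two domains together; the adversarial and content terms are only indicated in comments.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two generators; the actual SRResCycGAN networks are far deeper.
g_sr   = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1),
                       nn.Upsample(scale_factor=4, mode='bicubic'))     # LR -> HR
g_down = nn.Sequential(nn.Upsample(scale_factor=0.25, mode='bicubic'),
                       nn.Conv2d(3, 3, 3, padding=1))                   # HR -> LR

l1 = nn.L1Loss()
lr = torch.rand(1, 3, 32, 32)           # noisy real-world LR input

sr = g_sr(lr)                            # LR -> HR translation
cycle_loss = l1(g_down(sr), lr)          # cyclic consistency back to the LR domain
# The full generator objective would add adversarial and content terms, e.g.
# loss_G = adv_loss(d_hr(sr)) + 10.0 * cycle_loss + content_loss(...)  (weights illustrative)
```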

Image And Video Processing

Deep Equilibrium Architectures for Inverse Problems in Imaging

Recent efforts on solving inverse problems in imaging via deep neural networks use architectures inspired by a fixed number of iterations of an optimization method. The number of iterations is typically quite small due to difficulties in training networks corresponding to more iterations; the resulting solvers cannot be run for more iterations at test time without incurring significant errors. This paper describes an alternative approach corresponding to an infinite number of iterations, yielding a consistent improvement in reconstruction accuracy over state-of-the-art alternatives, while allowing the computational budget to be selected at test time to optimize context-dependent trade-offs between accuracy and computation. The proposed approach leverages ideas from Deep Equilibrium Models, where the fixed-point iteration is constructed to incorporate a known forward model and insights from classical optimization-based reconstruction methods.
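
To make the equilibrium idea concrete, here is a minimal sketch of a fixed-point map that interleaves a data-consistency gradient step with a small learned regularizer, plus a test-time solver whose iteration budget can be chosen freely. The operator, network, and step size are illustrative assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn

class FixedPointMap(nn.Module):
    """Hypothetical fixed-point map f(x, y): gradient step on the data-fit term
    followed by a small learned refinement network."""
    def __init__(self, A, At, step=0.1):
        super().__init__()
        self.A, self.At, self.step = A, At, step
        self.R = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x, y):
        grad = self.At(self.A(x) - y)            # gradient of 0.5 * ||A x - y||^2
        return self.R(x - self.step * grad)      # regularized gradient step

def solve_equilibrium(f, y, x0, iters=50, tol=1e-4):
    """Iterate x <- f(x, y) until the update is small; the iteration count
    (compute budget) is a free choice at test time."""
    x = x0
    for _ in range(iters):
        x_new = f(x, y)
        if (x_new - x).norm() / (x.norm() + 1e-8) < tol:
            return x_new
        x = x_new
    return x

# Example with a trivial identity forward model (purely illustrative).
A = At = lambda v: v
f = FixedPointMap(A, At)
y = torch.rand(1, 1, 64, 64)
x_hat = solve_equilibrium(f, y, torch.zeros_like(y))
```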

Image And Video Processing

Deep EvoGraphNet Architecture For Time-Dependent Brain Graph Data Synthesis From a Single Timepoint

Learning how to predict brain connectome (i.e., graph) development and aging is of paramount importance for charting the future of the within-disorder and cross-disorder landscape of brain dysconnectivity evolution. Indeed, predicting longitudinal (i.e., time-dependent) brain dysconnectivity as it emerges and evolves over time from a single timepoint can help design personalized treatments for disordered patients at a very early stage. Despite its significance, evolution models of the brain graph are largely overlooked in the literature. Here, we propose EvoGraphNet, the first end-to-end geometric deep learning-powered graph-generative adversarial network (gGAN) for predicting time-dependent brain graph evolution from a single timepoint. Our EvoGraphNet architecture cascades a set of time-dependent gGANs, where each gGAN communicates its predicted brain graphs at a particular timepoint to train the next gGAN in the cascade at the follow-up timepoint. Therefore, we obtain each next predicted timepoint by setting the output of each generator as the input of its successor, which enables us to predict a given number of timepoints using only a single timepoint in an end-to-end fashion. At each timepoint, to better align the distribution of the predicted brain graphs with that of the ground-truth graphs, we further integrate an auxiliary Kullback-Leibler divergence loss function. To capture the time dependency between two consecutive observations, we impose an l1 loss to minimize the sparse distance between two serialized brain graphs. A series of benchmarks against variants and ablated versions of our EvoGraphNet showed that we can achieve the lowest brain graph evolution prediction error using a single baseline timepoint. Our EvoGraphNet code is available at this http URL.
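
A minimal sketch of the two auxiliary losses described above, applied to toy 35 x 35 connectivity matrices; the KL alignment over normalized edge weights, the loss weights, and the matrix size are assumptions made for illustration, and the gGAN generators and discriminators themselves are omitted.

```python
import torch
import torch.nn.functional as F

def kl_alignment_loss(pred_graph, gt_graph, eps=1e-8):
    """KL divergence between normalized edge-weight distributions of the
    predicted and ground-truth brain graphs at one timepoint."""
    p = F.softmax(gt_graph.flatten(), dim=0)
    q = F.softmax(pred_graph.flatten(), dim=0)
    return torch.sum(p * (torch.log(p + eps) - torch.log(q + eps)))

def sparse_consistency_loss(pred_t, pred_t_minus_1):
    """l1 distance between two consecutive (serialized) brain graphs."""
    return F.l1_loss(pred_t, pred_t_minus_1)

gt_t     = torch.rand(35, 35)     # ground-truth connectome at timepoint t
pred_t   = torch.rand(35, 35)     # output of the t-th generator in the cascade
pred_tm1 = torch.rand(35, 35)     # output of the previous generator (its input)

loss_t = kl_alignment_loss(pred_t, gt_t) + 0.5 * sparse_consistency_loss(pred_t, pred_tm1)
# The full objective would add the adversarial term of the t-th discriminator.
```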

Image And Video Processing

Deep Gaussian Denoiser Epistemic Uncertainty and Decoupled Dual-Attention Fusion

Following the performance breakthrough of denoising networks, improvements have come chiefly through novel architecture designs and increased depth. While novel denoising networks have been designed for real images coming from different distributions, or for specific applications, comparatively little improvement has been achieved on Gaussian denoising. The denoising solutions suffer from epistemic uncertainty that can limit further advancements. This uncertainty is traditionally mitigated through different ensemble approaches. However, such ensembles are prohibitively costly with deep networks, which are already large in size. Our work focuses on pushing the performance limits of state-of-the-art methods on Gaussian denoising. We propose a model-agnostic approach for reducing epistemic uncertainty while using only a single pretrained network. We achieve this by tapping into the epistemic uncertainty through augmented and frequency-manipulated images to obtain denoised images with varying error. We propose an ensemble method with two decoupled attention paths, over the pixel domain and over that of our different manipulations, to learn the final fusion. Our results significantly improve over the state-of-the-art baselines and across varying noise levels.
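
The following is only a conceptual sketch of how a single pretrained denoiser can be tapped through test-time manipulations to produce an ensemble of denoised candidates; the geometric augmentations shown are examples, and the learned dual-attention fusion of the paper is replaced by a plain average as a placeholder.

```python
import torch

def augmented_denoised_stack(denoiser, noisy):
    """Run one pretrained denoiser on geometrically augmented copies of the
    input, undo each transform, and stack the results. Frequency-manipulated
    variants could be appended to the list analogously."""
    outs = []
    for k in range(4):                                    # 0/90/180/270 degree rotations
        aug = torch.rot90(noisy, k, dims=(-2, -1))
        den = denoiser(aug)
        outs.append(torch.rot90(den, -k, dims=(-2, -1)))  # map back to the original frame
    return torch.stack(outs, dim=0)                       # (num_aug, B, C, H, W)

# A learned fusion module would attend over this stack (per pixel and per
# manipulation); plain averaging below is only a placeholder.
denoiser = torch.nn.Identity()                            # stand-in for a pretrained network
noisy = torch.rand(1, 1, 64, 64)
fused = augmented_denoised_stack(denoiser, noisy).mean(dim=0)
```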

Image And Video Processing

Deep Generative SToRM model for dynamic imaging

We introduce a novel generative smoothness regularization on manifolds (SToRM) model for the recovery of dynamic image data from highly undersampled measurements. The proposed generative framework represents the image time series as a smooth non-linear function of low-dimensional latent vectors that capture the cardiac and respiratory phases. The non-linear function is represented using a deep convolutional neural network (CNN). Unlike popular CNN approaches that require extensive fully-sampled training data, which is not available in this setting, the parameters of the CNN generator as well as the latent vectors are jointly estimated from the undersampled measurements using stochastic gradient descent. We penalize the norm of the gradient of the generator to encourage the learning of a smooth surface/manifold, while temporal gradients of the latent vectors are penalized to encourage the time series to be smooth. The main benefits of the proposed scheme are (a) a significant reduction in memory demand compared to the analysis-based SToRM model, and (b) the spatial regularization brought in by the CNN model. We also introduce efficient progressive approaches to minimize the computational complexity of the algorithm.
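
A toy sketch of the joint objective: a data-fit term on the undersampled measurements, a penalty on the generator's sensitivity to the latents (a crude proxy for the gradient-norm penalty described above), and temporal smoothness of the latent vectors. The generator, sampling operator, and weights are placeholders, not the SToRM implementation.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 32 * 32))  # toy generator
z = torch.randn(20, 2, requires_grad=True)       # latent vectors, one per frame
A = lambda x: x[:, ::4]                          # toy undersampling operator
y = torch.rand(20, 256)                          # undersampled measurements

frames = G(z)                                    # generated image time series
data_fit = ((A(frames) - y) ** 2).mean()

# Crude proxy for penalizing the generator's Jacobian norm (smooth manifold).
jac = torch.autograd.grad(frames.sum(), z, create_graph=True)[0]
smooth_manifold = (jac ** 2).mean()

# Penalize temporal differences of the latents so the time series is smooth.
temporal = ((z[1:] - z[:-1]) ** 2).mean()

loss = data_fit + 0.1 * smooth_manifold + 0.1 * temporal   # illustrative weights
loss.backward()                                  # updates both G's parameters and z
```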

Image And Video Processing

Deep High-Resolution Network for Low Dose X-ray CT Denoising

Low Dose Computed Tomography (LDCT) is clinically desirable due to the reduced radiation dose to patients. However, the quality of LDCT images is often sub-optimal because of the inevitable strong quantum noise. Inspired by their unprecedented success in computer vision, deep learning (DL)-based techniques have been used for LDCT denoising. Despite the promising noise removal ability of DL models, it has been observed that the resolution of DL-denoised images is compromised, decreasing their clinical value. To relieve this problem, in this work we developed a more effective denoiser by introducing a high-resolution network (HRNet). Since HRNet consists of multiple branches of subnetworks that extract multiscale features which are later fused together, the quality of the generated features can be substantially enhanced, leading to improved denoising performance. Experimental results demonstrated that the introduced HRNet-based denoiser outperforms the benchmarked UNet-based denoiser in image resolution preservation while offering comparable, if not better, noise suppression. Quantitative metrics in terms of root-mean-squared error (RMSE) / structural similarity index (SSIM) showed that the HRNet-based denoiser can improve the values from 113.80/0.550 (LDCT) to 55.24/0.745 (HRNet), in comparison to 59.87/0.712 for the UNet-based denoiser.
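
For context, RMSE and SSIM are standard metrics; the snippet below is a generic evaluation sketch using scikit-image against a normal-dose reference slice, not the authors' evaluation code, and the dummy arrays are purely illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate_denoiser(denoised, reference, data_range=None):
    """Generic RMSE / SSIM evaluation of a denoised CT slice against a reference."""
    denoised = denoised.astype(np.float64)
    reference = reference.astype(np.float64)
    rmse = np.sqrt(np.mean((denoised - reference) ** 2))
    if data_range is None:
        data_range = reference.max() - reference.min()
    return rmse, ssim(denoised, reference, data_range=data_range)

# Dummy example: a synthetic reference slice and a perturbed "denoised" version.
reference = np.random.rand(512, 512) * 400
denoised  = reference + np.random.randn(512, 512) * 5
print(evaluate_denoiser(denoised, reference))
```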

Image And Video Processing

Deep Image Reconstruction using Unregistered Measurements without Groundtruth

One of the key limitations of conventional deep learning based image reconstruction is the need for registered pairs of training images containing a set of high-quality groundtruth images. This paper addresses this limitation by proposing a novel unsupervised deep registration-augmented reconstruction method (U-Dream) for training deep neural nets to reconstruct high-quality images by directly mapping pairs of unregistered and artifact-corrupted images. The ability of U-Dream to circumvent the need for accurately registered data makes it widely applicable to many biomedical image reconstruction tasks. We validate it on accelerated magnetic resonance imaging (MRI) by training an image reconstruction model directly on pairs of undersampled measurements from images that have undergone nonrigid deformations.
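
The following is a conceptual sketch (not the U-Dream architecture) of training a reconstruction network against unregistered targets: a stand-in deformation estimator warps the reconstruction toward the target before the loss is applied, so misalignment is absorbed by the warp rather than by the reconstruction network.

```python
import torch
import torch.nn as nn

recon_net = nn.Conv2d(1, 1, 3, padding=1)      # placeholder reconstruction network
flow_net  = nn.Conv2d(2, 2, 3, padding=1)      # placeholder deformation estimator

def warp(img, flow):
    """Warp img with a dense displacement field via grid_sample."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing='ij')
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    return nn.functional.grid_sample(img, base + flow.permute(0, 2, 3, 1),
                                     align_corners=True)

x_in   = torch.rand(2, 1, 64, 64)   # e.g. zero-filled recon of undersampled data
target = torch.rand(2, 1, 64, 64)   # unregistered, possibly deformed reference

recon = recon_net(x_in)
flow  = flow_net(torch.cat([recon, target], dim=1))
loss  = nn.functional.l1_loss(warp(recon, flow), target)
loss.backward()
```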

Image And Video Processing

Deep Iteration Assisted by Multi-level Obey-pixel Network Discriminator (DIAMOND) for Medical Image Recovery

Image restoration is a typical ill-posed problem that encompasses various tasks. In the medical imaging field, a degraded image hinders diagnosis and even subsequent image processing. Both traditional iterative methods and up-to-date deep networks have attracted much attention and achieved significant improvements in reconstructing satisfying images. This study combines their advantages into one unified mathematical model and proposes a general image restoration strategy to deal with such problems. This strategy consists of two modules. First, a novel generative adversarial network (GAN) with WGAN-GP training is built to recover image structures and subtle details. Then, a deep iteration module promotes image quality with a combination of pre-trained deep networks and compressed sensing algorithms via ADMM optimization. The (D)eep (I)teration module suppresses image artifacts and further recovers subtle image details, (A)ssisted by a (M)ulti-level (O)bey-pixel feature extraction (N)etwork (D)iscriminator that recovers general structures. Therefore, the proposed strategy is named DIAMOND.
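
The WGAN-GP training mentioned above relies on a gradient penalty on the discriminator; below is a generic sketch of that standard penalty with a toy discriminator, not the DIAMOND networks themselves.

```python
import torch

def wgan_gp_penalty(discriminator, real, fake, lam=10.0):
    """Standard WGAN-GP gradient penalty on interpolates between real and fake."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_out = discriminator(interp)
    grads = torch.autograd.grad(d_out.sum(), interp, create_graph=True)[0]
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

# Toy usage with a stand-in discriminator.
D = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
                        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                        torch.nn.Linear(8, 1))
real, fake = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
penalty = wgan_gp_penalty(D, real, fake)
```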

Image And Video Processing

Deep Iterative Residual Convolutional Network for Single Image Super-Resolution

Deep convolutional neural networks (CNNs) have recently achieved great success on the single image super-resolution (SISR) task due to their powerful feature representation capabilities. The most recent deep learning based SISR methods focus on designing deeper/wider models to learn the non-linear mapping between low-resolution (LR) inputs and high-resolution (HR) outputs. These existing SR methods do not take into account the image observation (physical) model and thus require a large number of trainable network parameters and a great volume of training data. To address these issues, we propose a deep Iterative Super-Resolution Residual Convolutional Network (ISRResCNet) that exploits powerful image regularization and large-scale optimization techniques by training the deep network in an iterative manner with a residual learning approach. Extensive experimental results on various super-resolution benchmarks demonstrate that our method, with only a few trainable parameters, improves the results for different scaling factors in comparison with state-of-the-art methods.
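
A minimal sketch of the iterative residual idea: a single small residual block is applied repeatedly to refine an initial bicubic estimate, so the parameter count stays low regardless of the number of refinement steps. The block, scale factor, and iteration count are illustrative assumptions, not the ISRResCNet design.

```python
import torch
import torch.nn as nn

class IterativeResidualSR(nn.Module):
    """Weight-shared residual refinement of a bicubic upsampling (illustrative)."""
    def __init__(self, scale=4, iters=5):
        super().__init__()
        self.iters = iters
        self.up = nn.Upsample(scale_factor=scale, mode='bicubic', align_corners=False)
        self.res_block = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, lr):
        x = self.up(lr)                  # coarse initial estimate
        for _ in range(self.iters):      # the same few parameters reused each step
            x = x + self.res_block(x)    # residual correction
        return x

sr = IterativeResidualSR()(torch.rand(1, 3, 32, 32))   # -> (1, 3, 128, 128)
```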
