Publication


Featured research published by Faisal Mahmood.


Medical Image Analysis | 2018

Deep learning and conditional random fields-based depth estimation and topographical reconstruction from conventional endoscopy

Faisal Mahmood; Nicholas J. Durr

Highlights:
- Synthetically generated endoscopy data with ground-truth depth.
- An efficient joint CNN-CRF-based depth estimation architecture trained on synthetic endoscopy data.
- Adversarial training for adapting the network trained on synthetic data to real data.
- State-of-the-art endoscopy depth estimation performance.
- Validation using registered views of CT and endoscopy data from a porcine colon.

ABSTRACT Colorectal cancer is the fourth leading cause of cancer deaths worldwide and the second leading cause in the United States. The risk of colorectal cancer can be mitigated by the identification and removal of premalignant lesions through optical colonoscopy. Unfortunately, conventional colonoscopy misses more than 20% of the polyps that should be removed, due in part to poor contrast of lesion topography. Imaging depth and tissue topography during a colonoscopy is difficult because of the size constraints of the endoscope and the deforming mucosa. Most existing methods make unrealistic assumptions, which limits their accuracy and sensitivity. In this paper, we present a method that avoids these restrictions, using a joint deep convolutional neural network-conditional random field (CNN-CRF) framework for monocular endoscopy depth estimation. Estimated depth is used to reconstruct the topography of the surface of the colon from a single image. We train the unary and pairwise potential functions of a CRF in a CNN on synthetic data, generated by developing an endoscope camera model and rendering over 200,000 images of an anatomically realistic colon. We validate our approach with real endoscopy images from a porcine colon, transferred to a synthetic-like domain via adversarial training, with ground truth from registered computed tomography measurements. The CNN-CRF approach estimates depths with a relative error of 0.152 for synthetic endoscopy images and 0.242 for real endoscopy images.
We show that the estimated depth maps can be used for reconstructing the topography of the mucosa from conventional colonoscopy images. This approach can easily be integrated into existing endoscopy systems and provides a foundation for improving computer-aided detection algorithms for detection, segmentation and classification of lesions.
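The relative error reported above is commonly defined as the mean absolute difference between predicted and ground-truth depth, normalised by the ground truth. A minimal sketch of that metric (assuming this standard definition; the depth values below are hypothetical, not the paper's data):

```python
def mean_relative_error(pred, gt):
    """Mean absolute relative depth error: mean(|d_hat - d| / d)."""
    assert len(pred) == len(gt) and len(gt) > 0
    return sum(abs(p - g) / g for p, g in zip(pred, gt)) / len(gt)

# Toy example with hypothetical depth values (millimetres).
gt = [10.0, 20.0, 40.0]
pred = [11.0, 18.0, 44.0]
err = mean_relative_error(pred, gt)  # (0.1 + 0.1 + 0.1) / 3 = 0.1
```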


International Conference of the IEEE Engineering in Medicine and Biology Society | 2016

Graph-based sinogram denoising for tomographic reconstructions

Faisal Mahmood; Nauman Shahid; Pierre Vandergheynst; Ulf Skoglund

Limited data and low-dose constraints are common problems in a variety of tomographic reconstruction paradigms, leading to noisy and incomplete data. Over the past few years, sinogram denoising has become an essential preprocessing step for low-dose Computed Tomographic (CT) reconstructions. We propose a novel sinogram denoising algorithm inspired by signal processing on graphs. Graph-based methods often perform better than standard filtering operations since they can exploit the signal structure. This makes the sinogram an ideal candidate for graph-based denoising, since it generally has a piecewise-smooth structure. We test our method with a variety of phantoms using different reconstruction methods. Our numerical study shows that the proposed algorithm improves the performance of analytical filtered back-projection (FBP) and iterative methods such as ART (Kaczmarz) and SIRT (Cimmino). We observed that graph-denoised sinograms consistently reduced the error measure and improved the accuracy of the solution, compared to regular reconstructions.
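The abstract's key idea is that a graph connecting neighbouring samples can exploit the sinogram's piecewise-smooth structure. As a stand-in illustration (not the authors' algorithm), the sketch below applies iterative Laplacian smoothing on a path graph linking adjacent samples of a 1D signal, which attenuates high-frequency noise while preserving the mean:

```python
def graph_laplacian_smooth(signal, tau=0.2, iters=10):
    """Sketch of graph smoothing: repeatedly apply x <- x - tau * L x,
    where L is the Laplacian of a path graph connecting neighbouring
    samples, i.e. (L x)[i] = 2*x[i] - x[i-1] - x[i+1] in the interior."""
    x = list(signal)
    n = len(x)
    for _ in range(iters):
        lx = [0.0] * n
        for i in range(n):
            left = x[i - 1] if i > 0 else x[i]     # replicate at edges,
            right = x[i + 1] if i < n - 1 else x[i]  # giving degree-1 ends
            lx[i] = 2 * x[i] - left - right
        x = [xi - tau * li for xi, li in zip(x, lx)]
    return x
```

Because the constant vector lies in the Laplacian's null space, the signal mean is preserved exactly while oscillatory components are damped.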


PLOS ONE | 2016

Effect of Subliminal Lexical Priming on the Subjective Perception of Images: A Machine Learning Approach

Dhanya Menoth Mohan; Parmod Kumar; Faisal Mahmood; Kian Foong Wong; Abhishek Agrawal; Mohamed Elgendi; Rohit Shukla; Natania Ang; April Ching; Justin Dauwels; Alice H.D. Chan

The purpose of the study is to examine the effect of subliminal priming in terms of the perception of images influenced by words with positive, negative, and neutral emotional content, through electroencephalograms (EEGs). Participants were instructed to rate how much they liked the stimulus images, on a 7-point Likert scale, after being subliminally exposed to masked lexical prime words that exhibited positive, negative, and neutral connotations with respect to the images. Simultaneously, the EEGs were recorded. Statistical tests such as repeated-measures ANOVAs and two-tailed paired-samples t-tests were performed to measure significant differences in the likability ratings among the three prime affect types; the results showed a strong shift in the liking judgments for the images in the positively primed condition compared to the other two. The acquired EEGs were examined to assess the difference in brain activity associated with the three different conditions. The consistent results obtained confirmed the overall priming effect on participants' explicit ratings. In addition, machine learning algorithms such as support vector machines (SVMs) and AdaBoost classifiers were applied to infer the prime affect type from the ERPs. The highest classification rates of 95.0% and 70.0%, obtained for the average-trial binary and average-trial multi-class classifiers respectively, further emphasize that the ERPs encode information about the different kinds of primes.
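The classification step described above (the paper uses SVMs and AdaBoost) can be illustrated with a simpler stand-in: a nearest-centroid classifier over averaged ERP feature vectors. Everything below, including the toy feature values and labels, is hypothetical and only sketches the fit/predict pipeline:

```python
def nearest_centroid_fit(X, y):
    """Compute one centroid (mean feature vector) per class label."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {label: [sum(col) / len(rows) for col in zip(*rows)]
            for label, rows in groups.items()}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest (Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], x))

# Toy 2-D "ERP features" for two hypothetical prime types.
centroids = nearest_centroid_fit(
    [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]],
    ["negative", "negative", "positive", "positive"])
```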


Field-Programmable Technology | 2015

2D Discrete Fourier Transform with simultaneous edge artifact removal for real-time applications

Faisal Mahmood; Märt Toots; Lars-Göran Öfverstedt; Ulf Skoglund

The Two-Dimensional (2D) Discrete Fourier Transform (DFT) is a basic and computationally intensive algorithm with a vast variety of applications. 2D images are, in general, non-periodic, but are assumed to be periodic while calculating their DFTs. This leads to cross-shaped artifacts in the frequency domain due to spectral leakage. These artifacts can have critical consequences if the DFTs are being used for further processing. In this paper, we present a novel FPGA-based design to calculate high-throughput 2D DFTs with simultaneous edge artifact removal. Standard approaches for removing these artifacts, using apodization functions or mirroring, either remove critical frequencies or cause a surge in computation by increasing the image size. We use a periodic-plus-smooth decomposition-based artifact removal algorithm optimized for FPGA implementation, while still achieving real-time (≥23 frames per second) performance for a 512×512 image stream. Our optimization approach leads to a significant decrease in external memory utilization, thereby avoiding memory conflicts and simplifying the design. We have tested our design on a PXIe-based Xilinx Kintex 7 FPGA system communicating with a host PC, which allows the design to be further extended for industrial applications.
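The periodic-plus-smooth decomposition the design builds on (due to Moisan) splits an image u into a periodic component and a smooth component; the smooth part is the solution of a Poisson problem driven by a "boundary" image that captures the jumps the periodic extension of u would create at opposite edges. A sketch of that first step, computing the boundary image only (a minimal illustration, not the FPGA implementation):

```python
def boundary_image(u):
    """Build the boundary image v of the periodic-plus-smooth
    decomposition: v accumulates, at each border pixel, the jump
    between that pixel and the pixel it would wrap to under
    periodic extension (top vs. bottom row, left vs. right column)."""
    m, n = len(u), len(u[0])
    v = [[0.0] * n for _ in range(m)]
    for j in range(n):                 # top/bottom edge jumps
        d = u[m - 1][j] - u[0][j]
        v[0][j] += d
        v[m - 1][j] -= d
    for i in range(m):                 # left/right edge jumps
        d = u[i][n - 1] - u[i][0]
        v[i][0] += d
        v[i][n - 1] -= d
    return v
```

By construction the entries of v sum to zero, which is what makes the subsequent Poisson solve (typically done in the DFT domain) well posed.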


Multimodal Biomedical Imaging XIII | 2018

Quantitative polyp size measurements with photometric stereo endoscopy enhanced by deep learning (Conference Presentation)

Faisal Mahmood; Norman S. Nishioka; Nicholas J. Durr

Colorectal cancer is the second leading cause of cancer deaths in the United States. Identifying and removing premalignant lesions via colonoscopy can significantly reduce colorectal cancer mortality. Unfortunately, the protective value of screening colonoscopy is limited because more than one quarter of clinically important lesions are missed on average. Most of these lesions are associated with characteristic 3D topographical shapes that appear subtle to a conventional colonoscope. Photometric stereo endoscopy captures this 3D structure but is inherently qualitative due to the unknown working distances from each point of the object to the endoscope. In this work, we use deep learning to estimate the depth from a monocular endoscope camera. Significant amounts of endoscopy data with known depth maps are required for training a convolutional neural network for deep learning. Moreover, this training problem is challenging because the colon texture is patient-specific and cannot be used to efficiently learn depth. To resolve these issues, we developed a photometric stereo endoscopy simulator and generated data with ground-truth depths from a virtual, texture-free colon phantom. These data were used to train a deep convolutional neural field network that can estimate the depth for test data with an accuracy of 84%. We use this depth estimate to implement a smart photometric stereo algorithm that reconstructs absolute depth maps. Applying this technique to an in-vivo human colonoscopy video of a single polyp viewed at varying distance, initial results show a reduction in polyp size measurement variation from 15.5% with conventional reconstruction to 3.4% with smart photometric reconstruction.


Medical Imaging 2018: Image Processing | 2018

Deep learning-based depth estimation from a synthetic endoscopy image training set

Faisal Mahmood; Nicholas J. Durr

Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.


IEEE Transactions on Medical Imaging | 2018

Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training

Faisal Mahmood; Richard Chen; Nicholas J. Durr


IEEE Signal Processing Letters | 2018

Adaptive Graph-Based Total Variation for Tomographic Reconstructions

Faisal Mahmood; Nauman Shahid; Ulf Skoglund; Pierre Vandergheynst


International Conference of the IEEE Engineering in Medicine and Biology Society | 2014

On the effect of subliminal priming on subjective perception of images: a machine learning approach.

Parmod Kumar; Faisal Mahmood; Dhanya Menoth Mohan; Ken Wong; Abhishek Agrawal; Mohamed Elgendi; Rohit Shukla; Justin Dauwels; Alice H.D. Chan


IEEE Access | 2018

An Extended Field-Based Method for Noise Removal From Electron Tomographic Reconstructions

Faisal Mahmood; Lars-Göran Öfverstedt; Märt Toots; Gunnar Wilken; Ulf Skoglund

Collaboration


Dive into Faisal Mahmood's collaborations.

Top Co-Authors

Ulf Skoglund (Okinawa Institute of Science and Technology)

Lars-Göran Öfverstedt (Okinawa Institute of Science and Technology)

Nauman Shahid (École Polytechnique Fédérale de Lausanne)

Pierre Vandergheynst (École Polytechnique Fédérale de Lausanne)

Märt Toots (Okinawa Institute of Science and Technology)

Abhishek Agrawal (Nanyang Technological University)

Alice H.D. Chan (Nanyang Technological University)

Dhanya Menoth Mohan (Nanyang Technological University)

Justin Dauwels (Nanyang Technological University)