Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Manu Goyal is active.

Publication


Featured research published by Manu Goyal.


IEEE International Conference on Systems, Man, and Cybernetics (SMC) | 2017

Fully convolutional networks for diabetic foot ulcer segmentation

Manu Goyal; Moi Hoon Yap; Satyan Rajbhandari; Jennifer Spragg

Diabetic Foot Ulcer (DFU) is a major complication of diabetes which, if not managed properly, can lead to amputation. DFUs can appear anywhere on the foot and vary in size, colour, and contrast depending on the underlying pathology. Current clinical approaches to DFU treatment rely on patient and clinician vigilance, which has significant limitations such as the high cost of diagnosis, treatment, and the lengthy care of the DFU. We introduce a dataset of 705 foot images and provide ground truth annotations of the ulcer region and the surrounding skin, an important indicator for clinicians to assess the progress of the ulcer. We then propose a two-tier transfer learning scheme from larger datasets to train Fully Convolutional Networks (FCNs) to automatically segment the ulcer and surrounding skin. Using 5-fold cross-validation, the proposed two-tier transfer learning FCN models achieve a Dice Similarity Coefficient of 0.794 (±0.104) for the ulcer region, 0.851 (±0.148) for the surrounding skin region, and 0.899 (±0.072) for the combination of both regions. This demonstrates the potential of FCNs in DFU segmentation, which can be further improved with a larger dataset.
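The figures of the form 0.794 (±0.104) are Dice Similarity Coefficients averaged across the five cross-validation folds. Below is a minimal sketch of how such numbers can be computed for one region class (ulcer, surrounding skin, or their union), assuming binary prediction and ground-truth masks per test image; the helper names and the fold data structure are illustrative, not taken from the paper's code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks of shape (H, W)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_std_dice_over_folds(folds):
    """folds: list of (predictions, ground_truths) pairs, one per CV fold.

    Returns the mean and standard deviation of the per-fold average Dice,
    which is one plausible way to obtain values like 0.794 (±0.104).
    """
    per_fold_means = []
    for preds, gts in folds:
        scores = [dice_coefficient(p, g) for p, g in zip(preds, gts)]
        per_fold_means.append(np.mean(scores))
    return float(np.mean(per_fold_means)), float(np.std(per_fold_means))
```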


Medical Imaging 2018: Biomedical Applications in Molecular, Structural, and Functional Imaging | 2018

End-to-end breast ultrasound lesions recognition with a deep learning approach

Fatima M. Osman; Robert Martí; Reyer Zwiggelaar; Arne Juette; Erika R. E. Denton; Moi Hoon Yap; Manu Goyal; Ezak Fadzrin B. Ahmad-Shaubari

Existing methods for automated breast ultrasound lesion detection and recognition tend to be based on multi-stage processing, such as preprocessing, filtering/denoising, segmentation and classification, where the performance of each stage depends on the prior stages. To improve the current state of the art, we propose an end-to-end approach to breast ultrasound lesion detection and recognition using deep learning. We implemented a popular semantic segmentation framework, the Fully Convolutional Network (FCN-AlexNet), for our experiments. To overcome data deficiency, we used a model pre-trained on ImageNet and transfer learning. We validated our results on two datasets, which together consist of 113 malignant and 356 benign lesions. We assessed the performance of the model using the following split: 70% for training, 10% for validation, and 20% for testing. The results show that our proposed method performed better on benign lesions, with a Dice score of 0.6879, compared with a Dice score of 0.5525 for malignant lesions. When considering the number of images with a Dice score > 0.5, 79% of the benign lesions were successfully segmented and correctly recognised, while 65% of the malignant lesions were successfully segmented and correctly recognised. This paper provides the first end-to-end solution for breast ultrasound lesion recognition. The future challenges for the proposed approach are to obtain additional datasets and to customise the deep learning framework to improve its accuracy.
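A minimal sketch of the transfer-learning idea described above, assuming PyTorch/torchvision: a segmentation network with a pretrained backbone has its final classifier layer replaced for a two-class (background vs. lesion) task and is then fine-tuned end to end. The paper itself uses FCN-AlexNet; torchvision's fcn_resnet50 is used here purely as a stand-in, and the dummy tensors stand in for ultrasound images and lesion masks.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

num_classes = 2  # background vs. lesion

# Load a pretrained FCN (the weights argument name varies across torchvision versions).
model = fcn_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(512, num_classes, kernel_size=1)  # new 1x1 classification layer
model.aux_classifier = None  # drop the auxiliary head for simplicity

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One dummy fine-tuning step on placeholder data shaped like ultrasound frames and masks.
images = torch.randn(2, 3, 256, 256)
masks = torch.randint(0, num_classes, (2, 256, 256))
logits = model(images)["out"]          # (N, num_classes, H, W)
loss = criterion(logits, masks)
loss.backward()
optimizer.step()
```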


International Conference on Image Analysis and Recognition | 2017

Facial Skin Classification Using Convolutional Neural Networks

Jhan S. Alarifi; Manu Goyal; Adrian K. Davison; Darren Dancey; Rabia Khan; Moi Hoon Yap

Facial skin assessment is crucial for a number of fields including the make-up industry, dermatology and plastic surgery. This paper addresses skin classification techniques that use conventional machine learning and state-of-the-art Convolutional Neural Networks to classify three types of facial skin patches, namely normal, spots and wrinkles. This study aims to lay the groundwork, on the basis of these three classes, for a collective facial skin quality score. In this work, we collected high-quality face images of people from different ethnicities to create a derma dataset. We then outlined skin patches of 100 × 100 resolution in the three pre-defined classes. With extensive parameter tuning, we ran a number of computer vision experiments using both traditional machine learning and deep learning techniques for this 3-class classification. Despite the limited dataset, GoogLeNet outperforms the Support Vector Machine approach with an accuracy of 0.899, an F-measure of 0.852 and a Matthews Correlation Coefficient of 0.779. The result shows the potential of deep learning for non-clinical skin image classification, which will be more promising with a larger dataset.
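A hedged illustration of the deep learning side of this comparison: fine-tune an ImageNet-pretrained GoogLeNet for the three skin-patch classes and report the same metrics (accuracy, F-measure, Matthews Correlation Coefficient). This is not the authors' code; the label arrays at the end are placeholder values used only to show the metric calls.

```python
import torch.nn as nn
from torchvision.models import googlenet
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

# Start from a pretrained GoogLeNet and retrain the final layer for
# the three facial-skin classes: normal, spots, wrinkles.
model = googlenet(weights="DEFAULT")   # weights argument name varies by torchvision version
model.fc = nn.Linear(model.fc.in_features, 3)

# ... fine-tune on the 100 x 100 skin patches (resized to GoogLeNet's input size) ...

# Evaluation with the paper's metrics, on hypothetical label arrays.
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 2, 0, 0, 2]
print("Accuracy:", accuracy_score(y_true, y_pred))
print("F-measure (macro):", f1_score(y_true, y_pred, average="macro"))
print("MCC:", matthews_corrcoef(y_true, y_pred))
```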


Journal of Medical Imaging | 2018

Breast ultrasound lesions recognition: end-to-end deep learning approaches

Moi Hoon Yap; Manu Goyal; Fatima M. Osman; Robert Martí; Erika R. E. Denton; Arne Juette; Reyer Zwiggelaar

Multistage processing for automated breast ultrasound lesion recognition is dependent on the performance of the prior stages. To improve the current state of the art, we propose the use of end-to-end deep learning approaches using fully convolutional networks (FCNs), namely FCN-AlexNet, FCN-32s, FCN-16s, and FCN-8s, for semantic segmentation of breast lesions. We use models pretrained on ImageNet and transfer learning to overcome the issue of data deficiency. We evaluate our results on two datasets, which together consist of 113 malignant and 356 benign lesions. To assess the performance, we conduct fivefold cross-validation using the following split: 70% for training, 10% for validation, and 20% for testing. The results showed that our proposed method performed better on benign lesions, with a top mean Dice score of 0.7626 with FCN-16s, compared with a top mean Dice score of 0.5484 with FCN-8s for malignant lesions. When considering the number of images with a Dice score > 0.5, 89.6% of the benign lesions were successfully segmented and correctly recognised, whereas 60.6% of the malignant lesions were successfully segmented and correctly recognised. We conclude the paper by addressing the future challenges of the work.
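The "successfully segmented and correctly recognised" percentages quoted above, and in the earlier conference version, follow a simple criterion: a lesion counts as recognised if its per-image Dice score exceeds 0.5. A minimal sketch, assuming per-lesion Dice scores have already been computed (for example with a helper like the dice_coefficient sketch earlier on this page); the score lists are placeholders.

```python
import numpy as np

def recognition_rate(dice_scores, threshold=0.5):
    """Fraction of lesions whose per-image Dice score exceeds the threshold."""
    dice_scores = np.asarray(dice_scores, dtype=float)
    return float((dice_scores > threshold).mean())

# Hypothetical per-lesion Dice scores for the benign and malignant test sets.
benign_dice = [0.81, 0.77, 0.42, 0.69]
malignant_dice = [0.55, 0.31, 0.66]

print("Benign recognised:", recognition_rate(benign_dice))
print("Malignant recognised:", recognition_rate(malignant_dice))
```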


arXiv: Computer Vision and Pattern Recognition | 2017

Multi-class Semantic Segmentation of Skin Lesions via Fully Convolutional Networks.

Manu Goyal; Moi Hoon Yap


IEEE Transactions on Emerging Topics in Computational Intelligence | 2018

DFUNet: Convolutional Neural Networks for Diabetic Foot Ulcer Classification

Manu Goyal; Adrian K. Davison; Satyan Rajbhandari; Jennifer Spragg; Moi Hoon Yap


arXiv: Computer Vision and Pattern Recognition | 2018

Region of Interest Detection in Dermoscopic Images for Natural Data-augmentation.

Manu Goyal; Moi Hoon Yap


arXiv: Computer Vision and Pattern Recognition | 2018

Multi-Class Lesion Diagnosis with Pixel-wise Classification Network.

Manu Goyal; Jiahua Ng; Moi Hoon Yap


arXiv: Computer Vision and Pattern Recognition | 2018

Semantic Segmentation of Human Thigh Quadriceps Muscle in Magnetic Resonance Images.

Ezak Ahmad; Manu Goyal; Jamie S. McPhee; Hans Degens; Moi Hoon Yap


IEEE Journal of Biomedical and Health Informatics | 2018

Robust Methods for Real-Time Diabetic Foot Ulcer Detection and Localization on Mobile Devices

Manu Goyal; Satyan Rajbhandari; Moi Hoon Yap

Collaboration


Dive into Manu Goyal's collaborations.

Top Co-Authors

Moi Hoon Yap
Manchester Metropolitan University

Adrian K. Davison
Manchester Metropolitan University

Arne Juette
Norfolk and Norwich University Hospital

Erika R. E. Denton
Norfolk and Norwich University Hospital

Fatima M. Osman
Sudan University of Science and Technology

Connah Kendrick
Manchester Metropolitan University