Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Mohammad Ali Maraci is active.

Publication


Featured research published by Mohammad Ali Maraci.


International Symposium on Biomedical Imaging | 2016

Describing ultrasound video content using deep convolutional neural networks

Y. Gao; Mohammad Ali Maraci; J.A. Noble

We address the task of object recognition in obstetric ultrasound videos using deep Convolutional Neural Networks (CNNs). A transfer learning based design is presented to study the transferability of features learnt from natural images to ultrasound image object recognition, which on the surface is a very different problem. Our results demonstrate that CNNs initialised with large-scale pre-trained networks outperform those learnt directly from small-scale ultrasound data (91.5% versus 87.9%) in terms of object identification.


International Workshop on Machine Learning in Medical Imaging | 2014

Searching for Structures of Interest in an Ultrasound Video Sequence

Mohammad Ali Maraci; R. Napolitano; A T Papageorghiou; J. Alison Noble

Ultrasound diagnosis and therapy are typically protocol-driven but often criticized for requiring highly skilled sonographers. However, there is a shortage of highly trained sonographers worldwide, which is limiting the wider adoption of this cost-effective technology. The challenge therefore is to make the technology easier to use, and we consider this problem in this paper. Our approach combines simple standardized clinical US scanning protocols (defined by our clinical partners) with machine learning driven image analysis solutions to enable a non-expert to perform ultrasound-based diagnostic tasks with minimal training. Motivated by recent work on dynamic texture analysis within the computer vision community, we have developed, and evaluated on clinical data, a framework that, given a training set of Ultrasound Sweep Videos (USV), models the temporal evolution of objects of interest as a kernel dynamic texture, which can form the basis of a metric for detecting structures of interest in new unseen videos. We describe the full original method, and demonstrate that it outperforms a simpler recently proposed approach on phantom data, and is significantly superior in performance on real clinical data.


Medical Image Analysis | 2017

A framework for analysis of linear ultrasound videos to detect fetal presentation and heartbeat

Mohammad Ali Maraci; Christopher P. Bridge; R. Napolitano; A T Papageorghiou; J.A. Noble

Highlights: Standard obstetric ultrasound examination requires expert sonographers. A framework is proposed to detect fetal presentation and heartbeat for novice users. This is based on predefined free-hand ultrasound videos of the maternal abdomen.

Abstract: Confirmation of pregnancy viability (presence of fetal cardiac activity) and diagnosis of fetal presentation (head or buttock in the maternal pelvis) are the first essential components of ultrasound assessment in obstetrics. The former is useful in assessing the presence of an on-going pregnancy and the latter is essential for labour management. We propose an automated framework for detection of fetal presentation and heartbeat from a predefined free-hand ultrasound sweep of the maternal abdomen. Our method exploits the presence of key anatomical sonographic image patterns in carefully designed scanning protocols to develop, for the first time, an automated framework allowing novice sonographers to detect fetal breech presentation and heartbeat from an ultrasound sweep. The framework consists of a classification regime for a frame by frame categorization of each 2D slice of the video. The classification scores are then regularized through a conditional random field model, taking into account the temporal relationship between the video frames. Subsequently, if consecutive frames of the fetal heart are detected, a kernelized linear dynamical model is used to identify whether a heartbeat can be detected in the sequence. In a dataset of 323 predefined free-hand videos, covering the mother’s abdomen in a straight sweep, the fetal skull, abdomen, and heart were detected with a mean classification accuracy of 83.4%. Furthermore, for the detection of the heartbeat an overall classification accuracy of 93.1% was achieved.
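The temporal regularization step described in the abstract, smoothing per-frame classification scores using the relationship between neighbouring frames, can be sketched in a simplified form as Viterbi decoding over a label chain with a fixed transition matrix. This is a minimal stand-in for the paper's conditional random field, not the authors' implementation; all names and parameter values here are illustrative.

```python
import numpy as np

def viterbi_smooth(frame_scores, transition, prior):
    """Most likely label sequence for per-frame class scores (T x K),
    given a K x K transition matrix encouraging temporal consistency."""
    T, K = frame_scores.shape
    log_e = np.log(frame_scores + 1e-12)   # per-frame (emission) log-scores
    log_t = np.log(transition + 1e-12)     # prev -> current transition log-probs
    log_p = np.log(prior + 1e-12)

    delta = np.zeros((T, K))               # best path score ending in each label
    back = np.zeros((T, K), dtype=int)     # backpointers for path recovery
    delta[0] = log_p + log_e[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_t   # K x K: previous label x current label
        back[t] = np.argmax(scores, axis=0)
        delta[t] = np.max(scores, axis=0) + log_e[t]

    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```

With a strong self-transition probability, an isolated one-frame misclassification is overruled by its temporally consistent neighbours, which is the effect the frame-wise regularization is after.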


International Symposium on Biomedical Imaging | 2015

Fisher vector encoding for detecting objects of interest in ultrasound videos

Mohammad Ali Maraci; R. Napolitano; A T Papageorghiou; J.A. Noble

One of the main factors limiting the wider adoption of ultrasound imaging for diagnosis and therapy is the requirement for highly skilled sonographers. In this paper we consider the challenge of making this technology easier to use for non-experts. Our approach follows some of the recently proposed frameworks that break the process into, firstly, data acquisition through a simple and task-specific scan protocol, followed by machine learning methodologies to assist non-experts in performing diagnostic tasks. We present an object classification pipeline to identify the fetal skull, heart and abdomen from all the other frames in an ultrasound video, using Fisher vector features. We describe the full proposed method and provide a comparison with a recently proposed approach based on Bag of Visual Words (BoVW) to demonstrate that the new approach is superior in terms of accuracy (98.9% versus 87.1%).
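The Fisher vector encoding named above can be sketched in a minimal, generic form: local descriptors are summarised by gradient statistics with respect to the means and variances of a diagonal-covariance GMM codebook. This is a textbook sketch under that assumption, not the authors' implementation; all names are illustrative.

```python
import numpy as np

def fisher_vector(descriptors, means, covs, weights):
    """Fisher vector of local descriptors under a diagonal-covariance GMM:
    per-component gradient statistics w.r.t. means and variances."""
    X = np.atleast_2d(descriptors)            # N x D local descriptors
    N, D = X.shape
    K = len(weights)

    # Posterior responsibilities gamma(n, k) of each component for each descriptor.
    log_prob = np.empty((N, K))
    for k in range(K):
        diff = (X - means[k]) / np.sqrt(covs[k])
        log_prob[:, k] = (np.log(weights[k])
                          - 0.5 * np.sum(np.log(2 * np.pi * covs[k]))
                          - 0.5 * np.sum(diff ** 2, axis=1))
    log_prob -= log_prob.max(axis=1, keepdims=True)
    gamma = np.exp(log_prob)
    gamma /= gamma.sum(axis=1, keepdims=True)

    # One D-dimensional mean-gradient and variance-gradient block per component.
    fv = []
    for k in range(K):
        diff = (X - means[k]) / np.sqrt(covs[k])
        g_mu = (gamma[:, k:k + 1] * diff).sum(axis=0) / (N * np.sqrt(weights[k]))
        g_sig = (gamma[:, k:k + 1] * (diff ** 2 - 1)).sum(axis=0) / (N * np.sqrt(2 * weights[k]))
        fv.extend([g_mu, g_sig])
    return np.concatenate(fv)                 # length 2 * K * D
```

Unlike a bag-of-words histogram, the encoding keeps how descriptors deviate from each codebook component, which is one reason Fisher vectors tend to outperform BoVW on the same descriptors.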


International MICCAI Workshop on Medical Computer Vision | 2014

Object Classification in an Ultrasound Video Using LP-SIFT Features

Mohammad Ali Maraci; R. Napolitano; A T Papageorghiou; J. Alison Noble

The advantages of ultrasound (US) over other medical imaging modalities have provided a platform for its wide use in many medical fields, both for diagnostic and therapeutic purposes. However, one of the factors limiting wider adoption of this cost-effective technology is the requirement for highly skilled sonographers and operators. We consider this problem in this paper, motivated by advancements within the computer vision community. Our approach combines simple, standardized clinical ultrasound procedures with machine learning driven imaging solutions to enable users with limited clinical experience to perform simple diagnostic tasks (such as detection of a fetal breech presentation). We introduce LP-SIFT features, constructed from the well-known SIFT features using a set of feature symmetry filters. We also illustrate how such features can be used in a bag of visual words representation on ultrasound images for classification of anatomical structures that have significant clinical implications in fetal health, such as the fetal head, heart and abdomen, despite the high presence of speckle, shadows and other imaging artifacts in ultrasound images.
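The bag-of-visual-words representation mentioned above can be sketched generically: each local descriptor is hard-assigned to its nearest codebook word and the word counts are normalised into a histogram. A minimal sketch, not the authors' code; the LP-SIFT extraction and codebook-learning steps are omitted and all names are illustrative.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Hard-assign each local descriptor to its nearest visual word
    and return an L1-normalised word-count histogram."""
    X = np.atleast_2d(descriptors)                           # N x D descriptors
    # Squared Euclidean distance from every descriptor to every codebook word.
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                                # nearest-word index per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what a standard classifier (e.g. an SVM) consumes, regardless of how many descriptors each image produced.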


MLMI@MICCAI | 2018

Can Dilated Convolutions Capture Ultrasound Video Dynamics?

Mohammad Ali Maraci; Weidi Xie; J. Alison Noble

Automated analysis of free-hand ultrasound video sweeps is an important topic in diagnostic and interventional imaging; however, detecting the standard planes is a notoriously challenging task, due to the low-quality data and the variability in contrast, appearance and placement of the structures. Conventionally, sequential data is modelled with heavy Recurrent Neural Networks (RNNs). In this paper, we propose to apply a convolutional architecture (CNN) for standard plane detection in free-hand ultrasound videos. Our contributions are twofold: firstly, we show that a simple convolutional architecture can characterize the long-range dependencies in challenging ultrasound video sequences, and outperform the canonical LSTMs and the recently proposed two-stream spatial ConvNet by a large margin (89% versus 83% and 84% respectively). Secondly, to understand what evidence the model uses for decision making, we experimented with soft-attention layers for feature pooling, and trained the entire model end-to-end with only standard classification losses. As a result, we find that the input-dependent attention maps can not only boost the network’s performance, but also indicate patterns of the data that are deemed important for certain structures, therefore providing interpretability when deploying the models.
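The property such architectures exploit, that stacked dilated convolutions cover long temporal ranges with few layers, can be illustrated with the standard receptive-field formula and a naive 1-D dilated filter. This is a generic sketch of the mechanism, not the paper's architecture; kernel sizes and dilations below are illustrative.

```python
import numpy as np

def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked 1-D dilated convolutions:
    rf = 1 + sum over layers of (kernel_size - 1) * dilation."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

def dilated_conv1d(x, w, dilation):
    """'Valid' 1-D dilated filtering (cross-correlation) of signal x
    with kernel w, sampling inputs `dilation` steps apart."""
    k = len(w)
    span = (k - 1) * dilation
    out = np.empty(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out
```

With kernel size 3 and dilations doubling as 1, 2, 4, 8, four layers already see 1 + 2·(1+2+4+8) = 31 consecutive frames, which is how a feed-forward stack can stand in for a recurrent model over video time.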


Ultrasound in Obstetrics & Gynecology | 2015

OC11.07: Towards automating the ISUOG “six‐step basic ultrasound” scan

Mohammad Ali Maraci; Christopher P. Bridge; J.A. Noble; Christina Aye; M. Molloholli; R. Napolitano; A T Papageorghiou


Ultrasound in Obstetrics & Gynecology | 2014

P22.03: Searching for structures of interest in an ultrasound video sequence with an application for detection of breech

Mohammad Ali Maraci; R. Napolitano; A T Papageorghiou; J.A. Noble


Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | 2014

Object classification in an ultrasound video using LP-SIFT features

Mohammad Ali Maraci; R. Napolitano; A T Papageorghiou; J. Alison Noble


Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | 2014

Searching for structures of interest in an ultrasound video sequence

Mohammad Ali Maraci; R. Napolitano; A T Papageorghiou; J. Alison Noble

Collaboration


Dive into Mohammad Ali Maraci's collaborations.

Top Co-Authors


Y. Gao

University of Oxford
