Siyamalan Manivannan
University of Dundee
Publications
Featured research published by Siyamalan Manivannan.
Pattern Recognition | 2016
Siyamalan Manivannan; Wenqi Li; Shazia Akbar; Ruixuan Wang; Jianguo Zhang; Stephen J. McKenna
Immunofluorescence antinuclear antibody tests are important for diagnosis and management of autoimmune conditions; a key step that would benefit from reliable automation is the recognition of subcellular patterns suggestive of different diseases. We present a system to recognize such patterns, at cellular and specimen levels, in images of HEp-2 cells. Ensembles of SVMs were trained to classify cells into six classes based on sparse encoding of texture features with cell pyramids, capturing spatial, multi-scale structure. A similar approach was used to classify specimens into seven classes. Software implementations were submitted to an international contest hosted by ICPR 2014 (Performance Evaluation of Indirect Immunofluorescence Image Analysis Systems). Mean class accuracies obtained on held-out test data sets were 87.1% and 88.5% for cell and specimen classification, respectively. These were the highest achieved in the competition, suggesting that our methods are state-of-the-art. We provide detailed descriptions and extensive experiments with various features and encoding methods.
Highlights:
- We propose systems for classifying immunofluorescence images of HEp-2 cells.
- Images are classified at both the cell level and the specimen level.
- Ensemble SVM classification based on sparse coding of texture features was effective.
- Cell pyramids and artificial dataset augmentation increased mean class accuracy.
- The proposed systems came first in the I3A contest associated with ICPR 2014.
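The encode-and-pool idea behind the cell classifier can be illustrated with a minimal sketch. This uses soft assignment against a dictionary followed by max pooling as a simplified stand-in for the paper's sparse coding and cell pyramids; the dictionary, descriptors, and the `alpha` parameter below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def encode_and_pool(descriptors, dictionary, alpha=1.0):
    """Soft-assignment encoding of local descriptors against a learned
    dictionary, followed by max pooling over all patches. A simplified
    stand-in for sparse coding + pooling (hypothetical parameters)."""
    # Squared Euclidean distance between each descriptor and each atom.
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    # Soft assignment: closer atoms receive larger weights.
    codes = np.exp(-alpha * d2)
    codes /= codes.sum(axis=1, keepdims=True)
    # Max pooling gives one fixed-length vector per cell image.
    return codes.max(axis=0)

rng = np.random.default_rng(0)
patches = rng.normal(size=(50, 8))   # 50 patch descriptors, 8-D each
atoms = rng.normal(size=(16, 8))     # 16 dictionary atoms
feature = encode_and_pool(patches, atoms)
```

The pooled vector can then be fed to an SVM; a cell pyramid would repeat the pooling over spatial sub-regions and concatenate the results.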
international symposium on biomedical imaging | 2016
Wenqi Li; Siyamalan Manivannan; Shazia Akbar; Jianguo Zhang; Emanuele Trucco; Stephen J. McKenna
We investigate glandular structure segmentation in colon histology images as a window-based classification problem. We compare and combine methods based on fine-tuned convolutional neural networks (CNN) and hand-crafted features with support vector machines (HC-SVM). On 85 images of H&E-stained tissue, we find that the fine-tuned CNN outperforms HC-SVM in gland segmentation measured by pixel-wise Jaccard and Dice indices. For HC-SVM we further observe that training a second-level window classifier on the posterior probabilities, as an output refinement, can substantially improve segmentation performance. The final performance of HC-SVM with refinement is comparable to that of the CNN. Furthermore, we show that by combining and refining the posterior probability outputs of CNN and HC-SVM, a further performance boost is obtained.
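The output-refinement step, training a second-level window classifier on posterior probabilities, amounts to building features from local neighbourhoods of the first classifier's posterior map. A minimal sketch, assuming a hypothetical single-class posterior map (not the authors' code):

```python
import numpy as np

def refinement_features(posterior, radius=1):
    """For each pixel, collect the first-level posteriors in a
    (2r+1)x(2r+1) neighbourhood; these vectors feed a second-level
    window classifier for output refinement."""
    h, w = posterior.shape
    padded = np.pad(posterior, radius, mode='edge')
    feats = np.empty((h, w, (2 * radius + 1) ** 2))
    for i in range(h):
        for j in range(w):
            feats[i, j] = padded[i:i + 2*radius+1, j:j + 2*radius+1].ravel()
    return feats

post = np.random.default_rng(1).random((8, 8))   # toy posterior map
F = refinement_features(post)
```

The centre entry of each feature vector is the pixel's own posterior; the surrounding entries let the second classifier exploit label context that a single window classification ignores.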
international symposium on biomedical imaging | 2013
Siyamalan Manivannan; Ruixuan Wang; Emanuele Trucco; Adrian Hood
Two novel schemes are proposed to represent intermediate-scale features for normal-abnormal classification of colonoscopy images. The first scheme works on the full-resolution image, the second on a multi-scale pyramid space. Both schemes support any feature descriptor; here we use multi-resolution local binary patterns, which outperformed other features reported in the literature in our comparative experiments. We also compared experimentally two types of features not previously used in colonoscopy image classification, bag of features and sparse coding, each with and without spatial pyramid matching (SPM). We find that SPM improves performance, supporting the importance of intermediate-scale features as in the proposed schemes. Within normal-abnormal frame classification, we show that our representational schemes outperform other features reported in the literature in leave-N-out tests with a database of 2100 colonoscopy images.
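The base descriptor here, the local binary pattern, can be sketched in a few lines. This single-scale 8-neighbour version is illustrative only; the paper uses a multi-resolution variant.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern: each pixel is encoded by
    thresholding its 8 neighbours against it, giving a 0-255 code; the
    image descriptor is the normalised histogram of codes."""
    c = img[1:-1, 1:-1]                      # interior pixels
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.int32) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.random.default_rng(2).integers(0, 256, size=(32, 32))
h = lbp_histogram(img)
```

A multi-resolution version would compute such histograms at several neighbourhood radii (or image scales) and concatenate them.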
medical image computing and computer assisted intervention | 2014
Siyamalan Manivannan; Ruixuan Wang; Emanuele Trucco
Feature encoding plays an important role in medical image classification. Intra-cluster features such as bag of visual words have been widely used for feature encoding; they are based on the statistical information within each cluster of local features and therefore fail to capture inter-cluster statistics, such as how visual words co-occur in images. This paper proposes a new method to choose a subset of cluster pairs based on the idea of Latent Semantic Analysis (LSA), and proposes new inter-cluster statistics which capture richer information than traditional co-occurrence information. Since the cluster pairs are selected based on image patches rather than whole images, the final representation also captures the local structures present in images. Experiments on medical datasets show that explicitly encoding inter-cluster statistics in addition to intra-cluster statistics significantly improves classification performance, and that the rich inter-cluster statistics perform better than frequency-based inter-cluster statistics.
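The simplest form of inter-cluster statistics, counting which visual words co-occur within local patches, can be sketched as follows. This toy version counts pair co-occurrences in 2x2 patches of a word-assignment map; the paper's LSA-based selection of informative cluster pairs and its richer statistics are omitted.

```python
import numpy as np
from itertools import combinations

def cooccurrence_counts(word_map, k):
    """Count how often each pair of visual words (a < b) co-occurs inside
    local 2x2 patches of a word-assignment map. A toy version of
    frequency-based inter-cluster statistics."""
    counts = np.zeros((k, k))
    h, w = word_map.shape
    for i in range(h - 1):
        for j in range(w - 1):
            words = set(word_map[i:i+2, j:j+2].ravel().tolist())
            for a, b in combinations(sorted(words), 2):
                counts[a, b] += 1
    return counts

wm = np.random.default_rng(3).integers(0, 5, size=(10, 10))  # toy word map
C = cooccurrence_counts(wm, 5)
```

Because patches are local, the counts reflect spatial structure; operating on whole images instead would lose this, which is the point the abstract makes.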
IEEE Transactions on Medical Imaging | 2017
Siyamalan Manivannan; Caroline Cobb; Stephen Burgess; Emanuele Trucco
We propose a novel multiple-instance learning (MIL) method to assess the visibility (visible/not visible) of the retinal nerve fiber layer (RNFL) in fundus camera images. Using only image-level labels, our approach learns to classify the images as well as to localize the RNFL visible regions. We transform the original feature space into a discriminative subspace, and learn a region-level classifier in that subspace. We propose a margin-based loss function to jointly learn this subspace and the region-level classifier. Experiments with an RNFL data set containing 884 images annotated by two ophthalmologists give system-annotator agreements (kappa values) of 0.73 and 0.72, respectively, with an interannotator agreement of 0.73. Our system agrees better with the more experienced annotator. Comparative tests with three public data sets (MESSIDOR and DR for diabetic retinopathy, and UCSB for breast cancer) show that our novel MIL approach improves performance over the state of the art. Our MATLAB code is publicly available at https://github.com/ManiShiyam/Sub-category-classifiers-for-Multiple-Instance-Learning/wiki.
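The core MIL-with-margin idea can be sketched in a few lines: each image is a bag of region features, the bag score is the maximum region score under a region-level classifier, and a hinge loss compares it to the image-level label. This is an illustrative sketch only; the paper additionally learns a discriminative subspace jointly with the classifier.

```python
import numpy as np

def mil_margin_loss(regions, label, W):
    """Toy margin-based MIL objective. `regions` is a bag of region
    feature vectors, `label` is the image-level label (+1/-1), and W is
    a linear region-level classifier. The bag score is the best region
    score; the hinge loss enforces a margin at the image level."""
    scores = regions @ W         # one score per region
    bag_score = scores.max()     # image score = most confident region
    return max(0.0, 1.0 - label * bag_score)

rng = np.random.default_rng(4)
bag = rng.normal(size=(6, 4))    # 6 regions, 4-D features
w = rng.normal(size=4)
loss = mil_margin_loss(bag, +1, w)
```

Because only the max region drives a positive bag's score, the regions that score highest at test time also localize the RNFL-visible areas, which is how image-level labels yield region-level localization.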
international symposium on biomedical imaging | 2016
Siyamalan Manivannan; Wenqi Li; Shazia Akbar; Jianguo Zhang; Emanuele Trucco; Stephen J. McKenna
We present a method to segment individual glands from colon histopathology images. Segmentation based on sliding window classification does not usually make explicit use of information about the spatial configurations of class labels. To improve on this we propose to segment glands using a structure learning approach in which the local label configurations (structures) are considered when training a support vector machine classifier. The proposed method not only distinguishes foreground from background, it also distinguishes between different local structures in pixel labelling, e.g. locations between adjacent glands and locations far from glands. It directly predicts these label configurations at test time. Experiments demonstrate that it produces better segmentations than when the local label structure is not used to train the classifier.
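The structure-learning idea, treating local configurations of ground-truth labels as the classification targets, can be sketched as follows. Local label patches are extracted and clustered into a small set of exemplar structures, each of which becomes one class for the window classifier. The tiny k-means below and all sizes are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def label_structures(label_map, size=3):
    """Extract size x size patches of a ground-truth label map and
    cluster them into exemplar label structures via a tiny k-means.
    Each exemplar becomes one target class for structured prediction."""
    h, w = label_map.shape
    patches = np.array([label_map[i:i+size, j:j+size].ravel()
                        for i in range(h - size + 1)
                        for j in range(w - size + 1)], dtype=float)
    k = 4                                     # number of exemplars (toy)
    rng = np.random.default_rng(0)
    centres = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(10):
        assign = ((patches[:, None, :] - centres[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if np.any(assign == c):
                centres[c] = patches[assign == c].mean(0)
    return centres, assign

lm = (np.random.default_rng(5).random((12, 12)) > 0.5).astype(int)
exemplars, assignments = label_structures(lm)
```

Predicting one of these exemplars per window, rather than a single foreground/background bit, is what lets the method distinguish, say, "between adjacent glands" from "far from glands".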
international symposium on biomedical imaging | 2015
Siyamalan Manivannan; Emanuele Trucco
In this paper we propose a novel weakly-supervised feature learning approach, learning discriminative local features from image-level labelled data for image classification. Unlike existing feature learning approaches, which assume that a set of additional data in the form of matching/non-matching pairs of local patches is given for learning the features, our approach only uses image-level labels, which are much easier to obtain. Experiments on a colonoscopy image dataset with 2100 images show that the learned local features outperform other hand-crafted features and give a state-of-the-art classification accuracy of 93.5%.
medical image computing and computer assisted intervention | 2016
Siyamalan Manivannan; Caroline Cobb; Stephen Burgess; Emanuele Trucco
We propose a novel multiple instance learning method to assess the visibility (visible/not visible) of the retinal nerve fiber layer (RNFL) in fundus camera images. Using only image-level labels, our approach learns to classify the images as well as to localize the RNFL visible regions. We transform the original feature space to a discriminative subspace, and learn a region-level classifier in that subspace. We propose a margin-based loss function to jointly learn this subspace and the region-level classifier. Experiments with an RNFL dataset containing 576 images annotated by two experienced ophthalmologists give agreements (kappa values) of 0.65 and 0.58, respectively, with an inter-annotator agreement of 0.62. Our system gives higher agreement with the more experienced annotator. Comparative tests with three public datasets (MESSIDOR and DR for diabetic retinopathy, UCSB for breast cancer) show improved performance over the state-of-the-art.
IEEE Transactions on Medical Imaging | 2018
Siyamalan Manivannan; Wenqi Li; Jianguo Zhang; Emanuele Trucco; Stephen J. McKenna
We present a novel method to segment instances of glandular structures from colon histopathology images. We use a structure learning approach which represents local spatial configurations of class labels, capturing structural information normally ignored by sliding-window methods. This allows us to reveal different spatial structures of pixel labels (e.g., locations between adjacent glands, or far from glands), and to identify correctly neighboring glandular structures as separate instances. Exemplars of label structures are obtained via clustering and used to train support vector machine classifiers. The label structures predicted are then combined and post-processed to obtain segmentation maps. We combine hand-crafted, multi-scale image features with features computed by a deep convolutional network trained to map images to segmentation maps. We evaluate the proposed method on the public domain GlaS data set, which allows extensive comparisons with recent, alternative methods. Using the GlaS contest protocol, our method achieves the overall best performance.
medical image computing and computer assisted intervention | 2017
Ahmed E. Fetit; Siyamalan Manivannan; Sarah McGrory; Lucia Ballerini; Alex S. F. Doney; Tom MacGillivray; Ian J. Deary; Joanna M. Wardlaw; Fergus N. Doubal; Gareth J. McKay; Stephen J. McKenna; Emanuele Trucco
Dementia is a devastating disease, with severe implications for affected individuals, their families and wider society. A growing body of literature studies the association of retinal microvasculature measurements with dementia. We present a pilot study testing the strength of groups of conventional (semantic) and texture-based (non-semantic) measurements extracted from retinal fundus camera images for classifying patients with and without dementia. We performed a 500-trial bootstrap analysis with regularized logistic regression on a cohort of 1,742 elderly diabetic individuals (median age 72.2). Age was the strongest predictor for this elderly cohort. Semantic retinal measurements featured in up to 81% of the bootstrap trials, with arterial caliber and optic disk size chosen most often, suggesting that they complement age when selected together in a classifier. Textural features were able to train classifiers that match the performance of age, suggesting they are potentially a rich source of information for dementia outcome classification.
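The bootstrap-with-regularized-logistic-regression analysis can be sketched as follows: resample the cohort with replacement, fit an L1-regularized logistic regression on each resample (proximal gradient descent here), and record how often each feature survives with a non-zero coefficient. All hyperparameters and the synthetic data are illustrative, not the study's settings.

```python
import numpy as np

def bootstrap_selection(X, y, trials=50, lam=0.1, lr=0.1, steps=200):
    """Bootstrap feature-selection frequency with L1-regularised logistic
    regression. Returns, per feature, the fraction of bootstrap trials
    in which its coefficient was non-zero."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    selected = np.zeros(d)
    for _ in range(trials):
        idx = rng.integers(0, n, n)          # bootstrap resample
        Xb, yb = X[idx], y[idx]
        w = np.zeros(d)
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
            grad = Xb.T @ (p - yb) / n
            w -= lr * grad
            # Soft-thresholding implements the L1 penalty.
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
        selected += (np.abs(w) > 1e-6)
    return selected / trials

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=120) > 0).astype(float)  # feature 0 predictive
freq = bootstrap_selection(X, y)
```

Selection frequencies like these are what statements such as "featured in up to 81% of the bootstrap trials" summarize.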