Suman Sedai
IBM
Publications
Featured research published by Suman Sedai.
Pattern Recognition | 2013
Suman Sedai; Mohammed Bennamoun; Du Q. Huynh
This paper presents a method for combining shape and appearance feature types in a discriminative learning framework for human pose estimation. We first present a new appearance descriptor that is distinctive and resilient to noise for 3D human pose estimation. We then combine the proposed appearance descriptor with a shape descriptor computed from the silhouette of the human subject using discriminative learning. Our method, which we refer to as a localized decision-level fusion technique, is based on clustering the output pose space into several partitions and learning a decision-level fusion model for the shape and appearance descriptors in each region. The combined shape and appearance descriptor allows the complementary information of the individual feature types to be exploited, leading to improved performance of the pose estimation system. We evaluate our proposed fusion method against feature-level fusion and kernel-level fusion methods using a synchronized video and 3D motion dataset. Our experimental results show that the proposed feature combination method gives more accurate pose estimation than either individual feature type alone. Among the three fusion methods, our localized decision-level fusion method performs the best for 3D pose estimation.
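A minimal sketch of the localized decision-level fusion idea, assuming two pre-trained pose regressors whose predictions are blended with per-region weights; the function names, KMeans partitioning, and grid-searched scalar weighting are illustrative assumptions, not the paper's exact formulation:

```python
# Localized decision-level fusion (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

def fit_local_fusion(poses_train, shape_preds, app_preds, n_regions=8):
    """Cluster the output pose space and learn, per region, a scalar
    weight that blends the two regressors' predictions."""
    km = KMeans(n_clusters=n_regions, n_init=10).fit(poses_train)
    weights = np.zeros(n_regions)
    for r in range(n_regions):
        idx = km.labels_ == r
        # Pick the weight w in [0, 1] minimizing squared error of
        # w * shape + (1 - w) * appearance on this region's samples.
        ws = np.linspace(0.0, 1.0, 101)
        errs = [np.mean((w * shape_preds[idx] + (1 - w) * app_preds[idx]
                         - poses_train[idx]) ** 2) for w in ws]
        weights[r] = ws[int(np.argmin(errs))]
    return km, weights

def predict_fused(km, weights, shape_pred, app_pred):
    # At test time the true region is unknown; one simple proxy (our
    # assumption) is to assign the averaged prediction to its nearest
    # pose cluster and apply that region's learned weight.
    guess = 0.5 * (shape_pred + app_pred)
    r = km.predict(guess.reshape(1, -1))[0]
    w = weights[r]
    return w * shape_pred + (1 - w) * app_pred
```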
International Workshop on Machine Learning in Medical Imaging | 2016
Dwarikanath Mahapatra; Pallab Kanti Roy; Suman Sedai; Rahil Garnavi
Retinal image quality assessment (IQA) algorithms typically use different hand-crafted features without considering the important role of the human visual system (HVS). We address the IQA problem using the principles behind the working of the HVS: unsupervised information from local saliency maps and supervised information from trained convolutional neural networks (CNNs) are combined to make a final decision on image quality. A novel algorithm is proposed that calculates saliency values for every image pixel at multiple scales to capture global and local image information. This extracts generalized image information in an unsupervised manner, while CNNs provide a principled approach to feature learning without the need to define hand-crafted features. The individual classification decisions are fused by weighting them according to their confidence scores. Experimental results on real datasets demonstrate the superior performance of the proposed algorithm over competing methods.
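A minimal sketch of the confidence-weighted decision fusion step, assuming each branch (saliency-based and CNN-based) outputs class probabilities and that confidence is taken as the maximum class probability; both are our assumptions:

```python
# Confidence-weighted fusion of two classifiers' decisions (sketch).
import numpy as np

def fuse_decisions(p_saliency, p_cnn):
    """Weight each branch's probability vector by its confidence
    (here, its maximum class probability) and renormalize."""
    c_s = p_saliency.max()
    c_c = p_cnn.max()
    fused = c_s * p_saliency + c_c * p_cnn
    return fused / fused.sum()

# Example: the saliency branch weakly favors "good quality" while the
# CNN is confident, so the fused decision follows the CNN.
p_fused = fuse_decisions(np.array([0.55, 0.45]), np.array([0.9, 0.1]))
print(p_fused.argmax())  # 0 -> "good quality"
```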
British Machine Vision Conference | 2010
Suman Sedai; Mohammed Bennamoun; Du Q. Huynh
This paper presents a learning-based method for combining shape and appearance feature types for 3D human pose estimation from single-view images. Our method is based on clustering the 3D pose space into several modular regions and learning regressors for both feature types, together with their optimal fusion scenario, in each region. In this way the complementary information of the individual feature types is exploited, leading to improved pose estimation performance. We train and evaluate our method using a synchronized video and 3D motion dataset. Our experimental results show that the proposed feature combination method gives more accurate pose estimation than either individual feature type alone.
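A minimal sketch of the modular-region training side, assuming pose vectors and per-image shape/appearance features are available as arrays; KMeans and RandomForestRegressor stand in for whatever clustering and regressors the paper actually uses:

```python
# Per-region regressors, one per feature type (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

def train_regional_regressors(poses, shape_feats, app_feats, n_regions=6):
    km = KMeans(n_clusters=n_regions, n_init=10).fit(poses)
    models = []
    for r in range(n_regions):
        idx = km.labels_ == r
        # One regressor per feature type, specialized to this region
        # of the 3D pose space.
        m_shape = RandomForestRegressor(n_estimators=100).fit(
            shape_feats[idx], poses[idx])
        m_app = RandomForestRegressor(n_estimators=100).fit(
            app_feats[idx], poses[idx])
        models.append((m_shape, m_app))
    return km, models
```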
International Conference on Machine Learning | 2015
Suman Sedai; Pallab Kanti Roy; Rahil Garnavi
Accurate and automatic segmentation of the right ventricle is challenging due to its complex anatomy and the large shape variation observed between patients. In this paper, the ability of shape regression to segment the right ventricle in the presence of large inter-patient shape variation is explored. We propose a robust and efficient cascaded shape regression method that iteratively learns the final shape from a given initial shape. We use gradient boosted regression trees to learn each regressor in the cascade, taking advantage of their supervised feature selection mechanism. A novel data augmentation method is proposed to generate synthetic training samples that improve the regressors' performance. In addition, a robust fusion method is proposed to reduce the variance in the predictions given by different initial shapes, which is a major drawback of cascaded regression based methods. The proposed method is evaluated on an image set of 45 patients and shows high segmentation accuracy, with a Dice metric of 0.87 ± 0.06. A comparative study shows that our proposed method performs better than state-of-the-art multi-atlas label fusion based segmentation methods.
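A minimal sketch of cascaded shape regression with gradient boosted trees, including a median-based fusion over multiple initial shapes to damp initialization variance; the feature extraction is stubbed out and every name here is an assumption rather than the paper's implementation:

```python
# Cascaded shape regression with robust multi-init fusion (sketch).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

def extract_features(image, shape):
    # Placeholder: in practice, appearance features sampled around
    # the current shape's landmark locations.
    return np.concatenate([shape, [image.mean()]])

def train_cascade(images, true_shapes, init_shape, n_stages=5):
    shapes = np.tile(init_shape, (len(images), 1))
    cascade = []
    for _ in range(n_stages):
        X = np.array([extract_features(im, s)
                      for im, s in zip(images, shapes)])
        # Each stage regresses the residual from current to true shape.
        stage = MultiOutputRegressor(
            GradientBoostingRegressor(n_estimators=100)).fit(
            X, true_shapes - shapes)
        shapes = shapes + stage.predict(X)
        cascade.append(stage)
    return cascade

def predict_robust(image, cascade, init_shapes):
    preds = []
    for s in init_shapes:
        s = s.copy()
        for stage in cascade:
            s = s + stage.predict(
                extract_features(image, s).reshape(1, -1))[0]
        preds.append(s)
    # Coordinate-wise median as a simple robust fusion of the runs.
    return np.median(np.array(preds), axis=0)
```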
International Symposium on Biomedical Imaging | 2017
Suman Sedai; Ruwan B. Tennakoon; Pallab Kanti Roy; Khoa Cao; Rahil Garnavi
Digital Image Computing: Techniques and Applications | 2016
Pallab Kanti Roy; Rajib Chakravorty; Suman Sedai; Dwarikanath Mahapatra; Rahil Garnavi
International Symposium on Biomedical Imaging | 2015
Long Xie; Suman Sedai; Xi Liang; Colin B. Compas; Hongzhi Wang; Paul A. Yushkevich; Tanveer Fathima Syeda-Mahmood
International Symposium on Biomedical Imaging | 2015
Suman Sedai; Pallab Kanti Roy; Rahil Garnavi
arXiv: Computer Vision and Pattern Recognition | 2018
Suman Sedai; Bhavna J. Antony; Dwarikanath Mahapatra; Rahil Garnavi
The fovea is one of the most important anatomical landmarks in the eye, and its localization is required in automated analysis of retinal diseases due to its role in sharp central vision. In this paper, we propose a two-stage deep learning framework for accurate segmentation of the fovea in retinal colour fundus images. In the first stage, coarse segmentation is performed to localize the fovea in the fundus image. The location information from the first stage is then used to perform fine-grained segmentation of the fovea region in the second stage. The proposed method performs end-to-end pixelwise segmentation using a model based on fully convolutional neural networks, which does not require prior knowledge of the location of other retinal structures such as the optic disc (OD) or vasculature geometry. We demonstrate the effectiveness of our method on a dataset of 400 retinal images, achieving an average localization error of 14 ± 7 pixels.
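A minimal sketch of the coarse-to-fine pipeline, assuming two trained segmentation models exposed as callables that return per-pixel fovea probabilities; the crop size, threshold, and model interfaces are our assumptions:

```python
# Two-stage coarse-to-fine fovea segmentation (illustrative sketch).
import numpy as np

def segment_fovea(image, coarse_model, fine_model, crop=128):
    # Stage 1: coarse segmentation on the full image localizes the fovea.
    coarse_prob = coarse_model(image)            # H x W probabilities
    cy, cx = np.unravel_index(coarse_prob.argmax(), coarse_prob.shape)

    # Stage 2: fine-grained segmentation on a patch centered at the
    # coarse location (assumes the image is at least crop x crop).
    h, w = image.shape[:2]
    y0 = int(np.clip(cy - crop // 2, 0, h - crop))
    x0 = int(np.clip(cx - crop // 2, 0, w - crop))
    patch = image[y0:y0 + crop, x0:x0 + crop]
    fine_prob = fine_model(patch)

    # Paste the refined mask back into full-image coordinates.
    mask = np.zeros((h, w), dtype=bool)
    mask[y0:y0 + crop, x0:x0 + crop] = fine_prob > 0.5
    return mask
```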
arXiv: Computer Vision and Pattern Recognition | 2018
Suman Sedai; Dwarikanath Mahapatra; Zongyuan Ge; Rajib Chakravorty; Rahil Garnavi
Retinal fundus images are mainly used by ophthalmologists to diagnose and monitor the development of retinal and systemic diseases. A number of computer-aided diagnosis (CAD) systems have been developed to automate mass screening and diagnosis of retinal diseases. The eye type (left or right eye) of a given retinal image is an important piece of metadata for a CAD system. At present, eye type is graded manually, which is time consuming and error prone. This article presents an automatic method for eye type detection, which can be integrated into existing retinal CAD systems to make them faster and more accurate. Our method combines transfer learning with features based on anatomical prior knowledge to maximize classification accuracy. We evaluate the proposed method on a retinal image set containing 5000 images. Our method achieves a classification accuracy of 94% (area under the receiver operating characteristic curve (AUC) = 0.990).
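A minimal sketch of combining transfer-learned CNN features with an anatomical prior feature for left/right classification; the brightest-pixel optic-disc proxy and the logistic-regression head are our assumptions, not the paper's method:

```python
# Transfer-learned features + anatomical prior feature (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def prior_feature(image):
    # Crude proxy for optic-disc side on a grayscale fundus image:
    # column of the brightest pixel, normalized so that negative
    # values suggest the left half and positive the right half.
    col = np.unravel_index(image.argmax(), image.shape)[1]
    return np.array([2.0 * col / image.shape[1] - 1.0])

def train_eye_classifier(cnn_features, images, labels):
    # Concatenate pretrained CNN embeddings (N x F, e.g. from a
    # backbone trained on ImageNet) with the prior feature (N x 1).
    X = np.hstack([cnn_features,
                   np.array([prior_feature(im) for im in images])])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```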