Avisek Lahiri
Indian Institute of Technology Kharagpur
Publication
Featured research published by Avisek Lahiri.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Avisek Lahiri; Abhijit Guha Roy; Debdoot Sheet; Prabir Kumar Biswas
Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which vary in physical dimension and appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed from different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in learning a dictionary of visual kernels for vessel segmentation. A softmax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.
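As a hedged sketch of two ideas the abstract names — the masking noise of the denoising criterion, and second-level fusion of the members' softmax outputs into a vessel/background decision — the toy below assumes per-pixel class posteriors from each ensemble member; the function names and numbers are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_corrupt(x, p=0.3):
    """Denoising criterion: randomly zero a fraction p of each input patch."""
    return x * (rng.random(x.shape) > p)

def fuse_softmax(member_probs):
    """Second-level fusion: average per-member class posteriors, then argmax.
    member_probs: (n_members, n_pixels, n_classes)."""
    return np.asarray(member_probs).mean(axis=0).argmax(axis=1)

# Three hypothetical fine-tuned members voting on 4 pixels (bg=0, vessel=1).
p1 = np.array([[0.2, 0.8], [0.6, 0.4], [0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.3, 0.7], [0.7, 0.3], [0.8, 0.2], [0.6, 0.4]])
p3 = np.array([[0.1, 0.9], [0.4, 0.6], [0.7, 0.3], [0.4, 0.6]])
labels = fuse_softmax([p1, p2, p3])  # averaged posteriors decide each pixel
```

Averaging posteriors (rather than hard majority voting) lets a confident member outvote two lukewarm ones, which is one of several plausible fusion strategies the abstract alludes to.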
Computer Vision and Pattern Recognition | 2017
Avisek Lahiri; Kumar Ayush; Prabir Kumar Biswas; Pabitra Mitra
Convolutional neural network (CNN) based semantic segmentation requires extensive pixel-level manual annotation, which is daunting for large microscopic images. This paper aims to mitigate that labeling effort by leveraging the recent concept of the generative adversarial network (GAN), wherein a generator maps a latent noise space to realistic images while a discriminator differentiates between samples drawn from the database and from the generator. We extend this concept to multi-task learning, wherein a discriminator-classifier network both differentiates between fake/real examples and assigns correct class labels. Though the concept is generic, we apply it to the challenging task of vessel segmentation in fundus images. We show that the proposed method is more data efficient than a CNN. Specifically, with 150K, 30K and 15K training examples, the proposed method achieves mean AUC of 0.962, 0.945 and 0.931 respectively, whereas a simple CNN achieves AUC of 0.960, 0.921 and 0.916 respectively.
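A common way to realize such a discriminator-classifier is a single (K+1)-way head: K real classes plus one "fake" class, so one cross-entropy objective covers both tasks. The sketch below assumes that construction with toy logits; it is an illustration of the idea, not the paper's network:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ce(logits, y):
    """Cross-entropy of (K+1)-way logits against integer targets y."""
    p = softmax(logits)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

K = 2                                        # e.g. vessel / background
# Supervised real examples are pushed toward their true class...
logits_real = np.array([[4.0, 0.5, -2.0],    # confident class 0
                        [0.2, 3.5, -1.5]])   # confident class 1
loss_real = ce(logits_real, np.array([0, 1]))
# ...while generated examples are pushed toward the extra "fake" class K.
logits_fake = np.array([[-1.0, -0.5, 3.0]])
loss_fake = ce(logits_fake, np.array([K]))
```

Because the class head and the real/fake head share features, unlabeled and generated samples still shape the representation, which is the source of the data efficiency the abstract reports.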
International Conference on Advances in Pattern Recognition | 2015
Avisek Lahiri; Prabir Kumar Biswas
In this paper, we introduce a new scalable platform for knowledge-sharing-based group learning in an adaptive boosting (AdaBoost) environment for supervised learning. Though knowledge sharing has been an active area of research in semi-supervised learning, the concept has not been explored thoroughly in the supervised learning framework. In our proposed algorithm, several learner agents are trained simultaneously on subsets of the original feature space. Every agent is trained using the same baseline algorithm, such as an artificial neural network (ANN). In each knowledge-sharing session, the colony of agents calculates the difficulty of each training sample and accordingly changes the weight distribution over the training data space based on a probabilistic metric. Finally, for classification, the decisions of all the agents are conglomerated using a novel variant of majority voting. Based on the voting protocol, we present three different ensemble learning algorithms. Extensive simulations performed on samples from the Color FERET and UCI databases reveal that our algorithm outperforms traditional non-cooperative boosting algorithms and some recent variants of collaborative boosting algorithms in terms of training error convergence rate, classification accuracy, and resiliency against labeling noise. Furthermore, the error-diversity relationships of the ensemble learners are investigated using Kappa-error diagrams. The results are promising.
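The colony's weight update can be sketched as follows: sample difficulty is taken as the fraction of agents that misclassify it, and the boosting distribution shifts mass toward collectively difficult samples. The exponential form used here is the standard AdaBoost-style reweighting as a stand-in; the paper's exact probabilistic metric may differ:

```python
import numpy as np

def shared_reweight(weights, member_preds, y, alpha=1.0):
    """Colony-level reweighting after one knowledge-sharing session.
    weights:      (n,) current distribution over training samples
    member_preds: (m, n) predictions of the m agents
    y:            (n,) true labels"""
    difficulty = (member_preds != y).mean(axis=0)  # fraction of agents wrong
    w = weights * np.exp(alpha * difficulty)       # boost hard samples
    return w / w.sum()                             # renormalize to a distribution

y = np.array([0, 1, 1, 0])
preds = np.array([[0, 1, 0, 0],   # agent 1 errs on sample 2
                  [0, 0, 0, 0],   # agent 2 errs on samples 1 and 2
                  [0, 1, 1, 0]])  # agent 3 is correct everywhere
w = shared_reweight(np.full(4, 0.25), preds, y)
# sample 2, wrong for two of three agents, now carries the most weight
```

Unlike single-learner AdaBoost, a sample an individual agent gets wrong is only upweighted in proportion to how much of the group also struggles with it.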
Indian Conference on Computer Vision, Graphics and Image Processing | 2014
Avisek Lahiri; Prabir Kumar Biswas
This paper proposes knowledge sharing and cooperation based Adaptive Boosting (KSC-AdaBoost) for supervised collaborative learning in the presence of two different feature spaces (views) representing a training example. In such a binary learner space, two learner agents are trained on the two feature spaces. The difficulty of a training example is ascertained not only by the classification performance of an individual learner but also by overall group performance on that example. Group learning is enhanced by a novel algorithm for assigning weights to training set data. Three different models of KSC-AdaBoost are proposed for agglomerating the decisions of the two learners. KSC-AdaBoost outperforms traditional AdaBoost and some recent variants of AdaBoost in terms of convergence rate of training set error and generalization accuracy. The paper then presents a KSC-AdaBoost-based hierarchical model for accurate eye region localization, followed by a fuzzy rule-based system for robust eye center detection. Exhaustive experiments on five publicly available popular datasets reveal the viability of the learning models and superior eye detection accuracy over recent state-of-the-art algorithms.
IEEE Transactions on Neural Networks | 2018
Avisek Lahiri; Biswajit Paria; Prabir Kumar Biswas
Multiview assisted learning has gained significant attention in recent years in the supervised learning genre. The availability of high-performance computing devices enables learning algorithms to search simultaneously over multiple views or feature spaces to obtain optimum classification performance. This paper is a pioneering attempt at formulating a mathematical foundation for a multiview-aided collaborative boosting architecture for multiclass classification. Most present algorithms apply multiview learning heuristically, without exploring the fundamental mathematical changes imposed on traditional boosting; most are also restricted to a two-class or two-view setting. Our proposed mathematical framework enables collaborative boosting across any finite-dimensional view spaces for multiclass learning. The boosting framework is based on a forward stagewise additive model, which minimizes a novel exponential loss function. We show that the exponential loss function essentially captures the difficulty of a training sample space instead of the traditional "1/0" loss. The new algorithm restricts a weak view from overlearning, thereby preventing overfitting. The model is inspired by our earlier attempt at collaborative boosting, which was devoid of mathematical justification. The proposed algorithm is shown to converge much nearer to the global minimum in the exponential loss space and thus supersedes our previous algorithm. This paper also presents analytical and numerical analyses of convergence and margin bounds for multiview boosting algorithms, and we show that our proposed ensemble learning manifests a lower error bound and a higher margin compared with our previous model. The proposed model is also compared with traditional boosting and recent multiview boosting algorithms. In the majority of instances, the new algorithm manifests a faster rate of convergence on training set error and simultaneously offers better generalization performance. Kappa-error diagram analysis reveals the robustness of the proposed boosting framework to labeling noise.
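The family the abstract places its framework in — a forward stagewise additive model minimizing an exponential loss — can be illustrated with plain two-class AdaBoost, where each round adds one weak learner with a closed-form coefficient. This is only a stand-in: the paper's loss couples multiple views and classes and is not reproduced here.

```python
import numpy as np

def exp_loss(F, y):
    """Exponential surrogate: penalizes the margin y*F(x) smoothly,
    unlike the 1/0 loss, so it reflects per-sample difficulty."""
    return np.exp(-y * F).mean()

def stagewise_round(F, h, y):
    """Add one weak learner h (values in {-1,+1}) to the additive model F
    with the coefficient that minimizes the exponential loss."""
    w = np.exp(-y * F)
    w /= w.sum()                                   # current sample difficulties
    err = w[h != y].sum() if np.any(h != y) else 1e-12
    beta = 0.5 * np.log((1 - err) / err)           # closed-form AdaBoost weight
    return F + beta * h

y = np.array([1, 1, -1, -1, 1])
F = np.zeros(5)                                    # initial loss is exactly 1
h1 = np.array([1, 1, -1, 1, 1])                    # weak learner, one mistake
F = stagewise_round(F, h1, y)
h2 = np.array([1, -1, -1, -1, 1])                  # a different mistake
F = stagewise_round(F, h2, y)                      # loss drops each round
```

Each round's sample weights `w` are exactly the per-sample exponential losses, which is the sense in which the surrogate "captures difficulty" rather than a hard right/wrong count.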
Asian Conference on Computer Vision | 2014
Avisek Lahiri; Prabir Kumar Biswas
Multiview representation of data is common in disciplines such as computer vision, bio-informatics, etc. Traditional fusion methods train independent classifiers on each view and finally conglomerate them using weighted summation. Such approaches are devoid of inter-view communication and thus are not guaranteed to yield the best possible ensemble classifier on the given sample-view space. This paper proposes a new algorithm for multiclass classification using multi-view assisted supervised learning (MA-AdaBoost). MA-AdaBoost uses adaptive boosting to initially train baseline classifiers on each view. After each boosting round, the classifiers share their classification performances. Based on this communication, the weight of an example is ascertained by its classification difficulty across all views. Two versions of MA-AdaBoost are proposed based on the nature of the final output of the baseline classifiers. Finally, the decisions of the baseline classifiers are agglomerated based on a novel algorithm of reward assignment. The paper then presents classification comparisons on benchmark UCI datasets and eye samples collected from the FERET database. Kappa-error diversity diagrams are also studied. In the majority of instances, MA-AdaBoost outperforms traditional AdaBoost, variants of AdaBoost, and recent works on supervised collaborative learning with respect to convergence rate of training set error and generalization error. The error-diversity results are also encouraging.
arXiv: Computer Vision and Pattern Recognition | 2018
Avisek Lahiri; Vineet Jain; Arnab Mondal; Prabir Kumar Biswas
arXiv: Computer Vision and Pattern Recognition | 2018
Avisek Lahiri; Charan Reddy; Prabir Kumar Biswas
arXiv: Computer Vision and Pattern Recognition | 2018
Avisek Lahiri; Arnav Kumar Jain; Divyasri Nadendla; Prabir Kumar Biswas
arXiv: Computer Vision and Pattern Recognition | 2018
Avisek Lahiri; Abhinav Agarwalla; Prabir Kumar Biswas