Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sunirmal Khatua is active.

Publication


Featured research published by Sunirmal Khatua.


International Conference on Parallel Processing | 2013

Application-Centric Resource Provisioning for Amazon EC2 Spot Instances

Sunirmal Khatua; Nandini Mukherjee

In late 2009, Amazon introduced spot instances to offer their unused resources at lower cost with reduced reliability. Amazon's spot instances allow customers to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current spot price. The spot price changes periodically based on the supply of and demand for spot instances, and customers whose bid exceeds it gain access to the available spot instances. Customers can therefore expect services at lower cost with spot instances compared to on-demand or reserved instances. However, reliability is compromised, since the instances (IaaS) providing the service (SaaS) may become unavailable at any time without notice to the customer. In this paper, we study various checkpointing schemes to increase reliability over spot instances. We also devise a novel checkpointing scheme on top of an application-centric resource provisioning framework that increases reliability while reducing cost significantly.
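
The basic spot-instance mechanism described above (the instance runs only while the bid exceeds the spot price, and checkpointing preserves progress across out-of-bid interruptions) can be illustrated with a minimal sketch. The simulation loop, price trace, and hourly checkpoint policy below are illustrative assumptions, not the authors' exact scheme.

```python
# Illustrative sketch (not the authors' exact scheme): simulate a spot instance
# that runs while bid >= spot price, takes periodic checkpoints, and resumes
# from the last checkpoint after an out-of-bid interruption.

def simulate_spot_run(price_trace, bid, work_needed, checkpoint_interval):
    """Return the billed hours needed to finish `work_needed` hours of work.

    price_trace         -- hourly spot prices (assumed data)
    bid                 -- customer's bid; instance runs only while bid >= price
    checkpoint_interval -- hours of progress between checkpoints
    """
    progress = 0.0          # completed work that is safely checkpointed
    since_ckpt = 0.0        # work done since the last checkpoint
    billed = 0

    for price in price_trace:
        if bid >= price:                      # instance is available this hour
            billed += 1
            since_ckpt += 1.0
            if since_ckpt >= checkpoint_interval:
                progress += since_ckpt        # persist progress
                since_ckpt = 0.0
            if progress + since_ckpt >= work_needed:
                return billed
        else:                                 # out-of-bid: instance revoked
            since_ckpt = 0.0                  # unsaved work is lost
    return billed                             # trace ended before completion


if __name__ == "__main__":
    trace = [0.03, 0.04, 0.08, 0.03, 0.03, 0.05, 0.03, 0.03]
    print(simulate_spot_run(trace, bid=0.05, work_needed=5, checkpoint_interval=1))
```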


Virtual Environments, Human-Computer Interfaces and Measurement Systems | 2010

Optimizing the utilization of virtual resources in Cloud environment

Sunirmal Khatua; Anirban Ghosh; Nandini Mukherjee

One of the key factors behind the successful deployment of the Cloud for on-demand services is the optimal utilization of its virtual resources. A poorly managed cloud application may incur a cost even higher than that of a physical deployment. The most important issues in the Cloud are scalability and availability: an over-scaled deployment may lead to poor resource utilization, whereas an under-scaled deployment may lead to unavailability of services. This paper proposes an architecture for the optimal utilization of such resources considering both scalability and availability. The proposed architecture, named Monitoring & Optimizing Virtual Resources (MOVR), manages and optimizes the usage of the resources required by a cloud application, covering auto-deployment, auto-scaling and auto-recovery of the provisioned resources for the application.
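
The scalability/availability trade-off that MOVR manages can be sketched with a simple threshold-based scaling decision: scale out when utilization threatens availability, scale in when resources are wasted. The thresholds and decision function below are assumptions for illustration, not MOVR's actual policy.

```python
# Illustrative threshold-based scaling decision (assumed thresholds, not MOVR's policy).

def scaling_decision(cpu_utilization, current_vms, min_vms=1, max_vms=10,
                     upper=0.75, lower=0.30):
    """Return the new VM count for one monitoring interval.

    High utilization threatens availability, so scale out;
    low utilization wastes resources, so scale in.
    """
    if cpu_utilization > upper and current_vms < max_vms:
        return current_vms + 1        # scale out to protect availability
    if cpu_utilization < lower and current_vms > min_vms:
        return current_vms - 1        # scale in to improve utilization
    return current_vms                # within the acceptable band


print(scaling_decision(0.82, current_vms=3))   # -> 4
print(scaling_decision(0.20, current_vms=3))   # -> 2
```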


2014 Applications and Innovations in Mobile Computing (AIMoC) | 2014

Negotiation based service brokering using game theory

Benay Kumar Ray; Sunirmal Khatua; Sarbani Roy

Enterprise cloud computing has emerged as a promising technology in which services like storage, infrastructure, software and platform are provisioned on demand. The growing market for cloud computing has resulted in a variety of heterogeneous cloud services, which makes it difficult for a Cloud Service Consumer (SC) to select the best-fitting Cloud Service Provider (SP), one that can provide the best-quality resources at a negotiated price. We therefore propose a middleware-based Cloud Service Broker (SB) architecture for enterprise cloud computing. The objective of the SB is to find the most suitable SP for an SC based on negotiation over Service Level Agreement (SLA) parameters such as price and quality. Second, we propose a game-theoretic model for automatic SLA negotiation between the SC and SP in which the SB provides optimal values of price and quality to both parties.
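
To make the negotiation idea concrete, a toy alternating-concession model over price might look like the following. This is only an assumed sketch of negotiation between an SC and an SP; the paper's game-theoretic formulation over both price and quality differs.

```python
# Illustrative sketch of alternating-offer SLA negotiation over price
# (a toy concession model, not the paper's game-theoretic formulation).

def negotiate_price(consumer_max, provider_min, step=1.0, max_rounds=100):
    """Consumer raises its offer and provider lowers its ask until they cross.

    consumer_max -- highest price the consumer will accept
    provider_min -- lowest price the provider will accept
    Returns the agreed price, or None if no agreement is reached.
    """
    offer, ask = 0.0, 2 * consumer_max        # assumed opening positions
    for _ in range(max_rounds):
        if offer >= ask:                       # offers crossed: settle midway
            return round((offer + ask) / 2, 2)
        offer = min(offer + step, consumer_max)   # consumer concedes upward
        ask = max(ask - step, provider_min)       # provider concedes downward
    return None if offer < ask else round((offer + ask) / 2, 2)


print(negotiate_price(consumer_max=20.0, provider_min=12.0))
```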


IEEE/ACM International Conference on Utility and Cloud Computing | 2014

Prediction-Based Instant Resource Provisioning for Cloud Applications

Sunirmal Khatua; Moumita Mitra Manna; Nandini Mukherjee

Dynamic provisioning of computing resources to fulfill an application's requirements based on its current demand is one of the key challenges in a cloud environment. However, a resource does not become available to the application merely by launching VMs; the provisioned VMs must also be reconfigured, which is time-consuming and application-dependent. To solve this instant resource provisioning problem, this paper proposes auto-scaling techniques based on prediction and proportional thresholding.
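
A minimal sketch of the prediction-driven idea, combining a simple moving-average forecast with proportional headroom when sizing the VM pool, is shown below. The forecasting method, headroom factor and capacity figures are assumptions, not the paper's model.

```python
# Illustrative sketch: predict the next interval's demand with a moving average
# and size the VM pool proportionally (assumed parameters, not the paper's model).

import math

def predict_demand(history, window=3):
    """Simple moving-average forecast of the next interval's request rate."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def vms_needed(predicted_demand, capacity_per_vm, headroom=0.2):
    """Provision enough VMs for the forecast plus proportional headroom."""
    return max(1, math.ceil(predicted_demand * (1 + headroom) / capacity_per_vm))


history = [120, 150, 180, 240, 300]        # requests per interval (assumed data)
forecast = predict_demand(history)
print(forecast, vms_needed(forecast, capacity_per_vm=100))
```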


IOSR Journal of Computer Engineering | 2012

An Efficient Biological Sequence Compression Technique Using LUT and Repeat in the Sequence

Subhankar Roy; Sunirmal Khatua; Sudipta Roy; Samir Kumar Bandyopadhyay

Data compression plays an important role in dealing with the high volume of DNA sequences in the field of Bioinformatics. Data compression techniques also directly affect the alignment of DNA sequences, so the time needed to decompress a compressed sequence must be given the same priority as the compression ratio. This article presents two improved biological sequence compression algorithms, preceded by an introduction and a brief review of existing biological sequence compression techniques and followed by results, conclusions and discussion, and future scope. The proposed algorithms achieve a very good compression factor with a higher saving percentage and less time for compression and decompression than previous biological sequence compression algorithms. Keywords: hash map table, tandem repeats, compression factor, compression time, saving percentage, compression, decompression process.
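
Two of the ideas named in the keywords, a lookup table that packs each base into 2 bits and run-length coding of simple tandem repeats, can be sketched as follows. This is illustrative only and is not the authors' exact algorithm.

```python
# Minimal sketch of two ideas from the keywords: a lookup table (LUT) that packs
# each base into 2 bits, plus run-length coding of simple tandem repeats.
# Illustrative only, not the authors' exact algorithm.

LUT = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_2bit(seq):
    """Pack an ACGT string into bytes, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | LUT[base]
        out.append(byte)
    return bytes(out)

def run_length(seq):
    """Encode tandem repeats of a single base as (base, count) pairs."""
    runs, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        runs.append((seq[i], j - i))
        i = j
    return runs


print(pack_2bit("ACGTACGT"))        # 2 bytes instead of 8
print(run_length("AAAACCGTTT"))     # [('A', 4), ('C', 2), ('G', 1), ('T', 3)]
```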


Archive | 2017

A Review on Energy Efficient Resource Management Strategies for Cloud

Srimoyee Bhattacherjee; Sunirmal Khatua; Sarbani Roy

Green computing has acquired significant importance in current research: with mammoth advancements in the field of Information and Communications Technology (ICT), energy consumption has increased manifold over the last decade. As the services provided by cloud providers have grown, a large amount of energy is being consumed by the cloud data centers distributed all over the world. These data centers contribute substantially to the carbon footprint being generated, which in turn is detrimental to the environment. This paper studies current research on building energy-efficient cloud networks and explores strategies that can be adopted to reduce the energy consumption of cloud data centers, thus opening new avenues towards building a 'green' cloud network.


International Conference on Distributed Computing and Internet Technology | 2015

Cloud Federation Formation Using Coalitional Game Theory

Benay Kumar Ray; Sunirmal Khatua; Sarbani Roy

Cloud federation has emerged as a new paradigm in which a group of Cloud Service Providers (SPs) cooperate to share resources with peers in order to gain economic advantage. In this paper, we study the cooperative behavior of a group of cloud SPs. We present a broker-based cloud federation architecture and model the formation of cloud federations using coalitional game theory. The objective is to find the most suitable federation for cloud SPs, one that maximizes the satisfaction level of each individual SP on the basis of Quality of Service (QoS) attributes such as availability and price.
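
A toy sketch of coalition selection in this spirit enumerates possible federations and picks the one whose least-satisfied member is best off. The satisfaction values and selection rule below are assumed for illustration and do not reproduce the paper's QoS-based model.

```python
# Illustrative sketch: enumerate possible federations (coalitions) of providers and
# choose the one maximizing the minimum member satisfaction. The satisfaction
# values are assumed toy numbers, not the paper's QoS-based model.

from itertools import combinations

# satisfaction[provider][coalition_size] -- assumed toy data
satisfaction = {
    "SP1": {1: 0.4, 2: 0.7, 3: 0.6},
    "SP2": {1: 0.5, 2: 0.8, 3: 0.7},
    "SP3": {1: 0.6, 2: 0.6, 3: 0.9},
}

def best_federation(providers):
    """Return the coalition whose least-satisfied member is best off."""
    best, best_value = None, -1.0
    for size in range(1, len(providers) + 1):
        for coalition in combinations(providers, size):
            value = min(satisfaction[p][size] for p in coalition)
            if value > best_value:
                best, best_value = coalition, value
    return best, best_value


print(best_federation(list(satisfaction)))   # -> (('SP1', 'SP2'), 0.7)
```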


BMC Bioinformatics | 2014

PVT: An Efficient Computational Procedure to Speed up Next-generation Sequence Analysis

Ranjan Kumar Maji; Arijita Sarkar; Sunirmal Khatua; Subhasis Dasgupta; Zhumur Ghosh

Background: High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution for analysing such data. Further, for the different types of NGS data, certain common challenging steps are involved in the analysis. Spliced alignment is one such fundamental step in NGS data analysis that is extremely computation-intensive as well as time-consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we introduce PVT (Pipelined Version of TopHat), in which we take a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. We thus address the discrepancies in TopHat so as to analyse large NGS data efficiently.

Results: We analysed the SRA datasets SRX026839 and SRX026838, consisting of single-end reads, and the SRA dataset SRR1027730, consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and recorded the CPU usage, memory footprint and execution time during spliced alignment. With this baseline information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during spliced alignment and breaks the job into a pipeline of multiple stages (each comprising different steps) to improve resource utilization, thus reducing the execution time.

Conclusions: PVT provides an improvement over TopHat for spliced alignment in NGS data analysis. PVT reduced the execution time by ~23% for the single-end read dataset. Further, PVT designed for paired-end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover, we propose PVT-Cloud, which implements the PVT pipeline in a cloud computing system.
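
The general pipelining idea, breaking a serial job into stages connected by queues so that successive chunks are processed concurrently, can be sketched as below. The stage names and threading scheme are assumptions for illustration; they are not PVT's actual implementation or TopHat's real processing steps.

```python
# Illustrative sketch of the general pipelining idea: break a serial job into
# stages connected by queues so successive chunks are processed concurrently.
# Stage names and the threading scheme are assumptions, not PVT's implementation.

import queue
import threading

def stage(worker, inbox, outbox):
    """Consume items from `inbox`, apply `worker`, forward results to `outbox`."""
    while True:
        item = inbox.get()
        if item is None:                 # sentinel: propagate shutdown
            if outbox is not None:
                outbox.put(None)
            break
        if outbox is not None:
            outbox.put(worker(item))

# Toy "stages" standing in for steps such as read preparation and alignment.
prepare = lambda chunk: f"prepared({chunk})"
align = lambda chunk: f"aligned({chunk})"

q1, q2, done = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(prepare, q1, q2)),
    threading.Thread(target=stage, args=(align, q2, done)),
]
for t in threads:
    t.start()

for chunk in ["reads_part1", "reads_part2", "reads_part3"]:
    q1.put(chunk)
q1.put(None)                             # signal end of input

for t in threads:
    t.join()

while True:
    item = done.get()
    if item is None:
        break
    print(item)                          # aligned(prepared(reads_partN))
```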


International Conference on Distributed Computing and Internet Technology | 2013

Power Efficient Data Gathering for Sensor Network

Anushua Dutta; Kunjal Thakkar; Sunirmal Khatua; Rajib K. Das

In this paper we present an algorithm to construct a rooted tree, with the base station as root, connecting all the sensor nodes. The tree is constructed with the aim of maximizing the network lifetime. It is assumed that all nodes have the same initial energy, but they can adjust their transmission range, so the amount of energy needed for transmission may vary. The cost of a node is the amount of energy it spends in each data gathering round. The energy lost in constructing the tree is also considered when determining a node's lifetime: the lifetime of a node is its residual energy (initial energy minus the energy spent exchanging messages during construction) divided by its cost. The lifetime of the network is the minimum of the lifetimes of all the nodes in the sensor network. It is also assumed that the sensed data is aggregated, so that each node sends a fixed-size message to its parent in each data gathering round. The algorithm works in two phases. In the first phase, an initial tree is constructed in which the path from a sensor node to the base station consists of the least possible number of hops. In the second phase (called fine-tuning), nodes may change their parents if that leads to a reduction in the maximum cost of the three nodes involved (the node, its present parent, and its future parent). The algorithms for both phases (initial tree construction and fine-tuning) are distributed, with each node taking decisions based on the status of its neighbors. Experimental results show that fine-tuning leads to considerable improvement in the lifetime of the network. The computed lifetime is significantly higher than those obtained by other well-known data gathering algorithms.
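
The lifetime metric described above translates directly into a small sketch: a node's lifetime is its residual energy divided by its per-round cost, and the network lifetime is the minimum over all nodes. The energy figures below are assumed toy values.

```python
# Illustrative sketch of the lifetime metric described above. Energy figures
# are assumed toy values, not experimental data from the paper.

def node_lifetime(initial_energy, setup_energy, per_round_cost):
    """Residual energy after tree construction divided by per-round cost."""
    return (initial_energy - setup_energy) / per_round_cost

def network_lifetime(nodes):
    """Network lifetime = minimum node lifetime (rounds until the first node dies)."""
    return min(node_lifetime(n["initial"], n["setup"], n["cost"]) for n in nodes)


nodes = [
    {"initial": 100.0, "setup": 2.0, "cost": 0.5},   # leaf with a short link
    {"initial": 100.0, "setup": 3.0, "cost": 1.2},   # relay forwarding children's data
    {"initial": 100.0, "setup": 2.5, "cost": 0.8},
]
print(network_lifetime(nodes))   # ~80.8 rounds, limited by the relay node
```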


International Journal of Computer Applications | 2013

Compression Algorithm for all Specified bases in Nucleic Acid Sequences

Subhankar Roy; Sunirmal Khatua

Organizations such as IT industries, colleges and scientists regularly encounter problems in handling large data sets for various purposes in many areas, for example biological research. These limitations also affect internet searches to fetch data, business analysis, etc. What is needed are generalized yet specialized compression algorithms for dissimilar data to obtain the utmost saving percentage. In this article, compression of biological data, namely single- and double-strand DNA and single-strand RNA, is considered. Biological data are less random than typical text data, meaning that redundancy within the sequences is higher, and they have special properties such as different types of repeats; one such repeat is the dinucleotide repeat, which occurs frequently in any sequence. The two proposed algorithms are based on this repeat, using a static fixed-length LUT for mapping the input file to the output file.
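
A minimal sketch of a static fixed-length LUT over dinucleotides, mapping each of the 16 dinucleotides to a 4-bit code, is shown below. It is illustrative only; handling of ambiguous bases and odd-length sequences, and the papers' full algorithms, are omitted.

```python
# Minimal sketch of a static fixed-length LUT that maps each dinucleotide to a
# 4-bit code (illustrative; ambiguous bases and odd-length input are not handled).

from itertools import product

# 16 dinucleotides -> 4-bit codes, two dinucleotides packed per output byte.
DINUC_LUT = {a + b: i for i, (a, b) in enumerate(product("ACGT", repeat=2))}

def compress_dinucleotides(seq):
    """Pack a DNA string (length a multiple of 4) into one byte per 4 bases."""
    codes = [DINUC_LUT[seq[i:i + 2]] for i in range(0, len(seq), 2)]
    return bytes((codes[i] << 4) | codes[i + 1] for i in range(0, len(codes), 2))


print(compress_dinucleotides("AATTACGT"))   # 2 bytes instead of 8
```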

Collaboration


Dive into Sunirmal Khatua's collaborations.

Top Co-Authors

Avirup Saha

Indian Institute of Technology Kharagpur
