
Publication


Featured research published by Nenavath Srinivas Naik.


Advances in Computing and Communications | 2014

A review of adaptive approaches to MapReduce scheduling in heterogeneous environments

Nenavath Srinivas Naik; Atul Negi; V. N. Sastry

MapReduce is currently a significant model for the distributed processing of large-scale data-intensive applications. The MapReduce default scheduler is limited by the assumptions that cluster nodes are homogeneous and that tasks progress linearly; it relies on these assumptions to decide when to speculatively re-execute straggler tasks. The assumption of homogeneity does not always hold in practice, and MapReduce does not fundamentally account for the heterogeneity of nodes in compute clusters. It is evident that straggler tasks extend total job execution time in heterogeneous environments. Adaptation to a heterogeneous environment depends on computation, communication, architecture, memory and power. In this paper, we first explain existing scheduling algorithms and their respective characteristics. We then review scheduling approaches such as LATE, SAMR and ESAMR, which aim specifically to make MapReduce performance adaptive in heterogeneous environments. Additionally, we introduce a novel approach to MapReduce scheduling in heterogeneous environments that is adaptive and thus learns from past execution performance.
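The straggler-detection idea behind schedulers such as LATE can be sketched as follows. This is a minimal illustration, not any paper's implementation; the `Task` shape, the linear-rate estimate and the 25% cutoff are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    progress: float   # fraction completed, 0.0-1.0
    elapsed: float    # seconds since the task started

def estimated_time_left(task: Task) -> float:
    """LATE-style estimate: assume the observed progress rate continues."""
    if task.progress <= 0:
        return float("inf")
    rate = task.progress / task.elapsed
    return (1.0 - task.progress) / rate

def pick_speculative(tasks, slow_fraction=0.25):
    """Flag the tasks whose estimated time-to-end is the longest."""
    ranked = sorted(tasks, key=estimated_time_left, reverse=True)
    cutoff = max(1, int(len(ranked) * slow_fraction))
    return ranked[:cutoff]
```

A scheduler would then re-execute the flagged tasks speculatively on idle nodes, keeping whichever copy finishes first.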


Recent Advances and Innovations in Engineering (ICRAIE) | 2014

Self-Healing model for software application

Kethavath Prem Kumar; Nenavath Srinivas Naik

Autonomic computing is an intelligent computing approach to self-managed computing systems that require minimal human interference and provide a stable computing environment. Such an environment can be defined in terms of the self-sustaining features of autonomic computing: self-configuring, self-healing, self-optimizing and self-protecting. Self-healing is an emerging research discipline, regarded as one of the key autonomic computing attributes. As the complexity of computer systems increases, the resulting systems become prone to errors that cause major problems for users. For a system to be capable of self-healing, it must be able to understand what has gone wrong and how to remedy it. This paper proposes a self-healing mechanism that monitors, diagnoses and repairs corrupted files in the application, restoring them to their original state. The application is analysed by maintaining hash values of its corresponding files at runtime and recovering any corrupted file from the original application.
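The monitor-diagnose-repair loop over file hashes might look like the following sketch, assuming a pristine backup copy of the application is available. All function names here are hypothetical, not the paper's.

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(app_dir: Path) -> dict:
    """Record the hash of every file in the pristine application."""
    return {p.relative_to(app_dir): file_hash(p)
            for p in app_dir.rglob("*") if p.is_file()}

def heal(app_dir: Path, backup_dir: Path, baseline: dict) -> list:
    """Re-hash each file; restore any mismatch from the original copy."""
    repaired = []
    for rel, expected in baseline.items():
        live = app_dir / rel
        if not live.exists() or file_hash(live) != expected:
            shutil.copy2(backup_dir / rel, live)
            repaired.append(str(rel))
    return repaired
```

In a running system, `heal` would be invoked periodically or on file-change events rather than manually.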


International Conference on Computer Communications | 2014

Securing information by performing forensic and network analysis on hosted virtualization

Nenavath Srinivas Naik; Kethavath Prem Kumar; D. Vasumathi

A hypervisor allows a single system to run two or more operating systems at the same time. To gather forensic proof of suspicious activities or attacks against the system, the evidence kept in system logs plays an important role. In this paper, we analyze the logs, snapshots and network connectivity of guest and host operating systems. We study different virtualization systems and analyze their logs and hypervisor snapshots across different case studies to determine the actions performed on virtual systems. We also analyze deleted and formatted file information with the help of the EnCase forensic tool on open-source virtualization technologies such as VirtualBox and QEMU, to ensure that the information existing in the system remains secure.


Future Generation Computer Systems | 2019

A data locality based scheduler to enhance MapReduce performance in heterogeneous environments

Nenavath Srinivas Naik; Atul Negi; B. R. Tapas Bapu; R. Anitha

MapReduce is an essential framework for distributed storage and the parallel processing of large-scale data-intensive jobs proposed in recent times. The Hadoop default scheduler assumes a homogeneous environment; this assumption does not always hold in practice and limits the performance of MapReduce. Data locality essentially means moving computation closer (for faster access) to the input data. Fundamentally, MapReduce does not always account for heterogeneity from a data locality perspective, and improving data locality for the MapReduce framework is an important issue for the performance of large-scale Hadoop clusters. This paper proposes a novel data locality based scheduler that allocates input data blocks to nodes based on their processing capacity, and schedules map and reduce tasks to nodes based on their computing ability in a heterogeneous Hadoop cluster. We evaluate the proposed scheduler using different workloads from the HiBench benchmark suite. The experimental results show that our scheduler enhances MapReduce performance in heterogeneous environments: it minimizes job execution time and improves data locality for different parameters as compared to the Hadoop default scheduler, the Matchmaking scheduler and the Delay scheduler.
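Allocating blocks in proportion to node capacity could be sketched as below. The largest-remainder apportionment used here is an illustrative assumption, not the paper's algorithm.

```python
def allocate_blocks(num_blocks: int, node_capacity: dict) -> dict:
    """Assign input blocks to nodes in proportion to processing capacity.

    node_capacity maps node name -> relative speed (e.g. measured
    throughput); faster nodes receive proportionally more blocks.
    """
    total = sum(node_capacity.values())
    shares = {n: num_blocks * c / total for n, c in node_capacity.items()}
    alloc = {n: int(s) for n, s in shares.items()}
    # Largest-remainder rounding so every block is placed exactly once.
    leftover = num_blocks - sum(alloc.values())
    by_remainder = sorted(shares, key=lambda n: shares[n] - alloc[n], reverse=True)
    for n in by_remainder[:leftover]:
        alloc[n] += 1
    return alloc
```

For example, with one node three times faster than another, ten blocks split roughly 8/2 rather than 5/5.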


International Journal of Big Data Intelligence | 2018

Improving straggler task performance in a heterogeneous MapReduce framework using reinforcement learning

Nenavath Srinivas Naik; Atul Negi; V. N. Sastry

MapReduce is one of the most significant distributed and parallel processing frameworks for large-scale data-intensive jobs proposed in recent times. Intelligent scheduling decisions can potentially help in significantly reducing the overall runtime of jobs. It is observed that the total time to completion of a job gets extended by a few slow tasks; especially in heterogeneous environments, job completion times do not synchronise. As originally conceived, the MapReduce default scheduler was not very effective at identifying slow tasks. In the literature, the longest approximate time to end (LATE) scheduler extends to heterogeneous environments, but it has limitations in properly estimating task progress, of which it takes a static view. In this paper, we propose a novel reinforcement learning-based MapReduce scheduler for heterogeneous environments called the MapReduce reinforcement learning (MRRL) scheduler. It observes the system state of task execution and suggests speculative re-execution of slower tasks on available nodes in the heterogeneous cluster, without assuming any prior knowledge of the environment's characteristics. The experimental results show consistent improvements in performance compared to the LATE and Hadoop default schedulers for different workloads of the HiBench benchmark suite.
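A tabular Q-learning agent for the speculate-or-wait decision might be sketched as follows. The class name, state discretization and reward design are assumptions for illustration, not the MRRL formulation.

```python
import random
from collections import defaultdict

class SpeculationAgent:
    """Toy Q-learning over (state, action) pairs, where a state is a
    discretized task progress rate and the actions are to wait or to
    speculatively re-execute the task."""

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.actions = ("wait", "speculate")

    def choose(self, state):
        # Epsilon-greedy exploration over the two actions.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup toward reward + discounted best value.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

The reward would come from observed job completion time, so the agent learns when re-execution actually pays off.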


Advances in Computing and Communications | 2017

Implementation of Telugu speech synthesis system

Gangala Ramya; Nenavath Srinivas Naik

Speech synthesis is computer-generated human voice. A text-to-speech (TTS) synthesis system converts written orthographic text into corresponding artificial speech signals. In multilingual cultural settings, listeners expect a high-quality TTS synthesis system to read text in a polyglot manner, i.e., in such a way that the origin of foreign inclusions can be heard, by using correct language-specific pronunciation and prosody. Conventional multilingual TTS synthesis systems are unable to convert such mixed-lingual texts into polyglot speech signals correctly. This paper presents a report on the work done to construct a polyglot TTS synthesis system able to convert mixed-lingual text into polyglot speech signals. A mixed-lingual text analyzer is proposed, built from a combination of monolingual text analyzers, and both the mixed-lingual and multilingual systems are compared.
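Routing tokens of mixed-lingual text to monolingual analyzers can start from script detection, sketched below for the Telugu/Latin case. This is an illustrative assumption, not the paper's analyzer.

```python
def tag_script(text: str) -> list:
    """Tag each whitespace token as Telugu or Latin by Unicode block,
    so it can be routed to the matching monolingual text analyzer."""
    def script_of(token: str) -> str:
        for ch in token:
            if "\u0C00" <= ch <= "\u0C7F":   # Telugu Unicode block
                return "telugu"
        return "latin"
    return [(tok, script_of(tok)) for tok in text.split()]
```

A full analyzer would also need language-specific normalization and prosody rules downstream of this tagging step.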


Advances in Computing and Communications | 2017

A learning-based MapReduce scheduler in heterogeneous environments

Nenavath Srinivas Naik; Atul Negi

MapReduce is an essential framework for distributed storage and the parallel processing of large-scale data-intensive jobs proposed in recent times. The Hadoop default scheduler assumes a homogeneous environment; this assumption does not always hold in practice and limits the performance of MapReduce. In heterogeneous environments, job completion times do not synchronize. Data locality essentially means moving computation closer (for faster access) to the input data, and MapReduce does not always account for heterogeneity from a data locality perspective. Improving data locality for the MapReduce framework is therefore an important issue for the performance of heterogeneous Hadoop clusters, and learning-based scheduling decisions can potentially help in significantly reducing the overall job execution time. In this paper, we provide an overview of a taxonomy for MapReduce schedulers and propose a novel hybrid scheduler using a reinforcement learning based approach. The proposed scheduler identifies true straggler tasks and schedules them on fast processing nodes in a heterogeneous Hadoop cluster, taking data locality into account.


Archive | 2016

Performance Improvement of MapReduce Framework by Identifying Slow TaskTrackers in Heterogeneous Hadoop Cluster

Nenavath Srinivas Naik; Atul Negi; V. N. Sastry

MapReduce is presently recognized as a significant parallel and distributed programming model with wide acclaim for large-scale computing. The MapReduce framework divides a job into map and reduce tasks and schedules these tasks in a distributed manner across the cluster. Scheduling of tasks and identification of "slow TaskTrackers" in heterogeneous Hadoop clusters is the focus of recent research. MapReduce performance is currently limited by its default scheduler, which does not adapt well in heterogeneous environments. In this paper, we propose a scheduling method to identify "slow TaskTrackers" in a heterogeneous Hadoop cluster and implement it by integrating it with the Hadoop default scheduling algorithm. The performance of this method is compared with the Hadoop default scheduler. We observe that the proposed approach shows modest but consistent improvement over the default Hadoop scheduler in heterogeneous environments, minimizing the overall job execution time.
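One simple way to flag slow TaskTrackers from per-node progress rates is a deviation test like the sketch below. The z-score criterion and threshold are assumptions for illustration, not the paper's method.

```python
from statistics import mean, stdev

def slow_tasktrackers(progress_rates: dict, threshold: float = 1.0) -> list:
    """Flag TaskTrackers whose average task progress rate falls more than
    `threshold` standard deviations below the cluster mean.

    progress_rates maps TaskTracker name -> mean progress per second
    across its recent tasks.
    """
    rates = list(progress_rates.values())
    if len(rates) < 2:
        return []
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [tt for tt, r in progress_rates.items()
            if (mu - r) / sigma > threshold]
```

A scheduler could then avoid launching new tasks, or speculative copies, on the flagged nodes.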


Archive | 2016

Enhancing the Performance of MapReduce Default Scheduler by Detecting Prolonged TaskTrackers in Heterogeneous Environments

Nenavath Srinivas Naik; Atul Negi; V. N. Sastry

MapReduce is now a significant parallel processing model for large-scale data-intensive applications using clusters of commodity hardware. The scheduling of jobs and tasks, and the identification of slow TaskTrackers in Hadoop clusters, have been a focus of research in recent years. MapReduce performance is currently limited by its default scheduler, which does not adapt well in heterogeneous environments. In this paper, we propose a scheduling method to identify TaskTrackers that are running slowly in the map and reduce phases of the MapReduce framework in a heterogeneous Hadoop cluster. The proposed method is integrated with the MapReduce default scheduling algorithm, and its performance is compared with the unmodified default scheduler. We observe that the proposed approach improves on the default scheduler in heterogeneous environments: the overall job execution times for different workloads from the HiBench benchmark suite were reduced.


International Conference on Advanced Computing | 2015

Enhancing Performance of MapReduce Framework in Heterogeneous Environments

Nenavath Srinivas Naik; Atul Negi; V. N. Sastry

The MapReduce framework quickly established itself as a vital distributed model for data-intensive applications. The Hadoop default scheduler is restricted by the assumption that cluster nodes are homogeneous. Job execution time is extended by tasks and TaskTrackers that run slowly in a heterogeneous Hadoop cluster. In this paper, we propose a unique MapReduce scheduler that identifies straggler tasks and fast-running TaskTrackers in a heterogeneous Hadoop cluster, so that the JobTracker can assign slow tasks to the fast TaskTrackers within the cluster. The experimental results show consistent performance improvements over the LATE scheduler and the Hadoop default scheduler for various workloads of the HiBench benchmark suite, minimizing job completion time.

Collaboration


Dive into Nenavath Srinivas Naik's collaboration.

Top Co-Authors

Atul Negi (University of Hyderabad)
V. N. Sastry (Institute for Development and Research in Banking Technology)
R. Anitha (S.A. Engineering College)