
Publication


Featured research published by Raghavendra Rao Chillarige.


Software: Practice and Experience | 2016

The anatomy of big data computing

Raghavendra Kune; Pramod Kumar Konugurthi; Arun Agarwal; Raghavendra Rao Chillarige; Rajkumar Buyya

Advances in information technology and its widespread growth in several areas of business, engineering, medical, and scientific studies are resulting in an information/data explosion. Knowledge discovery and decision-making from such rapidly growing voluminous data are challenging tasks in terms of data organization and processing, an emerging trend known as big data computing: a new paradigm that combines large-scale compute, new data-intensive techniques, and mathematical models to build data analytics. Big data computing demands huge storage and computing capacity for data curation and processing, which can be delivered from on-premises or cloud infrastructures. This paper discusses the evolution of big data computing, the differences between traditional data warehousing and big data, a taxonomy of big data computing and its underpinning technologies, the integrated platform of big data and clouds known as big data clouds, the layered architecture and components of a big data cloud, and finally open technical challenges and future directions.


IET Biometrics | 2016

Generating cancellable fingerprint templates based on Delaunay triangle feature set construction

Mulagala Sandhya; Munaga V. N. K. Prasad; Raghavendra Rao Chillarige

In this study, the authors propose a novel fingerprint template protection scheme developed using a Delaunay triangulation net constructed from the fingerprint minutiae. The authors propose two methods, FS_INCIR and FS_AVGLO, to construct a feature set from the Delaunay triangles. The computed feature set is quantised and mapped to a 3D array to produce a fixed-length 1D bit string. A discrete Fourier transform (DFT) is applied to this bit string to generate a complex vector, which is finally multiplied by the user's key to produce a cancellable template. The proposed feature-set computation maintains a good balance between security and performance. The methods are tested on the FVC 2002 and FVC 2004 databases, and the experimental results show satisfactory performance. Further, the authors analyse the four requirements for protecting biometric templates, namely diversity, revocability, irreversibility, and accuracy, demonstrating the feasibility of the proposed scheme.
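The pipeline lends itself to a compact sketch. The following is a minimal, hypothetical rendering of the idea, using the incircle radius of each Delaunay triangle as the feature (a stand-in for FS_INCIR); the quantisation into a simple 1D histogram rather than the paper's 3D array, the bin count, and the key generation are all assumptions, not the authors' actual parameters.

```python
import numpy as np
from scipy.spatial import Delaunay

def cancellable_template(minutiae_xy, user_key_seed, bins=16):
    """Illustrative sketch of a Delaunay-based cancellable template.

    minutiae_xy: (N, 2) array of minutiae coordinates.
    The incircle-radius feature stands in for the paper's FS_INCIR;
    the quantisation and key sizes are assumptions, not the paper's.
    """
    tri = Delaunay(minutiae_xy)
    radii = []
    for simplex in tri.simplices:
        p, q, r = minutiae_xy[simplex]
        a = np.linalg.norm(q - r)
        b = np.linalg.norm(p - r)
        c = np.linalg.norm(p - q)
        s = (a + b + c) / 2.0
        area = max(s * (s - a) * (s - b) * (s - c), 0.0) ** 0.5
        radii.append(area / s)  # incircle radius = area / semi-perimeter

    # Quantise the feature set into a fixed-length bit string.
    hist, _ = np.histogram(radii, bins=bins, range=(0.0, max(radii) + 1e-9))
    bit_string = (hist > 0).astype(float)

    # DFT of the bit string, then multiplication by a user-specific
    # random key: revoking the key revokes the template.
    spectrum = np.fft.fft(bit_string)
    rng = np.random.default_rng(user_key_seed)
    user_key = rng.standard_normal(bins) + 1j * rng.standard_normal(bins)
    return spectrum * user_key
```

In this reading, revoking a compromised template amounts to issuing the user a fresh key seed, while the underlying minutiae never leave the enrolment stage in the clear.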


Intelligent Systems Design and Applications | 2010

Rough set clustering approach to replica selection in data grids (RSCDG)

Rafah M. Almuttairi; Rajeev Wankar; Atul Negi; Raghavendra Rao Chillarige

In data grids, a fast and proper replica selection decision leads to better resource utilization by reducing the latency of accessing the best replicas and speeding up the execution of data grid jobs. In this paper, we propose a new strategy that improves replica selection in data grids with the help of the reduct concept of Rough Set Theory (RST). Using the QuickReduct algorithm, the unsupervised clustering problem is transformed into supervised reduct computation. A rule-generation algorithm is then used to obtain optimum rules that derive usage patterns from the data grid information system. The experiments are carried out using the Rough Set Exploration System (RSES) tool.
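To make the reduct idea concrete, here is a minimal QuickReduct sketch over a toy decision table. The attribute names, the toy records, and the greedy dependency-degree formulation are generic rough-set material and purely illustrative; the paper itself relies on the RSES tool rather than code like this.

```python
from itertools import groupby

def partition(rows, attrs):
    """Group row indices by their values on the given attributes."""
    key = lambda i: tuple(rows[i][a] for a in attrs)
    idx = sorted(range(len(rows)), key=key)
    return [set(g) for _, g in groupby(idx, key=key)]

def dependency(rows, attrs, decision):
    """Fraction of rows whose decision is fully determined by attrs
    (the positive region of the decision with respect to attrs)."""
    if not attrs:
        return 0.0
    pos = 0
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)

def quickreduct(rows, cond_attrs, decision):
    """Greedy QuickReduct: repeatedly add the attribute that most
    increases the dependency degree until it matches the full set."""
    reduct, best = set(), 0.0
    target = dependency(rows, set(cond_attrs), decision)
    while best < target:
        gains = {a: dependency(rows, reduct | {a}, decision)
                 for a in set(cond_attrs) - reduct}
        a = max(gains, key=gains.get)
        reduct.add(a)
        best = gains[a]
    return reduct

# Toy replica records: condition attributes plus a 'select' decision.
rows = [
    {"latency": "low",  "bandwidth": "high", "load": "low",  "select": "yes"},
    {"latency": "low",  "bandwidth": "low",  "load": "high", "select": "no"},
    {"latency": "high", "bandwidth": "high", "load": "low",  "select": "no"},
    {"latency": "low",  "bandwidth": "high", "load": "high", "select": "yes"},
]
print(quickreduct(rows, ["latency", "bandwidth", "load"], "select"))
```

On this toy table the reduct is {latency, bandwidth}: the load attribute adds no discriminating power, which is exactly the kind of attribute a reduct-based selector discards.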


2014 IEEE/ACM International Symposium on Big Data Computing | 2014

Genetic Algorithm Based Data-Aware Group Scheduling for Big Data Clouds

Raghavendra Kune; Pramod Kumar Konugurthi; Arun Agarwal; Raghavendra Rao Chillarige; Rajkumar Buyya

Cloud computing is a promising, cost-efficient, service-oriented computing platform for delivering resources on demand in the fields of science, engineering, business, and social networking. Big Data Clouds are a new generation of data analytics platforms that use Cloud computing as a back-end technology for information mining, knowledge discovery, and decision making based on statistical and empirical tools. MapReduce scheduling models for Big Data computing operate in cluster mode, where the data nodes are pre-configured with computing facilities for processing. These MapReduce models are based on a compute-push model, pushing the logic to the data node for analysis, primarily to minimize or eliminate data migration overheads between computing resources and data nodes. Such models perform well in cluster setups but are ill-suited to platforms with decoupled data storage and computing resources. In this paper, we propose a Genetic Algorithm based scheduler for such Big Data Clouds, where computation and data are offered as decoupled services. The approach is based on evolutionary methods focused on data dependencies, computational resources, and effective utilization of bandwidth, thus achieving higher throughput.
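As a rough illustration of such a scheduler, the sketch below evolves job-to-node assignments whose fitness is the makespan, with each job first paying a data-transfer cost over the node's link to the decoupled store. All numbers, the fitness model, and the GA operators (elitism, truncation selection, one-point crossover, random reassignment mutation) are illustrative assumptions, not the paper's actual formulation.

```python
import random

# Hypothetical problem data: job compute demand (MI), input size (MB),
# node speeds (MIPS), and node-to-storage bandwidth (MB/s).
JOB_MI    = [400, 250, 900, 120, 600]
JOB_MB    = [800, 100, 1500, 50, 700]
NODE_MIPS = [100, 150, 80]
NODE_BW   = [50, 20, 100]

def makespan(assign):
    """Finish time of the busiest node; the transfer term models the
    decoupled storage: each job first pulls its data over the link."""
    load = [0.0] * len(NODE_MIPS)
    for j, n in enumerate(assign):
        load[n] += JOB_MB[j] / NODE_BW[n] + JOB_MI[j] / NODE_MIPS[n]
    return max(load)

def evolve(pop_size=40, gens=200, pm=0.1):
    n_jobs, n_nodes = len(JOB_MI), len(NODE_MIPS)
    pop = [[random.randrange(n_nodes) for _ in range(n_jobs)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        next_pop = pop[:2]                       # elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)  # select among the fittest
            cut = random.randrange(1, n_jobs)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if random.random() < pm:             # mutation: reassign a job
                child[random.randrange(n_jobs)] = random.randrange(n_nodes)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=makespan)

best = evolve()
print(best, makespan(best))
```

The key design point the abstract argues for is visible in the fitness function: because storage and compute are decoupled, bandwidth enters the objective directly, so the GA trades node speed against link capacity rather than assuming data locality.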


International Conference on Cloud Computing | 2015

XHAMI – Extended HDFS and MapReduce Interface for Image Processing Applications

Raghavendra Kune; Pramodkumar Konugurthi; Arun Agarwal; Raghavendra Rao Chillarige; Rajkumar Buyya

The Hadoop Distributed File System (HDFS) and the MapReduce model have become the de facto standard for large-scale data organization and analysis. The existing model of data organization and processing in Hadoop using HDFS and MapReduce is ideally tailored for search and data-parallel applications, which have no data dependency on neighboring/adjacent data. However, many scientific applications, such as image mining, data mining, knowledge data mining, and satellite image processing, depend on adjacent data for processing and analysis. In this paper, we discuss the requirements of overlapped data organization and propose XHAMI, a two-phase extension to HDFS and the MapReduce programming model, to address them. We present the APIs and discuss their implementation specific to the Image Processing (IP) domain in detail, followed by sample case studies of image processing functions along with the results. Although XHAMI incurs a small overhead in data storage and input/output operations, it greatly improves system performance and simplifies the application development process. The proposed system works with existing MapReduce models without any changes and can be used for many domain-specific applications that require overlapped data.
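The paper's actual APIs are not reproduced here, but the core overlapped-read idea can be sketched as follows: each logical split reads a halo of extra image rows on both sides, so a neighborhood operator (e.g., a 3x3 filter) never needs data from another split. The function name and signature below are hypothetical.

```python
def overlapped_splits(n_rows, block_rows, halo):
    """Yield (read_start, read_end, write_start, write_end) row ranges.

    Each split reads `halo` extra rows on both sides of its block but
    writes only its own block, so a window operator of radius <= halo
    can run entirely within one split.
    """
    for start in range(0, n_rows, block_rows):
        end = min(start + block_rows, n_rows)
        yield max(start - halo, 0), min(end + halo, n_rows), start, end

# Splits for a 3x3 filter (radius 1) over a 1000-row image, 256-row blocks:
for split in overlapped_splits(1000, 256, halo=1):
    print(split)
```

The storage overhead the abstract mentions corresponds to the duplicated halo rows, which is why it stays small: one extra row per boundary against hundreds of rows per block.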


Multi-disciplinary Trends in Artificial Intelligence | 2012

Tuning the Optimization Parameter Set for Code Size

N. A. B. Sankar Chebolu; Rajeev Wankar; Raghavendra Rao Chillarige

Determining nearly optimal optimization options for modern-day compilers is a combinatorial problem. In addition, fine-tuning the parameter set used by the various optimization passes for a given application, platform, and optimization objective further increases this complexity. In this paper we propose a greedy iterative approach and investigate the impact of fine-tuning the parameter set on code size. The effectiveness of our approach is demonstrated on benchmark programs from the SPEC2006 benchmark suite, showing that tuning the parameter values has a significant impact on code size.
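A minimal sketch of such a greedy pass is shown below, assuming GCC as the compiler. The two parameters, their candidate values, and the use of object-file size as the code-size metric are placeholders; the paper's tuned parameter set and measurement procedure are not reproduced here.

```python
import os
import subprocess
import tempfile

# Example GCC parameters and candidate values; the real tuned set and
# ranges in the paper are not listed here, so these are placeholders.
PARAMS = {
    "max-inline-insns-auto": [10, 20, 40],
    "max-unrolled-insns": [50, 100, 200],
}

def code_size(source, params):
    """Compile with -Os and the given --param settings, returning the
    object-file size in bytes (a coarse proxy for code size)."""
    flags = []
    for k, v in params.items():
        flags += ["--param", f"{k}={v}"]
    with tempfile.TemporaryDirectory() as tmp:
        obj = os.path.join(tmp, "a.o")
        subprocess.run(["gcc", "-Os", "-c", source, "-o", obj] + flags,
                       check=True)
        return os.path.getsize(obj)

def greedy_tune(source):
    """One greedy pass: fix each parameter in turn to the value that
    minimises code size, holding all other choices at their best."""
    best = {k: vs[0] for k, vs in PARAMS.items()}
    for name, values in PARAMS.items():
        sizes = {v: code_size(source, {**best, name: v}) for v in values}
        best[name] = min(sizes, key=sizes.get)
    return best

# print(greedy_tune("benchmark.c"))   # any C source file to tune against
```

Because each parameter is fixed once and never revisited, a single pass is cheap but order-sensitive; iterating the loop until no parameter changes is the natural "iterative" extension the abstract refers to.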


Software: Practice and Experience | 2017

XHAMI – extended HDFS and MapReduce interface for Big Data image processing applications in cloud computing environments

Raghavendra Kune; Pramod Kumar Konugurthi; Arun Agarwal; Raghavendra Rao Chillarige; Rajkumar Buyya

The Hadoop distributed file system (HDFS) and the MapReduce model have become popular technologies for large-scale data organization and analysis. The existing model of data organization and processing in Hadoop using HDFS and MapReduce is ideally tailored for search and data-parallel applications, which have no data dependency on neighboring/adjacent data. However, many scientific applications such as image mining, data mining, knowledge data mining, and satellite image processing depend on adjacent data for processing and analysis. In this paper, we identify the requirements of overlapped data organization and propose a two-phase extension to HDFS and the MapReduce programming model, called XHAMI, to address them. The extended interfaces are presented as APIs and implemented in the context of the image processing application domain. We demonstrate the effectiveness of XHAMI through case studies of image processing functions along with the results. Although XHAMI has a small overhead in data storage and input/output operations, it greatly enhances system performance and simplifies the application development process. Our proposed system, XHAMI, works without any changes for existing MapReduce models and can be utilized by many applications that require overlapped data.


Archive | 2015

GA-Based Compiler Parameter Set Tuning

N. A. B. Sankar Chebolu; Rajeev Wankar; Raghavendra Rao Chillarige

Determining nearly optimal optimization options for modern-day compilers is a combinatorial problem. In addition, fine-tuning the parameter set used by the various optimization passes for a given application, platform, and optimization objective further increases this complexity. In this paper, we apply a genetic algorithm (GA) to tune the compiler parameter set and investigate the impact of fine-tuning the parameter set on code size. The effectiveness of the GA-based parameter tuning mechanism is demonstrated with benchmark programs from the SPEC2006 benchmark suite, showing that tuning the parameter values has a significant impact on code size. Results obtained by the proposed GA-based parameter tuning technique are compared with existing methods and show significant performance gains.
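The GA variant can be sketched in the same spirit as the greedy pass above: a chromosome holds one candidate-value index per compiler parameter, and fitness is the measured code size (the `measure` callback could be the gcc-based `code_size` function sketched earlier). The search space, operators, and rates below are illustrative assumptions, not the paper's configuration.

```python
import random

# Hypothetical search space: each gene indexes one parameter's
# candidate values (names and ranges are illustrative).
SPACE = [("max-inline-insns-auto", [10, 20, 40, 80]),
         ("max-unrolled-insns", [50, 100, 200, 400])]

def decode(chrom):
    return {name: values[g] for (name, values), g in zip(SPACE, chrom)}

def fitness(chrom, measure):
    """Smaller compiled size is fitter; `measure` maps a
    {param: value} dict to a code size (e.g. by invoking gcc)."""
    return measure(decode(chrom))

def ga_tune(measure, pop_size=20, gens=30, pm=0.2):
    genes = [len(v) for _, v in SPACE]
    pop = [[random.randrange(g) for g in genes] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, measure))
        nxt = pop[:2]                              # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:8], 2)       # truncation selection
            cut = random.randrange(1, len(genes))
            child = a[:cut] + b[cut:]              # one-point crossover
            for i, g in enumerate(genes):          # per-gene mutation
                if random.random() < pm:
                    child[i] = random.randrange(g)
            nxt.append(child)
        pop = nxt
    return decode(min(pop, key=lambda c: fitness(c, measure)))

# best = ga_tune(lambda p: code_size("benchmark.c", p))  # hypothetical hookup
```

Unlike the greedy pass, the GA explores parameter combinations jointly, which is what lets it escape the order-sensitivity of fixing one parameter at a time.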


Multi-disciplinary Trends in Artificial Intelligence | 2014

Image Processing Tool for FAE Cloud Dynamics

Mousumi Roy; Apparao Allam; Arun Agarwal; Rajeev Wankar; Raghavendra Rao Chillarige

Understanding fuel-air explosive (FAE) cloud characteristics is an important activity in FAE warhead design. This paper develops and demonstrates an understanding of cloud dynamics through an image processing methodology that analyzes video of the explosion. It develops an apt region-of-interest (ROI) extraction method as well as models for cloud radius and height. The methodology is validated using experimental data from the High Energy Materials Research Laboratory (HEMRL), Pune.
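A crude version of the ROI-extraction step might look like the OpenCV sketch below: threshold each video frame, keep the largest bright blob as the cloud, and read radius and height off its bounding box. The threshold, the pixel-to-millimetre scale, and the file name are hypothetical; the paper's actual extraction method and radius/height models are more elaborate.

```python
import cv2

def cloud_extent(frame, thresh=200, mm_per_px=1.0):
    """Rough ROI-extraction sketch: threshold the bright explosive
    cloud, take the largest blob, and report its radius and height.
    The threshold and scale are placeholders, not HEMRL's calibration."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    cloud = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(cloud)
    return {"radius_mm": (w / 2) * mm_per_px, "height_mm": h * mm_per_px}

# Track radius/height growth frame by frame from the test video:
cap = cv2.VideoCapture("fae_trial.mp4")   # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(cloud_extent(frame))
```

Plotting the per-frame radius and height against time is what yields the cloud-dynamics curves that the paper's models are fitted to.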


European Conference on Applications of Evolutionary Computation | 2014

A Novel Genetic Algorithmic Approach for Computing Real Roots of a Nonlinear Equation

Vijaya Lakshmi V. Nadimpalli; Rajeev Wankar; Raghavendra Rao Chillarige

Novel pre-processing and post-processing methodologies are designed to enhance the performance of the classical genetic algorithm (GA) approach so as to obtain efficient interval estimates when finding the real roots of a given nonlinear equation. The pre-processing methodology provides a mechanism that adaptively fixes the chromosome-length parameter of the GA. The proposed methodologies have been implemented and demonstrated on a set of benchmark functions to illustrate their effectiveness.
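The adaptive chromosome-length mechanism itself is not detailed in this abstract, so the sketch below shows only the generic idea of evolving bracketing intervals for a root and refining the best one by bisection (a stand-in for the post-processing stage). The test function, operators, and rates are assumptions.

```python
import math
import random

def f(x):                      # example nonlinear equation f(x) = 0
    return math.cos(x) - x * math.exp(-x)

def fitness(a, b):
    """Bracketing intervals (sign change) dominate; among them,
    narrower intervals are fitter (smaller is better)."""
    if f(a) * f(b) < 0:
        return b - a
    return float("inf")

def ga_root_intervals(lo=-5.0, hi=5.0, pop=60, gens=80):
    P = [sorted(random.uniform(lo, hi) for _ in range(2)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda iv: fitness(*iv))
        children = P[:pop // 2]                 # keep the fitter half
        while len(children) < pop:
            (a1, b1), (a2, b2) = random.sample(P[:20], 2)
            a, b = sorted((random.choice((a1, a2)), random.choice((b1, b2))))
            if random.random() < 0.2:           # mutation: jitter an endpoint
                a, b = sorted((a + random.gauss(0, 0.5), b))
            children.append([a, b])
        P = children
    return [iv for iv in P if fitness(*iv) < float("inf")]

def bisect(a, b, tol=1e-10):
    """Post-processing: refine a bracketing interval to a root."""
    while b - a > tol:
        m = (a + b) / 2
        a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
    return (a + b) / 2

intervals = ga_root_intervals()
if intervals:
    a, b = min(intervals, key=lambda iv: iv[1] - iv[0])
    print(bisect(a, b))
```

The division of labour mirrors the abstract: the GA supplies cheap interval estimates around roots, and a deterministic refinement stage converts the best interval into a high-precision root.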

Collaboration


Dive into Raghavendra Rao Chillarige's collaboration.

Top Co-Authors

Arun Agarwal
University of Hyderabad

Atul Negi
University of Hyderabad

Apparao Allam
High Energy Materials Research Laboratory