A. Govardhan
Jawaharlal Nehru Technological University, Hyderabad
Publications
Featured research published by A. Govardhan.
International Journal of Computer Applications | 2011
Ch. Kavitha; B. Prabhakara Rao; A. Govardhan
Digital images are now used widely, so image databases are growing enormously and efficient techniques for finding images in them are in great demand. To retrieve an image, it must be represented by suitable features; color and texture are two important visual features of an image. An efficient image retrieval technique that uses local color and texture features is therefore proposed. An image is first partitioned into sub-blocks of equal size. The color of each sub-block is extracted by quantizing the HSV color space into non-uniform intervals, and the color feature is represented by a cumulative histogram. The texture of each sub-block is obtained using the gray-level co-occurrence matrix. A one-to-one matching scheme compares the query and target images, and Euclidean distance is used to retrieve similar images. The efficiency of the method is demonstrated by the results. General terms: algorithm, search, match.
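A minimal sketch of block-based color plus texture matching in the spirit of this abstract; the block size, number of histogram bins, and GLCM settings are my own assumptions rather than the authors' exact parameters, and both images are assumed to have the same dimensions.

```python
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops

def block_features(img_bgr, block=64):
    """Split an image into equal sub-blocks; describe each block by a
    cumulative HSV hue histogram plus GLCM texture statistics."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    feats = []
    for y in range(0, img_bgr.shape[0] - block + 1, block):
        for x in range(0, img_bgr.shape[1] - block + 1, block):
            hue = hsv[y:y + block, x:x + block, 0]
            hist, _ = np.histogram(hue, bins=16, range=(0, 180))
            cum_hist = np.cumsum(hist / hist.sum())          # cumulative color histogram
            g = gray[y:y + block, x:x + block]
            glcm = graycomatrix(g, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            texture = [graycoprops(glcm, p)[0, 0]
                       for p in ("contrast", "energy", "homogeneity")]
            feats.append(np.concatenate([cum_hist, texture]))
    return np.array(feats)

def image_distance(query_feats, target_feats):
    """One-to-one block matching: sum of Euclidean distances between
    corresponding sub-block feature vectors (smaller means more similar)."""
    return np.linalg.norm(query_feats - target_feats, axis=1).sum()
```

Ranking a database then amounts to computing `image_distance(block_features(query), block_features(candidate))` for every candidate and returning the smallest distances.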
International Conference on Computer Science and Education | 2010
K. Srinivas; G. Raghavendra Rao; A. Govardhan
Heart disease (HD) is a major cause of morbidity and mortality in modern society. Medical diagnosis is an extremely important but complicated task that should be performed accurately and efficiently. This study analyzes the Behavioral Risk Factor Surveillance System survey to test whether self-reported cardiovascular disease rates are higher in the Singareni coal-mining regions of Andhra Pradesh state, India, than in other regions after controlling for other risks. Dependent variables include self-reported measures of being diagnosed with cardiovascular disease (CVD) or with a specific form of CVD, including (1) chest pain, (2) stroke, and (3) heart attack. The heart-care study specifies 15 attributes to predict morbidity. Besides the regular attributes, other general attributes such as BMI (body mass index), physician supply, age, ethnicity, education, and income are used for prediction. An automated system for medical diagnosis would enhance medical care and reduce costs. In this paper, popular data mining techniques, namely decision trees, Naïve Bayes, and neural networks, are used to predict heart disease.
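An illustrative sketch of comparing the three classifier families named in the abstract on a tabular risk-factor dataset; the file name, column names, and hyperparameters are hypothetical, not the study's actual data or settings.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical CSV with the risk-factor attributes and a binary CVD label.
df = pd.read_csv("heart_risk_factors.csv")
X, y = df.drop(columns=["cvd_diagnosed"]), df["cvd_diagnosed"]

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "naive_bayes": GaussianNB(),
    "neural_net": make_pipeline(StandardScaler(),
                                MLPClassifier(hidden_layer_sizes=(32,),
                                              max_iter=500, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold accuracy estimate
    print(f"{name}: {scores.mean():.3f}")
```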
International Journal of Computer Applications | 2010
Basavaraj S. Anami; Suvarna Nandyal; A. Govardhan
This paper presents a method for identifying and classifying images of medicinal plants such as herbs, shrubs, and trees based on color and texture features, using SVM and neural network classifiers. The tribal people in India classify plants according to their medicinal value. In the system of medicine called Ayurveda, identification of medicinal plants is an important activity in the preparation of herbal medicines, and Ayurvedic medicines have become an alternative to allopathic medicine. Hence, leveraging technology for automatic identification and classification of medicinal plants has become essential. Plant species belonging to different classes such as Papaya, Neem, Tulasi, Aloe, and Garlic are considered in this work. The paper presents edge and color descriptors that are low-dimensional, effective, and simple. In addition, rotation-invariant texture descriptors, namely the directional difference and the gradient histogram, are used. These features are obtained from 900 images of medicinal plants and used to train and test image samples of three classes with an SVM and a radial basis exact-fit neural network (RBENN). The classification accuracies for color and edge-texture features are 74% and 80%, respectively, and the accuracy improves to 90% when color and texture features are combined. The results are more encouraging for tree images than for herbs and shrubs, owing to the distinguishing features of the stem.
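A hedged sketch of the "combined features" experiment: an SVM trained on concatenated color and texture feature vectors. The feature extraction is abstracted behind placeholder arrays and the kernel and split are my assumptions; the RBENN classifier is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder stand-ins for the descriptors computed from the 900 plant images;
# real feature vectors from the color/edge/texture descriptors would go here.
rng = np.random.default_rng(0)
color_feats = rng.random((900, 32))
texture_feats = rng.random((900, 16))
labels = rng.integers(0, 5, size=900)       # e.g. Papaya, Neem, Tulasi, Aloe, Garlic

X = np.hstack([color_feats, texture_feats])  # combined color + texture vector
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
print("combined-feature accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```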
International Conference on Computer Science and Education | 2010
B. Ramasubbareddy; A. Govardhan; A. Ramamohanreddy
Association rule mining is one of the most popular data mining techniques for finding associations among items in a set by mining the necessary patterns in a large database. Typical association rules consider only items enumerated in transactions; such rules are referred to as positive association rules. Negative association rules consider the same items but, in addition, consider negated items (i.e., items absent from transactions). Negative association rules are useful in market-basket analysis to identify products that conflict with or complement each other. They are also very useful for constructing associative classifiers. In this paper, we propose an algorithm that mines positive and negative association rules without any additional measure or extra database scans.
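A compact illustration of the idea that negative pair rules can be derived from the same support counts as positive ones, with no extra database scan; the tiny transaction set and the thresholds are purely illustrative, and this is not the paper's algorithm.

```python
from itertools import combinations
from collections import Counter

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"milk", "butter"}, {"bread", "milk", "butter"}, {"tea"}]
n = len(transactions)
min_sup, min_conf = 0.2, 0.6

# One pass over the data collects all item and pair supports.
item_sup = Counter(i for t in transactions for i in t)
pair_sup = Counter(frozenset(p) for t in transactions
                   for p in combinations(sorted(t), 2))

for a, b in combinations(sorted(item_sup), 2):
    s_a = item_sup[a] / n
    s_ab = pair_sup[frozenset((a, b))] / n
    if s_ab >= min_sup and s_ab / s_a >= min_conf:
        print(f"positive: {a} => {b}")
    # Support of (a, not b) follows from counts already gathered: s(a) - s(a,b).
    s_a_notb = s_a - s_ab
    if s_a_notb >= min_sup and s_a_notb / s_a >= min_conf:
        print(f"negative: {a} => not {b}")
```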
International Journal of Computer Applications | 2011
B. Jalender; A. Govardhan; P. Premchand
Reusable software components are designed to apply the power and benefit of reusable, interchangeable parts from other industries to the field of software construction. Benefits of component reuse include sharing common code and keeping components in one place, making development easier and quicker. The most substantial benefits derive from a product-line approach, where a common set of reusable software assets acts as a base for subsequent similar products in a given functional domain. A component is the fundamental unit of large-scale software construction. Every component has an interface and an implementation: the interface is anything that is visible externally to the component, and everything else belongs to its implementation. This paper addresses the primary boundaries for software component reuse technology. Keywords: software reuse, component, boundaries, interface, product. From the introduction: software reuse is the process of creating software systems from existing software rather than building them from scratch [1]. Software reuse is still an emerging discipline. It appears in many different forms, from horizontal and vertical reuse to systematic reuse, and from white-box to black-box reuse. Many different products can be reused, ranging from ideas and algorithms to any documents created during the software life cycle [2]. Source code is most commonly reused in software systems; thus many people mistake software reuse for the reuse of source code alone. Recently, source code and design reuse have become popular with (object-oriented) class libraries, application frameworks, and design patterns. Software components provide a vehicle for planned and systematic reuse [2].
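A small sketch of the interface/implementation split described above, expressed with a Python abstract base class; the Logger component is a hypothetical example, not one taken from the paper.

```python
from abc import ABC, abstractmethod

class Logger(ABC):
    """Interface: everything that is externally visible about the component."""
    @abstractmethod
    def log(self, message: str) -> None: ...

class FileLogger(Logger):
    """Implementation: details hidden behind the interface, so any conforming
    implementation can be swapped in and reused without changing callers."""
    def __init__(self, path: str) -> None:
        self._path = path

    def log(self, message: str) -> None:
        with open(self._path, "a") as f:
            f.write(message + "\n")
```

Client code depends only on `Logger`, which is what makes the component interchangeable across products in a product line.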
Procedia Computer Science | 2014
G. Madhu; T.V. Rajinikanth; A. Govardhan
In real-time data mining applications, discrete values play a vital role in knowledge representation, as they are easy to handle and closer to the knowledge level than continuous attributes. Discretization is a major step in the data mining process in which continuous attributes are transformed into discrete values. Most classification algorithms require discrete values as input, and even though some data mining algorithms deal directly with continuous attributes, the learning process then yields lower-quality results. In this paper, we introduce a new discretization method for continuous attributes, based on the standard-deviation ("z-score") technique, and evaluate it on biomedical datasets. We compare the performance of the proposed algorithm with state-of-the-art discretization techniques. The experimental results show its efficiency in terms of accuracy and a reduction in classifier confusion during the decision-making process.
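A hedged sketch of standard-deviation ("z-score") based discretization of one continuous attribute; the three-bin cut points at plus/minus one standard deviation are my assumption about how such a scheme could be realised, not the paper's exact rule.

```python
import numpy as np

def zscore_discretize(values, cuts=(-1.0, 1.0)):
    """Map each value to a discrete bin according to how many standard
    deviations it lies from the attribute mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.digitize(z, bins=cuts)   # 0: low, 1: typical, 2: high

# Example: one continuous biomedical attribute becomes three symbolic levels.
print(zscore_discretize([1.2, 3.4, 2.2, 9.8, 2.5, 2.1]))
```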
International Journal of Computer Applications | 2010
A. Sri Nagesh; G. P. S. Varma; A. Govardhan
Microarrays are novel and powerful techniques used to analyze DNA expression levels, with current applications in pharmacology, medical diagnosis, environmental engineering, and the biological sciences. Studies on microarrays have shown that image processing techniques can considerably influence the precision of microarray data. A crucial issue in gene microarray data analysis is the accurate quantification of spot shapes and intensities in the microarray image. The segmentation methods employed in microarray analysis are a vital source of variability in microarray data, directly affecting precision and the identification of differentially expressed genes, yet the effect of different segmentation methods on this variability has been overlooked. This article proposes a methodology for investigating the accuracy of spot segmentation of a microarray image using morphological image analysis techniques, the watershed algorithm, and an iterative watershed algorithm. The input to the methodology is a microarray image, which is subjected to spotted-microarray image preprocessing and gridding. Subsequently, the resulting microarray sub-grid is segmented using morphological operators, the watershed algorithm, and the iterative watershed algorithm. Based on the precision of segmentation and its intensity profile, a formal comparison of the three segmentation algorithms is performed. The experimental results demonstrate the segmentation effectiveness of the proposed methodology and identify the best of the three segmentation algorithms.
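An illustrative segmentation sketch in the spirit of the pipeline above: a morphological clean-up followed by a marker-based watershed on one gridded sub-image. The Otsu thresholding and the marker-selection rule are my assumptions, not the authors' exact preprocessing.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import opening, disk
from skimage.segmentation import watershed

def segment_spot(subgrid):
    """subgrid: 2-D grey-level array holding one spotted region of the grid."""
    # Morphological opening removes small bright noise around the spot.
    mask = opening(subgrid > threshold_otsu(subgrid), disk(2))
    # Distance transform peaks act as markers for the watershed.
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > 0.5 * distance.max())
    # Watershed on the inverted distance map separates touching spots.
    return watershed(-distance, markers, mask=mask)
```

The resulting label image can then be used to compute per-spot intensity profiles for the comparison the article describes.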
Computer Software and Applications Conference | 2015
Jema David Ndibwile; A. Govardhan; Kazuya Okada; Youki Kadobayashi
Application-layer Distributed Denial of Service (DDoS) attacks are among the deadliest kinds of attacks, with significant impact on destination servers and networks, because they can be launched with minimal computational resources yet cause effects of high magnitude. Commercial and government Web servers have become the primary targets of these attacks, and recent mitigation efforts have struggled to curb the problem efficiently. Most application-layer DDoS attacks can successfully mimic legitimate traffic without being detected by Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). IDSs and IPSs can also mistake normal, legitimate activity for malicious activity, producing a False Positive (FP) that affects Web users if it is ignored or dropped. False positives in a large and complex network topology can be dangerous, as they may cause the IDS/IPS to block users' benign traffic. Our contributions in this paper are, first, to mitigate undetected malicious traffic that mimics legitimate traffic by developing a special anti-DDoS module against both general and tool-specific DDoS attacks, using a classifier trained with a random-tree machine-learning algorithm. We use labeled datasets to generate rules that incorporate into and fine-tune an existing IDS/IPS such as Snort. Secondly, we further assist the IDS/IPS by processing traffic that it classifies as malicious in order to identify FPs and route them to their intended destinations. To achieve this, our approach actively authenticates the traffic source of legitimate and malicious traffic at the Bait and Decoy servers, respectively, before it is forwarded to the Web server.
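A hedged sketch of training a tree-based classifier on labelled traffic features and flagging flows. The dataset, feature names, and label values are hypothetical, and scikit-learn's decision tree with a randomized splitter stands in for the random-tree learner mentioned in the abstract; the rule-generation step for Snort is not shown.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical labelled flow features extracted from HTTP traffic captures.
flows = pd.read_csv("labeled_http_flows.csv")
X = flows[["req_rate", "mean_payload", "uri_entropy", "conn_duration"]]
y = flows["label"]                      # "benign" or "ddos"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(splitter="random", random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```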
International Conference on Computer Science and Information Technology | 2011
Nagaraju Devarakonda; Srinivasulu Pamidi; V. Valli Kumari; A. Govardhan
Outlier detection is a popular technique that can be used to find intruders. Security is becoming a critical part of organizational information systems, and a Network Intrusion Detection System (NIDS) is an important detection system used as a countermeasure to preserve data integrity and system availability against attacks [2]. However, current research finds it extremely difficult to identify outliers directly in high-dimensional datasets. In our work, we use an entropy method to reduce high dimensionality to a lower dimensionality, so that processing time is saved without compromising efficiency. We propose a framework for finding outliers in high-dimensional datasets and present the results. We implemented the proposed method on the standard KDD Cup '99 dataset, and the results show high accuracy.
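A minimal sketch of the two stages described above: rank features by an entropy estimate to reduce dimensionality, then flag outliers in the reduced space. The direction of the entropy ranking, the number of retained features, and the Local Outlier Factor detector are my assumptions, not the paper's exact framework.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.neighbors import LocalOutlierFactor

def feature_entropy(col, bins=20):
    """Shannon entropy of one continuous feature via a histogram estimate."""
    hist, _ = np.histogram(col, bins=bins)
    p = hist / hist.sum()
    return entropy(p[p > 0], base=2)

def reduce_and_detect(X, keep=10):
    """Keep the `keep` highest-entropy columns, then score points for outlierness."""
    ents = np.array([feature_entropy(X[:, j]) for j in range(X.shape[1])])
    selected = np.argsort(ents)[-keep:]
    scores = LocalOutlierFactor(n_neighbors=20).fit_predict(X[:, selected])
    return selected, scores   # -1 marks an outlier (potential intrusion)
```

On a dataset such as KDD Cup '99 the numeric columns would form `X`, and the flagged rows would be compared against the attack labels to measure accuracy.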
Advanced Data Mining and Applications | 2010
V. A. Narayana; P. Premchand; A. Govardhan
The rapid development of the WWW in recent times has given the concept of Web crawling remarkable significance. The voluminous number of Web documents swarming the Web poses huge challenges to Web search engines, making their results less relevant to users. The abundance of duplicate and near-duplicate Web documents creates additional overheads for search engines, critically affecting their performance and quality; these documents have to be removed to provide users with relevant results for their queries. In this paper, we present a novel and efficient approach for detecting near-duplicate Web pages during Web crawling, in which keywords are extracted from the crawled pages and a similarity score between two pages is calculated. Documents whose similarity score exceeds a threshold value are considered near duplicates. In this paper, the threshold value is fixed.
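A hedged sketch of the near-duplicate test: extract keywords from two crawled pages and compare them with a set-overlap similarity against a fixed threshold. The frequency-based keyword extractor, the Jaccard overlap, and the threshold value are illustrative assumptions, not the paper's exact similarity measure.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def keywords(text, k=25):
    """Top-k most frequent non-stopword terms of a crawled page."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    return {w for w, _ in Counter(words).most_common(k)}

def near_duplicate(page_a, page_b, threshold=0.8):
    """Pages are near duplicates if their keyword sets overlap strongly."""
    ka, kb = keywords(page_a), keywords(page_b)
    score = len(ka & kb) / max(len(ka | kb), 1)   # Jaccard similarity
    return score > threshold
```

A crawler would call `near_duplicate` on each newly fetched page against pages already stored and skip indexing those that exceed the threshold.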