Publication


Featured research published by Kishore Varma Indukuri.


Computational Intelligence | 2007

Similarity Analysis of Patent Claims Using Natural Language Processing Techniques

Kishore Varma Indukuri; Anurag Anil Ambekar; Ashish Sureka

Claims, typically found at the end of a patent document, are one of the key elements of a patent and define the boundaries or scope of the protection conferred by the patent. Claims of related patents also need to be read and reviewed carefully by an inventor or a patent attorney at the time of drafting a patent application. We present a method and a tool for performing claim similarity analysis between two different patents based on natural language processing techniques. The technique proposed in this paper relies on computing the similarity between two claims based on syntactic and semantic matching of the natural language text describing the claims. We present results of experiments performed on patent claim data obtained from patents published on the Google Patents website. The motivation behind the research presented in this paper is to build patent processing tools that increase the overall productivity of a patent analyst or a patent attorney while performing claim infringement, validity and quality analysis.
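The paper's matching pipeline is not reproduced here; as a rough illustration of the syntactic side of claim matching, the sketch below computes a bag-of-words cosine similarity between two claim texts. The tokenizer and the example claim fragments are assumptions for illustration, not the authors' implementation.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters (illustrative tokenizer).
    return [t for t in re.split(r"\W+", text.lower()) if t]

def cosine_similarity(claim_a, claim_b):
    # Bag-of-words vectors for each claim.
    vec_a, vec_b = Counter(tokenize(claim_a)), Counter(tokenize(claim_b))
    shared = set(vec_a) & set(vec_b)
    dot = sum(vec_a[t] * vec_b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in vec_a.values())) * \
           math.sqrt(sum(v * v for v in vec_b.values()))
    return dot / norm if norm else 0.0

# Hypothetical claim fragments for illustration only.
claim_1 = "A method for transmitting data over a wireless network comprising a sender node."
claim_2 = "A system for sending data across a wireless network including a transmitter node."
print(f"syntactic similarity: {cosine_similarity(claim_1, claim_2):.2f}")
```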


Bangalore Annual Compute Conference | 2010

Mining e-contract documents to classify clauses

Kishore Varma Indukuri; P. Radha Krishna

E-contracts begin as legal documents and end up as processes that help organizations abide by legal rules while fulfilling contract terms. As contracts are complex, their deployment is predominantly established and fulfilled with significant human involvement. One of the key difficulties with any kind of contract processing is legal ambiguity, which makes it difficult to address violations of the contract terms. Thus, there is a need to track clauses for the contract activities under execution and violations of clauses. This necessitates deriving clause patterns from e-contract documents and mapping them to their respective activities for further monitoring and fulfillment of e-contracts during their enactment. In this paper, we present a classification approach to extract clause patterns from e-contract documents. This is a challenging task, as activities and clauses are mostly derived from both legal and business-process-driven contract knowledge.
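As a toy illustration of the clause-classification task (not the classifier described in the paper), the sketch below assigns a clause type by counting keyword cues; the clause types and cue lists are assumptions.

```python
# Minimal illustration of clause classification; the clause types and keyword
# cues below are assumptions, not the approach described in the paper.
CLAUSE_CUES = {
    "payment": ["invoice", "payment", "fee", "remit"],
    "termination": ["terminate", "termination", "expiry", "notice period"],
    "penalty": ["penalty", "liquidated damages", "late delivery"],
}

def classify_clause(clause_text):
    text = clause_text.lower()
    # Score each clause type by the number of cue phrases it matches.
    scores = {label: sum(cue in text for cue in cues)
              for label, cues in CLAUSE_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(classify_clause("The buyer shall remit payment within 30 days of invoice."))
```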


International Conference on Information Technology | 2010

Analyzing Internet Slang for Sentiment Mining

K. Manuel; Kishore Varma Indukuri; P. Radha Krishna

Every consumer has an opinion about the products they use and is willing to share it in social groups such as forums, chat rooms and weblogs. As these review comments are actual feedback from customers, mining the sentiment in these reviews is increasingly being inducted into the feedback pipeline of any company. At the same time, the growing use of slang in such communities to express emotions and sentiment makes it important to consider slang when determining sentiment. In this paper, we present an approach for finding the sentiment score of newly found slang sentiment words in blogs, reviews and forum texts on the World Wide Web. A simple mechanism for calculating the sentiment score of documents containing slang words, using the Delta Term Frequency and Weighted Inverse Document Frequency technique, is also presented in this paper.
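A minimal sketch of delta TF-IDF style sentiment scoring follows, assuming add-one smoothing and toy labelled corpora; it illustrates the general idea rather than the paper's exact formula.

```python
import math
from collections import Counter

def delta_idf(term, pos_docs, neg_docs):
    # Contrast of document frequencies between positive and negative corpora.
    # Add-one smoothing avoids division by zero (an assumption, not the paper's formula).
    df_pos = sum(term in doc for doc in pos_docs) + 1
    df_neg = sum(term in doc for doc in neg_docs) + 1
    return math.log(df_pos / df_neg)

def sentiment_score(doc_tokens, pos_docs, neg_docs):
    # Weight each term's frequency in the document by its delta IDF.
    tf = Counter(doc_tokens)
    return sum(count * delta_idf(term, pos_docs, neg_docs) for term, count in tf.items())

# Hypothetical labelled corpora containing slang terms.
pos = [{"awesome", "gr8", "luv"}, {"cool", "gr8"}]
neg = [{"meh", "wtf"}, {"sux", "meh"}]
print(sentiment_score(["gr8", "luv", "meh"], pos, neg))
```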


Advances in Databases and Information Systems | 2010

Natural language querying over databases using cascaded CRFs

Kishore Varma Indukuri; Srikumar Krishnamoorthy; P. Radha Krishna

Retrieving information from relational databases using a natural language query is a challenging task. Usually, the natural language query is transformed into an approximate SQL or other formal-language query. However, this requires knowledge of database structures, semantic relationships and natural language constructs, as well as handling ambiguities due to overlapping column names and column values. We present a machine learning based natural language search system for querying databases without any knowledge of the Structured Query Language (SQL) of the underlying database. The proposed system, Cascaded Conditional Random Field, is an extension of Conditional Random Fields, an undirected graphical model. Unlike traditional Conditional Random Field models, we offer efficient labelling schemes to realize enhanced quality of search results. The system uses text indexing techniques as well as database constraint relationships to identify hidden semantic relationships present in the data. The presented system is implemented and evaluated on two real-life datasets.
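The sketch below illustrates only the final step of such a pipeline: assembling an SQL query from tokens already labelled by an upstream tagger (such as a CRF). The label scheme, table name and example query are assumptions, not the system's actual output format.

```python
# Minimal sketch of the last step of a natural-language-to-SQL pipeline:
# building a query from tokens already labelled (e.g. by a CRF tagger).
# The labels, table name and query are illustrative assumptions.
def build_sql(labelled_tokens, table="employees"):
    columns, conditions = [], []
    for token, label in labelled_tokens:
        if label == "SELECT_COLUMN":
            columns.append(token)
        elif label.startswith("VALUE:"):
            # The label encodes which column the value constrains, e.g. "VALUE:city".
            conditions.append(f"{label.split(':', 1)[1]} = '{token}'")
    where = f" WHERE {' AND '.join(conditions)}" if conditions else ""
    return f"SELECT {', '.join(columns) or '*'} FROM {table}{where}"

# "show salary of employees in Hyderabad" after a hypothetical tagging pass:
tagged = [("salary", "SELECT_COLUMN"), ("Hyderabad", "VALUE:city")]
print(build_sql(tagged))
```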


Web-Age Information Management | 2008

An Algorithm for Classifying Articles and Patent Documents Using Link Structure

Kishore Varma Indukuri; Pranav Prabhakar Mirajkar; Ashish Sureka

Studying the link structure of the World Wide Web (WWW) is an area that has attracted a lot of interest. Several papers have been published on the structural analysis of hyperlinked environments such as the WWW. The WWW can be modeled as a graph, and valuable information can be derived by analyzing links between Web pages, primarily for the purpose of building better search engines. Many novel methods have been presented to discover communities on the WWW and to identify authoritative Web pages. Citation analysis is a branch of information science on which plenty of research has been done. It pertains to the analysis of article and research paper citations in a scholarly field and the derivation of useful information from them. It has primarily been used as a tool to quantify and judge the impact of a paper or a journal. The work presented in this paper lies at the intersection of these two fields: structural analysis of the WWW and citation analysis. We present a method for classifying documents (such as articles and patents containing references) to a class or topic based on their link structure, references and citations. The method consists of analyzing the link structure of a corpus to first identify authoritative papers and assign a class label to them. The class labels are assigned manually by a domain expert who goes through the respective documents. The next step consists of identifying papers related to the authoritative papers using citation analysis. The authoritative papers, their class labels and their related papers constitute a model. Papers whose class label needs to be determined are classified based on the created model.
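A compact sketch of this flow on a hypothetical toy citation graph follows: rank papers by citation in-degree to find authorities, attach expert-assigned labels to them, and label the remaining papers by the authorities they cite. The data and the majority-vote rule are illustrative assumptions.

```python
from collections import Counter

# citations[paper] = set of papers it cites (hypothetical toy corpus).
citations = {
    "p1": {"a1"}, "p2": {"a1", "a2"}, "p3": {"a2"},
    "a1": set(), "a2": set(),
}

# Step 1: rank papers by in-degree (citation count) to find authorities.
in_degree = Counter(cited for refs in citations.values() for cited in refs)
authorities = [p for p, _ in in_degree.most_common(2)]

# Step 2: a domain expert assigns class labels to the authorities (assumed here).
authority_labels = {"a1": "NLP", "a2": "Databases"}

# Step 3: classify remaining papers by the labels of the authorities they cite.
def classify(paper):
    votes = Counter(authority_labels[a] for a in citations[paper] if a in authority_labels)
    return votes.most_common(1)[0][0] if votes else "unknown"

print({p: classify(p) for p in ("p1", "p2", "p3")})
```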


Bangalore Annual Compute Conference | 2010

Linguistic analysis of bug report titles with respect to the dimension of bug importance

Ashish Sureka; Kishore Varma Indukuri

We perform a linguistic analysis of bug-report titles obtained from the publicly available Bugzilla defect tracking tool for the open-source Firefox browser (Mozilla project) and present the results of our analysis. Our motivation is to gain insights into how people describe software defects and to carry out a feasibility study on building a predictive model (a classifier) for categorizing a bug report into one of the predefined severity levels (bug importance) based only on its title. We observed that, in general, bug titles do not contain enough information to automatically predict their importance with high accuracy. However, we noticed that two of the bug importance categories, critical and enhancement, have characteristics or features in the title that can be exploited to assign the correct severity level. We also perform statistical analysis of part-of-speech tags, word frequency and their distribution across the various severity levels.
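The snippet below sketches one of the simpler statistics mentioned, a word-frequency distribution per severity level, using hypothetical bug titles; the study itself uses Mozilla Firefox Bugzilla data and also examines part-of-speech distributions.

```python
from collections import Counter, defaultdict

# Hypothetical (title, severity) pairs for illustration only.
bug_reports = [
    ("Crash on startup when loading profile", "critical"),
    ("Browser hangs and crashes opening large page", "critical"),
    ("Add option to customize toolbar colours", "enhancement"),
    ("Support keyboard shortcut to reopen tab", "enhancement"),
]

# Word-frequency distribution per severity level.
freq_by_severity = defaultdict(Counter)
for title, severity in bug_reports:
    freq_by_severity[severity].update(title.lower().split())

for severity, counts in freq_by_severity.items():
    print(severity, counts.most_common(3))
```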


Advanced Data Mining and Applications | 2008

Using Genetic Algorithms for Parameter Optimization in Building Predictive Data Mining Models

Ashish Sureka; Kishore Varma Indukuri

We present an application of genetic algorithms to search the space of model-building parameters for optimizing the score function or accuracy of a predictive data mining model. The goal of predictive modeling is to build a classification or regression model that can accurately predict the value of a target column by observing the values of the input attributes. Finding an optimal algorithm and its control parameters for building a predictive model is non-trivial for two reasons. The first is that the number of classification algorithms and their control parameters is very large. The second is that building a model can be quite time consuming for datasets containing a large number of records and attributes. These two reasons make it impractical to enumerate every algorithm and its possible control parameters to find an optimal model. Genetic algorithms are adaptive heuristic search algorithms and have been successfully applied to solve optimization problems in diverse domains. In this work, we formulate the problem of finding optimal predictive model-building parameters as an optimization problem and examine the usefulness of genetic algorithms. We perform experiments on several datasets and report empirical results to show the applicability of genetic algorithms to this problem.
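The sketch below shows a bare-bones genetic algorithm over a two-parameter search space with a stand-in fitness function; in a real run the fitness would be the cross-validated accuracy of a trained model, which is the expensive step the search tries to limit. The search space, operators and mutation rate are assumptions.

```python
import random

random.seed(0)

# Toy search space for two model-building parameters (illustrative only).
DEPTHS = range(1, 11)      # e.g. tree depth
MIN_LEAF = range(1, 21)    # e.g. minimum records per leaf

def fitness(individual):
    # Stand-in for cross-validated model accuracy.
    depth, min_leaf = individual
    return -abs(depth - 6) - abs(min_leaf - 5) * 0.1

def mutate(ind):
    depth, min_leaf = ind
    return (random.choice(list(DEPTHS)) if random.random() < 0.3 else depth,
            random.choice(list(MIN_LEAF)) if random.random() < 0.3 else min_leaf)

def crossover(a, b):
    return (a[0], b[1])

population = [(random.choice(list(DEPTHS)), random.choice(list(MIN_LEAF))) for _ in range(10)]
for _ in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]  # selection: keep the fittest individuals
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best parameters:", max(population, key=fitness))
```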


Bangalore Annual Compute Conference | 2012

A generic topology discovery approach for huge social networks

P. Radha Krishna; Kishore Varma Indukuri; Shahanaz Syed

Social networks are gaining importance because they enable the modeling of various types of interactions among individuals, communities and organizations. Network topologies play a major role in analyzing social networks for a variety of business application scenarios, such as finding influencers for product campaigning and virtual communities for recommending music downloads. Social networks are dynamic in nature, and detecting topologies in these networks presents a host of new challenges. In this paper, we present approaches for topology discovery, particularly of star, ring and mesh topologies, based on measures of network centrality. These approaches facilitate an efficient way of discovering topologies for analyzing large social networks. We also discuss experiments on the DBLP dataset to show the viability of our proposed approach.
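As a rough illustration (not the paper's decision procedure), the sketch below guesses a star, ring or mesh topology from node degrees in a small undirected graph given as an adjacency dict; the detection rules are assumptions.

```python
# Degree-based topology guessing on a small undirected graph; the rules below
# are assumptions for illustration, not the paper's centrality-based method.
def degrees(adjacency):
    return {node: len(neigh) for node, neigh in adjacency.items()}

def guess_topology(adjacency):
    deg = degrees(adjacency)
    n = len(adjacency)
    if max(deg.values()) == n - 1 and sorted(deg.values())[:-1] == [1] * (n - 1):
        return "star"   # one hub connected to all others, leaves have degree 1
    if all(d == 2 for d in deg.values()):
        return "ring"   # every node has exactly two neighbours
    if all(d == n - 1 for d in deg.values()):
        return "mesh"   # fully connected
    return "other"

star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
ring = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(guess_topology(star), guess_topology(ring))
```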


Archive | 2010

System and method for developing a rule-based named entity extraction

Ashish Sureka; Pranav Prabhakar Mirajkar; Kishore Varma Indukuri


Archive | 2012

Methods for discovering and analyzing network topologies and devices thereof

Kishore Varma Indukuri; Shahanaz Syed; Radha Krishna Pisipati

Collaboration


Dive into Kishore Varma Indukuri's collaborations.

Top Co-Authors

Ashish Sureka
Indraprastha Institute of Information Technology

Ali Mirza Mahmood
Acharya Nagarjuna University

Arathi R Shankar
B.M.S. College of Engineering

Atul Negi
University of Hyderabad