Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where R. B. V. Subramanyam is active.

Publication


Featured research published by R. B. V. Subramanyam.


International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | 2005

A fuzzy data mining algorithm for incremental mining of quantitative sequential patterns

R. B. V. Subramanyam; Adrijit Goswami

In real-world applications, databases are constantly augmented with large numbers of transactions, so maintaining the latest sequential patterns valid on the updated database is crucial. Existing data mining algorithms can incrementally mine sequential patterns from databases with binary values, yet temporal transactions with quantitative values are common in real-world applications. In addition, several methods have been proposed for representing uncertain data in a database. In this paper, a fuzzy data mining algorithm for incremental mining of sequential patterns from quantitative databases is proposed. The proposed algorithm, called IQSP, uses the fuzzy grid notion to generate fuzzy sequential patterns valid on the updated database, which contains the transactions of both the original database and the incremental database. It reuses the information about sequential patterns already mined from the original database and thus avoids a start-from-scratch process. It also minimizes the number of candidates to check, as well as the number of scans of the original database, by identifying the potential sequences in the incremental database.
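
To make the fuzzy-grid notion mentioned above concrete, here is a minimal, generic sketch (not the IQSP algorithm itself; the linguistic regions, breakpoints and item names are invented for illustration) of mapping quantitative values to membership degrees and computing a fuzzy support:

```python
# Generic sketch of fuzzy grids for quantitative data; regions and values are illustrative.

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic regions for a purchased-quantity attribute.
REGIONS = {"low": (0, 1, 5), "medium": (2, 6, 10), "high": (7, 11, 15)}

def fuzzy_support(transactions, item, region):
    """Fuzzy support of (item, region): average membership over all transactions."""
    a, b, c = REGIONS[region]
    return sum(triangular(t.get(item, 0), a, b, c) for t in transactions) / len(transactions)

if __name__ == "__main__":
    db = [{"bread": 3}, {"bread": 8, "milk": 2}, {"milk": 9}]
    print(fuzzy_support(db, "bread", "medium"))   # (0.25 + 0.5 + 0.0) / 3
```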


Expert Systems | 2006

Mining fuzzy quantitative association rules

R. B. V. Subramanyam; Adrijit Goswami

The concept of fuzzy sets is one of the most fundamental and influential tools in the development of computational intelligence. In this paper, a fuzzy pincer search algorithm is proposed. It generates fuzzy association rules by combining top-down and bottom-up approaches. A fuzzy grid representation is used to reduce the number of database scans, and our algorithm trims down the number of candidate fuzzy grids at each level. It has been observed that fuzzy association rules provide a more realistic view of the knowledge extracted from databases.
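
For readers unfamiliar with fuzzy association rules, a minimal sketch of the usual fuzzy support and confidence measures follows (the pincer-style candidate generation is not reproduced here, and the fuzzified items are invented):

```python
# Illustrative only: fuzzy support as the average per-row minimum membership,
# and rule confidence as the ratio of two fuzzy supports.

def fuzzy_support(rows, items):
    """Fuzzy support of an itemset: average of the per-row minimum membership degree."""
    return sum(min(r.get(i, 0.0) for i in items) for r in rows) / len(rows)

def fuzzy_confidence(rows, antecedent, consequent):
    """Confidence of the fuzzy rule antecedent => consequent."""
    return fuzzy_support(rows, antecedent + consequent) / fuzzy_support(rows, antecedent)

if __name__ == "__main__":
    # Each row holds membership degrees, e.g. ("bread", "high") already fuzzified to [0, 1].
    rows = [
        {("bread", "high"): 0.8, ("milk", "low"): 0.4},
        {("bread", "high"): 0.2, ("milk", "low"): 0.9},
    ]
    print(fuzzy_confidence(rows, [("bread", "high")], [("milk", "low")]))   # 0.3 / 0.5
```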


International Journal of Data Analysis Techniques and Strategies | 2008

Mining fuzzy temporal patterns from process instances with weighted temporal graphs

R. B. V. Subramanyam; Adrijit Goswami; Bhanu Prasad

This paper presents an algorithm for mining fuzzy temporal patterns from a given process instance. A fuzzy representation of the time intervals between activities is used for this purpose. Initially, the activities and their temporal relationships are portrayed through temporal graphs, and then the defined data structures are used to retrieve the data suitable for the proposed algorithm. Counterparts of the familiar k-itemsets and k-dimensional sequences are introduced in this work. The proposed process-instance-level data structure generates an optimum number of temporal itemsets. The proposed algorithm differs from existing algorithms on this topic in the representation of the mined data and patterns. An example is provided to demonstrate the algorithm.
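
A minimal sketch of the underlying idea of fuzzifying the gaps between consecutive activities into weighted temporal-graph edges (the activity names, time units and the "short"/"long" fuzzy sets are placeholders, not the paper's definitions):

```python
# Illustrative only: fuzzify inter-activity gaps and label temporal-graph edges with them.

def gap_membership(gap):
    """Membership of a gap (in minutes) in two illustrative fuzzy sets."""
    short = max(0.0, 1.0 - gap / 30.0)                 # full at 0, none at 30+
    long_ = min(1.0, max(0.0, (gap - 20.0) / 40.0))    # starts at 20, full at 60+
    return {"short": short, "long": long_}

def fuzzy_temporal_edges(instance):
    """instance: list of (activity, timestamp) pairs ordered by time."""
    edges = []
    for (a, ta), (b, tb) in zip(instance, instance[1:]):
        edges.append((a, b, gap_membership(tb - ta)))
    return edges

if __name__ == "__main__":
    inst = [("receive_order", 0), ("check_stock", 10), ("ship", 55)]
    for edge in fuzzy_temporal_edges(inst):
        print(edge)
```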


Soft Computing and Pattern Recognition | 2013

Paired feature constraints for latent Dirichlet topic models

Nagesh Bhattu Sristy; Durvasula V. L. N. Somayajulu; R. B. V. Subramanyam

Nonparametric Bayes models, the so-called family of Latent Dirichlet Allocation (LDA) topic models, have found application in various areas of pattern recognition such as sentiment analysis, information retrieval and question answering. The topics induced by LDA are used for later tasks such as classification, regression (e.g. movie ratings), ranking and recommendation. Recently, various approaches have been suggested to improve the utility of the topics induced by LDA using side information such as labeled examples and labeled features. Pairwise feature constraints such as cannot-link and must-link represent weak supervision and are prevalent in domains such as sentiment analysis. Though must-link constraints are relatively easy to incorporate using a Dirichlet tree, cannot-link constraints are harder to incorporate using a Dirichlet forest. In this paper we propose an approach that addresses this problem using posterior constraints. We introduce additional latent variables for capturing the constraints and modify the Gibbs sampling algorithm to incorporate them. Our posterior regularization method enables us to deal with both types of constraints seamlessly in the same optimization framework. We demonstrate our approach on a product sentiment review dataset that is typically used in text analysis.
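
For background, a plain collapsed Gibbs sampler for vanilla LDA is sketched below; the per-token conditional marked in the comment is where a pairwise-constraint penalty of the kind described above would enter (this is a textbook-style sketch, not the authors' constrained sampler):

```python
import numpy as np

def gibbs_lda(docs, V, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for vanilla LDA; docs is a list of lists of word ids in [0, V)."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))            # document-topic counts
    nkw = np.zeros((K, V))                    # topic-word counts
    nk = np.zeros(K)                          # tokens assigned to each topic
    z = [rng.integers(0, K, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):
        for w, t in zip(doc, z[d]):
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                # Conditional p(z = k | rest).  A must-link / cannot-link penalty,
                # in the spirit of the posterior constraints described above,
                # would be multiplied into p here.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                t = rng.choice(K, p=p / p.sum())
                z[d][i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    return (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)   # topic-word estimates

if __name__ == "__main__":
    toy_docs = [[0, 1, 2, 1], [3, 4, 3, 4], [0, 2, 4]]
    print(gibbs_lda(toy_docs, V=5, K=2, iters=50))
```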


International Conference on Computer Modelling and Simulation | 2012

Partition-Based Approach for Fast Mining of Transitional Patterns

R. B. V. Subramanyam; Soma Raju Suvvari

Studying the dynamic behaviour of patterns whose frequency changes over time is of recent interest. Such patterns are named transitional patterns [1] and were previously found only after mining all frequent patterns in a transaction database. In this paper, a partition-based algorithm is proposed to find a variation of transitional patterns. Our approach does not require a set of frequent patterns mined over the whole database to initiate the process of finding transitional patterns, and thus reduces the number of scans over the database. In [1] the range of Tξ was fixed, whereas we express it in terms of support so that it varies with the database size and the minimum pattern support value.
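
As a back-of-the-envelope illustration of what makes a pattern transitional (the milestone handling and the partition-based machinery of the paper are not reproduced; the helper names are ours):

```python
# Illustrative only: support before/after a milestone and the relative change between them.

def support(transactions, pattern):
    """Fraction of transactions containing every item of the pattern."""
    pat = set(pattern)
    return sum(1 for t in transactions if pat <= set(t)) / len(transactions)

def transition_ratio(transactions, pattern, milestone):
    """Relative change in support across a milestone index in the time-ordered database."""
    before, after = transactions[:milestone], transactions[milestone:]
    s1, s2 = support(before, pattern), support(after, pattern)
    return 0.0 if max(s1, s2) == 0 else (s2 - s1) / max(s1, s2)

if __name__ == "__main__":
    db = [{"a", "b"}, {"a"}, {"a", "b"}, {"c"}, {"b", "c"}, {"c"}]
    print(transition_ratio(db, {"a"}, milestone=3))   # support drops sharply after the milestone
```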


International Conference on Computational Linguistics | 2017

Benchmarking Multimodal Sentiment Analysis

Erik Cambria; Devamanyu Hazarika; Soujanya Poria; Amir Hussain; R. B. V. Subramanyam

We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from the text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speaker-independent models, the importance of the different modalities, and generalizability. The paper thus serves as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered while performing such tasks.
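
A minimal sketch of feature-level fusion across the three modalities (random placeholder features and an off-the-shelf classifier stand in for the paper's CNN extractors; the dimensions are arbitrary):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse(text_feats, visual_feats, audio_feats):
    """Feature-level fusion: concatenate per-utterance feature vectors from each modality."""
    return np.concatenate([text_feats, visual_feats, audio_feats], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 32                                     # number of utterances (placeholder)
    X = fuse(rng.normal(size=(n, 100)),        # stand-in for text CNN features
             rng.normal(size=(n, 50)),         # stand-in for visual CNN features
             rng.normal(size=(n, 30)))         # stand-in for audio features
    y = rng.integers(0, 2, size=n)             # placeholder sentiment labels
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.score(X, y))
```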


Advances in Computing and Communications | 2013

Topic dependent cross-word Spelling Corrections for Web Sentiment Analysis

Swapnil Ashok Jadhav; Durvasula V. L. N. Somayajulu; S. Nagesh Bhattu; R. B. V. Subramanyam; P. Suresh

Spelling correction is a crucial component in modern text mining systems such as Web sentiment analysis systems, where spelling errors may affect the sentiment scores. Most existing spelling correction methods deal with in-word spelling errors. A major drawback of such methods is that they cannot handle cross-word spelling errors such as splitting and concatenation. In this paper we address this limitation with a discriminative approach that handles splitting and concatenation errors for a particular topic. It also handles cases where these errors occur on top of in-word spelling errors.
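
A rough sketch of how cross-word candidates could be generated by vocabulary lookup alone (the discriminative, topic-dependent scoring contributed by the paper is not shown; the function name and example words are invented):

```python
def cross_word_candidates(tokens, vocab):
    """Propose corrections for cross-word errors:
    concatenation errors (one noisy token should be two words) and
    splitting errors (two adjacent tokens should be one word)."""
    out = []
    for i, tok in enumerate(tokens):
        if tok not in vocab:
            # concatenation error, e.g. "goodmovie" -> "good movie"
            for j in range(1, len(tok)):
                if tok[:j] in vocab and tok[j:] in vocab:
                    out.append(tokens[:i] + [tok[:j], tok[j:]] + tokens[i + 1:])
        # splitting error, e.g. "aw" + "esome" -> "awesome"
        if i + 1 < len(tokens) and tok + tokens[i + 1] in vocab:
            out.append(tokens[:i] + [tok + tokens[i + 1]] + tokens[i + 2:])
    return out

if __name__ == "__main__":
    vocab = {"good", "movie", "awesome", "plot"}
    print(cross_word_candidates(["goodmovie", "aw", "esome", "plot"], vocab))
```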


Advances in Computing and Communications | 2013

Context Dependent Bag of words generation

Swapnil Ashok Jadhav; Durvasula V. L. N. Somayajulu; S. Nagesh Bhattu; R. B. V. Subramanyam; P. Suresh

Query spelling correction is a crucial component in modern text mining systems such as question-answering and sentiment analysis systems, where noise can affect the query matching score. In many existing query matching systems, a Bag of Words (BoW) generation method is used to generate candidates for noisy words, but candidate generation does not depend on the context of the query sentence. The BoW count for each noisy word may vary, and selecting correct candidates from such a list is not easy and may result in wrong selections. With our context-dependent BoW generation method, very few but highly probable candidates are generated, which makes lookup and the overall query spelling correction process easier and more efficient.
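
A hedged sketch of context-dependent candidate selection in general: plain edit-distance candidates are re-ranked by a hypothetical bigram model over the left context (nothing here is taken from the paper's actual model):

```python
def edit1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings at edit distance 1 from word (deletions, substitutions, insertions)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    subs = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + subs + inserts)

def context_candidates(prev_word, noisy_word, vocab, bigram_counts, k=3):
    """Keep only the k candidates best supported by the left context."""
    cands = [w for w in edit1(noisy_word) if w in vocab]
    cands.sort(key=lambda w: bigram_counts.get((prev_word, w), 0), reverse=True)
    return cands[:k]

if __name__ == "__main__":
    vocab = {"great", "grate", "crate"}
    bigrams = {("a", "great"): 40, ("a", "grate"): 2}
    print(context_candidates("a", "grest", vocab, bigrams))
```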


Cluster Computing | 2018

A novel Bit Vector Product algorithm for mining frequent itemsets from large datasets using MapReduce framework

Sumalatha Saleti; R. B. V. Subramanyam

Frequent itemset mining (FIM) is an interesting sub-area of research in the field of data mining. With the increase in the size of datasets, conventional FIM algorithms are no longer suitable, and efforts are being made to migrate to Big Data frameworks and to design algorithms using MapReduce-like computing paradigms. We are likewise interested in designing a MapReduce-based algorithm. Initially, our parallel compression algorithm makes the data simpler to handle. A novel bit vector data structure is proposed to maintain compressed transactions; it is built by scanning the dataset only once. Our Bit Vector Product algorithm follows the MapReduce approach and effectively searches for frequent itemsets in a given list of transactions. Experimental results are presented to demonstrate the efficacy of our approach over some recent works.
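
The bit-vector idea itself can be sketched in a few lines (single-machine illustration only; the parallel compression step and the MapReduce distribution are not reproduced):

```python
from functools import reduce

def build_bit_vectors(transactions):
    """Map each item to an integer whose i-th bit is set iff transaction i contains it."""
    vectors = {}
    for i, t in enumerate(transactions):
        for item in t:
            vectors[item] = vectors.get(item, 0) | (1 << i)
    return vectors

def itemset_support(vectors, itemset):
    """Support count of an itemset: popcount of the bitwise AND of its item vectors."""
    combined = reduce(lambda a, b: a & b, (vectors.get(i, 0) for i in itemset))
    return bin(combined).count("1")

if __name__ == "__main__":
    db = [{"a", "b", "c"}, {"a", "c"}, {"b"}, {"a", "b", "c"}]
    vecs = build_bit_vectors(db)
    print(itemset_support(vecs, {"a", "c"}))   # {"a", "c"} appears in transactions 0, 1 and 3
```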


Applied Intelligence | 2018

A novel MapReduce algorithm for distributed mining of sequential patterns using co-occurrence information

Sumalatha Saleti; R. B. V. Subramanyam

The Sequential Pattern Mining (SPM) problem has been much studied and extended in several directions. With the tremendous growth in the size of datasets, traditional algorithms do not scale. To address this, a few researchers have recently developed distributed algorithms based on MapReduce. However, the existing MapReduce algorithms require multiple rounds of MapReduce, which increases communication and scheduling overhead, and they do not address the issue of handling long sequences. They generate a huge number of candidate sequences that do not appear in the input database, which enlarges the search space and results in more candidate sequences for support counting. Our algorithm is a two-phase MapReduce algorithm that generates only promising candidate sequences using pruning strategies. It also reduces the search space, and thus support computation is effective. We make use of item co-occurrence information, and the proposed Sequence Index List (SIL) data structure helps in computing the support quickly. The experimental results show that the proposed algorithm outperforms the existing MapReduce algorithms for the SPM problem.
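
The co-occurrence pruning idea can be illustrated roughly as follows, using sequences of single items for simplicity (the Sequence Index List and the two MapReduce phases are not shown; the function names are ours):

```python
from collections import defaultdict

def cooccurrence_map(sequences):
    """For each item a, the set of items that appear after a in at least one sequence."""
    cmap = defaultdict(set)
    for seq in sequences:
        for i, a in enumerate(seq):
            cmap[a].update(seq[i + 1:])
    return cmap

def prune_extensions(prefix_last_item, candidate_items, cmap):
    """Drop candidate extensions that never follow the prefix's last item in the data:
    such candidates cannot form frequent sequences, so their support need not be counted."""
    return [c for c in candidate_items if c in cmap[prefix_last_item]]

if __name__ == "__main__":
    db = [["a", "b", "c"], ["a", "c"], ["b", "d"]]
    cmap = cooccurrence_map(db)
    print(prune_extensions("a", ["b", "c", "d"], cmap))   # "d" never follows "a"
```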

Collaboration


Dive into R. B. V. Subramanyam's collaborations.

Top Co-Authors

Adrijit Goswami

Indian Institute of Technology Kharagpur

Soma Raju Suvvari

National Institute of Technology

S. Nagesh Bhattu

National Institute of Technology

Swapnil Ashok Jadhav

National Institute of Technology

Erik Cambria

Nanyang Technological University

Soujanya Poria

Nanyang Technological University

Devamanyu Hazarika

National Institute of Technology
