Jajati Keshari Sahoo
Birla Institute of Technology and Science
Publications
Featured research published by Jajati Keshari Sahoo.
International Conference on Artificial Neural Networks | 2014
Kratarth Goel; Raunaq Vohra; Jajati Keshari Sahoo
In this paper, we propose a generic technique to model temporal dependencies and sequences using a combination of a recurrent neural network and a Deep Belief Network. Our technique, RNN-DBN, combines the memory state of an RNN, which provides temporal information, with a multi-layer DBN, which yields a high-level representation of the data. This makes RNN-DBNs ideal for sequence generation. Further, the use of a DBN in conjunction with the RNN makes this model capable of significantly more complex data representation than an RBM. We apply this technique to the task of polyphonic music generation.
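A minimal sketch of the general idea, assuming PyTorch: an RNN hidden state summarizes the sequence so far, and a stack of dense layers (a simplified, feed-forward stand-in for the DBN) predicts the notes of the next time step. The 88-key piano-roll input and all layer sizes are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class RNNDeepNet(nn.Module):
    def __init__(self, n_notes=88, rnn_hidden=256, dbn_hidden=(512, 512)):
        super().__init__()
        self.rnn = nn.RNN(n_notes, rnn_hidden, batch_first=True)
        layers, prev = [], rnn_hidden
        for h in dbn_hidden:                      # deep stack standing in for the DBN
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        layers += [nn.Linear(prev, n_notes)]      # logits for the next time step
        self.head = nn.Sequential(*layers)

    def forward(self, x):                         # x: (batch, time, n_notes)
        out, _ = self.rnn(x)                      # temporal information from the RNN
        return self.head(out)                     # per-step note predictions

model = RNNDeepNet()
dummy = torch.zeros(4, 32, 88)                    # batch of 4 piano-roll sequences
print(model(dummy).shape)                         # torch.Size([4, 32, 88])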
IEEE International Conference on Data Science and Advanced Analytics | 2015
Raunaq Vohra; Kratarth Goel; Jajati Keshari Sahoo
Since the advent of deep learning, it has been used to solve various problems using many different architectures. The application of such deep architectures to auditory data is also not uncommon. However, these architectures do not always adequately consider the temporal dependencies in data. We thus propose a new generic architecture called the Deep Belief Network - Long Short-Term Memory (DBN-LSTM) network that models sequences by keeping track of the temporal information while enabling deep representations in the data. We demonstrate this new architecture by applying it to the task of music generation and obtain state-of-the-art results.
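A hedged sketch of a DBN-LSTM-style training step, again assuming PyTorch: an LSTM tracks the temporal dependencies while stacked dense layers supply the deep representation, and each of the (assumed) 88 piano-roll notes is treated as an independent on/off prediction for the next time step. All sizes are illustrative.

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=88, hidden_size=256, batch_first=True)
deep_head = nn.Sequential(
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 88),
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(list(lstm.parameters()) + list(deep_head.parameters()))

x = torch.zeros(4, 33, 88)            # dummy piano-roll batch
inp, target = x[:, :-1], x[:, 1:]     # predict step t+1 from steps up to t
hidden, _ = lstm(inp)
loss = loss_fn(deep_head(hidden), target)
loss.backward()
opt.step()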
Archive | 2019
Rajendra Kumar Roul; Jajati Keshari Sahoo
Topic modeling is one of the most applied and active research areas in the domain of information retrieval. Topic modeling has become increasingly important due to the large and varied amount of data produced every second. In this paper, we address two major drawbacks (topic independence and unsupervised learning) of latent Dirichlet allocation (LDA). To remove the first drawback, we use Wikipedia as a knowledge source to build a semi-supervised model (Source-LDA) for generating a predefined topic-word distribution. The second drawback is removed using a correlation matrix containing the cosine-similarity measure of all the topics. The reason for using a semi-supervised LDA instead of a supervised model is to avoid overfitting the data to new labels. Experimental results show that the performance of Source-LDA combined with the correlation matrix is better than that of traditional LDA and Source-LDA.
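A small sketch of the correlation-matrix step, assuming a topic-word distribution matrix with one row per topic (as Source-LDA would produce); the random matrix and topic count below are placeholders, not the paper's data.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
topic_word = rng.random((5, 1000))                  # 5 topics over a 1000-word vocabulary
topic_word /= topic_word.sum(axis=1, keepdims=True) # rows are probability distributions

correlation = cosine_similarity(topic_word)         # 5 x 5 topic-topic similarity matrix
print(np.round(correlation, 2))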
Archive | 2019
Rajendra Kumar Roul; Jajati Keshari Sahoo; Kushagr Arora
This paper proposes a ranking model which uses the content of the documents along with their link structure to obtain an efficient ranking scheme. The proposed model combines the advantages of TF-IDF and the PageRank algorithm. TF-IDF is a term-weighting scheme that is widely used to evaluate the importance of a term in a document by converting the textual representation of information into a vector space model. The PageRank algorithm uses hyperlinks (links between documents) to determine the importance of a Web document in the corpus. Combining the relevance of documents with their PageRanks refines the retrieval results. The idea is to update the link structure based on the document similarity score with the user query. Results obtained from the experiments indicate that the performance of the proposed ranking technique is promising and thus can be considered as a new direction in ranking documents.
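A hedged sketch, in the spirit of the model above, of blending content relevance with link structure: TF-IDF cosine similarity to the query is combined with PageRank over a document link graph. The documents, links, and the equal blend weights are illustrative assumptions.

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["web page ranking with links", "term weighting for retrieval", "graph based ranking"]
links = [(0, 2), (1, 0), (2, 1)]                    # hypothetical hyperlinks between docs

vec = TfidfVectorizer()
doc_vectors = vec.fit_transform(docs)
query_vec = vec.transform(["ranking web documents"])
relevance = cosine_similarity(query_vec, doc_vectors).ravel()

pagerank = nx.pagerank(nx.DiGraph(links))           # importance from link structure
scores = {i: 0.5 * relevance[i] + 0.5 * pagerank.get(i, 0.0) for i in range(len(docs))}
print(sorted(scores, key=scores.get, reverse=True)) # documents in ranked order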
Pattern Recognition and Machine Intelligence | 2017
Rajendra Kumar Roul; Jajati Keshari Sahoo; Rohan Goel
Text summarization is the process of generating a shorter version of the input text which captures its most important information. This paper addresses the problem of extractive text summarization, which works by selecting a subset of phrases or sentences from the original document(s) to form a summary. Selection of such sentences is based on certain criteria which together formulate a feature set. Multilayer ELM (Extreme Learning Machine), which is built on a deep network architecture, is trained over this feature set to classify sentences as important or unimportant. The approach highlights the effectiveness of Multilayer ELM and its stability for use in the domain of text summarization. The effectiveness of Multilayer ELM is justified by experimental results on the DUC and TAC datasets, where it significantly outperforms other well-known classifiers.
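A simplified, single-hidden-layer ELM sketch (a stand-in for the Multilayer ELM actually used): hidden weights are random and fixed, and only the output weights are solved by least squares. The sentence feature vectors and labels below are random placeholders for the real feature set.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 10))                 # 200 sentences x 10 handcrafted features
y = (rng.random(200) > 0.5).astype(float) # 1 = include in summary, 0 = omit

n_hidden = 64
W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never trained)
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                            # hidden-layer activations
beta = np.linalg.pinv(H) @ y                      # output weights via pseudo-inverse

pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())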
Archive | 2017
Rajendra Kumar Roul; Jajati Keshari Sahoo
The amount of research work taking place in all streams of Science, Engineering, Medicine, etc., is growing rapidly, and hence the number of research articles is increasing every day. In this dynamic environment, identifying and maintaining such a large collection of articles in one place and classifying them manually is becoming very labor-intensive. Often, the allocation of articles to various subject areas is made simply on the basis of the journals in which they are published. This paper proposes an approach for handling such a huge volume of articles by classifying them into their respective categories based on the keywords extracted from the keyword section of each article. Query enrichment is performed by generating unigrams and bigrams of these keywords and assigning them proper weights using a probability measure. The Microsoft Academic Research dataset is used for the experiments, and the empirical results show the effectiveness of the proposed approach.
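A rough sketch of the query-enrichment step: unigrams and bigrams are generated from an article's keyword section and weighted by their relative frequency, a simple probability estimate. The keyword list is a made-up example.

from collections import Counter

keywords = ["neural network", "deep neural network", "topic model"]
terms = []
for phrase in keywords:
    words = phrase.split()
    terms += words                                           # unigrams
    terms += [" ".join(p) for p in zip(words, words[1:])]    # bigrams

counts = Counter(terms)
total = sum(counts.values())
weights = {term: count / total for term, count in counts.items()}
print(weights)   # e.g. "neural" and "neural network" receive higher weight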
International Conference on Advances in Computing and Data Sciences | 2016
Jajati Keshari Sahoo; Akhil Balaji
The accuracy obtained when classifying multi-class data depends on the classifier and the features used for training it. The parameters passed to the classifier and feature selection techniques can help improve accuracy. In this paper, we propose certain dataset and classifier optimizations to help improve accuracy when classifying multi-class data. These optimizations also help in reducing the training time.
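A hedged illustration of this kind of optimization using scikit-learn: feature selection combined with a grid search over classifier parameters for multi-class data. The digits dataset, SVC classifier, and parameter grid are illustrative choices, not the paper's exact setup.

from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
pipe = Pipeline([
    ("select", SelectKBest(f_classif)),   # keep only the most informative features
    ("clf", SVC()),
])
grid = {"select__k": [20, 40, 64], "clf__C": [1, 10], "clf__gamma": ["scale", 0.01]}
search = GridSearchCV(pipe, grid, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))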
Asian-European Journal of Mathematics | 2009
Jajati Keshari Sahoo; Arindama Singh
In this paper, we study how Lavrentiev regularization can be used in the context of learning theory, especially in regularization networks that are closely related to support vector machines. We briefly discuss formulations of learning from examples in the context of ill-posed inverse problems and regularization. We then study the interplay between the Lavrentiev regularization of the concerned continuous and discretized ill-posed inverse problems. As the main result of this paper, we give an improved probabilistic bound for regularization networks or least squares algorithms, where we can afford to choose the regularization parameter in a larger interval.
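A toy numpy illustration of Lavrentiev (simplified) regularization applied to the kind of kernel system a regularization network leads to: instead of inverting a possibly ill-conditioned Gram matrix K, one solves (K + alpha I) c = y. The data, Gaussian kernel, and choice of alpha are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=30)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(30)    # noisy samples of a smooth target

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)         # Gaussian Gram matrix
alpha = 1e-2                                               # regularization parameter
coef = np.linalg.solve(K + alpha * np.eye(len(x)), y)      # Lavrentiev-regularized solution

fitted = K @ coef
print("residual norm:", np.linalg.norm(fitted - y))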
International Conference for Internet Technology and Secured Transactions | 2012
Alok Upadhyay; Jajati Keshari Sahoo; Vibhor Bajpai
International Journal of Computational Systems Engineering | 2018
Rajendra Kumar Roul; Jajati Keshari Sahoo