Publication


Featured research published by Nobal B. Niraula.


International Conference on Computational Linguistics | 2013

Similarity measures based on Latent Dirichlet Allocation

Vasile Rus; Nobal B. Niraula; Rajendra Banjade

We present in this paper the results of our investigation on semantic similarity measures at the word and sentence levels, based on two fully-automated approaches to deriving meaning from large corpora: Latent Dirichlet Allocation, a probabilistic approach, and Latent Semantic Analysis, an algebraic approach. The focus is on similarity measures based on Latent Dirichlet Allocation, due to its novelty aspects, while the Latent Semantic Analysis measures are used for comparison purposes. We explore two types of measures based on Latent Dirichlet Allocation: measures based on distances between probability distributions, which can be applied directly to larger texts such as sentences, and a word-to-word similarity measure that is then expanded to work at the sentence level. We present results using paraphrase identification data in the Microsoft Research Paraphrase corpus.
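One family of measures described here compares the LDA topic distributions of two texts directly. The sketch below is purely illustrative, not the paper's exact formulation: it uses the Hellinger distance, a standard choice for comparing discrete probability distributions, and the toy topic distributions are made up.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions
    (e.g., LDA topic distributions inferred for two sentences).
    Ranges from 0 (identical) to 1 (disjoint support)."""
    assert len(p) == len(q)
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def similarity(p, q):
    """Map the distance into a [0, 1] similarity score."""
    return 1.0 - hellinger(p, q)

# Toy topic distributions over 4 topics (hypothetical values).
sent_a = [0.70, 0.10, 0.10, 0.10]
sent_b = [0.65, 0.15, 0.10, 0.10]
sent_c = [0.05, 0.05, 0.10, 0.80]
```

Because the distance is defined on whole distributions, it applies directly to sentence- or document-level topic vectors with no word-level alignment step.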


North American Chapter of the Association for Computational Linguistics | 2015

NeRoSim: A System for Measuring and Interpreting Semantic Textual Similarity

Rajendra Banjade; Nobal B. Niraula; Nabin Maharjan; Vasile Rus; Dan Stefanescu; Mihai C. Lintean; Dipesh Gautam

We present in this paper our system developed for SemEval 2015 Shared Task 2 (2a - English Semantic Textual Similarity, STS, and 2c - Interpretable Similarity) and the results of the submitted runs. For the English STS subtask, we used regression models combining a wide array of features, including semantic similarity scores obtained from various methods. One of our runs achieved a weighted mean correlation score of 0.784 for the sentence similarity subtask (i.e., English STS) and was ranked tenth among 74 runs submitted by 29 teams. For the interpretable similarity pilot task, we employed a rule-based approach blended with chunk alignment labeling and scoring based on semantic similarity features. Our system for interpretable text similarity was among the top three best performing systems.


Conference on Intelligent Text Processing and Computational Linguistics | 2015

Lemon and Tea Are Not Similar: Measuring Word-to-Word Similarity by Combining Different Methods

Rajendra Banjade; Nabin Maharjan; Nobal B. Niraula; Vasile Rus; Dipesh Gautam

A substantial amount of work has been done on measuring word-to-word relatedness, which is also commonly referred to as similarity. Though relatedness and similarity are closely related, they are not the same, as illustrated by the words lemon and tea, which are related but not similar. Relatedness takes into account a broader range of relations, while similarity only considers subsumption relations to assess how similar two objects are. We present in this paper a method for measuring the semantic similarity of words as a combination of various techniques, including knowledge-based and corpus-based methods that capture different aspects of similarity. Our corpus-based method exploits state-of-the-art word representations. We performed experiments with a recently published, significantly large dataset called SimLex-999 and achieved a significantly better correlation (ρ = 0.642, P < 0.001) with human judgment compared to the individual methods.
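The combination idea can be sketched in a few lines. The component scores and weights below are purely illustrative, not the paper's learned combination: `cosine` stands in for a corpus-based score over word embeddings, and the second score for any knowledge-based measure.

```python
import math

def cosine(u, v):
    """Corpus-based similarity: cosine between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def combine(scores, weights):
    """Weighted combination of individual similarity scores into one
    final word-to-word similarity value."""
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total
```

For example, `combine([embedding_score, wordnet_score], [0.6, 0.4])` would blend a corpus-based and a knowledge-based judgment; in practice such weights are tuned or learned rather than set by hand.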


Natural Language Dialog Systems and Intelligent Assistants | 2015

Rapidly Scaling Dialog Systems with Interactive Learning

Jason D. Williams; Nobal B. Niraula; Pradeep Dasigi; Aparna Lakshmiratan; Carlos Garcia Jurado Suarez; Mouni Reddy; Geoffrey Zweig

In personal assistant dialog systems, intent models are classifiers that identify the intent of a user utterance, such as to add a meeting to a calendar or get the director of a stated movie. Rapidly adding intents is one of the main bottlenecks to scaling—adding functionality to—personal assistants. In this paper we show how interactive learning can be applied to the creation of statistical intent models. Interactive learning (Simard, ICE: enabling non-experts to build models interactively for large-scale lopsided problems, 2014) combines model definition, labeling, model building, active learning, model evaluation, and feature engineering in a way that allows a domain expert—who need not be a machine learning expert—to build classifiers. We apply interactive learning to build a handful of intent models in three different domains. In controlled lab experiments, we show that intent detectors can be built using interactive learning and then improved in a novel end-to-end visualization tool. We then applied this method to a publicly deployed personal assistant—Microsoft Cortana—where a non-machine learning expert built an intent model in just over two hours, yielding excellent performance in the commercial service.
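The active-learning component of such a workflow is often an uncertainty-sampling loop: show the expert the utterances the current model is least sure about. The sketch below is a generic illustration under that assumption; ICE's actual internals are not described here, and `predict` is a hypothetical stand-in for the current binary intent classifier.

```python
def uncertainty(prob):
    """Distance from a confident decision: a predicted probability of
    0.5 is maximally uncertain for a binary intent classifier."""
    return 1.0 - abs(prob - 0.5) * 2.0

def pick_next_to_label(utterances, predict, k=2):
    """Select the k unlabeled utterances the current model is least
    sure about, so the domain expert labels the most informative
    examples first."""
    return sorted(utterances, key=lambda u: -uncertainty(predict(u)))[:k]
```

In each round the expert labels the returned utterances, the model is retrained, and the loop repeats until performance is acceptable.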


SLSP'13 Proceedings of the First international conference on Statistical Language and Speech Processing | 2013

Experiments with semantic similarity measures based on LDA and LSA

Nobal B. Niraula; Rajendra Banjade; Dan Ştefănescu; Vasile Rus

We present in this paper experiments with several semantic similarity measures based on the unsupervised method Latent Dirichlet Allocation. For comparison purposes, we also report experimental results using an algebraic method, Latent Semantic Analysis. The proposed semantic similarity methods were evaluated using one dataset that includes student answers from conversational intelligent tutoring systems and a standard paraphrase dataset, the Microsoft Research Paraphrase corpus. Results indicate that the method based on word representations as topic vectors outperforms methods based on distributions over topics and words. The proposed evaluations can also be regarded as extrinsic methods for assessing topic coherence or selecting the number of topics in LDA models, i.e., a task-based evaluation of topic coherence and of the choice of the number of topics in LDA.


Spoken Language Technology Workshop | 2014

Forms2Dialog: Automatic dialog generation for Web tasks

Nobal B. Niraula; Amanda Stent; Hyuckchul Jung; Giuseppe Di Fabbrizio; I. Dan Melamed; Vasile Rus

Today, many common tasks (e.g. booking flights, ordering food) can be done by filling out web forms. Automatic processing of Web forms to support interactive speech input is useful for numerous reasons, including ease of use for mobile device users and accessibility for people with visual or print disabilities. In this paper, we propose an automated method to process web forms and convert them into dialog flows for spoken interaction. First we identify relevant information for each form element (including element type, label, values and help messages) and key relationships between form elements (including ordering and dependencies). We then generate two types of dialog flow for each Web form. Experimental results show that the method generates efficient and informative dialog flows for web tasks, a key step for building virtual assistants. An Android application has been realized as a use case of the generated dialog flows.
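A minimal sketch of the form-to-dialog idea, assuming a simplified dictionary representation of form elements; the `flight_form` fields and prompt wording are hypothetical, not the paper's actual extraction output.

```python
def form_to_prompts(form):
    """Turn an ordered list of web-form elements into spoken prompts,
    using the element type to decide whether to enumerate options."""
    prompts = []
    for elem in form:
        if elem["type"] == "select":
            options = ", ".join(elem["values"])
            prompts.append(f"Please choose {elem['label']}: {options}.")
        else:
            prompts.append(f"Please say {elem['label']}.")
    return prompts

# Hypothetical flight-booking form with a text field and a select field.
flight_form = [
    {"type": "text", "label": "your departure city"},
    {"type": "select", "label": "a cabin class",
     "values": ["economy", "business", "first"]},
]
```

A real system would also extract help messages and inter-element dependencies (as the abstract describes) and use them to reorder or skip prompts.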


Intelligent Tutoring Systems | 2014

Macro-adaptation in Conversational Intelligent Tutoring Matters

Vasile Rus; Dan Stefanescu; William Baggett; Nobal B. Niraula; Donald R. Franceschetti; Arthur C. Graesser

We present in this paper the findings of a study on the role of macro-adaptation in conversational intelligent tutoring. Macro-adaptivity refers to a system's capability to select appropriate instructional tasks for the learner to work on. Micro-adaptivity refers to a system's capability to adapt its scaffolding while the learner is working on a particular task. We compared an intelligent tutoring system that offers both macro- and micro-adaptivity (fully-adaptive) with an intelligent tutoring system that offers only micro-adaptivity. Experimental data analysis revealed that learning gains were significantly higher for students randomly assigned to the fully-adaptive intelligent tutor condition compared to the micro-adaptive-only condition.


BMC Genomics | 2014

RandAL: a randomized approach to aligning DNA sequences to reference genomes

Nam S Vo; Quang Tran; Nobal B. Niraula; Vinhthuy Phan

Background: The alignment of short reads generated by next-generation sequencers to genomes is an important problem in many biomedical and bioinformatics applications. Although many proposed methods work very well on narrow ranges of read lengths, they tend to suffer in performance and alignment quality for reads outside of these ranges.

Results: We introduce RandAL, a novel method that aligns DNA sequences to reference genomes. Our approach utilizes two FM indices to facilitate efficient bidirectional searching, a pruning heuristic to speed up the computing of edit distances, and most importantly, a randomized strategy that enables effective estimation of key parameters. Extensive comparisons showed that RandAL outperformed popular aligners in most instances and was unique in its consistent and accurate performance over a wide range of read lengths and error rates. The software package is publicly available at https://github.com/namsyvo/RandAL.

Conclusions: RandAL promises to align effectively and accurately short reads that come from a variety of technologies with different read lengths and rates of sequencing error.
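Pruning heuristics for edit-distance computation can be illustrated generically: bound the allowed distance and abandon a candidate alignment as soon as no cell in the current dynamic-programming row can stay within the bound. This is a standard technique shown here for illustration, not RandAL's actual implementation.

```python
def edit_distance_bounded(read, ref, max_dist):
    """Levenshtein distance with early termination: return None as soon
    as every cell in the current row exceeds max_dist, so hopeless
    candidate alignments are pruned cheaply."""
    prev = list(range(len(ref) + 1))
    for i, r in enumerate(read, start=1):
        cur = [i]
        for j, c in enumerate(ref, start=1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != c)))  # substitution/match
        if min(cur) > max_dist:
            return None  # prune: this alignment cannot beat the bound
        prev = cur
    return prev[-1] if prev[-1] <= max_dist else None
```

With many candidate reference positions per read, most candidates fail the bound within a few rows, which is where the speedup comes from.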


Archive | 2017

A Study On Two Hint-level Policies in Conversational Intelligent Tutoring Systems

Vasile Rus; Rajendra Banjade; Nobal B. Niraula; Elizabeth Gire; Donald R. Franceschetti

In this work, we compared two hint-level instructional strategies, minimum scaffolding vs. maximum scaffolding, in the context of conversational intelligent tutoring systems (ITSs). The two strategies are called policies because they have a clear bias, as detailed in the paper. To this end, we conducted a randomized controlled trial experiment with two conditions corresponding to two versions of the same underlying state-of-the-art conversational ITS, i.e. DeepTutor. Each version implemented one of the two hint-level strategies. Experimental data analysis revealed that pre-post learning gains were significant in both conditions. We also learned that, in general, students need more than just a minimally informative hint in order to infer the next steps in the solution to a challenging problem; this is the case in the context of a problem selection strategy that picks challenging problems for students to work on.


North American Chapter of the Association for Computational Linguistics | 2016

DTSim at SemEval-2016 Task 2: Interpreting Similarity of Texts Based on Automated Chunking, Chunk Alignment and Semantic Relation Prediction

Rajendra Banjade; Nabin Maharjan; Nobal B. Niraula; Vasile Rus

In this paper we describe our system (DTSim) submitted at SemEval-2016 Task 2: Interpretable Semantic Textual Similarity (iSTS). We participated in both the gold chunks category (texts chunked by human experts and provided by the task organizers) and the system chunks category (participants had to automatically chunk the input texts). We developed a Conditional Random Fields based chunker and applied rules blended with semantic similarity methods in order to predict chunk alignments, alignment types and similarity scores. Our system obtained an F1 score of up to 0.648 in predicting the chunk alignment types and scores together and was one of the top performing systems overall.
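The chunk alignment step can be sketched as a greedy best-match search with a similarity threshold. The `token_overlap` similarity here is a toy stand-in for the semantic similarity methods such a system would actually blend, and the threshold value is illustrative.

```python
def align_chunks(chunks_a, chunks_b, sim, threshold=0.5):
    """Greedily align each chunk of sentence A to its most similar chunk
    in sentence B; chunks scoring below the threshold stay unaligned
    (marked with None, analogous to a NOALI label)."""
    alignment = []
    for a in chunks_a:
        best = max(chunks_b, key=lambda b: sim(a, b), default=None)
        if best is not None and sim(a, best) >= threshold:
            alignment.append((a, best, sim(a, best)))
        else:
            alignment.append((a, None, 0.0))
    return alignment

def token_overlap(a, b):
    """A trivial similarity: Jaccard overlap of lowercase tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0
```

A full iSTS system would additionally assign an alignment type (equivalence, specificity, opposition, etc.) and a 0-5 score to each aligned pair.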

Collaboration


Dive into Nobal B. Niraula's collaborations.

Top Co-Authors

Nam S Vo

University of Memphis
