Featured Research

Information Retrieval

Automatic Extraction of Agriculture Terms from Domain Text: A Survey of Tools and Techniques

Agriculture is a key component of any country's development. Domain-specific knowledge resources help capture and organize insight into the domain. Existing knowledge resources such as AGROVOC and the NAL Thesaurus are developed and maintained by domain experts. Populating these resources with terms can be automated by applying automatic term extraction tools to unstructured agricultural text. Automatic term extraction is also a key component in many semantic web applications, such as ontology creation, recommendation systems, sentiment classification, and query expansion, among others. The primary goal of an automatic term extraction system is to maximize the number of valid terms and minimize the number of invalid terms extracted from the input set of documents. Despite its importance in various applications, the availability of online tools for this purpose is rather limited. Moreover, the performance of the most popular tools varies significantly. As a consequence, selecting the right term extraction tool is perceived as a serious problem for different knowledge-based applications. This paper presents an analysis of three commonly used term extraction tools, viz. RAKE, TerMine, and TermRaider, and compares their performance in terms of precision and recall, vis-à-vis RENT, a more recent term extractor developed by the authors for the agriculture domain.
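
A minimal sketch of the kind of comparison the paper describes, assuming the rake_nltk package and a hypothetical expert-provided gold term list: extract candidate terms from a short agricultural passage with RAKE and score them with precision and recall.

```python
# Sketch: candidate term extraction with RAKE (rake_nltk), scored against a
# hypothetical gold-standard term list. Requires the NLTK stopwords/punkt data.
from rake_nltk import Rake

document = (
    "Integrated pest management and crop rotation improve soil fertility "
    "and reduce the need for chemical fertilizers in smallholder farms."
)

rake = Rake()                          # uses NLTK English stopwords by default
rake.extract_keywords_from_text(document)
candidates = set(rake.get_ranked_phrases())

# Hypothetical gold-standard terms a domain expert might provide.
gold_terms = {"integrated pest management", "crop rotation",
              "soil fertility", "chemical fertilizers"}

valid = candidates & gold_terms
precision = len(valid) / len(candidates) if candidates else 0.0
recall = len(valid) / len(gold_terms)
print(f"precision={precision:.2f} recall={recall:.2f}")
```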

Read more
Information Retrieval

BATS: A Spectral Biclustering Approach to Single Document Topic Modeling and Segmentation

Existing topic modeling and text segmentation methodologies generally require large datasets for training, limiting their capabilities when only small collections of text are available. In this work, we reexamine the inter-related problems of "topic identification" and "text segmentation" for sparse document learning, when there is a single new text of interest. In developing a methodology to handle single documents, we face two major challenges. First is sparse information: with access to only one document, we cannot train traditional topic models or deep learning algorithms. Second is significant noise: a considerable portion of words in any single document will produce only noise and not help discern topics or segments. To tackle these issues, we design an unsupervised, computationally efficient methodology called BATS: Biclustering Approach to Topic modeling and Segmentation. BATS leverages three key ideas to simultaneously identify topics and segment text: (i) a new mechanism that uses word order information to reduce sample complexity, (ii) a statistically sound graph-based biclustering technique that identifies latent structures of words and sentences, and (iii) a collection of effective heuristics that remove noise words and up-weight important words to further improve performance. Experiments on four datasets show that our approach outperforms several state-of-the-art baselines on topic coherence, topic diversity, segmentation quality, and runtime.
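
A minimal sketch of the biclustering idea (not the authors' BATS code): build a sentence-term count matrix for a single document and co-cluster its rows (sentences) and columns (words) with scikit-learn, so each bicluster plays the role of a topic that also groups sentences into rough segments.

```python
# Sketch: spectral co-clustering of a sentence-term matrix from one document.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import SpectralCoclustering

sentences = [
    "The rover transmitted new images of the crater rim.",
    "Engineers adjusted the rover antenna to improve the signal.",
    "The recipe combines tomatoes basil and olive oil.",
    "Simmer the sauce slowly so the basil flavour develops.",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)          # rows: sentences, columns: words

model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(X)

terms = vectorizer.get_feature_names_out()
for k in range(2):
    topic_words = terms[model.column_labels_ == k]
    member_sents = np.where(model.row_labels_ == k)[0]
    print(f"topic {k}: words={list(topic_words)[:5]} sentences={member_sents}")
```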

Read more
Information Retrieval

BERT-QE: Contextualized Query Expansion for Document Re-ranking

Query expansion aims to mitigate the mismatch between the language used in a query and in a document. However, query expansion methods can suffer from introducing non-relevant information when expanding the query. To address this issue, inspired by recent advances in applying contextualized models such as BERT to the document retrieval task, this paper proposes a novel query expansion model that leverages the strength of the BERT model to select relevant document chunks for expansion. In evaluations on the standard TREC Robust04 and GOV2 test collections, the proposed BERT-QE model significantly outperforms BERT-Large models.
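
A minimal sketch of the chunk-selection step, not the authors' exact pipeline: split top-ranked feedback documents into chunks, score each chunk against the query with a BERT-style cross-encoder (the checkpoint name below is an assumption), and keep the highest-scoring chunks as expansion evidence.

```python
# Sketch: selecting relevant chunks from feedback documents for query expansion.
from sentence_transformers import CrossEncoder

query = "effects of soil salinity on wheat yield"
feedback_docs = [
    "Wheat yield declines sharply when soil salinity exceeds crop tolerance. "
    "Irrigation scheduling and drainage are common mitigation strategies.",
    "The tournament final was decided in extra time after a late equaliser.",
]

def chunks(text, size=12, stride=6):
    """Split a document into overlapping word chunks."""
    words = text.split()
    for i in range(0, max(len(words) - size + 1, 1), stride):
        yield " ".join(words[i:i + size])

candidates = [c for doc in feedback_docs for c in chunks(doc)]

scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # assumed checkpoint
scores = scorer.predict([(query, c) for c in candidates])

# Keep the top-2 chunks to enrich the original query representation.
top_chunks = [c for _, c in sorted(zip(scores, candidates), reverse=True)[:2]]
print(top_chunks)
```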

Read more
Information Retrieval

BERTERS: Multimodal Representation Learning for Expert Recommendation System with Transformer

The objective of an expert recommendation system is to trace a set of candidates' expertise and preferences, recognize their expertise patterns, and identify experts. In this paper, we introduce a multimodal classification approach for expert recommendation, called BERTERS. In the proposed system, the modalities are derived from text (articles published by candidates) and graph (their co-author connections) information. BERTERS converts text into a vector using Bidirectional Encoder Representations from Transformers (BERT). In addition, a graph representation technique called ExEm is used to extract candidate features from the co-author network. The final representation of a candidate is the concatenation of these vectors and other features, and a classifier is then built on the concatenated features. This multimodal approach can be applied both in academic communities and in community question answering. To verify the effectiveness of BERTERS, we analyze its performance on multi-label classification and visualization tasks.
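
A minimal sketch of the fusion step, for illustration only: encode a candidate's publications with a BERT-based sentence encoder, concatenate the text vector with a graph embedding (here simulated with random values in place of ExEm output), and train a simple classifier on the fused features. The encoder checkpoint and the expertise labels are assumptions.

```python
# Sketch: multimodal feature concatenation (text embedding + graph embedding).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = ["neural ranking models for web search",
         "graph embeddings for community detection"]
labels = [0, 1]                                    # hypothetical expertise labels

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint
text_vecs = encoder.encode(texts)                  # shape: (n, 384)

rng = np.random.default_rng(0)
graph_vecs = rng.normal(size=(len(texts), 64))     # stand-in for ExEm vectors

features = np.hstack([text_vecs, graph_vecs])      # multimodal concatenation
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```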

Read more
Information Retrieval

Balancing Reinforcement Learning Training Experiences in Interactive Information Retrieval

Interactive Information Retrieval (IIR) and Reinforcement Learning (RL) share many commonalities, including an agent that learns while it interacts, a long-term and complex goal, and an algorithm that explores and adapts. To successfully apply RL methods to IIR, one challenge is to obtain sufficient relevance labels to train the RL agents, which are notoriously sample-inefficient. However, in a text corpus annotated for a given query, it is not the relevant documents but the irrelevant documents that predominate. This causes highly unbalanced training experiences for the agent and prevents it from learning an effective policy. Our paper addresses this issue by using domain randomization to synthesize more relevant documents for training. Our experimental results on the Text REtrieval Conference (TREC) Dynamic Domain (DD) 2017 Track show that the proposed method is able to boost an RL agent's learning effectiveness by 22% in dealing with unseen situations.
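
A minimal sketch of the balancing idea under simplified assumptions: the paper's technique is domain randomization, and this toy version only perturbs existing relevant documents (random word dropout and shuffling) to synthesize new ones until the training pool is balanced.

```python
# Sketch: synthesizing relevant documents to balance an RL training pool.
import random

relevant = ["solar panel subsidies policy 2017 overview"]
irrelevant = ["football transfer rumours", "celebrity gossip roundup",
              "weekend weather forecast", "new smartphone reviews"]

def randomize(doc, drop_prob=0.2, seed=None):
    """Synthesize a variant by randomly dropping and shuffling words."""
    rng = random.Random(seed)
    words = [w for w in doc.split() if rng.random() > drop_prob]
    rng.shuffle(words)
    return " ".join(words)

synthetic = [randomize(random.choice(relevant), seed=i)
             for i in range(len(irrelevant) - len(relevant))]

training_pool = [(d, 1) for d in relevant + synthetic] + \
                [(d, 0) for d in irrelevant]
n_rel = sum(label for _, label in training_pool)
print(f"{n_rel} relevant vs {len(training_pool) - n_rel} irrelevant")
```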

Read more
Information Retrieval

Beyond Lexical: A Semantic Retrieval Framework for Textual Search Engine

Search engines have become a fundamental component of various web and mobile applications. Retrieving relevant documents from massive datasets is challenging for a search engine system, especially when faced with verbose or tail queries. In this paper, we explore a vector space search framework for document retrieval. Specifically, we trained a deep semantic matching model so that each query and document can be encoded as a low-dimensional embedding. Our model was built on the BERT architecture. We deployed a fast k-nearest-neighbor index service for online serving. Both offline and online metrics demonstrate that our method improved retrieval performance and search quality considerably, particularly for tail queries.
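
A minimal sketch of this kind of retrieval setup, using assumed components rather than the authors' production system: encode queries and documents into a shared embedding space with a BERT-based encoder and serve top-k results with a fast nearest-neighbour index (FAISS here).

```python
# Sketch: dense retrieval with a sentence encoder and a FAISS inner-product index.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = ["how to renew an expired passport quickly",
        "best hiking trails near the city in autumn",
        "symptoms and treatment of seasonal allergies"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed checkpoint
doc_vecs = encoder.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(doc_vecs.shape[1])        # inner product = cosine here
index.add(doc_vecs)

query = "renewing a passport that has already expired"
q_vec = encoder.encode([query], normalize_embeddings=True).astype("float32")
scores, ids = index.search(q_vec, 2)                # top-2 nearest neighbours
print([(docs[i], float(s)) for i, s in zip(ids[0], scores[0])])
```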

Read more
Information Retrieval

Beyond Next Item Recommendation: Recommending and Evaluating List of Sequences

Recommender systems (RS) suggest items based on the estimated preferences of users. Recent RS methods utilise vector space embeddings and deep learning methods to make efficient recommendations. However, most of these methods overlook the sequentiality feature and consider each interaction, e.g., a check-in, independently of the others. The proposed method considers the sequentiality of users' interactions with items and uses them to recommend lists of multi-item sequences. The proposed method uses FastText (Bojanowski et al., 2016), a well-known technique in natural language processing (NLP), to model the relationship among the subunits of sequences, e.g., tracks in playlists, and utilises the trained representation as input to a traditional recommendation method. The recommended lists of multi-item sequences are evaluated by the ROUGE metric (Lin and Hovy, 2003; Lin, 2004), which is also commonly used in the NLP literature. The current experimental results reveal that it is possible to recommend a list of multi-item sequences, in addition to traditional next-item recommendation. Also, the use of FastText, which utilises sub-units of the input sequences, helps to overcome the cold-start user problem.
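
A minimal sketch of the idea, not the authors' exact setup: treat playlists as "sentences" of track IDs, train FastText on them with gensim, expand a seed track into a short recommended sequence by nearest neighbours, and compare it to a hypothetical held-out continuation with ROUGE-1 via the rouge_score package.

```python
# Sketch: FastText over playlists of track IDs, evaluated with ROUGE-1.
from gensim.models import FastText
from rouge_score import rouge_scorer

playlists = [["track_a", "track_b", "track_c"],
             ["track_a", "track_c", "track_d"],
             ["track_e", "track_f", "track_g"]]

model = FastText(sentences=playlists, vector_size=16, window=2,
                 min_count=1, epochs=100, seed=1)

# Recommend a short sequence by expanding from a seed track.
recommended = [t for t, _ in model.wv.most_similar("track_a", topn=3)]

reference = "track_b track_c track_d"        # hypothetical held-out continuation
scores = rouge_scorer.RougeScorer(["rouge1"]).score(reference, " ".join(recommended))
print(recommended, scores["rouge1"].fmeasure)
```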

Read more
Information Retrieval

Boosting Retailer Revenue by Generated Optimized Combined Multiple Digital Marketing Campaigns

Campaigns are a frequently employed instrument for lifting the GMV (Gross Merchandise Volume) of retailers in traditional marketing. As their counterpart in the online context, digital marketing campaigns (DMCs) have been trending in recent years with the rapid development of e-commerce. However, how to give the massive number of sellers on an online retailing platform the capacity to apply combinations of multiple DMCs to boost their shops' revenue is still a novel topic. In this work, a comprehensive solution for generating optimized combinations of multiple DMCs is presented. Firstly, a potential personalized DMC pool is generated for every retailer by a newly proposed neural network model, DMCNet (Digital Marketing Campaign Net). Secondly, based on submodular optimization theory and the DMC pool produced by DMCNet, the candidate campaign combinations are ranked by their revenue generation strength, and the top three are returned to the sellers' back-end management system, so that retailers can set up combined multiple DMCs for their online shops in one shot. A real online A/B test shows that, with the integrated solution, sellers on the online retailing platform increase their shops' GMVs by approximately 6%.
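
A minimal sketch of the greedy selection step typically used with submodular objectives; the campaign scores and the overlap penalty below are illustrative assumptions, not DMCNet outputs. It picks the three campaigns whose combination adds the most value while discounting redundant campaign types.

```python
# Sketch: greedy selection of a top-3 campaign combination with diminishing returns.
campaign_scores = {"flash_sale": 0.9, "coupon_10": 0.7, "coupon_20": 0.65,
                   "free_shipping": 0.5, "bundle_deal": 0.45}
overlap = {("coupon_10", "coupon_20"): 0.5}       # assumed redundancy penalty

def marginal_gain(candidate, selected):
    """Value the candidate adds given already selected campaigns."""
    gain = campaign_scores[candidate]
    for s in selected:
        pair = tuple(sorted((candidate, s)))
        gain -= overlap.get(pair, 0.0)            # diminishing returns for overlap
    return gain

selected = []
while len(selected) < 3:
    best = max((c for c in campaign_scores if c not in selected),
               key=lambda c: marginal_gain(c, selected))
    selected.append(best)
print(selected)   # ['flash_sale', 'coupon_10', 'free_shipping']
```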

Read more
Information Retrieval

Bootstrapping Large-Scale Fine-Grained Contextual Advertising Classifier from Wikipedia

Contextual advertising provides advertisers with the opportunity to target the context that is most relevant to their ads. However, its power cannot be fully utilized unless we can target page content using fine-grained categories, e.g., "coupe" vs. "hatchback" instead of "automotive" vs. "sport". The widely used advertising content taxonomy (IAB taxonomy) consists of 23 coarse-grained categories and 355 fine-grained categories. With such a large number of categories, it becomes very challenging either to collect training documents to build a supervised classification model, or to compose expert-written rules for a rule-based classification system. Moreover, in fine-grained classification, different categories often overlap or co-occur, making accurate classification harder. In this work, we propose wiki2cat, a method that tackles large-scale fine-grained text classification by tapping into the Wikipedia category graph. The categories in the IAB taxonomy are first mapped to category nodes in the graph. Labels are then propagated across the graph to obtain labeled Wikipedia documents, which are used to induce text classifiers. The method is well suited to large-scale classification problems since it requires neither manually labeled documents nor hand-curated rules or keywords. The proposed method is benchmarked against various learning-based and keyword-based baselines and yields competitive performance on both publicly available datasets and a new dataset containing more than 300 fine-grained categories.
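
A minimal sketch of the propagation idea on a toy graph (the node names and structure are illustrative, not the real Wikipedia dump): map each label to seed category nodes, then traverse the category graph to collect weakly labeled pages that could later train a text classifier.

```python
# Sketch: propagating labels from seed categories over a toy category graph.
import networkx as nx

g = nx.DiGraph()     # edges point from category to subcategory or page
g.add_edges_from([
    ("Coupes", "Category:Sports coupes"), ("Category:Sports coupes", "Toyota 86"),
    ("Hatchbacks", "Category:Hot hatches"), ("Category:Hot hatches", "VW Golf GTI"),
])
seeds = {"coupe": ["Coupes"], "hatchback": ["Hatchbacks"]}

labeled_pages = {}
for label, roots in seeds.items():
    reachable = set()
    for root in roots:
        reachable |= nx.descendants(g, root)          # propagate label downwards
    # keep only leaf pages (no outgoing edges) as weakly labeled documents
    labeled_pages[label] = [n for n in reachable if g.out_degree(n) == 0]

print(labeled_pages)   # {'coupe': ['Toyota 86'], 'hatchback': ['VW Golf GTI']}
```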

Read more
Information Retrieval

Building benchmarking frameworks for supporting replicability and reproducibility: spatial and textual analysis as an example

Replicability and reproducibility (R&R) are critical for the long-term prosperity of a scientific discipline. In GIScience, researchers have discussed R&R in relation to different research topics and problems, such as local spatial statistics, digital earth, and metadata (Fotheringham, 2009; Goodchild, 2012; Anselin et al., 2014). This position paper proposes to further support R&R by building benchmarking frameworks that facilitate the replication of previous research and enable effective and efficient comparisons of methods and software tools developed for the same or similar problems. In particular, the paper uses geoparsing, an important research problem in spatial and textual analysis, as an example to explain the value of such benchmarking frameworks.
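
A minimal sketch of what one piece of such a geoparsing benchmark could look like, with hypothetical gold data and system output: score a geoparser's recognised toponyms with precision and recall and report its mean geocoding error, so different tools can be compared on the same corpus.

```python
# Sketch: scoring a geoparser against a tiny hypothetical gold standard.
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

gold = {"Paris": (48.8566, 2.3522), "Springfield": (39.7817, -89.6501)}
system_output = {"Paris": (48.85, 2.35), "Springfield": (37.2090, -93.2923)}  # hypothetical

hits = set(gold) & set(system_output)
precision = len(hits) / len(system_output)
recall = len(hits) / len(gold)
mean_error = sum(haversine_km(gold[t], system_output[t]) for t in hits) / len(hits)
print(f"P={precision:.2f} R={recall:.2f} mean geocoding error={mean_error:.1f} km")
```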

Read more
