Publication


Featured research published by Nicola Stokes.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2001

Combining semantic and syntactic document classifiers to improve first story detection

Nicola Stokes; Joe Carthy

In this paper we describe a type of data fusion involving the combination of evidence derived from multiple document representations. Our aim is to investigate whether a composite representation can improve the online detection of novel events in a stream of broadcast news stories. This classification process, known as first story detection (FSD) or, in the Topic Detection and Tracking pilot study, as online new event detection [1], is one of three main classification tasks defined by the TDT initiative. Our composite document representation consists of a semantic representation (based on the lexical chains derived from a text) and a syntactic representation (using proper nouns). Using the TDT1 evaluation methodology, we evaluate a number of document representation combinations using these document classifiers.
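
To make the fusion step concrete, here is a minimal sketch (not the system described in the paper) that scores an incoming story against all previously seen stories using a weighted combination of cosine similarities over a lexical-chain vector and a proper-noun vector, flagging the story as new when no earlier story exceeds a threshold. The extractor functions `chain_terms` and `proper_nouns`, the fusion weights, and the novelty threshold are all hypothetical placeholders.

```python
# Illustrative sketch of first story detection with a composite document
# representation: a semantic (lexical-chain) view fused with a syntactic
# (proper-noun) view. All weights and extractors are assumptions.
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def is_first_story(story, seen_stories, chain_terms, proper_nouns,
                   w_semantic=0.6, w_syntactic=0.4, threshold=0.2):
    """Flag a story as novel if its fused similarity to every earlier story
    stays below a decision threshold (online, single pass)."""
    sem = Counter(chain_terms(story))    # lexical-chain concepts (assumed extractor)
    syn = Counter(proper_nouns(story))   # proper-noun terms (assumed extractor)
    for old_sem, old_syn in seen_stories:
        score = w_semantic * cosine(sem, old_sem) + w_syntactic * cosine(syn, old_syn)
        if score >= threshold:
            return False                 # close to an earlier story: not a first story
    seen_stories.append((sem, syn))
    return True
```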


Information Retrieval | 2009

Exploring criteria for successful query expansion in the genomic domain

Nicola Stokes; Yi Li; Lawrence Cavedon; Justin Zobel

Query Expansion is commonly used in Information Retrieval to overcome vocabulary mismatch issues, such as synonymy between the original query terms and a relevant document. In general, query expansion experiments exhibit mixed results. Overall TREC Genomics Track results are also mixed; however, results from the top performing systems provide strong evidence supporting the need for expansion. In this paper, we examine the conditions necessary for optimal query expansion performance with respect to two system design issues: IR framework and knowledge source used for expansion. We present a query expansion framework that improves Okapi baseline passage MAP performance by 185%. Using this framework, we compare and contrast the effectiveness of a variety of biomedical knowledge sources used by TREC 2006 Genomics Track participants for expansion. Based on the outcome of these experiments, we discuss the success factors required for effective query expansion with respect to various sources of term expansion, such as corpus-based cooccurrence statistics, pseudo-relevance feedback methods, and domain-specific and domain-independent ontologies and databases. Our results show that choice of document ranking algorithm is the most important factor affecting retrieval performance on this dataset. In addition, when an appropriate ranking algorithm is used, we find that query expansion with domain-specific knowledge sources provides an equally substantive gain in performance over a baseline system.
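
As context for the expansion step, the sketch below shows a generic pseudo-relevance feedback loop rather than the paper's actual framework: an assumed baseline ranker retrieves the top passages and the query is grown with their most frequent unseen terms. The `rank` function and all parameter values are illustrative assumptions.

```python
# Generic pseudo-relevance feedback sketch (not the paper's framework):
# expand the query with frequent terms taken from the top-ranked documents
# returned by an assumed Okapi-style ranker.
from collections import Counter


def expand_query(query_terms, corpus, rank, top_docs=10, new_terms=5):
    """Add the most frequent non-query terms from the top-ranked documents.

    `rank(query_terms, corpus)` is a hypothetical ranker returning documents
    (each a list of tokens) in decreasing order of relevance.
    """
    ranked = rank(query_terms, corpus)[:top_docs]
    counts = Counter(t for doc in ranked for t in doc if t not in query_terms)
    expansion = [term for term, _ in counts.most_common(new_terms)]
    return list(query_terms) + expansion
```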


Information Retrieval | 2010

Document clustering of scientific texts using citation contexts

Bader Aljaber; Nicola Stokes; James Bailey; Jian Pei

Document clustering has many important applications in the area of data mining and information retrieval. Many existing document clustering techniques use the “bag-of-words” model to represent the content of a document. However, this representation is only effective for grouping related documents when these documents share a large proportion of lexically equivalent terms. In other words, instances of synonymy between related documents are ignored, which can reduce the effectiveness of applications using a standard full-text document representation. To address this problem, we present a new approach for clustering scientific documents, based on the utilization of citation contexts. A citation context is essentially the text surrounding the reference markers used to refer to other scientific works. We hypothesize that citation contexts will provide relevant synonymous and related vocabulary which will help increase the effectiveness of the bag-of-words representation. In this paper, we investigate the power of these citation-specific word features, and compare them with the original document’s textual representation in a document clustering task on two collections of labeled scientific journal papers from two distinct domains: High Energy Physics and Genomics. We also compare these text-based clustering techniques with a link-based clustering algorithm which determines the similarity between documents based on the number of co-citations, that is in-links represented by citing documents and out-links represented by cited documents. Our experimental results indicate that the use of citation contexts, when combined with the vocabulary in the full-text of the document, is a promising alternative means of capturing critical topics covered by journal articles. More specifically, this document representation strategy when used by the clustering algorithm investigated in this paper, outperforms both the full-text clustering approach and the link-based clustering technique on both scientific journal datasets.
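
A minimal sketch of the citation-context idea, assuming each paper's citation contexts have already been harvested from its citing documents: the contexts are simply concatenated onto the full text before TF-IDF vectorisation and k-means clustering. The paper's actual clustering algorithm and weighting scheme may differ.

```python
# Illustrative sketch: augment each paper's full text with the citation
# contexts used when other papers cite it, then cluster with TF-IDF and
# k-means. Input data structures are assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans


def cluster_with_citation_contexts(full_texts, citation_contexts, n_clusters):
    """full_texts[i] is a paper's body text; citation_contexts[i] is the
    concatenated text surrounding reference markers pointing at that paper."""
    combined = [body + " " + contexts
                for body, contexts in zip(full_texts, citation_contexts)]
    vectors = TfidfVectorizer(stop_words="english").fit_transform(combined)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)
```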


International Journal of Geographical Information Science | 2008

An empirical study of the effects of NLP components on Geographic IR performance

Nicola Stokes; Yi Li; Alistair Moffat; Jiawen Rong

Natural language processing (NLP) techniques, such as toponym detection and resolution, are an integral part of most geographic information retrieval (GIR) architectures. Without these components, synonym detection, ambiguity resolution and accurate toponym expansion would not be possible. However, there are many important factors affecting the success of an NLP approach to GIR, including toponym detection errors, toponym resolution errors and query overloading. The aim of this paper is to determine how severe these errors are in state-of-the-art systems, and to what extent they affect GIR performance. We show that a careful choice of weighting schemes in the IR engine can minimize the negative impact of these errors on GIR accuracy. We provide empirical evidence from the GeoCLEF 2005 and 2006 datasets to support our observations.
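
As a toy illustration of toponym expansion and the query-overloading risk mentioned above, the sketch below down-weights place names pulled from a hypothetical gazetteer. Real GIR systems first run toponym detection and resolution, and the weighting schemes studied in the paper are considerably more sophisticated.

```python
# Minimal sketch of toponym expansion with a hypothetical gazetteer mapping
# a place name to places it contains. The data and weights are toy examples.
GAZETTEER = {
    "ireland": ["dublin", "cork", "galway"],  # illustrative entries only
}


def expand_toponyms(query_terms, gazetteer=GAZETTEER, weight=0.5):
    """Return (term, weight) pairs: original terms at full weight, expanded
    place names down-weighted so they cannot overload the query."""
    expanded = [(t, 1.0) for t in query_terms]
    for term in query_terms:
        for place in gazetteer.get(term.lower(), []):
            expanded.append((place, weight))
    return expanded
```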


International Conference on Computational Linguistics | 2008

Investigating Statistical Techniques for Sentence-Level Event Classification

Martina Naughton; Nicola Stokes; Joe Carthy

The ability to correctly classify sentences that describe events is an important task for many natural language applications such as Question Answering (QA) and Summarisation. In this paper, we treat event detection as a sentence-level text classification problem. We compare the performance of two approaches to this task: a Support Vector Machine (SVM) classifier and a Language Modeling (LM) approach. We also investigate a rule-based method that uses hand-crafted lists of terms derived from WordNet. These terms are strongly associated with a given event type, and can be used to identify sentences describing instances of that type. We use two datasets in our experiments, and evaluate each technique on six distinct event types. Our results indicate that the SVM consistently outperforms the LM technique for this task. More interestingly, we discover that the manual rule-based classification system is a very powerful baseline that outperforms the SVM on three of the six event types.
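
To illustrate the two ends of the comparison, here is a rough sketch (assuming labelled training sentences and pre-built trigger lists, neither of which comes from the paper) of a bag-of-words SVM sentence classifier alongside a trigger-term rule baseline.

```python
# Illustrative sketch of a statistical and a rule-based sentence-level event
# classifier. Training data and trigger lists are assumptions; the paper's
# features and WordNet-derived lists are richer than this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline


def train_svm(sentences, labels):
    """labels[i] is 1 if sentences[i] describes the target event type."""
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    return model.fit(sentences, labels)


def rule_based(sentence, triggers):
    """Rule baseline: fire if any hand-crafted trigger term appears."""
    tokens = set(sentence.lower().split())
    return int(any(t in tokens for t in triggers))


# Toy usage: triggers for a "Die" event type might include words such as
# {"died", "killed", "fatal"}; real lists are derived from WordNet.
```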


International Conference on Human Language Technology Research | 2001

First story detection using a composite document representation

Nicola Stokes; Joe Carthy

In this paper, we explore the effects of data fusion on First Story Detection [1] in a broadcast news domain. The data fusion element of this experiment involves the combination of evidence derived from two distinct representations of document content in a single cluster run. Our composite document representation consists of a concept representation (based on the lexical chains derived from a text) and a free-text representation (using traditional keyword index terms). Using the TDT1 evaluation methodology, we evaluate a number of document representation strategies and propose reasons why our data fusion experiment shows performance improvements in the TDT domain.


Information Retrieval | 2010

Sentence-level event classification in unstructured texts

Martina Naughton; Nicola Stokes; Joe Carthy

The ability to correctly classify sentences that describe events is an important task for many natural language applications such as Question Answering (QA) and Text Summarisation. In this paper, we treat event detection as a sentence level text classification problem. Overall, we compare the performance of discriminative versus generative approaches to this task: namely, a Support Vector Machine (SVM) classifier versus a Language Modeling (LM) approach. We also investigate a rule-based method that uses handcrafted lists of ‘trigger’ terms derived from WordNet. Two datasets are used in our experiments to test each approach on six different event types, i.e., Die, Attack, Injure, Meet, Transport and Charge-Indict. Our experimental results show that the trained SVM classifier significantly outperforms the simple rule-based system and language modeling approach on both datasets: ACE (F1 66% vs. 45% and 38%, respectively) and IBC (F1 92% vs. 88% and 74%, respectively). A detailed error analysis framework for the task is also provided which separates errors into different types: semantic, inference, continuous and trigger-less.


Nucleic Acids Research | 2009

UniPrime2: a web service providing easier Universal Primer design

Robin Boutros; Nicola Stokes; Michaël Bekaert; Emma C. Teeling

The UniPrime2 web server is a publicly available online resource which automatically designs large sets of universal primers when given a gene reference ID or Fasta sequence input by a user. UniPrime2 works by automatically retrieving and aligning homologous sequences from GenBank, identifying regions of conservation within the alignment, and generating suitable primers that can be used to amplify variable genomic regions. In essence, UniPrime2 is a suite of publicly available software packages (Blastn, T-Coffee, GramAlign, Primer3), which reduces the laborious process of primer design by integrating these programs into a single software pipeline. Hence, UniPrime2 differs from previous primer design web services in that all steps are automated, linked, saved and phylogenetically delimited, only requiring a single user-defined gene reference ID or input sequence. We provide an overview of the web service and wet-laboratory validation of the primers generated. The system is freely accessible at: http://uniprime.batlab.eu. UniPrime2 is licensed under a Creative Commons Attribution Noncommercial-Share Alike 3.0 Licence.


North American Chapter of the Association for Computational Linguistics | 2003

Spoken and written news story segmentation using lexical chains

Nicola Stokes

In this paper we describe a novel approach to lexical chain-based segmentation of broadcast news stories. Our segmentation system, SeLeCT, is evaluated with respect to two other lexical cohesion-based segmenters, TextTiling and C99. Using the Pk and WindowDiff evaluation metrics, we show that SeLeCT outperforms both systems on spoken news transcripts (CNN), while the C99 algorithm performs best on the written newswire collection (Reuters). We also examine the differences between spoken and written news styles and how these differences can affect segmentation accuracy.
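
For reference, here is a sketch of the WindowDiff metric used in this evaluation (Pevzner and Hearst, 2002), with segmentations encoded as 0/1 boundary vectors; the default window size follows the usual convention of half the average reference segment length.

```python
# WindowDiff segmentation metric (sketch). boundaries[i] == 1 means a
# segment break occurs immediately after sentence i; lower scores are better.
def window_diff(reference, hypothesis, k=None):
    """Fraction of sliding windows in which the hypothesis places a different
    number of boundaries than the reference."""
    n = len(reference)
    assert len(hypothesis) == n, "segmentations must cover the same units"
    if k is None:
        # conventional choice: half the average reference segment length
        k = max(1, n // (2 * max(1, sum(reference))))
    errors = 0
    for i in range(n - k):
        ref_b = sum(reference[i:i + k])   # boundaries inside the reference window
        hyp_b = sum(hypothesis[i:i + k])  # boundaries inside the hypothesis window
        if ref_b != hyp_b:
            errors += 1
    return errors / (n - k)
```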


Conference on Intelligent Text Processing and Computational Linguistics | 2004

Assessing the Impact of Lexical Chain Scoring Methods and Sentence Extraction Schemes on Summarization

William P. Doran; Nicola Stokes; Joe Carthy; John Dunnion

We present a comparative study of lexical chain-based summarisation techniques. The aim of this paper is to highlight the effect of lexical chain scoring metrics and sentence extraction techniques on summary generation. We present our own lexical chain-based summarisation system and compare it to other chain-based summarisation systems. We also compare the chain scoring and extraction techniques of our system to those of several other baseline systems, including a random summariser and one based on tf.idf statistics. We use a task-oriented summarisation evaluation scheme that determines summary quality based on TDT story link detection performance.
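
To give a flavour of what chain scoring and sentence extraction mean in practice, the sketch below uses one common scheme (chain strength as length times homogeneity, then extraction of the sentences densest in strong-chain members). It is an illustrative combination under those assumptions, not the specific metrics compared in the paper.

```python
# Illustrative lexical-chain scoring and sentence extraction sketch.
def chain_score(chain_terms):
    """chain_terms: list of (possibly repeated) words in one lexical chain.
    Score = length * homogeneity, a commonly used chain-strength heuristic."""
    if not chain_terms:
        return 0.0
    length = len(chain_terms)
    homogeneity = 1.0 - len(set(chain_terms)) / length
    return length * homogeneity


def extract_summary(sentences, chains, n_sentences=3):
    """Pick the sentences that mention the most members of strong chains."""
    strong = sorted(chains, key=chain_score, reverse=True)[: max(1, len(chains) // 3)]
    strong_terms = {t for chain in strong for t in chain}
    scored = sorted(sentences,
                    key=lambda s: sum(w.lower() in strong_terms for w in s.split()),
                    reverse=True)
    return scored[:n_sentences]
```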

Collaboration


Dive into Nicola Stokes's collaborations.

Top Co-Authors

Joe Carthy (University College Dublin)
John Dunnion (University College Dublin)
Yi Li (University of Melbourne)
James Cogley (University College Dublin)
Ruichao Wang (University College Dublin)
Paula Hatch (University College Dublin)