
Publication


Featured research published by Jonathan Kilgour.


Behavior Research Methods, Instruments, & Computers | 2003

The NITE XML Toolkit: Flexible annotation for multimodal language data

Jean Carletta; Stefan Evert; Ulrich Heid; Jonathan Kilgour; Judy Robertson; Holger Voormann

Multimodal corpora that show humans interacting via language are now relatively easy to collect. Current tools allow one either to apply sets of time-stamped codes to the data and consider their timing and sequencing or to describe some specific linguistic structure that is present in the data, built over the top of some form of transcription. To further our understanding of human communication, the research community needs code sets with both timings and structure, designed flexibly to address the research questions at hand. The NITE XML Toolkit offers library support that software developers can call upon when writing tools for such code sets and, thus, enables richer analyses than have previously been possible. It includes data handling, a query language containing both structural and temporal constructs, components that can be used to build graphical interfaces, sample programs that demonstrate how to use the libraries, a tool for running queries, and an experimental engine that builds interfaces on the basis of declarative specifications.


Language Resources and Evaluation | 2005

The NITE XML Toolkit: Data Model and Query Language

Jean Carletta; Stefan Evert; Ulrich Heid; Jonathan Kilgour

The NITE XML Toolkit (NXT) is open source software for working with language corpora, with particular strengths for multimodal and heavily cross-annotated data sets. In NXT, annotations are described by types and attribute value pairs, and can relate to signal via start and end times, to representations of the external environment, and to each other via either an arbitrary graph structure or a multi-rooted tree structure characterized by both temporal and structural orderings. Simple queries in NXT express variable bindings for n-tuples of objects, optionally constrained by type, and give a set of conditions on the n-tuples combined with boolean operators. The defined operators for the condition tests allow full access to the timing and structural properties of the data model. A complex query facility passes variable bindings from one query to another for filtering, returning a tree structure. In addition to describing NXT's core data handling and search capabilities, we explain the stand-off XML data storage format that it employs and illustrate its use with examples from an early adopter of the technology.
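The data model the abstract describes (typed annotations carrying attribute-value pairs, aligned to signal via start and end times, queried by type and attribute constraints) can be illustrated with a minimal sketch. This is a toy stand-in, not the real NXT API; all names and the sample annotations are invented for illustration.

```python
# Toy sketch of stand-off, time-aligned annotations in the spirit of
# NXT's data model: typed elements with attribute-value pairs and
# start/end times on the signal. Not the actual NXT API.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    ann_type: str                  # e.g. "word", "dialogue-act"
    start: float                   # start time in seconds on the signal
    end: float                     # end time in seconds on the signal
    attrs: dict = field(default_factory=dict)

def match(annotations, ann_type=None, **attr_constraints):
    """Return annotations of a given type whose attributes match;
    a toy stand-in for a one-variable NXT-style query."""
    out = []
    for a in annotations:
        if ann_type is not None and a.ann_type != ann_type:
            continue
        if all(a.attrs.get(k) == v for k, v in attr_constraints.items()):
            out.append(a)
    return out

corpus = [
    Annotation("word", 0.00, 0.31, {"orth": "right", "pos": "UH"}),
    Annotation("word", 0.31, 0.52, {"orth": "so", "pos": "RB"}),
    Annotation("dialogue-act", 0.00, 0.52, {"da-type": "acknowledge"}),
]

# All words tagged as interjections (UH):
print([a.attrs["orth"] for a in match(corpus, ann_type="word", pos="UH")])
```

NXT's actual query language is far richer (temporal and structural operators, multi-variable bindings), but the type-plus-attribute filtering shown here is the basic shape of a simple query.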


ACM Transactions on Speech and Language Processing | 2009

Extrinsic summarization evaluation: A decision audit task

Gabriel Murray; Thomas Kleinbauer; Peter Poller; Tilman Becker; Steve Renals; Jonathan Kilgour

In this work we describe a large-scale extrinsic evaluation of automatic speech summarization technologies for meeting speech. The particular task is a decision audit, wherein a user must satisfy a complex information need, navigating several meetings in order to gain an understanding of how and why a given decision was made. We compare the usefulness of extractive and abstractive technologies in satisfying this information need, and assess the impact of automatic speech recognition (ASR) errors on user performance. We employ several evaluation methods for participant performance, including post-questionnaire data, human subjective and objective judgments, and a detailed analysis of participant browsing behavior. We find that while ASR errors affect user satisfaction on an information retrieval task, users can adapt their browsing behavior to complete the task satisfactorily. Results also indicate that users consider extractive summaries to be intuitive and useful tools for browsing multimodal meeting data. We discuss areas in which automatic summarization techniques can be improved in comparison with gold-standard meeting abstracts.


International Conference on Machine Learning | 2008

The AMIDA Automatic Content Linking Device: Just-in-Time Document Retrieval in Meetings

Andrei Popescu-Belis; Erik Boertjes; Jonathan Kilgour; Peter Poller; Sandro Castronovo; Theresa Wilson; Alejandro Jaimes; Jean Carletta

The AMIDA Automatic Content Linking Device (ACLD) is a just-in-time document retrieval system for meeting environments. The ACLD listens to a meeting and displays information about the documents from the group's history that are most relevant to what is being said. Participants can view an outline or the entire content of the documents, if they feel that these documents are potentially useful at that moment of the meeting. The ACLD proof-of-concept prototype places meeting-related documents and segments of previously recorded meetings in a repository and indexes them. During a meeting, the ACLD continually retrieves the documents that are most relevant to keywords found automatically using the current meeting speech. The current prototype simulates the real-time speech recognition that will be available in the near future. The software components required to achieve these functions communicate using the Hub, a client/server architecture for annotation exchange and storage in real time. Results and feedback for the first ACLD prototype are outlined, together with plans for its future development within the AMIDA EU integrated project. Potential users of the ACLD supported the overall concept, and provided feedback to improve the user interface and to access documents beyond the group's own history.
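The just-in-time linking idea (keywords spotted in the meeting speech ranking documents from an indexed repository) can be sketched in a few lines. The repository contents, file names, and scoring below are invented for illustration and are far simpler than the ACLD's actual retrieval components.

```python
# Toy sketch of just-in-time document linking: keywords found in the
# current meeting speech are matched against an indexed repository,
# and the best-overlapping documents are surfaced. The documents and
# scoring here are invented for illustration.
from collections import Counter

repository = {
    "kickoff-minutes.txt": "remote control design budget schedule",
    "design-spec.pdf": "remote control buttons speech recognition interface",
    "budget-memo.txt": "budget costs component prices",
}

# Term sets per document, built once when documents are indexed.
index = {doc: set(text.split()) for doc, text in repository.items()}

def link_documents(asr_keywords, top_n=2):
    """Rank documents by keyword overlap with the current speech."""
    scores = Counter({doc: len(terms & set(asr_keywords))
                      for doc, terms in index.items()})
    return [doc for doc, score in scores.most_common(top_n) if score > 0]

print(link_documents(["budget", "schedule", "remote"]))
```

A deployed system would use weighted retrieval over a proper index rather than raw overlap counts, but the loop (listen, extract keywords, re-rank, display) is the same.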


International Conference on Machine Learning | 2004

The NITE XML toolkit meets the ICSI meeting corpus: import, annotation, and browsing

Jean Carletta; Jonathan Kilgour

The NITE XML Toolkit (NXT) provides library support for working with multimodal language corpora. We describe work in progress to explore its potential for the AMI project by applying it to the ICSI Meeting Corpus. We discuss converting existing data into the NXT data format; using NXT's query facility to explore the corpus; hand-annotation and automatic indexing; and the integration of data obtained by applying NXT-external processes such as parsers. Finally, we describe use of NXT as a meeting browser itself, and how it can be used to integrate other browser components.


ACM Symposium on Applied Computing | 1998

Automatic generation of diagrammatic Web site maps

Robert Inder; Jonathan Kilgour; John Lee

In this paper we outline a site map application on the World Wide Web, whose aim is to improve navigation and orientation within the web site of the Human Communication Research Centre (HCRC). The map takes the form of a graphical fisheye view of the web site with the current node as the single focus. We discuss the semiautomatic process whereby the site map description is produced, describe how it is turned into a relevant map, and discuss reaction and usage patterns. We outline how the approach can be extended in three future directions: affording the user greater control of the graph drawing variables; covering more of the site, and indeed the web in the large; and using the user's interaction history to constrain the site map display further.
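A graphical fisheye view of this kind is typically driven by a degree-of-interest score that falls off with distance from the focus node, in the spirit of Furnas' DOI = API - distance formulation. The sketch below illustrates that idea on an invented site graph; it is not the HCRC system's actual algorithm.

```python
# Sketch of the fisheye-view idea: each page gets a degree of interest
# (DOI) equal to its a-priori importance (API) minus its link distance
# from the focus page; high-DOI pages are drawn larger or closer.
# The site graph and API values are invented for illustration.
from collections import deque

site = {  # adjacency list: page -> linked pages
    "home": ["about", "research", "people"],
    "research": ["projects", "publications"],
    "about": [], "people": [], "projects": [], "publications": [],
}

def distances(graph, focus):
    """Breadth-first hop counts from the focus page."""
    dist = {focus: 0}
    queue = deque([focus])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def degree_of_interest(graph, focus, api):
    """DOI(x) = API(x) - distance(focus, x), for reachable pages."""
    return {page: api.get(page, 0) - d
            for page, d in distances(graph, focus).items()}

api = {"home": 3, "research": 2}  # e.g. importance from site hierarchy
print(degree_of_interest(site, "home", api))
```

Thresholding the resulting DOI values decides which pages appear on the map at all and at what visual prominence, which is what gives the single-focus fisheye its shape.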


Proceedings of the 2010 International Workshop on Searching Spontaneous Conversational Speech | 2010

The ambient spotlight: queryless desktop search from meeting speech

Jonathan Kilgour; Jean Carletta; Steve Renals

It has recently become possible to record any small meeting using a laptop equipped with a plug-and-play USB microphone array. We show the potential for such recordings in a personal aid that allows project managers to record their meetings and, when reviewing them afterwards through a standard calendar interface, to find relevant documents on their computer. This interface is intended to supplement or replace the textual searches that managers typically perform. The prototype, which relies on meeting speech recognition and topic segmentation, formulates and runs desktop search queries in order to present its results.
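The "queryless" step (turning a topic segment of recognized speech into a desktop search query with no user input) can be sketched simply: pick the most salient non-stopword terms from the segment. The stopword list and transcript below are invented for illustration and stand in for the prototype's real query formulation.

```python
# Toy sketch of queryless search: a topic segment's most frequent
# content words become the desktop search query. Stopword list and
# transcript are invented for illustration.
from collections import Counter

STOPWORDS = {"the", "a", "we", "to", "of", "and", "i", "think", "so",
             "it", "is", "that", "should"}

def segment_query(segment_words, n_terms=3):
    """Pick the n most frequent non-stopwords as the search query."""
    counts = Counter(w for w in segment_words if w not in STOPWORDS)
    return " ".join(term for term, _ in counts.most_common(n_terms))

transcript = ("so i think we should revise the budget spreadsheet and "
              "send the budget figures to finance").split()
print(segment_query(transcript))
```

In practice a tf-idf style weighting against general speech would pick better terms than raw frequency, but the pipeline shape (ASR, topic segmentation, automatic query, desktop search) matches the abstract's description.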


International Conference on Machine Learning | 2008

Extrinsic Summarization Evaluation: A Decision Audit Task

Gabriel Murray; Thomas Kleinbauer; Peter Poller; Steve Renals; Jonathan Kilgour; Tilman Becker

In this work we describe a large-scale extrinsic evaluation of automatic speech summarization technologies for meeting speech. The particular task is a decision audit, wherein a user must satisfy a complex information need, navigating several meetings in order to gain an understanding of how and why a given decision was made. We compare the usefulness of extractive and abstractive technologies in satisfying this information need, and assess the impact of automatic speech recognition (ASR) errors on user performance. We employ several evaluation methods for participant performance, including post-questionnaire data, human subjective and objective judgments, and an analysis of participant browsing behaviour.


International Conference on Multimodal Interfaces | 2009

A multimedia retrieval system using speech input

Andrei Popescu-Belis; Peter Poller; Jonathan Kilgour

The AMIDA Automatic Content Linking Device (ACLD) monitors a conversation using automatic speech recognition (ASR), and uses the detected words to retrieve documents that are of potential use to the participants in the conversation. The document set that is available includes project related documents such as reports, memos or emails, as well as snippets of past meetings that were transcribed using offline ASR. In addition, results of Web searches are also displayed. Several visualisation interfaces are available.


Health Informatics Journal | 2016

Designing a Spoken Dialogue Interface to an Intelligent Cognitive Assistant for People with Dementia

Maria Wolters; Fiona Kelly; Jonathan Kilgour

Intelligent cognitive assistants support people who need help performing everyday tasks by detecting when problems occur and providing tailored and context-sensitive assistance. Spoken dialogue interfaces allow users to interact with intelligent cognitive assistants while focusing on the task at hand. In order to establish requirements for voice interfaces to intelligent cognitive assistants, we conducted three focus groups with people with dementia, carers, and older people without a diagnosis of dementia. Analysis of the focus group data showed that voice and interaction style should be chosen based on the preferences of the user, not those of the carer. For people with dementia, the intelligent cognitive assistant should act like a patient, encouraging guide, while for older people without dementia, assistance should be to the point and not patronising. The intelligent cognitive assistant should be able to adapt to cognitive decline.

Collaboration


Dive into Jonathan Kilgour's collaboration network.

Top Co-Authors

Steve Renals (University of Edinburgh)

Gabriel Murray (University of the Fraser Valley)

Stefan Evert (University of Erlangen-Nuremberg)