Graham Wilcock
University of Helsinki
Publications
Featured research published by Graham Wilcock.
Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2002
Kristiina Jokinen; Antti Kerminen; Tommi Lagus; Jukka Kuusisto; Graham Wilcock; Markku Turunen; Jaakko Hakulinen; Krista Jauhiainen
Technological development has made computer interaction more common and also commercially feasible, and the number of interactive systems has grown rapidly. At the same time, the systems should be able to adapt to various situations and various users, so as to provide the most efficient and helpful mode of interaction. The aim of the Interact project is to explore natural human-computer interaction and to develop dialogue models which will allow users to interact with the computer in a natural and robust way. The paper describes the innovative goals of the project and presents ways that the Interact system supports adaptivity on different system design and interaction management levels.
Natural Interaction with Robots, Knowbots and Smartphones, Putting Spoken Dialog Systems into Practice | 2014
Kristiina Jokinen; Graham Wilcock
In this paper we discuss the design of human-robot interaction, focussing especially on social robot communication and multimodal information presentation. As a starting point we use the WikiTalk application, an open-domain conversational system which has been previously developed using a robotics simulator. We describe how it can be implemented on the Nao robot platform, enabling Nao to make informative spoken contributions on a wide range of topics during conversation. Spoken interaction is further combined with gesturing in order to support Nao’s presentation with natural multimodal capabilities, and to enhance and explore natural communication between human users and robots.
Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2001
Kristiina Jokinen; Graham Wilcock
The paper addresses the issue of how to increase adaptivity in response generation for a spoken dialogue system. Realization strategies for dialogue responses depend on communicative confidence levels and interaction management goals. We first describe a Java/XML-based generator which produces different realizations of system responses based on agendas specified by the dialogue manager. We then discuss how greater adaptivity can be achieved by using a set of distinct generator agents, each of which is specialized in its realization strategy (e.g. highly elliptical or highly explicit). This allows a simpler design of each generator agent, while increasing the overall system adaptivity to meet the requirements for flexible cooperation in incremental and immediate interactive situations.
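The idea of distinct generator agents can be sketched as follows. This is a minimal illustration, not the paper's Java/XML implementation: the agent names, the agenda fields, and the confidence threshold are all invented here.

```python
# Hypothetical sketch: a dialogue manager selects among specialized
# generator agents according to the communicative confidence recorded
# in its agenda. Each agent has a simple, single realization strategy.

class EllipticalAgent:
    """Realizes only the new information: terse, highly elliptical."""
    def realize(self, agenda):
        return agenda["new_info"]

class ExplicitAgent:
    """Realizes a full sentence: highly explicit."""
    def realize(self, agenda):
        return f"The {agenda['topic']} is {agenda['new_info']}."

def generate(agenda, agents):
    # Low confidence in mutual understanding -> be explicit;
    # high confidence -> an elliptical fragment suffices.
    key = "explicit" if agenda["confidence"] < 0.5 else "elliptical"
    return agents[key].realize(agenda)

agents = {"elliptical": EllipticalAgent(), "explicit": ExplicitAgent()}
print(generate({"topic": "departure time", "new_info": "8:15",
                "confidence": 0.9}, agents))  # "8:15"
print(generate({"topic": "departure time", "new_info": "8:15",
                "confidence": 0.3}, agents))  # "The departure time is 8:15."
```

Keeping each agent's strategy fixed keeps each agent simple, while the choice between agents carries the adaptivity.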
Conference of the European Chapter of the Association for Computational Linguistics | 2003
Graham Wilcock
The paper describes a software demo integrating Natural Language Generation (NLG) techniques with recent developments in XML web technology. The NLG techniques include a form of template-based generation, transformation of text plan trees to text specification trees, and a multi-stage pipeline architecture. The web technology includes XSLT transformation processors, an XML database, a Java servlet engine, the Cocoon web publishing framework and a Java speech synthesizer. The software is all free, open-source.
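The multi-stage pipeline idea can be sketched in miniature. This is an illustrative stand-in for the XSLT stages described above, not the demo's actual code; the stage functions, the template, and the plan structure are invented for illustration.

```python
# Sketch of a multi-stage NLG pipeline: a text plan (a list of message
# nodes here, standing in for a tree) passes through successive
# transformation stages until a surface string emerges.

def plan_to_spec(plan):
    # Template-based stage: map each abstract message node to a
    # template-filled sentence specification.
    templates = {"identify": "{subject} is {value}."}
    return [templates[node["msg"]].format(**node["args"]) for node in plan]

def spec_to_text(spec):
    # Linearization stage: join the specification into output text.
    return " ".join(spec)

def pipeline(data, stages):
    # Run the stages in order, each consuming the previous output,
    # analogous to chained XSLT transformations.
    for stage in stages:
        data = stage(data)
    return data

plan = [{"msg": "identify",
         "args": {"subject": "The capital", "value": "Helsinki"}}]
print(pipeline(plan, [plan_to_spec, spec_to_text]))
# "The capital is Helsinki."
```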
Archive | 2011
Graham Wilcock; Kristiina Jokinen
We present a demo showing different levels of emergent verbal behaviour that arise when speech is added to a robotics simulator. After showing examples of (silent) robot activities in the simulator, adding speech output enables the robot to give spoken explanations of its behaviour. Adding speech input allows the robot's movements to be guided by voice commands. In addition, the robot can modify its own verbal behaviour when asked to talk less or more. The robotics toolkit supports different behavioural paradigms, including finite state machines. The demo shows an example state-transition-based spoken dialogue system implemented within the robotics framework. Other more experimental combinations of speech and robot behaviours will also be shown.
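A state-transition-based dialogue system of the kind mentioned can be sketched as a plain finite state machine. The states and voice commands below are invented for illustration; the actual demo's states are not described in the abstract.

```python
# Minimal finite-state-machine sketch of a spoken dialogue controller:
# (state, recognized command) pairs map to the next state.

TRANSITIONS = {
    ("idle", "hello"): "greeting",
    ("greeting", "move"): "moving",
    ("moving", "stop"): "idle",
    ("moving", "talk less"): "quiet",
}

def step(state, command):
    # Unrecognized commands leave the dialogue state unchanged.
    return TRANSITIONS.get((state, command), state)

state = "idle"
for cmd in ["hello", "move", "stop"]:
    state = step(state, cmd)
print(state)  # "idle"
```

The appeal of this paradigm in a robotics framework is that dialogue states and robot behaviour states can live in the same transition machinery.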
Meeting of the Association for Computational Linguistics | 1998
Graham Wilcock
As HPSG is head-driven, with clear semantic heads, semantic head-driven generation should be simple. We adapt van Noord's Prolog generator for use with an HPSG grammar in ProFIT. However, quantifiers and context factors are difficult to include in head-driven generation. We must adopt recent theoretical proposals for lexicalized scoping and context. With these revisions, head-driven generation with HPSG is not so simple, but it is possible.
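The core recursion of semantic head-driven generation can be sketched in a toy form (in Python rather than Prolog, and without the quantifier and context complications the paper addresses). The tiny lexicon and semantics format are invented for illustration.

```python
# Toy sketch of head-driven generation: the word realizing the top
# predicate of the semantics is the head; its semantic arguments are
# generated recursively, then combined with the head.

LEXICON = {
    "sleep": "sleeps",   # predicate -> head word
    "mary": "Mary",
}

def generate(sem):
    # sem is a (predicate, argument_list) pair.
    pred, args = sem
    head = LEXICON[pred]                 # realize the semantic head
    parts = [generate(a) for a in args]  # realize its arguments
    return " ".join(parts + [head])

print(generate(("sleep", [("mary", [])])))  # "Mary sleeps"
```

What the paper shows is that once quantifier scope and contextual factors are lexicalized, even this simple head-first recursion needs substantial revision.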
Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2015
Graham Wilcock; Kristiina Jokinen
At SIGDIAL-2013 our talking robot demonstrated Wikipedia-based spoken information access in English. Our new demo shows a robot speaking different languages, getting content from different language Wikipedias, and switching languages to meet the linguistic capabilities of different dialogue partners.
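The language-switching idea can be sketched simply: pick the Wikipedia language edition that matches the partner's linguistic capabilities. The language inventory below is an invented stand-in, not the demo's actual configuration.

```python
# Hypothetical sketch: choose a content source to match the languages
# the current dialogue partner can speak.

SUPPORTED = {
    "en": "English Wikipedia",
    "fi": "Finnish Wikipedia",
    "ja": "Japanese Wikipedia",
}

def choose_source(partner_langs, default="en"):
    # Use the first of the partner's languages that the system supports;
    # otherwise fall back to the default edition.
    for lang in partner_langs:
        if lang in SUPPORTED:
            return SUPPORTED[lang]
    return SUPPORTED[default]

print(choose_source(["fi", "en"]))  # "Finnish Wikipedia"
print(choose_source(["sv"]))        # "English Wikipedia"
```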
Meeting of the Association for Computational Linguistics | 2007
Graham Wilcock
The paper presents an OWL ontology for HPSG. The HPSG ontology is integrated with an existing OWL ontology, GOLD, as a community of practice extension. The basic ideas are illustrated by visualizations of type hierarchies for parts of speech.
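A type hierarchy for parts of speech, of the kind the paper visualizes, amounts to a subsumption relation over named types. The sketch below uses an invented fragment of a hierarchy (not the actual GOLD or HPSG ontology content) and a generic walk-up-the-hierarchy check rather than OWL reasoning.

```python
# Sketch of a part-of-speech type hierarchy as a child -> parent map,
# with a simple subsumption test by walking up to the root.

SUPERTYPE = {
    "noun": "substantive",
    "verb": "substantive",
    "substantive": "part-of-speech",
}

def is_a(t, ancestor):
    # True if `ancestor` subsumes type `t` in the hierarchy.
    while t is not None:
        if t == ancestor:
            return True
        t = SUPERTYPE.get(t)
    return False

print(is_a("noun", "part-of-speech"))  # True
print(is_a("part-of-speech", "noun"))  # False
```

In the OWL encoding, the same relation would be expressed with `rdfs:subClassOf` axioms, letting a standard reasoner perform the subsumption check.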
International Workshop on Spoken Dialogue Systems | 2017
Graham Wilcock; Niklas Laxström; Juho Leinonen; Peter Smit; Mikko Kurimo; Kristiina Jokinen
We describe our work towards developing SamiTalk, a robot application for the North Sami language. With SamiTalk, users will hold spoken dialogues with a humanoid robot that speaks and recognizes North Sami. The robot will access information from the Sami Wikipedia, talk about requested topics using the Wikipedia texts, and make smooth topic shifts to related topics using the Wikipedia links. SamiTalk will be based on the existing WikiTalk system for Wikipedia-based spoken dialogues, with newly developed speech components for North Sami.
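The link-based topic-shift mechanism common to the WikiTalk family can be sketched as follows. The article texts and links below are an invented stand-in for real Wikipedia content, and the function names are illustrative only.

```python
# Hedged sketch of WikiTalk-style dialogue flow: the system talks about
# the current article and treats the article's links as the smooth
# next-topic candidates.

ARTICLES = {
    "Sami languages": {
        "text": "Sami languages are spoken across northern Fennoscandia.",
        "links": ["Fennoscandia", "Finland"],
    },
    "Fennoscandia": {
        "text": "Fennoscandia comprises the Scandinavian and Kola peninsulas.",
        "links": ["Sami languages"],
    },
}

def talk_about(topic):
    # Return the text to speak and the linked topics available for shifts.
    article = ARTICLES[topic]
    return article["text"], article["links"]

def shift_topic(links, requested):
    # A smooth topic shift is only offered to a linked topic.
    return requested if requested in links else None

text, links = talk_about("Sami languages")
print(shift_topic(links, "Fennoscandia"))  # "Fennoscandia"
```

Because the shift candidates come from the article's own links, each new topic is guaranteed to be related to what was just discussed.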