Publications


Featured research published by Caroline Lyon.


International Journal of Speech Technology | 2004

Speech-Based Real-Time Subtitling Services

Andrew Lambourne; Jill Hewitt; Caroline Lyon; Sandra Warren

Recent advances in technology have led to the availability of powerful speech recognizers at low cost and to the possibility of using speech interaction in a variety of new and exciting practical applications. The purpose of this research was to investigate and develop the use of speech recognition in live television subtitling. This paper describes how the “SpeakTitle” project met the challenges of real-time speech recognition and live subtitling through the development of a customisable speaker interface and the use of ‘Topics’ for specific subject domains. In the prototype system (described in Hewitt et al., 2000; Bateman et al., 2001), output from the speech recognition system (the IBM ViaVoice® engine) is passed into a custom-built editor, from where it can be corrected and passed on to an existing subtitling system. The system was developed to the extent that it was acceptable for the production of subtitles for live television broadcasts, and it has been adopted by three subtitle production facilities in the UK. The evolution of the product and the experiences of users in developing the system in a live subtitling environment are considered, and the system is analysed against industry standards. Ease of use and accuracy are also discussed, and further research areas are identified.


PLOS ONE | 2012

Interactive language learning by robots: The transition from babbling to word forms

Caroline Lyon; Chrystopher L. Nehaniv; Joe Saunders

The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency-dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
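The paper's learning architecture is more elaborate than this, but the core frequency-dependent idea can be caricatured in a few lines. The function names, the salience threshold, and the toy syllable stream below are illustrative assumptions, not taken from the actual system: the robot tallies the syllables it perceives, and syllables whose counts cross a threshold enter its productive repertoire, so consistently pronounced content words gradually displace random babble.

```python
from collections import Counter

def hear(counts, syllables):
    """Tally each perceived syllable; word forms with consistent
    pronunciation accumulate counts faster than variable ones."""
    counts.update(syllables)

def babble(counts, n=5, threshold=3):
    """Produce up to n syllables whose frequency has crossed the
    salience threshold. Early in learning this list is empty and
    the robot would fall back on random syllabic babble."""
    return [s for s, c in counts.most_common(n) if c >= threshold]

# A toy interaction: the salient word form "ball" recurs in the
# teacher's speech among variable one-off syllables.
counts = Counter()
hear(counts, ["ba", "da", "ball", "ga", "ball", "ma", "ball"])
learned = babble(counts)   # only "ball" has crossed the threshold
```

This is only a sketch of the frequency dependence; the reported system also involves real-time reinforcement through the human participant's contingent responses.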


Language Resources and Evaluation | 2007

Copy detection in Chinese documents using Ferret

Jun-Peng Bao; Caroline Lyon; Peter C. R. Lane

The Ferret copy detector has been used since 2001 to find plagiarism in large collections of students’ coursework in English. This article reports on extending its application to Chinese, with experiments on corpora of coursework collected from two Chinese universities. Our experiments show that Ferret can find both artificially constructed plagiarism and actually occurring, previously undetected plagiarism. We discuss issues of representation, focus on the effectiveness of a sub-symbolic approach, and show that Ferret does not need to find word boundaries first.
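The abstract does not spell out the matching method, but the published Ferret approach compares documents by the overlap of their token trigram sets, scored with a Jaccard-style resemblance measure. A minimal sketch of that idea (the example documents are illustrative, not from the paper): for English the tokens are words, while for Chinese the raw character stream can be used directly, which is why no prior word segmentation is needed.

```python
def trigrams(tokens):
    """Return the set of consecutive 3-token windows in a sequence."""
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def resemblance(a, b):
    """Jaccard-style resemblance between two documents' trigram sets:
    |A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    ta, tb = trigrams(a), trigrams(b)
    if not (ta or tb):
        return 0.0
    return len(ta & tb) / len(ta | tb)

# English: trigrams over words.
doc_a = "the cat sat on the mat".split()
doc_b = "the cat sat on a mat".split()
score = resemblance(doc_a, doc_b)

# Chinese (sub-symbolic): trigrams over raw characters, so no word
# boundaries need to be found first, e.g. resemblance(list(text_a),
# list(text_b)) on unsegmented strings.
```

Pairs of documents whose resemblance exceeds an empirically chosen threshold are flagged for human inspection; the threshold itself depends on the corpus.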


IEEE Transactions on Autonomous Mental Development | 2009

What is Needed for a Robot to Acquire Grammar? Some Underlying Primitive Mechanisms for the Synthesis of Linguistic Ability

Caroline Lyon; Yo Sato; Joe Saunders; Chrystopher L. Nehaniv

A robot that can communicate with humans using natural language will have to acquire a grammatical framework. This paper analyses some crucial underlying mechanisms that are needed in the construction of such a framework. The work is inspired by language acquisition in infants, but it also draws on the emergence of language in evolutionary time and in ontogenic (developmental) time. It focuses on issues arising from the use of real language with all its evolutionary baggage, in contrast to an artificial communication system, and describes approaches to addressing these issues. We can deconstruct grammar to derive underlying primitive mechanisms, including serial processing, segmentation, categorization, compositionality, and forward planning. Implementing these mechanisms is a necessary preparatory step towards reconstructing a working syntactic/semantic/pragmatic processor which can handle real language. An overview is given of our own initial experiments in which a robot acquires some basic linguistic capacity via interacting with a human.


Artificial Life | 2009

A constructivist approach to robot language learning via simulated babbling and holophrase extraction

Joe Saunders; Caroline Lyon; Frank Förster; Chrystopher L. Nehaniv; Kerstin Dautenhahn

It is thought that meaning may be grounded in early childhood language learning via the physical and social interaction of the infant with those around him or her, and that the capacity to use words and phrases, and their meanings, is acquired through shared referential ‘inference’ in pragmatic interactions. In order to create appropriate conditions for language learning by a humanoid robot, it would therefore be necessary to expose the robot to similar physical and social contexts. However, in the early stages of language learning, it is estimated that a 2-year-old child can be exposed to as many as 7,000 utterances per day in varied contextual situations. In this paper we report on the issues behind, and the design of, our ongoing and forthcoming experiments aimed at allowing a robot to carry out language learning in a manner analogous to that in early child development, one which effectively ‘short cuts’ holophrase learning. Two approaches are used: (1) simulated babbling through mechanisms which will yield basic word or holophrase structures and (2) a scenario for interaction between a human and the humanoid robot where shared ‘intentional’ referencing and the associations between physical, visual and speech modalities can be experienced by the robot. The output of these experiments, combined to yield word or holophrase structures grounded in the robot's own actions and modalities, would provide scaffolding for further proto-grammatical usage-based learning. This requires interaction with the physical and social environment involving human feedback to bootstrap developing linguistic competencies. These structures would then form the basis for further studies on language acquisition, including the emergence of negation and more complex grammar.


Conference of the European Chapter of the Association for Computational Linguistics | 1995

A fast partial parse of natural language sentences using a connectionist method

Caroline Lyon; Bob Dickerson

The pattern-matching capabilities of neural networks can be used to locate syntactic constituents of natural language. This paper describes a fully automated hybrid system, using neural nets operating within a grammatical framework. It addresses the representation of language for connectionist processing, and describes methods of constraining the problem size. The function of the network is briefly explained, and results are given.


Neural Computing and Applications | 1997

Using single layer networks for discrete, sequential data: an example from natural language processing

Caroline Lyon; Ray J. Frank

Natural Language Processing (NLP) is concerned with processing ordinary, unrestricted text. This work takes a new approach to a traditional NLP task, using neural computing methods. A parser which has been successfully implemented is described. It is a hybrid system, in which neural processors operate within a rule based framework. The neural processing components belong to the class of Generalized Single Layer Networks (GSLN). In general, supervised, feed-forward networks need more than one layer to process data. However, in some cases data can be pre-processed with a non-linear transformation, and then presented in a linearly separable form for subsequent processing by a single layer net. Such networks offer advantages of functional transparency and operational speed. For our parser, the initial stage of processing maps linguistic data onto a higher order representation, which can then be analysed by a single layer network. This transformation is supported by information theoretic analysis. Three different algorithms for the neural component were investigated. Single layer nets can be trained by finding weight adjustments based on (a) factors proportional to the input, as in the Perceptron, (b) factors proportional to the existing weights, and (c) an error minimization method. In our experiments generalization ability varies little; method (b) is used for a prototype parser. This is available via telnet.
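The GSLN idea in this abstract, a non-linear pre-processing step that lifts the data into a representation a single-layer net can separate, can be sketched in a few lines. This sketch is not the paper's linguistic parser: the conjunctive feature expansion, the XOR stand-in task, and the use of training method (a) (the Perceptron rule, weight adjustments proportional to the input) are illustrative assumptions.

```python
import itertools

def expand(x):
    """Non-linear pre-processing: augment raw binary features with
    pairwise conjunctions plus a bias term, giving a higher-order
    representation that may be linearly separable."""
    pairs = [a & b for a, b in itertools.combinations(x, 2)]
    return x + pairs + [1]

def predict(w, x):
    """Single-layer threshold unit on the expanded features."""
    phi = expand(x)
    return 1 if sum(wi * xi for wi, xi in zip(w, phi)) > 0 else 0

def train_perceptron(data, epochs=20):
    """Method (a): on each error, adjust weights in proportion to
    the (expanded) input, as in the classic Perceptron."""
    w = [0.0] * len(expand(data[0][0]))
    for _ in range(epochs):
        for x, y in data:
            error = y - predict(w, x)
            if error:
                w = [wi + error * xi for wi, xi in zip(w, expand(x))]
    return w

# XOR is not linearly separable in the raw inputs, but becomes so
# after the conjunctive expansion, so a single layer suffices.
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
w = train_perceptron(xor)
```

The point of the sketch is the division of labour the paper describes: the non-linearity lives in the fixed pre-processing, so the trainable part remains a fast, transparent single layer.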


Artificial Life | 2013

Interaction and experience in enactive intelligence and humanoid robotics

Chrystopher L. Nehaniv; Frank Förster; Joe Saunders; Frank Broz; Elena Antonova; Hatice Kose; Caroline Lyon; Hagen Lehmann; Yo Sato; Kerstin Dautenhahn

We overview how sensorimotor experience can be operationalized for interaction scenarios in which humanoid robots acquire skills and linguistic behaviours via enacting a “form-of-life” in interaction games (following Wittgenstein) with humans. The enactive paradigm is introduced, which provides a powerful framework for the construction of complex adaptive systems, based on interaction, habit, and experience. Enactive cognitive architectures (following insights of Varela, Thompson and Rosch) that we have developed support social learning and robot ontogeny by harnessing information-theoretic methods and raw uninterpreted sensorimotor experience to scaffold the acquisition of behaviours. The success criterion here is validation by the robot engaging in ongoing human-robot interaction with naive participants who, over the course of iterated interactions, shape the robot's behavioural and linguistic development. Engagement in such interaction exhibiting aspects of purposeful, habitual recurring structure evidences the developed capability of the humanoid to enact language and interaction games as a successful participant.


Topics in Cognitive Science | 2014

The ITALK project: A developmental robotics approach to the study of individual, social, and linguistic learning

Frank Broz; Chrystopher L. Nehaniv; Tony Belpaeme; Ambra Bisio; Kerstin Dautenhahn; Luciano Fadiga; Tomassino Ferrauto; Kerstin Fischer; Frank Förster; Onofrio Gigliotta; Sascha S. Griffiths; Hagen Lehmann; Katrin Solveig Lohan; Caroline Lyon; Davide Marocco; Gianluca Massera; Giorgio Metta; Vishwanathan Mohan; Anthony F. Morse; Stefano Nolfi; Francesco Nori; Martin Peniak; Karola Pitsch; Katharina J. Rohlfing; Gerhard Sagerer; Yo Sato; Joe Saunders; Lars Schillingmann; Alessandra Sciutti; Vadim Tikhanoff

This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.


Archive | 2014

Beyond Vision: Extending the Scope of a Sensorimotor Account of Perception

Caroline Lyon

We examine the scope of some sensorimotor accounts of perception, and their application in developmental robotics. Current interest in sensorimotor theories, and the enactive paradigm, was stimulated by the seminal book The Embodied Mind by Varela, Thompson and Rosch (1991). However, both in this initial book and subsequently, there has been much work on visual perception and less attention to other perceptual modalities. We suggest that the insights gained from an exploration of the visual domain need supplementing, and in some respects qualifying: some significant characteristics of vision do not hold for audition, in particular for the perception of speech. This leads into a discussion of the importance of integrating different perceptual modes, with particular reference to robots and human-robot interaction. We examine the effect of including audition in accounts of perception, and suggest that it makes sense to avoid the unnecessary straitjacket of a model based primarily on vision and touch alone. The sensorimotor approach can be extended to other perceptual modes.

Collaboration


Dive into Caroline Lyon's collaborations.

Top Co-Authors

Joe Saunders (University of Hertfordshire)
Bob Dickerson (University of Hertfordshire)
James A. Malcolm (University of Hertfordshire)
Ruth Barrett (University of Hertfordshire)
Yo Sato (University of Hertfordshire)
Frank Förster (University of Hertfordshire)
Kerstin Dautenhahn (University of Hertfordshire)
Jill Hewitt (University of Hertfordshire)