Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Theocharis Kyriacou is active.

Publication


Featured research published by Theocharis Kyriacou.


Robotics and Autonomous Systems | 2002

Mobile robot programming using natural language

Stanislao Lauria; Guido Bugmann; Theocharis Kyriacou; Ewan Klein

How will naive users program domestic robots? This paper describes the design of a practical system that uses natural language to teach a vision-based robot how to navigate in a miniature town. To enable unconstrained speech, the robot is provided with a set of primitive procedures derived from a corpus of route instructions. When the user refers to a route that is not known to the robot, the system will learn it by combining primitives as instructed by the user. This paper describes the components of the Instruction-Based Learning architecture and discusses issues of knowledge representation, the selection of primitives and the conversion of natural language into robot-understandable procedures.
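
As a rough illustration of the primitive-composition idea described above, the hypothetical Python sketch below stores a newly taught route as a sequence of known primitives and can reuse previously learned routes. The primitive names and robot interface are invented for illustration; they are not taken from the IBL system.

```python
# Hypothetical sketch: unknown routes are learned as compositions of
# known primitive procedures, as instructed by the user.

def turn_left(trace):
    trace.append("turn_left")

def turn_right(trace):
    trace.append("turn_right")

def follow_road(trace):
    trace.append("follow_road")

PRIMITIVES = {
    "turn left": turn_left,
    "turn right": turn_right,
    "go straight": follow_road,
}

LEARNED_ROUTES = {}  # route name -> list of primitive callables

def teach_route(name, instruction_chunks):
    """Compose a new route from user-instructed chunks of known actions."""
    steps = []
    for chunk in instruction_chunks:
        if chunk in PRIMITIVES:
            steps.append(PRIMITIVES[chunk])
        elif chunk in LEARNED_ROUTES:
            steps.extend(LEARNED_ROUTES[chunk])  # reuse a learned route
        else:
            # In IBL this would trigger a clarification dialogue instead.
            raise KeyError(f"unknown action, ask the user to explain: {chunk}")
    LEARNED_ROUTES[name] = steps

def execute_route(name, trace):
    for step in LEARNED_ROUTES[name]:
        step(trace)

trace = []
teach_route("to the post office", ["go straight", "turn left", "go straight"])
execute_route("to the post office", trace)
print(trace)  # ['follow_road', 'turn_left', 'follow_road']
```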


IEEE Intelligent Systems | 2001

Training personal robots using natural language instruction

Stanislao Lauria; Guido Bugmann; Theocharis Kyriacou; Johan Bos; Ewan Klein

As domestic robots become pervasive, uninitiated users will need a way to instruct them to adapt to their particular needs. The authors are designing a practical system that uses natural language to instruct a vision-based robot.


Robotics and Autonomous Systems | 2005

Vision-based urban navigation procedures for verbally instructed robots

Theocharis Kyriacou; Guido Bugmann; Stanislao Lauria

When humans explain a task to be executed by a robot, they decompose it into chunks of actions. These form a chain of search-and-act sensory-motor loops that exit when a condition is met. In this paper we investigate the nature of these chunks in an urban visual navigation context, and propose a method for implementing the corresponding robot primitives such as “take the nth turn right/left”. These primitives make use of a “short-lived” internal map updated as the robot moves along. The recognition and localisation of intersections are done in the map using task-guided template matching. This approach takes advantage of the content of human instructions to save computation time and improve robustness.
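
The sketch below illustrates the general idea of template matching on a small local occupancy grid. The grid, the junction template and the scoring are assumptions for illustration, not the paper's implementation; "task-guided" means that only the template named by the instruction (here, a left turn) needs to be searched for.

```python
# Hypothetical sketch: find a left-turn junction in a toy local map by
# sliding a template over the grid and keeping the best-scoring position.
import numpy as np

def match_template(local_map, template):
    """Return the (row, col) where the template correlates best with the map."""
    H, W = local_map.shape
    h, w = template.shape
    best, best_pos = -np.inf, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = np.sum(local_map[r:r+h, c:c+w] * template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# 1 = free road cell, 0 = non-road; a road turning off to the left.
left_turn = np.array([[1, 1, 1],
                      [0, 0, 1],
                      [0, 0, 1]])

local_map = np.zeros((8, 8))
local_map[0:8, 5] = 1   # road the robot is travelling along
local_map[2, 0:6] = 1   # side road branching left
pos, score = match_template(local_map, left_turn)
print(pos, score)       # (2, 3) 5.0
```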


Robot and Human Interactive Communication | 2002

Converting natural language route instructions into robot executable procedures

Stanislao Lauria; Guido Bugmann; Theocharis Kyriacou; Johan Bos; Ewan Klein

Humans explaining a task to a robot use chunks of actions that are often complex procedures for robots. An instructable robot needs to be able to map such chunks to existing pre-programmed primitives. We investigate an architecture used in spoken dialogue systems that is able to extract executable robot procedures from user instructions. A suitable representation of route instructions is introduced; then a Procedure Specification Language (PSL) is described that makes it possible to extract, from the semantic representation of the dialogue, both the robot-executable procedures and their parameters.
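
As a loose illustration (not the paper's PSL), the sketch below maps toy semantic frames, of the kind a dialogue system might produce, onto parameterised robot procedures. The frame fields and procedure names are invented for this example.

```python
# Hypothetical sketch: semantic frames -> executable procedure calls.

def take_nth_turn(n, direction):
    return f"take_nth_turn(n={n}, direction='{direction}')"

def follow_road(until):
    return f"follow_road(until='{until}')"

def to_procedure(frame):
    """Map one semantic frame from the dialogue to a robot procedure."""
    action = frame["action"]
    if action == "turn":
        return take_nth_turn(frame.get("ordinal", 1), frame["direction"])
    if action == "follow":
        return follow_road(frame.get("until", "next_junction"))
    raise ValueError(f"no primitive for action: {action}")

# "take the second turn on the left, then follow the road to the end"
frames = [{"action": "turn", "ordinal": 2, "direction": "left"},
          {"action": "follow", "until": "end_of_road"}]
print([to_procedure(f) for f in frames])
```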


Intelligent Robots and Systems | 2007

Robot programming by demonstration through system identification

Ulrich Nehmzow; Otar Akanyeti; Christoph Weinrich; Theocharis Kyriacou; Stephen A. Billings

Increasingly, personalised robots (robots especially designed and programmed for an individual's needs and preferences) are being used to support humans in their daily lives, most notably in the area of service robotics. Arguably, the closer the robot is programmed to the individual's needs, the more useful it is, and we believe that giving people the opportunity to program their own robots, rather than programming robots for them, will push robotics research one step further in the personalised robotics field. However, traditional robot programming techniques require specialised technical skills from different disciplines, and it is not reasonable to expect end-users to have these skills. In this paper, we therefore present a new method of obtaining robot control code, programming by demonstration through system identification, which algorithmically and automatically transfers human behaviours into robot control code using transparent, analysable mathematical functions. Besides providing a simple means of generating perception-action mappings, these functions have the additional advantage that they can also be used to form hypotheses and theoretical analyses of robot behaviour. We demonstrate the viability of this approach by teaching a Scitos G5 mobile robot to achieve wall-following and corridor-passing behaviours.
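
The hypothetical sketch below illustrates the core idea: fit a transparent polynomial that maps logged sensor readings to the demonstrated motor command. The demonstration data are synthetic, and NARMAX-style structure selection is replaced here by plain least squares over a fixed set of candidate terms.

```python
# Hypothetical sketch of programming by demonstration through system
# identification, on synthetic wall-following data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstration log: side distance d, front distance f, and
# the demonstrator's turn rate w (e.g. keeping the wall at 0.5 m).
d = rng.uniform(0.2, 1.0, 200)
f = rng.uniform(0.3, 2.0, 200)
w = 1.5 * (d - 0.5) - 0.8 / f + 0.02 * rng.standard_normal(200)

# Candidate polynomial terms: [1, d, f, d^2, d*f, 1/f]
X = np.column_stack([np.ones_like(d), d, f, d**2, d * f, 1.0 / f])
coef, *_ = np.linalg.lstsq(X, w, rcond=None)

# The result is an analysable control law, not a black box:
terms = ["1", "d", "f", "d^2", "d*f", "1/f"]
law = " + ".join(f"{c:+.3f}*{t}" for c, t in zip(coef, terms))
print("w =", law)
```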


Robotics and Autonomous Systems | 2007

Visual task identification and characterization using polynomial models

Otar Akanyeti; Theocharis Kyriacou; Ulrich Nehmzow; Roberto Iglesias; S.A. Billings

Developing robust and reliable control code for autonomous mobile robots is difficult, because the interaction between a physical robot and the environment is highly complex, subject to noise and variation, and therefore partly unpredictable. This means that to date it is not possible to predict robot behaviour based on theoretical models. Instead, current methods to develop robot control code still require a substantial trial-and-error component to the software design process. This paper proposes a method of dealing with these issues by (a) establishing task-achieving sensor-motor couplings through robot training, and (b) representing these couplings through transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour. We demonstrate the viability of this approach by teaching a mobile robot to track a moving football and subsequently modelling this task using the NARMAX system identification technique.
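
For context, the general NARMAX structure is shown below in its standard form from the system-identification literature (not reproduced from this paper): the current output is expressed in terms of lagged outputs, inputs and noise terms, with F typically expanded as a polynomial of those lagged terms.

```latex
y(k) = F\bigl(y(k-1),\dots,y(k-n_y),\;
              u(k-d),\dots,u(k-d-n_u+1),\;
              e(k-1),\dots,e(k-n_e)\bigr) + e(k)
```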


Journal of Electromyography and Kinesiology | 2016

Feasibility of using combined EMG and kinematic signals for prosthesis control: A simulation study using a virtual reality environment.

Dimitra Blana; Theocharis Kyriacou; Joris M. Lambrecht; E.K.J. Chadwick

Transhumeral amputation has a significant effect on a person’s independence and quality of life. Myoelectric prostheses have the potential to restore upper limb function; however, their use is currently limited due to the lack of intuitive and natural control of multiple degrees of freedom. The goal of this study was to evaluate a novel transhumeral prosthesis controller that uses a combination of kinematic and electromyographic (EMG) signals recorded from the person’s proximal humerus. Specifically, we trained a time-delayed artificial neural network to predict elbow flexion/extension and forearm pronation/supination from six proximal EMG signals, and humeral angular velocity and linear acceleration. We evaluated this scheme with ten able-bodied subjects offline, as well as in a target-reaching task presented in an immersive virtual reality environment. The offline training had an error target of 4° for flexion/extension and 8° for pronation/supination, which it comfortably met (2.7° and 5.5°, respectively). During online testing, all subjects completed the target-reaching task with a path efficiency of 78% and minimal overshoot (1.5%). Thus, combining kinematic and muscle activity signals from the proximal humerus can provide adequate prosthesis control, and testing in a virtual reality environment can provide meaningful data on controller performance.
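
The sketch below illustrates only the time-delay embedding such a controller relies on: delayed samples of the input channels are stacked into one feature vector per time step. The paper trains a time-delayed neural network on features of this kind; for brevity, this sketch fits a linear map to synthetic signals instead.

```python
# Hypothetical sketch of time-delay embedding for regression from
# synthetic stand-ins for 6 EMG channels plus 2 kinematic channels.
import numpy as np

rng = np.random.default_rng(1)
T, n_inputs, delays = 500, 8, 5          # 8 channels, 5 delay taps

U = rng.standard_normal((T, n_inputs))   # synthetic input signals
# Synthetic targets standing in for elbow flex/ext and pron/sup angles:
Y = np.column_stack([U[:, 0] + 0.5 * U[:, 6], U[:, 1] - 0.3 * U[:, 7]])

def delay_embed(U, delays):
    """Rows of [u(t), u(t-1), ..., u(t-delays+1)] for t >= delays-1."""
    return np.column_stack([U[delays - 1 - k : len(U) - k]
                            for k in range(delays)])

X = delay_embed(U, delays)               # shape (T-delays+1, n_inputs*delays)
Yt = Y[delays - 1:]
W, *_ = np.linalg.lstsq(X, Yt, rcond=None)
rmse = np.sqrt(np.mean((X @ W - Yt) ** 2, axis=0))
print("RMS error per predicted joint angle:", rmse)
```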


Robotics and Autonomous Systems | 2008

Accurate robot simulation through system identification

Theocharis Kyriacou; Ulrich Nehmzow; Roberto Iglesias; Stephen A. Billings

Robot simulators are useful tools for developing robot behaviour. They provide a fast and efficient means for testing robot control code at the convenience of the office desk. In all but the simplest cases, though, due to the complexities of the physical systems modelled in the simulator, there are considerable differences between the behaviour of the robot in the simulator and that in the real-world environment. In this paper we present a novel method to create a robot simulator using real sensor data. Logged sensor data are used to construct a mathematically explicit model (in the form of a NARMAX polynomial) of the robot's environment. The advantage of such a transparent model, in contrast to opaque modelling methods such as artificial neural networks, is that it can be analysed to characterise the modelled system using established mathematical methods. In this paper we compare the behaviour of the robot running a particular task in both the simulator and the real world, using qualitative and quantitative measures, including statistical methods, to investigate the faithfulness of the simulator.
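
A minimal sketch of the simulation loop this enables, assuming a made-up one-step polynomial predictor in place of the paper's identified NARMAX model:

```python
# Hypothetical sketch: an identified environment model replaces the
# world in a closed-loop simulation with the robot's controller.

def identified_model(sensor, v, w):
    """Made-up one-step predictor: next range reading from the current
    reading and the motor commands (linear v, angular w)."""
    return 0.9 * sensor - 0.1 * v + 0.05 * w * sensor + 0.02

def simulate(controller, sensor0, steps=50):
    """Run controller and identified model in a closed loop."""
    sensor, log = sensor0, []
    for _ in range(steps):
        v, w = controller(sensor)
        sensor = identified_model(sensor, v, w)
        log.append((sensor, v, w))
    return log

# A trivial controller: slow down and turn away as the reading shrinks.
controller = lambda s: (min(0.3, s), 0.5 * (0.5 - s))
trace = simulate(controller, sensor0=1.0)
print(trace[-1])
```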


Robotics and Autonomous Systems | 2006

Robot learning through task identification

Ulrich Nehmzow; Roberto Iglesias; Theocharis Kyriacou; Stephen A. Billings

The operation of an autonomous mobile robot in a semi-structured environment is a complex, usually non-linear and partly unpredictable process. Lacking a theory of robot–environment interaction that allows the design of robot control code based on theoretical analysis, roboticists still have to resort to trial-and-error methods in mobile robotics. The RobotMODIC project aims to develop a theoretical understanding of a robot’s interaction with its environment, and uses system identification techniques to identify the robot–task–environment system. In this paper, we present two practical examples of the RobotMODIC process: mobile robot self-localisation and mobile robot training to achieve door traversal. In both examples, a transparent mathematical function is obtained that maps inputs (sensory perception in both cases) to outputs (location and steering velocity, respectively). Analysis of the obtained models reveals further information about the way in which a task is achieved, the relevance of individual sensors, possible ways of obtaining more parsimonious models, etc.
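
The toy sketch below illustrates the analysis step mentioned at the end: fit a transparent model on synthetic "sensor" data, then read sensor relevance off the coefficients and prune negligible terms for a more parsimonious model. The data and the threshold are assumptions.

```python
# Hypothetical sketch: coefficient inspection on a fitted transparent model.
import numpy as np

rng = np.random.default_rng(2)
n = 300
s1, s2, s3 = rng.standard_normal((3, n))   # three candidate sensors
y = 0.9 * s1 - 0.4 * s2 + 0.01 * rng.standard_normal(n)  # s3 is irrelevant

X = np.column_stack([s1, s2, s3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["s1", "s2", "s3"], coef):
    verdict = "(prune)" if abs(c) < 0.05 else "(keep)"
    print(f"{name}: {c:+.3f} {verdict}")
```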


conference towards autonomous robotic systems | 2011

An implementation of a biologically inspired model of head direction cells on a robot

Theocharis Kyriacou

A biologically inspired model of head direction cells is presented and tested on a small mobile robot. Head direction cells (discovered in the brain of rats in 1984) encode the head orientation of their host irrespective of the host's location in the environment. The head direction system thus acts as a biological compass (though not a magnetic one) for its host. Head direction cells are influenced in different ways by idiothetic (host-centred) and allothetic (not host-centred) cues. The model presented here uses visual, vestibular and kinesthetic inputs that are simulated by robot sensors. Three test cases are presented that cover different state combinations of the inputs. The test results are compared with biological observations from previous literature.
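
A toy sketch in the spirit of such a model (far simpler than the one in the paper): a ring of cells represents heading, idiothetic input moves the activity bump by path integration, and occasional allothetic visual fixes correct the accumulated drift. All dynamics, gains and cue timings below are illustrative assumptions.

```python
# Hypothetical sketch of a head-direction cell ring with idiothetic
# path integration and allothetic (visual) drift correction.
import numpy as np

N = 36                                       # one cell per 10 degrees
prefs = np.deg2rad(np.arange(N) * 10.0)      # preferred firing directions

def bump(theta, width=0.4):
    """Activity profile of the ring, centred on heading theta."""
    d = np.angle(np.exp(1j * (prefs - theta)))   # wrapped differences
    return np.exp(-d**2 / (2 * width**2))

def decode(activity):
    """Population-vector estimate of the represented heading."""
    return np.angle(np.sum(activity * np.exp(1j * prefs)))

theta_true = theta_est = 0.0
for step in range(200):
    omega = 0.04 + 0.01 * np.sin(step / 20.0)    # "vestibular" turn rate
    theta_true += omega
    theta_est += omega * 1.02                    # idiothetic integration drifts
    if step % 50 == 49:                          # occasional visual landmark fix
        err = np.angle(np.exp(1j * (theta_true - theta_est)))
        theta_est += 0.8 * err                   # allothetic correction

print("true heading (deg):   ", np.rad2deg(theta_true) % 360)
print("decoded heading (deg):", np.rad2deg(decode(bump(theta_est))) % 360)
```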

Collaboration


Dive into Theocharis Kyriacou's collaborations.

Top Co-Authors

Roberto Iglesias, University of Santiago de Compostela
Guido Bugmann, University of Plymouth
Ewan Klein, University of Edinburgh
Louis Major, University of Cambridge
Johan Bos, University of Groningen