Publication


Featured research published by Xingkun Liu.


Applied Physics Letters | 2001

Nonlinear optical properties of chalcogenide glasses: Observation of multiphoton absorption

K.S. Bindra; Henry T. Bookey; Ajoy K. Kar; Brian S. Wherrett; Xingkun Liu; Animesh Jha

We report the observation of four- and five-photon absorption in chalcogenide glasses at telecommunication wavelengths. The nonlinear refractive index is sufficiently large that the optical switching criterion is satisfied.
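
For reference, the multiphoton absorption and the switching criterion mentioned in this abstract are conventionally written as follows. This is a textbook formulation from the nonlinear-optics literature, not an excerpt from the paper itself; the coefficients and the two-photon figure of merit are standard notation.

```latex
% Intensity-dependent absorption with multiphoton terms
% (\beta_2, \beta_3: two- and three-photon absorption coefficients):
\[
  \alpha(I) = \alpha_0 + \beta_2 I + \beta_3 I^2 + \cdots
\]
% A widely used figure of merit for all-optical Kerr switching:
% the nonlinear phase shift must dominate two-photon loss,
\[
  T = \frac{2 \beta_2 \lambda}{n_2} < 1 .
\]
```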


Current Opinion in Solid State & Materials Science | 2001

Inorganic glasses as Kerr-like media

Animesh Jha; Xingkun Liu; Ajoy K. Kar; Henry T. Bookey

High-refractive-index chalcogenide and heavy-metal oxide glasses have recently been developed for application as all-optical switching devices in telecommunication networks. Depending upon the nature of the electromagnetic phenomena involved, the switching speed may vary from milliseconds, in a rare-earth-doped glass, to femtoseconds, for surface plasmon relaxation in a nanoscale dispersion of metal particles in glass.


Spoken Language Technology Workshop | 2014

Training a statistical surface realiser from automatic slot labelling

Heriberto Cuayáhuitl; Nina Dethlefs; Helen Hastie; Xingkun Liu

Training a statistical surface realiser typically relies on labelled training data or parallel data sets, such as corpora of paraphrases. The procedure for obtaining such data for new domains is not only time-consuming, but it also restricts the incorporation of new semantic slots during an interaction, i.e. in an online learning scenario for automatically extended domains. Here, we present an alternative approach to statistical surface realisation from unlabelled data through automatic semantic slot labelling. The essence of our algorithm is to cluster clauses based on a similarity function that combines lexical and semantic information. Annotations need to be reliable enough to be utilised within a spoken dialogue system. We compare different similarity functions and evaluate our surface realiser, trained from unlabelled data, in a human rating study. Results confirm that a surface realiser trained from automatic slot labels can produce outputs of comparable quality to those trained from human-labelled inputs.
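
The clustering step at the heart of the algorithm can be sketched as follows. This is a hypothetical illustration: the Jaccard lexical overlap, the 0.5 weighting, the greedy single-link merging and the `semantic_sim` callback are all assumptions for the example, not the paper's actual similarity functions.

```python
# Hypothetical sketch: cluster clauses by a similarity function that
# blends lexical and semantic information, as the abstract describes.
from itertools import combinations

def lexical_sim(a: set, b: set) -> float:
    """Jaccard overlap between the token sets of two clauses."""
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_sim(clause_a: str, clause_b: str,
                 semantic_sim, weight: float = 0.5) -> float:
    """Blend lexical overlap with a caller-supplied semantic score."""
    toks_a, toks_b = set(clause_a.split()), set(clause_b.split())
    return (weight * lexical_sim(toks_a, toks_b)
            + (1 - weight) * semantic_sim(clause_a, clause_b))

def cluster(clauses, semantic_sim, threshold: float = 0.6):
    """Greedy single-link clustering: repeatedly merge any two
    clusters that contain a clause pair above the threshold."""
    clusters = [[c] for c in clauses]
    merged = True
    while merged:
        merged = False
        for i, j in combinations(range(len(clusters)), 2):
            if any(combined_sim(a, b, semantic_sim) >= threshold
                   for a in clusters[i] for b in clusters[j]):
                clusters[i] += clusters.pop(j)
                merged = True
                break
    return clusters
```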


International Conference on Multimodal Interfaces | 2017

Trust triggers for multimodal command and control interfaces

Helen Hastie; Xingkun Liu; Pedro Patron

For autonomous systems to be accepted by society and operators, they have to instil the appropriate level of trust. In this paper, we discuss what dimensions constitute trust and examine certain triggers of trust for an autonomous underwater vehicle, comparing a multimodal command and control interface with a language-only reporting system. We conclude that there is a relationship between perceived trust and the clarity of a user's Mental Model, and that this Mental Model is clearer in the multimodal condition than in the language-only one. Regarding trust triggers, we show that a number of triggers, such as anomalous sensor readings, noticeably modify the subjects' perceived trust, but in an appropriate manner, thus illustrating the utility of the interface.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2009

Automatic Generation of Information State Update Dialogue Systems that Dynamically Create VoiceXML, as Demonstrated on the iPhone

Helen Wright Hastie; Xingkun Liu; Oliver Lemon

We demonstrate DUDE (Dialogue and Understanding Development Environment), a prototype development environment that automatically generates dialogue systems from business-user resources and databases. These generated spoken dialogue systems (SDS) are then deployed on an industry-standard VoiceXML platform. Specifically, the deployed system works by dynamically generating context-sensitive VoiceXML pages. The dialogue move of each page is determined in real time by the dialogue manager, which is an Information State Update engine. Firstly, we will demonstrate the development environment, which automatically generates speech recognition grammars for robust interpretation of spontaneous speech, and uses the application database to generate lexical entries and grammar rules. A simple graphical interface allows users (i.e. developers) to easily and quickly create and modify the SDS without the need for expensive service providers. Secondly, we will demonstrate the deployed system, which enables participants to call up and speak to the recently created SDS. We will also show a pre-built application running on the iPhone and Google Android phone for searching for places such as restaurants, hotels and museums.
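
The dynamic page-generation step can be illustrated with a small sketch. The elements used (form, field, prompt, grammar) are standard VoiceXML 2.0; the dialogue-move dictionary and its fields are invented here for illustration and are not DUDE's actual data structures.

```python
# Illustrative sketch: render a minimal VoiceXML page from a dialogue
# move chosen by an Information State Update dialogue manager.
from xml.sax.saxutils import escape, quoteattr

def render_vxml(move: dict) -> str:
    """Turn a dialogue move (slot to fill, prompt text, grammar URI)
    into a minimal VoiceXML 2.0 page."""
    slot, prompt, grammar = move["slot"], move["prompt"], move["grammar_uri"]
    return f"""<?xml version="1.0"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id={quoteattr(slot + "_form")}>
    <field name={quoteattr(slot)}>
      <prompt>{escape(prompt)}</prompt>
      <grammar src={quoteattr(grammar)}/>
    </field>
  </form>
</vxml>"""

# Example: the dialogue manager decides the next move is to ask for
# a cuisine type, and the page is generated on the fly.
print(render_vxml({
    "slot": "cuisine",
    "prompt": "What kind of restaurant are you looking for?",
    "grammar_uri": "grammars/cuisine.grxml",
}))
```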


OCEANS 2017 - Aberdeen | 2017

Talking autonomous vehicles: Automatic AUV mission analysis in natural language

Helen Hastie; Xingkun Liu; Yvan Petillot; Pedro Patron

As AUVs are enabled with greater levels of autonomy, there is a need for them to clearly explain their actions and reasoning, maintaining situation awareness and operator trust. Here, we describe the REGIME system, which automatically generates natural language updates in real time as well as post-mission reports. Specifically, the system takes time-series sensor data and mission logs, together with mission plans, as its input, and generates descriptions of the missions in natural language. These natural language updates can be used in isolation or as an add-on to existing interfaces such as SeeByte's SeeTrack common operator interface. A usability study reflects the high situation awareness induced by the system, as well as interest in future use.
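
As a loose illustration of the pipeline described here (time-series sensor data in, natural language updates out), the following sketch applies simple threshold rules to a sensor stream. The `Reading` fields, thresholds and template wording are all invented for the example; REGIME's actual rules and realisation are not reproduced here.

```python
# Hypothetical sketch: generate natural-language mission updates from
# time-series AUV sensor readings using simple threshold rules.
from dataclasses import dataclass

@dataclass
class Reading:
    t: float          # mission time in seconds
    depth_m: float    # vehicle depth in metres
    battery_pct: float

def describe(readings: list[Reading]) -> list[str]:
    """Emit one natural-language update per notable event."""
    updates = []
    for prev, cur in zip(readings, readings[1:]):
        if cur.depth_m - prev.depth_m > 5.0:
            updates.append(f"At t={cur.t:.0f}s the vehicle descended "
                           f"to {cur.depth_m:.1f} m.")
        if prev.battery_pct >= 20.0 > cur.battery_pct:
            updates.append(f"At t={cur.t:.0f}s battery dropped below "
                           f"20% ({cur.battery_pct:.0f}% remaining).")
    return updates

print("\n".join(describe([
    Reading(0, 2.0, 90), Reading(60, 12.5, 85), Reading(120, 13.0, 18),
])))
```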


International Conference on Multimodal Interfaces | 2016

A demonstration of multimodal debrief generation for AUVs, post-mission and in-mission

Helen Hastie; Xingkun Liu; Pedro Patron

A prototype will be demonstrated that takes activity and sensor data from Autonomous Underwater Vehicles (AUVs) and automatically generates multimodal output in the form of mission reports containing natural language and visual elements. Specifically, the system takes time-series sensor data and mission logs, together with mission plans, as its input, and generates descriptions of the missions in natural language, which would be verbalised by a Text-to-Speech Synthesis (TTS) engine in a multimodal system. In addition, we will demonstrate an in-mission system that provides a stream of real-time updates in natural language, thus improving the operator's situation awareness and increasing trust in the system during missions.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

The PARLANCE mobile application for interactive search in English and Mandarin

Helen Hastie; Marie-Aude Aufaure; Panos Alexopoulos; Hugues Bouchard; Catherine Breslin; Heriberto Cuayáhuitl; Nina Dethlefs; Milica Gasic; James Henderson; Oliver Lemon; Xingkun Liu; Peter Mika; Nesrine Ben Mustapha; Tim Potter; Verena Rieser; Blaise Thomson; Pirros Tsiakoulis; Yves Vanrompay; Boris Villazon-Terrazas; Majid Yazdani; Steve J. Young; Yanchao Yu

We demonstrate a mobile application in English and Mandarin to test and evaluate components of the Parlance dialogue system for interactive search under real-world conditions.


Proceedings of the 2018 International Conference on Multimodal Interaction - ICMI '18 | 2018

Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface

David Robb; Francisco Javier Chiyah Garcia; Atanas Laskov; Xingkun Liu; Pedro Patron; Helen Hastie

Autonomous systems are designed to carry out activities in remote, hazardous environments without the need for operators to micro-manage them. It is, however, essential that operators maintain situation awareness in order to monitor vehicle status and handle unforeseen circumstances that may affect the intended behaviour, such as a change in the environment. We present MIRIAM, a multimodal interface that combines visual indicators of status with a conversational agent component. This multimodal interface offers a fluid and natural way for operators to gain information on vehicle status, faults and mission progress, and to set reminders. We describe the system and an evaluation study providing evidence that such an interactive multimodal interface can assist in maintaining situation awareness for operators of autonomous systems, irrespective of cognitive styles.


Computers, Environment and Urban Systems | 2018

A dialogue based mobile virtual assistant for tourists: The SpaceBook Project

Phil Bartie; William Mackaness; Oliver Lemon; Tiphaine Dalmas; Srinivasan Chandrasekaran Janarthanam; Robin L. Hill; Anna Dickinson; Xingkun Liu

Ubiquitous mobile computing offers innovative approaches to the delivery of information that can facilitate free roaming of the city, informing and guiding the tourist as the city unfolds before them. However, making frequent visual reference to mobile devices can be distracting: the user has to interact via a small screen, disrupting the explorative experience. This research reports on an EU-funded project, SpaceBook, that explored the utility of a hands-free, eyes-free virtual tour guide that could answer questions through a spoken dialogue user interface and notify the user of interesting features in view while guiding the tourist to various destinations. Visibility modelling was carried out in real time based on a LiDAR-sourced digital surface model, fused with a variety of map and crowd-sourced datasets (e.g. Ordnance Survey, OpenStreetMap, Flickr, Foursquare), to establish the most interesting landmarks visible from the user's location at any given moment. A number of variations of the SpaceBook system were trialled in Edinburgh (Scotland). The research highlighted the pleasure derived from this novel form of interaction and revealed the complexity of prioritising route guidance instruction alongside identification, description and embellishment of landmark information: there is a delicate balance between the level of information 'pushed' to the user and the user's requests for further information. Among a number of challenges were issues regarding the fidelity of spatial data and positioning information required for pedestrian-based systems, the pedestrian having much greater freedom of movement than vehicles.
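
The real-time visibility modelling mentioned above can be illustrated with a small line-of-sight test over a raster digital surface model. This is a minimal sketch under simplifying assumptions (uniform grid, straight sight line sampled at fixed intervals, no earth curvature or refraction); it is not the SpaceBook project's actual implementation.

```python
# Minimal line-of-sight sketch over a raster digital surface model
# (DSM): the target is visible if the sight line from the observer
# clears every surface cell it crosses.
import numpy as np

def visible(dsm: np.ndarray, observer: tuple, target: tuple,
            eye_height: float = 1.7, samples: int = 100) -> bool:
    """True if the sight line from observer to target clears the DSM."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dsm[r0, c0] + eye_height   # observer eye level
    z1 = dsm[r1, c1]                # target at surface height
    for s in np.linspace(0.0, 1.0, samples)[1:-1]:
        r = int(round(r0 + s * (r1 - r0)))
        c = int(round(c0 + s * (c1 - c0)))
        if dsm[r, c] > z0 + s * (z1 - z0):
            return False            # surface blocks the view
    return True

dsm = np.zeros((50, 50))
dsm[20:25, 20:25] = 30.0                   # a tall building in the way
print(visible(dsm, (10, 10), (40, 40)))    # False: blocked
print(visible(dsm, (10, 10), (10, 40)))    # True: clear sight line
```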

Collaboration


Dive into Xingkun Liu's collaborations.

Top Co-Authors

Phil Bartie

University of Stirling
