
Publication


Featured research published by Kristiina Jokinen.


Language Resources and Evaluation | 2007

The MUMIN coding scheme for the annotation of feedback, turn management and sequencing phenomena

Jens Allwood; Loredana Cerrato; Kristiina Jokinen; Costanza Navarretta; Patrizia Paggio

This paper deals with a multimodal annotation scheme dedicated to the study of gestures in interpersonal communication, with particular regard to the role played by multimodal expressions for feedback, turn management and sequencing. The scheme has been developed under the framework of the MUMIN network and tested on the analysis of multimodal behaviour in short video clips in Swedish, Finnish and Danish. The preliminary results obtained in these studies show that the reliability of the categories defined in the scheme is acceptable, and that the scheme as a whole constitutes a versatile analysis tool for the study of multimodal communication behaviour.
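As a purely illustrative aside, an annotation record in a MUMIN-style scheme might be modelled as a small data structure like the one below. The attribute names and values here are assumptions for the sketch, not the scheme's actual tag set.

```python
# Hypothetical sketch of a MUMIN-style gesture annotation record.
# Attribute names and values are illustrative, not the published tag set.
from dataclasses import dataclass

@dataclass
class GestureAnnotation:
    start: float          # start time in seconds within the video clip
    end: float            # end time in seconds
    modality: str         # e.g. "head", "face", "hand"
    feedback: str         # feedback function, e.g. "give", "elicit"
    turn_management: str  # e.g. "turn-take", "turn-hold", "turn-yield"
    sequencing: str       # e.g. "opening", "continuation", "closing"

# Example: a nod annotated as giving feedback while holding the turn.
nod = GestureAnnotation(
    start=12.4, end=13.1, modality="head",
    feedback="give", turn_management="turn-hold", sequencing="continuation",
)
```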


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2000

Cooperation, dialogue and ethics

Jens Allwood; David R. Traum; Kristiina Jokinen

This paper describes some of the basic cooperative mechanisms of dialogue. Ideal cooperation is seen as consisting of four features (cognitive consideration, joint purpose, ethical consideration and trust), which can also to some extent be seen as requirements building on each other. Weaker concepts such as “coordination” and “collaboration” have only some of these features or have them to lesser degrees. We point out the central role of ethics and trust in cooperation, and contrast the result with popular AI accounts of collaboration. Dialogue is also seen as associated with social activities, in which certain obligations and rights are connected with particular roles. Dialogue is seen to progress through the written, vocal or gestural contributions made by participants. Each of the contributions has associated with it both expressive and evocative functions, as well as specific obligations for participants. These functions are dependent on the surface form of a contribution, the activity and the local context, for their interpretation. We illustrate the perspective by analysing dialogue extracts from three different activity types (a travel dialogue, a quarrel and a dialogue with a computer system). Finally, we consider what kind of information is shared in dialogue, and the ways in which dialogue participants manifest this sharing to each other through linguistic and other communicative behaviour. The paper concludes with a comparison to other accounts of dialogue and prospects for integration of these ideas within dialogue systems.


International Universal Communication Symposium | 2009

Eye-gaze experiments for conversation monitoring

Kristiina Jokinen; Masafumi Nishida; Seiichi Yamamoto

Eye-tracking technology has recently matured, making unobtrusive and natural user studies easier to conduct. Simultaneously, human-computer interaction has become more conversational in style, and more challenging in that it requires various human conversational strategies, such as giving feedback and managing turn-taking. In this paper, we focus on eye-gaze in order to investigate turn-taking signals and conversation monitoring in naturally occurring dialogues. We seek to build models that deal with the important aspects of which interlocutor the speaker is talking to and what kind of turn-taking signals the partners elicit, and we report the first results of our eye-tracking experiments.


Meeting of the Association for Computational Linguistics | 2004

User Expertise Modeling and Adaptivity in a Speech-Based E-Mail System

Kristiina Jokinen; Kari Kanto

This paper describes the user expertise model in AthosMail, a mobile, speech-based e-mail system. The model encodes the system's assumptions about the user's expertise, and gives recommendations on how the system should respond depending on the assumed competence level of the user. The recommendations are realized as three types of explicitness in the system responses. The system monitors the user's competence with the help of parameters that describe, e.g., the success of the user's interaction with the system. The model consists of an online and an offline version, the former taking care of expertise-level changes during a single session, the latter modelling the overall user expertise as a function of time and repeated interactions.
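The online/offline expertise idea can be sketched as follows. This is not the AthosMail implementation; the update rule, thresholds, and the three explicitness labels below are invented for illustration only.

```python
# Illustrative sketch (not the actual AthosMail code) of an online user
# expertise model that maps an assumed competence level to one of three
# explicitness types for system responses.

EXPLICITNESS = {1: "verbose", 2: "normal", 3: "terse"}  # assumed labels

class ExpertiseModel:
    def __init__(self, level: int = 1):
        self.level = level      # 1 = novice ... 3 = expert
        self.successes = 0

    def observe(self, interaction_succeeded: bool) -> None:
        # Online update: repeated successful interactions raise the
        # assumed competence; a failure lowers it.
        if interaction_succeeded:
            self.successes += 1
            if self.successes >= 3 and self.level < 3:
                self.level += 1
                self.successes = 0
        else:
            self.level = max(1, self.level - 1)
            self.successes = 0

    def explicitness(self) -> str:
        return EXPLICITNESS[self.level]
```

The offline version described in the abstract would, by analogy, persist and decay the level across sessions rather than within one.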


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2002

Adaptive Dialogue Systems - Interaction with Interact

Kristiina Jokinen; Antti Kerminen; Tommi Lagus; Jukka Kuusisto; Graham Wilcock; Markku Turunen; Jaakko Hakulinen; Krista Jauhiainen

Technological development has made computer interaction more common and also commercially feasible, and the number of interactive systems has grown rapidly. At the same time, the systems should be able to adapt to various situations and various users, so as to provide the most efficient and helpful mode of interaction. The aim of the Interact project is to explore natural human-computer interaction and to develop dialogue models which will allow users to interact with the computer in a natural and robust way. The paper describes the innovative goals of the project and presents ways that the Interact system supports adaptivity on different system design and interaction management levels.


International Conference on Machine Learning | 2008

Distinguishing the Communicative Functions of Gestures

Kristiina Jokinen; Costanza Navarretta; Patrizia Paggio

This paper deals with the results of a machine learning experiment conducted on annotated gesture data from two case studies (Danish and Estonian). The data concern mainly facial displays, which are annotated with attributes relating to shape and dynamics, as well as communicative function. The results of the experiments show that the granularity of the attributes used seems appropriate for the task of distinguishing the desired communicative functions. This is a promising result in view of future automation of the annotation task.
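A toy version of this kind of experiment can be sketched as learning a mapping from gesture attributes to a communicative function. The feature values, labels, and the simple majority-lookup learner below are invented for illustration; they do not reproduce the paper's attribute set or classifiers.

```python
# Toy sketch: learn to map gesture attributes (shape, dynamics) to a
# communicative function via a majority-label lookup table.
from collections import Counter, defaultdict

def train(examples):
    """For each attribute combination, remember the most frequent
    communicative function seen in training."""
    table = defaultdict(Counter)
    for features, function in examples:
        table[features][function] += 1
    return {f: c.most_common(1)[0][0] for f, c in table.items()}

# Invented annotated examples: ((shape, dynamics), communicative function).
annotated = [
    (("nod", "repeated"), "feedback-give"),
    (("nod", "single"), "feedback-give"),
    (("shake", "repeated"), "feedback-refuse"),
    (("raise-eyebrows", "single"), "feedback-elicit"),
    (("nod", "repeated"), "feedback-give"),
]

model = train(annotated)
print(model[("shake", "repeated")])  # → feedback-refuse
```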


International Conference on Universal Access in Human-Computer Interaction | 2009

Gaze and Gesture Activity in Communication

Kristiina Jokinen

Non-verbal communication is important for maintaining conversational fluency. Gestures, facial expressions and eye-gazing function as non-verbal means to convey feedback and provide subtle cues to control and organise conversations. In this paper, verbal and non-verbal feedback are discussed from the point of view of how they contribute to the communicative activity in conversations, especially the strategies that speakers deploy when they aim to construct a shared understanding of the tasks and duties in interaction. The study concerns conversational data, collected for the purposes of designing and developing more natural interactive systems.


Proceedings of the 2010 Workshop on Eye Gaze in Intelligent Human Machine Interaction | 2010

On eye-gaze and turn-taking

Kristiina Jokinen; Masafumi Nishida; Seiichi Yamamoto

In this paper we describe our eye-tracking data collection and preliminary experiments concerning the relation between eye-gazing and turn-taking in natural human-human conversations, and how these observations can be extended to multimodal human-machine interactions. We confirm the earlier findings that eye-gaze is important in coordinating turn-taking and information flow in dialogues, but note that in multiparty dialogues head movement also seems to play a crucial role in signalling a person's intention to take, hold, or yield the turn.


Natural Interaction with Robots, Knowbots and Smartphones, Putting Spoken Dialog Systems into Practice | 2014

Multimodal Open-Domain Conversations with the Nao Robot

Kristiina Jokinen; Graham Wilcock

In this paper we discuss the design of human-robot interaction focussing especially on social robot communication and multimodal information presentation. As a starting point we use the WikiTalk application, an open-domain conversational system which has been previously developed using a robotics simulator. We describe how it can be implemented on the Nao robot platform, enabling Nao to make informative spoken contributions on a wide range of topics during conversation. Spoken interaction is further combined with gesturing in order to support Nao’s presentation by natural multimodal capabilities, and to enhance and explore natural communication between human users and robots.


International Conference on Computational Linguistics | 1996

Goal formulation based on communicative principles

Kristiina Jokinen

The paper presents the Constructive Dialogue Model as a new approach to formulating system goals in intelligent dialogue systems. The departure point is general communicative principles which constrain cooperative and coherent communication. Dialogue participants are engaged in a cooperative task whereby a model of the joint purpose is constructed. Contributions are planned as reactions to the changing context, so no dialogue grammar is needed. Speech act classification is also abandoned, in favour of contextual reasoning and rationality considerations.

Collaboration


Dive into Kristiina Jokinen's collaborations.

Top Co-Authors


Jens Allwood

University of Gothenburg
