Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jaakko Hakulinen is active.

Publications


Featured research published by Jaakko Hakulinen.


IBM Systems Journal | 2005

An architecture and applications for speech-based accessibility systems

Markku Turunen; Jaakko Hakulinen; Kari-Jouko Räihä; Esa-Pekka Salonen; Anssi Kainulainen; Perttu Prusi

Speech can be an efficient and natural means for communication between humans and computers. The development of speech applications requires techniques, methodology, and development tools capable of flexible and adaptive interaction, taking into account the needs of different users and different environments. In this paper, we discuss how the needs of different user groups can be supported by using novel architectural solutions. We present the Jaspis speech application architecture, which introduces a new paradigm for adaptive applications and has been released as open-source software to assist in practical application development. To illustrate how the architecture supports adaptive interaction and accessibility, we present several applications that are based on the Jaspis architecture, including multilingual e-mail systems, timetable systems, and guidance systems.
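The adaptivity paradigm behind Jaspis can be sketched roughly as follows. This is an illustrative toy, not the actual Jaspis API: class and function names (`Agent`, `LanguageEvaluator`, `select_agent`) are invented here. The idea it illustrates is that at each dialogue turn, evaluators score every candidate agent for the current situation and the best-scoring agent handles the turn, which is how the system adapts to different users and environments.

```python
class Agent:
    """A candidate handler for one dialogue turn (illustrative, not Jaspis code)."""
    def __init__(self, name, language):
        self.name = name
        self.language = language

    def handle(self, user_input):
        return f"[{self.name}] handled: {user_input!r}"

class LanguageEvaluator:
    """Scores agents by how well they match the current session language."""
    def __init__(self, session_language):
        self.session_language = session_language

    def score(self, agent):
        return 1.0 if agent.language == self.session_language else 0.0

def select_agent(agents, evaluators):
    # Multiply evaluator scores, so a zero from any evaluator vetoes an agent.
    def total(agent):
        score = 1.0
        for evaluator in evaluators:
            score *= evaluator.score(agent)
        return score
    return max(agents, key=total)

agents = [Agent("english_reader", "en"), Agent("finnish_reader", "fi")]
chosen = select_agent(agents, [LanguageEvaluator("fi")])
print(chosen.name)  # finnish_reader
```

Swapping evaluators (or adding new ones, e.g. for user expertise or output device) changes system behaviour without touching the agents themselves, which is the kind of flexibility the abstract attributes to the architecture.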


Behaviour & Information Technology | 2015

Defining user experience goals to guide the design of industrial systems

Eija Kaasinen; Virpi Roto; Jaakko Hakulinen; Tomi Heimonen; Jussi P. P. Jokinen; Hannu Karvonen; Tuuli Keskinen; Hanna Koskinen; Yichen Lu; Pertti Saariluoma; Helena Tokkonen; Markku Turunen

The key prerequisite for experience-driven design is to define what experience to design for. User experience (UX) goals concretise the intended experience. Based on our own case studies from industrial environments and a literature study, we propose five different approaches to acquiring insight and inspiration for UX goal setting: Brand, Theory, Empathy, Technology, and Vision. Each approach brings in a different viewpoint, thus supporting the multidisciplinary character of UX. The Brand approach ensures that the UX goals are in line with the company's brand promise. The Theory approach utilises the available scientific knowledge of human behaviour. The Empathy approach focuses on knowing the actual users and stepping into their shoes. The Technology approach considers the new technologies that are being introduced and their positive or negative influence on UX. Finally, the Vision approach focuses on renewal, introducing new kinds of user experiences. In the design of industrial systems, several stakeholders are involved and they should share common design goals. Using the different UX goal-setting approaches together brings in the viewpoints of different stakeholders, thus committing them to UX goal setting and emphasising UX as a strategic design decision.


Computer Speech & Language | 2011

Multimodal and mobile conversational Health and Fitness Companions

Markku Turunen; Jaakko Hakulinen; Olov Ståhl; Björn Gambäck; Preben Hansen; María del Carmen Rodríguez Gancedo; Raul Santos de la Camara; Cameron G. Smith; Daniel Charlton; Marc Cavazza

Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. This paper describes how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. We present concrete system architectures, virtual, physical and mobile multimodal interfaces, and interaction management techniques for such Companions. In particular, we show how knowledge representation and the separation of low-level interaction modelling from high-level reasoning at the domain level make it possible to implement distributed, but still coherent, interaction with Companions. The distribution is enabled by using a dialogue plan to communicate information from the domain-level planner to dialogue management, and from there to a separate mobile interface. The model enables each part of the system to handle the same information from its own perspective without containing overlapping logic, and makes it possible to separate task-specific and conversational dialogue management from each other. In addition to technical descriptions, results from the first evaluations of the Companions' interfaces are presented.
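The separation the abstract describes can be sketched as below. This is a minimal illustration, not the project's actual code: the task names, slot values, and function names are invented. What it shows is the key point of the design: a domain-level planner emits a dialogue plan as shared data, and the dialogue manager and the mobile interface each render that same plan from their own perspective, with no overlapping logic between the components.

```python
def domain_planner():
    """High-level domain reasoning: decide WHAT to discuss, not how."""
    return [
        {"task": "confirm_exercise", "slot": "30-minute walk"},
        {"task": "suggest_meal", "slot": "vegetable soup"},
    ]

def dialogue_manager(plan):
    """Low-level interaction modelling: turn plan steps into spoken prompts."""
    prompts = {
        "confirm_exercise": "Did you complete your {}?",
        "suggest_meal": "How about {} for dinner?",
    }
    return [prompts[step["task"]].format(step["slot"]) for step in plan]

def mobile_interface(plan):
    """Separate mobile view: render the same plan as a checklist."""
    return [f"[ ] {step['task']}: {step['slot']}" for step in plan]

plan = domain_planner()
print(dialogue_manager(plan)[0])  # Did you complete your 30-minute walk?
print(mobile_interface(plan)[1])  # [ ] suggest_meal: vegetable soup
```

Because the plan itself is the only interface between the parts, either front end can be replaced or run on a different device without changes to the planner, which is what makes the distributed-but-coherent interaction possible.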


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2002

Adaptive Dialogue Systems - Interaction with Interact

Kristiina Jokinen; Antti Kerminen; Tommi Lagus; Jukka Kuusisto; Graham Wilcock; Markku Turunen; Jaakko Hakulinen; Krista Jauhiainen

Technological development has made computer interaction more common and commercially feasible, and the number of interactive systems has grown rapidly. At the same time, such systems should be able to adapt to various situations and various users, so as to provide the most efficient and helpful mode of interaction. The aim of the Interact project is to explore natural human-computer interaction and to develop dialogue models which will allow users to interact with the computer in a natural and robust way. The paper describes the innovative goals of the project and presents ways in which the Interact system supports adaptivity at different levels of system design and interaction management.


Human-Computer Interaction with Mobile Devices and Services | 2009

User expectations and user experience with different modalities in a mobile phone controlled home entertainment system

Markku Turunen; Aleksi Melto; Juho Hella; Tomi Heimonen; Jaakko Hakulinen; Erno Mäkinen; Tuuli Laivo; Hannu Soronen

The home environment is an exciting application domain for multimodal mobile interfaces. Instead of multiple remote controls, personal mobile devices could be used to operate home entertainment systems. This paper reports a subjective evaluation of multimodal inputs and outputs for controlling a home media center with a mobile phone. A within-subject evaluation with 26 participants revealed significant differences in user expectations of, and experiences with, the different modalities. Speech input was received extremely well, even surpassing expectations in some cases, while gestures and haptic feedback almost failed to meet the lowest expectations. The results can be applied to the design of similar multimodal applications in home environments.


Advances in Computer Entertainment Technology | 2009

Multimodal interaction with speech and physical touch interface in a media center application

Markku Turunen; Aleksi Kallinen; Iván Sánchez; Jukka Riekki; Juho Hella; Thomas Olsson; Aleksi Melto; Juha-Pekka Rajaniemi; Jaakko Hakulinen; Erno Mäkinen; Pellervo Valkama; Toni Miettinen; Mikko Pyykkönen; Timo Saloranta; Ekaterina Gilman; Roope Raisamo

We present a multimodal media center interface based on a novel combination of modalities. The application combines a large high-definition display with a mobile phone. Users can interact with the system using speech input (speech recognition), physical touch (touching physical icons with the mobile phone), and gestures. We present the key results from a laboratory experiment where user expectations and actual usage experiences are compared.


Text, Speech and Dialogue | 2001

Agent-Based Adaptive Interaction and Dialogue Management Architecture for Speech Applications

Markku Turunen; Jaakko Hakulinen

In this paper we present an adaptive architecture for interaction and dialogue management in spoken dialogue applications. The architecture is targeted at applications that adapt to the situation and the user. We have implemented it as part of our Jaspis speech application development framework. We also discuss issues discovered in applications built on top of Jaspis.


Conference of the European Chapter of the Association for Computational Linguistics | 2009

A Mobile Health and Fitness Companion Demonstrator

Olov Ståhl; Björn Gambäck; Markku Turunen; Jaakko Hakulinen

Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. The paper presents a multimodal conversational Companion system focused on health and fitness, which has both a stationary and a mobile component.


Conference on Computability in Europe | 2010

Accessible Multimodal Media Center Application for Blind and Partially Sighted People

Markku Turunen; Hannu Soronen; Santtu Pakarinen; Juho Hella; Tuuli Laivo; Jaakko Hakulinen; Aleksi Melto; Juha-Pekka Rajaniemi; Erno Mäkinen; Tomi Heimonen; Jussi Rantala; Pellervo Valkama; Toni Miettinen; Roope Raisamo

We present a multimodal media center interface designed for blind and partially sighted people. It features a zooming focus-plus-context graphical user interface coupled with speech output and haptic feedback. A multimodal combination of gestures, key input, and speech input is utilized to interact with the interface. The interface has been developed and evaluated in close cooperation with representatives from the target user groups. We discuss the results from longitudinal evaluations that took place in participants’ homes, and compare the results to other pilot and laboratory studies carried out previously with physically disabled and nondisabled users.


Intelligent Virtual Agents | 2008

Integrating Planning and Dialogue in a Lifestyle Agent

Cameron G. Smith; Marc Cavazza; Daniel Charlton; Li Zhang; Markku Turunen; Jaakko Hakulinen

In this paper, we describe an Embodied Conversational Agent that advises users on everyday activities in order to promote a healthier lifestyle. It operates by generating user activity models (similar to decompositional task models), using a Hierarchical Task Network (HTN) planner. These activity models are refined through successive cycles of planning and dialogue, during which the agent suggests possible activities to the user, and the user expresses her preferences in return. A first prototype has been fully implemented (as a spoken dialogue system) and tested with 20 subjects. Early results show a high level of task completion despite the word error rate, and further potential for improvement.
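The planning-and-dialogue cycle the abstract mentions can be illustrated with a minimal HTN-style decomposition. The tasks, method table, and function below are invented for illustration; the paper's planner and activity models are far more elaborate. The sketch shows the core HTN idea: a compound task is expanded by methods into subtasks until only primitive activities remain, and user preferences gathered in dialogue steer which method is chosen.

```python
# Method table: compound task -> list of alternative decompositions.
# Anything not in the table is treated as a primitive activity.
METHODS = {
    "healthy_day": [["morning_exercise", "healthy_lunch"]],
    "morning_exercise": [["jog_30min"], ["swim_20min"]],
    "healthy_lunch": [["cook_salad"]],
}

def decompose(task, preferences):
    """Expand a task into primitive activities, honouring user preferences."""
    if task not in METHODS:          # primitive activity: keep as-is
        return [task]
    # Choose the first method none of whose subtasks the user has rejected.
    for subtasks in METHODS[task]:
        if all(t not in preferences.get("rejected", []) for t in subtasks):
            plan = []
            for sub in subtasks:
                plan.extend(decompose(sub, preferences))
            return plan
    return []  # no acceptable method: the agent would re-plan via dialogue

# The user rejected jogging during dialogue, so the planner refines the
# activity model to use the alternative decomposition.
print(decompose("healthy_day", {"rejected": ["jog_30min"]}))
# ['swim_20min', 'cook_salad']
```

Each dialogue turn that adds a preference triggers another decomposition pass, which is the refinement cycle between planning and dialogue that the paper describes.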

Collaboration


Dive into Jaakko Hakulinen's collaborations.

Top Co-Authors

Tomi Heimonen

University of Wisconsin–Stevens Point