Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Aleksi Melto is active.

Publication


Featured research published by Aleksi Melto.


Human-Computer Interaction with Mobile Devices and Services | 2009

User expectations and user experience with different modalities in a mobile phone controlled home entertainment system

Markku Turunen; Aleksi Melto; Juho Hella; Tomi Heimonen; Jaakko Hakulinen; Erno Mäkinen; Tuuli Laivo; Hannu Soronen

The home environment is an exciting application domain for multimodal mobile interfaces. Instead of multiple remote controls, personal mobile devices could be used to operate home entertainment systems. This paper reports a subjective evaluation of multimodal inputs and outputs for controlling a home media center with a mobile phone. A within-subject evaluation with 26 participants revealed significant differences in user expectations of, and experiences with, the different modalities. Speech input was received extremely well, even surpassing expectations in some cases, while gestures and haptic feedback almost failed to meet even the lowest expectations. The results can be applied to the design of similar multimodal applications in home environments.


Advances in Computer Entertainment Technology | 2009

Multimodal interaction with speech and physical touch interface in a media center application

Markku Turunen; Aleksi Kallinen; Iván Sánchez; Jukka Riekki; Juho Hella; Thomas Olsson; Aleksi Melto; Juha-Pekka Rajaniemi; Jaakko Hakulinen; Erno Mäkinen; Pellervo Valkama; Toni Miettinen; Mikko Pyykkönen; Timo Saloranta; Ekaterina Gilman; Roope Raisamo

We present a multimodal media center interface based on a novel combination of modalities: the application combines a large high-definition display with a mobile phone. Users can interact with the system using speech input (speech recognition), physical touch (touching physical icons with the mobile phone), and gestures. We present the key results from a laboratory experiment in which user expectations and actual usage experiences are compared.


Conference on Computability in Europe | 2010

Accessible Multimodal Media Center Application for Blind and Partially Sighted People

Markku Turunen; Hannu Soronen; Santtu Pakarinen; Juho Hella; Tuuli Laivo; Jaakko Hakulinen; Aleksi Melto; Juha-Pekka Rajaniemi; Erno Mäkinen; Tomi Heimonen; Jussi Rantala; Pellervo Valkama; Toni Miettinen; Roope Raisamo

We present a multimodal media center interface designed for blind and partially sighted people. It features a zooming focus-plus-context graphical user interface coupled with speech output and haptic feedback. A multimodal combination of gestures, key input, and speech input is utilized to interact with the interface. The interface has been developed and evaluated in close cooperation with representatives from the target user groups. We discuss the results from longitudinal evaluations that took place in participants’ homes, and compare the results to other pilot and laboratory studies carried out previously with physically disabled and nondisabled users.


International Conference on Human-Computer Interaction | 2009

Multimodal Interaction with Speech, Gestures and Haptic Feedback in a Media Center Application

Markku Turunen; Jaakko Hakulinen; Juho Hella; Juha-Pekka Rajaniemi; Aleksi Melto; Erno Mäkinen; Jussi Rantala; Tomi Heimonen; Tuuli Laivo; Hannu Soronen; Mervi Hansen; Pellervo Valkama; Toni Miettinen; Roope Raisamo

We demonstrate interaction with a multimodal media center application. The mobile phone-based interface includes speech and gesture input and haptic feedback. The setup resembles our long-term public pilot study, in which a living room environment containing the application was constructed inside a local media museum, allowing visitors to test the system freely.


International Conference on Human-Computer Interaction | 2009

Multimodal Media Center Interface Based on Speech, Gestures and Haptic Feedback

Markku Turunen; Jaakko Hakulinen; Juho Hella; Juha-Pekka Rajaniemi; Aleksi Melto; Erno Mäkinen; Jussi Rantala; Tomi Heimonen; Tuuli Laivo; Hannu Soronen; Mervi Hansen; Pellervo Valkama; Toni Miettinen; Roope Raisamo

We present a multimodal media center interface based on speech input, gestures, and haptic feedback (hapticons). In addition, the application includes a zoomable context + focus GUI in tight combination with speech output. The resulting interface is designed for and evaluated with different user groups, including visually and physically impaired users. Finally, we present the key results from its user evaluation and public pilot studies.


Mobile and Ubiquitous Multimedia | 2013

Mobile dictation for healthcare professionals

Tuuli Keskinen; Aleksi Melto; Jaakko Hakulinen; Markku Turunen; Santeri Saarinen; Tamás Pallos; Pekka Kallioniemi; Riitta Danielsson-Ojala; Sanna Salanterä

We demonstrate a mobile dictation application for healthcare professionals that utilizes automatic speech recognition. Development was carried out in close collaboration between human-technology interaction researchers, nursing science researchers, and professionals working in the field. Our work was motivated by the need to pass spoken patient information on to the next stages of treatment without additional steps. In addition, we wanted to enable truly mobile spoken information entry, i.e., dictation that can take place on the spot. To study the application's applicability, we conducted a small-scale Wizard-of-Oz evaluation in a real hospital environment with practicing nurses. Our main focus was to gather subjective expectations and experiences from the nurses themselves. The results show clear potential for our mobile dictation application and its further development.


International MindTrek Conference | 2009

User experience of speech controlled media center for physically disabled users

Hannu Soronen; Santtu Pakarinen; Mervi Hansen; Markku Turunen; Jaakko Hakulinen; Juho Hella; Juha-Pekka Rajaniemi; Aleksi Melto; Tuuli Laivo

In this paper, we present results from a long-term user pilot study of a speech-controlled media center. The pilot users were physically disabled, and the system was installed in their apartment for six weeks. We designed a multimodal media center interface based on speech: full speech control is provided through a hands-free speech recognition input method for people with physical disabilities. In addition, the application includes a zoomable context + focus GUI in tight combination with speech output. The resulting interface was designed following human-centered principles. Finally, the results of the user experience studies are presented.


COST'09 Proceedings of the Second International Conference on Development of Multimodal Interfaces: Active Listening and Synchrony | 2009

Accessible speech-based and multimodal media center interface for users with physical disabilities

Markku Turunen; Jaakko Hakulinen; Aleksi Melto; Juho Hella; Tuuli Laivo; Juha-Pekka Rajaniemi; Erno Mäkinen; Hannu Soronen; Mervi Hansen; Santtu Pakarinen; Tomi Heimonen; Jussi Rantala; Pellervo Valkama; Toni Miettinen; Roope Raisamo

We present a multimodal media center user interface with a hands-free speech recognition input method for users with physical disabilities. In addition to speech input, the application features a zoomable context + focus graphical user interface and several other modalities, including speech output, haptic feedback, and gesture input. These features have been developed in co-operation with representatives from the target user groups. In this article, we focus on the speech input interface and its evaluations. We discuss the user interface design and results from a long-term pilot study taking place in homes of physically disabled users, and compare the results to a public pilot study and laboratory studies carried out with non-disabled users.


Conference of the International Speech Communication Association | 2009

SUXES - user experience evaluation method for spoken and multimodal interaction

Markku Turunen; Jaakko Hakulinen; Aleksi Melto; Tomi Heimonen; Tuuli Laivo; Juho Hella


Conference of the International Speech Communication Association | 2007

Design of a rich multimodal interface for mobile spoken route guidance

Markku Turunen; Jaakko Hakulinen; Anssi Kainulainen; Aleksi Melto; Topi Hurtig

Collaboration


Dive into Aleksi Melto's collaborations.

Top Co-Authors

Tomi Heimonen

University of Wisconsin-Madison

Hannu Soronen

Tampere University of Technology