Erno Mäkinen
University of Tampere
Publications
Featured research published by Erno Mäkinen.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008
Erno Mäkinen; Roope Raisamo
We present a systematic study on gender classification with automatically detected and aligned faces. We experimented with 120 combinations of automatic face detection, face alignment, and gender classification. One of the findings was that the automatic face alignment methods did not increase the gender classification rates. However, manual alignment increased classification rates slightly, which suggests that automatic alignment will be useful once the alignment methods are further improved. We also found that the gender classification methods performed almost equally well with different input image sizes. Overall, the best classification rate was achieved with a support vector machine. A neural network and AdaBoost achieved almost as good classification rates as the support vector machine and could be used in applications where classification speed is considered more important than maximum classification accuracy.
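As a rough illustration of the kind of pipeline the study compares, the sketch below trains an SVM gender classifier on flattened grayscale face crops. It is a minimal stand-in, not the paper's implementation: the data is synthetic random noise, the 24x24 crop size and RBF kernel are assumptions, and the paper's actual pipeline used real detected and aligned faces.

```python
# Hedged sketch: SVM gender classification on flattened face crops.
# All data here is synthetic; real use would feed detected/aligned faces.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for 200 grayscale 24x24 face crops (crop size is an assumption).
X = rng.random((200, 24 * 24))
y = rng.integers(0, 2, 200)          # synthetic labels: 0 = female, 1 = male

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # near chance level on random data
```

Swapping `SVC` for an `MLPClassifier` or `AdaBoostClassifier` in the same skeleton mirrors the classifier comparison the abstract describes.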
Pattern Recognition Letters | 2008
Erno Mäkinen; Roope Raisamo
Successful face analysis requires robust methods. It has been hard to compare the methods due to differing experimental setups. We carried out a comparison study of state-of-the-art gender classification methods to find out their actual reliability. The main contributions are comprehensive and comparable classification results for the gender classification methods combined with automatic real-time face detection and, in addition, with manual face normalization. We also experimented with combining gender classifier outputs arithmetically, which led to increased classification accuracies. Furthermore, we contribute guidelines for carrying out classification experiments, knowledge of the strengths and weaknesses of the gender classification methods, and two new variants of the known methods.
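The arithmetic combination of classifier outputs mentioned above can be sketched as follows: average the per-class scores of several classifiers and take the argmax. The scores and the three-classifier lineup below are made-up illustrative values, not results from the paper.

```python
# Hedged sketch: arithmetic-mean combination of classifier outputs.
import numpy as np

# Per-classifier class scores for one face.
# Rows = classifiers (e.g. SVM, neural network, AdaBoost; illustrative),
# columns = classes (female, male). Values are invented for the example.
scores = np.array([
    [0.40, 0.60],   # classifier 1
    [0.55, 0.45],   # classifier 2
    [0.30, 0.70],   # classifier 3
])

mean_scores = scores.mean(axis=0)     # arithmetic mean across classifiers
decision = int(mean_scores.argmax())  # combined decision: class index 1 here
```

Averaging tends to smooth out individual classifiers' mistakes, which is one plausible reason such combinations can raise accuracy.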
Human-Computer Interaction with Mobile Devices and Services | 2009
Markku Turunen; Aleksi Melto; Juho Hella; Tomi Heimonen; Jaakko Hakulinen; Erno Mäkinen; Tuuli Laivo; Hannu Soronen
The home environment is an exciting application domain for multimodal mobile interfaces. Instead of multiple remote controls, personal mobile devices could be used to operate home entertainment systems. This paper reports a subjective evaluation of multimodal inputs and outputs for controlling a home media center with a mobile phone. A within-subject evaluation with 26 participants revealed significant differences in user expectations of and experiences with the different modalities. Speech input was received extremely well, even surpassing expectations in some cases, while gestures and haptic feedback almost failed to meet even the lowest expectations. The results can be applied to the design of similar multimodal applications in home environments.
Advances in Computer Entertainment Technology | 2009
Markku Turunen; Aleksi Kallinen; Iván Sánchez; Jukka Riekki; Juho Hella; Thomas Olsson; Aleksi Melto; Juha-Pekka Rajaniemi; Jaakko Hakulinen; Erno Mäkinen; Pellervo Valkama; Toni Miettinen; Mikko Pyykkönen; Timo Saloranta; Ekaterina Gilman; Roope Raisamo
We present a multimodal media center interface based on a novel combination of new modalities. The application is based on a combination of a large high-definition display and a mobile phone. Users can interact with the system using speech input (speech recognition), physical touch (touching physical icons with the mobile phone), and gestures. We present the key results from a laboratory experiment where user expectations and actual usage experiences are compared.
Human-Computer Interaction with Mobile Devices and Services | 2013
Jani Heikkinen; Erno Mäkinen; Jani Lylykangas; Toni Pakkanen; Kaisa Väänänen-Vainio-Mattila; Roope Raisamo
The spread of mobile devices to all areas of everyday life affects many contexts of use, including cars. Even though driving itself has remained relatively unchanged, there is now a wide variety of new in-car tasks, which people perform with both integrated infotainment systems and their mobile devices. To gain insight into this new task context and how it could be improved, we conducted a qualitative, contextual study in which we observed real-life car journeys with eight participants. The focus was on user interaction with touchscreen mobile devices, due to their wide range of functions and services. The findings show that the car is an extension of other contexts and contains a rich set of infotainment tasks, including the use of social media. Drivers emphasized gesture interaction and the use of non-visual modalities for replacing visual information and for notification of changes in the driving context. Based on the findings, we present design implications for future in-car infotainment systems.
Conference on Computability in Europe | 2010
Markku Turunen; Hannu Soronen; Santtu Pakarinen; Juho Hella; Tuuli Laivo; Jaakko Hakulinen; Aleksi Melto; Juha-Pekka Rajaniemi; Erno Mäkinen; Tomi Heimonen; Jussi Rantala; Pellervo Valkama; Toni Miettinen; Roope Raisamo
We present a multimodal media center interface designed for blind and partially sighted people. It features a zooming focus-plus-context graphical user interface coupled with speech output and haptic feedback. A multimodal combination of gestures, key input, and speech input is utilized to interact with the interface. The interface has been developed and evaluated in close cooperation with representatives from the target user groups. We discuss the results from longitudinal evaluations that took place in participants’ homes, and compare the results to other pilot and laboratory studies carried out previously with physically disabled and nondisabled users.
International Conference on Human Haptic Sensing and Touch Enabled Computer Applications | 2012
Roope Raisamo; Tomi Nukarinen; Johannes Pystynen; Erno Mäkinen; Johan Kildal
Current mobile navigation systems often require visual attention, which can make them both inconvenient and unsafe to use while walking. In this paper, we introduce orientation inquiry, a new haptic interaction technique for non-visual pedestrian navigation. In a pilot experiment, the orientation inquiry technique was compared to tactile icons, i.e., vibration patterns indicating the direction of travel. The results suggest that both techniques are suitable for navigation, but the participants preferred orientation inquiry to tactile icons.
Mobile and Ubiquitous Multimedia | 2014
Kaisa Väänänen-Vainio-Mattila; Jani Heikkinen; Ahmed Farooq; Grigori Evreinov; Erno Mäkinen; Roope Raisamo
Haptic feedback, based on the sense of touch and movement, is a promising area of human-computer interaction in the car context. Most user studies on in-car haptic feedback have been controlled experiments with specific types of haptic stimuli. In the study presented in this paper, twelve participants tried novel haptic feedback prototypes and evaluated communication scenarios in a physical car context. Our aim was to understand the user experience and usage potential of haptic feedback in the car. The qualitative results show that haptic feedback may offer support for safety and social communication, but can be hard to interpret. We propose design considerations for in-car haptics such as simplicity, subtlety, and directionality.
Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research | 2010
Yulia Gizatdinova; Veikko Surakka; Guoying Zhao; Erno Mäkinen; Roope Raisamo
Facial expressions are emotionally, socially, and otherwise meaningful reflective signals in the face. They play a critical role in human life, providing an important channel of nonverbal communication. Automating the entire process of expression analysis can potentially facilitate human-computer interaction, making it resemble the mechanisms of human-human communication. In this paper, we present ongoing research that aims at the development of a novel spatiotemporal approach to expression classification in video. The novelty comes from a new facial representation based on local spatiotemporal feature descriptors. In particular, combined dynamic edge and texture information is used for a reliable description of both the appearance and the motion of the expression. Support vector machines are utilized to perform the final expression classification. The planned experiments will systematically evaluate the performance of the developed method on several databases of complex facial expressions.
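The general idea of describing a face video by spatial (edge) and temporal (motion) statistics and then classifying with an SVM can be sketched as below. The descriptor here is a deliberately simplified toy stand-in built from frame gradients and frame differences; it is not the paper's local spatiotemporal descriptor, and all clips and labels are synthetic.

```python
# Hedged sketch: toy spatiotemporal video descriptor + SVM classifier.
import numpy as np
from sklearn.svm import SVC

def toy_descriptor(video):
    """video: (frames, height, width) grayscale array -> 1-D feature vector."""
    gx = np.diff(video, axis=2).mean(axis=(1, 2))          # horizontal edge energy per frame
    gy = np.diff(video, axis=1).mean(axis=(1, 2))          # vertical edge energy per frame
    dt = np.abs(np.diff(video, axis=0)).mean(axis=(1, 2))  # frame-to-frame motion
    return np.concatenate([gx, gy, dt])

rng = np.random.default_rng(1)
videos = rng.random((40, 8, 16, 16))    # 40 synthetic 8-frame clips
labels = rng.integers(0, 3, 40)         # 3 made-up expression classes
features = np.stack([toy_descriptor(v) for v in videos])
clf = SVC().fit(features, labels)       # final SVM classification stage
```

The split into appearance terms (`gx`, `gy`) and a motion term (`dt`) loosely mirrors the abstract's combination of edge and dynamic information.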
International Conference on Human-Computer Interaction | 2009
Markku Turunen; Jaakko Hakulinen; Juho Hella; Juha-Pekka Rajaniemi; Aleksi Melto; Erno Mäkinen; Jussi Rantala; Tomi Heimonen; Tuuli Laivo; Hannu Soronen; Mervi Hansen; Pellervo Valkama; Toni Miettinen; Roope Raisamo
We demonstrate interaction with a multimodal media center application. The mobile phone-based interface includes speech and gesture input and haptic feedback. The setup resembles our long-term public pilot study, in which a living room environment containing the application was constructed inside a local media museum, allowing visitors to freely test the system.