Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Johannes Peltola is active.

Publication


Featured research published by Johannes Peltola.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

Activity classification using realistic data from wearable sensors

Juha Pärkkä; Miikka Ermes; Panu Korpipää; Jani Mäntyjärvi; Johannes Peltola; Ilkka Korhonen

Automatic classification of everyday activities can be used for the promotion of health-enhancing physical activities and a healthier lifestyle. In this paper, methods used for classification of everyday activities like walking, running, and cycling are described. The aim of the study was to find out how to recognize activities, which sensors are useful, and what kind of signal processing and classification is required. A large and realistic library of sensor data was collected. Sixteen test persons took part in the data collection, resulting in approximately 31 h of annotated, 35-channel data recorded in an everyday environment. The test persons carried a set of wearable sensors while performing several activities during the 2-h measurement session. Classification results of three classifiers are shown: a custom decision tree, an automatically generated decision tree, and an artificial neural network. The classification accuracies using leave-one-subject-out cross-validation range from 58% to 97% for the custom decision tree classifier, from 56% to 97% for the automatically generated decision tree, and from 22% to 96% for the artificial neural network. Total classification accuracy is 82% for the custom decision tree classifier, 86% for the automatically generated decision tree, and 82% for the artificial neural network.
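The leave-one-subject-out protocol used in this paper can be sketched as follows. This is a hypothetical illustration: the toy data, feature vectors, and the nearest-centroid classifier are stand-ins, not the authors' actual sensors or decision-tree pipeline.

```python
# Leave-one-subject-out cross-validation sketch: train on all subjects
# except one, test on the held-out subject, repeat for every subject.
from statistics import mean

def nearest_centroid_fit(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: [mean(col) for col in zip(*xs)] for y, xs in by_label.items()}

def predict(centroids, x):
    # pick the label whose centroid is closest (squared Euclidean distance)
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(x, centroids[y])))

def loso_accuracy(data):
    """data: subject_id -> list of (feature_vector, activity_label)."""
    correct = total = 0
    for held_out in data:                       # leave one subject out
        train = [s for subj, ss in data.items() if subj != held_out for s in ss]
        centroids = nearest_centroid_fit(train)
        for x, y in data[held_out]:             # test on the unseen subject
            correct += predict(centroids, x) == y
            total += 1
    return correct / total
```

Evaluating per held-out subject, rather than with a random split, is what exposes the subject-to-subject variability behind the paper's wide 22–97% per-activity accuracy range.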


Ubiquitous Computing | 2003

Bayesian approach to sensor-based context awareness

Panu Korpipää; Miika Koskinen; Johannes Peltola; Satu-Marja Mäkelä; Tapio Seppänen

The usability of a mobile device and services can be enhanced by context awareness. The aim of this experiment was to expand the set of generally recognizable constituents of context concerning personal mobile device usage. Naive Bayesian networks were applied to classify the contexts of a mobile device user in her normal daily activities. The distinguishing feature of this experiment in comparison to earlier context recognition research is the use of a naive Bayes framework and an extensive set of audio features derived partly from the algorithms of the upcoming MPEG-7 standard. The classification was based mainly on audio features measured in a home scenario. The classification results indicate that with a resolution of one second in segments of 5–30 seconds, situations can be extracted fairly well, but most of the contexts are likely to be valid only in a restricted scenario. The naive Bayes framework is feasible for context recognition. In real-world conditions, the recognition accuracy using leave-one-out cross-validation was 87% for true positives and 95% for true negatives, averaged over nine eight-minute scenarios containing 17 segments of different lengths and nine different contexts. Respectively, the reference accuracies measured by testing with training data were 88% and 95%, suggesting that the model was capable of covering the variability introduced in the data on purpose. Reference recognition accuracy in controlled conditions was 96% and 100%, respectively. However, from the applicability viewpoint, generalization remains a problem, as from a wider perspective almost any feature may refer to many possible real-world situations.
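The naive Bayes framework the paper applies can be reduced to a few lines. This is a minimal Gaussian naive Bayes sketch under the independence assumption; the audio-derived features and context labels below are invented placeholders, not the paper's MPEG-7 feature set.

```python
# Gaussian naive Bayes: model each feature per context as an independent
# Gaussian, then classify by the highest (log-)likelihood context.
import math
from statistics import mean, pstdev

def fit(samples):
    """samples: list of (feature_vector, context). Returns context -> per-feature (mean, std)."""
    grouped = {}
    for x, c in samples:
        grouped.setdefault(c, []).append(x)
    return {c: [(mean(col), pstdev(col) or 1e-6) for col in zip(*xs)]
            for c, xs in grouped.items()}

def log_likelihood(stats, x):
    # naive assumption: features are independent given the context,
    # so per-feature log-densities simply add up (constants dropped)
    return sum(-0.5 * ((v - m) / s) ** 2 - math.log(s)
               for v, (m, s) in zip(x, stats))

def classify(model, x):
    return max(model, key=lambda c: log_likelihood(model[c], x))
```

The independence assumption is exactly what makes the framework cheap enough for a mobile device, at the cost of the generalization limits the abstract notes.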


Pattern Recognition Letters | 2006

Soft biometrics: combining body weight and fat measurements with fingerprint biometrics

Heikki Ailisto; Elena Vildjiounaite; Mikko Lindholm; Satu-Marja Mäkelä; Johannes Peltola

The aim of this study was to examine whether using soft biometrics, i.e. easily measurable personal characteristics such as weight and fat percentage, can improve the performance of biometrics in verification-type applications. Fusing fingerprint biometrics with soft biometrics, in this case body weight measurements, decreased the total error rate (TER) from 3.9% to 1.5% in an experiment with 62 test subjects. This result shows that simple physiological measurements can be used to support biometric recognition. Furthermore, soft biometrics are unobtrusive, there is no risk of identity theft, the perception of the big-brother effect is small, the equipment needed is low-cost, and the methods are easy to understand. Soft biometrics alone are not suitable for security-related applications, but they can be used to improve the performance of traditional biometrics. A potentially feasible use for soft biometrics may be found in non-security, convenience-type cases, such as domestic applications.
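A score-level fusion in the spirit of this paper can be sketched as below. The similarity mapping, fusion weight `alpha`, and threshold are made-up illustration values, not the study's actual parameters.

```python
# Fuse a fingerprint matcher score with a body-weight similarity, then
# evaluate with the total error rate (TER = FRR + FAR) at a threshold.

def weight_similarity(claimed_kg, measured_kg, tolerance_kg=5.0):
    """Map the weight difference to a similarity in [0, 1]."""
    return max(0.0, 1.0 - abs(claimed_kg - measured_kg) / tolerance_kg)

def fused_score(fingerprint_score, weight_sim, alpha=0.8):
    """Weighted-sum fusion; the fingerprint dominates, the soft trait assists."""
    return alpha * fingerprint_score + (1 - alpha) * weight_sim

def total_error_rate(genuine_scores, impostor_scores, threshold):
    """TER = false reject rate + false accept rate at a fixed threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr + far
```

Because the soft trait only nudges the fused score, an impostor with a marginal fingerprint score and the wrong body weight drops below the threshold, which is the mechanism behind the reported TER reduction.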


Signal Processing: Image Communication | 2007

Cross-layer architecture for scalable video transmission in wireless network

Jyrki Huusko; Janne Vehkaperä; Peter Amon; Catherine Lamy-Bergot; Gianmarco Panza; Johannes Peltola; Maria G. Martini

Multimedia applications such as video conferencing, digital video broadcasting (DVB), and streaming video and audio have been gaining popularity in recent years, and these services are increasingly being delivered to mobile users. The demand for quality of service (QoS) in multimedia raises huge challenges for network design, concerning not only the physical bandwidth but also the protocol design and services. One goal of system design is to provide efficient solutions for adaptive multimedia transmission over different access networks in an all-IP environment. The joint source and channel coding (JSCC/D) approach has already given promising results in optimizing multimedia transmission. In practice, however, arranging the required control mechanism and delivering the required side information through the network and protocol stack have caused problems, and the impact of the network has quite often been neglected in studies. In this paper we propose efficient cross-layer communication methods and a protocol architecture for transmitting the control information and optimizing multimedia transmission over wireless and wired IP networks. We also apply this architecture to the more specific case of streaming scalable video. Scalable video coding has recently been an active research topic; it offers simple and flexible solutions for video transmission over heterogeneous networks to heterogeneous terminals, and it adapts easily to varying transmission conditions. In this paper we illustrate how scalable video transmission can be improved through efficient use of the proposed cross-layer design, adaptation mechanisms, and control information.
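One adaptation decision in such a cross-layer design can be illustrated as follows: a link-layer bandwidth estimate is passed up the stack, and enhancement layers of the scalable video stream that no longer fit are dropped. The layer bitrates and bandwidth figures are hypothetical, not from the paper's experiments.

```python
# Given scalable video layers (base layer first, then enhancement layers)
# and a cross-layer bandwidth estimate, keep the largest prefix of layers
# whose total rate fits the link; higher layers depend on lower ones, so
# only prefixes are valid.

def select_layers(layer_bitrates_kbps, available_kbps):
    """Returns indices of the layers to transmit, e.g. [0] = base layer only."""
    selected, used = [], 0
    for i, rate in enumerate(layer_bitrates_kbps):
        if used + rate > available_kbps:
            break                     # this and all higher layers are dropped
        selected.append(i)
        used += rate
    return selected
```

The point of the cross-layer signaling in the paper is precisely to make an estimate like `available_kbps` visible to the layer that performs this kind of adaptation.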


Conference on Image and Video Retrieval | 2003

Detecting semantic concepts from video using temporal gradients and audio classification

Mika Rautiainen; Tapio Seppänen; Jani Penttilä; Johannes Peltola

In this paper we describe new methods to detect semantic concepts from digital video based on audible and visual content. The Temporal Gradient Correlogram captures temporal correlations of gradient edge directions from sampled shot frames. Power-related physical features are extracted from short audio samples in video shots. Video shots containing people, cityscape, landscape, speech, or instrumental sound are detected with trained self-organizing maps and kNN classification of audio samples. Test runs and evaluations in the TREC 2002 Video Track show consistent performance for the Temporal Gradient Correlogram and state-of-the-art precision in audio-based instrumental sound detection.
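The kNN step the paper mentions for audio samples is the standard algorithm, sketched below. The power-related features and concept labels here are invented placeholders, not the paper's feature set.

```python
# k-nearest-neighbour classification: label a sample by majority vote
# among the k training samples closest in feature space.
from collections import Counter

def knn_classify(train, x, k=3):
    """train: list of (feature_vector, label); x: query feature vector."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```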


Electronic Imaging | 2005

Personal video retrieval and browsing for mobile users

Anna Sachinopoulou; Satu-Marja Mäkelä; Sari Järvinen; Utz Westermann; Johannes Peltola; Paavo Pietarila

The latest camera-equipped mobile phones and faster cellular networks have increased the interest in mobile multimedia services. But for content consumption, delivery, and creation, the limited capabilities of mobile terminals require special attention. This paper introduces the Candela platform, an infrastructure that allows the creation, storage, and retrieval of home videos with special consideration of mobile terminals. Candela features a J2ME-based video recording and annotation tool which permits the creation and annotation of home videos on mobile phones. It offers an MPEG-7-based home video database which can be queried in an intelligent and user-oriented manner exploiting users' personal domain ontologies. The platform employs terminal profiling techniques to deliver video retrieval user interfaces that personalize the search results according to the users' preferences and terminal capabilities, facilitating effective retrieval of home videos via both mobile and fixed terminals. For video playout, Candela features a meta player, a video player augmented by an interactive metadata display which can be used for fast content-based in-video browsing, helping to avoid the consumption and streaming of uninteresting video parts and thus reducing network load. Thereby, Candela forms a comprehensive video management platform for mobile phones, fully covering mobile home video management from acquisition to delivery.


ACM Multimedia | 2005

MobiCon: integrated capture, annotation, and sharing of video clips with mobile phones

Janne Lahti; Utz Westermann; Marko Palola; Johannes Peltola; Elena Vildjiounaite

This paper presents MobiCon, a video production tool for mobile camera phones. MobiCon integrates video clip capture with context-aware, personalized clip annotation -- supporting automatic annotation suggestions based on context data and efficient manual annotation with user-specific ontologies and keywords -- and clip sharing secured by digital rights management techniques. Thus, MobiCon allows users to inexpensively create metadata-annotated video clips for a better management of their clip collections and keeps them in control of the clips they share.


International Conference on Multimedia and Expo | 2009

Deploying mobile multimedia services for everyday experience sharing

Sari Järvinen; Johannes Peltola; Johan Plomp; Onni Ojutkangas; Immo Heino; Janne Lahti; Juhani Heinilä

This paper presents a solution for creating lightweight content- and context-aware mobile multimedia services. The main application domain is user-created multimedia content and experience sharing. Our approach is to have a platform supporting state-of-the-art content management functionalities in order to enable easy creation of specialized multimedia services for various target groups and purposes. We have developed a mobile multimedia content creation platform with integrated context metadata support. To verify the overall functionality of our platform we have defined a multimedia content service template and created a set of exemplary services using web-based technologies such as JavaScript, Media RSS feeds, and Java.


Mobile and Ubiquitous Multimedia | 2009

Multimedia service creation platform for mobile experience sharing

Sari Järvinen; Johannes Peltola; Janne Lahti; Anna Sachinopoulou

The multimedia content created by users with their mobile phones is often shared with family and friends to recreate personal experiences. It is difficult for a single media sharing service to cover all the variations in how people would like to present their experiences of different events; thus it is important to be able to easily create versions of these services. This paper presents in detail our platform implementation for enabling the creation of lightweight content- and context-aware mobile multimedia services. The platform supports state-of-the-art content management functionalities in order to enable easy creation of specialized multimedia services for various target groups and purposes. Our solution includes context metadata support for mobile multimedia content and the creation of location-aware multimedia services. We have built example services on top of the platform using web-based technologies such as JavaScript, Media RSS feeds, and Java. The functionality of the platform has been tested in user evaluations with promising results.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Unsupervised Speaker Change Detection for Mobile Device Recorded Speech

Olli Vuorinen; Johannes Peltola; Satu-Marja Mäkelä

In this paper we propose an unsupervised speaker change detection (SCD) system developed for mobile device applications. We use the Bayesian information criterion (BIC) to find initial speaker changes, which are then verified or discarded in a second phase using a modified BIC and silence-detector information. Using silence information after the initial BIC pass helps separate real changes from noise peaks. An enhanced peak detector adjusts the BIC penalty parameter automatically, which improves robustness and feasibility. Improved BIC-based false alarm compensation (FAC) effectively merges consecutive segments belonging to the same speaker. Our experiments have shown the robustness of the algorithm, and it produces very satisfactory results on difficult mobile-phone-recorded speech data.
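The ΔBIC test at the core of such systems can be sketched for a one-dimensional feature stream, comparing a one-Gaussian model of a window against a two-Gaussian model split at a candidate point. Real systems use multivariate cepstral features and a tuned penalty weight; the reduction to scalar features here is purely illustrative.

```python
# Delta-BIC speaker change test: a positive value means modelling the
# window as two segments (change at t) beats one segment, even after
# paying the model-complexity penalty.
import math
from statistics import pvariance

def delta_bic(frames, t, lam=1.0):
    """frames: list of scalar features; t: candidate change index (0 < t < len)."""
    n, n1, n2 = len(frames), t, len(frames) - t
    full = pvariance(frames) or 1e-12          # guard log(0) on flat segments
    left = pvariance(frames[:t]) or 1e-12
    right = pvariance(frames[t:]) or 1e-12
    penalty = lam * 0.5 * math.log(n)          # complexity penalty for d = 1
    return 0.5 * (n * math.log(full)
                  - n1 * math.log(left)
                  - n2 * math.log(right)) - penalty
```

Raising `lam` suppresses spurious peaks at the risk of missed changes, which is why the paper's automatic adjustment of the penalty parameter matters for robustness.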

Collaboration


Dive into Johannes Peltola's collaborations.

Top Co-Authors

Satu-Marja Mäkelä, VTT Technical Research Centre of Finland
Sari Järvinen, VTT Technical Research Centre of Finland
Janne Vehkaperä, VTT Technical Research Centre of Finland
Olli Vuorinen, VTT Technical Research Centre of Finland
Elena Vildjiounaite, VTT Technical Research Centre of Finland
Janne Lahti, VTT Technical Research Centre of Finland
Jyrki Huusko, VTT Technical Research Centre of Finland
Mikko Myllyniemi, VTT Technical Research Centre of Finland
Onni Ojutkangas, VTT Technical Research Centre of Finland