
Publication


Featured research published by Mika Rautiainen.


International Conference on Pattern Recognition | 2002

Temporal color correlograms for video retrieval

Mika Rautiainen; David S. Doermann

This paper presents a novel method to retrieve segmented video shots based on their color content. The temporal color correlogram captures the spatiotemporal relationships of colors in a video shot using co-occurrence statistics. It extends the HSV color correlogram, which has been found to be very effective in content-based image retrieval, by computing the autocorrelation of quantized HSV color values over a set of frame samples taken from a video shot. The efficiency of the temporal color correlogram and the HSV color correlogram is evaluated against other retrieval systems participating in the TREC video track evaluation and against the color histograms commonly used in content-based retrieval. We used queries and relevance judgments on the 11 hours of segmented MPEG-1 video provided to track participants. Tests were executed using our content-based multimedia retrieval system, which was developed specifically for multimedia information retrieval applications.
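
As a rough illustration of the idea, the sketch below estimates, for each quantized HSV color and temporal offset, the probability that a pixel keeps its color between sampled frames. The bin counts, offsets, and same-position matching rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def quantize_hsv(frame_hsv, h_bins=16, s_bins=4, v_bins=4):
    """Map an HSV frame (H in [0,180), S and V in [0,256)) to one bin
    index per pixel. Bin counts are illustrative assumptions."""
    h = (frame_hsv[..., 0].astype(int) * h_bins) // 180
    s = (frame_hsv[..., 1].astype(int) * s_bins) // 256
    v = (frame_hsv[..., 2].astype(int) * v_bins) // 256
    return (h * s_bins + s) * v_bins + v

def temporal_color_autocorrelogram(frames_hsv, offsets=(1, 2, 4)):
    """For each color bin c and temporal offset d, estimate the probability
    that a pixel with color c in frame t still has color c in frame t+d."""
    quant = [quantize_hsv(f) for f in frames_hsv]
    n_bins = 16 * 4 * 4
    feat = np.zeros((len(offsets), n_bins))
    for i, d in enumerate(offsets):
        match_counts = np.zeros(n_bins)
        color_counts = np.zeros(n_bins) + 1e-9   # avoid division by zero
        for t in range(len(quant) - d):
            a, b = quant[t], quant[t + d]
            color_counts += np.bincount(a.ravel(), minlength=n_bins)
            same = a == b
            match_counts += np.bincount(a[same].ravel(), minlength=n_bins)
        feat[i] = match_counts / color_counts
    return feat.ravel()   # descriptor, compared with e.g. L1 distance
```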


International Conference on Multimedia and Expo | 2004

Cluster-temporal browsing of large news video databases

Mika Rautiainen; Timo Ojala; Tapio Seppänen

The paper describes cluster-temporal browsing of news video databases. Cluster-temporal browsing combines content similarities and temporal adjacency into a single representation. Visual, conceptual, and lexical features are used to organize and view similar shot content. Interactive experiments with eight test users were carried out using a database of roughly 60 hours of news video. The results indicate improvements in browsing efficiency when automatic speech recognition transcripts are incorporated into browsing by visual similarity. The cluster-temporal browsing application received positive comments from the test users and performed well in an overall comparison with interactive video retrieval systems in the TRECVID 2003 evaluation.
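
One plausible reading of the combined representation is a pairwise shot dissimilarity that mixes visual distance with temporal adjacency. The sketch below assumes a simple linear mix with an illustrative weight alpha; it is not claimed to be the paper's exact formulation.

```python
import numpy as np

def combined_distance(features, times, alpha=0.5):
    """Pairwise dissimilarity mixing visual distance and temporal distance.
    alpha and the linear mixing rule are illustrative assumptions."""
    f = np.asarray(features, dtype=float)   # one feature vector per shot
    t = np.asarray(times, dtype=float)      # one timestamp per shot
    # L1 visual distance between all shot pairs, normalized to [0, 1]
    vis = np.abs(f[:, None, :] - f[None, :, :]).sum(axis=2)
    vis /= vis.max() + 1e-9
    # temporal distance between all shot pairs, normalized to [0, 1]
    tmp = np.abs(t[:, None] - t[None, :])
    tmp /= tmp.max() + 1e-9
    return alpha * vis + (1 - alpha) * tmp
```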


IEEE MultiMedia | 2009

Digital Television for Mobile Devices

Jiehan Zhou; Zhonghong Ou; Mika Rautiainen; Timo Koskela; Mika Ylianttila

This survey of mobile television technologies analyzes the technical characteristics of each mobile TV solution, discusses specifications and standards, and presents possible future developments.


Multimedia Information Retrieval | 2004

Analysing the performance of visual, concept and text features in content-based video retrieval

Mika Rautiainen; Timo Ojala; Tapio Seppänen

This paper describes revised content-based search experiments in the context of the TRECVID 2003 benchmark. The experiments focus on measuring content-based video retrieval performance with the following search cues: visual features, semantic concepts, and text. Feature fusion uses weights and similarity ranks. Visual similarity is computed using Temporal Gradient Correlogram and Temporal Color Correlogram features, which are extracted from the dynamic content of a video shot. Automatic speech recognition transcripts and concept detectors enable higher-level semantic searching. Sixty hours of news video from the TRECVID 2003 search task were used in the experiments. System performance was evaluated with 25 pre-defined search topics using average precision. In visual search, multiple examples improved the results over single-example search. Weighted fusion of text, concept, and visual features improved performance over the text search baseline, and an expanded query term list also gave a notable increase in performance over baseline text search.
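
The weighted, rank-based fusion could look roughly like the sketch below, which combines per-feature result rankings by a weighted sum of ranks. The linear combination rule and the example weights are assumptions for illustration, not the paper's exact method.

```python
def weighted_rank_fusion(rank_lists, weights):
    """Fuse per-feature result rankings by a weighted sum of ranks.
    rank_lists: list of orderings of shot ids, best first.
    Lower combined rank score means a better fused position."""
    scores = {}
    for ranking, w in zip(rank_lists, weights):
        for rank, shot_id in enumerate(ranking):
            scores[shot_id] = scores.get(shot_id, 0.0) + w * rank
    return sorted(scores, key=scores.get)

# Example: fuse text, concept, and visual rankings, weighting text highest.
fused = weighted_rank_fusion(
    [["s3", "s1", "s2"], ["s1", "s3", "s2"], ["s2", "s1", "s3"]],
    weights=[0.5, 0.3, 0.2],
)
```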


ACM Symposium on Applied Computing | 2010

Semantics for intelligent delivery of multimedia content

Ioan Marius Bilasco; Samir Amir; Patrick Blandin; Chabane Djeraba; Juhani Laitakari; Jean Martinet; Eduardo Martínez Graciá; Daniel Pakkala; Mika Rautiainen; Mika Ylianttila; Jiehan Zhou

This paper describes a new generic metadata model, the CAM Metamodel, that merges information about content, services, and the physical and technical environment in order to enable homogeneous delivery and consumption of content. We introduce a metadata model that covers all of these aspects and can easily be extended to absorb new types of models and standards. We ensure this flexibility by introducing an abstract metamodel, which defines structured archetypes for metadata and metadata containers. The metamodel is the foundation for the technical metadata specification. We also introduce new structures in the abstract and core metamodels to support the management of distributed, community-created metadata.
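
Purely as a hypothetical rendering of the archetype idea (not the actual CAM specification), nestable metadata containers might be modeled as follows; all names here are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetadataItem:
    """Generic archetype for one piece of metadata: a name/value pair
    plus the scheme (standard) the value comes from."""
    name: str
    value: str
    scheme: str = ""

@dataclass
class MetadataContainer:
    """Generic archetype for a metadata container. Containers can nest,
    which is one way new models and standards could be absorbed."""
    kind: str                                   # e.g. "content", "service", "environment"
    items: List[MetadataItem] = field(default_factory=list)
    children: List["MetadataContainer"] = field(default_factory=list)
```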


International Conference on Big Data | 2015

Low latency analytics for streaming traffic data with Apache Spark

Altti Ilari Maarala; Mika Rautiainen; Miikka Salmi; Susanna Pirttikangas; Jukka Riekki

Demand for new, efficient methods for processing large-scale heterogeneous data in real time is growing. One key challenge in Big Data is performing low-latency analysis on real-time data. In vehicle traffic, continuous high-speed data streams generate large data volumes, and harnessing new technologies is required to benefit from all the potential this data holds. This work surveys the state of the art in distributed and parallel computing, storage, query, and ingestion methods, and evaluates tools for periodic and real-time analysis of heterogeneous data. We also introduce a Big Data cloud platform with ingestion, analysis, storage, and data query APIs that provides a programmable environment for developing and evaluating analytics systems.
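
A minimal sketch of the kind of low-latency pipeline evaluated here, using the Spark Streaming DStream API that was current in 2015. The socket source and the "segment_id,timestamp,speed_kmh" record format are invented for illustration; the paper's actual data sources and schema are not reproduced.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="TrafficStreamDemo")
ssc = StreamingContext(sc, batchDuration=5)   # 5-second micro-batches

# Hypothetical input: CSV lines "segment_id,timestamp,speed_kmh" on a socket.
lines = ssc.socketTextStream("localhost", 9999)

def parse(line):
    segment, ts, speed = line.split(",")
    return segment, float(speed)

# Average speed per road segment in each micro-batch.
avg_speed = (lines.map(parse)
                  .mapValues(lambda v: (v, 1))
                  .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                  .mapValues(lambda s: s[0] / s[1]))
avg_speed.pprint()

ssc.start()
ssc.awaitTermination()
```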


Conference on Image and Video Retrieval | 2003

Detecting semantic concepts from video using temporal gradients and audio classification

Mika Rautiainen; Tapio Seppänen; Jani Penttilä; Johannes Peltola

In this paper we describe new methods to detect semantic concepts from digital video based on audible and visual content. The Temporal Gradient Correlogram captures temporal correlations of gradient edge directions from sampled shot frames. Power-related physical features are extracted from short audio samples in video shots. Video shots containing people, cityscape, landscape, speech, or instrumental sound are detected with trained self-organizing maps and kNN classification of audio samples. Test runs and evaluations in the TREC 2002 Video Track show consistent performance for the Temporal Gradient Correlogram and state-of-the-art precision in audio-based instrumental sound detection.
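
A toy version of the audio side might extract per-frame power features and classify them with kNN, as sketched below. The feature definition, the 16 kHz sample rate, and the scikit-learn classifier are stand-ins, and the paper's self-organizing map stage is omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def power_features(signal, sr, frame_ms=20, n_frames=50):
    """Short-term power features: per-frame log RMS energy.
    A stand-in for the paper's power-related physical features.
    Assumes the clip is at least frame_ms * n_frames long."""
    frame_len = int(sr * frame_ms / 1000)
    frames = signal[: frame_len * n_frames].reshape(-1, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.log(rms + 1e-9)   # log-compress the dynamic range

# Hypothetical usage with annotated clips (labels like "speech" or
# "instrumental"); the training data is assumed, so this stays commented:
# X_train = np.stack([power_features(c, 16000) for c in train_clips])
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, train_labels)
# label = knn.predict(power_features(test_clip, 16000)[None, :])
```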


Personal, Indoor and Mobile Radio Communications | 2013

Distributed multimedia content analysis with MapReduce

Arto Heikkinen; Jouni Sarvanko; Mika Rautiainen; Mika Ylianttila

This paper introduces a scalable solution for distributing content-based video analysis tasks using the emerging MapReduce programming model. Scalable and efficient solutions are needed for such tasks, as the amount of multimedia content is growing at an increasing rate. We present a novel implementation utilizing the popular Apache Hadoop MapReduce framework for both analysis job scheduling and video data distribution. We employ face detection as a case example because it represents a popular visual content analysis task. The main contribution of this paper is a performance evaluation of distribution models for video content processing in various configurations. In our experiments, we compared the performance of our video data distribution method against two alternative solutions on a seven-node cluster. Hadoop's performance overhead in video content analysis was also evaluated. We found Hadoop to be a data-efficient solution with minimal computational overhead for the face detection task.
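
For illustration only, a face detection mapper in the style of Hadoop Streaming might look as follows. The paper's own implementation distributes the video data through Hadoop itself rather than reading image paths, so this is an analogue of the case example, not the authors' code.

```python
#!/usr/bin/env python
"""Hadoop Streaming mapper: reads video frame image paths from stdin,
runs OpenCV Haar-cascade face detection, and emits "path<TAB>face_count"."""
import sys
import cv2

# Bundled OpenCV frontal-face cascade, located via cv2.data.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for line in sys.stdin:
    path = line.strip()
    if not path:
        continue
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue   # skip unreadable frames
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    print("%s\t%d" % (path, len(faces)))
```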


International Conference on Multimedia and Expo | 2005

Comparison of Visual Features and Fusion Techniques in Automatic Detection of Concepts from News Video

Mika Rautiainen; Tapio Seppänen

This study describes experiments on the automatic detection of semantic concepts, which are textual descriptions of digital video content. The concepts can be further used for content-based categorization and access of digital video repositories. Temporal gradient correlogram, temporal color correlogram, and motion activity low-level features are extracted from the dynamic visual content of a video shot. Semantic concepts are detected with an expeditious method based on the selection of small positive example sets and computed low-level feature similarities between video shots. Detectors using several feature and fusion operator configurations are tested on a 60-hour news video database from the TRECVID 2003 benchmark. Results show that feature fusion based on ranked lists gives better detection performance than fusion of normalized low-level feature space distances. The best performance was obtained by pre-validating the configurations of features and rank fusion operators. Results also show that minimum rank fusion of temporal color and structure features provides comparable performance.
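
Minimum rank fusion, mentioned in the results, can be sketched in a few lines: each shot is scored by the best (lowest) rank it achieves in any single feature's result list. The variable names are illustrative.

```python
def min_rank_fusion(rank_lists):
    """MIN-rank fusion of result lists: a shot's fused score is the best
    rank it reaches in any single feature's ranking, favouring shots that
    any one feature considers a strong match."""
    best = {}
    for ranking in rank_lists:
        for rank, shot_id in enumerate(ranking):
            best[shot_id] = min(best.get(shot_id, rank), rank)
    return sorted(best, key=best.get)

# e.g. fuse temporal color and temporal gradient correlogram rankings:
# fused = min_rank_fusion([color_ranking, structure_ranking])
```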


Archive | 2012

Grid and Pervasive Computing Workshops

Mika Rautiainen; Timo Korhonen; Edward Mutafungwa; Eila Ovaska; Artem Katasonov; Antti Evesti; Heikki Ailisto; Aaron J. Quigley; Jonna Häkkilä; Natasa Milic-Frayling; Jukka Riekki


Collaboration


Dive into Mika Rautiainen's collaborations.

Top Co-Authors

Artem Katasonov

VTT Technical Research Centre of Finland
