Publications


Featured research published by Petra Galuščáková.


International Conference on Multimedia Retrieval | 2013

Multimedia information seeking through search and hyperlinking

Maria Eskevich; Gareth J. F. Jones; Robin Aly; Roeland Ordelman; Shu Chen; Danish Nadeem; Camille Guinaudeau; Guillaume Gravier; Pascale Sébillot; Tom De Nies; Pedro Debevere; Rik Van de Walle; Petra Galuščáková; Pavel Pecina; Martha Larson

Searching for relevant webpages and following hyperlinks to related content is a widely accepted and effective approach to information seeking on the textual web. Existing work on multimedia information retrieval has focused on search for individual relevant items or on content linking without specific attention to search results. We describe our research exploring integrated multimodal search and hyperlinking for multimedia data. Our investigation is based on the MediaEval 2012 Search and Hyperlinking task, which includes a known-item search task using the Blip10000 internet video collection, where automatically created hyperlinks link each relevant item to related items within the collection. The search test queries and link assessments for this task were generated using the Amazon Mechanical Turk crowdsourcing platform. Our investigation examines a range of alternative methods that seek to address the challenges of search and hyperlinking using multimodal approaches. The results of our experiments are used to propose a research agenda for developing effective techniques for search and hyperlinking of multimedia content.
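
As a minimal illustration of the segment-level text search underlying such a task, the sketch below ranks transcript segments against a query with BM25. The segment texts, the query, and the use of the rank_bm25 package are illustrative assumptions, not the systems evaluated in the paper.

```python
# Minimal sketch: known-item search over transcript segments with BM25.
# Requires the rank-bm25 package (pip install rank-bm25).
from rank_bm25 import BM25Okapi

# Each "document" is one time-bounded transcript segment of a video.
segments = [
    {"video": "v01", "start": 0.0,  "text": "welcome to the cooking show"},
    {"video": "v01", "start": 45.0, "text": "how to bake sourdough bread"},
    {"video": "v02", "start": 0.0,  "text": "introduction to video retrieval"},
]

tokenized = [s["text"].split() for s in segments]
bm25 = BM25Okapi(tokenized)

query = "bake bread".split()
scores = bm25.get_scores(query)

# Rank segments by relevance; in known-item search the top hit is the target.
ranking = sorted(zip(scores, segments), key=lambda p: p[0], reverse=True)
for score, seg in ranking:
    print(f"{seg['video']} @ {seg['start']:>5.1f}s  score={score:.3f}")
```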


International Conference on Multimedia Retrieval | 2017

Visual Descriptors in Methods for Video Hyperlinking

Petra Galuščáková; Michal Batko; Jan Čech; David Novák; Pavel Pecina

In this paper, we survey different state-of-the-art visual processing methods and utilize them in hyperlinking. Visual information, computed using feature signatures, SIMILE descriptors, and convolutional neural networks (CNNs), serves as a measure of similarity between video frames and is used to find similar faces, objects, and settings. Visual concepts in frames are also automatically recognized, and the textual output of the recognition is combined with search based on subtitles and transcripts. All presented experiments were performed on the MediaEval 2014 Search and Hyperlinking task and the TRECVid 2015 Video Hyperlinking task.
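
As a rough sketch of how CNN-derived visual similarity can drive hyperlinking, the snippet below ranks candidate keyframes by cosine similarity to an anchor frame. The random vectors stand in for real CNN embeddings; the paper's actual descriptors (feature signatures, SIMILE, CNN features) are not reproduced here.

```python
# Minimal sketch: ranking candidate frames by visual similarity, assuming
# CNN embeddings have already been extracted for every keyframe.
import numpy as np

rng = np.random.default_rng(0)
anchor_vec = rng.normal(size=512)          # embedding of the anchor frame
candidates = rng.normal(size=(1000, 512))  # embeddings of candidate frames

def cosine_sim(a, b):
    """Cosine similarity between one vector and a matrix of vectors."""
    return (b @ a) / (np.linalg.norm(b, axis=1) * np.linalg.norm(a))

sims = cosine_sim(anchor_vec, candidates)
top = np.argsort(-sims)[:10]   # indices of the 10 most similar frames
print(top, sims[top])
```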


European Conference on Information Retrieval | 2016

SHAMUS: UFAL Search and Hyperlinking Multimedia System

Petra Galuščáková; Shadi Saleh; Pavel Pecina

In this paper, we describe SHAMUS, our system for easy search of and navigation in multimedia archives. The system consists of three components: the Search component provides text-based search in a multimedia collection, the Anchoring component determines the most important segments of videos, and the Hyperlinking component retrieves segments topically related to the anchoring ones. We describe each component of the system as well as the online demo interface (http://ufal.mff.cuni.cz/shamus), which currently works with a collection of TED talks.
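
The paper does not publish SHAMUS's internals, so the following is only a hypothetical sketch of how the three components might compose into a pipeline; every function body is a placeholder, not the system's actual logic.

```python
# Hypothetical sketch of the three-stage pipeline described above.

def search(query, collection):
    """Search component: return videos whose transcripts match the query."""
    return [v for v in collection if query.lower() in v["transcript"].lower()]

def anchoring(video):
    """Anchoring component: pick the most important segments of a video.
    Placeholder: naively returns the first 60 seconds."""
    return [{"video": video["id"], "start": 0.0, "end": 60.0}]

def hyperlinking(anchor, collection):
    """Hyperlinking component: retrieve segments topically related to an
    anchor. Placeholder relatedness model."""
    return search("related", collection)

collection = [{"id": "ted_001", "transcript": "a talk about related ideas"}]
for video in search("talk", collection):
    for anchor in anchoring(video):
        links = hyperlinking(anchor, collection)
        print(video["id"], anchor, len(links), "links")
```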


ACM Multimedia | 2015

Audio Information for Hyperlinking of TV Content

Petra Galuščáková; Pavel Pecina

In this paper, we explore the use of audio information in the retrieval of multimedia content. Specifically, we focus on linking similar segments in a collection of 4,000 hours of BBC TV programmes. We describe our system submitted to the Hyperlinking sub-task of the Search and Hyperlinking task in the MediaEval 2014 benchmark, in which it scored best. We examine three automatic transcripts and compare them to the available subtitles, confirming the relationship between retrieval performance and transcript quality. Retrieval performance is further improved by extending the transcripts with metadata and context, combining different transcripts, indexing only the words with the highest confidence scores, and exploiting acoustic similarity.
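
One of the techniques mentioned, keeping only the transcript words the ASR system is most confident about, can be sketched as a simple confidence filter. The word/score pairs and the 0.9 threshold below are invented for illustration, not values from the paper.

```python
# Minimal sketch: index only high-confidence words from an ASR transcript.
transcript = [
    ("the", 0.98), ("weathr", 0.41), ("forecast", 0.93),
    ("for", 0.97), ("tonite", 0.35), ("london", 0.91),
]

CONF_THRESHOLD = 0.9  # assumed cutoff; likely misrecognized words fall below

indexed_text = " ".join(w for w, conf in transcript if conf >= CONF_THRESHOLD)
print(indexed_text)   # -> "the forecast for london"
```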


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2013

Segmentation strategies for passage retrieval in audio-visual documents

Petra Galuščáková

The importance of Information Retrieval (IR) in audio-visual recordings has been increasing with the steeply growing number of audio-visual documents available online. Compared to traditional IR methods, this task requires specific techniques, such as Passage Retrieval, which can accelerate the search process by retrieving the exact relevant passage of a recording instead of the full document. In Passage Retrieval, full recordings are divided into shorter segments which serve as individual documents for the subsequent IR setup. This technique also allows normalizing document length and applying positional information, and it has been shown to improve retrieval results.

In this work, we examine two general strategies for Passage Retrieval: blind segmentation into overlapping regular-length passages, and segmentation into variable-length passages based on the semantics of their content. Time-based segmentation has already been shown to improve retrieval of textual documents and audio-visual recordings. Our experiments, performed on the test collection used in the Search sub-task of the Search and Hyperlinking task in the MediaEval 2012 benchmark, confirm those findings and show that tuning the parameters (segment length and shift) for a specific test collection can further improve the results. Our best results on this collection were achieved using 45-second segments with 15-second shifts.

Semantic-based segmentation can be divided into three types: similarity-based (producing segments with high intra-similarity and low inter-similarity), lexical-chain-based (producing segments with frequent lexically connected words), and feature-based (combining various features which signal a segment break in a machine-learning setting). In this work, we mainly focus on feature-based segmentation, which allows exploiting features from all modalities of the data (including segment length) in a single trainable model and produces segments which may eventually overlap. Our preliminary results show that even simple semantic-based segmentation outperforms regular segmentation. Our model is a decision tree incorporating the following features: shot segments, the output of the TextTiling algorithm, cue words (well, thanks, so, I, now), sentence breaks, and the length of the silence after the previous word. In terms of Mean Average Segment Precision (MASP), the relative improvement over regular segmentation is more than 19%.
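
The regular (blind) segmentation strategy is straightforward to sketch as a sliding window over the recording timeline. The 45-second length and 15-second shift below are the best-performing values reported in the abstract; the function itself is only an illustration.

```python
# Minimal sketch: blind segmentation into overlapping regular-length passages.

def regular_segments(duration, length=45.0, shift=15.0):
    """Yield (start, end) passages, in seconds, covering a recording of
    `duration` seconds; consecutive passages overlap by (length - shift)."""
    start = 0.0
    while start < duration:
        yield (start, min(start + length, duration))
        start += shift

for seg in regular_segments(120.0):
    print(f"{seg[0]:6.1f}s - {seg[1]:6.1f}s")
```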


MediaEval | 2014

CUNI at MediaEval 2014 Search and Hyperlinking Task: Visual and Prosodic Features in Hyperlinking

Petra Galuščáková; Martin Kruliš; Jakub Lokoč; Pavel Pecina


MediaEval | 2014

CUNI at MediaEval 2014 Search and Hyperlinking Task: Search Task Experiments

Petra Galuščáková; Pavel Pecina


International Conference on Multimedia Retrieval | 2014

Experiments with Segmentation Strategies for Passage Retrieval in Audio-Visual Documents

Petra Galuščáková; Pavel Pecina


Cross-Language Evaluation Forum | 2012

Penalty functions for evaluation measures of unsegmented speech retrieval

Petra Galuščáková; Pavel Pecina; Jan Hajič


Baltic HLT | 2012

Improving SMT by Using Parallel Data of a Closely Related Language

Petra Galuščáková; Ondřej Bojar

Collaboration


Dive into Petra Galuščáková's collaborations.

Top Co-Authors

Pavel Pecina (Charles University in Prague)
Ondřej Bojar (Charles University in Prague)
Jan Hajič (Charles University in Prague)
Aleš Tamchyna (Charles University in Prague)
David Mareček (Charles University in Prague)
Jakub Lokoč (Charles University in Prague)
Jan Čech (Czech Technical University in Prague)
Jan Švec (University of West Bohemia)
Jiří Maršík (Charles University in Prague)