Publication


Featured research published by Klaus Schoeffmann.


SPIE Reviews | 2010

Video browsing interfaces and applications: a review

Klaus Schoeffmann; Frank Hopfgartner; Oge Marques; Laszlo Boeszoermenyi; Joemon M. Jose

We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) employing video data in the past decade, both for personal and professional use. The ever-growing amount of video content available for human consumption, and the inherent characteristics of video data (which, if presented in its raw format, is rather unwieldy and costly), have become driving forces for the development of more effective solutions to present video content and allow rich user interaction. As a result, there are many contemporary research efforts toward developing better video browsing solutions, which we summarize. We review more than 40 different video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we present a summary of existing work, highlight the technical aspects of each solution, and compare the solutions against each other.


ACM Computing Surveys | 2015

Video Interaction Tools: A Survey of Recent Work

Klaus Schoeffmann; Marco A. Hudelist; Jochen Huber

Digital video enables manifold ways of interacting with multimedia content. Over the last decade, many proposals for improving and enhancing video content interaction were published. More recent work particularly leverages highly capable devices such as smartphones and tablets that embrace novel interaction paradigms, for example touch, gesture-based, or physical content interaction. In this article, we survey literature at the intersection of Human-Computer Interaction and Multimedia. We integrate literature from video browsing and navigation, direct video manipulation, video content visualization, as well as interactive video summarization and interactive video retrieval. We classify the reviewed works by the underlying interaction method and discuss the improvements achieved so far. We also outline a set of open problems that the video interaction community should address in the future.


Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems | 2010

The video explorer: a tool for navigation and searching within a single video based on fast content analysis

Klaus Schoeffmann; Mario Taschwer; Laszlo Boeszoermenyi

We propose a video browsing tool that supports new, efficient navigation means and content-based search within a single video, allowing for interactive exploration and playback of video content. The user interface provides flexible navigation indices by visualizing low-level features and frame surrogates along one or more timelines, called interactive navigation summaries. Because only simple and fast content analysis is applied, navigation summaries can be computed during browsing, enabling the addition, removal, and update of navigation summaries at runtime. Semantically similar video segments are visualized by similar patterns in certain navigation summaries, which enables users to quickly recognize and navigate to potentially similar segments. Moreover, the user can select free-shape regions of video frames, or video segments within navigation summaries, to launch a fast content-based search for frames with similar regions or segments with similar navigation summaries.
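The timeline-plus-search idea in this abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes frames arrive as NumPy RGB arrays, uses the per-frame mean color as the low-level feature, and the names `color_timeline` and `similar_positions` are hypothetical.

```python
import numpy as np

def color_timeline(frames):
    """Build one simple navigation summary: the mean RGB color of each
    frame, yielding one summary column per frame along the timeline."""
    return np.stack([f.reshape(-1, 3).mean(axis=0) for f in frames])

def similar_positions(timeline, query_column, tol=10.0):
    """Content-based search: return timeline positions whose summary
    column is close (Euclidean distance) to a user-selected column."""
    dists = np.linalg.norm(timeline - query_column, axis=1)
    return np.flatnonzero(dists <= tol)
```

Because such a feature is cheap to compute, the summary could plausibly be rebuilt at runtime as the abstract describes, and visually similar segments show up as similar color patterns along the strip.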


IEEE MultiMedia | 2014

A User-Centric Media Retrieval Competition: The Video Browser Showdown 2012-2014

Klaus Schoeffmann

The Video Browser Showdown (VBS) is an international competition in the field of interactive video search and retrieval. It is held annually as a special session at the International Conference on Multimedia Modeling (MMM). The Video Browser Showdown evaluates the performance of exploratory tools for interactive content search in videos in direct competition and in front of an audience. Its goal is to push research on user-centric video search tools, including video navigation, content browsing, content interaction, and video content visualization. This article summarizes the first three VBS competitions (2012-2014).


International Journal of Multimedia Information Retrieval | 2013

The Video Browser Showdown: a live evaluation of interactive video search tools

Klaus Schoeffmann; David Ahlström; Werner Bailer; Claudiu Cobârzan; Frank Hopfgartner; Kevin McGuinness; Cathal Gurrin; Christian Frisson; Duy-Dinh Le; Manfred Del Fabro; Hongliang Bai; Wolfgang Weiss

The Video Browser Showdown evaluates the performance of exploratory video search tools on a common data set, in a common environment, and in the presence of an audience. The main goal of this competition is to enable researchers in the field of interactive video search to directly compare their tools at work. In this paper, we present results from the second Video Browser Showdown (VBS2013) and describe and evaluate the tools of all participating teams in detail. The evaluation results give insights into how exploratory video search tools are used and how they perform in direct comparison. Moreover, we compare the achieved performance to results from another user study in which 16 participants employed a standard video player to complete the same tasks as performed in VBS2013. This comparison shows that the sophisticated tools enable better performance in general, but for some tasks common video players provide similar performance and can even outperform the expert tools. Our results highlight the need for further improvement of professional tools for interactive search in videos.


International Symposium on Multimedia | 2013

Relevance Segmentation of Laparoscopic Videos

Bernd Münzer; Klaus Schoeffmann; Laszlo Böszörmenyi

In recent years, it has become common to record video footage of laparoscopic surgeries. This leads to large video archives that are very hard to manage, as they often contain a considerable portion of completely irrelevant scenes that waste storage capacity and hamper efficient retrieval of relevant scenes. In this paper we (1) define three classes of irrelevant segments, (2) propose visual feature extraction methods to obtain irrelevance indicators for each class, and (3) present an extensible framework to detect irrelevant segments in laparoscopic videos. The framework includes a training component that learns a prediction model using nonlinear regression with a generalized logistic function, and a segment composition algorithm that derives segment boundaries from the fuzzy frame classifications. The experimental results show that our method performs very well both for the classification of individual frames and for the detection of segment boundaries in videos, and enables considerable storage space savings.
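The two components named in the abstract can be sketched as follows. This is a minimal illustrative sketch under assumed parameter names, not the paper's trained model: the generalized logistic function maps an irrelevance indicator to a fuzzy score in (0, 1), and a simple scan derives segment boundaries from thresholded per-frame scores.

```python
import math

def generalized_logistic(x, b=1.0, m=0.0):
    """Map an irrelevance indicator x to a fuzzy score in (0, 1);
    b controls steepness, m the inflection point (assumed parameters)."""
    return 1.0 / (1.0 + math.exp(-b * (x - m)))

def compose_segments(scores, threshold=0.5, min_len=5):
    """Derive irrelevant-segment boundaries [start, end) from per-frame
    fuzzy scores by thresholding and dropping runs shorter than min_len."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # run of irrelevant frames begins
        elif s < threshold and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None                   # run ends
    if start is not None and len(scores) - start >= min_len:
        segments.append((start, len(scores)))
    return segments
```

Dropping short runs reflects the intuition that a handful of isolated frames rarely forms a meaningful segment; the paper's actual composition algorithm may differ.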


Multimedia Tools and Applications | 2014

Multimedia modeling

Chong-Wah Ngo; Klaus Schoeffmann; Yiannis Andreopoulos; Christian Breiteneder

Multimedia modeling aims to study computational models for addressing real-world multimedia problems from various perspectives, including information fusion, perceptual understanding, performance evaluation, and social media. The topic becomes increasingly important with the massive amount of data available over the Internet, representing different pieces of information in heterogeneous forms that need to be consolidated before being used for multimedia problems. On the other hand, advances in technologies such as mobile and sensing devices drive the need to revisit existing models, not only for dealing with audio-visual cues but also for incorporating various sensory modalities that can potentially provide cheaper and simpler solutions. The selected papers in this special issue were extended by their authors to a journal version and then went through a rigorous review process that included at least three anonymous referees. The first paper, entitled "Multimedia Classification and Event Detection Using Double Fusion" and co-authored by Zhen-zhong Lan, Lei Bao, Shoou-I Yu, Wei Liu, and Alexander G. Hauptmann from Carnegie Mellon University, investigates the issue of information fusion for generic and complex event detection. Detecting events such as "making a sandwich" requires modeling large sources of information from audio-visual, textual, and concept detection features, so the issue of fusing diverse features comes into the picture. The paper proposes a double fusion approach to this problem.


Multimedia Tools and Applications | 2017

Interactive video search tools: a detailed analysis of the video browser showdown 2015

Claudiu Cobârzan; Klaus Schoeffmann; Werner Bailer; Adam Blažek; Jakub Lokoč; Stefanos Vrochidis; Kai Uwe Barthel; Luca Rossetto

Interactive video retrieval tools developed over the past few years are emerging as powerful alternatives to automatic retrieval approaches, giving the user more control as well as more responsibility. Current research tries to identify the combinations of image, audio, and text features that, combined with innovative UI design, maximize the tools' performance. We present the 2015 installment of the Video Browser Showdown, which was held in conjunction with the International Conference on MultiMedia Modeling 2015 (MMM 2015) and has the stated aim of pushing for a better integration of the user into the search process. We introduce the setup of the competition, including the dataset used, the presented tasks, and the participating tools. The performance of those tools is thoroughly presented and analyzed, interesting highlights are marked, and some predictions are made regarding the research focus within the field in the near future.


Multimedia Tools and Applications | 2015

Keyframe extraction in endoscopic video

Klaus Schoeffmann; Manfred Del Fabro; Tibor Szkaliczki; Laszlo Böszörmenyi; Jörg Keckstein

In medical endoscopy, more and more surgeons archive the recorded video streams in long-term storage. One reason for this development, which is enforced by law in some countries, is to have evidence in case of lawsuits from patients. Another, more practical reason is to allow later inspection of previous procedures and to use parts of such videos for research and training. However, due to the enormous amount of video data recorded in a hospital on a daily basis, it is very important to have good preview images for these videos in order to allow quick filtering of undesired content and easier browsing through such a video archive. Unfortunately, common shot detection and keyframe extraction methods cannot be used for this video data, because these videos contain unedited and highly similar content, especially in terms of color and texture, and no shot boundaries at all. We propose a new keyframe extraction approach for this special video domain and show that our method is significantly better than a previously proposed approach.


International Conference on Multimedia and Expo | 2009

Visualization of video motion in context of video browsing

Klaus Schoeffmann; Mathias Lux; Mario Taschwer; Laszlo Boeszoermenyi

We present a new approach for video browsing using visualization of motion direction and motion intensity statistics by color and brightness variations. Statistics are collected from motion vectors of H.264/AVC encoded video streams, so full video decoding is not required. By interpreting visualized motion patterns of video segments, users are able to quickly identify scenes similar to a prototype scene or identify potential scenes of interest. We give some examples of motion patterns with different semantic value, including camera zooms, hill jumps of ski-jumpers, and the repeated appearance of a news speaker. In a user study we show that certain scenes of interest can be found significantly faster using our video browsing tool than using a video player with VCR-like controls.
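The direction-to-color mapping described above can be sketched as follows. This is an illustrative guess at one plausible mapping (dominant direction to hue, mean magnitude to brightness), not the paper's exact scheme; `motion_cell_color` and the magnitude cap of 16 are assumptions.

```python
import colorsys
import math

def motion_cell_color(motion_vectors):
    """Summarize the (dx, dy) motion vectors of a video segment as one
    RGB color: dominant direction -> hue, mean magnitude -> brightness."""
    if not motion_vectors:
        return (0.0, 0.0, 0.0)                 # no motion: black
    sx = sum(dx for dx, _ in motion_vectors)
    sy = sum(dy for _, dy in motion_vectors)
    angle = math.atan2(sy, sx)                 # dominant direction, [-pi, pi]
    hue = (angle + math.pi) / (2.0 * math.pi)  # map direction to [0, 1)
    mean_mag = sum(math.hypot(dx, dy) for dx, dy in motion_vectors) / len(motion_vectors)
    value = min(1.0, mean_mag / 16.0)          # assumed cap for full brightness
    return colorsys.hsv_to_rgb(hue, 1.0, value)
```

Rendering one such color per segment along a timeline would make recurring motion (e.g., a repeatedly appearing news speaker) show up as repeated color patterns the user can spot at a glance.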

Collaboration

Top co-authors of Klaus Schoeffmann:

Bernd Münzer (Alpen-Adria-Universität Klagenfurt)
Manfred Jürgen Primus (Alpen-Adria-Universität Klagenfurt)
Laszlo Böszörmenyi (Alpen-Adria-Universität Klagenfurt)
Andreas Leibetseder (Alpen-Adria-Universität Klagenfurt)
Sabrina Kletz (Alpen-Adria-Universität Klagenfurt)
David Ahlström (Alpen-Adria-Universität Klagenfurt)
Manfred Del Fabro (Alpen-Adria-Universität Klagenfurt)
Claudiu Cobârzan (Alpen-Adria-Universität Klagenfurt)
Stefan Petscharnig (Alpen-Adria-Universität Klagenfurt)