Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Sabrina Kletz is active.

Publications


Featured research published by Sabrina Kletz.


Conference on Multimedia Modeling | 2016

Collaborative Video Search Combining Video Retrieval with Human-Based Visual Inspection

Marco A. Hudelist; Claudiu Cobârzan; Christian Beecks; Rob van de Werken; Sabrina Kletz; Klaus Schoeffmann

We propose a novel video browsing approach that aims at optimally integrating traditional, machine-based retrieval methods with an interface design optimized for human browsing performance. Advanced video retrieval and filtering (e.g., via color and motion signatures, and visual concepts) on a desktop is combined with a storyboard-based interface design on a tablet optimized for quick, brute-force visual inspection. Both modules run independently but exchange information to significantly minimize the data for visual inspection and compensate for mistakes made by the search algorithms.


IEEE International Conference on Multimedia Big Data | 2017

A Tool to Support Surgical Quality Assessment

Marco A. Hudelist; Heinrich Husslein; Bernd Münzer; Sabrina Kletz; Klaus Schoeffmann

In the domain of medical endoscopy, an increasing number of surgeons nowadays store video recordings of their interventions in a huge video archive. Among other purposes, the videos are used for post-hoc surgical quality assessment, since objective assessment of surgical procedures has been identified as an essential component for the improvement of surgical quality. Currently, such assessment is performed manually and for selected procedures only, since the amount of data and the cumbersome interaction make it very time-consuming. In the future, quality assessment should be carried out comprehensively and systematically by means of automated assessment algorithms. In this demo paper, we present a tool that supports human assessors in collecting manual annotations and thereby helps them deal with the huge amount of visual data more efficiently. These annotations will be analyzed and used as training data in the future.


ACM SIGMM Conference on Multimedia Systems | 2018

LapGyn4: A Dataset for 4 Automatic Content Analysis Problems in the Domain of Laparoscopic Gynecology

Andreas Leibetseder; Stefan Petscharnig; Manfred Jürgen Primus; Sabrina Kletz; Bernd Münzer; Klaus Schoeffmann; Jörg Keckstein

Modern imaging technology enables medical practitioners to perform minimally invasive surgery (MIS), i.e. a variety of medical interventions inflicting minimal trauma upon patients and hence greatly improving their recovery. Not only patients but also surgeons can benefit from this technology, as recorded media can be utilized for speeding up tedious and time-consuming tasks such as treatment planning or case documentation. In order to improve the predominantly manually conducted process of analyzing said media, with this work we publish four datasets extracted from gynecologic, laparoscopic interventions with the intent of encouraging research in the field of post-surgical automatic media analysis. These datasets are designed with the following use cases in mind: medical image retrieval based on a query image; detection of instrument counts, surgical actions and anatomical structures; as well as distinguishing on which anatomical structure a certain action is performed. Furthermore, we provide suggestions for evaluation metrics and first baseline experiments.


ACM Multimedia | 2016

A Tablet Annotation Tool for Endoscopic Videos

Marco A. Hudelist; Sabrina Kletz; Klaus Schoeffmann

We present a tool for mobile browsing and annotation tailored to endoscopic videos. Professional users can utilize this tablet app for patient debriefings or educational purposes. It supports text input, free-hand and shape drawing, as well as angle measurements, e.g. for comparing instrument orientation. It is possible to annotate single frames as well as user-defined video sections. Moreover, it provides easy and efficient navigation via a zoomable navigation bar that is based on frame stripes. Frame stripes are created by extracting a single, one-pixel-wide vertical stripe from every keyframe. The stripes are then arranged next to each other to form a uniform bar. This gives users a good overview of the content of a given video. Furthermore, the app supports the creation of custom reports based on the entered annotations, which can be directly mailed or printed for further usage.
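The frame-stripe construction described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes keyframes are already available as RGB numpy arrays, and the choice of the centre column is an assumption of this sketch.

```python
import numpy as np

def frame_stripes(keyframes, column=None):
    """Build a navigation bar by taking a single one-pixel-wide vertical
    stripe from every keyframe and placing the stripes side by side."""
    stripes = []
    for frame in keyframes:
        h, w, _ = frame.shape
        x = w // 2 if column is None else column  # centre column by default
        stripes.append(frame[:, x:x + 1, :])      # shape (h, 1, 3)
    return np.concatenate(stripes, axis=1)        # shape (h, n_frames, 3)

# Example: 40 synthetic 64x48 keyframes yield a 64x40 navigation bar,
# one column per keyframe.
frames = [np.full((64, 48, 3), i, dtype=np.uint8) for i in range(40)]
bar = frame_stripes(frames)
```

Because each keyframe contributes exactly one column, the bar's width equals the number of keyframes, which is what makes it usable as a compact, zoomable timeline.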


Proceedings of the 2018 ACM Workshop on the Lifelog Search Challenge | 2018

lifeXplore at the Lifelog Search Challenge 2018

Bernd Münzer; Andreas Leibetseder; Sabrina Kletz; Manfred Jürgen Primus; Klaus Schoeffmann

With the growing popularity of wearable devices that record biometric data comes the readiness to capture and combine even more personal information in the form of a digital diary: lifelogging today is practiced ever more widely and can be categorized anywhere between an informative hobby and a life-changing experience. From an information-processing point of view, analyzing the entirety of such multi-source data is immensely challenging, which is why the first Lifelog Search Challenge 2018 competition has been brought into being to encourage the development of efficient interactive data retrieval systems. Answering this call, we present a retrieval system based on our video search system diveXplore, which has successfully been used in the Video Browser Showdown 2017 and 2018. Due to the different task definition and available data corpus, the base system was adapted and extended for this new challenge. The resulting lifeXplore system is a flexible retrieval and exploration tool that offers various easy-to-use, yet still powerful search and browsing features optimized for lifelog data and for usage by novice users. Besides efficient presentation and summarization of lifelog data, it includes searchable feature maps, concept and metadata filters, similarity search and sketch search.


Multimedia Tools and Applications | 2018

Video retrieval in laparoscopic video recordings with dynamic content descriptors

Klaus Schoeffmann; Heinrich Husslein; Sabrina Kletz; Stefan Petscharnig; Bernd Muenzer; Christian Beecks

In the domain of gynecologic surgery, an increasing number of surgeries are performed in a minimally invasive manner. These laparoscopic surgeries require specific psychomotor skills of the operating surgeon, which are difficult to learn and teach. This is the reason why an increasing number of surgeons advocate reviewing video recordings of laparoscopic surgeries for the occurrence of technical errors in surgical actions. This manual surgical quality assessment (SQA) process, however, is very cumbersome and time-consuming when carried out without any support from content-based video retrieval. In this work we propose a video content descriptor called MIDD (Motion Intensity and Direction Descriptor) that can be effectively used to find similar segments in a laparoscopic video database and thereby help surgeons inspect other instances of a given error scene more quickly. We evaluate the retrieval performance of MIDD with surgical actions from gynecologic surgery in direct comparison to several other dynamic content descriptors. We show that the MIDD descriptor significantly outperforms the state of the art in terms of retrieval performance as well as runtime performance. Additionally, we release the manually created video dataset of 16 classes of surgical actions from medical laparoscopy to the public, for further evaluations.
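The abstract does not specify how MIDD is computed, so the following is only a generic sketch of the underlying idea: summarizing a motion field by its intensity and direction. All names and the binning scheme are assumptions of this illustration, not the authors' descriptor.

```python
import numpy as np

def motion_histogram(flow, n_bins=8):
    """Summarize a dense motion field (H x W x 2, dx/dy per pixel) as a
    direction histogram weighted by motion magnitude, plus the mean
    intensity. A generic motion descriptor, not MIDD itself."""
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    magnitude = np.hypot(dx, dy)
    angle = np.arctan2(dy, dx)                      # range (-pi, pi]
    bins = ((angle + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, weights=magnitude, minlength=n_bins)
    total = hist.sum()
    if total > 0:
        hist = hist / total                         # normalize direction weights
    return np.concatenate([hist, [magnitude.mean()]])

# Uniform rightward motion puts all weight into a single direction bin.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
desc = motion_histogram(flow)
```

Segments can then be compared by the distance between such descriptor vectors, which is what makes finding "other instances of a given error scene" a nearest-neighbour query.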


Conference on Multimedia Modeling | 2018

Evaluation of Visual Content Descriptors for Supporting Ad-Hoc Video Search Tasks at the Video Browser Showdown

Sabrina Kletz; Andreas Leibetseder; Klaus Schoeffmann

Since 2017 the Video Browser Showdown (VBS) has collaborated with TRECVID and interactively evaluates Ad-Hoc Video Search (AVS) tasks, in addition to Known-Item Search (KIS) tasks. In this video search competition the participants have to find scenes relevant to a given textual query within a specific time limit, in a large dataset consisting of 600 hours of video content. Since the number of relevant scenes for such an AVS query is usually rather high, the teams at the VBS 2017 could find only a small portion of them. One way to support them during the interactive search would be to automatically retrieve other similar instances of an already found target scene. However, it is unclear which content descriptors should be used for such an automatic video content search using a query-by-example approach. Therefore, in this paper we investigate several different visual content descriptors (CNN Features, CEDD, COMO, HOG, Feature Signatures and HOF) for the purpose of similarity search in the TRECVID IACC.3 dataset, used for the VBS. Our evaluation shows that there is no single descriptor that works best for every AVS query; however, when considering the total performance over all 30 AVS tasks of TRECVID 2016, CNN features provide the best performance.
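The query-by-example retrieval step investigated here reduces, in its simplest form, to nearest-neighbour search over descriptor vectors. The sketch below uses cosine similarity and synthetic feature values; the specific descriptors compared in the paper (CNN features, CEDD, etc.) would simply supply the vectors.

```python
import numpy as np

def rank_by_similarity(query, database):
    """Rank database descriptors (n x d) by cosine similarity to the
    query descriptor (d,); returns indices, most similar first."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ q                    # cosine similarity per database row
    return np.argsort(-scores)         # descending order of similarity

# Tiny synthetic example: row 2 points in (almost) the same direction
# as the query, so it ranks first.
db = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [2.0, 0.1]])
ranking = rank_by_similarity(np.array([1.0, 0.05]), db)
```

Note that cosine similarity ignores vector magnitude, which is a common (though not universal) choice for CNN features; other descriptors in the comparison may pair better with different distance functions.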


Conference on Multimedia Modeling | 2018

Sketch-Based Similarity Search for Collaborative Feature Maps

Andreas Leibetseder; Sabrina Kletz; Klaus Schoeffmann

Past editions of the annual Video Browser Showdown (VBS) event have brought forward many tools targeting a diverse amount of techniques for interactive video search, among which sketch-based search showed promising results. Aiming at exploring this direction further, we present a custom approach for tackling the problem of finding similarities in the TRECVID IACC.3 dataset via hand-drawn pictures using color compositions together with contour matching. The proposed methodology is integrated into the established Collaborative Feature Maps (CFM) system, which has first been utilized in the VBS 2017 challenge.
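The colour-composition side of this sketch-matching approach can be illustrated with a simple grid-of-mean-colours signature; this is a minimal stand-in for illustration, not the matching actually used in the CFM system, and the grid size is an arbitrary choice.

```python
import numpy as np

def color_layout(image, grid=4):
    """Reduce an RGB image (H x W x 3) to a grid x grid map of mean
    cell colours - a coarse colour-composition signature."""
    h, w, _ = image.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    sig = np.empty((grid, grid, 3))
    for i in range(grid):
        for j in range(grid):
            sig[i, j] = image[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean(axis=(0, 1))
    return sig

def layout_distance(a, b):
    """Euclidean distance between two colour-layout signatures."""
    return float(np.sqrt(((color_layout(a) - color_layout(b)) ** 2).sum()))

# A hand-drawn sketch and a keyframe with similar colour composition
# yield a small distance; identical images yield zero.
img = np.random.default_rng(0).integers(0, 256, (32, 32, 3)).astype(float)
```

A full system would combine such a colour score with the contour-matching score mentioned in the abstract; how the two are weighted is not specified there.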


ACM Multimedia | 2016

A Multi-Video Browser for Endoscopic Videos on Tablets

Marco A. Hudelist; Sabrina Kletz; Klaus Schoeffmann

We present a browser for endoscopic videos that is designed to easily navigate and compare scenes on a tablet. It utilizes frame stripes of different levels of detail to quickly switch between fast and detailed navigation. Moreover, it uses saliency methods to determine which areas of a given keyframe contain the most information to further improve the visualization of the frame stripes. As scenes with much movement can be non-relevant out-of-patient scenes, the tool supports filtering for scenes of low, medium and high motion. The tool can be especially useful for patient debriefings as well as for educational purposes.
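The low/medium/high motion filter mentioned in this abstract can be approximated with mean absolute frame differencing. The thresholds below are arbitrary placeholders, not values from the tool.

```python
import numpy as np

def motion_level(prev_frame, frame, low=5.0, high=25.0):
    """Classify inter-frame motion by mean absolute pixel difference.
    The low/high thresholds are illustrative, not calibrated values."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float)).mean()
    if diff < low:
        return "low"
    return "medium" if diff < high else "high"

# A static pair of frames is 'low'; a strongly changed pair is 'high'.
a = np.zeros((8, 8, 3), dtype=np.uint8)
b = np.full((8, 8, 3), 200, dtype=np.uint8)
```

In practice such a score would be aggregated per scene rather than per frame pair, so that, as the abstract notes, high-motion out-of-patient scenes can be filtered out as a group.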


IEEE International Conference on Multimedia Big Data | 2017

Large-Scale Endoscopic Image and Video Linking with Gradient-Based Signatures

Christian Beecks; Sabrina Kletz; Klaus Schoeffmann

Collaboration


An overview of Sabrina Kletz's collaborations.

Top Co-Authors

Klaus Schoeffmann
Alpen-Adria-Universität Klagenfurt

Andreas Leibetseder
Alpen-Adria-Universität Klagenfurt

Bernd Münzer
Alpen-Adria-Universität Klagenfurt

Manfred Jürgen Primus
Alpen-Adria-Universität Klagenfurt

Stefan Petscharnig
Alpen-Adria-Universität Klagenfurt

Heinrich Husslein
Medical University of Vienna

Claudiu Cobârzan
Alpen-Adria-Universität Klagenfurt