Publication


Featured research published by Manfred Jürgen Primus.


Content-Based Multimedia Indexing | 2013

Segmentation of recorded endoscopic videos by detecting significant motion changes

Manfred Jürgen Primus; Klaus Schoeffmann; Laszlo Böszörmenyi

In the medical domain it has become common to store recordings of endoscopic surgeries or procedures. The storage of these endoscopic videos not only provides evidence of the surgeons' work but also facilitates research and the training of new surgeons, and supports explanations to patients. However, an endoscopic video archive, where tens or hundreds of new videos are added each day, needs content-based analysis in order to provide content-based search. A fundamental first step in content analysis is the segmentation of the video. We propose a new method for the segmentation of endoscopic videos, based on spatial and temporal differences of motion in these videos. Through an evaluation with 20 videos we show that our approach provides reasonable performance.
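The segmentation idea above can be sketched minimally: given a per-frame motion magnitude series (synthetic here, not the paper's actual spatial and temporal motion features), a segment boundary is placed wherever the magnitude changes abruptly. The function name and threshold are illustrative assumptions.

```python
# Hypothetical sketch: segmenting a video by significant motion changes.
# Real frames are replaced by a precomputed per-frame motion magnitude;
# a segment boundary is reported where the magnitude jumps abruptly.

def segment_by_motion(motion, threshold=0.5):
    """Return frame indices where |motion[i] - motion[i-1]| > threshold."""
    boundaries = []
    for i in range(1, len(motion)):
        if abs(motion[i] - motion[i - 1]) > threshold:
            boundaries.append(i)
    return boundaries

# Steady motion, then a sudden change at frame 4.
motion = [0.1, 0.12, 0.11, 0.1, 0.9, 0.88, 0.9]
print(segment_by_motion(motion))  # → [4]
```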


Content-Based Multimedia Indexing | 2016

Temporal segmentation of laparoscopic videos into surgical phases

Manfred Jürgen Primus; Klaus Schoeffmann; Laszlo Böszörmenyi

Videos of laparoscopic surgeries need to be segmented temporally into phases so that surgeons can use the recordings efficiently in their everyday work. In this paper we investigate the performance of an automatic phase segmentation method based on instrument detection and recognition. Unlike known methods that dynamically align phases to an annotated dataset, our method is not limited to standardized or unvarying endoscopic procedures. Phases of laparoscopic procedures correlate highly with the presence of certain instruments or groups of instruments. Therefore, the first step of our procedure is the definition of a set of rules that describe these correlations. The next step is the spatial detection of instruments using a color-based segmentation method and a rule-based interpretation of image moments to refine the detections. Finally, the detected regions are recognized with SVM classifiers and ORB features. The evaluation shows that the proposed technique finds phases in laparoscopic videos of cholecystectomies reliably.
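The rule-based first step described above, correlating phases with instrument presence, might look roughly like this; the instrument and phase names are purely illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the rule-based step: mapping the set of detected
# instruments to a surgical phase. Instrument/phase names are illustrative.

PHASE_RULES = [
    ({"clip_applier"}, "clipping"),
    ({"scissors"}, "cutting"),
    ({"grasper", "hook"}, "dissection"),
]

def phase_for(instruments):
    """Return the first phase whose required instruments are all present."""
    for required, phase in PHASE_RULES:
        if required <= instruments:  # subset test: all required detected
            return phase
    return "unknown"

print(phase_for({"grasper", "hook", "irrigator"}))  # → dissection
print(phase_for({"scissors"}))                      # → cutting
```

In the paper the per-frame instrument sets would come from the color-based spatial detection and SVM/ORB recognition; here they are plain Python sets.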


Content-Based Multimedia Indexing | 2015

Instrument classification in laparoscopic videos

Manfred Jürgen Primus; Klaus Schoeffmann; Laszlo Böszörmenyi

In medical endoscopy more and more surgeons record videos of their interventions in a long-term storage archive for later retrieval. In order to allow content-based search in such endoscopic video archives, the video data needs to be indexed first. However, even the very basic step of content-based indexing, namely content segmentation, is already very challenging due to the special characteristics of such video data. Therefore, we propose to use instrument classification to enable semantic segmentation of laparoscopic videos. In this paper, we evaluate the performance of such an instrument classification approach. Our results show satisfying performance for all instruments used in our evaluation.


Conference on Multimedia Modeling | 2017

Collaborative Feature Maps for Interactive Video Search

Klaus Schoeffmann; Manfred Jürgen Primus; Bernd Muenzer; Stefan Petscharnig; Christof Karisch; Qing Xu; Wolfgang Huerst

This extended demo paper summarizes our interface used for the Video Browser Showdown (VBS) 2017 competition, where visual and textual known-item search (KIS) tasks, as well as ad-hoc video search (AVS) tasks, in a 600-hour video archive need to be solved interactively. To this end, we propose a very flexible distributed video search system that combines many ideas of related work in a novel and collaborative way, such that several users can work together and explore the video archive in a complementary manner. The main interface is a perspective Feature Map, which shows keyframes of shots arranged according to a selected content similarity feature (e.g., color, motion, semantic concepts). This Feature Map is accompanied by additional views, which allow users to search and filter according to a particular content feature. To support collaboration among several users, we provide a cooperative heatmap that shows a synchronized view of the inspection actions of all users. Moreover, we use collaborative re-ranking of shots (in specific views) based on results retrieved by other users.


Conference on Multimedia Modeling | 2018

The ITEC Collaborative Video Search System at the Video Browser Showdown 2018

Manfred Jürgen Primus; Bernd Münzer; Andreas Leibetseder; Klaus Schoeffmann

We present our video search system for the Video Browser Showdown (VBS) 2018 competition. It is based on the collaborative system used in 2017, which already performed well but also revealed high potential for improvement. Hence, based on our experience we introduce several major improvements, particularly (1) a strong optimization of similarity search, (2) various improvements for concept-based search, (3) a new flexible video inspector view, and (4) extended collaboration features, as well as numerous minor adjustments and enhancements, mainly concerning the user interface and means of user interaction. Moreover, we present a spectator view that visualizes the current activity of the team members to the audience to make the competition more attractive.


ACM Multimedia | 2017

Real-Time Image-based Smoke Detection in Endoscopic Videos

Andreas Leibetseder; Manfred Jürgen Primus; Stefan Petscharnig; Klaus Schoeffmann

The nature of endoscopy as a type of minimally invasive surgery (MIS) requires surgeons to perform complex operations by merely inspecting a live camera feed. Inherently, a successful intervention depends upon ensuring proper working conditions, such as skillful camera handling, adequate lighting and removal of confounding factors such as fluids or smoke. The latter is an undesirable byproduct of cauterizing tissue and not only constitutes a health hazard for the medical staff and the treated patients but can also considerably obstruct the operating physician's field of view. Therefore, as a standard procedure the gaseous matter is evacuated with specialized smoke suction systems that typically are activated manually whenever considered appropriate. We argue that image-based smoke detection can be employed to automate this decision, while also serving as a useful indicator for relevant scenes in post-procedure analyses. This work continues previously conducted studies utilizing pre-trained convolutional neural networks (CNNs) and threshold-based saturation analysis. Specifically, we explore further methodologies for comparison, and we provide and evaluate a public dataset comprising over 100K smoke/non-smoke images extracted from the Cholec80 dataset, which is composed of 80 different cholecystectomy procedures. Having applied deep learning to merely 20K images of a custom dataset, we achieve Receiver Operating Characteristic (ROC) curves enclosing areas of over 0.98 for custom datasets and over 0.77 for the public dataset. Surprisingly, a fixed threshold for saturation-based histogram analysis still yields areas of over 0.78 and 0.75.
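The threshold-based saturation analysis mentioned above can be illustrated with a minimal sketch: smoke tends to desaturate the image, so a large fraction of low-saturation pixels suggests a smoky frame. The thresholds and pixel values here are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical sketch of threshold-based saturation analysis for smoke
# detection. Smoke desaturates the image, so we count the fraction of
# low-saturation pixels; thresholds are illustrative, not from the paper.
import colorsys

def smoke_score(pixels):
    """Fraction of pixels with HSV saturation below 0.15 (RGB in [0, 1])."""
    low = sum(1 for r, g, b in pixels
              if colorsys.rgb_to_hsv(r, g, b)[1] < 0.15)
    return low / len(pixels)

def is_smoky(pixels, threshold=0.5):
    return smoke_score(pixels) > threshold

# A grayish (desaturated) frame vs. a reddish, tissue-like frame.
smoky = [(0.8, 0.8, 0.78)] * 10
clear = [(0.7, 0.2, 0.2)] * 10
print(is_smoky(smoky), is_smoky(clear))  # → True False
```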


conference on multimedia modeling | 2015

Mobile Image Analysis: Android vs. iOS

Claudiu Cobârzan; Marco A. Hudelist; Klaus Schoeffmann; Manfred Jürgen Primus

Currently, computer vision applications are becoming more common on mobile devices due to the constant increase in raw processing power coupled with extended battery life. The OpenCV framework is a popular choice when developing such applications on desktop computers as well as on mobile devices, but there are few comparative performance studies available. We know of only one such study, which evaluates a set of typical OpenCV operations on iOS devices. In this paper we look at the same operations, spanning from simple image manipulation like grayscaling and blurring to keypoint detection and descriptor extraction, but on flagship Android devices as well as on iOS devices and with different image resolutions. We compare the results of the same tests running on the two platforms on the same datasets and provide extended measurements of completion time and battery usage.


ACM Multimedia | 2014

Segmentation and Indexing of Endoscopic Videos

Manfred Jürgen Primus

Over the last few years it has become common to archive video recordings of endoscopic surgeries. These videos are of high value for medics, junior doctors, patients and hospital management, but currently they are used rarely or not at all. Each day tens to hundreds of hours of new videos are added to archives without metadata that would support content-based search. In order to fully utilize these videos it is necessary to analyze the content of the recordings. Endoscopic videos are in some aspects fundamentally different from other types of videos. Therefore, pre-existing content-based analysis methods must be tested for their ability to operate with this kind of video and, if required, they must be adapted or new methods must be found. In particular, we address video segmentation and indexing in this work. We present our preliminary work and ideas for future work to add content-based information to endoscopic videos.


ACM SIGMM Conference on Multimedia Systems | 2018

LapGyn4: A Dataset for 4 Automatic Content Analysis Problems in the Domain of Laparoscopic Gynecology

Andreas Leibetseder; Stefan Petscharnig; Manfred Jürgen Primus; Sabrina Kletz; Bernd Münzer; Klaus Schoeffmann; Jörg Keckstein

Modern imaging technology enables medical practitioners to perform minimally invasive surgery (MIS), i.e. a variety of medical interventions inflicting minimal trauma upon patients, hence greatly improving their recoveries. Not only patients but also surgeons can benefit from this technology, as recorded media can be utilized for speeding up tedious and time-consuming tasks such as treatment planning or case documentation. In order to improve the predominantly manually conducted process of analyzing said media, with this work we publish four datasets extracted from gynecologic, laparoscopic interventions with the intent of encouraging research in the field of post-surgical automatic media analysis. These datasets are designed with the following use cases in mind: medical image retrieval based on a query image; detection of instrument counts, surgical actions and anatomical structures; as well as distinguishing on which anatomical structure a certain action is performed. Furthermore, we provide suggestions for evaluation metrics and first baseline experiments.


Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge | 2018

lifeXplore at the Lifelog Search Challenge 2018

Bernd Münzer; Andreas Leibetseder; Sabrina Kletz; Manfred Jürgen Primus; Klaus Schoeffmann

With the growing hype around wearable devices recording biometric data comes the readiness to capture and combine even more personal information as a form of digital diary: lifelogging is practiced ever more today and can be categorized anywhere between an informative hobby and a life-changing experience. From an information processing point of view, analyzing the entirety of such multi-source data is immensely challenging, which is why the first Lifelog Search Challenge 2018 competition was brought into being to encourage the development of efficient interactive data retrieval systems. Answering this call, we present a retrieval system based on our video search system diveXplore, which has successfully been used in the Video Browser Showdown 2017 and 2018. Due to the different task definition and available data corpus, the base system was adapted and extended for this new challenge. The resulting lifeXplore system is a flexible retrieval and exploration tool that offers various easy-to-use yet powerful search and browsing features optimized for lifelog data and for usage by novice users. Besides efficient presentation and summarization of lifelog data, it includes searchable feature maps, concept and metadata filters, similarity search and sketch search.

Collaboration

Top co-authors of Manfred Jürgen Primus:

Klaus Schoeffmann (Alpen-Adria-Universität Klagenfurt)
Andreas Leibetseder (Alpen-Adria-Universität Klagenfurt)
Bernd Münzer (Alpen-Adria-Universität Klagenfurt)
Stefan Petscharnig (Alpen-Adria-Universität Klagenfurt)
Laszlo Böszörmenyi (Alpen-Adria-Universität Klagenfurt)
Sabrina Kletz (Alpen-Adria-Universität Klagenfurt)
Claudiu Cobârzan (Alpen-Adria-Universität Klagenfurt)