Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Konstantin Pogorelov is active.

Publication


Featured research published by Konstantin Pogorelov.


IEEE Transactions on Medical Imaging | 2017

Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results From the MICCAI 2015 Endoscopic Vision Challenge

Jorge Bernal; Nima Tajbakhsh; Francisco Javier Sánchez; Bogdan J. Matuszewski; Hao Chen; Lequan Yu; Quentin Angermann; Olivier Romain; Bjørn Rustad; Ilangko Balasingham; Konstantin Pogorelov; Sungbin Choi; Quentin Debard; Lena Maier-Hein; Stefanie Speidel; Danail Stoyanov; Patrick Brandao; Henry Córdova; Cristina Sánchez-Montes; Suryakanth R. Gurudu; Gloria Fernández-Esparrach; Xavier Dray; Jianming Liang; Aymeric Histace

Colonoscopy is the gold standard for colon cancer screening though some polyps are still missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess if they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection sub-challenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, as well as describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to an improved overall performance.
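
As a concrete illustration of the kind of frame-level comparison such an evaluation enables, the sketch below computes precision and recall for point detections against annotated polyp regions. The matching rule (a prediction counts as a true positive if it falls inside an unmatched ground-truth box) is a simplification chosen for illustration, not the challenge's official protocol.

```python
# Hedged sketch: frame-level precision/recall for polyp detection.
# Assumes each prediction is an (x, y) point and each ground-truth polyp
# is an axis-aligned box (x0, y0, x1, y1); a prediction is a true positive
# if it falls inside an unmatched ground-truth box. This matching rule is
# an illustrative simplification, not the official challenge protocol.

def evaluate_frame(predictions, gt_boxes):
    """Return (tp, fp, fn) for one video frame."""
    matched = [False] * len(gt_boxes)
    tp = fp = 0
    for (px, py) in predictions:
        hit = False
        for i, (x0, y0, x1, y1) in enumerate(gt_boxes):
            if not matched[i] and x0 <= px <= x1 and y0 <= py <= y1:
                matched[i] = True
                hit = True
                break
        tp += hit
        fp += not hit
    fn = matched.count(False)
    return tp, fp, fn

def precision_recall(frames):
    """frames: list of (predictions, gt_boxes) pairs for a whole video."""
    TP = FP = FN = 0
    for preds, gts in frames:
        tp, fp, fn = evaluate_frame(preds, gts)
        TP, FP, FN = TP + tp, FP + fp, FN + fn
    precision = TP / (TP + FP) if TP + FP else 0.0
    recall = TP / (TP + FN) if TP + FN else 0.0
    return precision, recall

# Toy usage: one frame with a single polyp box and two detections.
print(precision_recall([([(50, 60), (200, 10)], [(40, 40, 120, 120)])]))
```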


ACM Multimedia | 2016

Multimedia and Medicine: Teammates for Better Disease Detection and Survival

Michael Riegler; Mathias Lux; Carsten Griwodz; Concetto Spampinato; Thomas de Lange; Sigrun Losada Eskeland; Konstantin Pogorelov; Wallapak Tavanapong; Peter T. Schmidt; Cathal Gurrin; Dag Johansen; Håvard D. Johansen; Pål Halvorsen

Health care has a long history of adopting technology to save lives and improve the quality of living. Visual information is frequently applied for disease detection and assessment, and the established fields of computer vision and medical imaging provide essential tools. It is, however, a misconception that disease detection and assessment are provided exclusively by these fields and that they provide the solution for all challenges. Integration and analysis of data from several sources, real-time processing, and the assessment of usefulness for end-users are core competences of the multimedia community and are required for the successful improvement of health care systems. We have conducted initial investigations into two use cases surrounding diseases of the gastrointestinal (GI) tract, where the detection of abnormalities provides the largest chance of successful treatment if the initial observation of disease indicators occurs before the patient notices any symptoms. Although such detection is typically provided visually by applying an endoscope, we are facing a multitude of new multimedia challenges that differ between use cases. In real-time assistance for colonoscopy, we combine sensor information about camera position and direction to aid in detecting, investigate means for providing support to doctors in unobtrusive ways, and assist in reporting. In the area of large-scale capsular endoscopy, we investigate questions of scalability, performance and energy efficiency for the recording phase, and combine video summarization and retrieval questions for analysis.


ACM SIGMM Conference on Multimedia Systems | 2017

KVASIR: A Multi-Class Image Dataset for Computer Aided Gastrointestinal Disease Detection

Konstantin Pogorelov; Kristin Ranheim Randel; Carsten Griwodz; Sigrun Losada Eskeland; Thomas de Lange; Dag Johansen; Concetto Spampinato; Duc-Tien Dang-Nguyen; Mathias Lux; Peter T. Schmidt; Michael Riegler; Pål Halvorsen

Automatic detection of diseases by use of computers is an important, but still unexplored field of research. Such innovations may improve medical practice and refine health care systems all over the world. However, datasets containing medical images are hardly available, making reproducibility and comparison of approaches almost impossible. In this paper, we present KVASIR, a dataset containing images from inside the gastrointestinal (GI) tract. The collection of images is classified into three important anatomical landmarks and three clinically significant findings. In addition, it contains two categories of images related to endoscopic polyp removal. Sorting and annotation of the dataset are performed by medical doctors (experienced endoscopists). In this respect, KVASIR is important for research on both single- and multi-disease computer aided detection. By providing it, we invite and enable multimedia researchers to enter the medical domain of detection and retrieval.
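
As a hedged sketch of how such a dataset might be consumed, the snippet below indexes an image collection in which each class is assumed to live in its own sub-folder of JPEG files and splits it for training and testing; the folder layout and the local path are assumptions, not guarantees about the actual release.

```python
# Hedged sketch: indexing a KVASIR-style image collection where each of the
# eight classes is assumed to live in its own sub-folder of JPEG files
# (the folder layout and path are assumptions, not the release's actual schema).
import random
from pathlib import Path

def index_dataset(root, train_fraction=0.8, seed=0):
    """Map each image path to its class (the parent folder name) and split."""
    samples = [(p, p.parent.name) for p in Path(root).glob("*/*.jpg")]
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_fraction)
    return samples[:cut], samples[cut:]

if __name__ == "__main__":
    train, test = index_dataset("kvasir-dataset")  # hypothetical local path
    print(f"{len(train)} training and {len(test)} test images")
    print("classes:", sorted({label for _, label in train}))
```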


Content-Based Multimedia Indexing | 2016

EIR — Efficient computer aided diagnosis framework for gastrointestinal endoscopies

Michael Riegler; Konstantin Pogorelov; Pål Halvorsen; Thomas de Lange; Carsten Griwodz; Peter T. Schmidt; Sigrun Losada Eskeland; Dag Johansen

Analysis of medical videos for detection of abnormalities like lesions and diseases requires not only high precision and recall but also real-time processing for live feedback during standard colonoscopies and scalability for massive population-based screening, which can be done using a capsular video endoscope. Existing related work in this field does not provide the necessary combination of detection accuracy and performance. In this paper, a multimedia system is presented where the aim is to tackle automatic analysis of videos from the human gastrointestinal (GI) tract. The system includes the whole pipeline from data collection, processing and analysis, to visualization. The system combines filters using machine learning, image recognition and extraction of global and local image features, and it is built in a modular way, so that it can easily be extended. At the same time, it is developed for efficient processing in order to provide real-time feedback to the doctor. Initial experiments show that our system has detection and localisation accuracy at least as good as existing systems, but it stands out in terms of real-time performance and low resource consumption for scalability.
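
The modular filter idea can be pictured as an ordered list of independent detectors that each inspect a frame and may flag a finding, as in the minimal sketch below; the filter names and interfaces are illustrative stand-ins, not the actual EIR components.

```python
# Hedged sketch of a modular filter pipeline: a frame passes through an
# ordered list of independent detectors ("filters"), each of which may flag
# a finding. Names and interfaces are illustrative, not EIR's real modules.
from typing import Callable, List, Tuple

Frame = bytes                      # stand-in for decoded image data
Filter = Callable[[Frame], Tuple[bool, str]]

def run_pipeline(frame: Frame, filters: List[Filter]) -> List[str]:
    """Apply every filter to the frame and collect the findings it reports."""
    findings = []
    for f in filters:
        flagged, label = f(frame)
        if flagged:
            findings.append(label)
    return findings

# Two toy filters standing in for, e.g., a global-feature classifier and a
# local-feature detector; real filters would inspect actual pixel data.
def toy_polyp_filter(frame: Frame):
    return (len(frame) % 2 == 0, "possible polyp")

def toy_bleeding_filter(frame: Frame):
    return (b"\xff" in frame, "possible bleeding")

print(run_pipeline(b"\x00\xff\x10\x20", [toy_polyp_filter, toy_bleeding_filter]))
```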


Computer-Based Medical Systems | 2016

GPU-Accelerated Real-Time Gastrointestinal Diseases Detection

Konstantin Pogorelov; Michael Riegler; Pål Halvorsen; Peter T. Schmidt; Carsten Griwodz; Dag Johansen; Sigrun Losada Eskeland; Thomas de Lange

The process of finding diseases and abnormalities during live medical examinations has for a long time depended mostly on the medical personnel, with a limited amount of computer support. However, computer-based medical systems are currently emerging in domains like endoscopies of the gastrointestinal (GI) tract. In this context, we aim for a system that enables automatic analysis of endoscopy videos, where one use case is live computer-assisted endoscopy that increases disease- and abnormality-detection rates. In this paper, a system that tackles live automatic analysis of endoscopy videos is presented with a particular focus on the system's ability to perform in real time. The presented system utilizes different parts of a heterogeneous architecture and can be used for automatic analysis of high-definition colonoscopy videos (and a fully automated analysis of video from capsular endoscopy devices). We describe our implementation and report the system performance of our GPU-based processing framework. The experimental results show real-time stream processing and low resource consumption, and a detection precision and recall level at least as good as existing related work.
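
A minimal sketch of offloading a per-frame filtering step to a GPU is shown below, using PyTorch as an assumed stand-in; the paper describes its own GPU-based framework, whose implementation is not reproduced here.

```python
# Hedged sketch of GPU offloading for per-frame processing with PyTorch
# (PyTorch is an assumption for illustration; the paper's own GPU framework
# is not reproduced in this sketch).
import torch

def process_batch(frames: torch.Tensor) -> torch.Tensor:
    """Run a simple 3x3 edge filter over a batch of grayscale frames."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    kernel = torch.tensor([[[[-1., -1., -1.],
                             [-1.,  8., -1.],
                             [-1., -1., -1.]]]], device=device)
    frames = frames.to(device)
    # conv2d expects (batch, channels, height, width)
    return torch.nn.functional.conv2d(frames, kernel, padding=1).cpu()

if __name__ == "__main__":
    batch = torch.rand(8, 1, 576, 720)   # eight synthetic grayscale frames
    edges = process_batch(batch)
    print(edges.shape)                   # torch.Size([8, 1, 576, 720])
```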


Multimedia Tools and Applications | 2017

Efficient disease detection in gastrointestinal videos – global features versus neural networks

Konstantin Pogorelov; Michael Riegler; Sigrun Losada Eskeland; Thomas de Lange; Dag Johansen; Carsten Griwodz; Peter T. Schmidt; Pål Halvorsen

Analysis of medical videos from the human gastrointestinal (GI) tract for detection and localization of abnormalities like lesions and diseases requires both high precision and recall. Additionally, it is important to support efficient, real-time processing for live feedback during (i) standard colonoscopies and (ii) scalability for massive population-based screening, which we conjecture can be done using a wireless video capsule endoscope (camera-pill). Existing related work in this field provides the necessary combination of accuracy and performance neither for detecting multiple classes of abnormalities simultaneously nor for particular disease localization tasks. In this paper, a complete end-to-end multimedia system is presented where the aim is to tackle automatic analysis of GI tract videos. The system includes an entire pipeline ranging from data collection, processing and analysis, to visualization. The system combines deep learning neural networks, information retrieval, and analysis of global and local image features in order to implement multi-class classification, detection and localization. Furthermore, it is built in a modular way, so that it can be easily extended to deal with other types of abnormalities. At the same time, the system is developed for efficient processing in order to provide real-time feedback to the doctors and to scale when potentially applied to massive population-based algorithmic screenings in the future. Initial experiments show that our system has multi-class detection accuracy and polyp localization precision at least as good as state-of-the-art systems, and provides additional novelty in terms of real-time performance, low resource consumption and ability to extend with support for new classes of diseases.
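
To make the "global features" side of the comparison concrete, the sketch below feeds a simple colour-histogram descriptor into a linear classifier on synthetic data; the specific feature and classifier are illustrative choices, not the exact descriptors or models evaluated in the paper.

```python
# Hedged sketch of a global-feature baseline: a per-channel colour histogram
# fed into a linear classifier. The 3x8-bin RGB histogram and logistic
# regression are illustrative choices, not the paper's actual descriptors.
import numpy as np
from sklearn.linear_model import LogisticRegression

def global_feature(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenate per-channel histograms of an HxWx3 uint8 image."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    feat = np.concatenate(hists).astype(float)
    return feat / feat.sum()

# Synthetic stand-in data: darker images labelled 0, brighter ones labelled 1.
rng = np.random.default_rng(0)
images = [rng.integers(0, 128, (64, 64, 3), dtype=np.uint8) for _ in range(50)]
images += [rng.integers(128, 256, (64, 64, 3), dtype=np.uint8) for _ in range(50)]
labels = np.array([0] * 50 + [1] * 50)

X = np.stack([global_feature(im) for im in images])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```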


ACM Multimedia | 2016

Computer aided disease detection system for gastrointestinal examinations

Michael Riegler; Konstantin Pogorelov; Jonas Markussen; Mathias Lux; Håkon Kvale Stensland; Thomas de Lange; Carsten Griwodz; Pål Halvorsen; Dag Johansen; Peter T. Schmidt; Sigrun Losada Eskeland

In this paper, we present the computer-aided diagnosis part of the EIR system [9], which can support medical experts in the task of detecting diseases and anatomical landmarks in the gastrointestinal (GI) system. This includes automatic detection of important findings in colonoscopy videos and marking them for the doctors. EIR is designed in a modular way so that it can easily be extended for other diseases. For this demonstration, we will focus on polyp detection, as our system is trained with the ASU-Mayo Clinic polyp database [5].


ACM Multimedia | 2016

LIRE: open source visual information retrieval

Mathias Lux; Michael Riegler; Pål Halvorsen; Konstantin Pogorelov; Nektarios Anagnostopoulos

With the number of photos taken growing by 16.2% per year, researchers predict an almost unbelievable 4.9 trillion stored images in 2017. Nearly 80% of these photos will be taken with mobile phones. To cope with this immense amount of visual data in a fast and accurate way, visual information retrieval systems are needed for various domains and applications. LIRE, short for Lucene Image Retrieval, is a lightweight and easy-to-use Java library for visual information retrieval. It allows developers and researchers to integrate common content-based image retrieval approaches in their applications and research projects. LIRE supports global and local image features and can cope with millions of images using approximate search and distributed indexes in the cloud. In this demo we present a novel tool called F-search that emphasizes the core strengths of LIRE: lightness, speed and accuracy.
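
The index-then-search workflow that a library like LIRE provides can be sketched as follows; note that this is plain NumPy for illustration, not LIRE's Java API, and the single global histogram stands in for LIRE's descriptors such as CEDD or FCTH.

```python
# Hedged sketch of a content-based image retrieval workflow: extract a global
# descriptor per image, index it, then rank indexed images by distance to a
# query. Plain NumPy illustration, not LIRE's Java API.
import numpy as np

class TinyImageIndex:
    def __init__(self):
        self.names, self.features = [], []

    def add(self, name: str, image: np.ndarray):
        """Index an HxWx3 uint8 image under the given name."""
        hist = np.histogram(image, bins=32, range=(0, 256))[0].astype(float)
        self.names.append(name)
        self.features.append(hist / hist.sum())

    def search(self, image: np.ndarray, k: int = 3):
        """Return the k indexed images closest in L1 distance."""
        hist = np.histogram(image, bins=32, range=(0, 256))[0].astype(float)
        query = hist / hist.sum()
        dists = [np.abs(f - query).sum() for f in self.features]
        order = np.argsort(dists)[:k]
        return [(self.names[i], float(dists[i])) for i in order]

rng = np.random.default_rng(1)
index = TinyImageIndex()
for i in range(10):
    index.add(f"img_{i}.jpg", rng.integers(0, 256, (32, 32, 3), dtype=np.uint8))
print(index.search(rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)))
```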


ACM Transactions on Multimedia Computing, Communications, and Applications | 2017

From Annotation to Computer-Aided Diagnosis: Detailed Evaluation of a Medical Multimedia System

Michael Riegler; Konstantin Pogorelov; Sigrun Losada Eskeland; Peter T. Schmidt; Zeno Albisser; Dag Johansen; Carsten Griwodz; Pål Halvorsen; Thomas de Lange

Holistic medical multimedia systems covering end-to-end functionality from data collection to aided diagnosis are highly needed, but rare. In many hospitals, the potential value of multimedia data collected through routine examinations is not recognized. Moreover, the availability of the data is limited, as the health care personnel may not have direct access to stored data. However, medical specialists interact with multimedia content daily through their everyday work and have an increasing interest in finding ways to use it to facilitate their work processes. In this article, we present a novel, holistic multimedia system aiming to tackle automatic analysis of video from gastrointestinal (GI) endoscopy. The proposed system comprises the whole pipeline, including data collection, processing, analysis, and visualization. It combines filters using machine learning, image recognition, and extraction of global and local image features. The novelty is primarily in this holistic approach and its real-time performance, where we automate a complete algorithmic GI screening process. We built the system in a modular way to make it easily extendable to analyze various abnormalities, and we made it efficient in order to run in real time. The conducted experimental evaluation shows that the detection and localization accuracy are comparable to or even better than those of existing systems, and that the system leads by far in terms of real-time performance and efficient resource consumption.
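
One way to sanity-check a "real-time" claim for a per-frame analysis step is to measure sustained throughput against the source frame rate, as in the hedged sketch below; the 25 fps target and the placeholder analysis function are assumptions, not figures from the paper.

```python
# Hedged sketch: measure sustained frames per second of a per-frame analysis
# step and compare it against an assumed 25 fps source frame rate.
import time
import numpy as np

def analyse(frame: np.ndarray) -> bool:
    """Placeholder per-frame analysis: thresholds the mean intensity."""
    return float(frame.mean()) > 127.0

def measure_fps(n_frames: int = 200, shape=(576, 720, 3)) -> float:
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, shape, dtype=np.uint8) for _ in range(n_frames)]
    start = time.perf_counter()
    for frame in frames:
        analyse(frame)
    return n_frames / (time.perf_counter() - start)

if __name__ == "__main__":
    fps = measure_fps()
    print(f"{fps:.1f} frames/s ({'real-time' if fps >= 25 else 'too slow'} at 25 fps)")
```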


ACM Multimedia | 2016

Right inflight?: a dataset for exploring the automatic prediction of movies suitable for a watching situation

Michael Riegler; Martha Larson; Concetto Spampinato; Pål Halvorsen; Mathias Lux; Jonas Markussen; Konstantin Pogorelov; Carsten Griwodz; Håkon Kvale Stensland

In this paper, we present the dataset Right Inflight developed to support the exploration of the match between video content and the situation in which that content is watched. Specifically, we look at videos that are suitable to be watched on an airplane, where the main assumption is that viewers watch movies with the intent of relaxing and letting time pass quickly, despite the inconvenience and discomfort of flight. The aim of the dataset is to support the development of recommender systems, as well as computer vision and multimedia retrieval algorithms capable of automatically predicting which videos are suitable for inflight consumption. Our ultimate goal is to promote a deeper understanding of how people experience video content, and of how technology can support people in finding or selecting video content that supports them in regulating their internal states in certain situations. Right Inflight consists of 318 human-annotated movies, for which we provide links to trailers, a set of pre-computed low-level visual, audio and text features as well as user ratings. The annotation was performed by crowdsourcing workers, who were asked to judge the appropriateness of movies for inflight consumption.
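
The prediction task the dataset supports can be sketched as learning a binary "suitable for inflight viewing" label from pre-computed per-movie features; in the snippet below the feature dimensionality and labels are synthetic stand-ins, and only the movie count (318) matches the dataset.

```python
# Hedged sketch of the prediction task: learn a binary suitability label from
# pre-computed per-movie features. Feature dimensionality, labels, and data
# are synthetic stand-ins, not the dataset's actual schema.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_movies, n_features = 318, 64                      # 318 matches the dataset size
X = rng.normal(size=(n_movies, n_features))         # stand-in visual/audio/text features
y = (X[:, 0] + 0.5 * rng.normal(size=n_movies) > 0).astype(int)  # stand-in labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```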

Collaboration


Dive into Konstantin Pogorelov's collaborations.

Top Co-Authors

Mathias Lux (Alpen-Adria-Universität Klagenfurt)
Peter T. Schmidt (Karolinska University Hospital)