Michal Drozdzal
University of Barcelona
Publications
Featured research published by Michal Drozdzal.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2012
Santi Seguí; Michal Drozdzal; Fernando Vilariño; Carolina Malagelada; Fernando Azpiroz; Petia Radeva; Jordi Vitrià
Wireless capsule endoscopy (WCE) is a device that allows direct visualization of the gastrointestinal tract with minimal discomfort for the patient, but at the price of a large amount of screening time. In order to reduce this time, several works have proposed to automatically remove all frames showing intestinal content. These methods label frames as {intestinal content - clear} without discriminating between types of content (which have different physiological meanings) or the portion of the image covered. In addition, since the presence of intestinal content has been identified as an indicator of intestinal motility, its accurate quantification is of potential clinical relevance. In this paper, we present a method for the robust detection and segmentation of intestinal content in WCE images, together with its further discrimination between turbid liquid and bubbles. Our proposal is based on a twofold system. First, frames presenting intestinal content are detected by a support vector machine classifier using color and textural information. Second, intestinal content frames are segmented into {turbid, bubbles, and clear} regions. We show a detailed validation using a large dataset. Our system outperforms previous methods and, for the first time, discriminates between turbid and bubble media.
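The first stage of the twofold system (frame-level detection with an SVM on color/texture information) can be illustrated with a minimal sketch. The HSV histograms, classifier parameters, and helper names below are assumptions chosen for illustration, not the authors' implementation.

```python
# Illustrative sketch of the frame-level detection stage: HSV color histograms
# stand in for the paper's color/texture descriptors, and scikit-learn's SVC
# plays the role of the support vector machine classifier. All parameter
# choices here are assumptions, not the published pipeline.
import numpy as np
import cv2
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def frame_descriptor(frame_bgr, bins=16):
    """Concatenated per-channel HSV histograms as a simple color descriptor."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hists = [cv2.calcHist([hsv], [c], None, [bins], [0, 256]).ravel() for c in range(3)]
    feat = np.concatenate(hists)
    return feat / (feat.sum() + 1e-8)

def train_content_detector(frames, labels):
    """labels: 1 = frame shows intestinal content, 0 = clear frame."""
    X = np.stack([frame_descriptor(f) for f in frames])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, labels)
    return clf

def predict_content(clf, frames):
    X = np.stack([frame_descriptor(f) for f in frames])
    return clf.predict(X)
```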
Neurogastroenterology and Motility | 2012
Carolina Malagelada; F. De Iorio; Santi Seguí; Sara Mendez; Michal Drozdzal; Jordi Vitrià; Petia Radeva; Javier Santos; Anna Accarino; J.-R. Malagelada; Fernando Azpiroz
Background This study aimed to determine the proportion of cases with abnormal intestinal motility among patients with functional bowel disorders. To this end, we applied an original method, previously developed in our laboratory, for analysis of endoluminal images obtained by capsule endoscopy. This novel technology is based on computer vision and machine learning techniques.
American Journal of Physiology-Gastrointestinal and Liver Physiology | 2015
Carolina Malagelada; Michal Drozdzal; Santi Seguí; Sara Mendez; Jordi Vitrià; Petia Radeva; Javier Santos; Anna Accarino; Juan R. Malagelada; Fernando Azpiroz
We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given Imaging) after an overnight fast, and 45 min after gastric exit of the capsule a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed by computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, completely naive to the system, as the test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited fewer luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3% vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4% vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1% vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function.
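The grouping logic described above (a normal range derived from healthy subjects, with patients outside it split into hypo- and hyperdynamic subgroups) can be illustrated with a toy sketch. The single feature and the percentile cut-offs below are assumptions made only to make the idea concrete, not the study's statistical procedure.

```python
# Toy illustration of the grouping logic: a normal range is derived from healthy
# subjects and patients falling outside it are split into hypo-/hyperdynamic
# groups. The single feature and the percentile cut-offs are illustrative
# assumptions, not the study's analysis.
import numpy as np

def classify_patients(healthy_closure_pct, patient_closure_pct, lo_q=2.5, hi_q=97.5):
    """Inputs: % of recording time spent in luminal closure sequences per subject."""
    lo, hi = np.percentile(healthy_closure_pct, [lo_q, hi_q])
    groups = {}
    for pid, value in enumerate(patient_closure_pct):
        if value < lo:
            groups[pid] = "hypodynamic"    # fewer closure sequences than normal
        elif value > hi:
            groups[pid] = "hyperdynamic"   # more closure sequences than normal
        else:
            groups[pid] = "within normal range"
    return groups
```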
Computers in Biology and Medicine | 2016
Santi Seguí; Michal Drozdzal; Guillem Pascual; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitrià
The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch, which makes the design of new CAD systems very time consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, the system reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).
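As a hedged sketch of the kind of network the abstract refers to, the snippet below defines a small convolutional classifier for six motility classes in PyTorch. The architecture, input resolution, and hyperparameters are illustrative assumptions, not the published model.

```python
# Minimal sketch of a convolutional classifier for six motility event classes.
# Architecture, input resolution and hyperparameters are illustrative
# assumptions, not the network published in the paper.
import torch
import torch.nn as nn

class MotilityCNN(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):          # x: (batch, 3, 128, 128) capsule frames
        return self.classifier(self.features(x))

model = MotilityCNN()
logits = model(torch.randn(4, 3, 128, 128))   # -> shape (4, 6)
```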
IEEE Journal of Biomedical and Health Informatics | 2014
Santi Seguí; Michal Drozdzal; Ekaterina Zaytseva; Carolina Malagelada; Fernando Azpiroz; Petia Radeva; Jordi Vitrià
Intestinal contractions are one of the most important events for diagnosing motility pathologies of the small intestine. When visualized by wireless capsule endoscopy (WCE), the sequence of frames that represents a contraction is characterized by a clear wrinkle structure in the central frames that corresponds to the folding of the intestinal wall. In this paper, we present a new method to robustly detect wrinkle frames in full WCE videos by using a new mid-level image descriptor that is based on a centrality measure proposed for graphs. We present an extended validation, carried out on a very large database, which shows that the proposed method achieves state-of-the-art performance for this task.
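The abstract mentions a mid-level descriptor based on a graph centrality measure. As a loose, hedged sketch of that general idea (not the paper's descriptor), the snippet below builds a small patch graph over a frame and summarizes node centralities with networkx; the patch features, the similarity rule, and the choice of degree centrality are all illustrative assumptions.

```python
# Hedged sketch of a centrality-based mid-level descriptor: image patches become
# graph nodes, edges connect patches with similar local gradient statistics, and
# node centralities summarize how strongly structure concentrates in the frame
# (as wrinkles do around the lumen). The graph construction is an assumption
# made for illustration only.
import numpy as np
import networkx as nx

def patch_features(gray, patch=16):
    """Mean gradient magnitude per non-overlapping patch of a grayscale frame."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    h, w = gray.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            feats.append(mag[i:i + patch, j:j + patch].mean())
    return np.array(feats)

def centrality_descriptor(gray, sim_threshold=0.9):
    feats = patch_features(gray)
    g = nx.Graph()
    g.add_nodes_from(range(len(feats)))
    # connect patches whose gradient responses are similar
    for a in range(len(feats)):
        for b in range(a + 1, len(feats)):
            sim = 1.0 / (1.0 + abs(feats[a] - feats[b]))
            if sim > sim_threshold:
                g.add_edge(a, b, weight=sim)
    cent = np.array(list(nx.degree_centrality(g).values()))
    # fixed-length summary usable as a frame descriptor
    return np.array([cent.mean(), cent.std(), cent.max()])
```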
Computers in Biology and Medicine | 2015
Michal Drozdzal; Santi Seguí; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitrià
Wireless Capsule Endoscopy (WCE) provides a new perspective on the small intestine, since it enables, for the first time, visualization of the entire organ. However, the long visual analysis time, due to the large amount of data in a single WCE study, has been an important factor impeding the widespread use of the capsule as a tool for detecting intestinal abnormalities. The introduction of WCE therefore triggered a new field for the application of computational methods and, in particular, of computer vision. In this paper, we follow the computational approach and offer a new perspective on the small intestine motility problem. Our approach consists of three steps: first, we review a tool for the visualization of the motility information contained in WCE video; second, we propose algorithms for the characterization of two motility building blocks: a contraction detector and a lumen size estimator; finally, we introduce an approach to detect segments of stable motility behavior. Our claims are supported by an evaluation performed on 10 WCE videos, suggesting that our methods ably capture the intestinal motility information.
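One of the building blocks named above, lumen size estimation, can be sketched in a few lines: the lumen typically appears as the darkest region of the frame, so its relative area can be approximated by thresholding. The threshold value and the use of the largest dark component are assumptions made for illustration, not the paper's estimator.

```python
# Illustrative sketch of a lumen-size signal: the lumen usually appears as the
# darkest region of the frame, so its relative area is approximated by keeping
# the largest connected dark component. Threshold and morphology choices are
# assumptions, not the paper's method.
import numpy as np
import cv2

def lumen_size(frame_bgr, dark_threshold=60):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    dark = (gray < dark_threshold).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dark)
    if n < 2:                      # no dark component found
        return 0.0
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return float(stats[largest, cv2.CC_STAT_AREA]) / gray.size

def lumen_size_signal(frames):
    """Per-frame lumen area ratio; low values suggest luminal closure."""
    return np.array([lumen_size(f) for f in frames])
```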
Iberian Conference on Pattern Recognition and Image Analysis | 2011
Michal Drozdzal; Santi Seguí; Carolina Malagelada; Fernando Azpiroz; Jordi Vitrià; Petia Radeva
A high-quality labeled training set is necessary for any supervised machine learning algorithm. Labeling the data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such data are the videos from Wireless Capsule Endoscopy (WCE). Building a representative WCE dataset requires many videos to be labeled by an expert. The problem is the diversity, in feature space, of data coming from different WCE studies: when new data arrive, it is highly probable that they are not represented in the training set, leading to a high probability of error when applying machine learning schemes. In this paper, we present an interactive labeling scheme that reduces expert effort in the labeling process. We show that the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of a WCE video with fewer than 100 clicks.
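In the spirit of the interactive scheme described above, the sketch below implements a simple uncertainty-sampling loop in which the classifier asks the expert only about the frames it is least certain of. The query strategy, the logistic-regression classifier, and the `ask_expert` callback are assumptions for illustration, not the paper's method.

```python
# Hedged sketch of an interactive labeling loop: the classifier queries the
# expert only on its most uncertain frames, reducing the number of "clicks".
# Uncertainty sampling and LogisticRegression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def interactive_labeling(X, ask_expert, n_seed=10, budget=100, rng=None):
    """X: (n_frames, n_features); ask_expert(i) returns the binary label of frame i.
    Assumes the randomly chosen seed frames contain both classes."""
    rng = rng or np.random.default_rng(0)
    labeled = list(rng.choice(len(X), size=n_seed, replace=False))
    y = {i: ask_expert(i) for i in labeled}
    clf = LogisticRegression(max_iter=1000)
    for _ in range(budget - n_seed):
        clf.fit(X[labeled], [y[i] for i in labeled])
        proba = clf.predict_proba(X)[:, 1]
        uncertainty = -np.abs(proba - 0.5)      # closest to 0.5 = most uncertain
        uncertainty[labeled] = -np.inf          # never re-query labeled frames
        i = int(np.argmax(uncertainty))
        y[i] = ask_expert(i)                    # one expert "click"
        labeled.append(i)
    clf.fit(X[labeled], [y[i] for i in labeled])
    return clf, y
```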
International Conference on Image Processing | 2014
Michal Drozdzal; Jordi Vitrià; Santi Seguí; Carolina Malagelada; Fernando Azpiroz; Petia Radeva
In this paper, we tackle the problem of unsupervised segmentation of intestinal events using various motility descriptors extracted from the endoluminal video. Segments of constant intestinal activity are detected with a robust statistical test based on Hoeffding's inequality. A qualitative analysis of the results shows that the segments adjust well to the motility information and can thus constitute basic units for higher-level motility description.
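To make the idea of a Hoeffding-style test concrete, the sketch below compares two adjacent windows of a bounded motility descriptor and declares a segment boundary when the difference of their means exceeds a Hoeffding-type bound. The exact form of the bound, the window handling, and the parameters are assumptions, not the paper's test.

```python
# Hedged sketch of a change-point test in the spirit of the abstract: adjacent
# windows of a bounded motility descriptor are compared, and a boundary is
# declared when the gap between their means exceeds a Hoeffding-style bound.
import numpy as np

def hoeffding_bound(n1, n2, value_range=1.0, delta=0.05):
    """Threshold on |mean1 - mean2| for values bounded in an interval of length
    value_range, with confidence parameter delta (one common form of the bound)."""
    return value_range * np.sqrt(0.5 * (1.0 / n1 + 1.0 / n2) * np.log(2.0 / delta))

def segment_boundaries(signal, window=50, delta=0.05):
    """signal: per-frame motility descriptor scaled to [0, 1]."""
    boundaries = []
    for t in range(window, len(signal) - window):
        left = signal[t - window:t]
        right = signal[t:t + window]
        if abs(left.mean() - right.mean()) > hoeffding_bound(len(left), len(right), 1.0, delta):
            boundaries.append(t)
    return boundaries
```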
Computer Vision and Pattern Recognition | 2010
Michal Drozdzal; Laura Igual; Jordi Vitrià; Carolina Malagelada; Fernando Azpiroz; Petia Radeva
Intestinal motility analysis is an important examination for the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns while taking into account the high deformability of the intestinal wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences helps to find regions of similar intestinal activity and in this way provides valuable information on intestinal motility problems. This work, for the first time, addresses the problem of aligning endoluminal sequences taking into account the motion and structure of the intestine. To describe motility in a sequence, we propose different descriptors based on the SIFT Flow algorithm, namely: (1) histograms of SIFT Flow directions to describe the flow course, (2) SIFT descriptors to represent the intestinal image structure, and (3) SIFT Flow magnitude to quantify intestinal deformation. We show that merging all three descriptors provides robust sequence description in terms of motility. Moreover, we develop a novel methodology to rank intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and that the proposed method allows the analysis of WCE sequences.
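Flow-based descriptors of this kind can be sketched with off-the-shelf tools; since SIFT Flow implementations are less commonly packaged, the snippet below uses OpenCV's Farnebäck dense optical flow as a stand-in to compute a histogram of flow directions and a mean flow magnitude per frame pair. The substitution and all parameters are assumptions for illustration, not the paper's descriptors.

```python
# Illustrative sketch of flow-based motility descriptors, using OpenCV's
# Farneback dense optical flow as a stand-in for SIFT Flow: a magnitude-weighted
# histogram of flow directions (flow course) and the mean flow magnitude
# (deformation proxy) per frame pair.
import numpy as np
import cv2

def flow_descriptors(prev_bgr, next_bgr, n_bins=8):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    direction_hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi),
                                     weights=mag)
    direction_hist = direction_hist / (direction_hist.sum() + 1e-8)
    return direction_hist, float(mag.mean())
```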
Iberoamerican Congress on Pattern Recognition | 2016
Santi Seguí; Michal Drozdzal; Guillem Pascual; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitrià
The interpretation and analysis of wireless capsule endoscopy (WCE) images is a complex task which requires sophisticated computer-aided decision (CAD) systems in order to help physicians with video screening and, finally, with the diagnosis. Most CAD systems for capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which avoids the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art hand-crafted features.