Dimitrios Markonis
University of Applied Sciences Western Switzerland
Publications
Featured research published by Dimitrios Markonis.
Methods of Information in Medicine | 2012
Dimitrios Markonis; Markus Holzer; Sebastian Dungs; A. Vargas; Georg Langs; Sascha Kriewel; Henning Müller
OBJECTIVES The main objective of this study is to learn more about the image use and search requirements of radiologists. These requirements will then be taken into account to develop a new search system for images and associated metadata in the Khresmoi project. METHODS Observations of the radiology workflow, case discussions and a literature review were performed to construct a survey form that was given online and in paper form to radiologists. Eye tracking was performed on a radiology viewing station to analyze typical tasks and to complement the survey. RESULTS In total, 34 radiologists answered the survey online or on paper. Image search was mentioned as a frequent and common task, particularly for finding cases of interest for differential diagnosis. Sources of information besides the Internet are books and discussions with colleagues. Searches for images are unsuccessful in around 25% of cases, with radiologists giving up after around 10 minutes. The most common reason for failure is that the target images are considered rare. Important additions requested in the survey are filtering by pathology and modality, as well as search for visually similar images and cases. Few radiologists are familiar with visual retrieval, but they would like the option to upload images to search for similar ones. CONCLUSIONS Image search is common in radiology, but few radiologists are fully aware of visual information retrieval. Given the many unsuccessful searches and the time spent on them, a good image search system could improve the situation and help in clinical practice.
IEEE International Conference on Healthcare Informatics, Imaging and Systems Biology | 2012
Dimitrios Markonis; Roger Schaer; Ivan Eggel; Henning Müller; Adrien Depeursinge
In this paper, MapReduce is used to speed up and make possible three large-scale medical image processing use-cases: (i) parameter optimization for lung texture classification using support vector machines (SVM), (ii) content-based medical image indexing, and (iii) three-dimensional directional wavelet analysis for solid texture classification.
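As an illustration of the parallelization pattern behind the first use-case, the sketch below expresses SVM parameter optimization as a map step (evaluate one parameter pair) and a reduce step (keep the best pair). It uses Python's multiprocessing as a stand-in for a Hadoop/MapReduce cluster; the dataset, feature dimensionality and (C, gamma) grid are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of the MapReduce pattern for SVM parameter optimization.
# multiprocessing stands in for a real MapReduce cluster; data and grid
# values below are placeholders.
from itertools import product
from multiprocessing import Pool

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))       # placeholder lung-texture features
y = rng.integers(0, 2, size=200)     # placeholder class labels

def map_evaluate(params):
    """Map step: evaluate one (C, gamma) pair independently."""
    C, gamma = params
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    return params, score

def reduce_best(results):
    """Reduce step: keep the parameter pair with the best mean accuracy."""
    return max(results, key=lambda r: r[1])

if __name__ == "__main__":
    grid = list(product([0.1, 1, 10, 100], [1e-3, 1e-2, 1e-1, 1]))
    with Pool() as pool:
        results = pool.map(map_evaluate, grid)   # parallel "map" phase
    best_params, best_score = reduce_best(results)
    print(f"best (C, gamma) = {best_params}, CV accuracy = {best_score:.3f}")
```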
MCBR-CDS'12 Proceedings of the Third MICCAI international conference on Medical Content-Based Retrieval for Clinical Decision Support | 2012
Alba Garcia Seco de Herrera; Dimitrios Markonis; Henning Müller
The number of biomedical publications has increased noticeably in the last 30 years. Clinicians and medical researchers regularly have unmet information needs, but finding publications relevant to a clinical situation requires more search time than is usually available. The techniques described in this article are used to classify images from the biomedical open access literature into categories, which can potentially reduce the search time. Only the visual information of the images is used, based on a benchmark database of ImageCLEF 2011 created for the tasks of image classification and image retrieval. We evaluate in particular the importance of color in addition to the frequently used texture and grey level features. Results show that bags-of-colors in combination with the Scale Invariant Feature Transform (SIFT) provide an image representation that improves classification quality. Accuracy improved from 69.75% for the best system in ImageCLEF 2011 using only visual information to 72.5% for the system described in this paper. The results highlight the importance of color for the classification of biomedical images.
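The combination of a bag-of-colors histogram with a SIFT bag-of-visual-words descriptor can be pictured roughly as in the sketch below, which builds both histograms for a single image and concatenates them into one feature vector. OpenCV's SIFT implementation, the vocabulary sizes and the classifier named in the outline are assumptions for illustration, not the exact pipeline of the paper.

```python
# Sketch: bag-of-colors + SIFT bag-of-visual-words for one image.
# Vocabularies are assumed to be pre-fitted clustering models (e.g.
# sklearn.cluster.KMeans); sizes and classifier choice are illustrative.
import cv2
import numpy as np

def bag_of_colors(image_bgr, color_vocab):
    """Histogram of nearest color-vocabulary entries over all pixels."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    words = color_vocab.predict(pixels)
    hist = np.bincount(words, minlength=color_vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

def bag_of_sift(image_bgr, sift, visual_vocab):
    """Histogram of quantized SIFT descriptors (bag of visual words)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return np.zeros(visual_vocab.n_clusters)
    words = visual_vocab.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=visual_vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

def describe(image_bgr, sift, color_vocab, visual_vocab):
    """Concatenate the two histograms into one feature vector."""
    return np.concatenate([bag_of_colors(image_bgr, color_vocab),
                           bag_of_sift(image_bgr, sift, visual_vocab)])

# Training outline (all names and sizes are assumptions):
#   sift         = cv2.SIFT_create()
#   color_vocab  = sklearn.cluster.KMeans(n_clusters=64).fit(sampled_pixels)
#   visual_vocab = sklearn.cluster.KMeans(n_clusters=500).fit(sampled_sift_descriptors)
#   clf = sklearn.svm.LinearSVC().fit(
#       [describe(img, sift, color_vocab, visual_vocab) for img in train_images],
#       train_labels)
```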
International Journal of Medical Informatics | 2015
Dimitrios Markonis; Markus Holzer; Frederic Baroz; Rafael Luis Ruiz De Castaneda; Célia Boyer; Georg Langs; Henning Müller
PURPOSE This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goal of the tests is to assess the usability of the system and to identify system and interface aspects that need improvement, as well as useful additions. Another objective is to investigate the system's added value for radiology information retrieval. The study provides insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. METHODS User tests with a working retrieval system for images from the biomedical literature were performed in an iterative manner: in each iteration the participants performed radiology information-seeking tasks, after which the system as well as the user study design itself were refined. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates were recorded and feedback was collected through survey forms. RESULTS In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 minutes in both cases. Users quickly felt comfortable with the novel techniques and tools (after 5 to 15 minutes), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities, while the user feedback helped identify the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and searching for articles using image examples. CONCLUSION The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects. Radiologists quickly become familiar with the functionalities but have several comments on desired additions. The analysis of the results can assist system refinement for future medical information retrieval systems. Moreover, the methodology presented, as well as the discussion of the limitations and challenges of such studies, can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive systems is still only rarely performed. Such interactive evaluations can be limited in effort if done iteratively and can give many insights for developing better systems.
International Conference on Multimedia Retrieval | 2014
Antoine Widmer; Roger Schaer; Dimitrios Markonis; Henning Müller
Large amounts of medical images are being produced to help physicians in diagnosis and treatment planning. These images are then archived in PACS (Picture Archiving and Communication Systems) and are usually only reused in the context of the same patient during further visits. Medical image retrieval systems allow medical professionals to search for images in institutional archives, on the Internet or in the scientific literature. The goal of the search can be diagnosis, but often also teaching and research. A large body of research has investigated efficient and effective algorithms to retrieve a set of images that fulfil a specific information need. However, much less research has studied simple and engaging interaction for users of medical image retrieval systems. In this paper we propose an intuitive and engaging web-based interface with gesture control, designed to be used by a wide range of users. This interface allows users to retrieve medical images through a system called the Parallel Distributed Image Search Engine (ParaDISE), a text- and content-based image retrieval system. The interface accepts searches with keywords and example images, and uses simple gestures to fetch random example images and to mark examples as positive or negative relevance feedback, with the results being updated after each interaction.
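The positive/negative relevance feedback mentioned above can be illustrated with a generic Rocchio-style update on image feature vectors, as sketched below. This is a standard technique shown purely for illustration and is not a description of how ParaDISE itself implements feedback; the feature dimensionality and weights are arbitrary.

```python
# Illustrative Rocchio-style relevance feedback on image feature vectors.
# Not the ParaDISE implementation; all parameters are placeholders.
import numpy as np

def rocchio_update(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query toward marked-relevant images and away from
    marked-irrelevant ones."""
    q = alpha * query
    if len(positives):
        q += beta * np.mean(positives, axis=0)
    if len(negatives):
        q -= gamma * np.mean(negatives, axis=0)
    return q

def rank(query, database):
    """Rank database feature vectors by cosine similarity to the query."""
    db = database / (np.linalg.norm(database, axis=1, keepdims=True) + 1e-9)
    q = query / (np.linalg.norm(query) + 1e-9)
    return np.argsort(-(db @ q))

# Example: refine a random query after one round of feedback.
rng = np.random.default_rng(0)
db = rng.random((100, 64))                      # placeholder image features
query = rng.random(64)
top = rank(query, db)[:5]
new_query = rocchio_update(query, positives=db[top[:2]], negatives=db[top[3:]])
print(rank(new_query, db)[:5])
```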
International Conference of the IEEE Engineering in Medicine and Biology Society | 2014
Antoine Widmer; Roger Schaer; Dimitrios Markonis; Henning Müller
Wearable computing devices are starting to change the way users interact with computers and the Internet. Among them, Google Glass includes a small screen located in front of the right eye, a camera filming in front of the user and a small computing unit. Google Glass has the advantage of providing online services while leaving the user's hands free for other tasks. These augmented glasses enable many useful applications, also in the medical domain. For example, Google Glass can easily provide a video conference between medical doctors to discuss a live case. Such glasses can also facilitate medical information search by allowing access to a large number of annotated medical cases during a consultation, in a non-disruptive fashion for the medical staff. In this paper, we describe a Google Glass application able to take a photo and send it to a medical image retrieval system along with keywords in order to retrieve similar cases. As a preliminary assessment of the usability of the application, we tested it under three conditions (images of the skin; printed CT scans and MRI images; and CT and MRI images acquired directly from an LCD screen) to explore whether using Google Glass affects the accuracy of the results returned by the medical image retrieval system. The preliminary results show that despite minor problems related to the stability of the Google Glass, images can be sent to and processed by the medical image retrieval system and similar images are returned to the user, potentially helping in the decision-making process.
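The client-side step described above (capturing a photo and sending it together with keywords to the retrieval back end) can be sketched as follows. The endpoint URL, field names and response format are hypothetical placeholders, not the actual ParaDISE or Khresmoi API.

```python
# Minimal sketch of sending a captured photo plus keywords to a medical
# image retrieval service. URL, fields and response format are hypothetical.
import requests

SEARCH_URL = "https://example.org/paradise/search"   # placeholder endpoint

def search_similar_cases(photo_path, keywords):
    with open(photo_path, "rb") as f:
        response = requests.post(
            SEARCH_URL,
            files={"image": f},                    # captured photo
            data={"keywords": " ".join(keywords)},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()                         # assumed JSON list of similar cases

# Example call (path and keywords are illustrative):
# results = search_similar_cases("glass_capture.jpg", ["skin", "lesion"])
```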
Computerized Medical Imaging and Graphics | 2015
Alba Garcia Seco de Herrera; Roger Schaer; Dimitrios Markonis; Henning Müller
Retrieval systems can supply clinicians with similar cases that have a proven diagnosis for a new example case under observation, helping them during their work. The ImageCLEFmed evaluation campaign proposes a framework in which research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results of the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify those best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the choice of fusion strategy can improve the best performance on the case-based retrieval task.
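To make the notion of fusion concrete, the sketch below shows one common late-fusion rule (score-normalized combSUM) for merging a visual and a textual retrieval run. It is a generic illustration only; the paper compares several fusion rules and weightings that are not reproduced here.

```python
# Generic late fusion of two retrieval runs with score-normalized combSUM.
# Document IDs, scores and weights are made-up examples.
def min_max_normalize(scores):
    """Map the scores of one run to [0, 1] so runs become comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def comb_sum(visual_scores, text_scores, w_visual=0.5, w_text=0.5):
    """Weighted sum of normalized scores; missing documents count as 0."""
    v = min_max_normalize(visual_scores)
    t = min_max_normalize(text_scores)
    docs = set(v) | set(t)
    fused = {d: w_visual * v.get(d, 0.0) + w_text * t.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example with toy scores:
visual = {"case_12": 0.91, "case_07": 0.74, "case_33": 0.40}
textual = {"case_07": 8.2, "case_12": 3.1, "case_58": 2.9}
print(comb_sum(visual, textual))
```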
Revised Selected Papers from the First International Workshop on Multimodal Retrieval in the Medical Domain - Volume 9059 | 2015
Alba Garcia Seco de Herrera; Dimitrios Markonis; Ranveer Joyseeree; Roger Schaer; Antonio Foncubierta-Rodríguez; Henning Müller
Searching for medical image content is a regular task for many physicians, especially in radiology. Retrieval of medical images from the scientific literature can benefit from automatic modality classification to focus the search and filter out non-relevant items. Training datasets are often unevenly distributed across classes, sometimes resulting in less than optimal classification performance. This article proposes a semi-supervised learning approach using a k-Nearest Neighbours (k-NN) classifier to exploit unlabelled data and expand the training set. The algorithmic implementation is described and the method is evaluated on the ImageCLEFmed modality classification benchmark. Results show that this approach achieves improved performance over supervised k-NN and Random Forest classifiers. Moreover, medical case-based retrieval also obtains higher performance when the classified modalities are used as a filter. This shows that image types can be classified well using visual information and can then be used in a variety of applications.
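A semi-supervised k-NN scheme of this kind can be sketched as self-training: unlabelled images whose neighbours agree strongly are added to the training set and the classifier is refit. The confidence threshold, k and iteration count below are illustrative assumptions, not the settings used in the paper.

```python
# Sketch of semi-supervised self-training with a k-NN classifier.
# Threshold, k and number of rounds are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def self_training_knn(X_lab, y_lab, X_unlab, k=5, threshold=0.8, max_rounds=10):
    """Iteratively add confidently classified unlabelled samples to the
    training set, then refit the k-NN classifier."""
    X_train, y_train = np.asarray(X_lab), np.asarray(y_lab)
    pool = np.asarray(X_unlab)
    for _ in range(max_rounds):
        if len(pool) == 0:
            break
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
        proba = clf.predict_proba(pool)            # neighbour vote proportions
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        new_labels = clf.classes_[proba[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, pool[confident]])
        y_train = np.concatenate([y_train, new_labels])
        pool = pool[~confident]
    return KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
```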
Proceedings of SPIE | 2012
Dimitrios Markonis; Alba Garcia Seco de Herrera; Ivan Eggel; Henning Müller
The volume of biomedical literature published regularly has grown strongly in recent years, and staying up to date even in narrow domains is difficult. Images carry essential information of their articles and, combined with keyword search, can help users browse large volumes of articles more quickly. Content-based image retrieval supports the retrieval of this visual content. To facilitate the retrieval of visual information, image categorisation can be an important first step. To represent scientific articles visually, medical images need to be separated from general images such as flowcharts or graphs to facilitate browsing, as graphs contain little information. Medical modality classification is a second step to focus the search. The techniques described in this article first classify images into broad categories. In a second step the images are further classified into the exact medical modalities. The system combines the Scale-Invariant Feature Transform (SIFT) and density-based clustering (DENCLUE). Visual words are first created globally to differentiate the broad categories, and then within each category a new visual vocabulary is created for modality classification. The results show the difficulty of differentiating between some modalities by visual means alone. On the other hand, the accuracy improvement of the two-step approach shows the usefulness of the method. The system is currently being integrated into the Goldminer image search engine of the ARRS (American Roentgen Ray Society) as a web service, allowing image search to be concentrated automatically on clinically relevant images.
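The two-step structure can be sketched as a simple hierarchical classifier: a global model first assigns the broad category, and a per-category model then predicts the exact modality. The sketch below substitutes a linear SVM for the paper's DENCLUE-based visual-word pipeline and reuses the same feature vectors at both levels for brevity, so it shows only the routing logic, not the paper's features.

```python
# Skeleton of two-step classification: broad category first, then the exact
# modality within that category. LinearSVC and shared features are stand-ins
# for the paper's per-category visual vocabularies and DENCLUE clustering.
from sklearn.svm import LinearSVC

class TwoStepClassifier:
    def __init__(self):
        self.broad_clf = LinearSVC()     # broad category (e.g. medical vs. graph)
        self.modality_clfs = {}          # one modality classifier per category

    def fit(self, X, broad_labels, modality_labels):
        self.broad_clf.fit(X, broad_labels)
        for category in set(broad_labels):
            idx = [i for i, c in enumerate(broad_labels) if c == category]
            clf = LinearSVC()
            clf.fit([X[i] for i in idx], [modality_labels[i] for i in idx])
            self.modality_clfs[category] = clf
        return self

    def predict(self, x):
        category = self.broad_clf.predict([x])[0]
        return category, self.modality_clfs[category].predict([x])[0]
```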
Proceedings of SPIE | 2013
Ajad Chhatkuli; Antonio Foncubierta-Rodríguez; Dimitrios Markonis; Fabrice Meriaudeau; Henning Müller