Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ivan Eggel is active.

Publication


Featured research published by Ivan Eggel.


IEEE International Conference on Healthcare Informatics, Imaging and Systems Biology | 2012

Using MapReduce for Large-Scale Medical Image Analysis

Dimitrios Markonis; Roger Schaer; Ivan Eggel; Henning Müller; Adrien Depeursinge

In this paper, MapReduce is used to speed up and make possible three large-scale medical image processing use-cases: (i) parameter optimization for lung texture classification using support vector machines (SVM), (ii) content-based medical image indexing, and (iii) three-dimensional directional wavelet analysis for solid texture classification.
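As an illustration of the first use-case, the sketch below shows a MapReduce-style grid search for SVM parameters. It is a minimal, hypothetical Python example, not the paper's implementation: scikit-learn's digits dataset stands in for lung texture features, and the map step runs serially here, although each map task is independent and could be distributed across a Hadoop cluster.

```python
# Minimal sketch of MapReduce-style SVM parameter optimization (use-case i).
# Assumptions: scikit-learn is available; the digits dataset stands in for
# lung texture features; map() runs serially here, but each map task is
# independent and could be farmed out to cluster nodes.
from itertools import product

from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def map_task(params):
    """Map: evaluate one (C, gamma) combination by cross-validation."""
    C, gamma = params
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    return params, score

def reduce_task(results):
    """Reduce: keep the best-scoring parameter combination."""
    return max(results, key=lambda kv: kv[1])

grid = product([0.1, 1, 10, 100], [1e-4, 1e-3, 1e-2])
best_params, best_score = reduce_task(map(map_task, grid))
print(f"best (C, gamma) = {best_params}, accuracy = {best_score:.3f}")
```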


IEEE Transactions on Medical Imaging | 2016

Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks

Oscar Jimenez-del-Toro; Henning Müller; Markus Krenn; Katharina Gruenberg; Abdel Aziz Taha; Marianne Winterstein; Ivan Eggel; Antonio Foncubierta-Rodríguez; Orcun Goksel; András Jakab; Georgios Kontokotsios; Georg Langs; Bjoern H. Menze; Tomas Salas Fernandez; Roger Schaer; Anna Walleyo; Marc-André Weber; Yashin Dicente Cid; Tobias Gass; Mattias P. Heinrich; Fucang Jia; Fredrik Kahl; Razmig Kéchichian; Dominic Mai; Assaf B. Spanier; Graham Vincent; Chunliang Wang; Daniel Wyeth; Allan Hanbury

Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and automatic tools can help automate parts of their otherwise manual analysis. This paper presents a cloud-based evaluation framework, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the benchmark administrators then run the algorithms privately to compare their performance objectively on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms on a larger set of non-manually-annotated medical images, are available to the research community.
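To give a flavour of how such a benchmark scores submissions, the sketch below computes the Dice overlap coefficient, a standard metric for comparing a binary segmentation against a reference annotation. This is illustrative only; the VISCERAL benchmarks report a broader set of metrics.

```python
# Illustrative only: Dice overlap between a binary segmentation mask and a
# reference (gold corpus) mask, one of the standard segmentation metrics.
import numpy as np

def dice_coefficient(seg: np.ndarray, ref: np.ndarray) -> float:
    """Return 2|A∩B| / (|A| + |B|) for two binary masks of equal shape."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    denom = seg.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Example: two 3D masks that mostly overlap.
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 1:4] = True
print(f"Dice = {dice_coefficient(a, b):.3f}")  # prints 0.800
```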


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Mobile Medical Visual Information Retrieval

Adrien Depeursinge; Samuel Duc; Ivan Eggel; Henning Müller

In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space query a medical information retrieval engine via web services, minimizing the amount of data transferred over wireless connections. Visual and textual retrieval engines with state-of-the-art performance were integrated. The results show good usability of the software. Future use in clinical environments has the potential to increase the quality of patient care through bedside access to the medical literature in context.
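The sketch below illustrates the kind of web-service call such a mobile interface might make. The endpoint, parameters, and response fields are all hypothetical; the point is restricting the result fields and page size so that little data crosses the wireless link.

```python
# Hypothetical sketch of a mobile client querying a retrieval web service.
# The URL, parameters, and response layout are invented for illustration;
# the idea is to request only compact fields (title, thumbnail) and small
# pages, keeping wireless data transfer low.
import requests  # assumes the `requests` package is installed

BASE_URL = "https://example.org/retrieval/api/search"  # hypothetical endpoint

def mobile_search(query: str, page: int = 1, page_size: int = 10) -> list:
    params = {
        "q": query,
        "page": page,
        "size": page_size,               # small pages for small screens
        "fields": "title,thumbnail_url"  # omit full text and full-size images
    }
    response = requests.get(BASE_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["results"]
```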


Proceedings of SPIE | 2012

Multi-scale visual words for hierarchical medical image categorisation

Dimitrios Markonis; Alba Garcia Seco de Herrera; Ivan Eggel; Henning Müller

The volume of biomedical literature published regularly has grown strongly in recent years, and staying up to date even in narrow domains is difficult. Images carry essential information of their articles and, in connection with keyword search, can help browse through large volumes of articles more quickly. Content-based image retrieval supports the retrieval of this visual content, and image categorisation can be an important first step towards it: to represent scientific articles visually, medical images need to be separated from general images such as flowcharts or graphs, which contain little visual information, to facilitate browsing. Medical modality classification is a second step to focus the search. The techniques described in this article first classify images into broad categories; in a second step, the images are further classified into their exact medical modalities. The system combines the Scale-Invariant Feature Transform (SIFT) with density-based clustering (DENCLUE). Visual words are first created globally to differentiate the broad categories, and then a new visual vocabulary is created within each category for modality classification. The results show the difficulty of differentiating between some modalities by visual means alone; on the other hand, the accuracy improvement of the two-step approach shows the usefulness of the method. The system is currently being integrated into the Goldminer image search engine of the ARRS (American Roentgen Ray Society) as a web service, allowing image search to be concentrated automatically on clinically relevant images.
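A minimal sketch of the bag-of-visual-words step behind this two-step scheme is shown below. It assumes OpenCV for SIFT and substitutes k-means for the DENCLUE clustering used in the paper; in the two-step approach, one vocabulary would be built globally and a second one per broad category.

```python
# Sketch of the bag-of-visual-words representation behind the two-step
# classification. Assumptions: OpenCV (cv2) provides SIFT; k-means is used
# here in place of the paper's DENCLUE clustering.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(gray_images):
    """Extract SIFT descriptors from a list of grayscale images."""
    sift = cv2.SIFT_create()
    descs = [sift.detectAndCompute(img, None)[1] for img in gray_images]
    return [d for d in descs if d is not None]

def build_vocabulary(descriptor_sets, n_words=200):
    """Cluster all pooled descriptors into n_words visual words."""
    return KMeans(n_clusters=n_words, n_init=4).fit(np.vstack(descriptor_sets))

def bovw_histogram(descriptors, vocabulary):
    """Represent one image as a normalized histogram over visual words."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()
```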


Proceedings of SPIE | 2011

Mobile medical image retrieval

Samuel Duc; Adrien Depeursinge; Ivan Eggel; Henning Müller

Images are an integral part of medical practice for diagnosis, treatment planning and teaching, and image retrieval has gained in importance, mainly as a research domain, over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable, with functionality equaling that of former desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search and access, in which images can play an important role, as they often allow complex scenarios to be understood much more quickly and easily than free text. Mobile information retrieval in general has skyrocketed over the past year, with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes the constraints of an information retrieval system covering visual and textual retrieval from the medical literature of BioMedCentral and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access are presented with an example on an iPhone in a web-based environment, as iPhones are frequently used and their operating system was expected to become the most widespread smartphone operating system in 2011. A web-based scenario was chosen to also allow use by other smartphone platforms such as Android. The constraints of small screens and touch-screen navigation are taken into account in the development of the application. A hybrid approach had to be taken to allow taking pictures with the phone camera and uploading them for visual similarity search, as most smartphone producers block this functionality for web applications. Mobile information access, and in particular access to images, can be surprisingly efficient and effective on smaller screens: images can be read on screen much faster, and the relevance of documents can be identified quickly through the images contained in the text. Problems with the many, often incompatible mobile platforms were discovered and are listed in the text. Mobile information access is a quickly growing domain, and its constraints also need to be taken into account for image retrieval. The demonstrated access to the medical literature is highly relevant, as the medical literature and its images are clearly the largest knowledge source in the medical field.


Cloud-Based Benchmarking of Medical Image Analysis | 2017

Using the Cloud as a Platform for Evaluation and Data Preparation

Ivan Eggel; Roger Schaer; Henning Müller

This chapter gives a brief overview of the VISCERAL Registration System, which is used for all VISCERAL Benchmarks and is released as open source on GitHub. The system can be accessed by both participants and administrators, reducing direct participant–organizer interaction, and handles the documentation available for each of the benchmarks organized by VISCERAL. The upload of the VISCERAL usage and participation agreements is also integrated, as is the allocation of the virtual machines that allow participation in the VISCERAL Benchmarks. The second part summarizes the steps of the continuous evaluation chain, consisting mainly of submission, algorithm execution and storage, and the evaluation of results. The final part details the cloud infrastructure, describing the process of defining requirements, selecting a cloud solution provider, setting up the infrastructure and running the benchmarks. The chapter concludes with a short experience report outlining the challenges encountered and lessons learned.


Proceedings of SPIE | 2013

Determining the relative importance of figures in journal articles to find representative images

Henning Müller; Antonio Foncubierta-Rodríguez; Chang Lin; Ivan Eggel

When physicians search for articles in the medical literature, the images of the articles can help determine the relevance of the article content for a specific information need. A visual image representation can be an advantage in effectiveness (quality of the articles found) and also in efficiency (speed of determining relevance or irrelevance), as many articles can likely be excluded much more quickly by looking at a few representative images. In domains such as medical information retrieval, being able to determine relevance quickly and accurately is an important criterion. This becomes even more important when small interfaces are used, as is frequently the case when mobile phones and tablets access scientific data wherever information needs arise. Scientific articles use many figures, and particularly in the biomedical literature only a subset may be relevant for determining the relevance of a specific article to an information need. In many cases clinical images can be seen as more important for visual appearance than graphs or histograms, which require looking at the context for interpretation. To get a clearer idea of image relevance in articles, a user test was performed in which a physician classified the images of biomedical research articles into categories of importance that can subsequently be used to evaluate algorithms that automatically select representative images. The manual sorting by importance of the images of 50 BioMedCentral journal articles, each containing more than eight figures, also allows several rules to be derived on how to choose images and how to develop algorithms for choosing the most representative images of specific texts. This article describes the user tests and is a first important step towards evaluating automatic tools that select representative images for articles, and potentially images in other contexts, for example when representing patient records or when selecting images to represent RadLex terms in tutorials or interactive interfaces. This can help make the image retrieval process more efficient and effective for physicians.


World Congress on Medical and Health Informatics, MEDINFO | 2010

Retrieving similar cases from the medical literature - The ImageCLEF experience

Jayashree Kalpathy-Cramer; Steven Bedrick; Saïd Radhouani; William R. Hersh; Ivan Eggel; Charles E. Kahn; Henning Müller

An increasing number of clinicians, researchers, educators and patients routinely search for relevant medical images using search engines on the internet as well as in image archives and PACS systems. However, image retrieval is far less understood and developed than text-based searching. The ImageCLEF medical image retrieval task is an international challenge evaluation that enables researchers to assess and compare techniques for medical image retrieval using test collections. In this paper, we describe the development of the ImageCLEF medical image test collection, consisting of a database of images and their associated annotations, as well as a set of realistic search topics and relevance judgments obtained from a set of experts. 2009 was the sixth year of the ImageCLEF medical retrieval task, with strong participation from research groups across the globe. We provide results from this year's evaluation and discuss the successes we have had as well as the challenges going forward.


Cross-Language Evaluation Forum | 2009

The ImageCLEF management system

Ivan Eggel; Henning Müller

The ImageCLEF image retrieval track has been part of CLEF (Cross-Language Evaluation Forum) since 2003. Organizing ImageCLEF, with its large number of participating research groups, involves a considerable amount of work and data to manage. The goal of the management system described in this paper is to support the organization of ImageCLEF, reducing manual work and professionalizing its structures. Having all ImageCLEF subtracks in a single run submission system reduces the organizers' workload and makes submissions easier for participants. The system was developed as a web application using Java and JavaServer Faces (JSF) on Glassfish with a Postgres 8.3 database. The main functionality consists of user, collection and subtrack management as well as run submission. The system has two main user groups, participants and administrators: the main task for participants is to register for subtracks and then submit runs, while administrators create collections for the subtracks and can define the data and constraints for submissions. The described system was used for ImageCLEF 2009 with 86 subscribed users and more than 300 submitted runs in 7 subtracks. The system has proved to reduce manual work significantly and will be used for upcoming ImageCLEF events and other evaluation campaigns.
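The run-submission logic can be pictured as follows: administrators define per-subtrack constraints, and the system validates each run against them before accepting it. The sketch below is a hypothetical Python illustration of that idea only; the actual system was a Java/JSF web application on Glassfish with a Postgres 8.3 database.

```python
# Hypothetical sketch of run-submission validation against admin-defined
# constraints; the real system was Java/JSF, not this Python code.
from dataclasses import dataclass, field

@dataclass
class Subtrack:
    name: str
    max_runs_per_group: int = 10               # constraint set by administrators
    runs: dict = field(default_factory=dict)   # group name -> list of run ids

    def submit_run(self, group: str, run_id: str) -> None:
        """Accept a run only if the group is still under its run limit."""
        submitted = self.runs.setdefault(group, [])
        if len(submitted) >= self.max_runs_per_group:
            raise ValueError(f"{group} exceeded the run limit for {self.name}")
        submitted.append(run_id)

track = Subtrack("medical-retrieval", max_runs_per_group=2)
track.submit_run("groupA", "run1")
track.submit_run("groupA", "run2")
# A third submission from groupA would raise ValueError.
```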


Cross-Language Evaluation Forum | 2009

Overview of the CLEF 2009 medical image retrieval track

Henning Müller; Jayashree Kalpathy-Cramer; Ivan Eggel; Steven Bedrick; Saïd Radhouani; Brian Bakke; Charles E. Kahn; William R. Hersh

Collaboration


Dive into Ivan Eggel's collaboration.

Top Co-Authors

Henning Müller
University of Applied Sciences Western Switzerland

Allan Hanbury
Vienna University of Technology

Roger Schaer
University of Applied Sciences Western Switzerland

Alba Garcia Seco de Herrera
University of Applied Sciences Western Switzerland

Antonio Foncubierta-Rodríguez
University of Applied Sciences Western Switzerland

Charles E. Kahn
Medical College of Wisconsin

Abdel Aziz Taha
Vienna University of Technology

Georg Langs
Medical University of Vienna