Publication


Featured research published by Henning Müller.


Cross-Language Evaluation Forum | 2004

The CLEF 2004 cross-language image retrieval track

Paul D. Clough; Henning Müller; Thomas Deselaers; Michael Grubinger; Thomas Martin Lehmann; Jeffery R. Jensen; William R. Hersh

The purpose of this paper is to outline efforts from the 2004 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text and content-based retrieval methods for cross-language image retrieval. Three tasks were offered in the ImageCLEF track: a TREC-style ad-hoc retrieval task, retrieval from a medical collection, and a user-centered (interactive) evaluation task. Eighteen research groups from a variety of backgrounds and nationalities participated in ImageCLEF. In this paper we describe the ImageCLEF tasks and submissions from participating groups, and summarise the main findings.


International Conference on Pattern Recognition | 2000

Strategies for positive and negative relevance feedback in image retrieval

Henning Müller; Wolfgang Müller; Stéphane Marchand-Maillet; Thierry Pun; David McG. Squire

Relevance feedback has been shown to be a very effective tool for enhancing retrieval results in text retrieval. It has also been used increasingly in content-based image retrieval, where very good results have been obtained. However, too much negative feedback may destroy a query as good features receive negative weightings. This paper compares a variety of strategies for positive and negative feedback. Evaluating the performance of feedback algorithms is a hard problem. To address it, we obtain judgements from several users and employ an automated feedback scheme, then evaluate the different techniques using the same judgements. Using automated feedback, the ability of a system to adapt to the user's needs can be measured very effectively. Our study highlights the utility of negative feedback, especially over several feedback steps.
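The risk that negative feedback destroys a query can be illustrated with a Rocchio-style update, a classic feedback strategy borrowed from text retrieval. This is a minimal sketch with illustrative parameter values, not the exact strategies compared in the paper:

```python
def rocchio_update(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query feature vector toward relevant images and away from
    non-relevant ones. A small gamma limits the damage that too much
    negative feedback can do to useful feature dimensions."""
    dims = len(query)
    updated = [alpha * q for q in query]
    for vec in positives:
        for i in range(dims):
            updated[i] += beta * vec[i] / len(positives)
    for vec in negatives:
        for i in range(dims):
            updated[i] -= gamma * vec[i] / len(negatives)
    # Clamp weights at zero so good features are not driven negative.
    return [max(0.0, w) for w in updated]
```

Choosing gamma much smaller than beta is one way to keep negative examples from overwhelming the positive evidence over several feedback steps.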


International Journal of Computer Vision | 2007

The CLEF 2005 Automatic Medical Image Annotation Task

Thomas Deselaers; Henning Müller; Paul D. Clough; Hermann Ney; Thomas Martin Lehmann

In this paper, the automatic annotation task of the 2005 CLEF cross-language image retrieval campaign (ImageCLEF) is described. This paper focuses on the database used, the task setup, and the plans for further medical image annotation tasks in the context of ImageCLEF. Furthermore, a short summary of the results of 2005 is given. The automatic annotation task was added to ImageCLEF in 2005 and provides the first international evaluation of state-of-the-art methods for completely automatic annotation of medical images based on visual properties. The aim of this task is to explore and promote the use of automatic annotation techniques that allow semantic information to be extracted from little-annotated medical images. A database of 10,000 images was established and annotated by experienced physicians, resulting in 57 classes, each with at least 10 images. A detailed analysis is given regarding the (i) image representation, (ii) classification method, and (iii) learning method. Based on the strong participation in the 2005 campaign, future benchmarks are planned.


Computerized Medical Imaging and Graphics | 2006

Assessment of Internet-based tele-medicine in Africa (the RAFT project)

Cheick Oumar Bagayoko; Henning Müller; Antoine Geissbuhler

The objectives of this paper on the Réseau Afrique Francophone de Télémédecine (RAFT) project are the evaluation of the feasibility, potential, problems and risks of an Internet-based tele-medicine network in developing countries of Africa. The RAFT project was started in Western African countries 5 years ago and has now extended to other regions of Africa as well (i.e. Madagascar, Rwanda). A project for the development of a national tele-medicine network in Mali was initiated in 2001, extended to Mauritania in 2002 and to Morocco in 2003. By 2006, a total of nine countries were connected. The entire technical infrastructure is based on Internet technologies for medical distance learning and tele-consultations. The result is a tele-medicine network that has been in productive use for over 5 years and has enabled various collaboration channels, including North-to-South (from Europe to Africa), South-to-South (within Africa), and South-to-North (from Africa to Europe) distance learning and tele-consultations, plus many personal exchanges between the participating hospitals and universities. It has also unveiled a set of potential problems: (a) the limited importance of North-to-South collaborations when there are major differences in the available resources or the socio-cultural contexts between the collaborating parties; (b) the risk of an induced digital divide if the periphery of the health system in developing countries is not involved in the development of the network; and (c) the need for the development of local medical content management skills. Point (c) in particular is improved through the collaboration between the various countries, as professionals from the medical and the computer science fields are sharing courses and resources. Personal exchanges between partners in the project are frequent, and several persons received an education at one of the partner universities.
In conclusion, the identified risks have to be taken into account when designing large-scale tele-medicine projects in developing countries. These problems can be mitigated by fostering South-South collaboration channels, by using satellite-based Internet connectivity in remote areas, and by appreciating local knowledge and publishing it online. The availability of such an infrastructure also facilitates the development of other projects, courses, and local content creation.


Medical Imaging 2004: PACS and Imaging Informatics | 2004

Comparing feature sets for content-based image retrieval in a medical-case database

Henning Müller; Antoine Rosset; Jean-Paul Vallée; Antoine Geissbuhler

Content-based image retrieval systems (CBIRSs) have frequently been proposed for use in medical image databases and PACS. Still, only few systems have been developed and used in a real clinical environment. It rather seems that medical professionals define their needs and computer scientists develop systems based on data sets they receive, with little or no interaction between the two groups. A first study on the diagnostic use of medical image retrieval also shows an improvement in diagnostics when using CBIRSs, which underlines the potential importance of this technique. This article explains the use of an open source image retrieval system (GIFT - GNU Image Finding Tool) for the retrieval of medical images in the medical case database system CasImage that is used in daily clinical routine at the university hospitals of Geneva. Although the base system of GIFT shows an unsatisfactory performance, even small changes in the feature space are shown to significantly improve the retrieval results. The performance of variations in feature space with respect to color (gray level) quantizations and changes in texture analysis (Gabor filters) is compared. Whereas stock photography relies mainly on colors for retrieval, medical images need an appropriate number of gray levels for successful retrieval, especially when executing feedback queries. The results also show that a too fine granularity in the gray levels lowers the retrieval quality, especially with single-image queries. For the evaluation of the retrieval performance, a subset of 3752 images is taken from the entire case database of more than 40,000 images. Ground truth was generated by a user who defined the expected query result of a perfect system by selecting images relevant to a given query image. The results show that a smaller number of gray levels (32-64) leads to a better retrieval performance, especially when using relevance feedback.
The use of more scales and directions for the Gabor filters in the texture analysis also leads to improved results, but response time increases accordingly due to the larger feature space. CBIRSs can be of great use in managing large medical image databases. They make it possible to find images that might otherwise be lost for research and publications. They also give students the possibility to navigate within large image repositories. In the future, CBIR might also become more important in case-based reasoning and evidence-based medicine to support diagnostics, as first studies show good results.
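The gray-level quantization studied here can be sketched as a simple global histogram feature. `gray_histogram` is an illustrative helper, not GIFT's actual feature extraction; lowering `levels` (e.g. to 32 or 64) coarsens the quantization:

```python
def gray_histogram(pixels, levels=32, max_val=256):
    """Quantize 8-bit gray values into `levels` bins and return a
    normalized histogram, a basic global feature for image retrieval."""
    bins = [0] * levels
    for p in pixels:
        # Map a gray value in [0, max_val) to one of `levels` bins.
        bins[p * levels // max_val] += 1
    total = len(pixels)
    return [b / total for b in bins]
```

With too many bins, nearly identical gray values land in different bins and no longer match between query and database images, which is one intuition for why an overly fine granularity hurts single-image queries.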


Medical Imaging 2008: PACS and Imaging Informatics | 2008

A classification framework for lung tissue categorization

Adrien Depeursinge; Jimison Iavindrasana; Asmâa Hidki; Gilles Cohen; Antoine Geissbuhler; Alexandra Platon; Pierre-Alexandre Alois Poletti; Henning Müller

We compare five common classifier families in their ability to categorize six lung tissue patterns, including normal tissue, in high-resolution computed tomography (HRCT) images of patients affected with interstitial lung diseases (ILD). The evaluated classifiers are Naive Bayes, k-Nearest Neighbor (k-NN), J48 decision trees, Multi-Layer Perceptron (MLP) and Support Vector Machines (SVM). The dataset used contains 843 regions of interest (ROI) of healthy and five pathologic lung tissue patterns identified by two radiologists at the University Hospitals of Geneva. Correlation of the feature space composed of 39 texture attributes is studied. A grid search for optimal parameters is carried out for each classifier family. Two complementary metrics, based on McNemar's statistical tests and global accuracy, are used to characterize classification performance. SVM reached the best values for each metric and achieved a mean correct prediction rate of 87.9% with high class-specific precision on testing sets of 423 ROIs.
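One of the five compared families, k-NN, is simple enough to sketch directly: a ROI's texture feature vector is labeled by majority vote among its nearest training examples. This is an illustrative toy implementation, not the paper's evaluated setup (which used 39 texture attributes and a grid search over parameters such as k):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training examples under Euclidean distance.

    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In a real evaluation, k would be tuned by the grid search and the 843 ROIs split into training and testing sets.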


Medical Imaging 2007: Computer-Aided Diagnosis | 2007

Image-based diagnostic aid for interstitial lung disease with secondary data integration

Adrien Depeursinge; Henning Müller; Asmâa Hidki; Pierre-Alexandre Alois Poletti; Alexandra Platon; Antoine Geissbuhler

Interstitial lung diseases (ILDs) are a relatively heterogeneous group of around 150 illnesses with often very unspecific symptoms. The most complete imaging method for the characterisation of ILDs is high-resolution computed tomography (HRCT) of the chest, but a correct interpretation of these images is difficult even for specialists, as many diseases are rare and thus little experience exists. Moreover, interpreting HRCT images requires knowledge of the context defined by the clinical data of the studied case. A computerised diagnostic aid tool based on HRCT images with associated medical data to retrieve similar cases of ILDs from a dedicated database can bring quick and precious information, for example for emergency radiologists. The experience from a pilot project highlighted the need for a detailed database containing high-quality annotations in addition to clinical data. The state of the art is studied to identify requirements for image-based diagnostic aid for interstitial lung disease with secondary data integration. The data acquisition steps are detailed. The selection of the most relevant clinical parameters is done in collaboration with lung specialists, based on the current literature along with knowledge bases of computer-based diagnostic decision support systems. In order to perform high-quality annotations of the interstitial lung tissue in the HRCT images, annotation software with its own file format is implemented for DICOM images. A multimedia database is implemented to store ILD cases with clinical data and annotated image series. Cases from the University and University Hospitals of Geneva (HUG) are retrospectively and prospectively collected to populate the database. Currently, 59 cases with certified diagnosis and their clinical parameters are stored in the database, as well as 254 image series of which 26 have their regions of interest annotated. The available data was used to test primary visual features for the classification of lung tissue patterns. These features show good discriminative properties for the separation of five classes of visual observations.


Medical Imaging and Informatics | 2008

Learning a Frequency-Based Weighting for Medical Image Classification

Tobias Gass; Adrien Depeursinge; Antoine Geissbuhler; Henning Müller

This article describes the use of a frequency-based weighting developed for image retrieval to perform automatic annotation of images (medical and non-medical). The techniques applied are based on the simple tf/idf (term frequency, inverse document frequency) weighting scheme of GIFT (GNU Image Finding Tool), which is augmented by feature weights extracted from training data. The additional weights represent a measure of discrimination, taking into account the number of occurrences of the features in pairs of images of the same class or in pairs of images from different classes. The approach is adapted to the image classification task by pruning parts of the training data. Further investigations showed that the weightings lead to significantly worse classification quality in certain feature domains. A classifier using a mixture of tf/idf-weighted scoring, learned feature weights, and regular Euclidean distance gave the best results using only the simple features. Using the aspect ratio of images as an additional feature improved results significantly.
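The combination of tf/idf scoring with learned per-feature weights can be sketched as follows. `tfidf_scores`, its inverted-index layout, and the `learned_w` dictionary are assumptions for illustration, not GIFT's actual implementation:

```python
import math

def tfidf_scores(query_features, inverted_index, n_images, learned_w=None):
    """Score images against a query using tf/idf feature weighting,
    optionally multiplied by per-feature discrimination weights that
    would be learned from same-class vs. different-class image pairs.

    `inverted_index` maps a feature id to a list of (image_id, tf) pairs."""
    learned_w = learned_w or {}
    scores = {}
    for feat in query_features:
        postings = inverted_index.get(feat, [])
        if not postings:
            continue
        idf = math.log(n_images / len(postings))  # rarer features score higher
        w = learned_w.get(feat, 1.0)              # learned discrimination weight
        for image_id, tf in postings:
            scores[image_id] = scores.get(image_id, 0.0) + tf * idf * w
    return scores
```

Setting all learned weights to 1.0 recovers plain tf/idf scoring, which makes it easy to compare the two schemes on the same index.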


Medical Imaging 2018: Digital Pathology | 2018

Tumor proliferation assessment of whole slide images

Oscar Jimenez-del-Toro; Mikael Rousson; Martin Hedlund; Mats Andersson; Ludwig Jacobsson; Gunnar Läthén; Björn Norell; Henning Müller; Manfredo Atzori

Grading whole slide images (WSIs) from patient tissue samples is an important task in digital pathology, particularly for diagnosis and treatment planning. However, this visual inspection task, performed by pathologists, is inherently subjective and has limited reproducibility. Moreover, grading of WSIs is time-consuming and expensive. Designing a robust and automatic solution for quantitative decision support can improve the objectivity and reproducibility of this task. This paper presents a fully automatic pipeline for tumor proliferation assessment based on mitosis counting. The approach consists of three steps: i) region of interest (ROI) selection based on tumor color characteristics, ii) mitosis counting using a deep-network-based detector, and iii) grade prediction from ROI mitosis counts. The full strategy was submitted and evaluated during the Tumor Proliferation Assessment Challenge (TUPAC) 2016. TUPAC is the first digital pathology challenge on grading whole slide images, thus mimicking a real-case scenario more closely. The pipeline is extremely fast and obtained 2nd place in the tumor proliferation assessment task and 3rd place in the mitosis counting task among 17 participants. The performance of this fully automatic method is similar to that of pathologists, which shows the high quality of automatic solutions for decision support.
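The final step of the pipeline, mapping mitosis counts from the selected ROIs to a proliferation grade, amounts to thresholding an aggregate count. The thresholds below are illustrative placeholders, not the values used in the TUPAC submission:

```python
def proliferation_grade(roi_mitosis_counts, low=7, high=12):
    """Map per-ROI mitosis counts to a proliferation grade (1-3) by
    thresholding the total count. `low` and `high` are hypothetical
    cut-offs; a real system would calibrate them on training data."""
    total = sum(roi_mitosis_counts)
    if total < low:
        return 1
    if total <= high:
        return 2
    return 3
```

In the full pipeline, the counts would come from the deep-network mitosis detector applied to the color-selected ROIs.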


Archive | 2017

Cloud-Based Benchmarking of Medical Image Analysis

Allan Hanbury; Henning Müller; Georg Langs

Systematic evaluation has had a strong impact on many data analysis domains, for example, TREC and CLEF in information retrieval, ImageCLEF in image retrieval, and many challenges at conferences such as MICCAI for medical imaging and ICPR for pattern recognition. With Kaggle, a platform for machine learning challenges has also had significant success in crowdsourcing solutions. This shows the importance of systematically evaluating algorithms, and that the impact is far larger than simply evaluating a single system. Many of these challenges also showed the limits of the commonly used paradigm of preparing a data collection and tasks, distributing these, and then evaluating the participants' submissions. Extremely large datasets are cumbersome to download, while shipping hard disks containing the data becomes impractical. Confidential data, for example medical data and data from company repositories, can often not be shared. Real-time data will never be available via static data collections, as the data change over time and data preparation often takes much time. The Evaluation-as-a-Service (EaaS) paradigm tries to find solutions for many of these problems and has been applied in the VISCERAL project. In EaaS, the data are not moved but remain on a central infrastructure. In the case of VISCERAL, all data were made available in a cloud environment. Participants were provided with virtual machines on which to install their algorithms. Only a small part of the data, the training data, was visible to participants. The major part of the data, the test data, was only accessible to the organizers, who ran the algorithms in the participants' virtual machines on the test data to obtain impartial performance measures.

Collaboration


Dive into Henning Müller's collaboration.

Top Co-Authors

Yashin Dicente Cid
University of Applied Sciences Western Switzerland

Manfredo Atzori
University of Applied Sciences Western Switzerland

Patrick Ruch
École Polytechnique Fédérale de Lausanne

Oscar Alfonso Jiménez del Toro
University of Applied Sciences Western Switzerland

Allan Hanbury
Vienna University of Technology

Julien Gobeill
University of Applied Sciences Western Switzerland