
Publication


Featured research published by Andreas K. Maier.


Speech Communication | 2009

PEAKS - A system for the automatic evaluation of voice and speech disorders

Andreas K. Maier; Tino Haderlein; Ulrich Eysholdt; Frank Rosanowski; Anton Batliner; Maria Schuster; Elmar Nöth

We present a novel system for the automatic evaluation of speech and voice disorders. The system can be accessed via the internet in a platform-independent manner. The patient reads a text or names pictures; his or her speech is then analyzed by automatic speech recognition and prosodic analysis. For patients who had their larynx removed due to cancer and for children with cleft lip and palate, we show that we can achieve significant correlations between the automatic analysis and the judgment of human experts in a leave-one-out experiment (p < .001). A correlation of .90 was obtained for the evaluation of the laryngectomees and of .87 for the children's data. This is comparable to human inter-rater correlations.
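The evaluation protocol above correlates automatic scores with expert ratings. Purely as an illustration of that final step, Pearson's r can be computed as follows (the score values below are hypothetical, not from the paper):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Hypothetical automatic intelligibility scores vs. expert ratings
auto = [72.1, 65.3, 80.0, 58.2, 69.9]
expert = [3.1, 2.5, 3.8, 2.0, 2.9]
r = pearson_r(auto, expert)
```

In a leave-one-out setting, each automatic score is produced by a model trained on all remaining patients before the correlation is computed.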


International Conference on Acoustics, Speech, and Signal Processing | 2007

Towards More Reality in the Recognition of Emotional Speech

Björn W. Schuller; Dino Seppi; Anton Batliner; Andreas K. Maier; Stefan Steidl

As automatic emotion recognition based on speech matures, new challenges can be faced. We therefore address the major aspects in view of potential applications in the field, to benchmark today's emotion recognition systems and bridge the gap between commercial interest and current performance: acted vs. spontaneous speech, realistic emotions, noise and microphone conditions, and speaker independence. Three different data sets are used: the Berlin Emotional Speech Database, the Danish Emotional Speech Database, and the spontaneous AIBO Emotion Corpus. By using different feature types, such as word- or turn-based statistics, manual versus forced alignment, and optimization techniques, we show how to best cope with this demanding task and how noise addition or different microphone positions affect emotion recognition.
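Turn-based statistics of the kind mentioned above collapse a variable-length sequence of frame-level features into a fixed-size vector a classifier can consume. A minimal sketch (the actual feature set used in such systems is far larger):

```python
import numpy as np

def turn_statistics(frames):
    """Collapse a variable-length sequence of frame-level values
    (e.g. pitch or energy per frame) into turn-level statistics:
    mean, standard deviation, minimum, maximum, and range."""
    f = np.asarray(frames, float)
    return np.array([f.mean(), f.std(), f.min(), f.max(), f.max() - f.min()])
```

Each turn then yields one fixed-size vector regardless of its duration, which is what makes turn-level classification straightforward.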


International Conference on Acoustics, Speech, and Signal Processing | 2008

Age and gender recognition for telephone applications based on GMM supervectors and support vector machines

Tobias Bocklet; Andreas K. Maier; Josef Bauer; Felix Burkhardt; Elmar Nöth

This paper compares two approaches to automatic age and gender classification with 7 classes. The first approach uses Gaussian mixture models (GMMs) with universal background models (UBMs), which is well known from the task of speaker identification/verification. The training is performed by the EM algorithm or by MAP adaptation, respectively. In the second approach, a GMM model is trained for each speaker of the test and training sets. The means of each model are extracted and concatenated, which results in a GMM supervector for each speaker. These supervectors are then used in a support vector machine (SVM). Three different kernels were employed for the SVM approach: a polynomial kernel (with different polynomials), an RBF kernel, and a linear GMM distance kernel based on the KL divergence. With the SVM approach we improved the recognition rate to 74% (p < 0.001), which is in the same range as human performance.
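A rough sketch of the supervector construction described above, assuming the frame posteriors under the UBM have already been computed; the parameter names and the relevance factor `tau` are illustrative, not taken from the paper:

```python
import numpy as np

def map_adapt_means(ubm_means, frames, resp, tau=10.0):
    """MAP-adapt the UBM component means to one speaker's feature frames.

    ubm_means: (n_components, dim) means of the universal background model
    frames:    (n_frames, dim) speaker feature vectors
    resp:      (n_frames, n_components) posterior of each component per frame
    tau:       relevance factor controlling how far means move from the UBM
    """
    n_k = resp.sum(axis=0)                            # soft counts per component
    f_k = resp.T @ frames                             # first-order statistics
    alpha = (n_k / (n_k + tau))[:, None]              # adaptation strength
    safe_n = np.maximum(n_k, 1e-12)[:, None]
    ml_means = np.where(n_k[:, None] > 0, f_k / safe_n, ubm_means)
    return alpha * ml_means + (1.0 - alpha) * ubm_means

def supervector(adapted_means):
    """Concatenate the adapted component means into one long vector."""
    return adapted_means.reshape(-1)
```

One such supervector per speaker then serves as the input feature vector to the SVM stage.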


Medical Physics | 2013

CONRAD: a software framework for cone-beam imaging in radiology

Andreas K. Maier; Hannes G. Hofmann; Martin Berger; Peter Fischer; Chris Schwemmer; Haibo Wu; Kerstin Müller; Joachim Hornegger; Jang-Hwan Choi; Christian Riess; Andreas Keil; Rebecca Fahrig

PURPOSE: In the community of x-ray imaging, there is a multitude of tools and applications used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. METHODS: CONRAD is a Java-based, state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL, and there is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and the respective vector motion fields. Well-known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. RESULTS: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code, of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a tabletop system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art performance: 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. CONCLUSIONS: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular, this offers new opportunities for scientific collaboration and quantitative performance comparison between the methods of different groups.
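CONRAD itself is a Java framework; purely to illustrate the kind of FBP algorithm it includes, here is a minimal parallel-beam filtered backprojection in Python (CONRAD's own implementations handle cone-beam geometry and nonlinear effects, and are far more complete):

```python
import numpy as np

def fbp_parallel(sinogram, angles_deg):
    """Minimal parallel-beam filtered backprojection.

    sinogram: (n_angles, n_detectors) array of line integrals.
    Returns an (n_detectors x n_detectors) reconstruction.
    """
    n_ang, n_det = sinogram.shape
    # 1) Ramp (Ram-Lak) filtering of each projection in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # 2) Backprojection with nearest-neighbour detector lookup
    recon = np.zeros((n_det, n_det))
    c = n_det // 2
    ys, xs = np.mgrid[:n_det, :n_det] - c
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + c
        inside = (t >= 0) & (t < n_det)
        recon[inside] += proj[t[inside]]
    return recon * np.pi / n_ang
```

Production implementations replace the nearest-neighbour lookup with interpolation and apodize the ramp filter to control noise.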


International Journal of Biomedical Imaging | 2013

Robust Vessel Segmentation in Fundus Images

Attila Budai; Rüdiger Bock; Andreas K. Maier; Joachim Hornegger; Georg Michelson

One of the most common modalities for examining the human eye is the fundus photograph. The evaluation of fundus photographs is carried out by medical experts during time-consuming visual inspection. Our aim is to accelerate this process using computer-aided diagnosis. As a first step, it is necessary to segment structures in the images for tissue differentiation. As the eye is the only organ where the vasculature can be imaged in vivo and noninterventionally without using expensive scanners, the vessel tree is one of the most interesting and important structures to analyze. The quality and resolution of fundus images are rapidly increasing; thus, segmentation methods need to be adapted to the new challenges of high resolutions. In this paper, we present a method to reduce calculation time, achieve high accuracy, and increase sensitivity compared to the original Frangi method. The method contains approaches to avoid potential problems such as specular reflexes of thick vessels. The proposed method is evaluated using the STARE and DRIVE databases, and we propose a new high-resolution fundus database to compare it to state-of-the-art algorithms. The results show an average accuracy above 94% and low computational needs, outperforming state-of-the-art methods.
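For orientation, the 2-D Frangi vesselness measure that the paper builds on can be sketched as follows, assuming the Hessian components at one scale have already been computed (e.g. by Gaussian derivative filtering); the constants `beta` and `c` are conventional defaults, not the paper's tuned values:

```python
import numpy as np

def frangi_vesselness(Ixx, Ixy, Iyy, beta=0.5, c=15.0):
    """Frangi's 2-D vesselness from Hessian components at one scale."""
    # Eigenvalues of the symmetric 2x2 Hessian, ordered so |l1| <= |l2|
    tmp = np.sqrt(((Ixx - Iyy) / 2) ** 2 + Ixy ** 2)
    mu = (Ixx + Iyy) / 2
    l1, l2 = mu + tmp, mu - tmp
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / np.maximum(np.abs(l2), 1e-12)  # blob-vs-line ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)                   # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    # Bright tubular structures on a dark background require l2 < 0
    return np.where(l2 > 0, 0.0, v)
```

Elongated structures (one small, one large negative eigenvalue) score high, while blobs and flat regions are suppressed; the full method evaluates this over multiple scales and takes the maximum.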


Physics in Medicine and Biology | 2013

Including oxygen enhancement ratio in ion beam treatment planning: model implementation and experimental verification

Emanuele Scifoni; W Tinganelli; W K Weyrather; Marco Durante; Andreas K. Maier; Michael S. Kramer

We present a method for adapting biologically optimized treatment planning for particle beams to a spatially inhomogeneous tumor sensitivity due to hypoxia, detected, e.g., by PET functional imaging. TRiP98, an established treatment planning system for particles, has been extended to explicitly include the oxygen enhancement ratio (OER) in the biological effect calculation, providing the first dedicated ion beam treatment planning approach directed at hypoxic tumors, TRiP-OER, reported here together with experimental tests. A simple semi-empirical model for calculating the OER as a function of oxygen concentration and dose-averaged linear energy transfer, generating input tables for the program, is introduced. The code is then extended to import such tables from the present or alternative models and to perform forward and inverse planning, i.e., predicting the survival response of differently oxygenated areas as well as optimizing the dose required to restore a uniform survival effect in the whole irradiated target. The multiple-field optimization results show how the program selects the best beam components for treating the hypoxic regions. The calculations performed for different ions provide indications of the possible clinical advantages of a multi-ion treatment. Finally, the predictivity of the code is tested through dedicated cell culture experiments on extended-target irradiation using specially designed hypoxic chambers, showing qualitative agreement, despite some limits in full survival calculations arising from the RBE assessment. A comparison of the predictions resulting from different model tables is also reported.
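A common semi-empirical parameterization of oxygen-dependent radiosensitivity is the Alper/Howard-Flanders form sketched below. The constants and the dose-scaling helper are illustrative only; the actual TRiP-OER input tables additionally depend on dose-averaged LET, which lowers the effective maximum OER for high-LET radiation:

```python
def oer(p_o2, m=2.9, K=3.0):
    """Alper/Howard-Flanders-type curve: radiosensitivity relative to
    anoxia as a function of oxygen partial pressure p_o2 (mmHg).
    m (maximum ratio) and K (half-saturation pressure) are
    illustrative values, not the paper's fitted parameters."""
    return (m * p_o2 + K) / (p_o2 + K)

def dose_for_uniform_effect(d_oxic, p_o2, p_air=160.0):
    """Forward-planning sketch: scale a well-oxygenated prescription
    dose so a voxel at p_o2 reaches the same biological effect."""
    return d_oxic * oer(p_air) / oer(p_o2)
```

The inverse-planning step in the paper goes further: it redistributes dose across multiple fields and ion species so that cell survival is uniform over the heterogeneously oxygenated target.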


Scientific Reports | 2015

Kill-painting of hypoxic tumours in charged particle therapy.

Walter Tinganelli; Marco Durante; Ryoichi Hirayama; Michael S. Kramer; Andreas K. Maier; Wilma Kraft-Weyrather; Yoshiya Furusawa; Thomas Friedrich; Emanuele Scifoni

Solid tumours often present regions with severe oxygen deprivation (hypoxia), which are resistant to both chemotherapy and radiotherapy. Increased radiosensitivity as a function of the oxygen concentration is well described for X-rays. It has also been demonstrated that radioresistance in anoxia is reduced using high-LET radiation rather than conventional X-rays. However, the dependence of the oxygen enhancement ratio (OER) on radiation quality in the regions of intermediate oxygen concentrations, those normally found in tumours, had never been measured, and biophysical models were based on extrapolations. Here we present a complete survival dataset of mammalian cells exposed to different ions at oxygen concentrations ranging from normoxia (21%) to anoxia (0%). The data were used to generate a model of the dependence of the OER on oxygen concentration and particle energy. The model was implemented in the ion beam treatment planning system to prescribe uniform cell killing across volumes with heterogeneous radiosensitivity. The adaptive treatment plans have been validated in two different accelerator facilities, using a biological phantom where cells can be irradiated simultaneously at three different oxygen concentrations. We thus realized a hypoxia-adapted treatment plan, which will be used for painting by voxel of hypoxic tumours visualized by functional imaging.


IEEE Transactions on Medical Imaging | 2013

Automatic Cell Detection in Bright-Field Microscope Images Using SIFT, Random Forests, and Hierarchical Clustering

Firas Mualla; Simon Schöll; Björn Sommerfeldt; Andreas K. Maier; Joachim Hornegger

We present a novel machine learning-based system for unstained cell detection in bright-field microscope images. The system is fully automatic, since it requires no manual parameter tuning. It is also highly invariant with respect to illumination conditions and to the size and orientation of cells. Images from two adherent cell lines and one suspension cell line were used in the evaluation, for a total of more than 3500 cells. Besides real images, simulated images were also used in the evaluation. The detection error was between approximately zero and 15.5%, which is significantly better than baseline approaches.
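In such a pipeline, the final clustering step merges multiple keypoints detected on the same cell into a single detection. A simplified stand-in for that step (single-linkage grouping by a distance threshold; the SIFT extraction and random-forest classification are assumed to have happened upstream, and `merge_dist` is an illustrative parameter):

```python
import numpy as np

def cluster_keypoints(points, merge_dist=10.0):
    """Group keypoints classified as 'cell' that lie closer than
    merge_dist into one detection; return one centroid per cluster."""
    pts = np.asarray(points, float)
    parent = list(range(len(pts)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) < merge_dist and find(i) != find(j):
                parent[find(j)] = find(i)

    groups = {}
    for i in range(len(pts)):
        groups.setdefault(find(i), []).append(pts[i])
    return [np.mean(g, axis=0) for g in groups.values()]
```

Agglomerative hierarchical clustering as used in the paper builds a full merge tree and cuts it at a chosen level, which this threshold-based grouping only approximates.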


Medical Physics | 2016

Helium ions for radiotherapy? Physical and biological verifications of a novel treatment modality

Michael Krämer; Emanuele Scifoni; C. Schuy; M. Rovituso; Walter Tinganelli; Andreas K. Maier; Robert Kaderka; Wilma Kraft-Weyrather; Stephan Brons; Thomas Tessonnier; Katia Parodi; Marco Durante

PURPOSE: Modern facilities for actively scanned ion beam radiotherapy in principle allow the use of helium beams, which could present specific advantages, especially for pediatric tumors. In order to assess the potential use of these beams for radiotherapy, i.e., to create realistic treatment plans, the authors set up a dedicated (4)He beam model providing base data for their treatment planning system TRiP98, which they report in this work together with its physical and biological validation. METHODS: A semiempirical beam model for the physical depth dose deposition and the production of nuclear fragments was developed and introduced into TRiP98. For the biological effect calculations, the latest version of the local effect model was used. The model predictions were experimentally verified at the HIT facility. The primary beam attenuation and the characteristics of secondary charged particles at various depths in water were investigated using (4)He ion beams of 200 MeV/u. The nuclear charge of secondary fragments was identified using a ΔE/E telescope. 3D absorbed dose distributions were measured with pinpoint ionization chambers, and the biological dosimetry experiments were realized by irradiating a stack of Chinese hamster ovary cells arranged in an extended target. RESULTS: The few experimental data available on the basic physical processes are reproduced by the beam model. The experimental verification of absorbed dose distributions in extended target volumes yields an overall agreement, with a slight underestimation of the lateral spread. Cell survival along a 4 cm extended target is reproduced with remarkable accuracy. CONCLUSIONS: The authors present a simple simulation model for therapeutic (4)He beams, introduced into TRiP98 and validated experimentally by means of physical and biological dosimetry. It is now possible to perform detailed treatment planning studies with (4)He beams, either exclusively or in combination with other ion modalities.


IEEE Transactions on Computational Imaging | 2016

A Comparative Error Analysis of Current Time-of-Flight Sensors

Peter Fürsattel; Simon Placht; Michael Balda; Christian Schaller; Hannes G. Hofmann; Andreas K. Maier; Christian Riess

Time-of-flight (ToF) cameras suffer from systematic errors, which can be an issue in many application scenarios. In this paper, we investigate the error characteristics of eight different ToF cameras. Our survey covers both well-established and recent cameras, including the Microsoft Kinect V2. We present up to six experiments for each camera to quantify different types of errors. For each experiment, we outline the basic setup, present comparable data for each camera, and discuss the respective results. The results discussed in this paper enable the community to make appropriate decisions in choosing the best-matching camera for a certain application. This work also lays the foundation for a framework to benchmark future ToF cameras. Furthermore, our results demonstrate the necessity of correcting characteristic measurement errors. We believe that the presented findings will allow 1) the development of novel correction methods for specific errors and 2) the development of general data processing algorithms that are able to operate robustly on a wider range of cameras and scenes.
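One common experiment for quantifying such errors is imaging a flat target and fitting a plane to the measured depth samples; the fit residuals then quantify per-pixel noise, while the fitted offset can be compared against the known target distance. A minimal sketch (the experimental layout is illustrative, not the paper's exact protocol):

```python
import numpy as np

def plane_fit_errors(xs, ys, zs):
    """Fit a plane z = a*x + b*y + c to ToF depth samples of a flat
    target via least squares. Returns the RMS residual (per-pixel
    noise estimate) and the fitted coefficients (a, b, c)."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, zs, rcond=None)
    resid = zs - A @ coef
    return float(np.sqrt(np.mean(resid ** 2))), coef
```

Repeating this at several distances reveals distance-dependent systematic errors such as the periodic "wiggling" error typical of continuous-wave ToF sensors.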

Collaboration


Dive into Andreas K. Maier's collaboration.

Top Co-Authors

Joachim Hornegger
University of Erlangen-Nuremberg

Elmar Nöth
University of Erlangen-Nuremberg

Christian Riess
University of Erlangen-Nuremberg

Stefan Steidl
University of Erlangen-Nuremberg

Tino Haderlein
University of Erlangen-Nuremberg

Vincent Christlein
University of Erlangen-Nuremberg

James G. Fujimoto
Massachusetts Institute of Technology

Bastian Bier
University of Erlangen-Nuremberg