Publication


Featured research published by Reinhard Kneser.


International Conference on Acoustics, Speech, and Signal Processing | 1995

Improved backing-off for M-gram language modeling

Reinhard Kneser; Hermann Ney

In stochastic language modeling, backing-off is a widely used method to cope with the sparse data problem. In case of unseen events this method backs off to a less specific distribution. In this paper we propose to use distributions which are especially optimized for the task of backing-off. Two different theoretical derivations lead to distributions which are quite different from the probability distributions that are usually used for backing-off. Experiments show an improvement of about 10% in terms of perplexity and 5% in terms of word error rate.
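
The idea for which this paper became widely known, later called Kneser-Ney smoothing, is that the backing-off distribution should be estimated from the number of distinct contexts a word follows rather than from its raw frequency. A sketch of the bigram case with absolute discounting, in commonly used notation rather than the paper's own:

    P(w_i \mid w_{i-1}) = \frac{\max\{N(w_{i-1} w_i) - D,\ 0\}}{N(w_{i-1})} + \lambda(w_{i-1})\, P_{\mathrm{KN}}(w_i),
    \qquad
    P_{\mathrm{KN}}(w) = \frac{|\{v : N(v\,w) > 0\}|}{\sum_{w'} |\{v : N(v\,w') > 0\}|}

Here D is the absolute discount and \lambda(w_{i-1}) the weight that makes the distribution normalize; P_KN is proportional to the number of distinct predecessors of w, not to how often w occurs.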


Medical Image Analysis | 2009

Automated model-based vertebra detection, identification, and segmentation in CT images

Tobias Klinder; Jörn Ostermann; Matthias Ehm; Astrid Franz; Reinhard Kneser; Cristian Lorenz

For many orthopaedic, neurological, and oncological applications, an exact segmentation of the vertebral column including an identification of each vertebra is essential. However, although bony structures show high contrast in CT images, the segmentation and labelling of individual vertebrae is challenging. In this paper, we present a comprehensive solution for automatically detecting, identifying, and segmenting vertebrae in CT images. A framework has been designed that takes an arbitrary CT image, e.g., head-neck, thorax, lumbar, or whole spine, as input and provides a segmentation in the form of labelled triangulated vertebra surface models. In order to obtain a robust processing chain, profound prior knowledge is applied through the use of various kinds of models covering shape, gradient, and appearance information. The framework has been tested on 64 CT images, including cases with pathologies. In 56 cases, it was successfully applied, resulting in a final mean point-to-surface segmentation error of 1.12 ± 1.04 mm. One key issue is a reliable identification of vertebrae. For a single vertebra, we achieve an identification success of more than 70%. Increasing the number of available vertebrae leads to an increase in the identification rate, reaching 100% if 16 or more vertebrae are shown in the image.


International Conference on Acoustics, Speech, and Signal Processing | 1993

On the dynamic adaptation of stochastic language models

Reinhard Kneser; Volker Steinbiss

A simple and general scheme for the adaptation of stochastic language models to changing text styles is introduced. For each word in the running text, the adapted model is a linear combination of specific models, the interpolation parameters being estimated on the preceding text passage. Experiments on a 1.1-million-word English corpus show the validity of the approach. The adaptation method improves a bigram language model by 10% in terms of test-set perplexity.
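
Written out, the abstract's linear combination is

    P_{\mathrm{adapt}}(w \mid h) = \sum_i \lambda_i\, P_i(w \mid h), \qquad \sum_i \lambda_i = 1,\ \lambda_i \ge 0.

The abstract does not specify the estimator; the standard way to fit such weights on a preceding passage w_1, \dots, w_T is the EM update

    \lambda_i \leftarrow \frac{1}{T} \sum_{t=1}^{T} \frac{\lambda_i\, P_i(w_t \mid h_t)}{\sum_j \lambda_j\, P_j(w_t \mid h_t)},

iterated until convergence.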


Medical Image Analysis | 2010

Optimizing boundary detection via Simulated Search with applications to multi-modal heart segmentation

Jochen Peters; Olivier Ecabert; Carsten Meyer; Reinhard Kneser; Jürgen Weese

Segmentation of medical images can be achieved with the help of model-based algorithms. Reliable boundary detection is a crucial component to obtain robust and accurate segmentation results and to enable full automation. This is especially important if the anatomy being segmented is too variable to initialize a mean shape model such that all surface regions are close to the desired contours. Several boundary detection algorithms are widely used in the literature. Most use some trained image appearance model to characterize and detect the desired boundaries. Although parameters of the boundary detection can vary over the model surface and are trained on images, their performance (i.e., accuracy and reliability of boundary detection) can only be assessed as an integral part of the entire segmentation algorithm. In particular, assessment of boundary detection cannot be done locally and independently of the model parameterization and the internal energies controlling geometric model properties. In this paper, we propose a new method for the local assessment of boundary detection called Simulated Search. This method takes any boundary detection function and evaluates its performance for a single model landmark in terms of an estimated geometric boundary detection error. In consequence, boundary detection can be optimized per landmark during model training. We demonstrate the success of the method for cardiac image segmentation. In particular we show that the Simulated Search improves the capture range and the accuracy of the boundary detection compared to a traditional training scheme. We also illustrate how the Simulated Search can be used to identify suitable classes of features when addressing a new segmentation task. Finally, we show that the Simulated Search enables multi-modal heart segmentation using a single algorithmic framework. On computed tomography and magnetic resonance images, average segmentation errors (surface-to-surface distances) for the four chambers and the trunks of the large vessels are on the order of 0.8 mm. For 3D rotational X-ray angiography images of the left atrium and pulmonary veins, the average error is 1.3 mm. In all modalities, the locally optimized boundary detection enables fully automatic segmentation.
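
The core loop of Simulated Search is simple to state: displace the initialization around a known ground-truth boundary, run a candidate boundary detection function, and record the geometric error; the candidate with the smallest simulated error is selected for that landmark. A runnable 1D toy illustration (the two candidate detectors and all numbers are hypothetical, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1D "image": a step edge at the true boundary plus noise.
    true_boundary = 50.0
    profile = (np.arange(100) > true_boundary).astype(float)
    profile += rng.normal(0.0, 0.2, size=profile.size)

    # Two hypothetical boundary detection functions to compare.
    def detect_max_gradient(profile, start, search=10):
        lo = int(start) - search
        window = profile[lo:lo + 2 * search]
        return lo + int(np.argmax(np.abs(np.diff(window))))

    def detect_first_above(profile, start, search=10):
        lo = int(start) - search
        above = np.nonzero(profile[lo:lo + 2 * search] > 0.5)[0]
        return lo + int(above[0]) if above.size else int(start)

    # Simulated Search: displace the starting point around the ground
    # truth and estimate each candidate's geometric detection error.
    def simulated_error(detector, trials=500):
        errors = []
        for _ in range(trials):
            start = true_boundary + rng.uniform(-8.0, 8.0)
            errors.append(abs(detector(profile, start) - true_boundary))
        return float(np.mean(errors))

    for detector in (detect_max_gradient, detect_first_above):
        print(detector.__name__, simulated_error(detector))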


medical image computing and computer assisted intervention | 2007

Automatic whole heart segmentation in static magnetic resonance image volumes

Jochen Peters; Olivier Ecabert; Carsten Meyer; Hauke Schramm; Reinhard Kneser; Alexandra Groth; Jürgen Weese

We present a fully automatic segmentation algorithm for the whole heart (four chambers, left ventricular myocardium and trunks of the aorta, the pulmonary artery and the pulmonary veins) in cardiac MR image volumes with nearly isotropic voxel resolution, based on shape-constrained deformable models. After automatic model initialization and reorientation to the cardiac axes, we apply a multi-stage adaptation scheme with progressively increasing degrees of freedom. Particular attention is paid to the calibration of the MR image intensities. Detailed evaluation results for the various anatomical heart regions are presented on a database of 42 patients. On calibrated images, we obtain an average segmentation error of 0.76 mm.
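
The multi-stage idea, releasing degrees of freedom step by step, can be illustrated on a toy curve-fitting problem (a stand-in only; the paper adapts a 3D shape-constrained mesh, not a curve):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 50)
    target = 2.0 * np.sin(3.0 * x) + 0.5 + rng.normal(0.0, 0.05, x.size)
    template = np.sin(3.0 * x)  # "mean model" to be adapted

    # Stage 1: translation only (1 degree of freedom).
    stage1 = template + float(np.mean(target - template))
    print(f"residual after stage 1: {np.sqrt(np.mean((stage1 - target) ** 2)):.3f}")

    # Stage 2: scale and translation (2 degrees of freedom).
    A = np.stack([template, np.ones_like(template)], axis=1)
    scale, shift = np.linalg.lstsq(A, target, rcond=None)[0]
    stage2 = scale * template + shift
    print(f"residual after stage 2: {np.sqrt(np.mean((stage2 - target) ** 2)):.3f}")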


International Conference on Spoken Language Processing | 1996

Statistical language modeling using a variable context length

Reinhard Kneser

In this paper we investigate statistical language models with a variable context length. For such models the number of relevant words in a context is not fixed as in conventional M-gram models but depends on the context itself. We develop a measure for the quality of variable-length models and present a pruning algorithm for the creation of such models, based on this measure. Further, we address the question of how the use of a special backing-off distribution can improve the language models. Experiments were performed on two databases, the ARPA NAB corpus and the German Verbmobil corpus. The results show that variable-length models outperform conventional models of the same size. Furthermore, it can be seen that if a moderate loss in performance is acceptable, the size of a language model can be reduced drastically by using the presented pruning algorithm.
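
The pruning idea, dropping explicit contexts whose predictions the backing-off distribution can nearly reproduce, can be sketched with a toy bigram model. This is a simplified stand-in for the paper's quality measure, not the measure itself:

    from collections import Counter
    import math

    # Toy corpus and count tables.
    corpus = "the cat sat on the mat the cat ate the rat".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    total = sum(unigrams.values())

    def p_unigram(w):
        return unigrams[w] / total

    def p_bigram(w, h):
        return bigrams[(h, w)] / unigrams[h]

    # Simplified quality measure: training log-likelihood lost when the
    # explicit context h is pruned and its predictions back off to unigrams.
    def pruning_loss(h):
        return sum(n * (math.log(p_bigram(w, hh)) - math.log(p_unigram(w)))
                   for (hh, w), n in bigrams.items() if hh == h)

    threshold = 1.0
    kept = sorted(h for h in unigrams if pruning_loss(h) > threshold)
    print("explicit contexts kept:", kept)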


International Conference on Acoustics, Speech, and Signal Processing | 1997

Semantic clustering for adaptive language modeling

Reinhard Kneser; Jochen Peters

In this paper we present efficient clustering algorithms for two novel class-based approaches to adaptive language modeling. In contrast to bigram and trigram class models, the proposed classes are related to the distribution and co-occurrence of words within complete text units and are thus mostly of a semantic nature. We introduce adaptation techniques such as adaptive linear interpolation and an approximation to minimum discriminant estimation, and show how to use the automatically derived semantic structure in order to allow a fast adaptation to some special topic or style. In experiments performed on the Wall Street Journal corpus, intuitively convincing semantic classes were obtained. The resulting adaptive language models were significantly better than a standard cache model. Compared to a static model, a reduction in perplexity of up to 31% could be achieved.
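
The abstract leaves the adaptation formulas out; one common realization of such class-based marginal adaptation (the form a minimum-discriminant solution takes under class-marginal constraints) rescales a static model per semantic class:

    P_{\mathrm{adapt}}(w \mid h) = \frac{\alpha_{C(w)}\, P_{\mathrm{static}}(w \mid h)}{\sum_{w'} \alpha_{C(w')}\, P_{\mathrm{static}}(w' \mid h)},
    \qquad
    \alpha_c \approx \frac{P_{\mathrm{topic}}(c)}{P_{\mathrm{static}}(c)}

where C(w) is the semantic class of w, so classes over-represented in the current topic are boosted and the rest are damped.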


medical image computing and computer assisted intervention | 2010

Patient specific models for planning and guidance of minimally invasive aortic valve implantation

I. Waechter; Reinhard Kneser; G. Korosoglou; Jochen Peters; N. H. Bakker; R. v. d. Boomen; Jürgen Weese

Recently, new techniques for minimally invasive aortic valve implantation have been developed, generating a need for planning tools that assess valve anatomy and guidance tools that support implantation under X-ray guidance. Extracting the aortic valve anatomy from CT images is essential for such tools, and we present a model-based method for that purpose. In addition, we present a new method for the detection of the coronary ostia that exploits the model-based segmentation, and show how a number of clinical measurements that are important for procedure planning, such as diameters and the distances between the aortic valve plane and the coronary ostia, can be derived. Validation results are based on accurate reference annotations of 20 CT images from different patients and leave-one-out tests. They show that model adaptation can be done with a mean surface-to-surface error of 0.5 mm. For coronary ostia detection a success rate of 97.5% is achieved. Depending on the measured quantity, the segmentation translates into a root-mean-square error between 0.4 and 1.2 mm when comparing clinical measurements derived from automatic segmentation and from reference annotations.
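
One of the clinical measurements mentioned, the distance between the aortic valve plane and a coronary ostium, reduces to point-to-plane geometry once the surface model has been adapted. A minimal sketch with made-up coordinates:

    import numpy as np

    # Hypothetical values read off an adapted surface model (all in mm):
    # a point on the aortic valve plane, the plane normal, and one ostium.
    valve_point = np.array([12.0, -3.5, 40.2])
    valve_normal = np.array([0.1, 0.2, 0.97])
    valve_normal /= np.linalg.norm(valve_normal)
    left_ostium = np.array([18.4, -1.0, 52.9])

    # Signed point-to-plane distance along the valve-plane normal.
    distance = float(np.dot(left_ostium - valve_point, valve_normal))
    print(f"ostium to valve plane: {distance:.1f} mm")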


Archive | 1993

Forming word classes by statistical clustering for statistical language modelling

Reinhard Kneser; Hermann Ney

In statistical language modelling there is always a problem of sparse data. A way to reduce this problem is to form groups of words in order to get equivalence classes. In this paper we present a clustering algorithm that builds abstract word equivalence classes. The algorithm finds a local optimum according to a maximum-likelihood criterion. Experiments were performed on an English 1.1-million-word corpus and a German 100,000-word corpus. Compared to a word bigram model, the use of clustered equivalence classes in a bigram class model leads to a significant improvement, as measured by the perplexity. Depending on the size of the training material, the automatically clustered word classes are even better than manually determined categories.
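
The criterion and the exchange-style search can be reproduced as a naive, runnable toy. The likelihood below is the standard class-bigram criterion with the word-emission terms dropped (they are constant under exchanges); the paper's implementation is far more efficient:

    from collections import Counter
    import math
    import random

    corpus = ("the cat sat on the mat the dog sat on the rug "
              "a cat ate a rat a dog ate a bone").split()
    vocab = sorted(set(corpus))
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))

    K = 3
    random.seed(0)
    cls = {w: random.randrange(K) for w in vocab}  # random initial classes

    def criterion():
        # F = sum N(c,c') log N(c,c') - 2 * sum N(c) log N(c)
        cb, cu = Counter(), Counter()
        for (a, b), n in bigrams.items():
            cb[(cls[a], cls[b])] += n
        for w, n in unigrams.items():
            cu[cls[w]] += n
        return (sum(n * math.log(n) for n in cb.values())
                - 2.0 * sum(n * math.log(n) for n in cu.values()))

    def exchange_pass():
        moved = False
        for w in vocab:
            current = cls[w]
            scores = {}
            for k in range(K):
                cls[w] = k
                scores[k] = criterion()
            best = max(scores, key=scores.get)
            # Move only on strict improvement so the loop terminates.
            cls[w] = best if scores[best] > scores[current] + 1e-9 else current
            moved |= cls[w] != current
        return moved

    while exchange_pass():
        pass

    for k in range(K):
        print(k, [w for w in vocab if cls[w] == k])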


Philips Journal of Research | 1995

The Philips Research system for continuous-speech recognition

Volker Steinbiss; Hermann Ney; Xavier L. Aubert; Stefan Besling; Christian Dugast; Ute Essen; Dieter Geller; Reinhard Kneser; H.-G. Meier; Martin Oerder; Bach-Hiep Tran

This paper gives an overview of the Philips Research system for continuous-speech recognition. The recognition architecture is based on an integrated statistical approach. The system has been successfully applied to various tasks in American English and German, ranging from small vocabulary tasks to very large vocabulary tasks and from recognition only to speech understanding. Here, we concentrate on phoneme-based continuous-speech recognition for large vocabulary recognition as used for dictation, which covers a significant part of our research work on speech recognition. We describe this task and report on experimental results. In order to allow a comparison with the performance of other systems, a section with an evaluation on the standard North American Business news (NAB2) task (dictation of American English newspaper text) is supplied.
