Peter Michael Goebel
Vienna University of Technology
Publications
Featured research published by Peter Michael Goebel.
IEEE/SP 13th Workshop on Statistical Signal Processing, 2005 | 2005
Peter Michael Goebel; Ahmed Nabil Belbachir; M. Truppe
This paper presents an approach for the robust estimation of the noise statistics in dental panoramic X-ray images. To achieve maximum image quality after denoising, a semi-empirical scatter model is presented, leading to a locally adaptive Gaussian scale mixture (GSM) model. State-of-the-art methods use multiscale filtering of images to reduce the irrelevant part of the information, based on a generic estimation of the noise. The usual assumption of mixed Gaussian and Poisson statistics leads to overestimation of the noise variance in regions of low intensity (small photon counts) but to underestimation in regions of high intensity, and therefore to non-optimal results. The analysis approach is tested on a database of 50 panoramic X-ray images, and the results are cross-validated by medical experts. It is shown that the local standard deviation (SDEV) in images stemming from homogeneous phantoms (Al, PMMA) follows a generalized Nakagami distribution (GND). The heavy tails of the distribution are not covered entirely by the GND; the error density is hypothesized to stem from scatter-glare degrading the image. A beam-stop method for estimating the amount of scatter-glare verifies this hypothesis. Finally, the application of the method to a phantom image is shown, with denoising results for comparison, followed by the conclusion.
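As an illustration of the distribution analysis, the sketch below computes a local standard-deviation map and fits a plain Nakagami distribution to it with SciPy. The window size, the synthetic phantom, and the use of scipy.stats.nakagami are assumptions for the sake of a runnable example; the paper's generalized Nakagami estimator has additional shape parameters not modeled here.

```python
# Sketch: local SDEV map of a homogeneous phantom region, followed by a
# Nakagami fit (assumed stand-in for the paper's generalized Nakagami).
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import nakagami

def local_sdev(img, size=7):
    """Local standard deviation via var = E[x^2] - E[x]^2
    over a size x size window (window size is an assumption)."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    var = np.clip(mean_sq - mean * mean, 0.0, None)
    return np.sqrt(var)

# Synthetic stand-in for a homogeneous phantom image.
rng = np.random.default_rng(0)
phantom = 100.0 + rng.poisson(100.0, size=(256, 256))

sdev = local_sdev(phantom).ravel()
nu, loc, scale = nakagami.fit(sdev, floc=0.0)  # shape, location, scale
print(f"Nakagami shape nu={nu:.3f}, scale={scale:.3f}")
```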
international conference on industrial informatics | 2007
Peter Michael Goebel; Markus Vincze
Vision, as a key perceptual capability for cognitive systems, relates to rather difficult problems, such as visual object recognition, representation, categorization, and scene understanding. State-of-the-art solutions using object appearance-based models have already reached a certain maturity. They achieve excellent recognition performance and provide learning structures that are subsequently utilized for object recognition and tracking. However, in the context of object topology understanding for cognitive tasks, these methods cannot be directly compared with human performance, because it is obvious that appearance-based methods do not contribute to understanding structures in 3D. Research findings from infant psychology and animal studies give evidence for using hierarchical models of object representation, based on image primitives such as edges, corners, shading, or homogeneity of object colors. It is the objective of this paper to present an approach based on both findings from biological studies and from cognitive science as enablers for the autonomous cognitive investigation of natural scenes and their understanding. We present the architecture of a compound cognitive framework and its first behavioral level, with the implementation of a vision model of the mammalian striate visual cortex in five layers. The proposed implementation is exemplified with an object similar to the Necker cube.
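Gabor filters are the standard computational model of orientation-selective simple cells in the striate cortex (V1), so a filter bank of this kind is the usual front end of such a model. The sketch below is an assumed illustration only, not the paper's five-layer implementation, and all parameter values are my choices.

```python
# Sketch: a four-orientation Gabor filter bank, a common model of
# V1 simple cells (parameters are assumptions, not the paper's).
import numpy as np

def gabor_kernel(theta, size=21, wavelength=8.0, sigma=4.0, gamma=0.5):
    """Real part of a Gabor kernel with orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

# Four orientations, as in many V1 models: 0, 45, 90, 135 degrees.
bank = [gabor_kernel(t) for t in np.deg2rad([0, 45, 90, 135])]
```

Convolving an input image with each kernel in the bank yields one orientation-selective response map per filter, which higher layers of such a model would then pool and combine.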
european conference on circuit theory and design | 2005
Peter Michael Goebel; Ahmed Nabil Belbachir; M. Truppe
Dental panoramic X-ray images have complex content, because several layers of tissue, bone, fat, etc. are superimposed. Non-uniform illumination, stemming from the X-ray source, adds extra modulation to the image, which causes a spatially varying X-ray photon density. The interaction of the X-ray photons with the density of matter causes a spatially coherent, varying noise contribution. Many algorithms exist to compensate background effects by pixel-based or global methods. However, if the image is contaminated by a non-negligible amount of noise, which is usually non-Gaussian, these methods cannot approximate the background efficiently. In this paper, a dedicated approach for the removal of a multiplicative background is presented, using polynomial scaling and the à trous multiresolution transform. The new method uses a background image and a diagnostic image together to estimate the density of the diagnostic content. It assumes a locally Gaussian statistic scaled by a hidden factor, where the hidden factor represents the variance of the non-Gaussian process of image generation. Because the method also removes noise from the compound signal, a comparison with a standard denoising method is given. The approach has been tested on 50 images from a database of panoramic X-ray images, where the results are cross-validated by medical experts.
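The à trous transform named in the abstract is the undecimated multiresolution decomposition sketched below, using the common B3-spline kernel; the number of scales is an assumption, and the paper's polynomial scaling step is not shown here.

```python
# Sketch: the undecimated "a trous" transform with the B3-spline
# kernel; img == sum(detail planes) + final smooth plane.
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def a_trous(img, n_scales=4):
    """Return (detail planes w_1..w_n, smooth plane c_n)."""
    c = img.astype(np.float64)
    details = []
    for j in range(n_scales):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps.
        k = np.zeros(4 * 2**j + 1)
        k[::2**j] = B3
        c_next = convolve1d(convolve1d(c, k, axis=0, mode='reflect'),
                            k, axis=1, mode='reflect')
        details.append(c - c_next)   # detail plane at scale j+1
        c = c_next
    return details, c
```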
advanced concepts for intelligent vision systems | 2007
Peter Michael Goebel; Markus Vincze
Object recognition has developed into the most common approach for detecting arbitrary objects based on their appearance, where viewpoint dependency, occlusions, algorithmic constraints, and noise often hinder successful detection. Statistical pattern analysis methods, which are able to extract features from appearance images and enable the classification of the image content, have reached a certain maturity and achieve excellent recognition on rather complex problems. However, these systems do not seem directly scalable to human performance in a cognitive sense, and appearance does not contribute to understanding the structure of objects. Syntactic pattern recognition methods are able to deal with structured objects, which may be constructed from primitives that were generated from extracted image features. Here, an eminent problem is how to aggregate image primitives in order to (re-)construct objects from such primitives. In this paper, we propose a new approach to the aggregation of object prototypes by using geometric primitives derived from features of image sequences acquired from changing viewpoints. We apply syntactic rules to form representations of the implicit object topology of object prototypes by a set of fuzzy graphs. Finally, we find a superposition of a prototype graph set, which can be used for updating and learning new object recipes in a hippocampus-like episodic memory that paves the way to cognitive understanding of natural scenes. The proposed implementation is exemplified with an object similar to the Necker cube.
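To make the fuzzy-graph idea concrete, the sketch below models a prototype view as a dictionary of edges with membership values in [0, 1] and merges a set of such graphs. Reading "superposition" as the standard fuzzy max-union is my assumption; the paper's actual aggregation rules may differ.

```python
# Sketch: fuzzy graphs as edge -> membership dicts, merged by the
# standard fuzzy union (max). Node names and values are hypothetical.
from collections import defaultdict

def superpose(graphs):
    """Fuzzy union over edge-membership dicts keyed by (node_a, node_b)."""
    merged = defaultdict(float)
    for g in graphs:
        for edge, mu in g.items():
            merged[edge] = max(merged[edge], mu)
    return dict(merged)

# Two prototype views of a cube-like object: the same edges observed
# with different confidence from different viewpoints.
view1 = {("c0", "c1"): 0.9, ("c1", "c2"): 0.6}
view2 = {("c1", "c2"): 0.8, ("c2", "c3"): 0.7}
print(superpose([view1, view2]))
# {('c0', 'c1'): 0.9, ('c1', 'c2'): 0.8, ('c2', 'c3'): 0.7}
```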
dagm conference on pattern recognition | 2005
Ahmed Nabil Belbachir; Peter Michael Goebel
The efficiency of an image compression technique relies on the capability of finding a sparse M-term best approximation with reduced visually significant quality loss. By "visually significant" is meant the information that a human observer can perceive. The Human Visual System (HVS) is generally sensitive to contrast, color, spatial frequency, etc. This paper is concerned with the compression of color images, where the psycho-visual representation is an important strategy for defining the best M-term approximation technique. Digital color images are usually stored using the RGB space, television broadcast uses the YUV (YIQ) space, while the psycho-visual representation relies on three components: one for the luminance and two for the chrominance. In this paper, an analysis of the wavelet and contourlet representations of color images in both the RGB and YUV spaces is performed. An M-term approximation is performed in order to investigate the performance of image compression using each of those transforms.
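The sketch below shows the two ingredients the abstract compares: the RGB-to-YUV conversion (luma plus two chroma components) and a keep-the-M-largest-coefficients approximation, here in a wavelet basis via the PyWavelets package. The wavelet choice, decomposition level, and PyWavelets itself are assumptions; the paper's contourlet variant is not shown.

```python
# Sketch: M-term wavelet approximation of a channel in YUV space
# (wavelet/level are assumed; the contourlet branch is omitted).
import numpy as np
import pywt

# ITU-R BT.601 RGB -> YUV matrix (one luma, two chroma components).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def m_term(channel, m, wavelet="db4", level=3):
    """Keep only the m largest-magnitude wavelet coefficients."""
    coeffs = pywt.wavedec2(channel, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.sort(np.abs(arr).ravel())[-m]   # m-th largest magnitude
    arr[np.abs(arr) < thresh] = 0.0
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(kept, wavelet)

# Usage on an (H, W, 3) RGB image `img` in [0, 1]:
#   yuv = img @ RGB2YUV.T
#   y_hat = m_term(yuv[..., 0], m=5000)
```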
dagm conference on pattern recognition | 2005
Peter Michael Goebel; Nabil Ahmed Belbachir; Michael Truppe
Dental panoramic X-ray images have complex content, because several layers of tissue, bone, fat, etc. are superimposed. Non-uniform illumination, stemming from the X-ray source, adds extra modulation to the image, which causes a spatially varying X-ray photon density. The interaction of the X-ray photons with the density of matter causes a spatially coherent, varying noise contribution. Many algorithms exist to compensate background effects by pixel-based or global methods. However, if the image is contaminated by a non-negligible amount of noise, which is usually non-Gaussian, these methods cannot approximate the background efficiently. In this paper, a dedicated approach for background subtraction is presented that operates blindly, i.e., it separates a set of independent signals from a set of mixed signals with only little a priori information about the nature of the signals, using the à trous multiresolution transform to alleviate this problem. The new method estimates the background bias from a reference scan, which is taken without a patient. The background values are rescaled by a polynomial compensation factor, given by a mean-square-error criterion, so that subtracting the background does not produce additional artifacts in the image. The energy of the background estimate is subtracted from the energy of the mixture. The method is also capable of removing spatially varying noise by allocating an appropriate spatially varying noise estimate. The approach has been tested on 50 images from a database of panoramic X-ray images, where the results are cross-validated by medical experts.
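A minimal sketch of the rescaled-subtraction idea follows: a reference (patient-free) scan is scaled by a low-order polynomial gain fitted in the least-squares sense before subtraction. The polynomial degree, the column-wise fit, and the helper names are assumptions about the paper's "polynomial compensation factor".

```python
# Sketch: least-squares polynomial rescaling of a reference background
# scan before subtraction (degree and fitting axis are assumptions).
import numpy as np

def fit_background_gain(diagnostic, background, degree=2):
    """Fit gain(x) as a polynomial in the column coordinate so that
    gain(x) * background approximates diagnostic in the MSE sense."""
    h, w = diagnostic.shape
    x = np.linspace(-1.0, 1.0, w)
    # Ratio of column means as a simple 1-D gain observation.
    obs = diagnostic.mean(axis=0) / np.maximum(background.mean(axis=0), 1e-9)
    coeffs = np.polyfit(x, obs, degree)
    return np.polyval(coeffs, x)[None, :]   # shape (1, w), broadcastable

def subtract_background(diagnostic, background):
    gain = fit_background_gain(diagnostic, background)
    return diagnostic - gain * background
```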
IEEE/SP 13th Workshop on Statistical Signal Processing, 2005 | 2005
Ahmed Nabil Belbachir; Peter Michael Goebel
Faint source detection is one of the major issues in the reconstruction of an astronomical science image from a raw data sequence. This problem is a consequence of the detection limit of the infrared instruments as well as of the number of cosmic-ray impacts (glitches) that lead to false detections. Astronomical images contain many objects with isotropic structures (e.g. point sources) but also plenty of anisotropic information (e.g. filamentary structures). The wavelet transform is usually applied to separate all these signal constituents in each pixel; then a map is built to represent the information of the associated noise before applying a source detection algorithm. Wavelets are well adapted to point singularities (discontinuities); however, they have a problem with orientation selectivity and therefore do not represent anisotropic structures (e.g. smooth curves) effectively. This paper presents a combined contourlet-wavelet approach for faint source extraction from infrared raw image sequences. While the contourlet representation provides oriented support for the efficient approximation of anisotropic structures, isotropic geometry is effectively captured by separable wavelets. This novel approach has been tested on real and simulated infrared images stemming from the Infrared Space Observatory database.
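The isotropic (wavelet) half of such a scheme is commonly a k-sigma significance test on the fine detail planes, e.g. those produced by the à trous sketch earlier in this section. The sketch below shows that test with a robust MAD-based sigma; the threshold k = 3 and the MAD estimator are assumptions, and the contourlet branch for filamentary structures is not shown.

```python
# Sketch: flag point-source candidates as wavelet detail coefficients
# exceeding k robust sigmas (k and the MAD estimator are assumptions).
import numpy as np

def detect_point_sources(detail_plane, k=3.0):
    """Boolean mask of pixels whose detail coefficient is significant.
    Sigma is estimated via the median absolute deviation (MAD)."""
    mad = np.median(np.abs(detail_plane - np.median(detail_plane)))
    sigma = 1.4826 * mad   # MAD -> Gaussian sigma conversion factor
    return np.abs(detail_plane) > k * sigma
```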
international conference on pattern recognition | 2006
Ahmed Nabil Belbachir; Peter Michael Goebel
In this paper, the efficiency of the JPEG2000 scheme combined with a complementary denoising process is analyzed on simulated and real dental orthopantomographic images, where the simulated images are perturbed by Poisson noise. The case of dental radiography is investigated because radiographic images are a combination of the relevant signal and a significant amount of acquisition noise, which is by definition not compressible. The noise generally behaves close to Poisson statistics, which affects the compression performance. The denoising process is supported by Monte Carlo noise modeling, which is introduced into the JPEG2000 compression scheme to improve the compression efficiency of the medical images in terms of compression ratio and image quality. Fifty selected images are denoised, and the compression ratio, using lossless and lossy JPEG2000, is reported and evaluated.
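For intuition on the Poisson-noise side of this pipeline, the sketch below simulates photon-count noise and applies the Anscombe variance-stabilizing transform, a classical pre-step when denoising Poisson-dominated images. Using Anscombe here is my illustration of the problem setting, not necessarily the paper's Monte Carlo noise model.

```python
# Sketch: Poisson-noise simulation plus the Anscombe transform, which
# maps Poisson data to approximately unit-variance Gaussian data.
import numpy as np

rng = np.random.default_rng(1)

def add_poisson_noise(img, peak=255.0):
    """Simulate photon-count noise at a given peak intensity
    (peak value is an assumption)."""
    scale = peak / img.max()
    return rng.poisson(img * scale) / scale

def anscombe(x):
    """Anscombe variance-stabilizing transform: 2 * sqrt(x + 3/8)."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=np.float64) + 3.0 / 8.0)
```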
Archive | 2014
Stefan Erber; Markus Schachinger; Thomas Mandl; Peter Michael Goebel
When muscles are used during ordinary work, the fibers of a particular muscle are recruited by the corresponding motor neurons of the spine. Of the total number of fibers in the muscle, only a part is recruited to be active according to the required force that the cortical system has estimated. As the activated fibers apply force over a period of time, they become more and more fatigued and are replaced by newly activated ones. J. Z. Liu et al. developed a technically sound theoretical model for muscle activation, fatigue, and recovery, which describes how many motor units are in the states of rest, activation, and fatigue. In this work, Liu's theoretical model is adopted and extended in order to simulate and investigate fatigue. The actual state of a muscle is estimated by the measurement of surface electromyography (sEMG), force, and torque. In the model, using sEMG, or force and torque, as input parameters, the number of motor units (MU) in the rest, activated, and fatigued states is estimated for amputees. A method for muscle state estimation for amputees is presented in this work.
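The sketch below integrates a three-compartment motor-unit model in the spirit of Liu et al. (rest to active to fatigued, with recovery back to rest) with a simple Euler step. All rate constants and the constant drive are assumed illustration values, not the paper's fitted parameters or its sEMG-driven extension.

```python
# Sketch: three-compartment rest/active/fatigued motor-unit pool in the
# spirit of Liu et al.; rates B, F, R are assumptions for illustration.
import numpy as np

def simulate(T=600.0, dt=0.1, B=2.0, F=0.05, R=0.01):
    """B: activation (drive) rate, F: fatigue rate, R: recovery rate.
    States are fractions of the motor-unit pool; their sum stays 1."""
    steps = int(T / dt)
    m_rest, m_act, m_fat = 1.0, 0.0, 0.0
    history = np.empty((steps, 3))
    for i in range(steps):
        d_act = B * m_rest - F * m_act    # recruitment minus fatigue
        d_fat = F * m_act - R * m_fat     # fatigue minus recovery
        d_rest = -B * m_rest + R * m_fat  # pool is conserved
        m_rest += dt * d_rest
        m_act += dt * d_act
        m_fat += dt * d_fat
        history[i] = (m_rest, m_act, m_fat)
    return history
```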
emerging technologies and factory automation | 2008
Peter Michael Goebel; Markus Vincze; Bernard Favre-Bulle
The categorization problem in object recognition is the assignment of semantic categories to objects or parts of objects. The best categorization performance is provided by mammalian brain functions, which motivates partly mimicking such a formation for application to machine learning and automation. Curiosity and a willingness to experiment support human recognition and detection of still unknown objects. In order to learn an understanding of the topology of such objects, they are observed from a series of different viewpoints. This results in a collection of object views that can afterwards be used by cognitive processes. State-of-the-art systems provide learning structures that are subsequently utilized for object recognition and tracking tasks. However, most of these systems aim at very specific goals in restricted domains, and therefore little room remains for learning and understanding object structures. In this paper, we propose a new robot vision model for neural categorization, which is close to our initial idea of mimicking mammalian brain functions for robot vision. We use the combinatorial solution of our cognitive framework with an embedding of a recently presented stochastic n-gram model, supported by a three-dimensional grammar model on a discrete three-dimensional lattice. Furthermore, we use ant colony optimization heuristics for collecting the transition probabilities of the n-gram model. The proposed solution is exemplified by applying the method to several object view series of 69 polytopes generated out of the five Platonic solids by truncation; generating images from 32 viewpoints each yields an object set of 2208 images. This set is further expanded to four sets perturbed by Gaussian noise with varying sigma = {0, 0.1, 0.5, 1, 2}. Finally, we show results for selected objects and conclude with an outlook on further work.
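As a rough illustration of collecting n-gram transition probabilities with ant colony heuristics, the sketch below keeps a pheromone value per bigram transition: traversal deposits pheromone, evaporation keeps the table adaptive, and probabilities are the normalized pheromone. The bigram order, evaporation rate, and symbol names are assumptions; the paper uses a three-dimensional grammar on a lattice.

```python
# Sketch: ant-colony-style accumulation of bigram transition
# probabilities (rho, tau0, and the symbols are assumptions).
from collections import defaultdict

class PheromoneBigrams:
    def __init__(self, rho=0.1, tau0=1.0):
        self.rho, self.tau0 = rho, tau0
        self.tau = defaultdict(lambda: tau0)  # pheromone per transition

    def deposit(self, path, amount=1.0):
        """Evaporate everywhere, then reinforce traversed transitions."""
        for key in list(self.tau):
            self.tau[key] *= (1.0 - self.rho)
        for a, b in zip(path, path[1:]):
            self.tau[(a, b)] += amount

    def prob(self, a, b, symbols):
        """Transition probability as normalized pheromone from state a."""
        total = sum(self.tau[(a, s)] for s in symbols)
        return self.tau[(a, b)] / total if total else 0.0

# Hypothetical usage with primitive labels from an object view series.
pb = PheromoneBigrams()
pb.deposit(["edge", "corner", "edge", "face"])
print(pb.prob("edge", "corner", ["edge", "corner", "face"]))
```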