Publication


Featured research published by Gerald Krell.


Computational Intelligence | 1997

Locally Adaptive Fuzzy Image Enhancement

Hamid R. Tizhoosh; Gerald Krell; Bernd Michaelis

In recent years, some researchers have applied the concept of fuzziness to develop new enhancement algorithms. Global fuzzy image enhancement methods, however, occasionally fail to achieve satisfactory results. In this work, we introduce locally adaptive versions of two existing fuzzy image enhancement algorithms to overcome this problem.
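
As a rough illustration of what a locally adaptive fuzzy enhancement can look like, the sketch below applies the classic fuzzy intensification (INT) operator per local block, so that each region is stretched relative to its own gray-level range. It is a generic, minimal example, not the authors' algorithm; the block size and the choice of the INT operator are assumptions.

```python
import numpy as np

def fuzzy_intensify(window):
    """Classic fuzzy INT operator on one local window (a generic choice,
    not necessarily the operator used in the paper)."""
    g_min, g_max = window.min(), window.max()
    if g_max == g_min:                        # flat region: nothing to enhance
        return window
    mu = (window - g_min) / (g_max - g_min)   # fuzzification to [0, 1]
    mu_int = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)  # contrast intensification
    return g_min + mu_int * (g_max - g_min)   # defuzzification back to gray levels

def locally_adaptive_enhance(image, block=32):
    """Apply the fuzzy operator block by block so that local statistics,
    rather than the global histogram, drive the enhancement."""
    out = image.astype(float)
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            out[r:r+block, c:c+block] = fuzzy_intensify(out[r:r+block, c:c+block])
    return out
```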


International Journal of Medical Informatics | 1998

Enhancement and associative restoration of electronic portal images in radiotherapy

Gerald Krell; Hamid R. Tizhoosh; Tilo Lilienblum; C. J. Moore; Bernd Michaelis

Electronic portal imaging devices use the high-energy treatment beam to project the body interior of the patient during radiation onto a fluorescent screen that is scanned by a camera. Because of the imaging physics, the unprocessed images are of very poor quality, but they are the only information available during treatment for observing the patient's organs. This paper presents an approach that combines an associative restoration algorithm with a fuzzy image enhancement technique. By fusing the electronic portal image (EPI) with a simulator image (SI) captured before treatment, a higher image quality is achieved than with conventional techniques.


Three-Dimensional Microscopy: Image Acquisition and Processing IV | 1997

Restoration of three-dimensional quasi-binary images from confocal microscopy and its application to dendritic trees

Andreas Herzog; Gerald Krell; Bernd Michaelis; Jizhong Wang; Werner Zuschratter; Anna Katharina Braun

For the analysis of learning processes and the underlying changes in the shape of excitatory synapses (spines), 3-D volume samples of selected dendritic segments are scanned by a confocal laser scanning microscope. For a more detailed analysis, such as the classification of spine types, binary images of higher resolution are required. Simple threshold methods have disadvantages for small structures because the microscope point spread function (PSF) causes a darkening and a spread, and the direction-dependent PSF leads to shape errors. To reconstruct structures and edge positions with a resolution smaller than one voxel, a parametric model of the dendrite and the spines is created. In our application we use the known tree-like structure of the nerve cell as a priori information. To create the model, simple geometrical elements (cylinders with hemispheres at the ends) are connected. The model can be adapted in size and position in the sub-voxel domain. To estimate the quadratic error between the microscope image and the model, the model is sampled at the same resolution as the microscope image and convolved with the microscope PSF. During an iterative process the parameters of the model are optimized. In contrast to other pixel-based methods, the number of variable parameters is much smaller, and the influence of small deviations in the microscope image (caused by the inhomogeneous biological material) is reduced.
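
The fitting loop described above (sample the geometric model on the voxel grid, blur it with the PSF, minimise the quadratic error) could be sketched roughly as follows. The example fits a single sphere with a Gaussian PSF stand-in; the real method uses cylinders with hemispherical caps and the measured, direction-dependent PSF, so all function names and parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def render_sphere(shape, center, radius):
    """Sample one geometric model element (here a sphere) on the voxel grid.
    The soft edge approximates partial voxel coverage so that position and
    radius can move in the sub-voxel domain."""
    zz, yy, xx = np.indices(shape)
    dist = np.sqrt((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
    return np.clip(radius + 0.5 - dist, 0.0, 1.0)

def model_error(params, observed, psf_sigma):
    """Quadratic error between the PSF-blurred model and the confocal stack.
    A Gaussian blur stands in for the real microscope PSF."""
    cz, cy, cx, r = params
    model = render_sphere(observed.shape, (cz, cy, cx), r)
    return np.sum((gaussian_filter(model, sigma=psf_sigma) - observed)**2)

def fit_element(observed, init=(8.0, 8.0, 8.0, 3.0), psf_sigma=(2.0, 1.0, 1.0)):
    """Iteratively optimise the few model parameters (position, radius)."""
    result = minimize(model_error, np.array(init), args=(observed, psf_sigma),
                      method="Nelder-Mead")
    return result.x
```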


Neurocomputing | 2015

Fusion paradigms in cognitive technical systems for human-computer interaction

Michael Glodek; Frank Honold; Thomas Geier; Gerald Krell; Florian Nothdurft; Stephan Reuter; Felix Schüssel; Thilo Hörnle; Klaus Dietmayer; Wolfgang Minker; Susanne Biundo; Michael Weber; Günther Palm; Friedhelm Schwenker

Recent trends in human-computer interaction (HCI) show a development towards cognitive technical systems (CTS) that provide natural and efficient operating principles. To do so, a CTS has to rely on data from multiple sensors, which must be processed and combined by fusion algorithms. Furthermore, additional sources of knowledge have to be integrated to put the observations into the correct context. Research in this field often focuses on optimizing the performance of the individual algorithms rather than reflecting the requirements of a CTS. This paper presents the information fusion principles in the CTS architectures we developed for Companion Technologies. Combining information generally goes along with increasing abstraction, coarser time granularity and higher robustness, so that large CTS architectures must perform fusion gradually on different levels, starting from sensor-based recognition and ending with highly abstract logical inferences. In our CTS application we divide information fusion approaches into three categories: perception-level fusion, knowledge-based fusion and application-level fusion. For each category, we introduce examples of characteristic algorithms. In addition, we provide a detailed account of the implementation carried out to study the interplay of the developed algorithms.
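
To make the notion of perception-level fusion concrete, the snippet below shows one generic way of combining per-modality class posteriors by reliability-weighted averaging; knowledge-based and application-level fusion would then operate on such more abstract estimates. This is a minimal illustration, not one of the algorithms from the paper, and the reliability weights are assumed inputs.

```python
import numpy as np

def perception_level_fusion(posteriors, reliabilities):
    """Reliability-weighted average of per-modality class posteriors.
    posteriors: (n_modalities, n_classes); reliabilities: (n_modalities,)."""
    p = np.asarray(posteriors, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()                       # normalise the modality weights
    fused = (w[:, None] * p).sum(axis=0)  # weighted combination per class
    return fused / fused.sum()

# Example: the video channel is trusted more than the audio channel.
fused = perception_level_fusion([[0.7, 0.3], [0.4, 0.6]], [0.8, 0.2])
```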


International Symposium on Neural Networks | 1997

Fuzzy image enhancement and associative feature matching in radiotherapy

Gerald Krell; Hamid R. Tizhoosh; T. Lilienblum; C. J. Moore; Bernd Michaelis

The electronic portal imaging device has become an important tool for the clinician to verify the shape and location of the therapy beam with respect to the patient's anatomy. Normally, a visual comparison of the real patient position relative to the beam with the planned treatment field is performed. This treatment field is defined during diagnostics and treatment planning. For this purpose, a treatment simulation takes place, as a result of which a simulator image (SI) is captured. Because of the imaging physics, the unprocessed electronic portal images (EPIs) are very poor in quality compared with the SI, which is usually an X-ray image from CT. The conventional EPI allows only a rough verification of the patient position relative to bony structures. State-of-the-art conventional enhancement techniques can be applied to EPIs to give some improvement for further visual analysis after the treatment (off-line). This paper presents an approach that combines an associative restoration algorithm with a fuzzy image enhancement technique to reach a new level of quality. The main idea of the associative restoration is to merge the EPI with the SI to generate a much better in-treatment image than is obtained by simple enhancement and to allow more reliable feature matching. First, the images are enhanced by the fuzzy image enhancement, which improves the visibility of structures such as bones; this is also important for the subsequent alignment of corresponding structures in the images. A specially structured artificial neural network that we call a modified associative memory is trained on the enhanced SI.
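
A very reduced sketch of the associative idea: train an auto-associative memory on patches of the enhanced simulator image and let the recall step pull the portal-image patches towards that learned structure. The pseudo-inverse projection used here is a generic stand-in for the paper's modified associative memory, and the patch size is an arbitrary assumption.

```python
import numpy as np

def extract_patches(img, size=8):
    """Collect non-overlapping patches as column vectors."""
    cols = []
    for r in range(0, img.shape[0] - size + 1, size):
        for c in range(0, img.shape[1] - size + 1, size):
            cols.append(img[r:r+size, c:c+size].ravel())
    return np.array(cols).T                       # (size*size, n_patches)

def train_memory(si_enhanced, size=8):
    """Auto-associative memory trained on the enhanced SI: W projects inputs
    onto the subspace spanned by the SI patches (pseudo-inverse rule)."""
    X = extract_patches(si_enhanced, size)
    return X @ np.linalg.pinv(X)

def restore_epi(epi_enhanced, W, size=8):
    """Recall: each enhanced EPI patch is replaced by its projection through
    the memory, pulling it towards the structure learned from the SI."""
    out = epi_enhanced.astype(float)
    for r in range(0, out.shape[0] - size + 1, size):
        for c in range(0, out.shape[1] - size + 1, size):
            patch = out[r:r+size, c:c+size].ravel()
            out[r:r+size, c:c+size] = (W @ patch).reshape(size, size)
    return out
```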


MPRSS'12: Proceedings of the First International Conference on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction | 2012

Fusion of fragmentary classifier decisions for affective state recognition

Gerald Krell; Michael Glodek; Axel Panning; Ingo Siegert; Bernd Michaelis; Andreas Wendemuth; Friedhelm Schwenker

Real human-computer interaction systems based on different modalities face the problem that not all information channels are always available at regular time steps. Nevertheless, an estimate of the current user state is required at any time to enable the system to react instantaneously based on the available modalities. A novel approach to decision fusion of fragmentary classifications is therefore proposed and empirically evaluated for the audio and video signals of a corpus of non-acted user behavior. It is shown that visual and prosodic analysis successfully complement each other, leading to outstanding performance of the fusion architecture.
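
The core requirement, an estimate at every time step even when channels drop out, can be illustrated with the small sketch below: at each step the available classifier posteriors are combined and smoothed over time, and when nothing arrives the estimate drifts back towards the prior. This is a generic placeholder for the proposed fusion, not the paper's actual rule; the decay factor and the uniform prior are assumptions.

```python
import numpy as np

def fuse_fragmentary(decision_stream, n_classes, decay=0.9):
    """decision_stream: per time step, a list of class posteriors or None for
    a missing modality. Returns one fused estimate per time step."""
    prior = np.full(n_classes, 1.0 / n_classes)
    state = prior.copy()
    fused = []
    for frame in decision_stream:
        available = [np.asarray(p, float) for p in frame if p is not None]
        if available:
            obs = np.mean(available, axis=0)               # combine what arrived
            state = decay * state + (1 - decay) * obs      # temporal smoothing
        else:
            state = decay * state + (1 - decay) * prior    # no input: drift to prior
        state /= state.sum()
        fused.append(state.copy())
    return np.array(fused)

# Example: audio missing in the second frame, video missing in the third.
stream = [[[0.8, 0.2], [0.6, 0.4]],
          [[0.7, 0.3], None],
          [None, [0.4, 0.6]]]
estimates = fuse_fragmentary(stream, n_classes=2)
```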


Computer Analysis of Images and Patterns | 1993

Artificial Neural Networks for Image Improvement

Bernd Michaelis; Gerald Krell

Computer vision is an important field in industrial automation. Inspection by visual means can be a powerful tool in automatic control procedures. When operating with video signals, irregularities of the optical system must often be compensated for. In particular, blur, geometric distortions and an uneven brightness distribution can lead to difficulties during further processing of an image. In the following, it is shown how the theory of neural networks can be applied to image correction. The weights of a single layer are trained for calibration. Using a suitable optimisation criterion, the correcting system for images corrupted by noise directly results in a Wiener filter. A pipeline processor simulates the neural network and operates in real time. After theoretical considerations, experimental results are given in this paper.
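
A compact way to see the connection between a trained single layer and the Wiener filter is the least-squares sketch below: the layer's weights are fitted on calibration pairs of degraded and reference patches, and with a mean-squared-error criterion the solution tends towards the Wiener filter for the given degradation and noise statistics. The function names and the patch-based formulation are assumptions for illustration.

```python
import numpy as np

def train_linear_corrector(degraded_patches, reference_patches):
    """Fit the weights of one linear layer by least squares on calibration
    data; under an MSE criterion this approaches the Wiener solution."""
    X = np.asarray(degraded_patches, dtype=float)   # (n_samples, patch_dim)
    Y = np.asarray(reference_patches, dtype=float)  # (n_samples, patch_dim)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)       # single-layer weight matrix
    return W

def correct(patch, W):
    """Apply the learned correction to a new degraded patch."""
    return np.asarray(patch, dtype=float) @ W
```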


International Scholarly Research Notices | 2013

Affine-Invariant Feature Extraction for Activity Recognition

Samy Sadek; Ayoub Al-Hamadi; Gerald Krell; Bernd Michaelis

We propose an innovative approach for human activity recognition based on affine-invariant shape representation and SVM-based feature classification. In this approach, a compact, computationally efficient affine-invariant representation of action shapes is developed using affine moment invariants. Dynamic affine invariants are derived from the 3D spatiotemporal action volume and from the average image created from the 3D volume, and are classified by an SVM classifier. On two standard benchmark action datasets (KTH and Weizmann), the approach yields promising results that compare favorably with those previously reported in the literature, while maintaining real-time performance.
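
For intuition, the sketch below computes the lowest-order affine moment invariant (Flusser/Suk I1) of a binary action silhouette, such as the average image of the spatiotemporal volume, and hints at the SVM classification step. The paper uses a larger set of invariants and its own feature pipeline, so treat the helper names and the single invariant as illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def central_moment(mask, p, q):
    """Central moment mu_pq of a binary silhouette."""
    ys, xs = np.nonzero(mask)
    x_bar, y_bar = xs.mean(), ys.mean()
    return np.sum((xs - x_bar)**p * (ys - y_bar)**q)

def affine_invariant_i1(mask):
    """First affine moment invariant: (mu20*mu02 - mu11^2) / mu00^4,
    unchanged under affine transformations of the shape."""
    mu00 = float(mask.sum())
    mu20 = central_moment(mask, 2, 0)
    mu02 = central_moment(mask, 0, 2)
    mu11 = central_moment(mask, 1, 1)
    return (mu20 * mu02 - mu11**2) / mu00**4

# Hypothetical usage: one (or several) invariants per action sample, fed to an SVM.
# X = [[affine_invariant_i1(m)] for m in silhouettes]
# clf = SVC(kernel="rbf").fit(X, labels)
```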


International Conference on Signal Processing | 2012

Multimodal affect recognition in spontaneous HCI environment

Axel Panning; Ingo Siegert; Ayoub Al-Hamadi; Andreas Wendemuth; Dietmar F. Rösner; Jörg Frommer; Gerald Krell; Bernd Michaelis

Human-computer interaction (HCI) is known to be a multimodal process. In this paper we present results of experiments on affect recognition with non-acted, affective multimodal data from the new Last Minute Corpus (LMC). This corpus is closer to real HCI applications than other known datasets, in which affective behavior is elicited in ways untypical of HCI. We utilize features from three modalities: facial expressions, prosody and gesture. The results show that even simple fusion architectures can reach respectable results compared to other approaches. Further, we were able to show that probably not all features and modalities contribute substantially to the classification process; prosody and eye-blink frequency seem to contribute the most in the analyzed dataset.


ISPRS Journal of Photogrammetry and Remote Sensing | 2002

Photogrammetric measurement of patients in radiotherapy

Roman Calow; Günther Gademann; Gerald Krell; R. Mecke; Bernd Michaelis; Nils Riefenstahl; Mathias Walke

The correct positioning of the patient is an important requirement in radiotherapy. Optical measurements seem appropriate, but special requirements for speed and accuracy must be met. A new photogrammetric system that captures patient surface data in real time is introduced. It allows reproducible patient set-up and monitors the patient's position during irradiation, even when the patient is moving. It consists of two cameras and one projector and is based on the photogrammetric evaluation of stereo image pairs obtained by projecting white-light fringes onto the patient's body. The raw data of the 3D measurement is a sequence of point clouds, which can be evaluated together with other data modalities common in radiotherapy for diagnosis and treatment planning. The system can be used as an additional verification tool for the correct positioning of the patient, and completely new applications emerge.
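
The photogrammetric core, recovering a 3D surface point from a correspondence established via the projected fringes, can be sketched with textbook linear triangulation. The projection matrices and image points are assumed inputs from calibration and fringe evaluation; this is not the full measurement pipeline described in the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one surface point from a stereo pair.
    P1, P2: 3x4 camera projection matrices; x1, x2: corresponding image
    points found with the help of the projected fringe pattern."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean 3D coordinates

# Triangulating every fringe correspondence yields the point cloud that
# describes the patient surface at one measurement instant.
```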

Collaboration


Dive into Gerald Krell's collaborations.

Top Co-Authors

Bernd Michaelis
Otto-von-Guericke University Magdeburg

Ayoub Al-Hamadi
Otto-von-Guericke University Magdeburg

Andreas Herzog
Otto-von-Guericke University Magdeburg

Andreas Wendemuth
Otto-von-Guericke University Magdeburg

Ingo Siegert
Otto-von-Guericke University Magdeburg

Mathias Walke
Otto-von-Guericke University Magdeburg

Nils Riefenstahl
Otto-von-Guericke University Magdeburg