
Publication


Featured research published by Hannes Stuke.


Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling | 2008

Efficient fiber clustering using parameterized polynomials

Jan Klein; Hannes Stuke; Bram Stieltjes; Olaf Konrad; Horst K. Hahn; Heinz-Otto Peitgen

In the past few years, fiber clustering algorithms have proven to be a very powerful tool for grouping white matter connections tracked in DTI images into anatomically meaningful bundles. They improve visualization and perception, and could enable robust quantification and comparison between individuals. However, most existing techniques either perform a coarse approximation of the fibers due to the high complexity of the underlying clustering problem, or do not allow for efficient clustering in real time. In this paper, we introduce new algorithms and data structures which overcome both problems. The fibers are represented precisely and efficiently by parameterized polynomials defining the x-, y-, and z-components individually. A two-step clustering method determines possible clusters having a Gaussian-distributed structure within one component and afterwards verifies their existence by principal component analysis (PCA) with respect to the other two components. As the PCA has to be performed only n times for a constant number of points, the clustering can be done in linear time O(n), where n denotes the number of fibers. This drastically improves on existing techniques, which have quadratic running time, and it allows for efficient whole-brain fiber clustering. Furthermore, our new algorithms can easily be used for detecting corresponding clusters in different brains without time-consuming registration methods. We demonstrate the high reliability, robustness, and efficiency of our new algorithms on several artificial and real fiber sets that include different elements of fiber architecture such as fiber kissing, crossing, and nested fiber bundles.
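The two-step idea described in the abstract, representing each fiber by per-component polynomial coefficients and then verifying cluster compactness with PCA, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function names, the polynomial degree, and the variance threshold are all assumptions chosen for demonstration.

```python
import numpy as np

def fiber_to_coeffs(points, degree=3):
    """Fit one polynomial per spatial component (x, y, z) of a fiber.

    `points` is an (n, 3) array of ordered points along the fiber; the
    curve parameter t runs uniformly from 0 to 1. Returns the stacked
    polynomial coefficients as a single feature vector.
    """
    t = np.linspace(0.0, 1.0, len(points))
    coeffs = [np.polyfit(t, points[:, c], degree) for c in range(3)]
    return np.concatenate(coeffs)

def pca_cluster_check(features, threshold=1.0):
    """Verify that a candidate cluster is compact via PCA: the variance
    along every principal axis of the coefficient vectors must stay
    below the threshold (a stand-in for the Gaussian-structure test)."""
    centered = features - features.mean(axis=0)
    # Singular values of the centered data give the variance along
    # the principal axes.
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    variances = s ** 2 / max(len(features) - 1, 1)
    return bool(np.all(variances < threshold))
```

Two nearly identical fibers produce nearly identical coefficient vectors and pass the compactness check, while a far-away fiber inflates the leading principal variance and fails it.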


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Towards user-independent DTI quantification

Jan Klein; Hannes Stuke; Jan Rexilius; Bram Stieltjes; Horst K. Hahn; Heinz-Otto Peitgen

Quantification of diffusion tensor imaging (DTI) parameters has come to play an important role in the neuroimaging, neurosurgical, and neurological community as a method to identify major white matter tracts afflicted by pathology or tracts at risk for a given surgical approach. We introduce a novel framework for reliable and robust quantification of DTI parameters, which overcomes problems of existing techniques introduced by necessary user inputs. In a first step, a hybrid clustering method is proposed that allows for extracting specific fiber bundles in a robust way. Compared to previous methods, our approach considers only local proximities of fibers and is insensitive to their global geometry. This is very useful in cases where fiber tracking of the whole brain is not available. Our technique determines the overall number of clusters iteratively, using an eigenvalue thresholding technique to detect disjoint clusters of independent fiber bundles. Afterwards, possible finer substructures are determined within each bundle based on an eigenvalue regression. In a second step, a quantification of DTI parameters of the extracted bundle is performed. We propose a method that automatically determines a 3D image where the voxel values encode the minimum distance to a reconstructed fiber. This image allows for calculating a 3D mask where each voxel within the mask lies inside an isosurface around the fibers. The mask is used for automatic classification between tissue classes (fiber, background, and partial volume), so that the quantification can be performed on one or more of these classes. This can be done per slice, or a single DTI parameter can be determined for the whole volume covered by the isosurface. Our experimental tests confirm that major white matter fiber tracts may be robustly determined and quantified automatically.
A great advantage of our framework is its easy integration into existing quantification applications, so that uncertainties can be reduced and higher intrarater as well as interrater reliabilities can be achieved.
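The distance-image step can be illustrated in miniature (here in 2D for brevity): each voxel stores its minimum distance to the reconstructed fiber points, and thresholding that image yields the mask over which a parameter map is averaged. A brute-force sketch under assumed names, not the paper's implementation:

```python
import numpy as np

def distance_image(shape, fiber_points):
    """Brute-force distance image: each voxel stores the minimum
    Euclidean distance to any reconstructed fiber point."""
    grid = np.indices(shape).reshape(len(shape), -1).T.astype(float)
    # Pairwise distances between every voxel center and every fiber point.
    d = np.linalg.norm(grid[:, None, :] - fiber_points[None, :, :], axis=2)
    return d.min(axis=1).reshape(shape)

def quantify_in_isosurface(param_map, dist_img, radius):
    """Average a parameter map (e.g., FA) over the voxels inside the
    isosurface of the given radius around the fibers."""
    mask = dist_img <= radius
    return float(param_map[mask].mean())
```

For example, with a straight fiber along one image row, radius 0 selects exactly the fiber voxels, while a larger radius dilutes the average with neighboring background voxels.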


PLOS Computational Biology | 2017

Psychotic Experiences and Overhasty Inferences Are Related to Maladaptive Learning

Heiner Stuke; Hannes Stuke; Veith Weilnhammer; Katharina Schmack

Theoretical accounts suggest that an alteration in the brain’s learning mechanisms might lead to overhasty inferences, resulting in psychotic symptoms. Here, we sought to elucidate the suggested link between maladaptive learning and psychosis. Ninety-eight healthy individuals with varying degrees of delusional ideation and hallucinatory experiences performed a probabilistic reasoning task that allowed us to quantify overhasty inferences. Replicating previous results, we found a relationship between psychotic experiences and overhasty inferences during probabilistic reasoning. Computational modelling revealed that the behavioral data was best explained by a novel computational learning model that formalizes the adaptiveness of learning by a non-linear distortion of prediction error processing, where an increased non-linearity implies a growing resilience against learning from surprising and thus unreliable information (large prediction errors). Most importantly, a decreased adaptiveness of learning predicted delusional ideation and hallucinatory experiences. Our current findings provide a formal description of the computational mechanisms underlying overhasty inferences, thereby empirically substantiating theories that link psychosis to maladaptive learning.
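The key modelling idea, a non-linear distortion of prediction error processing that dampens learning from large (surprising, hence unreliable) errors, can be sketched as a saturating variant of the delta rule. The `tanh` form and the parameter names here are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def distorted_update(belief, outcome, lr=0.3, nonlinearity=2.0):
    """One delta-rule learning step with a saturating distortion of the
    prediction error: the larger `nonlinearity`, the more the update
    discounts large (surprising) prediction errors. As `nonlinearity`
    approaches 0, the rule reduces to plain delta-rule learning."""
    pe = outcome - belief
    distorted = np.tanh(nonlinearity * pe) / nonlinearity
    return belief + lr * distorted
```

Under this sketch, a strongly non-linear learner updates its belief less after a surprising outcome than a plain delta-rule learner would, which is the adaptiveness-of-learning notion the model formalizes; a decreased non-linearity corresponds to less resilience against unreliable information.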


Archive | 2010

On the Reliability of Diffusion Neuroimaging

Jan Klein; Sebastiano Barbieri; Hannes Stuke; Miriam H. A. Bauer; Jan Egger; Christopher Nimsky; Horst K. Hahn

Over the last few years, diffusion imaging techniques like DTI, DSI, or Q-Ball have received increasing attention, especially in the neuroimaging, neurological, and neurosurgical community. An explicit geometrical reconstruction of major white matter tracts has become available through fiber tracking based on diffusion-weighted images. The goal of virtually all fiber tracking algorithms is to compute results that are analogous to what physicians or radiologists expect, and an extensive amount of research has therefore been focused on this reconstruction. However, the results of fiber tracking and quantification algorithms are approximations of reality due to limited spatial resolution (typically a few millimeters), model assumptions (e.g., diffusion assumed to be Gaussian distributed), user-defined parameter settings, and physical imaging artifacts resulting from diffusion sequences. In this book chapter, we address the problem of uncertainty in diffusion imaging and show possible solutions for minimizing, measuring, and visualizing this uncertainty. The possibility of fiber tracking (FT) and the quantification of diffusion parameters has established an abundance of new clinically useful applications and research studies that focus on neurosurgical planning (Nimsky et al., 2005), monitoring the progression of diseases such as amyotrophic lateral sclerosis (ALS) or multiple sclerosis (MS) (Griffin et al., 2001), establishing surrogate markers used in assessing the grade of brain tumors (Barboriak, 2003), or initiating therapies to ensure the best possible development of children (Pul et al., 2006). Several studies have shown that modified values of fractional anisotropy (FA), relative anisotropy, or diffusion strength (ADC) are indicators of diseases that affect white matter tissue.
MS lesions have been investigated by ROI-based analysis and voxel-wise FA comparisons, by which FA changes have been shown to occur in areas containing lesions and in areas of normal-appearing white matter. Moreover, methods for tract-based quantification have been developed for which parameters are computed depending on the local curvature or geodesic distance from a user-defined origin. These methods allow one to automatically determine DTI-derived parameters along fiber bundles and have already been used to mirror disease progression and executive function in MS (Fink et al., 2009). Probabilistic methods (Friman et al., 2006) allow for tracking in regions of low anisotropy and are also used to provide a quantitative measure of the probability that a connection exists between two regions. These approaches aim at visualizing the uncertainty present in the data.
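Fractional anisotropy (FA), one of the parameters named above, has a standard closed form in terms of the diffusion tensor's three eigenvalues. A minimal implementation of that standard formula (not code from the chapter):

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """Fractional anisotropy of a diffusion tensor from its three
    eigenvalues: 0 for fully isotropic diffusion, approaching 1 for
    diffusion along a single direction."""
    ev = np.asarray(eigvals, dtype=float)
    md = ev.mean()  # mean diffusivity
    num = np.sqrt(((ev - md) ** 2).sum())
    den = np.sqrt((ev ** 2).sum())
    return float(np.sqrt(1.5) * num / den) if den > 0 else 0.0
```

Equal eigenvalues (a spherical diffusion profile) give FA = 0, while a single dominant eigenvalue (a stick-like profile, as inside a coherent fiber bundle) gives FA = 1.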


Frontiers in Aging Neuroscience | 2017

Neuropsychological Testing and Machine Learning Distinguish Alzheimer’s Disease from Other Causes for Cognitive Impairment

Pavel Gurevich; Hannes Stuke; Andreas Kastrup; Heiner Stuke; Helmut Hildebrandt

With promising results in recent treatment trials for Alzheimer's disease (AD), it becomes increasingly important to distinguish AD at early stages from other causes of cognitive impairment. However, existing diagnostic methods are either invasive (lumbar punctures, PET) or inaccurate (Magnetic Resonance Imaging, MRI). This study investigates the potential of neuropsychological testing (NPT) to specifically identify those patients with possible AD among a sample of 158 patients with Mild Cognitive Impairment (MCI) or dementia from various causes. Patients were divided into an early-stage and a late-stage group according to their Mini Mental State Examination (MMSE) score and labeled as AD or non-AD patients based on a post-mortem validated threshold for the ratio between total tau and beta amyloid in the cerebrospinal fluid (CSF; total tau/Aβ(1–42) ratio, TB ratio). All patients completed the established Consortium to Establish a Registry for Alzheimer's Disease Neuropsychological Assessment Battery (CERAD-NAB) and two additional newly developed neuropsychological tests (recollection and verbal comprehension) that aimed at carving out specific Alzheimer-typical deficits. Based on these test results, an underlying AD (pathologically increased TB ratio) was predicted with a machine learning algorithm. To this end, the algorithm was trained in each case on all patients except the one to predict (leave-one-out validation). In the total group, 82% of the patients could be correctly identified as AD or non-AD. In the early group, with mild general cognitive impairment, classification accuracy increased to 89%. NPT thus seems capable of discriminating between AD patients and patients with cognitive impairment due to other neurodegenerative or vascular causes with high accuracy, and may be used for screening in clinical routine and drug studies, especially in the early course of the disease.
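The leave-one-out scheme described in the abstract (train on all patients except the one to predict) can be sketched generically. The nearest-centroid classifier below is a simple stand-in for the study's actual machine learning algorithm, and all names are illustrative:

```python
import numpy as np

def nearest_centroid_predict(train_X, train_y, x):
    """Predict the label of x as the class whose training centroid
    (mean feature vector) lies closest to x."""
    labels = np.unique(train_y)
    centroids = np.array([train_X[train_y == l].mean(axis=0) for l in labels])
    return labels[np.argmin(np.linalg.norm(centroids - x, axis=1))]

def leave_one_out_accuracy(X, y):
    """Leave-one-out validation: every sample is predicted by a model
    trained on all the other samples, and accuracy is the fraction of
    correct predictions."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i  # hold out sample i
        correct += nearest_centroid_predict(X[mask], y[mask], X[i]) == y[i]
    return correct / len(X)
```

Because each held-out sample never influences the model that classifies it, leave-one-out accuracy is an almost-unbiased estimate of how the classifier would perform on a new patient, which matters for small clinical samples like the one in this study.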


PLOS Computational Biology | 2017

Correction: Psychotic Experiences and Overhasty Inferences Are Related to Maladaptive Learning

Heiner Stuke; Hannes Stuke; Veith Weilnhammer; Katharina Schmack

[This corrects the article DOI: 10.1371/journal.pcbi.1005328.]


Discrete and Continuous Dynamical Systems - Series S | 2014

Special asymptotics for a critical fast diffusion equation

Hannes Stuke; Marek Fila


Archive | 2017

Learning uncertainty in regression tasks by artificial neural networks

Pavel Gurevich; Hannes Stuke


arXiv: Statistics Theory | 2018

Gradient conjugate priors and deep neural networks.

Pavel Gurevich; Hannes Stuke


Collaboration


Dive into Hannes Stuke's collaborations.

Top Co-Authors

Pavel Gurevich (Free University of Berlin)
Jan Klein (University of Paderborn)
Bram Stieltjes (German Cancer Research Center)
Marek Fila (Comenius University in Bratislava)