Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sascha Seifert is active.

Publication


Featured research published by Sascha Seifert.


Proceedings of SPIE | 2009

Hierarchical parsing and semantic navigation of full body CT data

Sascha Seifert; Adrian Barbu; S. Kevin Zhou; David Liu; Johannes Feulner; Martin Huber; Michael Suehling; Alexander Cavallaro; Dorin Comaniciu

Whole body CT scanning is a common diagnostic technique for discovering early signs of metastasis or for differential diagnosis. Automatic parsing and segmentation of multiple organs, together with semantic navigation inside the body, can help the clinician obtain an accurate diagnosis efficiently. However, dealing with the large amount of data in a full body scan is challenging, and techniques are needed for the fast detection and segmentation of organs, e.g., heart, liver, kidneys, bladder, prostate, and spleen, and body landmarks, e.g., bronchial bifurcation, coccyx tip, sternum, lung tips. The problem becomes even more challenging if partial body scans are used, where not all organs are present. We propose a new approach to this problem, in which a network of 1D and 3D landmarks is trained to quickly parse the 3D CT data and estimate which organs and landmarks are present as well as their most probable locations and boundaries. Using this approach, the segmentation of seven organs and the detection of 19 body landmarks can be obtained in about 20 seconds with state-of-the-art accuracy; it has been validated on 80 full or partial body CT scans.
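The idea of steering organ detection from already-detected landmarks can be illustrated with a small sketch. This is not the paper's actual method (which trains a network of 1D and 3D landmark detectors); it only shows, for hypothetical z-coordinates only, how learned landmark-to-organ offsets predict the most probable organ locations and thereby narrow the search region for subsequent detectors:

```python
from statistics import mean

def learn_offsets(training_cases):
    """Learn the mean landmark-to-organ offset (z-coordinate only)."""
    offsets = {}
    for case in training_cases:
        for organ, z_organ in case["organs"].items():
            offsets.setdefault(organ, []).append(z_organ - case["landmark_z"])
    return {organ: mean(deltas) for organ, deltas in offsets.items()}

def predict_organ_locations(landmark_z, offsets):
    """Predict each organ's most probable z-coordinate from one detected landmark."""
    return {organ: landmark_z + delta for organ, delta in offsets.items()}

# Hypothetical training cases: one anchor landmark and two organ centres per scan.
training = [
    {"landmark_z": 100, "organs": {"liver": 160, "kidney": 190}},
    {"landmark_z": 110, "organs": {"liver": 172, "kidney": 198}},
]
offsets = learn_offsets(training)
predicted = predict_organ_locations(105, offsets)
```

In a real system the predicted locations would only initialize local detectors, which then refine position and boundary.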


Medical Image Analysis | 2013

Spine detection in CT and MR using iterated marginal space learning

B. Michael Kelm; Michael Wels; S. Kevin Zhou; Sascha Seifert; Michael Suehling; Yefeng Zheng; Dorin Comaniciu

Examinations of the spinal column with both Magnetic Resonance (MR) imaging and Computed Tomography (CT) often require a precise three-dimensional positioning, angulation, and labeling of the spinal disks and the vertebrae. A fully automatic and robust approach is a prerequisite for automated scan alignment as well as for the segmentation and analysis of spinal disks and vertebral bodies in Computer Aided Diagnosis (CAD) applications. In this article, we present a novel method that combines Marginal Space Learning (MSL), a recently introduced concept for efficient discriminative object detection, with a generative anatomical network that incorporates relative pose information for the detection of multiple objects. It is used to simultaneously detect and label the spinal disks. While a novel iterative version of MSL is used to quickly generate candidate detections comprising position, orientation, and scale of the disks with high sensitivity, the anatomical network selects the most likely candidates using a learned prior on the individual nine-dimensional transformation spaces. Finally, we propose an optional case-adaptive segmentation approach that allows the spinal disks and vertebrae to be segmented in MR and CT, respectively. Since the proposed approaches are learning-based, they can be trained for MR and CT alike. Experimental results based on 42 MR and 30 CT volumes show that our system not only achieves superior accuracy but is also among the fastest systems of its kind in the literature. On the MR data set, the spinal disks of a whole spine are detected in 11.5 s on average, with 98.6% sensitivity and 0.073 false positive detections per volume. On the CT data, a comparable sensitivity of 98.0% with 0.267 false positives is achieved. Detected disks are localized with an average position error of 2.4 mm/3.2 mm and an angular error of 3.9°/4.5° in MR/CT, which is close to the employed hypothesis resolution of 2.1 mm and 3.3°.
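The core idea of Marginal Space Learning, searching a low-dimensional marginal space first and extending only the best candidates with further pose parameters, can be sketched as a candidate-pruning cascade. The `toy_score` function, parameter grids, and `keep` value below are hypothetical stand-ins for the trained detectors and their search spaces:

```python
def marginal_space_search(score, positions, orientations, scales, keep=10):
    """MSL-style cascade: search the position space first, keep only the
    best candidates, then extend them with orientation, then with scale."""
    cands = sorted(((p,) for p in positions), key=lambda c: -score(*c))[:keep]
    cands = sorted((c + (o,) for c in cands for o in orientations),
                   key=lambda c: -score(*c))[:keep]
    cands = sorted((c + (s,) for c in cands for s in scales),
                   key=lambda c: -score(*c))[:keep]
    return cands[0]  # full pose of the best candidate

# Hypothetical detector score peaking at position 5, orientation 30, scale 2.
def toy_score(p, o=None, s=None):
    v = -abs(p - 5)
    if o is not None:
        v -= abs(o - 30)
    if s is not None:
        v -= abs(s - 2)
    return v
```

The point of the cascade is that the full position × orientation × scale grid is never enumerated; only the `keep` best partial candidates are extended at each stage.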


Proceedings of SPIE | 2010

Semantic annotation of medical images

Sascha Seifert; Michael Kelm; Manuel Moeller; Saikat Mukherjee; Alexander Cavallaro; Martin Huber; Dorin Comaniciu

Diagnosis and treatment planning for patients can be significantly improved by comparing with clinical images of other patients with similar anatomical and pathological characteristics. This requires the images to be annotated using a common vocabulary from clinical ontologies. Current approaches to such annotation are typically manual, consume extensive clinician time, and cannot be scaled to the large amounts of imaging data in hospitals. On the other hand, automated image analysis, while very scalable, does not leverage standardized semantics and thus cannot be reused across specific applications. In our work, we describe an automated and context-sensitive workflow based on an image parsing system complemented by an ontology-based context-sensitive annotation tool. A unique characteristic of our framework is that it brings together the diverse paradigms of machine-learning-based image analysis and ontology-based modeling for accurate and scalable semantic image annotation.


IEEE Transactions on Medical Imaging | 2009

A Knowledge-Based Approach to Soft Tissue Reconstruction of the Cervical Spine

Sascha Seifert; Irina Wächter; Gottfried Schmelzle; Rüdiger Dillmann

For surgical planning in spine surgery, the segmentation of anatomical structures is a prerequisite. Past efforts focused on the segmentation of vertebrae from tomographic data, but soft tissue structures have, for the most part, been neglected. Only sparse research work has been done for the spinal cord and the trachea, and, as far as the authors are aware, there is no work on segmenting intervertebral discs. Therefore, a fully automatic reconstruction algorithm for the most relevant cervical structures is presented. It is implemented as a straightforward process, using anatomical knowledge that is, in concept, transferable to other tissues of the human body. No seed points are required, since the discs, as initial landmarks, are located via an object recognition approach. The spinal musculature is reconstructed by surface analysis on already segmented vertebrae, so it can be taken into account in a biomechanical simulation. The segmentation results of our approach showed 91% accordance with expert segmentations, and the computation time is less than 1 min on a standard PC. Since the presented system follows some general concepts, this approach may also be considered a step towards full-body segmentation of the human body.


Proceedings of SPIE | 2011

Combined semantic and similarity search in medical image databases

Sascha Seifert; Marisa Thoma; Florian Stegmaier; Matthias Hammon; Martin Kramer; Martin Huber; Hans-Peter Kriegel; Alexander Cavallaro; Dorin Comaniciu

The current diagnostic process at hospitals is mainly based on reviewing and comparing images from multiple time points and modalities in order to monitor disease progression over a period of time. For ambiguous cases, however, the radiologist relies heavily on reference literature or a second opinion. Although a vast amount of acquired images is stored in PACS systems and could be reused for decision support, these data sets suffer from weak search capabilities. Thus, we present a search methodology that enables the physician to carry out intelligent search scenarios on medical image databases, combining ontology-based semantic search with appearance-based similarity search. Taking the semantic context into account eliminated 12% of the top-ten hits that would otherwise have been returned.
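One simple way to combine appearance-based similarity with a semantic filter can be sketched as follows. The representation is hypothetical, not the paper's: each database entry is assumed to carry a feature vector and a set of ontology concepts, and hits sharing no concept with the query are dropped from the appearance ranking:

```python
def combined_search(query_vec, query_concepts, database, k=10):
    """Rank entries by appearance distance, then drop hits whose
    annotations share no ontology concept with the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(database, key=lambda item: dist(query_vec, item["vec"]))
    return [item for item in ranked if query_concepts & item["concepts"]][:k]

# Hypothetical database: id, appearance feature vector, semantic annotations.
db = [
    {"id": 1, "vec": [0.0, 0.0], "concepts": {"liver"}},
    {"id": 2, "vec": [0.1, 0.0], "concepts": {"lung"}},
    {"id": 3, "vec": [1.0, 1.0], "concepts": {"liver", "lesion"}},
]
hits = combined_search([0.0, 0.0], {"liver"}, db)
```

Here entry 2 is nearest in appearance but semantically irrelevant, which is exactly the kind of hit the combined search is meant to eliminate.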


Proceedings of SPIE | 2009

Estimating the body portion of CT volumes by matching histograms of visual words

Johannes Feulner; S. Kevin Zhou; Sascha Seifert; Alexander Cavallaro; Joachim Hornegger; Dorin Comaniciu

Being able to automatically determine which portion of the human body is shown by a CT volume image offers various possibilities, such as automatic labeling of images or initializing subsequent image analysis algorithms. This paper presents a method that takes a CT volume as input and outputs the vertical body coordinates of its top and bottom slice in a normalized coordinate system whose origin and unit length are determined by anatomical landmarks. Each slice of a volume is described by a histogram of visual words: feature vectors consisting of an intensity histogram and a SURF descriptor are first computed on a regular grid and then classified into the closest visual words to form a histogram. The vocabulary of visual words is a quantization of the feature space, obtained by offline clustering of a large number of feature vectors from prototype volumes into visual words (or cluster centers) via the k-means algorithm. For a set of prototype volumes whose body coordinates are known, the slice descriptions are computed in advance. The body coordinates of a test volume are computed by a 1D rigid registration of the test volume with the prototype volumes in the axial direction. The similarity of two slices is measured by comparing their histograms of visual words. Cross-validation on a dataset of 44 volumes proved the robustness of the results. Even for test volumes of ca. 20 cm height, the average error was 15.8 mm.
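The slice description by histograms of visual words can be sketched as follows, assuming the visual vocabulary (cluster centres from k-means) has already been learned offline. Histogram intersection is used as the comparison measure here for illustration; the paper does not pin down this detail:

```python
def nearest_word(feature, vocab):
    """Index of the closest visual word (cluster centre) to a feature vector."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocab)), key=lambda i: d2(feature, vocab[i]))

def slice_histogram(features, vocab):
    """Describe one axial slice by a histogram of visual words:
    each grid feature is assigned to its nearest word and counted."""
    hist = [0] * len(vocab)
    for f in features:
        hist[nearest_word(f, vocab)] += 1
    return hist

def histogram_similarity(h1, h2):
    """Histogram intersection, normalised by the first histogram's mass."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(1, sum(h1))
```

The 1D registration then slides the test volume's stack of histograms along a prototype's stack and keeps the offset with the highest summed slice similarity.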


IEEE Transactions on Medical Imaging | 2011

A Probabilistic Model for Automatic Segmentation of the Esophagus in 3-D CT Scans

Johannes Feulner; Shaohua Kevin Zhou; Matthias Hammon; Sascha Seifert; Martin Huber; Dorin Comaniciu; Joachim Hornegger; Alexander Cavallaro

Being able to segment the esophagus without user interaction from 3-D CT data is of high value to radiologists during oncological examinations of the mediastinum. The segmentation can serve as a guideline and prevent confusion with pathological tissue. However, limited contrast to surrounding structures and a versatile shape and appearance make segmentation a challenging problem. This paper presents a multistep method. First, a detector that is trained to learn a discriminative model of the appearance is combined with an explicit model of the distribution of respiratory and esophageal air. In the next step, prior shape knowledge is incorporated using a Markov chain model. We follow a “detect and connect” approach to obtain the maximum a posteriori estimate of the approximate esophagus shape from hypotheses about the esophagus contour in axial image slices. Finally, the surface of this approximation is non-rigidly deformed to better fit the boundary of the organ. The method is compared to an alternative approach that uses a particle filter instead of a Markov chain to infer the approximate esophagus shape, to the performance of a human observer, and to state-of-the-art methods, which are all semiautomatic. Cross-validation on 144 CT scans showed that the Markov chain based approach clearly outperforms the particle filter. It segments the esophagus with a mean error of 1.80 mm in less than 16 s on a standard PC. This is only 1 mm above the interobserver variability and can compete with the results of previously published semiautomatic methods.
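The “detect and connect” step can be illustrated with a Viterbi-style dynamic program over per-slice hypotheses. This is a deliberate simplification of the paper's model: each contour hypothesis is reduced to a 1D centre coordinate with a detector probability, and a Gaussian log-transition with a hypothetical `sigma` stands in for the learned Markov chain:

```python
import math

def map_esophagus_path(slice_hyps, sigma=5.0):
    """MAP centre line from per-slice hypotheses (centre, detector prob),
    computed with the Viterbi algorithm over a simple Markov chain."""
    def log_trans(c_prev, c_next):  # Gaussian penalty on centre jumps
        return -0.5 * ((c_next - c_prev) / sigma) ** 2
    best = [math.log(p) for _, p in slice_hyps[0]]
    back = []
    for prev, hyps in zip(slice_hyps, slice_hyps[1:]):
        row, ptr = [], []
        for c, p in hyps:
            scores = [best[k] + log_trans(cp, c) for k, (cp, _) in enumerate(prev)]
            k = max(range(len(scores)), key=scores.__getitem__)
            row.append(scores[k] + math.log(p))
            ptr.append(k)
        best, back = row, back + [ptr]
    j = max(range(len(best)), key=best.__getitem__)
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    return [slice_hyps[i][k][0] for i, k in enumerate(reversed(path))]
```

Note how a high-probability but implausibly distant hypothesis is rejected in favour of the smooth chain, which is the intended effect of the shape prior.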


MCBR-CDS'11: Proceedings of the Second MICCAI International Conference on Medical Content-Based Retrieval for Clinical Decision Support | 2011

A discriminative distance learning-based CBIR framework for characterization of indeterminate liver lesions

Maria Jimena Costa; Alexey Tsymbal; Matthias Hammon; Alexander Cavallaro; Michael Sühling; Sascha Seifert; Dorin Comaniciu

In this paper we propose a novel learning-based CBIR method for fast content-based retrieval of similar 3D images based on the intrinsic Random Forest (RF) similarity. Furthermore, we allow the combination of flexible user-defined semantics (in the form of retrieval contexts and high-level concepts) and appearance-based (low-level) features in order to yield search results that are both meaningful to the user and relevant in the given clinical case. Due to the complexity and clinical relevance of the domain, we have chosen to apply the framework to the retrieval of similar 3D CT hepatic pathologies, where search results based solely on similarity of low-level features would rarely be clinically meaningful. The impact of high-level concepts on the quality and relevance of the retrieval results has been measured and is discussed for three different setups. A comparison study with the commonly used canonical Euclidean distance is presented and discussed as well.
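The intrinsic Random Forest similarity is commonly defined as the fraction of trees in which two samples end up in the same leaf. A minimal sketch over precomputed per-tree leaf assignments (the forest itself is assumed to be trained elsewhere, and the retrieval helper is hypothetical):

```python
def rf_similarity(leaves_a, leaves_b):
    """Fraction of trees in which two cases fall into the same leaf."""
    return sum(a == b for a, b in zip(leaves_a, leaves_b)) / len(leaves_a)

def retrieve(query_leaves, database, k=3):
    """Rank database cases by decreasing forest similarity to the query."""
    return sorted(database,
                  key=lambda case: -rf_similarity(query_leaves, case["leaves"]))[:k]
```

Because the forest is trained discriminatively, two lesions count as similar when the learned splits treat them alike, rather than when they happen to be close in raw feature space.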


Computerized Medical Imaging and Graphics | 2011

Comparing axial CT slices in quantized N-dimensional SURF descriptor space to estimate the visible body region

Johannes Feulner; S. Kevin Zhou; Elli Angelopoulou; Sascha Seifert; Alexander Cavallaro; Joachim Hornegger; Dorin Comaniciu

In this paper, a method is described to automatically estimate the visible body region of a computed tomography (CT) volume image. In order to quantify the body region, a body coordinate (BC) axis is used that runs in the longitudinal direction. Its origin and unit length are patient-specific and depend on anatomical landmarks. The body region of a test volume is estimated by registering it, only along the longitudinal axis, to a set of reference CT volume images with known body coordinates. During these 1D registrations, an axial image slice of the test volume is compared to an axial slice of a reference volume by extracting a descriptor from both slices and measuring the similarity of the descriptors. A slice descriptor consists of histograms of visual words. Visual words are code words of a quantized feature space and can be thought of as classes of image patches with similar appearance. A slice descriptor is formed by sampling a slice on a regular 2D grid and extracting a Speeded Up Robust Features (SURF) descriptor at each sample point. The codebook, or visual vocabulary, is generated in a training step by clustering SURF descriptors. Each SURF descriptor extracted from a slice is classified into the closest visual word (or cluster center) and counted in a histogram. A slice is finally described by a spatial pyramid of such histograms. We introduce an extension of the SURF descriptor to an arbitrary number of dimensions (N-SURF). Here, we make use of 2-SURF and 3-SURF descriptors. Cross-validation on 84 datasets shows the robustness of the results. The body portion can be estimated with an average error of 15.5 mm within 9 s. Possible applications of this method are automatic labeling of medical image databases and initialization of subsequent image analysis algorithms.
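Two spatial pyramids of visual-word histograms can be compared with a weighted sum of per-cell histogram intersections. The weighting below, doubling at each finer level so that spatially localized matches count more, is one common choice in spatial pyramid matching and not necessarily the one used in the paper:

```python
def intersect(h1, h2):
    """Unnormalised histogram intersection."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def pyramid_match(pyr_a, pyr_b):
    """Compare two spatial pyramids: each level is a list of per-cell
    histograms; finer levels (more cells) receive higher weight."""
    top = len(pyr_a) - 1
    total = 0.0
    for level, (cells_a, cells_b) in enumerate(zip(pyr_a, pyr_b)):
        weight = 1.0 / (2 ** (top - level))
        total += weight * sum(intersect(ha, hb)
                              for ha, hb in zip(cells_a, cells_b))
    return total
```

A coarse level with one cell captures the global word distribution, while finer levels reward matches that occur in the same part of the slice.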


Towards the Internet of Services | 2014

The THESEUS Use Cases

Florian Kuhlmann; Jan Hannemann; Myriam Traub; Christoph Böhme; Sonja Zillner; Alexander Cavallaro; Sascha Seifert; Björn Decker; Ralph Traphöner; Sven Kayser; Udo Lindemann; Stefan Prasse; Götz Marczinski; Ralf Grützner; Axel Fasse; Daniel Oberle

The THESEUS research program assembled key companies with market power from a broad range of sectors to jointly develop the innovative products that will enable the knowledge society. Six use cases were carried out to demonstrate applications based on the developments of the THESEUS Core Technology Cluster. In this article, we give a short overview of selected results from each use case.

Collaboration


Dive into Sascha Seifert's collaborations.

Top Co-Authors

Alexander Cavallaro

University of Erlangen-Nuremberg

Matthias Hammon

University of Erlangen-Nuremberg

Johannes Feulner

University of Erlangen-Nuremberg

Joachim Hornegger

University of Erlangen-Nuremberg
