Publications


Featured research published by Kai Rothaus.


Image and Vision Computing | 2009

Separation of the retinal vascular graph in arteries and veins based upon structural knowledge

Kai Rothaus; Xiaoyi Jiang; Paul Rhiem

The vascular structure of the retina consists of two kinds of vessels: arteries and veins. Together these vessels form the vascular graph. In this paper, we present an approach to separating arteries and veins based on a pre-segmentation and a few hand-labelled vessel segments. We use a rule-based method to propagate the vessel labels through the vascular graph. The anatomical characteristics of the vessels on the retina are modelled as a dual constraint graph. We embed this task as a double-layered constrained search problem steered by a heuristic AC-3 algorithm to overcome the NP-hard computational complexity. Results are presented on vascular graphs generated from manual as well as automatic segmentations.
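
A minimal Python sketch of the label-propagation idea follows. It is not the authors' implementation: the graph encoding and relation names are assumptions, conflicts simply raise an error, and the double-layered constraint search with the heuristic AC-3 algorithm is omitted.

from collections import deque

def propagate_labels(adjacency, seeds):
    """Spread artery/vein labels through a vascular graph.

    adjacency: dict mapping a segment id to a list of (neighbour, relation)
        pairs, where relation is 'same' (segments meet at a bifurcation,
        same vessel type) or 'cross' (segments cross, opposite vessel type).
    seeds: a few hand-labelled segments, e.g. {3: 'artery', 17: 'vein'}.
    """
    labels = dict(seeds)
    queue = deque(seeds)
    flip = {'artery': 'vein', 'vein': 'artery'}
    while queue:
        segment = queue.popleft()
        for neighbour, relation in adjacency.get(segment, []):
            expected = labels[segment] if relation == 'same' else flip[labels[segment]]
            if neighbour not in labels:
                labels[neighbour] = expected
                queue.append(neighbour)
            elif labels[neighbour] != expected:
                raise ValueError(f'conflicting labels at segment {neighbour}')
    return labels

# toy graph: segment 1 branches off the labelled artery 0, segment 2 crosses it
graph = {0: [(1, 'same'), (2, 'cross')], 1: [(0, 'same')], 2: [(0, 'cross')]}
print(propagate_labels(graph, {0: 'artery'}))  # {0: 'artery', 1: 'artery', 2: 'vein'}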


International Conference on Pattern Recognition | 2008

A random walker based approach to combining multiple segmentations

Pakaket Wattuya; Kai Rothaus; J.-S. Prassni; Xiaoyi Jiang

In this paper we propose an algorithm for combining multiple image segmentations to achieve a final improved segmentation. In contrast to previous works, we consider the most general class of segmentation combination, i.e., each input segmentation has an arbitrary number of regions. Our approach is based on a random walker segmentation algorithm which is able to provide high-quality segmentation starting from manually specified seeds. We automatically generate such seeds from an input segmentation ensemble. An information-theoretic optimality criterion is proposed to automatically determine the final number of regions. The experimental results on 300 images with manual ground truth segmentation clearly show the effectiveness of our combination approach.
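
A rough sketch of such a combination scheme, built on scikit-image's random_walker, is given below. The seed-generation rule (eroded cores of the cells on which all input segmentations agree) and the parameters are assumptions, and the information-theoretic selection of the final number of regions is not reproduced.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import random_walker

def combine_segmentations(image, segmentations, core_size=3):
    """Combine an ensemble of label images of a grayscale image.

    Each combination of input labels defines an 'intersection cell'; the
    eroded core of every cell is used as a seed, and the random walker
    assigns the remaining (unlabelled) pixels.
    """
    stack = np.stack(segmentations).astype(np.int64)
    dims = [int(s.max()) + 1 for s in segmentations]
    cells = np.ravel_multi_index(tuple(stack), dims)
    _, cells = np.unique(cells, return_inverse=True)
    cells = cells.reshape(image.shape) + 1              # cell labels start at 1

    seeds = np.zeros_like(cells)
    for label in np.unique(cells):
        core = ndi.binary_erosion(cells == label, iterations=core_size)
        seeds[core] = label                              # 0 = unlabelled for the walker
    return random_walker(image, seeds)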


Anatomical Record: Advances in Integrative Anatomy and Evolutionary Biology | 2007

Statistical Analysis of the Angle of Intrusion of Porcine Ventricular Myocytes from Epicardium to Endocardium Using Diffusion Tensor Magnetic Resonance Imaging

Peter Schmid; Paul P. Lunkenheimer; Klaus Redmann; Kai Rothaus; Xiaoyi Jiang; Colin W. Cryer; Thomas Jaermann; Peter Niederer; Peter Boesiger; Robert H. Anderson

Pairs of cylindrical knives were used to punch semicircular slices from the left basal, sub-basal, equatorial, and apical ventricular wall of porcine hearts. The sections extended from the epicardium to the endocardium. Their semicircular shape compensated for the depth-related changing orientation of the myocytes relative to the equatorial plane. The slices were analyzed by diffusion tensor magnetic resonance imaging. The primary eigenvector of the diffusion tensor was determined in each pixel to calculate the number and angle of intrusion of the long axis of the aggregated myocytes relative to the epicardial surface. Arrays of axially sectioned aggregates were found in which 53% of the approximately two million segments evaluated intruded up to ±15°, 40% exhibited an angle of intrusion between ±15° and ±45°, and 7% exceeded an angle of ±45°, the positive sign thereby denoting an epi- to endocardial spiral in a clockwise direction seen from the apex, while a negative sign denotes an anticlockwise spiral from the epicardium to the endocardium. In the basal and apical slices, the greater number of segments intruded in a positive direction, while in the sub-basal and equatorial slices, negative angles of intrusion prevailed. The sampling of the primary eigenvectors was insensitive to postmortem decomposition of the tissue. In a previous histological study, we also documented the presence of large numbers of myocytes aggregated with their long axis intruding obliquely from the epicardial to the endocardial ventricular surfaces. We used magnetic resonance diffusion tensor imaging in this study to provide a comprehensive statistical analysis. Anat Rec, 290:1413–1423, 2007.
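
As a rough illustration, the intrusion angle of a single voxel can be derived from the primary eigenvector and a local epicardial surface normal as sketched below; estimating that normal and the paper's clockwise/anticlockwise sign convention are not reproduced.

import numpy as np

def intrusion_angle(primary_eigenvector, surface_normal):
    """Angle in degrees between the myocyte long axis (primary DT-MRI
    eigenvector) and the epicardial tangent plane."""
    v = primary_eigenvector / np.linalg.norm(primary_eigenvector)
    n = surface_normal / np.linalg.norm(surface_normal)
    return np.degrees(np.arcsin(np.clip(np.dot(v, n), -1.0, 1.0)))

# toy example: a fibre tilted 30 degrees out of the tangent plane
v = np.array([np.cos(np.radians(30)), 0.0, np.sin(np.radians(30))])
print(round(intrusion_angle(v, np.array([0.0, 0.0, 1.0])), 1))  # 30.0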


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Recognition of Traffic Lights in Live Video Streams on Mobile Devices

Jan Roters; Xiaoyi Jiang; Kai Rothaus

A mobile computer vision system is presented that helps visually impaired pedestrians cross roads. The system detects pedestrian lights in the environment and gives feedback about the current phase of the relevant light. For this purpose, the live video stream of a mobile phone is analyzed in four steps: localization, classification, video analysis, and time-based verification. In particular, the temporal analysis allows us to alleviate inherent problems such as occlusions (by vehicles), falsified colors, and others, and to further increase the decision certainty over a period of time. Due to the limited resources of mobile devices, very efficient and precise algorithms have to be developed to ensure the reliability and interactivity of the system. A prototype system was implemented on a Nokia N95 mobile phone and tested in a real environment. It was trained to detect German traffic lights. For the prototype training and testing, we generated image and video databases including manually specified ground truth meta-data. The databases described in this paper are publicly available for the research community. Quantitative performance analysis is provided to demonstrate the reliability and interactivity of the prototype system.
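
Of the four steps, the time-based verification lends itself to a compact sketch; the windowed majority vote below is a hypothetical stand-in for the system's actual verification logic, and the window size and confidence threshold are made-up parameters.

from collections import Counter, deque

class PhaseVerifier:
    """Report a traffic-light phase only when it dominates the most
    recent frame classifications, increasing certainty over time."""

    def __init__(self, window=15, min_ratio=0.8):
        self.history = deque(maxlen=window)
        self.min_ratio = min_ratio

    def update(self, frame_phase):
        """frame_phase: per-frame classifier output, e.g. 'red', 'green',
        or None when no light was found (e.g. occluded by a vehicle)."""
        self.history.append(frame_phase)
        votes = Counter(p for p in self.history if p is not None)
        if not votes:
            return None
        phase, count = votes.most_common(1)[0]
        if count / self.history.maxlen >= self.min_ratio:
            return phase
        return None  # not confident enough yet

verifier = PhaseVerifier()
for detection in ['red'] * 12 + [None, 'red', 'red']:
    decision = verifier.update(detection)
print(decision)  # 'red'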


International Conference on Cyberworlds | 2009

Enhancing Presence in Head-Mounted Display Environments by Visual Body Feedback Using Head-Mounted Cameras

Gerd Bruder; Frank Steinicke; Kai Rothaus; Klaus H. Hinrichs

A fully-articulated visual representation of a user in an immersive virtual environment (IVE) can enhance the user's subjective sense of feeling present in the virtual world. Usually this requires the user to wear a full-body motion capture suit to track real-world body movements and to map them to a virtual body model. In this paper we present an augmented virtuality approach that allows us to incorporate a realistic view of oneself in virtual environments using cameras attached to head-mounted displays. The described system can easily be integrated into typical virtual reality setups. Egocentric camera images captured by a video-see-through system are segmented in real time into foreground, showing parts of the user's body, e.g., her hands or feet, and background. The segmented foreground is then displayed as an inset in the user's current view of the virtual world. Thus the user is able to see her physical body in an arbitrary virtual world, including individual characteristics such as skin pigmentation and hairiness.
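
Assuming the real-time segmentation step has already produced a binary body mask, the inset compositing itself reduces to a masked copy, as in this NumPy sketch (the pre-aligned images and variable names are assumptions):

import numpy as np

def composite_body_inset(virtual_view, camera_frame, body_mask):
    """Overlay the segmented user body onto the rendered virtual view.

    virtual_view, camera_frame: HxWx3 arrays from the same viewpoint
    (the video-see-through HMD aligns both images).
    body_mask: HxW boolean array, True where the camera frame shows
    parts of the user's body (hands, feet, ...).
    """
    out = virtual_view.copy()
    out[body_mask] = camera_frame[body_mask]
    return out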


Journal of Applied Physiology | 2009

Three-dimensional alignment of the aggregated myocytes in the normal and hypertrophic murine heart

Boris Schmitt; Katsiaryna Fedarava; Jan Falkenberg; Kai Rothaus; Narendra Kuber Bodhey; Carolin Reischauer; Sebastian Kozerke; Bernhard Schnackenburg; Dirk Westermann; Paul P. Lunkenheimer; Robert H. Anderson; Felix Berger; Titus Kuehne

Several observations suggest that the transmission of myocardial forces is influenced in part by the spatial arrangement of the myocytes aggregated together within the ventricular mass. Our aim was to assess, using diffusion tensor magnetic resonance imaging (DT-MRI), any differences in the three-dimensional arrangement of these myocytes in the normal heart compared with the hypertrophic murine myocardium. We induced ventricular hypertrophy in seven mice by infusion of angiotensin II through a subcutaneous pump, with seven other mice serving as controls. DT-MRI of explanted hearts was performed at 3.0 Tesla. We used the primary eigenvector in each voxel to determine the three-dimensional orientation of aggregated myocytes with respect to their helical angles and their transmural courses (intruding angles). Compared with controls, the hypertrophic hearts showed significant increases in myocardial mass and the outer radius of the left ventricular chamber (P < 0.05). In both groups, a significant change was noted from positive intruding angles at the base to negative angles at the ventricular apex (P < 0.01). Compared with controls, the hypertrophied hearts had significantly larger intruding angles of the aggregated myocytes, notably in the apical and basal slices (P < 0.001). In both groups, the helical angles were greatest in midventricular sections, albeit with significantly smaller angles in the mice with hypertrophied myocardium (P < 0.01). The use of DT-MRI revealed significant differences in helix and intruding angles of the myocytes in the mice with hypertrophied myocardium.
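
The reported group comparisons (seven control versus seven hypertrophic hearts) correspond to standard two-sample tests once a mean angle per animal is available; the sketch below uses randomly generated stand-in values, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# stand-in data only: one mean intruding angle (degrees) per animal
controls     = rng.normal(loc=12.0, scale=1.5, size=7)
hypertrophic = rng.normal(loc=18.0, scale=1.5, size=7)

t, p = stats.ttest_ind(controls, hypertrophic)
print(f'difference in mean intruding angle: t = {t:.2f}, p = {p:.4f}')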


GbRPR'07: Proceedings of the 6th IAPR-TC-15 International Conference on Graph-Based Representations in Pattern Recognition | 2007

Separation of the retinal vascular graph in arteries and veins

Kai Rothaus; Paul Rhiem; Xiaoyi Jiang

The vascular structure of the retina consists of two kinds of vessels: arteries and veins. Together these vessels form the vascular graph. In this paper we present an approach to separating arteries and veins based on a pre-segmentation and a few hand-labelled vessel segments. We use a rule-based method to propagate the vessel labels through the vascular graph. We embed this task as a double-layered constrained search problem steered by a heuristic AC-3 algorithm to overcome the NP-hard computational complexity. Results are presented on vascular graphs generated from manual as well as automatic segmentations.


Symposium on 3D User Interfaces | 2009

Poster: A virtual body for augmented virtuality by chroma-keying of egocentric videos

Frank Steinicke; Gerd Bruder; Kai Rothaus; Klaus H. Hinrichs

A fully-articulated visual representation of oneself in an immersive virtual environment has considerable impact on the subjective sense of presence in the virtual world. Therefore, many approaches address this challenge and incorporate a virtual model of the user's body in the VE. Such a “virtual body” (VB) is manipulated according to user motions, which are defined by feature points detected by a tracking system. The required tracking devices are unsuitable in scenarios which involve multiple persons simultaneously or in which participants frequently change. Furthermore, individual characteristics such as skin pigmentation, hairiness or clothes are not considered by this procedure. In this paper we present a software-based approach that allows us to incorporate a realistic visual representation of oneself in the VE. The idea is to make use of images captured by cameras that are attached to video-see-through head-mounted displays. These egocentric frames can be segmented into foreground, showing parts of the human body, and background. Then the extremities can be overlaid onto the user's current view of the virtual world, and thus a high-fidelity virtual body can be visualized.
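
A minimal chroma-keying step might look like the following OpenCV sketch; the green key colour, the HSV thresholds and the morphological clean-up are assumptions rather than the poster's actual parameters.

import cv2
import numpy as np

def body_mask_by_chroma_key(frame_bgr, lower_hsv=(40, 60, 60), upper_hsv=(80, 255, 255)):
    """Segment the user's body from an egocentric frame by chroma-keying.

    Pixels matching the (assumed green) key colour are background;
    everything else is treated as part of the user's body.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv,
                             np.array(lower_hsv, dtype=np.uint8),
                             np.array(upper_hsv, dtype=np.uint8))
    body = cv2.bitwise_not(background)
    # remove small speckles so the overlaid extremities have clean borders
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(body, cv2.MORPH_OPEN, kernel) > 0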


Pattern Recognition | 2011

Interactive segmentation of non-star-shaped contours by dynamic programming

Xiaoyi Jiang; Andree Große; Kai Rothaus

In this paper we present the Rack algorithm for the detection of optimal non-star-shaped contours in images. It is based on the combination of a user-driven image transformation and dynamic programming. The fundamental idea is to interactively specify and edit the general shape of the desired object by using a rack. This rack is used to model the image as a directed acyclic weighted graph that contains a path corresponding to the expected contour. In this graph, the shortest path with respect to an adequate cost function can be calculated efficiently via dynamic programming. The experimental results indicate the algorithm's ability to combine an acceptable amount of user interaction with generally good segmentation results.
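
The dynamic-programming core can be sketched independently of the rack transform: given a cost matrix with one column per rack rung, the cheapest position sequence is found by a forward pass plus backtracking. The matrix layout and step constraint below are illustrative assumptions.

import numpy as np

def cheapest_path(cost, max_step=1):
    """Shortest path through a cost matrix by dynamic programming.

    cost[i, j] is the cost of placing the contour at position i on rack
    rung j; the path visits one position per rung and moves at most
    max_step positions between neighbouring rungs.
    """
    n, m = cost.shape
    total = cost.astype(float).copy()
    back = np.zeros((n, m), dtype=int)
    for j in range(1, m):
        for i in range(n):
            lo, hi = max(0, i - max_step), min(n, i + max_step + 1)
            k = lo + int(np.argmin(total[lo:hi, j - 1]))
            total[i, j] += total[k, j - 1]
            back[i, j] = k
    # backtrack from the cheapest end position
    path = [int(np.argmin(total[:, -1]))]
    for j in range(m - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1]

print(cheapest_path(np.array([[1, 9, 1],
                              [9, 1, 9],
                              [5, 5, 5]])))  # [0, 1, 0]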


International Conference on Pattern Recognition | 2008

Synthesizing 3D videos by a motion-conditioned background mosaic

Swenja Rothaus; Kai Rothaus; Xiaoyi Jiang

In this work we present an approach to generating depth image sequences for standard videos that satisfy a proposed motion model. We exploit the fact that the background of a video scene is relatively static in contrast to the moving objects in the foreground. The camera motion is eliminated automatically by robust methods in order to extract a background mosaic. By means of this mosaic, the moving objects are extracted and their depth is assigned by the user. We then apply the depth-image-based rendering (DIBR) approach to the depth information to render 3D videos. The results demonstrate the practicability of our approach and highlight the advantages of the proposed motion model over previous depth-tracking methods.
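
A simplified version of the background-mosaic step could look like the OpenCV sketch below; ORB matching with RANSAC homographies and a per-pixel median are assumed stand-ins for the robust methods mentioned above, and the mosaic canvas is not enlarged beyond the first frame.

import cv2
import numpy as np

def homography_to_reference(frame, reference):
    # Estimate the homography registering `frame` onto `reference`
    # (assumed choice: ORB features + RANSAC).
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(frame, None)
    k2, d2 = orb.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def background_mosaic(frames):
    # Register every grayscale frame to the first one; the per-pixel median
    # over the registered frames suppresses the moving foreground objects.
    reference = frames[0]
    registered = [reference]
    for frame in frames[1:]:
        H = homography_to_reference(frame, reference)
        size = (reference.shape[1], reference.shape[0])
        registered.append(cv2.warpPerspective(frame, H, size))
    return np.median(np.stack(registered), axis=0).astype(np.uint8)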

Collaboration


Dive into Kai Rothaus's collaborations.

Top Co-Authors

Gerd Bruder (University of Würzburg)

Jan Roters (University of Münster)

Jan de Buhr (University of Münster)