
Publication


Featured research published by Ursula Gather.


Journal of the American Statistical Association | 1993

The identification of multiple outliers

Laurie Davies; Ursula Gather

One approach to identifying outliers is to assume that the outliers have a different distribution from the remaining observations. In this article we define outliers in terms of their position relative to the model for the good observations. The outlier identification problem is then the problem of identifying those observations that lie in a so-called outlier region. Methods based on robust statistics and outward testing are shown to have the highest possible breakdown points in a sense derived from Donoho and Huber. But a more detailed analysis shows that methods based on robust statistics perform better with respect to worst-case behavior. A concrete outlier identifier based on a suggestion of Hampel is given.
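The kind of rule the abstract describes can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' exact identifier: the "good" data are summarized by the median and the MAD, and any observation lying more than c robust standard deviations from the median is declared to be in the outlier region. The function name and the cutoff c = 3 are illustrative choices.

```python
import statistics

def robust_outlier_flags(xs, c=3.0):
    """Flag observations outside a robust outlier region.

    A minimal sketch (not the paper's exact rule): points further than
    c robust standard deviations from the median are flagged. The
    constant 1.4826 makes the MAD consistent for the standard
    deviation under normality.
    """
    med = statistics.median(xs)
    mad = 1.4826 * statistics.median(abs(x - med) for x in xs)
    return [abs(x - med) > c * mad for x in xs]

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 55.0]  # one gross outlier
flags = robust_outlier_flags(data)               # only 55.0 is flagged
```

Because both the median and the MAD have breakdown point near 1/2, a single gross outlier cannot inflate the cutoff, which is the property the paper formalizes.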


Critical Care Medicine | 2010

Intensive care unit alarms--how many do we need?

Sylvia Siebig; Silvia Kuhls; Michael Imhoff; Ursula Gather; Jürgen Schölmerich; Christian E. Wrede

Objective: To validate cardiovascular alarms in critically ill patients in an experimental setting by generating a database of physiologic data and clinical alarm annotations, and report the current rate of alarms and their clinical validity. Currently, monitoring of physiologic parameters in critically ill patients is performed by alarm systems with high sensitivity, but low specificity. As a consequence, a multitude of alarms with potentially negative impact on the quality of care is generated. Design: Prospective, observational, clinical study. Setting: Medical intensive care unit of a university hospital. Data Source: Data from different medical intensive care unit patients were collected between January 2006 and May 2007. Measurements and Main Results: Physiologic data at 1-sec intervals, monitor alarms, and alarm settings were extracted from the surveillance network. Video recordings were annotated with respect to alarm relevance and technical validity by an experienced physician. During 982 hrs of observation, 5934 alarms were annotated, corresponding to six alarms per hour. About 40% of all alarms did not correctly describe the patient condition and were classified as technically false; 68% of those were caused by manipulation. Only 885 (15%) of all alarms were considered clinically relevant. Most of the generated alarms were threshold alarms (70%) and were related to arterial blood pressure (45%). Conclusion: This study used a new approach of off-line, video-based physician annotations, showing that even with modern monitoring systems most alarms are not clinically relevant. As the majority of alarms are simple threshold alarms, statistical methods may be suitable to help reduce the number of false-positive alarms. Our study is also intended to develop a reference database of annotated monitoring alarms for further application to alarm algorithm research.
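The conclusion above suggests that simple statistical preprocessing could suppress false threshold alarms. The sketch below is a hypothetical illustration of that point (the function names and data are invented, not from the study): a single-sample measurement artifact triggers a naive threshold alarm but is removed by a short running median applied before the same threshold rule.

```python
import statistics

def threshold_alarms(series, limit):
    """Raise an alarm whenever a sample crosses the limit."""
    return [x > limit for x in series]

def filtered_alarms(series, limit, width=5):
    """Apply the same threshold rule to a running median.

    A short artifact no longer crosses the limit once it has been
    median-filtered away; sustained crossings still alarm.
    """
    half = width // 2
    smoothed = [
        statistics.median(series[max(0, i - half): i + half + 1])
        for i in range(len(series))
    ]
    return [x > limit for x in smoothed]

hr = [80, 82, 81, 180, 80, 83, 81, 82]  # single-sample artifact at 180
raw = threshold_alarms(hr, limit=150)   # fires on the artifact
filt = filtered_alarms(hr, limit=150)   # artifact is filtered out
```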


Neuropsychologia | 2002

Functional cerebral asymmetries during the menstrual cycle: a cross-sectional and longitudinal analysis

Markus Hausmann; Claudia Becker; Ursula Gather; Onur Güntürkün

This study aims at answering two basic questions regarding the mechanisms with which hormones modulate functional cerebral asymmetries. Which steroids or gonadotropins fluctuating during the menstrual cycle affect perceptual asymmetries? Can these effects be demonstrated in a cross-sectional (follicular and midluteal cycle phases analyzed) and a longitudinal design, in which the continuous hormone and asymmetry fluctuations were measured over a time course of 6 weeks? To answer these questions, 12 spontaneously cycling right-handed women participated in an experiment in which their levels of progesterone, estradiol, testosterone, LH, and FSH were assessed every 3 days by blood-sample-based radioimmunoassays (RIAs). At the same points in time their asymmetries were analyzed with visual half-field (VHF) techniques using a lexical decision, a figure recognition, and a face discrimination task. Both cross-sectional and longitudinal analyses showed that an increase of progesterone is related to a reduction in asymmetries in a figure recognition task by increasing the performance of the left hemisphere, which is less specialized for this task. Cross-sectionally, estradiol was shown to have significant relationships to the accuracy and the response speed of both hemispheres. However, since these effects were in the same direction, asymmetry was not affected. This was not the case in the longitudinal design, where estradiol affected the asymmetry in the lexical decision and the figural comparison task. Overall, these data show that hormonal fluctuations within the menstrual cycle have important impacts on functional cerebral asymmetries. The effect of progesterone was highly reliable and could be shown in both analysis schemes. By contrast, estradiol mainly, but not exclusively, affected both hemispheres in the same direction.


Journal of the American Statistical Association | 1999

The Masking Breakdown Point of Multivariate Outlier Identification Rules

Claudia Becker; Ursula Gather

In this article, we consider simultaneous outlier identification rules for multivariate data, generalizing the concept of so-called α outlier identifiers, as presented by Davies and Gather for the case of univariate samples. Such multivariate outlier identifiers are based on estimators of location and covariance. Therefore, it seems reasonable that characteristics of the estimators influence the behavior of outlier identifiers. Several authors mentioned that using estimators with low finite-sample breakdown point is not recommended for identifying outliers. To give a formal explanation, we investigate how the finite-sample breakdown points of estimators used in these identification rules influence the masking behavior of the rules.
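The masking mechanism the article studies can be demonstrated numerically. The sketch below is only an illustration under assumed settings: it plugs classical mean/covariance estimates, and then a crude robust substitute (coordinatewise median and a diagonal MAD² matrix, a hypothetical stand-in for the high-breakdown MVE/MCD estimators the paper analyzes), into a Mahalanobis-distance identification rule. A cluster of outliers inflates the classical covariance and masks itself, while the robust plug-in still flags it.

```python
import numpy as np

def mahalanobis_outliers(X, loc, cov, cutoff):
    """Flag rows whose squared Mahalanobis distance exceeds the cutoff."""
    diff = X - loc
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return d2 > cutoff

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
X[:20] += 10.0  # a cluster of outliers in the same direction

# cutoff 13.8 is roughly the 0.999 quantile of chi-square with 2 df.
# Classical plug-ins: the outlier cluster inflates the covariance,
# masking the outliers themselves.
classical = mahalanobis_outliers(X, X.mean(0), np.cov(X.T), cutoff=13.8)

# Crude robust plug-ins (median, diagonal MAD^2) -- a hypothetical
# stand-in for high-breakdown estimators, just to show the mechanism.
med = np.median(X, 0)
mad = 1.4826 * np.median(np.abs(X - med), 0)
robust = mahalanobis_outliers(X, med, np.diag(mad ** 2), cutoff=13.8)
```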


Annals of Statistics | 2005

Breakdown and groups

P. Laurie Davies; Ursula Gather

The concept of breakdown point was introduced by Hodges (1967) and Hampel (1968, 1971) and still plays an important though at times controversial role in robust statistics. It has proved most successful in the context of location, scale and regression problems. In this paper we argue that this success is intimately connected to the fact that the translation and affine groups act on the sample space and give rise to a definition of equivariance for statistical functionals. For such functionals a nontrivial upper bound for the breakdown point can be shown. In the absence of such a group structure a breakdown point of one is attainable, and this is perhaps the decisive reason why the concept of breakdown point in other situations has not proved as successful. Even if a natural group is present, it is often not sufficiently large to allow a nontrivial upper bound for the breakdown point. One exception is the problem of the autocorrelation structure of time series, where we derive a nontrivial upper breakdown point using the group of realizable linear filters. The paper is formulated in an abstract manner to emphasize the role of the group and the resulting equivariance structure.
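The contrast between a breakdown point of 0 and the nontrivial upper bound for equivariant location functionals can be seen in a two-line experiment. This is a standard textbook demonstration, not the paper's own example: the mean is driven arbitrarily far by a single corrupted observation, while the median stays within the range of the good data even when just under half the sample is corrupted.

```python
import statistics

def contaminate(xs, k, value=1e9):
    """Replace k observations by an arbitrarily bad value."""
    return [value] * k + xs[k:]

clean = list(range(1, 22))      # 21 well-behaved observations
bad = contaminate(clean, 10)    # corrupt just under half of them

# The mean breaks down under a single bad point; the median, an
# equivariant location functional, withstands contamination up to
# (roughly) half the sample -- the nontrivial upper bound.
mean_clean, mean_bad = statistics.mean(clean), statistics.mean(bad)
med_clean, med_bad = statistics.median(clean), statistics.median(bad)
```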


Artificial Intelligence in Medicine | 2000

Knowledge Discovery and Knowledge Validation in Intensive Care

Katharina Morik; Michael Imhoff; Peter Brockhausen; Ursula Gather

Operational protocols are a valuable means for quality control. However, developing operational protocols is a highly complex and costly task. We present an integrated approach, involving both intelligent data analysis and knowledge acquisition from experts, that supports the development of operational protocols. The aim is to ensure high quality standards for the protocol through empirical validation during development, as well as lower development cost through the use of machine learning and statistical techniques. We demonstrate our approach of integrating expert knowledge with data-driven techniques based on our effort to develop an operational protocol for the hemodynamic system.


Intensive Care Medicine | 1998

Statistical pattern detection in univariate time series of intensive care on-line monitoring data

Michael Imhoff; Marcus Bauer; Ursula Gather; Dietrich Löhlein

Objectives: To determine how different mathematical time series approaches can be implemented for the detection of qualitative patterns in physiologic monitoring data, and which of these approaches could be suitable as a basis for future bedside time series analysis. Design: Off-line time series analysis. Setting: Surgical intensive care unit of a teaching hospital. Patients: 19 patients requiring hemodynamic monitoring with a pulmonary artery catheter. Interventions: None. Measurements and results: Hemodynamic data were acquired in 1-min intervals from a clinical information system and exported into statistical software for further analysis. Altogether, 134 time series for heart rate, mean arterial pressure, and mean pulmonary artery pressure were visually classified by a senior intensivist into five patterns: no change, outlier, temporary level change, permanent level change, and trend. The same series were analyzed with low-order autoregressive (AR) models and with phase space (PS) models. The resulting classifications from both models were compared to the initial classification. Outliers and level changes were detected in most instances with both methods. Trend detection could only be done indirectly. Both methods were more sensitive to pattern changes than was clinically relevant; especially for outlier detection, the 95% confidence intervals were too narrow. AR models require direct user interaction, whereas PS models offer opportunities for fully automated time series analysis in this context. Conclusion: Statistical patterns in univariate intensive care time series can reliably be detected with AR models and with PS models. For most bedside problems both methods are too sensitive. AR models are highly interactive, and both methods require that users have an explicit knowledge of statistics. While AR models and PS models can be extremely useful in scientific off-line analysis, routine bedside clinical use cannot yet be recommended.
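A minimal version of the AR-based idea can be sketched as follows. This is an assumed illustration, not the study's actual models: fit an AR(1) model by least squares and flag observations whose one-step prediction error exceeds a multiple of the residual standard deviation, so that a single spike in an otherwise stable vital sign is classified as an outlier pattern.

```python
import numpy as np

def ar1_outlier_flags(y, c=3.0):
    """Flag points with a large one-step AR(1) prediction error.

    A minimal sketch: fit y[t] = a + b*y[t-1] + e[t] by least squares
    and flag observations whose residual exceeds c residual standard
    deviations. The first point has no prediction and is never flagged.
    """
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ coef
    flags = np.abs(resid) > c * resid.std()
    return np.concatenate([[False], flags])

hr = [80.0] * 50      # a stable heart-rate segment (synthetic)
hr[25] = 150.0        # single-sample spike
flags = ar1_outlier_flags(hr)  # only the spike is flagged
```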


Computational Statistics & Data Analysis | 2001

The largest nonidentifiable outlier: a comparison of multivariate simultaneous outlier identification rules

Claudia Becker; Ursula Gather

The aim of detecting outliers in a multivariate sample can be pursued in different ways. We investigate here the performance of several simultaneous multivariate outlier identification rules based on robust estimators of location and scale. It has been shown that the use of estimators with high finite-sample breakdown point in such procedures yields good behaviour with respect to the prevention of breakdown by the masking effect (Becker, Gather 1999, J. Amer. Statist. Assoc. 94, 947-955). In this article, we investigate by simulation at which distance from the center of an underlying model distribution outliers can be placed before certain simultaneous identification rules detect them as outliers. We consider identification procedures based on the minimum volume ellipsoid, the minimum covariance determinant, and S-estimators.


Statistics and Computing | 2006

Modified repeated median filters

Thorsten Bernholt; Roland Fried; Ursula Gather; Ingo Wegener

We discuss moving window techniques for fast extraction of a signal composed of monotonic trends and abrupt shifts from a noisy time series with irrelevant spikes. Running medians remove spikes and preserve shifts, but they deteriorate in trend periods. Modified trimmed mean filters use a robust scale estimate such as the median absolute deviation about the median (MAD) to select an adaptive amount of trimming. Application of robust regression, particularly of the repeated median, has been suggested for improving upon the median in trend periods. We combine these ideas and construct modified filters based on the repeated median offering better shift preservation. All these filters are compared w.r.t. fundamental analytical properties and in basic data situations. An algorithm for the update of the MAD running in time O(log n) for window width n is presented as well.
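The core building block, a moving-window repeated median, can be sketched directly from its definition. This is an illustration of Siegel's repeated median inside a running window, not the authors' modified filters: the slope is a median of medians of pairwise slopes, which keeps a high breakdown point while, unlike a plain running median, tracking a monotonic trend without bias.

```python
import statistics

def repeated_median_filter(x, width=7):
    """Moving-window repeated median filter (a sketch of the idea).

    Within each window a line level + slope*t is fitted: the repeated
    median slope is med_i med_{j != i} (x[j] - x[i]) / (j - i), and
    the signal level at the window center is the median of the
    slope-corrected observations. Spikes are removed, trends are kept.
    """
    half = width // 2
    out = []
    for t in range(half, len(x) - half):
        ts = range(t - half, t + half + 1)
        slope = statistics.median(
            statistics.median((x[j] - x[i]) / (j - i) for j in ts if j != i)
            for i in ts
        )
        level = statistics.median(x[i] - slope * (i - t) for i in ts)
        out.append(level)
    return out

x = [2.0 * i for i in range(15)]  # linear trend
x[7] += 100.0                     # irrelevant spike
sig = repeated_median_filter(x)   # spike removed, trend preserved
```

On this example the filter reproduces the trend values 2t exactly at every window center, spike included; a plain running median would instead lag behind the trend at the window edges.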


Statistics | 2002

A note on outlier sensitivity of Sliced Inverse Regression

Ursula Gather; Torsten Hilker; Claudia Becker

Sliced Inverse Regression (SIR) is a promising technique for the purpose of dimension reduction. Several properties of this method have been examined already, but little attention has been paid to robustness aspects. In this article, we focus on the sensitivity of SIR to outliers and show in what sense and how severely SIR can be influenced by outliers in the data.
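For readers unfamiliar with the method, a textbook sketch of basic SIR follows (this is the standard algorithm, not the article's implementation, and all names are illustrative): standardize the predictors, slice the sorted response, average the standardized predictors within each slice, and take the leading eigenvectors of the weighted covariance of the slice means as estimated directions. Because slice means are ordinary averages, a single gross outlier enters them unbounded, which is the sensitivity the article examines.

```python
import numpy as np

def sir_direction(X, y, n_slices=5):
    """Leading SIR direction (basic textbook version).

    Whitens X, averages the whitened predictors within response
    slices, and returns the top eigenvector of the weighted covariance
    of the slice means, mapped back to the original scale.
    """
    n, p = X.shape
    evals, evecs = np.linalg.eigh(np.cov(X.T))
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T  # cov^{-1/2}
    Z = (X - X.mean(0)) @ inv_sqrt
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(0)
        M += (len(idx) / n) * np.outer(m, m)
    w, v = np.linalg.eigh(M)
    beta = inv_sqrt @ v[:, -1]          # back to original coordinates
    return beta / np.linalg.norm(beta)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X[:, 0] + 0.1 * rng.normal(size=500)  # true direction: first axis
beta = sir_direction(X, y)                # recovers (roughly) e1
```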

Collaboration


Dive into Ursula Gather's collaborations.

Top Co-Authors

Roland Fried, Technical University of Dortmund
Karen Schettlinger, Technical University of Dortmund
Sonja Kuhnt, Dortmund University of Applied Sciences and Arts
Marcus Bauer, Technical University of Dortmund
Silvia Kuhls, Technical University of Dortmund
Sylvia Siebig, University of Regensburg
Jörg Pawlitschko, Technical University of Dortmund
P. Laurie Davies, University of Duisburg-Essen