P. Laurie Davies
University of Duisburg-Essen
Publication
Featured research published by P. Laurie Davies.
Annals of Statistics | 2005
P. Laurie Davies; Ursula Gather
The concept of breakdown point was introduced by Hampel [Ph.D. dissertation (1968), Univ. California, Berkeley; Ann. Math. Statist. 42 (1971) 1887-1896] and developed further by, among others, Huber [Robust Statistics (1981). Wiley, New York] and Donoho and Huber [In A Festschrift for Erich L. Lehmann (1983) 157-184. Wadsworth, Belmont, CA]. It has proved most successful in the context of location, scale and regression problems. Attempts to extend the concept to other situations have not met with general acceptance.
In this paper we argue that this is connected to the fact that in the location, scale and regression problems the translation and affine groups give rise to a definition of equivariance for statistical functionals. Comparisons in terms of breakdown points seem only useful when restricted to equivariant functionals and even here the connection between breakdown and equivariance is a tenuous one.
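For orientation, the quantity under discussion can be written down explicitly. The display below is the standard finite-sample (replacement) form of the breakdown point in the Donoho-Huber style; the notation is chosen for this note and is not quoted from the paper.

```latex
% Finite-sample (replacement) breakdown point of a functional T at a sample
% x = (x_1, ..., x_n); y ranges over all samples obtained from x by
% replacing m of the n original points (Donoho--Huber style, our notation).
\varepsilon^{*}(T, x) \;=\; \min\left\{ \frac{m}{n} \;:\;
    \sup_{y} \,\lVert T(y) - T(x) \rVert = \infty \right\}
```

For translation equivariant location functionals on the real line this quantity is bounded above by ⌊(n+1)/2⌋/n, which tends to 1/2 as n grows; that limit is the nontrivial upper bound the abstract refers to.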
Annals of Statistics | 2006
P. Laurie Davies; Ursula Gather
In his discussion of Davies and Gather [Ann. Statist. 33 (2005) 977–1035] Tyler pointed out that the theory developed there could not be applied to the case of directional data. He related the breakdown of directional functionals to the problem of definability. In this addendum we provide a concept of breakdown defined in terms of definability and not in terms of bias. If a group of finite order k acts on the sample space we show that the breakdown point can be bounded above by (k-1)/k. In the case of directional data there is a group of order k=2 giving an upper bound of 1/2.
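To make the definability idea concrete, here is a small numerical sketch (our own illustration, not the authors' construction): the circular mean direction of a sample of angles is undefined exactly when the resultant vector vanishes, and replacing half the sample with antipodal points is enough to force this, matching the (k-1)/k = 1/2 bound for the antipodal group of order k = 2.

```python
import numpy as np

# Toy illustration of breakdown through loss of definability (our own
# sketch, not the authors' construction). The circular mean direction of
# angles theta_1, ..., theta_n is the argument of the resultant sum of
# the unit vectors (cos theta_i, sin theta_i); it is undefined when the
# resultant vanishes.

def circular_mean(theta):
    """Mean direction in radians, or None when the resultant vanishes."""
    resultant = np.array([np.cos(theta).sum(), np.sin(theta).sum()])
    if np.linalg.norm(resultant) < 1e-12:  # numerical stand-in for exact zero
        return None                        # mean direction not defined
    return np.arctan2(resultant[1], resultant[0])

rng = np.random.default_rng(0)
theta = rng.normal(loc=0.0, scale=0.1, size=10)  # tightly clustered angles
print(circular_mean(theta))                      # close to 0

# Replace half the sample by the antipodes of the untouched half: the two
# halves cancel and the mean direction ceases to exist, matching the
# (k - 1)/k = 1/2 bound for the antipodal group of order k = 2.
theta_bad = theta.copy()
theta_bad[:5] = theta[5:] + np.pi
print(circular_mean(theta_bad))                  # None (undefined)
```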
Econometric Theory | 2003
P. Laurie Davies; Walter Krämer
We derive the probability limit of the standard Dickey-Fuller test in the context of an exponential random walk. This result might be useful in interpreting tests for unit roots when the test is inadvertently applied to the levels of the data although the true random walk is in the logs.
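A minimal simulation of the setting, assuming the statsmodels implementation of the augmented Dickey-Fuller test (this is our own toy sketch; the paper derives the probability limit analytically):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller  # standard ADF implementation

# The true process is a random walk in the logs, but the unit root test
# is (mis)applied to the observed levels exp(w_t).
rng = np.random.default_rng(1)
w = np.cumsum(rng.normal(scale=0.05, size=2000))  # random walk in the logs
levels = np.exp(w)                                # what the analyst observes

stat_logs, p_logs = adfuller(np.log(levels))[:2]  # correctly specified test
stat_lvls, p_lvls = adfuller(levels)[:2]          # test applied to the levels

print(f"ADF on logs:   stat={stat_logs:.2f}, p={p_logs:.3f}")
print(f"ADF on levels: stat={stat_lvls:.2f}, p={p_lvls:.3f}")
```

How the levels statistic behaves in the limit is exactly the question the paper answers analytically; the snippet only sets up the comparison.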
Technical reports | 2007
P. Laurie Davies; Ursula Gather; Daniel J. Nordman; Henrike Weinert
Even for a well-trained statistician the construction of a histogram for a given real-valued data set is a difficult problem. It is even more difficult to construct a fully automatic procedure which specifies the number and widths of the bins in a satisfactory manner for a wide range of data sets. In this paper we compare several histogram construction methods by means of a simulation study. The study includes plug-in methods, cross-validation, penalized maximum likelihood and the taut string procedure. Their performance on different test beds is measured by the Hellinger distance and the ability to identify the modes of the underlying density.
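As a scaled-down illustration of this kind of comparison (our own sketch: numpy's built-in bin-width rules stand in for the plug-in methods, and the taut string procedure is not reproduced), one can score histogram estimates against a known density by the Hellinger distance:

```python
import numpy as np

def hellinger(sample, true_pdf, rule, grid):
    """Hellinger distance between a histogram estimate and a true density."""
    counts, edges = np.histogram(sample, bins=rule, density=True)
    # Evaluate the piecewise-constant histogram density on the grid.
    idx = np.clip(np.searchsorted(edges, grid) - 1, 0, len(counts) - 1)
    est = np.where((grid >= edges[0]) & (grid <= edges[-1]), counts[idx], 0.0)
    dx = grid[1] - grid[0]
    # H(f, g) = sqrt(1 - integral of sqrt(f * g)); clamp for rounding error.
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(est * true_pdf(grid))) * dx))

rng = np.random.default_rng(2)
sample = rng.normal(size=500)
pdf = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal
grid = np.linspace(-5, 5, 4001)

for rule in ["sturges", "scott", "fd"]:   # numpy's automatic bin-width rules
    print(rule, round(hellinger(sample, pdf, rule, grid), 4))
```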
Technical reports | 2004
P. Laurie Davies; Ursula Gather
The notion of breakdown point was introduced by Hampel (1968, 1971) and has since played an important role in the theory and practice of robust statistics. In Davies and Gather (2004) it was argued that the success of the concept is connected to the existence of a group of transformations on the sample space and the linking of breakdown and equivariance. For example, the highest breakdown point of any translation equivariant functional on the real line is 1/2, whereas without equivariance considerations the highest breakdown point is the trivial upper bound of 1.
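A quick empirical check of the 1/2 figure (our own toy example, not from the report): contaminate a growing fraction of a sample with a distant outlier and record when the mean and the median first become arbitrary. The mean moves immediately, while the median, which attains the bound among translation equivariant functionals, holds out until half the points are replaced.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100)            # n = 100 clean observations

for m in (1, 25, 49, 50, 60):
    y = x.copy()
    y[:m] = 1e12                    # replace m points by a distant outlier
    print(f"m={m:3d}: mean={np.mean(y):.3e}  median={np.median(y):.3e}")

# The mean is already ruined at m = 1; the median stays near 0 through
# m = 49 and only explodes once m reaches n/2 = 50.
```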
Technical reports | 2002
P. Laurie Davies
It is argued that a main aim of statistics is to produce statistical procedures which in this article are defined as algorithms with inputs and outputs. The structure and properties of such procedures are investigated with special reference to topological and testing considerations. Procedures which work well in a large variety of situations are often based on robust statistical functionals. In the final section some aspects of robust statistics are discussed, again with special reference to topology and continuity.
arXiv: Statistics Theory | 2005
P. Laurie Davies; Ursula Gather
The concept of breakdown point was introduced by Hampel [Ph.D. dissertation (1968), Univ. California, Berkeley; Ann. Math. Statist. 42 (1971) 1887-1896] and developed further by, among others, Huber [Robust Statistics (1981). Wiley, New York] and Donoho and Huber [In A Festschrift for Erich L. Lehmann (1983) 157-184. Wadsworth, Belmont, CA]. It has proved most successful in the context of location, scale and regression problems. Attempts to extend the concept to other situations have not met with general acceptance.
In this paper we argue that this is connected to the fact that in the location, scale and regression problems the translation and affine groups give rise to a definition of equivariance for statistical functionals. Comparisons in terms of breakdown points seem only useful when restricted to equivariant functionals and even here the connection between breakdown and equivariance is a tenuous one.
Technical reports | 2005
P. Laurie Davies; Winfried Theis; Claus Weihs
Two models are proposed to roughly approximate the observed behavior of the amplitude of the drilling torque in the BTA deep-hole drilling process. It is shown that these models are closely connected.
Technical reports | 1999
P. Laurie Davies; Arne Kovac
The paper considers the problem of non-parametric regression with emphasis on controlling the number of local extrema. Two methods, the run method and the taut string-wavelet method, are introduced and analysed on standard test beds. It is shown that the number and location of local extreme values are consistently estimated. Rates of convergence are proved for both methods. The run method has a slow rate but can withstand blocks as well as a high proportion of isolated outliers. The rate of convergence of the taut string-wavelet method is almost optimal and the method is extremely sensitive, being able to detect very low-power peaks. Section 1 contains a short introduction with special reference to modality. The run method is described in Section 2 and the taut string-wavelet method in Section 3. Low-power peaks are considered in Section 4. Section 5 contains a short conclusion and the proofs are given in Section 6.
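The following sketch conveys the flavour of the modality question, assuming the PyWavelets package is available; it uses generic Donoho-Johnstone hard thresholding rather than the paper's run or taut string-wavelet methods, and simply counts the local extrema of the raw and denoised signals.

```python
import numpy as np
import pywt  # PyWavelets; generic thresholding only, not the paper's method

def count_local_extrema(f):
    """Number of strict sign changes in the first differences of f."""
    d = np.sign(np.diff(f))
    d = d[d != 0]                         # drop flat stretches
    return int(np.sum(d[1:] != d[:-1]))

# Noisy one-peak test signal; the statistical question above is whether a
# procedure reports one mode or many spurious ones.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024)
y = np.exp(-((t - 0.5) ** 2) / 0.005) + rng.normal(scale=0.2, size=t.size)

# Hard thresholding with the universal threshold (Donoho-Johnstone style),
# a stand-in here for the wavelet half of the taut string-wavelet hybrid.
coeffs = pywt.wavedec(y, "db4")
sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise scale, finest level
lam = sigma * np.sqrt(2 * np.log(y.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, lam, mode="hard")
                        for c in coeffs[1:]]
fhat = pywt.waverec(coeffs, "db4")

print("extrema in raw data:    ", count_local_extrema(y))
print("extrema after denoising:", count_local_extrema(fhat))
```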
Annals of Statistics | 1992
P. Laurie Davies