Publications


Featured research published by Ulrich K. Müller.


Econometrica | 2003

Tests for Unit Roots and the Initial Condition

Ulrich K. Müller; Graham Elliott

The paper analyzes the impact of the initial condition on the problem of testing for unit roots. To this end, we derive a family of optimal tests that maximize a weighted average power criterion with respect to the initial condition. We then investigate the relationship of this optimal family to popular tests. We find that many unit root tests are closely related to specific members of the optimal family, but the corresponding members employ very different weightings for the initial condition. The popular Dickey-Fuller tests, for instance, put a large weight on extreme deviations of the initial observation from the deterministic component, whereas other popular tests put more weight on moderate deviations. Since the power of unit root tests varies dramatically with the initial condition, this paper explains the results of comparative power studies of unit root tests. The results provide a much deeper understanding of the merits of particular tests in specific circumstances, and a guide to choosing which statistics to use in practice.


Journal of Business & Economic Statistics | 2010

T-Statistic Based Correlation and Heterogeneity Robust Inference

Rustam Ibragimov; Ulrich K. Müller

We develop a general approach to robust inference about a scalar parameter when the data are potentially heterogeneous and correlated in a largely unknown way. The key ingredient is the following result of Bakirov and Székely (2005) concerning the small sample properties of the standard t-test: for a significance level of 5% or lower, the t-test remains conservative for underlying observations that are independent and Gaussian with heterogeneous variances. One might thus conduct robust large sample inference as follows: partition the data into q ≥ 2 groups, estimate the model for each group, and conduct a standard t-test with the resulting q parameter estimators. This results in valid inference as long as the groups are chosen in a way that ensures that the parameter estimators are asymptotically independent, unbiased, and Gaussian with possibly different variances. We provide examples of how to apply this approach to time series, panel, clustered, and spatially correlated data.
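The grouped t-statistic recipe described in the abstract lends itself to a short sketch. Below is a minimal, hypothetical illustration for the simplest case in which the scalar parameter is a population mean; the group count q = 8, the simulated data, and all variable names are illustrative choices of this profile, not taken from the paper.

```python
import numpy as np
from scipy import stats

def group_t_test(data, q=8, mu0=0.0):
    """Partition the sample into q consecutive groups, estimate the
    parameter (here: the mean) within each group, and run a standard
    t-test with q - 1 degrees of freedom on the q group estimates."""
    groups = np.array_split(np.asarray(data, dtype=float), q)
    est = np.array([g.mean() for g in groups])      # q group-level estimators
    t = np.sqrt(q) * (est.mean() - mu0) / est.std(ddof=1)
    p = 2 * stats.t.sf(abs(t), df=q - 1)            # two-sided p-value
    return t, p

# Hypothetical sample with true mean 0.3; any within-group dependence
# is absorbed into the variability of the group-level estimates.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=400)
t, p = group_t_test(x, q=8)
```

The groups would in practice be chosen so that dependence is (mostly) within rather than across groups, e.g. consecutive time blocks for time series data.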


Journal of Econometrics | 2005

Size and Power of Tests for Stationarity in Highly Autocorrelated Time Series

Ulrich K. Müller

Tests for stationarity are routinely applied to highly persistent time series. Following Kwiatkowski, Phillips, Schmidt and Shin (1992), standard stationarity tests employ a rescaling by an estimator of the long-run variance of the (potentially) stationary series. This paper analytically investigates the size and power properties of such tests when the series are strongly autocorrelated, in a local-to-unity asymptotic framework. It is shown that the behavior of the tests strongly depends on the long-run variance estimator employed, but is in general highly undesirable: either the tests fail to control size even for strongly mean-reverting series, or they are inconsistent against an integrated process and discriminate only poorly between stationary and integrated processes compared to optimal statistics.
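The long-run-variance rescaling at issue can be sketched with a KPSS-type level-stationarity statistic. The Bartlett-kernel (Newey-West) estimator and the bandwidth rule below are common illustrative choices, not necessarily the estimators analyzed in the paper.

```python
import numpy as np

def kpss_stat(y, bandwidth=None):
    """KPSS-type statistic: scaled partial sums of the demeaned series,
    rescaled by a Bartlett-kernel long-run variance estimator."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()                    # residuals from a constant mean
    S = np.cumsum(e)                    # partial sum process
    if bandwidth is None:
        bandwidth = int(4 * (T / 100.0) ** 0.25)   # common rule of thumb
    # Newey-West long-run variance with Bartlett weights
    omega2 = e @ e / T
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)
        omega2 += 2.0 * w * (e[j:] @ e[:-j]) / T
    return (S @ S) / (T ** 2 * omega2)

rng = np.random.default_rng(1)
stat_white = kpss_stat(rng.normal(size=500))          # stationary series
stat_rw = kpss_stat(np.cumsum(rng.normal(size=500)))  # integrated series
```

The statistic is small for stationary input and large for an integrated one; the paper's point is that with a highly autocorrelated but stationary series, the behavior of omega2 can distort both size and power.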


Econometrica | 2006

Testing Models of Low-Frequency Variability

Ulrich K. Müller; Mark W. Watson

We develop a framework to assess how successfully standard time series models explain low-frequency variability of a data series. The low-frequency information is extracted by computing a finite number of weighted averages of the original data, where the weights are low-frequency trigonometric series. The properties of these weighted averages are then compared to the asymptotic implications of a number of common time series models. We apply the framework to twenty U.S. macroeconomic and financial time series using frequencies lower than the business cycle.
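A minimal sketch of extracting such low-frequency weighted averages, assuming cosine weight functions evaluated on a unit-interval grid; the number of averages q = 12 and the simulated input are illustrative assumptions of this profile.

```python
import numpy as np

def low_freq_projections(x, q=12):
    """Return q weighted averages of x, using cosine weights of
    frequency j*pi on [0, 1] for j = 1, ..., q, so that only
    variation of period longer than 2T/q is retained."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    s = (np.arange(T) + 0.5) / T        # mid-points of a grid on [0, 1]
    psi = np.sqrt(2.0) * np.cos(np.pi * np.outer(np.arange(1, q + 1), s))
    return psi @ x / T                  # vector of q weighted averages

rng = np.random.default_rng(2)
X = low_freq_projections(np.cumsum(rng.normal(size=600)), q=12)
```

The small, fixed number of averages X is then compared to the distribution a candidate time series model implies for them, which is the comparison the framework formalizes.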


Journal of Business & Economic Statistics | 2014

HAC Corrections for Strongly Autocorrelated Time Series

Ulrich K. Müller

Applied work routinely relies on heteroscedasticity and autocorrelation consistent (HAC) standard errors when conducting inference in a time series setting. As is well known, however, these corrections perform poorly in small samples under pronounced autocorrelations. In this article, I first provide a review of popular methods to clarify the reasons for this failure. I then derive inference that remains valid under a specific form of strong dependence. In particular, I assume that the long-run properties can be approximated by a stationary Gaussian AR(1) model, with coefficient arbitrarily close to one. In this setting, I derive tests that come close to maximizing a weighted average power criterion. Small sample simulations show these tests to perform well, also in a regression context.


Econometrica | 2013

Risk of Bayesian Inference in Misspecified Models, and the Sandwich Covariance Matrix

Ulrich K. Müller

It is well known that, in misspecified parametric models, the maximum likelihood estimator (MLE) is consistent for the pseudo-true value and has an asymptotically normal sampling distribution with “sandwich” covariance matrix. Also, posteriors are asymptotically centered at the MLE, normal, and of asymptotic variance that is, in general, different from the sandwich matrix. It is shown that, due to this discrepancy, Bayesian inference about the pseudo-true parameter value is, in general, of lower asymptotic frequentist risk when the original posterior is substituted by an artificial normal posterior centered at the MLE with sandwich covariance matrix. An algorithm is suggested that allows the implementation of this artificial posterior also in models with high-dimensional nuisance parameters that cannot reasonably be estimated by maximizing the likelihood.
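The sandwich construction can be illustrated in the familiar special case of a Gaussian linear regression with possibly misspecified homoskedasticity, where the coefficient MLE is OLS and the sandwich reduces to a heteroskedasticity-robust covariance estimator. The data-generating process below is hypothetical and chosen only to exhibit the misspecification.

```python
import numpy as np

def sandwich_cov(X, y):
    """OLS estimate with HC-type sandwich covariance:
    bread = (X'X)^{-1}, meat = sum_i e_i^2 x_i x_i'."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)   # MLE under Gaussian likelihood
    e = y - X @ beta                           # residuals
    bread = np.linalg.inv(X.T @ X)
    meat = (X.T * e**2) @ X
    return beta, bread @ meat @ bread

# Heteroskedastic errors, so the Gaussian i.i.d. likelihood is misspecified
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (1 + np.abs(X[:, 1]))
beta, V = sandwich_cov(X, y)
```

Centering an artificial normal posterior at beta with covariance V is the substitution the abstract describes, here in its simplest concrete form.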


Econometrica | 2015

Nearly Optimal Tests When a Nuisance Parameter Is Present Under the Null Hypothesis

Graham Elliott; Ulrich K. Müller; Mark W. Watson

This paper considers nonstandard hypothesis testing problems that involve a nuisance parameter. We establish an upper bound on the weighted average power of all valid tests, and develop a numerical algorithm that determines a feasible test with power close to the bound. The approach is illustrated in six applications: inference about a linear regression coefficient when the sign of a control coefficient is known; small sample inference about the difference in means from two independent Gaussian samples from populations with potentially different variances; inference about the break date in structural break models with moderate break magnitude; predictability tests when the regressor is highly persistent; inference about an interval identified parameter; and inference about a linear regression coefficient when the necessity of a control is in doubt.


The Review of Economic Studies | 2010

Efficient Estimation of the Parameter Path in Unstable Time Series Models

Ulrich K. Müller; Philippe-Emmanuel Petalas

The paper investigates asymptotically efficient inference in general likelihood models with time-varying parameters. Parameter path estimators and tests of parameter constancy are evaluated by their weighted average risk and weighted average power, respectively. The weight function is proportional to the distribution of a Gaussian process, and focuses on local parameter instabilities that cannot be detected with certainty even in the limit. It is shown that asymptotically, the sample information about the parameter path is efficiently summarized by a Gaussian pseudo model. This approximation leads to computationally convenient formulas for efficient path estimators and test statistics, and unifies the theory of stability testing and parameter path estimation.


Econometric Theory | 2008

The Impossibility of Consistent Discrimination Between I(0) and I(1) Processes

Ulrich K. Müller

An I(0) process is commonly defined as a process that satisfies a functional central limit theorem, i.e., whose scaled partial sums converge weakly to a Wiener process, and an I(1) process as a process whose first differences are I(0). This paper establishes that with this definition, it is impossible to consistently discriminate between I(0) and I(1) processes. At the same time, on a more constructive note, there exist consistent unit root tests and also nontrivial inconsistent stationarity tests with correct asymptotic size.


Journal of Econometrics | 2013

Low-Frequency Robust Cointegration Testing

Ulrich K. Müller; Mark W. Watson

Standard inference in cointegrating models is fragile because it relies on an assumption of an I(1) model for the common stochastic trends, which may not accurately describe the data's persistence. This paper discusses efficient low-frequency inference about cointegrating vectors that is robust to this potential misspecification. A simple test motivated by the analysis in Wright (2000) is developed and shown to be approximately optimal in the case of a single cointegrating vector.

Collaboration


Dive into Ulrich K. Müller's collaborations.

Top Co-Authors

Graham Elliott
University of California

Mark W. Watson
National Bureau of Economic Research