Publications


Featured research published by Nils Lid Hjort.


Cambridge University Press | 2008

Model Selection and Model Averaging

Gerda Claeskens; Nils Lid Hjort

Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer? Choosing a model is central to all statistical work with data. We have seen rapid advances in model fitting and in the theoretical understanding of model selection, yet this book is the first to synthesize research and practice from this active field. Model choice criteria are explained, discussed and compared, including the AIC, BIC, DIC and FIC. The uncertainties involved with model selection are tackled, with discussions of frequentist and Bayesian methods; model averaging schemes are presented. Real-data examples are complemented by derivations providing deeper insight into the methodology, and instructive exercises build familiarity with the methods. The companion website features data sets and R code.
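
As a minimal sketch of how such criteria trade fit against complexity, the snippet below (not from the book's companion website; the data-generating model and the candidate list are illustrative) compares AIC and BIC across polynomial regression candidates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a cubic signal observed with noise (illustrative only).
n = 100
x = np.linspace(-2, 2, n)
y = 0.5 * x**3 - x + rng.normal(scale=1.0, size=n)

def gaussian_ic(y, fitted, k):
    """AIC and BIC for a Gaussian regression model with k mean parameters
    (sigma^2 profiled out), up to constants shared by all candidates."""
    n = len(y)
    rss = np.sum((y - fitted) ** 2)
    loglik = -0.5 * n * np.log(rss / n)   # profiled Gaussian log-likelihood
    aic = -2 * loglik + 2 * (k + 1)       # +1 for the variance parameter
    bic = -2 * loglik + np.log(n) * (k + 1)
    return aic, bic

# Candidate models: polynomials of increasing degree.
for degree in range(1, 7):
    coefs = np.polyfit(x, y, degree)
    fitted = np.polyval(coefs, x)
    aic, bic = gaussian_ic(y, fitted, degree + 1)
    print(f"degree {degree}: AIC = {aic:7.2f}, BIC = {bic:7.2f}")
```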


Journal of the American Statistical Association | 2003

Frequentist Model Average Estimators

Nils Lid Hjort; Gerda Claeskens

The traditional use of model selection methods in practice is to proceed as if the final selected model had been chosen in advance, without acknowledging the additional uncertainty introduced by model selection. This often means underreported variability and overly optimistic confidence intervals. We build a general large-sample likelihood apparatus in which limiting distributions and risk properties of estimators post-selection, as well as of model average estimators, are precisely described, also explicitly taking modeling bias into account. This allows a drastic reduction in complexity, as competing model averaging schemes may be developed, discussed, and compared inside a statistical prototype experiment where only a few crucial quantities matter. In particular, we offer a frequentist view on Bayesian model averaging methods and give a link to generalized ridge estimators. Our work also leads to new model selection criteria. The methods are illustrated with real data applications.
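
One simple scheme in this spirit is "smoothed AIC" weighting, with model weights proportional to exp(-AIC/2). The sketch below is an illustration under that assumption, not the paper's exact estimator; all data and candidate choices are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

# Fit candidate polynomial models and record AIC for each.
aics, fits = [], []
for degree in range(1, 6):
    coefs = np.polyfit(x, y, degree)
    fitted = np.polyval(coefs, x)
    rss = np.sum((y - fitted) ** 2)
    k = degree + 2                          # mean parameters + variance
    aics.append(n * np.log(rss / n) + 2 * k)
    fits.append(fitted)

# Smoothed-AIC weights: w_m proportional to exp(-AIC_m / 2).
aics = np.array(aics)
w = np.exp(-0.5 * (aics - aics.min()))      # subtract min for numerical stability
w /= w.sum()

# The model-average estimator is the weighted combination of the fits.
y_avg = np.tensordot(w, np.array(fits), axes=1)
print("weights:", np.round(w, 3))
```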


Journal of the American Statistical Association | 2003

The Focused Information Criterion

Gerda Claeskens; Nils Lid Hjort

A variety of model selection criteria have been developed, of general and specific types. Most of these aim at selecting a single model with good overall properties, for example, formulated via average prediction quality or shortest estimated overall distance to the true model. The Akaike, the Bayesian, and the deviance information criteria, along with many suitable variations, are examples of such methods. These methods are not concerned, however, with the actual use of the selected model, which varies with context and application. The present article takes the view that the model selector should instead focus on the parameter singled out for interest; in particular, a model that gives good precision for one estimand may be worse when used for inference for another estimand. We develop a method that, for a given focus parameter, estimates the precision of any submodel-based estimator. The framework is that of large-sample likelihood inference. Using an unbiased estimate of limiting risk, we propose a focused information criterion for model selection. We investigate and discuss properties of the method, establish some connections to Akaike's information criterion, and illustrate its use in a variety of situations.
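
A schematic version of the idea, in an assumed linear-regression setup where the focus parameter is the mean response at a chosen covariate point; the full-model fit stands in for the paper's bias-correction machinery, so this is the spirit of focused selection, not the exact FIC formula:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, p = 200, 4
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.0, 0.0])           # weak/no signal in last covariates
y = X @ beta + rng.normal(size=n)
x0 = np.array([1.0, -1.0, 0.5, 0.5])            # focus: the mean response at x0

def ols(Xs, y):
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return coef

full = ols(X, y)
sigma2 = np.sum((y - X @ full) ** 2) / (n - p)  # error variance from full model

best = None
for k in range(1, p + 1):
    for cols in combinations(range(p), k):
        cols = list(cols)
        coef = ols(X[:, cols], y)
        mu_hat = x0[cols] @ coef
        # Variance of the submodel focus estimate (standard OLS formula).
        XtX_inv = np.linalg.inv(X[:, cols].T @ X[:, cols])
        var = sigma2 * x0[cols] @ XtX_inv @ x0[cols]
        # Bias proxy: distance from the full-model (least biased) estimate.
        bias2 = (mu_hat - x0 @ full) ** 2
        score = bias2 + var                      # schematic "focused" risk
        if best is None or score < best[0]:
            best = (score, cols, mu_hat)

print("chosen submodel:", best[1], " focus estimate: %.3f" % best[2])
```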


Annals of Statistics | 2009

Extending the scope of empirical likelihood

Nils Lid Hjort; Ian W. McKeague; Ingrid Van Keilegom

This article extends the scope of empirical likelihood methodology in three directions: to allow for plug-in estimates of nuisance parameters in estimating equations, slower than root-n rates of convergence, and settings in which there are a relatively large number of estimating equations compared to the sample size. Calibrating empirical likelihood confidence regions with plug-in estimates is sometimes intractable due to the complexity of the asymptotics, so we introduce a bootstrap approximation that can be used in such situations. We provide a range of examples from survival analysis and nonparametric statistics to illustrate the main results.
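
For orientation, here is the plain root-n empirical likelihood for a mean (Owen's construction), the baseline that these extensions generalize; the chi-square(1) calibration noted below is exactly what plug-ins and slower rates can break:

```python
import numpy as np

def el_log_ratio(x, mu, iters=50):
    """-2 log empirical likelihood ratio for the mean mu (Owen's construction).
    Solves sum((x - mu) / (1 + lam*(x - mu))) = 0 for lam by Newton's method."""
    z = x - mu
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)
        gp = -np.sum(z**2 / denom**2)
        step = g / gp
        # Halve the step until all implied weights stay positive.
        while np.any(1.0 + (lam - step) * z <= 0):
            step /= 2.0
        lam -= step
    return 2.0 * np.sum(np.log(1.0 + lam * z))

rng = np.random.default_rng(3)
x = rng.exponential(size=60)
# -2 log R(mu) is ~ chi-square(1) at the true mean; 3.84 bounds a 95% region.
for mu in [0.8, 1.0, 1.2]:
    print(f"mu = {mu}: -2 log R = {el_log_ratio(x, mu):.3f}")
```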


Scandinavian Journal of Statistics | 2002

Confidence and Likelihood

Tore Schweder; Nils Lid Hjort

Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one-sided p-values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman-Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is reduced of all nuisance parameters, and is appropriate for meta-analysis and updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher-Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher-Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.
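
A minimal sketch for the textbook case of a normal mean with the t pivot (assuming scipy), where the confidence distribution and its quantiles are available in closed form:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(loc=5.0, scale=2.0, size=25)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

# Confidence distribution for the normal mean, from the t pivot:
# C(mu) = F_t((mu - xbar) / (s / sqrt(n))), with F_t the t_{n-1} cdf.
def C(mu):
    return stats.t.cdf((mu - xbar) / (s / np.sqrt(n)), df=n - 1)

# Quantiles of C span confidence intervals: [C^{-1}(.025), C^{-1}(.975)].
lo = xbar + stats.t.ppf(0.025, n - 1) * s / np.sqrt(n)
hi = xbar + stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)
print(f"C(5.0) = {C(5.0):.3f}; 95% CI = ({lo:.3f}, {hi:.3f})")
```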


Journal of the Royal Statistical Society: Series B (Statistical Methodology) | 2001

On Bayesian consistency

Stephen G. Walker; Nils Lid Hjort

We consider a sequence of posterior distributions based on a data-dependent prior (which we shall refer to as a pseudo-posterior distribution) and establish simple conditions under which the sequence is Hellinger consistent. It is shown how investigations into these pseudo-posteriors assist with the understanding of some true posterior distributions, including Polya trees, the infinite-dimensional exponential family and mixture models.


Journal of Nonparametric Statistics | 2002

Tests For Constancy Of Model Parameters Over Time

Nils Lid Hjort; Alexander Koning

Suppose that a sequence of data points follows a distribution of a certain parametric form, but that one or more of the underlying parameters may change over time. This paper addresses various natural questions in such a framework. We construct canonical monitoring processes which under the hypothesis of no change converge in distribution to independent Brownian bridges, and use these to construct natural goodness-of-fit statistics. Weighted versions of these are also studied, and optimal weight functions are derived to give maximum local power against alternatives of interest. We also discuss how our results can be used to pinpoint where and what types of changes have occurred, in the event that initial screening tests indicate that such changes exist. Our unified large-sample methodology is quite general and applies to all regular parametric models, including regression, Markov chain and time series situations.
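
The simplest instance of such a monitoring process is a test for constancy of a mean; a sketch under that assumption (scipy supplies the sup-Brownian-bridge tail, data invented for illustration):

```python
import numpy as np
from scipy import stats

def constancy_test(x):
    """CUSUM-type test for constancy of the mean over time.
    Under no change, the standardized partial-sum process converges to a
    Brownian bridge, so sup_t |B_n(t)| has the Kolmogorov limit law."""
    n = len(x)
    s = np.cumsum(x - x.mean())            # centered partial sums = bridge
    sigma = x.std(ddof=1)
    stat = np.max(np.abs(s)) / (sigma * np.sqrt(n))
    pvalue = stats.kstwobign.sf(stat)      # tail of sup |Brownian bridge|
    return stat, pvalue

rng = np.random.default_rng(5)
no_change = rng.normal(size=200)
with_change = np.concatenate([rng.normal(size=100), rng.normal(1.0, size=100)])
for name, x in [("no change", no_change), ("mean shift", with_change)]:
    stat, p = constancy_test(x)
    print(f"{name}: sup|B_n| = {stat:.3f}, p = {p:.4f}")
```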


Econometric Theory | 2008

Minimizing Average Risk In Regression Models

Gerda Claeskens; Nils Lid Hjort

Most model selection mechanisms work in an “overall” modus, providing models without specific concern for how the selected model is going to be used afterward. The focused information criterion (FIC), on the other hand, is geared toward optimum model selection when inference is required for a given estimand. In this paper the FIC method is extended to weighted versions. This allows one to rank and select candidate models for the purpose of handling a range of similar tasks well, as opposed to being forced to focus on each task separately. Applications include selecting regression models that perform well for specified regions of covariate values. We derive these weighted focused information criteria (wFIC), give asymptotic results, and apply the methods to real data. Formulas for easy implementation are provided for the class of generalized linear models.
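
A schematic illustration, with uniform weights over a covariate window standing in for the paper's weight function and the same bias-proxy caveats as the focused-selection sketch above:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 150
x = rng.uniform(0, 1, n)
y = np.exp(x) + rng.normal(scale=0.2, size=n)

def design(x, degree):
    return np.vander(x, degree + 1, increasing=True)

# "Full" model used as the low-bias benchmark.
deg_full = 5
Xf = design(x, deg_full)
beta_f, *_ = np.linalg.lstsq(Xf, y, rcond=None)
sigma2 = np.sum((y - Xf @ beta_f) ** 2) / (n - Xf.shape[1])

grid = np.linspace(0.8, 1.0, 21)               # covariate region of interest
weights = np.full(grid.size, 1.0 / grid.size)  # uniform weights over the region

for degree in range(1, deg_full + 1):
    Xs = design(x, degree)
    beta_s, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    G = design(grid, degree)
    XtX_inv = np.linalg.inv(Xs.T @ Xs)
    var = sigma2 * np.einsum("ij,jk,ik->i", G, XtX_inv, G)
    bias2 = (G @ beta_s - design(grid, deg_full) @ beta_f) ** 2
    score = np.sum(weights * (bias2 + var))    # weighted focused risk
    print(f"degree {degree}: weighted risk = {score:.4f}")
```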


Advances in Applied Probability | 2003

Frailty Models Based on Lévy Processes

Håkon K. Gjessing; Odd O. Aalen; Nils Lid Hjort

Generalizing the standard frailty models of survival analysis, we propose to model frailty as a weighted Lévy process. Hence, the frailty of an individual is not a fixed quantity, but develops over time. Formulae for the population hazard and survival functions are derived. The power variance function Lévy process is a prominent example. In many cases, notably for compound Poisson processes, quasi-stationary distributions of survivors may arise. Quasi-stationarity implies limiting population hazard rates that are constant, in spite of the continual increase of the individual hazards. A brief discussion is given of the biological relevance of this finding.
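
A Monte Carlo sketch of this setup under illustrative choices (unit baseline hazard, compound Poisson frailty with gamma-distributed jumps; all names and parameters invented here); the flattening population hazard hints at the quasi-stationarity described above:

```python
import numpy as np

rng = np.random.default_rng(7)

def cumulative_frailty(t_grid, rate=1.0, jump_shape=2.0, jump_scale=0.5):
    """One path of A(t) = integral of a compound Poisson frailty Z(s) ds:
    Z jumps at Poisson(rate) times by Gamma(jump_shape, jump_scale) amounts."""
    T = t_grid[-1]
    n_jumps = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0, T, n_jumps))
    sizes = rng.gamma(jump_shape, jump_scale, n_jumps)
    # Z(s) sums the jumps occurring before s; integrate piecewise linearly.
    A = np.zeros_like(t_grid)
    for tj, zj in zip(times, sizes):
        A += zj * np.clip(t_grid - tj, 0.0, None)
    return A

t = np.linspace(0, 10, 201)
surv = np.zeros_like(t)
n_paths = 2000
for _ in range(n_paths):
    surv += np.exp(-cumulative_frailty(t))   # conditional survival, unit baseline
surv /= n_paths                              # population survival function

# Population hazard: -d log S(t) / dt; flattening indicates quasi-stationarity.
hazard = -np.gradient(np.log(surv), t)
print("hazard at t = 2, 5, 10:", np.round(hazard[[40, 100, 200]], 3))
```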


Archive | 1992

Semiparametric estimation of parametric hazard rates

Nils Lid Hjort; Mike West; Sue Leurgans

The best known methods for estimating hazard rate functions in survival analysis models are either purely parametric or purely nonparametric. The parametric ones are sometimes too biased while the nonparametric ones are sometimes too variable. There should therefore be scope for methods that somehow try to combine parametric and nonparametric features. In the present paper three semiparametric approaches to hazard rate estimation are presented. The first idea uses a dynamic local likelihood approach to fit the locally most suitable member in a given parametric class of hazard rates. Thus the parametric hazard rate estimate at time s inserts a parameter estimate that also depends on s. The second idea is to write the true hazard as a product of an initial parametric estimate times a correction factor, and then estimate this factor nonparametrically using orthogonal expansions. Finally the third idea is Bayesian in flavour and builds a larger nonparametric hazard process prior around a given parametric hazard model. The hazard estimate in this case is the posterior expectation. Properties of the resulting estimators are discussed.
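
A rough sketch of the second idea (parametric start times a nonparametrically estimated correction factor), with a quadratic fit standing in for the orthogonal expansion and all tuning choices invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# Uncensored Weibull lifetimes; an exponential model is mildly misspecified.
t_obs = rng.weibull(1.5, size=400)

# Step 1: parametric start -- exponential hazard, rate = MLE.
lam = len(t_obs) / t_obs.sum()

# Step 2: raw nonparametric hazard via kernel-smoothed Nelson-Aalen jumps.
def kernel_hazard(grid, t_obs, h=0.2):
    ts = np.sort(t_obs)
    at_risk = len(ts) - np.arange(len(ts))     # Y(T_i); no censoring here
    jumps = 1.0 / at_risk                      # Nelson-Aalen increments
    K = np.exp(-0.5 * ((grid[:, None] - ts[None, :]) / h) ** 2)
    return (K / (h * np.sqrt(2 * np.pi))) @ jumps

grid = np.linspace(0.1, 1.8, 60)
raw_ratio = kernel_hazard(grid, t_obs) / lam   # hazard / parametric start

# Step 3: low-dimensional correction factor (quadratic stands in for an
# orthogonal expansion); final estimate = parametric * correction.
coef = np.polyfit(grid, raw_ratio, 2)
alpha_semi = lam * np.polyval(coef, grid)

true_hazard = 1.5 * grid**0.5                  # Weibull(shape 1.5) hazard
print("mean abs error, corrected:", np.mean(np.abs(alpha_semi - true_hazard)).round(3))
print("mean abs error, exponential:", np.mean(np.abs(lam - true_hazard)).round(3))
```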

Collaboration


Nils Lid Hjort's top co-authors:

Gerda Claeskens, Katholieke Universiteit Leuven
Andrea Ongaro, University of Milano-Bicocca
N. G. Ushakov, Norwegian University of Science and Technology