Jonathan B. Hill
University of North Carolina at Chapel Hill
Publications
Featured research published by Jonathan B. Hill.
Econometric Theory | 2010
Jonathan B. Hill
In this paper we analyze the asymptotic properties of the popular distribution tail index estimator of B. Hill (1975) for possibly heavy-tailed, heterogeneous, dependent processes. We prove the Hill estimator is weakly consistent for processes with extremes that form mixingale sequences, and asymptotically normal for processes with extremes that are near-epoch-dependent on the extremes of a mixing process. Our limit theory covers infinitely many ARFIMA and FIGARCH processes, stochastic recurrence equations, and simple bilinear processes. Moreover, we develop a simple nonparametric kernel estimator of the asymptotic variance of the Hill estimator, and prove consistency for extremal-NED processes.
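For reference, the Hill estimator is simple to compute from the k largest order statistics. A minimal numpy sketch, where the choice of k is illustrative rather than the paper's (the theory treats k as an intermediate sequence with k → ∞ and k/n → 0):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill (1975) estimator of 1/alpha from the k largest order statistics,
    where P(|X| > x) ~ x^(-alpha) up to a slowly varying factor."""
    a = np.sort(np.abs(x))[::-1]                  # absolute order statistics, descending
    return np.mean(np.log(a[:k]) - np.log(a[k]))  # mean log-exceedance over the k-th extreme

rng = np.random.default_rng(0)
x = rng.pareto(2.0, size=5000)        # Pareto-type tail with alpha = 2
k = int(len(x) ** 0.6)                # illustrative intermediate sequence
print(1.0 / hill_estimator(x, k))     # tail index estimate, should be near 2
```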
Econometric Theory | 2011
Jonathan B. Hill
New notions of tail and nontail dependence are used to characterize separately extremal and nonextremal information, including tail log-exceedances and events, and tail-trimmed levels. We prove that near epoch dependence (McLeish, 1975; Gallant and White, 1988) and \(L_0\)-approximability (Pötscher and Prucha, 1991) are equivalent for tail events and tail-trimmed levels, ensuring a Gaussian central limit theory for important extreme value and robust statistics under general conditions. We apply the theory to characterize the extremal and nonextremal memory properties of possibly very heavy-tailed GARCH processes and distributed lags. This in turn is used to verify Gaussian limits for tail index, tail dependence, and tail-trimmed sums of these data, allowing for Gaussian asymptotics for a new tail-trimmed least squares estimator for heavy-tailed processes.
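The tail objects named above are easy to construct from a sample. The following sketch only illustrates the definitions; the threshold is a convenient sample order statistic, not the paper's theory:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_t(df=2, size=10000)   # heavy-tailed series (infinite variance)

k = 100                                # number of sample extremes treated as "tail"
c = np.sort(np.abs(x))[-k]             # threshold: k-th largest absolute value

tail_events = (np.abs(x) > c).astype(float)               # tail events I(|x_t| > c)
log_exceedances = np.where(np.abs(x) > c,
                           np.log(np.abs(x) / c), 0.0)    # tail log-exceedances
tail_trimmed = np.where(np.abs(x) <= c, x, 0.0)           # tail-trimmed levels
```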
Bernoulli | 2014
Jonathan B. Hill
We develop two new estimators for a general class of stationary GARCH models with possibly heavy-tailed, asymmetrically distributed errors, covering processes with symmetric and asymmetric feedback like GARCH, Asymmetric GARCH, VGARCH and Quadratic GARCH. The first estimator arises from negligibly trimming QML criterion equations according to error extremes. The second embeds negligibly transformed errors into QML score equations for a Method of Moments estimator. In this case we exploit a sub-class of redescending transforms that includes tail-trimming and functions popular in the robust estimation literature, and we re-center the transformed errors to minimize small sample bias. The negligible transforms allow both identification of the true parameter and asymptotic normality. We present a consistent estimator of the covariance matrix that permits classic inference without knowledge of the rate of convergence. A simulation study shows both of our estimators outperform existing ones in sharpness and approximate normality, including QML, Log-LAD, and two types of non-Gaussian QML (Laplace and Power-Law). Finally, we apply the tail-trimmed QML estimator to financial data.
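A stylized sketch of the first idea, trimming a Gaussian QML criterion for a GARCH(1,1) by the most extreme standardized errors, is below. The trimming rule, starting values, and optimizer are simplifications for illustration, not the paper's exact construction:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulate a GARCH(1,1): y_t = sqrt(s2_t) * e_t, s2_t = w + a*y_{t-1}^2 + b*s2_{t-1}
n, w0, a0, b0 = 2000, 0.1, 0.1, 0.8
e = rng.standard_t(df=3, size=n)      # heavy-tailed errors
y, s2 = np.empty(n), np.empty(n)
s2[0] = w0 / (1 - a0 - b0)
y[0] = np.sqrt(s2[0]) * e[0]
for t in range(1, n):
    s2[t] = w0 + a0 * y[t-1]**2 + b0 * s2[t-1]
    y[t] = np.sqrt(s2[t]) * e[t]

def trimmed_qml(theta, y, k):
    """Gaussian QML criterion with the k most extreme standardized errors trimmed."""
    w, a, b = theta
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return np.inf
    s2 = np.empty(len(y))
    s2[0] = w / (1 - a - b)
    for t in range(1, len(y)):
        s2[t] = w + a * y[t-1]**2 + b * s2[t-1]
    z2 = y**2 / s2                     # squared standardized errors
    keep = z2 <= np.sort(z2)[-k]       # negligible trimming: k grows slower than n
    return np.mean((np.log(s2) + z2)[keep])

k = int(len(y) ** 0.5)                 # illustrative trimming sequence
res = minimize(trimmed_qml, x0=[0.05, 0.05, 0.9], args=(y, k), method="Nelder-Mead")
print(res.x)                           # estimates of (w, a, b)
```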
Journal of Time Series Analysis | 2013
Jonathan B. Hill
We develop a robust least squares estimator for autoregressions with possibly heavy-tailed errors. Robustness to heavy tails is ensured by negligibly trimming the squared error according to extreme values of the error and regressor. Tail-trimming ensures asymptotic normality and super-\(\sqrt{n}\)-convergence with a rate comparable to the highest achieved amongst M-estimators for stationary data. Moreover, tail-trimming ensures robustness to heavy tails in both small and large samples. By comparison, existing robust estimators are not as robust in small samples, have a slower rate of convergence when the variance is infinite, or are not asymptotically normal. We present a consistent estimator of the covariance matrix and permit classic inference without knowledge of the rate of convergence. A simulation study demonstrates the sharpness and approximate normality of the estimator, and we apply the estimator to financial returns data.
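A one-step sketch of the idea for an AR(1), trimming by extremes of the fitted error and the regressor, follows; the trimming fraction and single iteration are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) with heavy-tailed (infinite variance) errors
n, phi = 2000, 0.5
u = rng.standard_t(df=1.5, size=n)
y = np.empty(n); y[0] = u[0]
for t in range(1, n):
    y[t] = phi * y[t-1] + u[t]

x, z = y[:-1], y[1:]
phi_ls = (x @ z) / (x @ x)             # initial least squares fit

# One trimming step: drop observations with extreme errors or regressors
k = int(n ** 0.7)                      # illustrative; negligible trimming has k/n -> 0
r = np.abs(z - phi_ls * x)
keep = (r <= np.sort(r)[-k]) & (np.abs(x) <= np.sort(np.abs(x))[-k])
phi_tls = (x[keep] @ z[keep]) / (x[keep] @ x[keep])
print(phi_ls, phi_tls)
```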
Archive | 2012
Jonathan B. Hill
We present asymptotic power-one tests of regression model functional form for heavy-tailed time series. Under the null hypothesis of correct specification, the model errors must have a finite mean; otherwise they need only a fractional moment. If the errors have an infinite variance then in principle any consistent plug-in is allowed, depending on the model, including those with non-Gaussian limits and/or a sub-\(\sqrt{n}\)-convergence rate. One test statistic exploits an orthogonalized test equation that promotes plug-in robustness irrespective of tails. We derive chi-squared weak limits of the statistics, characterize an empirical process method for smoothing over a trimming parameter, and study the finite sample properties of the test statistics.
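For intuition, a textbook conditional moment test of functional form with an orthogonalized test weight is sketched below. The weight, model, and statistic are generic illustrations of the construction, not the paper's robust, trimmed versions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Data from a linear AR(1); the null model is linear autoregression
n = 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t-1] + rng.standard_normal()

x, z = y[:-1], y[1:]
beta = (x @ z) / (x @ x)
e = z - beta * x                       # fitted residuals

# Orthogonalize the test weight exp(x) against the regressor, so that
# plug-in estimation error does not enter the test equation's limit
w = np.exp(x) - x * (np.mean(x * np.exp(x)) / np.mean(x * x))
m = w * e                              # test equation

T = m.sum() ** 2 / (m @ m)             # chi-squared(1) under the null
print(T, 1 - stats.chi2.cdf(T, df=1))
```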
Statistica Sinica | 2015
Jonathan B. Hill
We prove Hill's (1975) tail index estimator is asymptotically normal where the employed data are generated by a stationary parametric process {x(t)}. We assume x(t) is an unobservable function of an estimable parameter \(\theta\). Natural applications include regression residuals and GARCH filters. Our main result extends Resnick and Stărică's (1997) theory for estimated AR i.i.d. errors and Ling and Peng's (2004) theory for estimated ARMA i.i.d. errors to a wide range of filtered time series, since we do not require x(t) to be i.i.d., nor generated by a linear process with geometric dependence. We assume x(t) is \(\beta\)-mixing with possibly hyperbolic dependence, covering ARMA-GARCH filters, ARMA filters with heteroscedastic errors of unknown form, nonlinear filters like threshold autoregressions, and filters based on mis-specified models, as well as i.i.d. errors in an ARMA model. Finally, as opposed to existing results we do not require the plug-in for \(\theta\) to be super-\(n^{1/2}\)-convergent when x(t) has an infinite variance, allowing a far greater variety of plug-ins, including those that are slower than \(n^{1/2}\), like QML-type estimators for GARCH models.
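The simplest instance of this setting, an AR(1) filter estimated by least squares with the Hill estimator applied to the fitted residuals, can be sketched as follows; the paper's theory covers far more general filters and plug-ins:

```python
import numpy as np

def hill(x, k):
    a = np.sort(np.abs(x))[::-1]
    return np.mean(np.log(a[:k]) - np.log(a[k]))   # estimate of 1/alpha

rng = np.random.default_rng(5)

# AR(1) whose errors x(t) are unobserved, so their tail index
# must be estimated from fitted residuals
n, phi = 5000, 0.5
u = rng.pareto(1.5, size=n) * rng.choice([-1, 1], size=n)   # tail index 1.5
y = np.empty(n); y[0] = u[0]
for t in range(1, n):
    y[t] = phi * y[t-1] + u[t]

phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])   # plug-in for the filter parameter
resid = y[1:] - phi_hat * y[:-1]                 # filtered series

k = int(n ** 0.6)                                # illustrative intermediate sequence
print(1.0 / hill(resid, k))                      # tail index estimate from residuals
```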
Proceedings of SPIE | 2013
Beatriz Paniagua; Omri Emodi; Jonathan B. Hill; James Fishbaugh; Luiz Pimenta; Stephen R. Aylward; Andinet Enquobahrie; Guido Gerig; John H. Gilmore; John A. van Aalst; Martin Styner
The skull of young children is made up of bony plates that enable growth. Craniosynostosis is a birth defect that causes one or more sutures on an infant's skull to close prematurely. Corrective surgery focuses on cranial and orbital rim shaping to return the skull to a more normal shape. Functional problems caused by craniosynostosis, such as speech and motor delay, can improve after surgical correction, but a post-surgical analysis of brain development in comparison with age-matched healthy controls is necessary to assess surgical outcome. Full brain segmentations obtained from pre- and post-operative computed tomography (CT) scans of 8 patients with single suture sagittal (n=5) and metopic (n=3) nonsyndromic craniosynostosis, aged 41 to 452 days, were included in this study. Age-matched controls, obtained via 4D acceleration-based regression of a cohort of 402 full brain segmentations from magnetic resonance images (MRI) of healthy controls (ages 38 to 825 days), were also used for comparison. 3D point-based models of the patient and control cohorts were obtained using the SPHARM-PDM shape analysis tool. From the full dataset of regressed shapes, 240 healthy regressed shapes between 30 and 588 days of age (time step = 2.34 days) were selected. Volumes and shape metrics were computed for craniosynostosis and healthy age-matched subjects. Volumes and shape metrics in single suture craniosynostosis patients were larger than those of age-matched controls both pre- and post-surgery. The 3D shape and volumetric measurements show that brain growth is not normal in patients with single suture craniosynostosis.
Annals of economics and statistics | 2008
Jonathan B. Hill
We develop a regression model specification test that directs maximal power toward smooth transition functional forms, and is consistent against any deviation from the null specification. We provide new details regarding whether consistent parametric tests of functional form are asymptotically degenerate: a test of linear autoregression against STAR alternatives is never degenerate. Moreover, a test of Exponential STAR has power attributes entirely associated with the choice of threshold. In a simulation experiment in which all parameters are randomly selected, the proposed test has power nearly identical to a most-powerful test for true STAR, neural network and SETAR processes, and dominates popular tests. We apply the test to U.S. output, money, prices and interest rates.
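For concreteness, smooth transition alternatives of the kind the test directs power toward can be simulated as below; all parameter values, including the threshold c and the transition speed gamma, are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def star1(n, phi1=0.9, phi2=-0.5, gamma=5.0, c=0.0, exponential=False):
    """Simulate a two-regime STAR(1):
    y_t = phi1*y_{t-1} + (phi2 - phi1)*y_{t-1}*G(y_{t-1}) + e_t,
    with a logistic or exponential transition function G."""
    y = np.zeros(n)
    for t in range(1, n):
        s = y[t-1]
        if exponential:
            G = 1.0 - np.exp(-gamma * (s - c) ** 2)     # Exponential STAR transition
        else:
            G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))  # Logistic STAR transition
        y[t] = phi1 * s + (phi2 - phi1) * s * G + rng.standard_normal()
    return y

y_lstar = star1(2000)                      # logistic STAR sample
y_estar = star1(2000, exponential=True)    # exponential STAR sample
```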
Archive | 2017
Eric Ghysels; Jonathan B. Hill; Kaiji Motegi
This paper proposes a new test for a large set of zero restrictions in regression models, based on a seemingly overlooked but simple dimension reduction technique. If the number of parameters of interest is large, then in small or even large samples the conventional trilogy of test statistics (Wald, Lagrange multiplier, likelihood ratio) may be poorly approximated by their asymptotic distributions. A bootstrap method can improve empirical test size, but generally at a loss of power; shrinkage estimators such as the Lasso, Adaptive Lasso, or Ridge Regression can be employed, but are valid only under a sparsity assumption that does not apply to Granger causality tests. Our procedure instead involves multiple parsimonious regression models in which the key regressors are split across simple regressions: each parsimonious model contains one key regressor, whose slope parameter is necessarily zero under the null hypothesis, together with other regressors not associated with the null. The test is based on a max statistic that selects the largest squared estimate among all parsimonious regression models. Parsimony ensures sharper estimates and therefore improved power in small samples. We present the general theory of the test, which is of broad interest when testing potentially large sets of parameter restrictions, and focus on mixed frequency Granger causality as a prominent application: each parsimonious model regresses a low frequency variable onto one individual lag or lead of a high frequency series, and the resulting tests feature remarkable power properties even with a relatively small sample size. Monte Carlo simulations show the max test is particularly powerful for causality with a large time lag.
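A bare-bones sketch of the max test under the non-causality null is below. The alignment of the high and low frequency samples and the simulated critical value, which treats the parsimonious statistics as independent, are illustrative simplifications of the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Low-frequency y observed once per m = 3 high-frequency periods of x
m, nL, J = 3, 300, 6                  # J = number of high-frequency lags under test
x = rng.standard_normal(J + m * nL)
y = rng.standard_normal(nL)           # null of non-causality: y unrelated to x

# Parsimonious regressions: y onto one high-frequency lag j at a time
tstats = np.empty(J)
for j in range(1, J + 1):
    xj = x[J - j : J - j + m * nL : m]       # lag-j regressor aligned to y
    b = (xj @ y) / (xj @ xj)
    e = y - b * xj
    se = np.sqrt((e @ e) / (len(y) - 1) / (xj @ xj))
    tstats[j - 1] = b / se

max_stat = np.max(tstats ** 2)               # max test statistic

# Simulated critical value under independence (illustrative only)
sims = rng.standard_normal((10000, J)) ** 2
crit = np.quantile(sims.max(axis=1), 0.95)
print(max_stat, crit)
```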
Journal of Multivariate Analysis | 2015
Jonathan B. Hill
We present a robust Generalized Empirical Likelihood estimator and confidence region for the parameters of an autoregression that may have a heavy-tailed, heteroscedastic error. The estimator exploits two transformations for heavy tail robustness: a redescending transformation of the error that robustifies against innovation outliers, and weighted least squares instruments that ensure robustness against heavy-tailed regressors. Our estimator is consistent for the true parameter and asymptotically normally distributed irrespective of heavy tails.
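The two transforms can be sketched in a GMM-style moment condition; this is a stand-in for intuition only, omitting the GEL machinery, and the tail-trimming transform and bounded instrument below are generic choices rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(8)

# AR(1) with heavy-tailed, heteroscedastic errors
n, phi = 3000, 0.5
u = rng.standard_t(df=2, size=n) * (1 + 0.5 * np.abs(rng.standard_normal(n)))
y = np.empty(n); y[0] = u[0]
for t in range(1, n):
    y[t] = phi * y[t-1] + u[t]

x, z = y[:-1], y[1:]

def moment(phi_):
    e = z - phi_ * x
    # Redescending transform: tail-trim extreme errors (robust to innovation outliers)
    c = np.sort(np.abs(e))[-int(n ** 0.6)]
    psi = np.where(np.abs(e) <= c, e, 0.0)
    # Bounded instrument: downweight heavy-tailed regressors
    w = x / (1.0 + x ** 2)
    return psi * w

# Solve the just-identified trimmed moment condition by grid search (illustrative)
grid = np.linspace(-0.95, 0.95, 381)
est = grid[np.argmin([moment(p).mean() ** 2 for p in grid])]
print(est)
```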