Fred W. Huffer
Florida State University
Publications
Featured research published by Fred W. Huffer.
Biometrics | 1998
Fred W. Huffer; Hulin Wu
SUMMARY In this paper, we explore using autologistic regression models for spatial binary data with covariates. Autologistic regression models can handle binary responses exhibiting both spatial correlation and dependence on covariates. We use Markov chain Monte Carlo (MCMC) to estimate the parameters in these models. The distributional behavior of the MCMC maximum likelihood estimates (MCMC MLEs) is studied via simulation. We find that the MCMC MLEs are approximately normally distributed and that the MCMC estimates of Fisher information may be used to estimate the variance of the MCMC MLEs and to construct confidence intervals. Finally, we illustrate by example how our studies may be applied to model the distribution of plant species.
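The kind of MCMC simulation the paper relies on can be illustrated with a toy Gibbs sampler for a binary field. This is a minimal sketch, not the paper's exact model: the grid, the 4-neighbour dependence structure, the parameter values, and the omission of covariate terms are all illustrative assumptions.

```python
import math
import random

def gibbs_autologistic(n, beta0, eta, sweeps, seed=0):
    """Simulate a binary field on an n x n grid from a simple
    autologistic model via Gibbs sampling.  Each site's conditional
    log-odds are beta0 plus eta times the number of 4-neighbours
    currently equal to 1 (no covariates in this sketch)."""
    rng = random.Random(seed)
    y = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                s = sum(y[a][b]
                        for a, b in ((i - 1, j), (i + 1, j),
                                     (i, j - 1), (i, j + 1))
                        if 0 <= a < n and 0 <= b < n)
                p = 1.0 / (1.0 + math.exp(-(beta0 + eta * s)))
                y[i][j] = 1 if rng.random() < p else 0
    return y

# One simulated field; positive eta encourages spatial clustering.
field = gibbs_autologistic(n=10, beta0=-0.5, eta=0.6, sweeps=200)
```

Simulated fields like this one are the raw material for the MCMC maximum likelihood machinery the abstract describes.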
Journal of Geophysical Research | 2015
Ian R. MacDonald; Oscar Garcia-Pineda; Andrew R. Beet; S. Daneshgar Asl; Lian Feng; George Graettinger; D. French‐McCay; Jamie Holmes; Chuanmin Hu; Fred W. Huffer; Ira Leifer; Frank E. Muller-Karger; Andrew R. Solow; Mauricio Silva; Gregg A. Swayze
Abstract When wind speeds are 2–10 m s⁻¹, reflective contrasts in the ocean surface make oil slicks visible to synthetic aperture radar (SAR) under all sky conditions. Neural network analysis of satellite SAR images quantified the magnitude and distribution of surface oil in the Gulf of Mexico from persistent, natural seeps and from the Deepwater Horizon (DWH) discharge. This analysis identified 914 natural oil seep zones across the entire Gulf of Mexico in pre‐2010 data. Their ∼0.1 µm slicks covered an aggregated average of 775 km². Assuming an average volume of 77.5 m³ over an 8–24 h lifespan per oil slick, the floating oil indicates a surface flux of 2.5–9.4 × 10⁴ m³ yr⁻¹. Oil from natural slicks was regionally concentrated: 68%, 25%, 7%, and <1% of the total was observed in the NW, SW, NE, and SE Gulf, respectively. This reflects differences in basin history and hydrocarbon generation. SAR images from 2010 showed that the 87 day DWH discharge produced a surface‐oil footprint fundamentally different from background seepage, with an average ocean area of 11,200 km² (SD 5028) and a volume of 22,600 m³ (SD 5411). Peak magnitudes of oil were detected during equivalent, ∼14 day intervals around 23 May and 18 June, when wind speeds remained <5 m s⁻¹. Over this interval, the aggregated volume of floating oil decreased by 21%, while the area covered increased by 49% (p < 0.1), potentially altering its ecological impact. The most likely causes were increased application of dispersant and surface burning operations.
PALAIOS | 2006
Neal A. Doran; Anthony J. Arnold; William C. Parker; Fred W. Huffer
Abstract Age-dependent extinction is an observation with important biological implications. Van Valen's Red Queen hypothesis triggered three decades of research testing its primary implication: that the risk of extinction is independent of taxon age. In contrast to this, later studies with species-level data have indicated the possible presence of age dependence. Since the formulation of the Red Queen hypothesis, more powerful tests of survivorship models have been developed. This is the first report of the application of the Cox proportional hazards model to paleontological data. Planktonic foraminiferal morphospecies allow the taxonomic and precise stratigraphic resolution necessary for the Cox model. As a whole, planktonic foraminiferal morphospecies clearly show age-dependent extinction. In particular, the effect is attributable to the presence of shorter-ranged species (range < 4 myr) following extinction events. These shorter-ranged species also possess tests with unique morphological architecture. The morphological differences are probably epiphenomena of underlying developmental and heterochronic processes of shorter-ranged species that survived various extinction events. Extinction survivors carry developmental and morphological characteristics into postextinction recovery times, and this sets them apart from species populations established independently of extinction events.
Journal of the American Statistical Association | 1997
Fred W. Huffer; Chien-Tai Lin
Abstract Let X1, X2, …, Xn be randomly distributed points on the unit interval. Let N(x, x+d) be the number of these points contained in the interval (x, x + d). The scan statistic Nd is defined as the maximum number of points in a window of length d; that is, Nd = sup_x N(x, x+d). This statistic is used to test for the presence of nonrandom clustering. We say that m points form an m:d clump if these points are all contained in some interval of length d. Let Y denote the number of m:d clumps. In this article we show how to compute the lower-order moments of Y, and we use these moments to obtain approximations and bounds for the distribution of the scan statistic Nd. Our approximations are based on using the method of moments technique to approximate the distribution of Y. We try two basic types of method of moments approximations: one involving a simple Markov chain model and others using various compound Poisson approximations. Our results compare favorably with other approximations and bounds ...
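For a concrete realization of the points, both the scan statistic and the clump count can be computed by brute force. A minimal sketch (the paper's contribution is the moment-based approximation of their distributions, not this direct computation; function names are ours):

```python
from itertools import combinations

def scan_statistic(points, d):
    """Scan statistic N_d: the maximum number of points contained in a
    window of length d.  Over window positions, the maximum count is
    attained with the window's left edge at one of the points."""
    pts = sorted(points)
    return max(sum(1 for x in pts[i:] if x <= left + d)
               for i, left in enumerate(pts))

def clump_count(points, m, d):
    """Y: the number of m:d clumps, counted here as the m-subsets of
    the points whose range fits inside an interval of length d."""
    return sum(1 for c in combinations(sorted(points), m)
               if c[-1] - c[0] <= d)
```

Note that scan_statistic(points, d) >= m exactly when clump_count(points, m, d) >= 1, which is the link the abstract exploits.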
Journal of Computational and Graphical Statistics | 2003
Hani Doss; Fred W. Huffer
Consider the model in which the data consist of possibly censored lifetimes, and one puts a mixture of Dirichlet process priors on the common survival distribution. The exact computation of the posterior distribution of the survival function is in general impossible to obtain. This article develops and compares the performance of several simulation techniques, based on Markov chain Monte Carlo and sequential importance sampling, for approximating this posterior distribution. One scheme, whose derivation is based on sequential importance sampling, gives an exact i.i.d. sample from the posterior in the case of right-censored data. A second contribution of this article is a battery of programs that implement the various schemes discussed here. The programs and methods are illustrated on a dataset of interval-censored times arising from two treatments for breast cancer.
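The Dirichlet process prior underlying this model admits Sethuraman's stick-breaking representation, which is easy to simulate from. A minimal sketch of truncated stick-breaking weights; the truncation level, concentration value, and inverse-CDF Beta(1, alpha) draw are illustrative choices, not the paper's computational schemes:

```python
import random

def stick_breaking(alpha, n_atoms, seed=0):
    """Truncated stick-breaking weights for a Dirichlet process with
    concentration alpha.  Each stick fraction v ~ Beta(1, alpha) is
    drawn by inverse CDF: v = 1 - U**(1/alpha) for U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = 1.0 - rng.random() ** (1.0 / alpha)
        weights.append(remaining * v)   # break off a piece of the stick
        remaining *= (1.0 - v)          # what is left to break later
    return weights

w = stick_breaking(alpha=2.0, n_atoms=50, seed=1)
```

The weights sum to just under 1; the unassigned remainder shrinks geometrically with the truncation level.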
Computational Statistics & Data Analysis | 1997
Fred W. Huffer; Chien-Tai Lin
Abstract Consider the spacings between i.i.d. uniform observations in the interval (0,1). We develop a general method for evaluating the distribution of the minimum or the maximum of random variables which are sums of consecutive spacings. We then present some applications of this method. The main idea underlying our approach was given by Huffer (1988): a recursion is used to break up the joint distribution of several linear combinations of spacings into a sum of simpler components. We continue applying this recursion until we obtain components which are simple and easily expressed in closed form. In this paper we propose an algorithm for the systematic application of this recursion. Our method can be used to solve a variety of problems involving sums of spacings or exponential random variables. In particular, our method gives another way to obtain the distribution of the scan statistic. Because the output of our procedure is a polynomial whose coefficients are computed exactly, we can supply numerical answers which are accurate to any required degree of precision.
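For a given sample, the spacings and the extremes of sums of consecutive spacings are simple to compute directly. A minimal helper; the function names are ours, and this direct evaluation is not the paper's method, which derives the distributions exactly by recursion:

```python
def spacings(points):
    """Spacings of points in (0, 1): the gaps between consecutive
    order statistics, including the gaps to 0 and 1.  They sum to 1."""
    pts = sorted(points)
    edges = [0.0] + pts + [1.0]
    return [b - a for a, b in zip(edges, edges[1:])]

def extreme_consecutive_sums(points, m):
    """Minimum and maximum over all sums of m consecutive spacings."""
    s = spacings(points)
    sums = [sum(s[i:i + m]) for i in range(len(s) - m + 1)]
    return min(sums), max(sums)
```

The maximum of sums of m consecutive spacings is closely related to the scan statistic, which is why the authors' recursion also yields its distribution.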
arXiv: Probability | 2008
Fred W. Huffer; Jayaram Sethuraman; Sunder Sethuraman
A sequence of random variables, each taking values 0 or 1, is called a Bernoulli sequence. We say that a string of length d occurs, in a Bernoulli sequence, if a success is followed by exactly (d − 1) failures before the next success. The counts of such d-strings are of interest, and in specific independent Bernoulli sequences are known to correspond to asymptotic d-cycle counts in random permutations. In this note, we give a new framework, in terms of conditional Poisson processes, which allows for a quick characterization of the joint distribution of the counts of all d-strings, in a general class of Bernoulli sequences, as certain mixtures of the product of Poisson measures. In particular, this general class includes all Bernoulli sequences considered in the literature, as well as a host of new sequences.
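Counting d-strings in an observed 0/1 sequence is straightforward: a d-string corresponds to a gap of exactly d between consecutive successes (i.e. d − 1 failures in between). A small sketch; the function name and dictionary output are ours:

```python
def d_string_counts(seq, max_d):
    """Count d-strings in a 0/1 sequence for d = 1..max_d.  A d-string
    is a success followed by exactly d-1 failures before the next
    success, i.e. consecutive 1s at positions a < b with b - a = d."""
    ones = [i for i, b in enumerate(seq) if b == 1]
    counts = {d: 0 for d in range(1, max_d + 1)}
    for a, b in zip(ones, ones[1:]):
        d = b - a              # gap of d means d-1 zeros in between
        if d <= max_d:
            counts[d] += 1
    return counts
```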
Archive | 1999
Fred W. Huffer; Chien-Tai Lin
Consider the order statistics from N i.i.d. random variables uniformly distributed on the interval (0,1]. We present a general method for computing probabilities involving differences of the order statistics or linear combinations of the spacings between the order statistics. This method is based on repeated use of a basic recursion to break up the joint distribution of linear combinations of spacings into simpler components which are easily evaluated. Let Sw denote the (continuous conditional) scan statistic with window length w. Let Cw denote the number of m:w clumps among the N random points, where an m:w clump is defined as m points falling within an interval of length w. We apply our general method to compute the distribution of Sw (for small N) and the lower-order moments of Cw. The final answers produced by our approach are piecewise polynomials (in w) whose coefficients are computed exactly. These expressions can be stored and later used to rapidly compute numerical answers which are accurate to any required degree of precision.
Journal of Applied Probability | 1987
Fred W. Huffer; L. A. Shepp
Arcs of length lk, 0 < lk < 1, k = 1, 2, …, n, are thrown independently and uniformly on a circumference W having unit length. Let P(l1, l2, …, ln) be the probability that W is completely covered by the n random arcs. We show that P(l1, l2, …, ln) is a Schur-convex function and that it is convex in each argument when the others are held fixed. Keywords: coverage probabilities; Schur-convex; geometrical probability.
Computational Statistics & Data Analysis | 2013
Jingyong Su; Anuj Srivastava; Fred W. Huffer
The problems of detecting, classifying, and estimating shapes in point cloud data are important due to their general applicability in image analysis, computer vision, and graphics. They are challenging because the data is typically noisy, cluttered, and unordered. We study these problems using a fully statistical model where the data is modeled using a Poisson process on the object's boundary (curves or surfaces), corrupted by additive noise and a clutter process. Using likelihood functions dictated by the model, we develop a generalized likelihood ratio test for detecting a shape in a point cloud. This ratio test is based on optimizing over some unknown parameters, including the pose and scale associated with hypothesized objects, and an empirical evaluation of the log-likelihood ratio distribution. Additionally, we develop a procedure for estimating the most likely shapes in observed point clouds under given shape hypotheses. We demonstrate this framework using examples of 2D and 3D shape detection and estimation in both real and simulated data, and the use of this framework in shape retrieval from a 3D shape database.