
Publications


Featured research published by Luke Tierney.


Genome Biology | 2004

Bioconductor: open software development for computational biology and bioinformatics

Robert Gentleman; Vincent J. Carey; Douglas M. Bates; Ben Bolstad; Marcel Dettling; Sandrine Dudoit; Byron Ellis; Laurent Gautier; Yongchao Ge; Jeff Gentry; Kurt Hornik; Torsten Hothorn; Wolfgang Huber; Stefano M. Iacus; Rafael A. Irizarry; Friedrich Leisch; Cheng Li; Martin Maechler; Anthony Rossini; Gunther Sawitzki; Colin A. Smith; Gordon K. Smyth; Luke Tierney; Jean Yee Hwa Yang; Jianhua Zhang

The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
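
As a getting-started illustration only (the 2004 article predates the current tooling), the sketch below shows how a Bioconductor package is typically installed and loaded today via the BiocManager package; the package choices (Biobase, limma) are arbitrary examples.

```r
## A minimal getting-started sketch, not taken from the paper: BiocManager is
## the current installation entry point for Bioconductor; package names here
## are illustrative choices only.
if (!requireNamespace("BiocManager", quietly = TRUE))
  install.packages("BiocManager")             # CRAN bootstrap package
BiocManager::install(c("Biobase", "limma"))   # install core/analysis packages

library(Biobase)                              # core Bioconductor data structures
data(sample.ExpressionSet)                    # small example ExpressionSet
exprs(sample.ExpressionSet)[1:3, 1:3]         # slice of the expression matrix
```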


Journal of the American Statistical Association | 1986

Accurate Approximations for Posterior Moments and Marginal Densities

Luke Tierney; Joseph B. Kadane

This article describes approximations to the posterior means and variances of positive functions of a real or vector-valued parameter, and to the marginal posterior densities of arbitrary (i.e., not necessarily positive) parameters. These approximations can also be used to compute approximate predictive densities. To apply the proposed method, one only needs to be able to maximize slightly modified likelihood functions and to evaluate the observed information at the maxima. Nevertheless, the resulting approximations are generally as accurate as, and in some cases more accurate than, approximations based on third-order expansions of the likelihood and requiring the evaluation of third derivatives. The approximate marginal posterior densities behave very much like saddle-point approximations for sampling distributions. The principal regularity condition required is that the likelihood times prior be unimodal.
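
For readers who want the shape of the result, the sketch below restates the fully exponential Laplace approximation for a positive function g of a scalar parameter. The notation is my own paraphrase, not a quotation of the article.

```latex
% Fully exponential Laplace approximation for a positive g(theta), scalar case;
% notation chosen for this sketch rather than copied from the article.
\[
E\{g(\theta)\mid y\}
  = \frac{\int \exp\{-n h^{*}(\theta)\}\,d\theta}
         {\int \exp\{-n h(\theta)\}\,d\theta},
\qquad
-n h(\theta) = \log \pi(\theta) + \log L(\theta\mid y),
\quad
-n h^{*}(\theta) = \log g(\theta) - n h(\theta).
\]
Applying Laplace's method separately to numerator and denominator, with
$\hat\theta$ and $\hat\theta^{*}$ the maximizers of $-h$ and $-h^{*}$ and
$\hat\sigma^{2} = [\,n\,h''(\hat\theta)\,]^{-1}$,
$\sigma^{*2} = [\,n\,h^{*\prime\prime}(\hat\theta^{*})\,]^{-1}$, gives
\[
\widehat{E}\{g(\theta)\mid y\}
  = \frac{\sigma^{*}}{\hat\sigma}\,
    \exp\!\bigl[-n\bigl\{h^{*}(\hat\theta^{*}) - h(\hat\theta)\bigr\}\bigr],
\]
whose relative error is $O(n^{-2})$ because the leading error terms of the two
Laplace approximations cancel in the ratio.
```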


Journal of the American Statistical Association | 1989

Fully Exponential Laplace Approximations to Expectations and Variances of Nonpositive Functions

Luke Tierney; Robert E. Kass; Joseph B. Kadane

Tierney and Kadane (1986) presented a simple second-order approximation for posterior expectations of positive functions. They used Laplace's method for asymptotic evaluation of integrals, in which the integrand is written as f(θ)exp(−nh(θ)) and the function h is approximated by a quadratic. The form in which they applied Laplace's method, however, was fully exponential: the integrand was written instead as exp[−nh(θ) + log f(θ)]; this allowed first-order approximations to be used in the numerator and denominator of a ratio of integrals to produce a second-order expansion for the ratio. Other second-order expansions (Hartigan 1965; Johnson 1970; Lindley 1961, 1980; Mosteller and Wallace 1964) require computation of more derivatives of the log-likelihood function. In this article we extend the fully exponential method to apply to expectations and variances of nonpositive functions. To obtain a second-order approximation to an expectation E(g(θ)), we use the fully exponential method to approximate...
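
One route to the extension, sketched below in my own notation, goes through the posterior moment generating function of g, which is always positive and therefore amenable to the positive-function approximation above.

```latex
% Extension to a general (possibly nonpositive) g via its posterior moment
% generating function, which is strictly positive; notation is my own.
\[
M(s) = E\bigl[\exp\{s\,g(\theta)\}\mid y\bigr],
\qquad
E\{g(\theta)\mid y\} = \frac{d}{ds}\,\log M(s)\Big|_{s=0},
\qquad
\operatorname{Var}\{g(\theta)\mid y\} = \frac{d^{2}}{ds^{2}}\,\log M(s)\Big|_{s=0}.
\]
Because $\exp\{s\,g(\theta)\}$ is positive, $M(s)$ can be approximated by the
fully exponential Laplace formula and then differentiated in $s$, analytically
or numerically at small $|s|$, without third derivatives of the log-likelihood.
```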


Statistics in Medicine | 1999

Some adaptive Monte Carlo methods for Bayesian inference

Luke Tierney; Antonietta Mira

Monte Carlo methods, in particular Markov chain Monte Carlo methods, have become increasingly important as a tool for practical Bayesian inference in recent years. A wide range of algorithms is available, and choosing an algorithm that will work well on a specific problem is challenging. It is therefore important to explore the possibility of developing adaptive strategies that choose and adjust the algorithm to a particular context based on information obtained during sampling as well as information provided with the problem. This paper outlines some of the issues in developing adaptive methods and presents some preliminary results.
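
As a generic illustration of adaptation during sampling, and not the specific strategies studied in the paper, the sketch below shows a random-walk Metropolis sampler in R whose proposal scale is tuned during burn-in toward a target acceptance rate; the target density and tuning constants are arbitrary choices.

```r
## A generic adaptive random-walk Metropolis sketch (not the paper's algorithm):
## the proposal scale is adjusted during burn-in to push the running acceptance
## rate toward a target; adaptation is frozen afterwards.
log_target <- function(x) dnorm(x, mean = 0, sd = 2, log = TRUE)  # toy target

adaptive_rwm <- function(n_iter = 5000, n_adapt = 1000, target_acc = 0.44) {
  x <- 0; scale <- 1; acc <- 0; out <- numeric(n_iter)
  for (i in seq_len(n_iter)) {
    prop <- x + scale * rnorm(1)                        # random-walk proposal
    if (log(runif(1)) < log_target(prop) - log_target(x)) { x <- prop; acc <- acc + 1 }
    if (i <= n_adapt)                                   # adapt only in burn-in
      scale <- scale * exp((acc / i - target_acc) / sqrt(i))
    out[i] <- x
  }
  list(draws = out[-(1:n_adapt)], scale = scale, acc_rate = acc / n_iter)
}

set.seed(1)
fit <- adaptive_rwm()
c(mean = mean(fit$draws), sd = sd(fit$draws), scale = fit$scale)
```

Freezing the adaptation after burn-in keeps the retained portion of the chain a fixed-kernel Markov chain, which sidesteps the convergence questions raised by continual adaptation.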


Journal of the American Statistical Association | 1995

Regeneration in Markov Chain Samplers

Per A. Mykland; Luke Tierney; Bin Yu

Markov chain sampling has recently received considerable attention, in particular in the context of Bayesian computation and maximum likelihood estimation. This article discusses the use of Markov chain splitting, originally developed for the theoretical analysis of general state-space Markov chains, to introduce regeneration into Markov chain samplers. This allows the use of regenerative methods for analyzing the output of these samplers and can provide a useful diagnostic of sampler performance. The approach is applied to several samplers, including certain Metropolis samplers that can be used on their own or in hybrid samplers, and is illustrated in several examples.
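
The sketch below illustrates only the downstream regenerative output analysis, assuming regeneration times have already been identified (for example by the splitting construction discussed here); the sampler output and regeneration marks are simulated placeholders.

```r
## Regenerative output analysis, assuming regeneration times are already known.
## `draws` is sampler output and `regen` marks the first draw of each tour; both
## are simulated placeholders for illustration only.
regen_se <- function(draws, regen) {
  tour <- cumsum(regen)                          # tour index for each draw
  Y <- tapply(draws, tour, sum)                  # per-tour sums
  N <- tapply(draws, tour, length)               # per-tour lengths
  R <- length(Y)
  mu <- sum(Y) / sum(N)                          # ratio estimator of the mean
  Z <- Y - mu * N                                # tour-wise residuals
  se <- sqrt(sum(Z^2) / R) / (mean(N) * sqrt(R)) # regenerative standard error
  c(estimate = mu, std_error = se, tours = R)
}

set.seed(2)
draws <- rnorm(10000)                            # placeholder "sampler output"
regen <- rbinom(10000, 1, 0.02); regen[1] <- 1   # placeholder regeneration marks
regen_se(draws, regen)
```

Because tours between regeneration times are independent and identically distributed, the standard error comes from between-tour variability rather than from explicit autocorrelation corrections.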


Operations Research | 1983

Optimal Tests for Initialization Bias in Simulation Output

Lee W. Schruben; H. Singh; Luke Tierney

We present a family of tests for detecting initialization bias in the mean of a simulation output series using a hypothesis testing framework. The null hypothesis is that the output mean does not change throughout the simulation run. The alternative hypothesis specifies a general transient mean function. The tests are asymptotically optimal based on cumulative sums of deviations about the sample mean. A particular test in this family is applied to a variety of simulation models. The test requires very modest computation and appears to be both robust and powerful.
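
As an illustration of the basic ingredient of this family of tests, and not the optimal statistic derived in the paper, the sketch below computes the standardized cumulative sums of deviations about the overall sample mean, which behave approximately like a Brownian bridge under the no-bias null.

```r
## Standardized cumulative sums of deviations about the sample mean: a generic
## building block for initialization-bias diagnostics, not the paper's optimal
## test statistic. The variance estimate is deliberately crude.
init_bias_bridge <- function(y) {
  n <- length(y)
  S <- cumsum(y - mean(y))          # cumulative deviations; S[n] is ~0
  v <- n * var(y)                   # crude scale; an autocorrelation-consistent
                                    # variance estimate would be preferable
  B <- S / sqrt(v)                  # standardized bridge
  c(max_abs = max(abs(B)), argmax_frac = which.max(abs(B)) / n)
}

set.seed(3)
y <- c(seq(3, 0, length.out = 200), rep(0, 1800)) + rnorm(2000)  # transient mean
init_bias_bridge(y)
```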


Journal of Computational and Graphical Statistics | 2007

Simple Parallel Statistical Computing in R

Anthony Rossini; Luke Tierney; Na Li

Theoretically, many modern statistical procedures are trivial to parallelize. However, practical deployment of a parallelized implementation which is robust and reliably runs on different computational cluster configurations and environments is far from trivial. We present a framework for the R statistical computing language that provides a simple yet powerful programming interface to a computational cluster of CPUs. This interface allows the rapid development of R functions that distribute independent computations across the nodes of the computational cluster. The approach can be extended to finer grain parallelization if needed. The resulting framework allows statisticians to obtain significant speed-ups for some computations at little additional development cost. The particular implementation can be deployed in ad-hoc heterogeneous computing environments.
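
A minimal sketch of the style of interface described here, using the snow package (assumed installed) with a socket cluster; the cluster size and the toy simulation are arbitrary.

```r
## A minimal sketch assuming the snow package is installed: start a local
## socket cluster and distribute independent simulations across its workers.
library(snow)

cl <- makeCluster(4, type = "SOCK")              # four local worker processes
clusterEvalQ(cl, set.seed(Sys.getpid()))         # crude per-worker seeding only

res <- clusterApply(cl, 1:4, function(chunk) {   # one independent task per worker
  mean(replicate(1000, max(rnorm(50))))          # arbitrary simulation workload
})
unlist(res)

stopCluster(cl)                                  # shut the workers down
```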


Scandinavian Journal of Statistics | 2002

Efficiency and Convergence Properties of Slice Samplers

Antonietta Mira; Luke Tierney

The slice sampler (SS) is a method of constructing a reversible Markov chain with a specified invariant distribution. Given an independence Metropolis-Hastings algorithm (IMHA) it is always possible to construct a SS that dominates it in the Peskun sense. This means that the resulting SS produces estimates with a smaller asymptotic variance than the IMHA. Furthermore the SS has a smaller second-largest eigenvalue. This ensures faster convergence to the target distribution. A sufficient condition for uniform ergodicity of the SS is given and an upper bound for the rate of convergence to stationarity is provided.
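
To make the construction concrete, the sketch below implements a slice sampler in R for a target where the horizontal slice is available in closed form, the standard exponential density; it illustrates the sampler itself, not the efficiency comparisons of the paper.

```r
## Simple slice sampler for f(x) = exp(-x) on x > 0: given x, draw a vertical
## level u ~ U(0, f(x)); the slice {x : f(x) > u} is the interval (0, -log u),
## and the next state is uniform on that slice.
slice_exp <- function(n_iter = 10000, x0 = 1) {
  x <- numeric(n_iter); x[1] <- x0
  for (i in 2:n_iter) {
    u <- runif(1, 0, exp(-x[i - 1]))   # vertical level under the density
    x[i] <- runif(1, 0, -log(u))       # uniform draw from the horizontal slice
  }
  x
}

set.seed(4)
draws <- slice_exp()
c(mean = mean(draws), sd = sd(draws))  # both should be near 1 for Exp(1)
```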


International Journal of Parallel Programming | 2009

Snow: a parallel computing framework for the R system

Luke Tierney; A. J. Rossini; Na Li

This paper presents a simple parallel computing framework for the statistical programming language R. The system focuses on parallelization of familiar higher level mapping functions and emphasizes simplicity of use in order to encourage adoption by a wide range of R users. The paper describes the design and implementation of the system, outlines examples of its use, and presents some possible directions for future developments.
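
A small sketch of the higher-level mapping functions the framework provides, assuming the snow package is installed; the simulated work inside the map is arbitrary.

```r
## Parallel analogue of sapply over a parameter grid, assuming snow is installed;
## the per-element work is an arbitrary simulation for illustration.
library(snow)

cl <- makeCluster(2, type = "SOCK")
sizes <- c(100, 200, 400, 800)
out <- parSapply(cl, sizes, function(n) median(replicate(500, mean(rexp(n)))))
names(out) <- sizes
out
stopCluster(cl)
```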


Advances in Applied Probability | 1982

Asymptotic bounds on the time to fatigue failure of bundles of fibers under local load sharing

Luke Tierney

A fiber bundle is a parallel arrangement of fibers. Under a steady tensile load, fibers fail randomly in time in a manner that depends on how they share the applied load. The bundle fails when all its fibers have failed in a specified region. In this paper we consider the fatigue failure of such a bundle in a fiber load-sharing setting appropriate for composite materials, that is, to bundles impregnated with a flexible matrix. The bundle is actually modelled as a chain of short bundles, and local load sharing is assumed for the fibers within each short bundle. The chain of bundles fails once all the fibers in one of the short bundles have failed. Reasonable assumptions are made on the stochastic failure of individual fibers. A general framework for describing fiber bundles is developed and is used to derive the limiting distribution of the time to the first appearance of a set of k or more adjacent failed fibers as the number of fibers in the bundle grows large. These results provide useful bounds on the distribution of the time to total bundle failure. Some implications and extensions of these results are discussed.

Keywords: fiber bundle; time to failure; local load sharing; reliability of composite materials

Collaboration


Dive into Luke Tierney's collaborations.

Top Co-Authors

Joseph B. Kadane (Carnegie Mellon University)
Robert E. Kass (Carnegie Mellon University)
Douglas M. Bates (University of Wisconsin-Madison)
Na Li (University of Minnesota)