Network


Country-level view of Robert E. Kass's external collaborations.

Hotspot


Research topics in which Robert E. Kass is active.

Publication


Featured research published by Robert E. Kass.


Journal of the American Statistical Association | 1996

The Selection of Prior Distributions by Formal Rules

Robert E. Kass; Larry Wasserman

Abstract Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet in practice, most Bayesian analyses are performed with so-called “noninformative” priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his viewpoint about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly: When sample sizes are small (relative to the number of parameters being estimated), it is dangerous to put faith in any “default” solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated bibliography.
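The Jeffreys rules mentioned above are the best-known formal rules; as a reminder for readers (a standard formula, not quoted from the paper), the general Jeffreys prior is proportional to the square root of the determinant of the Fisher information:

```latex
% Jeffreys's general rule: prior proportional to the square root of the
% determinant of the Fisher information matrix.
\pi_J(\theta) \propto \sqrt{\det I(\theta)}, \qquad
I(\theta)_{jk} = -\,\mathbb{E}_\theta\!\left[
    \frac{\partial^2 \log p(y \mid \theta)}{\partial\theta_j\,\partial\theta_k}
\right].
```

For a single Bernoulli(θ) observation this gives π_J(θ) ∝ θ^{-1/2}(1 − θ)^{-1/2}, the Beta(1/2, 1/2) prior.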


Journal of the American Statistical Association | 1995

A Reference Bayesian Test for Nested Hypotheses and its Relationship to the Schwarz Criterion

Robert E. Kass; Larry Wasserman

Abstract To compute a Bayes factor for testing H_0: ψ = ψ_0 in the presence of a nuisance parameter β, priors under the null and alternative hypotheses must be chosen. As in Bayesian estimation, an important problem has been to define automatic, or “reference,” methods for determining priors based only on the structure of the model. In this article we apply the heuristic device of taking the amount of information in the prior on ψ equal to the amount of information in a single observation. Then, after transforming β to be “null orthogonal” to ψ, we take the marginal priors on β to be equal under the null and alternative hypotheses. Doing so, and taking the prior on ψ to be Normal, we find that the log of the Bayes factor may be approximated by the Schwarz criterion with an error of order O_p(n^{-1/2}), rather than the usual error of order O_p(1). This result suggests the Schwarz criterion should provide sensible approximate solutions to Bayesian testing problems, at least when the hypotheses are nested. When...
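For reference (standard definitions, not quoted from the paper), with d = dim(ψ) and n observations, the Schwarz criterion for this nested comparison is

```latex
% Schwarz criterion for testing H_0: psi = psi_0 against the alternative,
% with hats denoting maximum likelihood estimates under each hypothesis:
S = \log p\bigl(y \mid \hat\psi, \hat\beta\bigr)
  - \log p\bigl(y \mid \psi_0, \hat\beta_0\bigr)
  - \tfrac{d}{2}\log n .
```

The abstract's claim is that, under the unit-information prior construction it describes, log B = S + O_p(n^{-1/2}), whereas for general priors one only has log B = S + O_p(1).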


Nature Neuroscience | 2004

Multiple neural spike train data analysis: state-of-the-art and future challenges.

Emery N. Brown; Robert E. Kass; Partha P. Mitra

Multiple electrodes are now a standard tool in neuroscience research, making it possible to study the simultaneous activity of several neurons in a given brain region or across different regions. The data from multi-electrode studies present important analysis challenges that must be resolved for optimal use of these neurophysiological measurements to answer questions about how the brain works. Here we review statistical methods for the analysis of multiple neural spike-train data and discuss future challenges for methodology research.


The American Statistician | 1998

Markov chain Monte Carlo in practice: A roundtable discussion

Robert E. Kass; Bradley P. Carlin; Andrew Gelman; Radford M. Neal

Abstract Markov chain Monte Carlo (MCMC) methods make possible the use of flexible Bayesian models that would otherwise be computationally infeasible. In recent years, a great variety of such applications have been described in the literature. Applied statisticians who are new to these methods may have several questions and concerns, however: How much effort and expertise are needed to design and use a Markov chain sampler? How much confidence can one have in the answers that MCMC produces? How does the use of MCMC affect the rest of the model-building process? At the Joint Statistical Meetings in August 1996, a panel of experienced MCMC users discussed these and other issues, as well as various “tricks of the trade.” This article is an edited recreation of that discussion. Its purpose is to offer advice and guidance to novice users of MCMC—and to not-so-novice users as well. Topics include building confidence in simulation results, methods for speeding and assessing convergence, estimating standard error...
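As a concrete illustration of the machinery under discussion (a minimal sketch, not code from the article), the following runs a few independent random-walk Metropolis chains on a placeholder target and compares their summaries, a rough stand-in for the convergence checks the panel describes:

```python
import numpy as np

def log_target(theta):
    """Placeholder unnormalized log-posterior: a standard normal density."""
    return -0.5 * theta ** 2

def metropolis(n_iter, step, seed):
    """One random-walk Metropolis chain for a scalar parameter."""
    rng = np.random.default_rng(seed)
    theta = rng.normal()                     # arbitrary starting value
    draws = np.empty(n_iter)
    for t in range(n_iter):
        proposal = theta + step * rng.normal()
        # Accept with probability min(1, target(proposal) / target(current)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
            theta = proposal
        draws[t] = theta
    return draws

# Several chains from different seeds; agreement of their means and variances
# after burn-in is a crude version of the convergence checks discussed above.
chains = [metropolis(5000, step=1.0, seed=s)[1000:] for s in range(4)]
print("chain means:", [round(c.mean(), 3) for c in chains])
print("chain variances:", [round(c.var(), 3) for c in chains])
```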


Neural Computation | 2002

The time-rescaling theorem and its application to neural spike train data analysis

Emery N. Brown; Riccardo Barbieri; Valérie Ventura; Robert E. Kass; Loren M. Frank

Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model's validity prior to using it to make inferences about a particular neural system. Assessing goodness of fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as peristimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the supplementary eye field of a macaque monkey and a comparison of temporal and spatial smoothers, inhomogeneous Poisson, inhomogeneous gamma, and inhomogeneous inverse Gaussian models of rat hippocampal place cell spiking activity. To help make the logic behind the time-rescaling theorem more accessible to researchers in neuroscience, we present a proof using only elementary probability theory arguments. We also show how the theorem may be used to simulate a general point process model of a spike train. Our paradigm makes it possible to compare parametric and histogram-based neural spike train models directly. These results suggest that the time-rescaling theorem can be a valuable tool for neural spike train data analysis.
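The goodness-of-fit procedure the abstract describes can be sketched in a few lines (an illustrative sketch, not the authors' code): integrate the fitted conditional intensity between successive spikes, transform the rescaled intervals to the unit interval, and compare them to a uniform distribution with a Kolmogorov-Smirnov test. The constant-rate intensity below is a placeholder for a fitted model.

```python
import numpy as np
from scipy import stats

def rescaled_intervals(spike_times, intensity, t0=0.0, dt=1e-3):
    """Integrate a fitted conditional intensity between successive spikes.

    intensity: callable returning the model's conditional intensity at the
    supplied times. Under a correct model the returned intervals are
    independent Exponential(1) variables (the time-rescaling theorem).
    """
    taus, prev = [], t0
    for s in spike_times:
        grid = np.arange(prev, s, dt)
        taus.append(float(np.sum(intensity(grid)) * dt))   # crude Riemann sum
        prev = s
    return np.array(taus)

# Placeholder data and model: spikes drawn from, and tested against,
# a homogeneous Poisson process with rate 20 Hz.
rng = np.random.default_rng(0)
spikes = np.cumsum(rng.exponential(scale=1 / 20.0, size=200))
taus = rescaled_intervals(spikes, intensity=lambda t: np.full_like(t, 20.0))

# Map Exponential(1) intervals to Uniform(0, 1) and apply a KS test,
# as in the goodness-of-fit procedure described above.
u = 1.0 - np.exp(-taus)
print(stats.kstest(u, "uniform"))
```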


Journal of the American Statistical Association | 1989

Approximate Bayesian inference in conditionally independent hierarchical models (Parametric empirical Bayes models)

Robert E. Kass; Duane Steffey

Abstract We consider two-stage models of the kind used in parametric empirical Bayes (PEB) methodology, calling them conditionally independent hierarchical models. We suppose that there are k “units,” which may be experimental subjects, cities, study centers, etcetera. At the first stage, the observation vectors Y_i for units i = 1, …, k are independently distributed with densities p(y_i | θ_i), or more generally, p(y_i | θ_i, λ). At the second stage, the unit-specific parameter vectors θ_i are iid with densities p(θ_i | λ). The PEB approach proceeds by regarding the second-stage distribution as a prior and noting that, if λ were known, inference about θ could be based on its posterior. Since λ is not known, the simplest PEB methods estimate the parameter λ by maximum likelihood or some variant, and then treat λ as if it were known to be equal to this estimate. Although this procedure is sometimes satisfactory, a well-known defect is that it neglects the uncertainty due to the estimation of λ. In this article w...
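A toy numerical version of the two-stage setup and the plug-in PEB step (the normal-normal model and all numbers below are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Toy conditionally independent hierarchical model:
#   y_i | theta_i        ~ Normal(theta_i, sigma^2)   (sigma known)
#   theta_i | mu, tau^2  ~ Normal(mu, tau^2)          (lambda = (mu, tau))
rng = np.random.default_rng(1)
k, sigma = 12, 1.0
theta_true = rng.normal(2.0, 0.8, size=k)
y = rng.normal(theta_true, sigma)

def neg_marginal_loglik(params):
    """Marginal likelihood with theta integrated out: y_i ~ N(mu, sigma^2 + tau^2)."""
    mu, log_tau = params
    var = sigma ** 2 + np.exp(log_tau) ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

# Stage 1 of PEB: estimate lambda = (mu, tau) by maximum marginal likelihood.
opt = minimize(neg_marginal_loglik, x0=np.array([0.0, 0.0]))
mu_hat, tau_hat = opt.x[0], np.exp(opt.x[1])

# Stage 2: treat lambda as known; posterior means shrink y toward mu_hat.
w = tau_hat ** 2 / (tau_hat ** 2 + sigma ** 2)
theta_post_mean = w * y + (1 - w) * mu_hat
print("shrinkage weight:", round(w, 3))
# Note: these plug-in posteriors ignore the uncertainty in (mu_hat, tau_hat),
# which is exactly the defect the article addresses.
```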


Journal of the American Statistical Association | 1997

Computing Bayes Factors by Combining Simulation and Asymptotic Approximations

Thomas J. DiCiccio; Robert E. Kass; Adrian E. Raftery; Larry Wasserman

Abstract The Bayes factor is a ratio of two posterior normalizing constants, which may be difficult to compute. We compare several methods of estimating Bayes factors when it is possible to simulate observations from the posterior distributions, via Markov chain Monte Carlo or other techniques. The methods that we study are all easily applied without consideration of special features of the problem, provided that each posterior distribution is well behaved in the sense of having a single dominant mode. We consider a simulated version of Laplace's method, a simulated version of Bartlett correction, importance sampling, and a reciprocal importance sampling technique. We also introduce local volume corrections for each of these. In addition, we apply the bridge sampling method of Meng and Wong. We find that a simulated version of Laplace's method, with local volume correction, furnishes an accurate approximation that is especially useful when likelihood function evaluations are costly. A simple bridge sampli...
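One member of the family of estimators the abstract describes, written out from the general idea rather than from the paper's formulas (and omitting the local volume correction mentioned there): a Laplace-style estimate of a marginal likelihood that replaces the analytic posterior mode and Hessian with the mean and covariance of simulated posterior draws.

```python
import numpy as np

def log_laplace_evidence(draws, log_joint):
    """Laplace-style log marginal likelihood estimate from posterior draws.

    draws:     (m, d) array of posterior samples (e.g., MCMC output)
    log_joint: callable returning log p(y | theta) + log pi(theta)

    The sample mean and covariance of the draws stand in for the analytic
    posterior mode and inverse Hessian of the usual Laplace approximation.
    """
    draws = np.atleast_2d(draws)
    if draws.shape[0] == 1:                   # treat a 1-D input as (m, 1)
        draws = draws.T
    m, d = draws.shape
    theta_bar = draws.mean(axis=0)
    cov = np.atleast_2d(np.cov(draws, rowvar=False))
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * d * np.log(2 * np.pi) + 0.5 * logdet + float(log_joint(theta_bar))

# The log Bayes factor of model 1 over model 0 is then estimated by the
# difference of the two evidence estimates:
#   log_bf = log_laplace_evidence(draws1, log_joint1) - log_laplace_evidence(draws0, log_joint0)
```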


Journal of the American Statistical Association | 1989

Fully Exponential Laplace Approximations to Expectations and Variances of Nonpositive Functions

Luke Tierney; Robert E. Kass; Joseph B. Kadane

Abstract Tierney and Kadane (1986) presented a simple second-order approximation for posterior expectations of positive functions. They used Laplace's method for asymptotic evaluation of integrals, in which the integrand is written as f(θ) exp(−nh(θ)) and the function h is approximated by a quadratic. The form in which they applied Laplace's method, however, was fully exponential: The integrand was written instead as exp[−nh(θ) + log f(θ)]; this allowed first-order approximations to be used in the numerator and denominator of a ratio of integrals to produce a second-order expansion for the ratio. Other second-order expansions (Hartigan 1965; Johnson 1970; Lindley 1961, 1980; Mosteller and Wallace 1964) require computation of more derivatives of the log-likelihood function. In this article we extend the fully exponential method to apply to expectations and variances of nonpositive functions. To obtain a second-order approximation to an expectation E(g(θ)), we use the fully exponential method to approximate...
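For context, the fully exponential approximation being extended here has the following standard form (notation reconstructed from textbook treatments, not quoted from the article):

```latex
% Fully exponential Laplace approximation for a positive function g,
% where -n h(theta)  = log-likelihood + log-prior and
%       -n h*(theta) = -n h(theta) + log g(theta).
E\{g(\theta)\mid y\}
  = \frac{\int e^{-n h^{*}(\theta)}\,d\theta}{\int e^{-n h(\theta)}\,d\theta}
  \approx \frac{|\Sigma^{*}|^{1/2}\, e^{-n h^{*}(\hat\theta^{*})}}
               {|\Sigma|^{1/2}\,  e^{-n h(\hat\theta)}},
```

where θ̂ and θ̂* maximize −h and −h*, Σ and Σ* are the inverses of the Hessians of h and h* at those maxima, and the relative error is O(n^{−2}). For nonpositive g, the article's device is, roughly, to apply the same approximation to the (positive) moment generating function E{exp(sg(θ)) | y} and differentiate its logarithm at s = 0.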


Proceedings of the National Academy of Sciences of the United States of America | 2008

Functional network reorganization during learning in a brain-computer interface paradigm

Beata Jarosiewicz; Steven M. Chase; George W. Fraser; Meel Velliste; Robert E. Kass; Andrew B. Schwartz

Efforts to study the neural correlates of learning are hampered by the size of the network in which learning occurs. To understand the importance of learning-related changes in a network of neurons, it is necessary to understand how the network acts as a whole to generate behavior. Here we introduce a paradigm in which the output of a cortical network can be perturbed directly and the neural basis of the compensatory changes studied in detail. Using a brain-computer interface, dozens of simultaneously recorded neurons in the motor cortex of awake, behaving monkeys are used to control the movement of a cursor in a three-dimensional virtual-reality environment. This device creates a precise, well-defined mapping between the firing of the recorded neurons and an expressed behavior (cursor movement). In a series of experiments, we force the animal to relearn the association between neural firing and cursor movement in a subset of neurons and assess how the network changes to compensate. We find that changes in neural activity reflect not only an alteration of behavioral strategy but also the relative contributions of individual neurons to the population error signal.


Journal of the American Statistical Association | 1999

Geometrical foundations of asymptotic inference

Bruce G. Lindsay; Robert E. Kass; Paul Vos

Contents: Overview and Preliminaries. One-Parameter Curved Exponential Families: First-Order Asymptotics; Second-Order Asymptotics. Multiparameter Curved Exponential Families: Extensions of Results from the One-Parameter Case; Exponential Family Regression and Diagnostics; Curvature in Exponential Family Regression. Differential-Geometric Methods: Information-Metric Riemannian Geometry; Statistical Manifolds; Divergence Functions. Recent Developments. Appendices. References. Indexes.

Collaboration


Robert E. Kass's top co-authors and their affiliations.

Top Co-Authors

Emery N. Brown
Massachusetts Institute of Technology

Paul Vos
East Carolina University

Valérie Ventura
Carnegie Mellon University

Michael J. Tarr
Carnegie Mellon University

Steven M. Chase
Carnegie Mellon University

Ryan C. Kelly
Carnegie Mellon University

Bin Yu
University of California