Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ori Rosen is active.

Publication


Featured research published by Ori Rosen.


Sociological Methods & Research | 1999

Binomial-Beta Hierarchical Models for Ecological Inference

Gary King; Ori Rosen; Martin A. Tanner

The authors develop binomial-beta hierarchical models for ecological inference using insights from the literature on hierarchical models based on Markov chain Monte Carlo algorithms and King's ecological inference model. The new approach reveals some features of the data that King's approach does not, can be easily generalized to more complicated problems such as general R × C tables, allows the data analyst to adjust for covariates, and provides a formal evaluation of the significance of the covariates. It may also be better suited to cases in which the observed aggregate cells are estimated from very few observations or have some forms of measurement error. This article also provides an example of a hierarchical model in which the statistical idea of “borrowing strength” is used not merely to increase the efficiency of the estimates but to enable the data analyst to obtain estimates.
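
The generative structure described above can be sketched in a few lines of numpy. This is only a simulation of the 2 × 2 hierarchical model (group-level rates drawn from Beta priors, observed only as aggregate binomial counts), not the MCMC fitting procedure; the function name and parameter values are illustrative.

```python
import numpy as np

def simulate_beta_binomial_ei(X, N, cb, db, cw, dw, rng=None):
    """Simulate aggregate counts from the binomial-beta hierarchical
    ecological-inference model (2x2 case): per-precinct rates for the
    two groups are drawn from Beta priors, mixed by the known group
    fractions X, and observed only as one binomial total per precinct."""
    rng = np.random.default_rng(rng)
    beta_b = rng.beta(cb, db, size=len(X))   # group-b rate in each precinct
    beta_w = rng.beta(cw, dw, size=len(X))   # group-w rate in each precinct
    theta = beta_b * X + beta_w * (1.0 - X)  # aggregate success probability
    T = rng.binomial(N, theta)               # observed aggregate counts
    return T, theta

# Example: 100 precincts of 500 voters each, with hypothetical Beta priors
X = np.random.default_rng(0).uniform(0, 1, 100)
N = np.full(100, 500)
T, theta = simulate_beta_binomial_ei(X, N, cb=2, db=5, cw=5, dw=2, rng=1)
```

Inference then runs the other way: given only T, N, and X, the hierarchical model borrows strength across precincts to recover the precinct-level rates.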


Statistica Neerlandica | 2001

Bayesian and Frequentist Inference for Ecological Inference: The R×C Case

Ori Rosen; Wenxin Jiang; Gary King; Martin A. Tanner

In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R×C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R×C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches that trade off computational intensity for statistical efficiency.


The Journal of Economic History | 2008

Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler

Gary King; Ori Rosen; Martin A. Tanner; Alexander F. Wagner

The enormous Nazi voting literature rarely builds on modern statistical or economic research. By adding these approaches, we find that the most widely accepted existing theories of this era cannot distinguish the Weimar elections from almost any others in any country. Via a retrospective voting account, we show that voters most hurt by the depression, and most likely to oppose the government, fall into separate groups with divergent interests. This explains why some turned to the Nazis and others turned away. The consequences of Hitler’s election were extraordinary, but the voting behavior that led to it was not.


Journal of the American Statistical Association | 2009

Local Spectral Analysis via a Bayesian Mixture of Smoothing Splines

Ori Rosen; David S. Stoffer; Sally Wood

In many practical problems, time series are realizations of nonstationary random processes. These processes can often be modeled as processes with slowly changing dynamics or as piecewise stationary processes. In these cases, various approaches to estimating the time-varying spectral density have been proposed. Our approach in this article is to estimate the log of the Dahlhaus local spectrum using a Bayesian mixture of splines. The basic idea of our approach is to first partition the data into small sections. We then assume that the log spectral density of the evolutionary process in any given partition is a mixture of individual log spectra. We use a mixture of smoothing splines model with time-varying mixing weights to estimate the evolutionary log spectrum. The mixture model is fit using Markov chain Monte Carlo techniques that yield estimates of the log spectra of the individual subsections. In addition to an estimate of the local log spectral density, the method yields pointwise credible intervals. We use a reversible jump step to automatically determine the number of different spectral components.
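
The first step of the pipeline above, partitioning the series and computing a raw spectral estimate per section, can be sketched as follows. The helper name is hypothetical, and this is only the preprocessing stage: the article's actual estimator then smooths these log-periodograms with a Bayesian mixture of splines.

```python
import numpy as np

def segment_log_periodograms(x, n_segments):
    """Partition a series into contiguous sections and compute the raw
    log-periodogram of each one; the method in the article then smooths
    these with a mixture of smoothing splines."""
    segs = np.array_split(np.asarray(x, dtype=float), n_segments)
    out = []
    for seg in segs:
        m = len(seg)
        # periodogram at Fourier frequencies j/m, j = 0..floor(m/2)
        I = np.abs(np.fft.rfft(seg)) ** 2 / m
        out.append(np.log(I[1:]))  # drop the zero frequency
    return out
```

For a sinusoid at a Fourier frequency, each section's log-periodogram peaks at the corresponding frequency bin, which is the local-spectrum information the spline mixture smooths.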


The American Statistician | 2001

Fast and Stable Algorithms for Computing and Sampling From the Noncentral Hypergeometric Distribution

J. G. Liao; Ori Rosen

Although the noncentral hypergeometric distribution underlies conditional inference for 2 × 2 tables, major statistical packages lack support for this distribution. This article introduces fast and stable algorithms for computing the noncentral hypergeometric distribution and for sampling from it. The algorithms avoid the expensive and explosive combinatorial numbers by using a recursive relation. The algorithms also take advantage of the sharp concentration of the distribution around its mode to save computing time. A modified inverse method substantially reduces the number of searches in generating a random deviate. The algorithms are implemented in a Java class, Hypergeometric, available on the World Wide Web.
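
The recursive-relation idea can be illustrated directly: the ratio p(y)/p(y−1) of consecutive pmf values of Fisher's noncentral hypergeometric distribution is a simple rational expression, so the whole pmf can be built without ever forming a binomial coefficient. This sketch recurses from the lower end of the support; the published algorithms additionally recurse outward from the mode for speed and stability.

```python
import numpy as np

def fisher_nchg_pmf(m1, m2, n, omega):
    """Pmf of Fisher's noncentral hypergeometric distribution for a 2x2
    table with row totals m1, m2, column total n, and odds ratio omega.
    Uses the recursion p(y)/p(y-1) = omega*(m1-y+1)*(n-y+1)/(y*(m2-n+y)),
    so no explicit combinatorial numbers are computed."""
    lo, hi = max(0, n - m2), min(n, m1)
    ys = np.arange(lo, hi + 1)
    w = np.empty(len(ys))
    w[0] = 1.0
    for i in range(1, len(ys)):
        y = ys[i]
        w[i] = w[i - 1] * omega * (m1 - y + 1) * (n - y + 1) / (y * (m2 - n + y))
    w /= w.max()          # guard against overflow before normalizing
    return ys, w / w.sum()
```

A random deviate can then be drawn with `np.random.default_rng().choice(ys, p=pmf)`; the paper's modified inverse method speeds this up by starting the search at the mode.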


Statistics in Medicine | 1999

Mixtures of proportional hazards regression models

Ori Rosen; Martin A. Tanner

This paper presents a mixture model which combines features of the usual Cox proportional hazards model with those of a class of models, known as mixtures-of-experts. The resulting model is more flexible than the usual Cox model in the sense that the log hazard ratio is allowed to vary non-linearly as a function of the covariates. Thus it provides a flexible approach to both modelling survival data and model checking. The method is illustrated with simulated data, as well as with multiple myeloma data.


Journal of the American Statistical Association | 2012

AdaptSPEC: Adaptive Spectral Estimation for Nonstationary Time Series

Ori Rosen; Sally Wood; David S. Stoffer

We propose a method for analyzing possibly nonstationary time series by adaptively dividing the time series into an unknown but finite number of segments and estimating the corresponding local spectra by smoothing splines. The model is formulated in a Bayesian framework, and the estimation relies on reversible jump Markov chain Monte Carlo (RJMCMC) methods. For a given segmentation of the time series, the likelihood function is approximated via a product of local Whittle likelihoods. Thus, no parametric assumption is made about the process underlying the time series. The number and lengths of the segments are assumed unknown and may change from one MCMC iteration to another. The frequentist properties of the method are investigated by simulation, and applications to electroencephalogram data and the El Niño Southern Oscillation phenomenon are described in detail.
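
The local Whittle approximation behind the segment likelihoods can be written down in a few lines. This is a sketch with a hypothetical helper name: for one stationary segment, the Whittle log-likelihood (up to constants) compares the raw periodogram to a candidate spectral density, and the method multiplies such terms across the segments of a candidate split.

```python
import numpy as np

def whittle_loglik(x, log_spec):
    """Approximate (Whittle) log-likelihood of one stationary segment:
    -sum_j [ log f(nu_j) + I(nu_j) / f(nu_j) ] over the nonzero Fourier
    frequencies, where I is the raw periodogram and f = exp(log_spec)
    is the candidate spectral density on the same frequency grid."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    I = (np.abs(np.fft.rfft(x)) ** 2 / m)[1:]  # periodogram, drop freq 0
    f = np.exp(np.asarray(log_spec))
    return -np.sum(np.log(f) + I / f)
```

In the article, `log_spec` is the smoothing-spline estimate proposed for that segment, and the RJMCMC sampler also moves the segment boundaries and their number.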


Ecological Inference: New Methodological Strategies | 2004

Information in ecological inference: An introduction

Gary King; Ori Rosen; Martin A. Tanner

Researchers in a diverse variety of fields often need to know about individual-level behavior and are not able to collect it directly. In these situations, where survey research or other means of individual-level data collection are infeasible, ecological inference is the best and often the only hope of making progress. Ecological inference is the process of extracting clues about individual behavior from information reported at the group or aggregate level. For example, sociologists and historians try to learn who voted for the Nazi party in Weimar Germany, where thoughts of survey research are seven decades too late. Marketing researchers study the effects of advertising on the purchasing behavior of individuals, where only zip-code-level purchasing and demographic information are available. Political scientists and politicians study precinct-level electoral data and U.S. Census demographic data to learn about the success of candidate appeals with different voter groups in numerous small areal units where surveys have been infeasible (for cost or confidentiality reasons). To determine whether the U.S. Voting Rights Act can be applied in redistricting cases, expert witnesses, attorneys, judges, and government officials must infer whether African Americans and other minority groups vote differently from whites, even though the secret ballot hinders the process and surveys in racially polarized contexts are known to be of little value. In these and numerous other fields of inquiry, scholars have no choice but to make ecological inferences. Fortunately for them, we have witnessed an explosion of statistical research into this problem in the last five years – both in substantive applications and in methodological innovations. 
In applications, the methods introduced by Duncan and Davis (1953) and by Goodman (1953) accounted for almost every use of ecological inference in any field for fifty years, but this stasis changed when King (1997) offered a model that combined and extended the approaches taken in these earlier works. His method now seems to dominate substantive research in academia, in private industry, and in voting rights litigation, where it was used in most American states in the redistricting period that followed the 2000 Census. The number and diversity of substantive application areas of ecological inference has soared recently as well. The speed of development of statistical research on ecological inference has paralleled the progress in applications, too, and in the last five years we have seen numerous new models, innovative methods, and novel computation schemes. This book offers a snapshot of some of the research at the cutting edge of this field in the hope of spurring statistical researchers to push out the frontiers and applied researchers to choose from a wider range of approaches. Ecological inference is an especially difficult special case of statistical inference. The difficulty comes because some information is generally lost in the process of aggregation, and that information is sometimes systematically related to the quantities of interest. Thus, progress


Communications in Statistics-theory and Methods | 1996

Comparison of estimation methods in extreme value theory

Ori Rosen; Ishay Weissman

In this study we compare three estimators of the extreme value index: Pickands estimator, the moment estimator and a maximum likelihood estimator. The estimators are explored both theoretically and by Monte Carlo simulation. We obtain two estimators for large quantiles using Pickands and the maximum likelihood estimators. The latter and one based on the moment estimator are then compared through simulation.
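
Of the three estimators compared, the Pickands estimator is the simplest to state: it is built from the k-th, 2k-th, and 4k-th largest order statistics. A minimal sketch (the function name is ours; it requires 4k ≤ n):

```python
import numpy as np

def pickands_estimator(x, k):
    """Pickands (1975) estimator of the extreme value index:
    (1/log 2) * log( (X_(k) - X_(2k)) / (X_(2k) - X_(4k)) ),
    where X_(j) denotes the j-th largest observation."""
    s = np.sort(np.asarray(x, dtype=float))[::-1]  # descending order stats
    num = s[k - 1] - s[2 * k - 1]
    den = s[2 * k - 1] - s[4 * k - 1]
    return np.log(num / den) / np.log(2.0)
```

As a sanity check, for data whose descending order statistics decay like n/i (heavy Pareto-type tails), the estimator returns an index of exactly 1.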


Journal of Computational and Graphical Statistics | 2011

Bayesian Mixtures of Autoregressive Models

Sally Wood; Ori Rosen; Robert Kohn

In this article we propose a class of time-domain models for analyzing possibly nonstationary time series. This class of models is formed as a mixture of time series models, whose mixing weights are a function of time. We consider specifically mixtures of autoregressive models with a common but unknown lag. To make the methodology work we show that it is necessary to first partition the data into small non-overlapping segments, so that all observations within one segment are always allocated to the same component. The model parameters, including the number of mixture components, are then estimated via Markov chain Monte Carlo methods. The methodology is illustrated with simulated and real data. Supplemental materials are available online.
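
The generative side of this model can be sketched as follows, assuming two AR(1) components and a hypothetical linear schedule for the time-varying weight; in the article the weights, the allocations, and the number of components are all estimated by MCMC rather than fixed.

```python
import numpy as np

def simulate_ar_mixture(n, phis, sigmas, seg_len, rng=None):
    """Simulate from a two-component mixture of AR(1) models with
    time-varying mixing weights. As in the article, the series is cut
    into non-overlapping segments of length seg_len and every
    observation in a segment is allocated to the same component; here
    the weight of component 1 drifts linearly across the series."""
    rng = np.random.default_rng(rng)
    n_seg = n // seg_len
    x = np.zeros(n)
    labels = np.empty(n_seg, dtype=int)
    for s in range(n_seg):
        w1 = (s + 0.5) / n_seg              # time-varying weight of comp. 1
        z = int(rng.random() < w1)          # component for this segment
        labels[s] = z
        for t in range(s * seg_len, (s + 1) * seg_len):
            prev = x[t - 1] if t > 0 else 0.0
            x[t] = phis[z] * prev + rng.normal(0.0, sigmas[z])
    return x, labels
```

Fixing the common AR lag (here 1) and sharing allocations within segments is exactly what makes the posterior computation in the article tractable.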

Collaboration


Dive into Ori Rosen's collaborations.

Top Co-Authors

Ayala Cohen

Technion – Israel Institute of Technology

Sally Wood

University of Melbourne

Wenxin Jiang

Northwestern University
