Publication


Featured research published by Dean P. Foster.


Theoretical Population Biology | 1990

Stochastic evolutionary game dynamics

Dean P. Foster; Peyton Young

Traditional game theory studies strategic interactions in which the agents make rational decisions. Evolutionary game theory differs in two key respects: the focus is on large populations of individuals who interact at random rather than on small numbers of players; and individuals are assumed to employ simple adaptive rules rather than to engage in perfectly rational behavior. In such a setting, an equilibrium is a rest point of the population-level dynamical process rather than a form of consistency between beliefs and strategies. This chapter shows how the theory of stochastic dynamical systems can be used to characterize the equilibria that are most likely to be selected when the evolutionary process is subject to small persistent perturbations. Such equilibria are said to be stochastically stable. The implications of stochastic stability are discussed in a variety of settings, including 2 × 2 games, bargaining games, public-goods games, potential games, and network games. Stochastic stability often selects equilibria that are familiar from traditional game theory: in 2 × 2 games one obtains the risk-dominant equilibrium, in bargaining games the Nash bargaining solution, and in potential games the potential-maximizing equilibrium. However, the justification for these solution concepts differs between the two approaches. In traditional game theory, equilibria are justified in terms of rationality, common knowledge of the game, and common knowledge of rationality. Evolutionary game theory dispenses with all three of these assumptions; nevertheless, some of the main solution concepts survive in a stochastic evolutionary setting.
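The selection of the risk-dominant equilibrium can be seen in a small simulation. This is a hedged illustration with payoffs of my own choosing, not from the chapter: in the symmetric 2×2 coordination game with u(A,A)=3, u(A,B)=0, u(B,A)=2, u(B,B)=2, both all-A and all-B are equilibria, but B is risk-dominant (its basin of attraction is larger). Best-reply revision with rare mutations, started from a mixed population, settles near the risk-dominant state:

```python
import numpy as np

def simulate(N=50, T=20000, eps=0.02, seed=0):
    """Best-reply dynamics with small persistent perturbations (mutations)."""
    rng = np.random.default_rng(seed)
    state = np.array([1] * (N // 2) + [0] * (N // 2))  # 1 = play A, 0 = play B
    time_at_B = 0
    for _ in range(T):
        i = rng.integers(N)                 # one agent revises per period
        if rng.random() < eps:
            state[i] = rng.integers(0, 2)   # small persistent perturbation
        else:
            x = state.mean()                # fraction currently playing A
            # expected payoffs against the population mix: A earns 3x, B earns 2
            state[i] = 1 if 3 * x > 2 else 0
        if state.mean() < 0.5:
            time_at_B += 1
    return time_at_B / T

share_B = simulate()  # fraction of time the population is majority-B
```

Starting from a half-and-half population (inside B's larger basin), the dynamics lock in near all-B, and mutations alone almost never push the population across the basin boundary back to the Pareto-superior all-A state.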


Econometrica | 1996

Continuous Record Asymptotics for Rolling Sample Variance Estimators

Dean P. Foster; Daniel B. Nelson

It is widely known that conditional covariances of asset returns change over time. Researchers adopt many strategies to accommodate conditional heteroskedasticity. Among the most popular are: (a) chopping the data into short blocks of time and assuming homoskedasticity within the blocks, (b) performing one-sided rolling regressions, in which only data from, say, the preceding five-year period is used to estimate the conditional covariance of returns at a given date, and (c) two-sided rolling regressions which use, say, five years of leads and five years of lags. GARCH amounts to a one-sided rolling regression with exponentially declining weights. We derive asymptotically optimal window lengths for standard rolling regressions and optimal weights for weighted rolling regressions. An empirical model of the S&P 500 stock index provides an example.
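The two one-sided schemes the abstract contrasts, a flat rolling window and exponentially declining (GARCH-like) weights, can be sketched in a few lines. This is a minimal illustration on simulated returns; the window length and decay rate are arbitrary choices, not the paper's optimal values:

```python
import numpy as np

def rolling_variance(returns, window):
    """Flat one-sided rolling estimate: equal weights over the last `window` obs."""
    r = np.asarray(returns, dtype=float)
    out = np.full(r.shape, np.nan)
    for t in range(window - 1, len(r)):
        out[t] = np.mean(r[t - window + 1 : t + 1] ** 2)  # zero-mean assumption
    return out

def ewma_variance(returns, lam=0.94):
    """Exponentially weighted estimate (GARCH-like declining weights)."""
    r = np.asarray(returns, dtype=float)
    out = np.empty_like(r)
    out[0] = r[0] ** 2
    for t in range(1, len(r)):
        out[t] = lam * out[t - 1] + (1 - lam) * r[t] ** 2
    return out

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 500)          # simulated daily returns
v_flat = rolling_variance(r, window=60)
v_ewma = ewma_variance(r)
```

The flat window gives every past observation inside the block equal weight and then drops it abruptly; the exponential scheme never drops observations but discounts them geometrically, which is the sense in which GARCH is a weighted rolling regression.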


Neural Information Processing Systems | 2012

A Spectral Algorithm for Latent Dirichlet Allocation

Anima Anandkumar; Yi-Kai Liu; Daniel J. Hsu; Dean P. Foster; Sham M. Kakade

Topic modeling is a generalization of clustering that posits that observations (words in a document) are generated by multiple latent factors (topics), as opposed to just one. The increased representational power comes at the cost of a more challenging unsupervised learning problem for estimating the topic-word distributions when only words are observed, and the topics are hidden. This work provides a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of multi-view models and topic models, including latent Dirichlet allocation (LDA). For LDA, the procedure correctly recovers both the topic-word distributions and the parameters of the Dirichlet prior over the topic mixtures, using only trigram statistics (i.e., third order moments, which may be estimated with documents containing just three words). The method is based on an efficiently computable orthogonal tensor decomposition of low-order moments.
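The trigram statistics the method consumes are just empirical cross moments over three-word documents. The hypothetical sketch below estimates the first-, second-, and third-order moments; the orthogonal tensor decomposition itself (whitening plus tensor power iterations) is omitted:

```python
import numpy as np

def trigram_moments(docs, vocab_size):
    """Empirical moments from an iterable of (w1, w2, w3) word-index triples."""
    V = vocab_size
    M1 = np.zeros(V)            # first-order moment: P(w1)
    M2 = np.zeros((V, V))       # second-order: P(w1, w2)
    M3 = np.zeros((V, V, V))    # third-order: P(w1, w2, w3)
    n = 0
    for w1, w2, w3 in docs:
        M1[w1] += 1
        M2[w1, w2] += 1
        M3[w1, w2, w3] += 1
        n += 1
    return M1 / n, M2 / n, M3 / n

# toy corpus of three-word documents over a 3-word vocabulary
docs = [(0, 1, 2), (1, 1, 0), (2, 0, 1), (0, 1, 2)]
M1, M2, M3 = trigram_moments(docs, vocab_size=3)
```

Since each document contributes one triple, documents with only three words suffice, which is the point the abstract makes about how little per-document data the estimator needs.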


Journal of Behavioral Decision Making | 1997

Precision and Accuracy of Judgmental Estimation

Ilan Yaniv; Dean P. Foster

Whereas probabilistic calibration has been a central normative concept of accuracy in previous research on interval estimates, we suggest here that normative approaches for the evaluation of judgmental estimates should consider the communicative interaction between the individuals who produce the judgments and those who receive or use them for making decisions. We analyze precision and error in judgment and consider the role of the accuracy-informativeness trade-off (Yaniv and Foster, 1995) in the communication of estimates. The results shed light on puzzling findings reported earlier in the literature concerning the calibration of subjective confidence intervals. © 1997 by John Wiley & Sons, Ltd.


Journal of the American Statistical Association | 2004

Variable Selection in Data Mining: Building a Predictive Model for Bankruptcy

Dean P. Foster; Robert A. Stine

We predict the onset of personal bankruptcy using least squares regression. Although well publicized, only 2,244 bankruptcies occur in our dataset of 2.9 million months of credit-card activity. We use stepwise selection to find predictors of these from a mix of payment history, debt load, demographics, and their interactions. This combination of rare responses and over 67,000 possible predictors leads to a challenging modeling question: How does one separate coincidental from useful predictors? We show that three modifications turn stepwise regression into an effective methodology for predicting bankruptcy. Our version of stepwise regression (1) organizes calculations to accommodate interactions, (2) exploits modern decision theoretic criteria to choose predictors, and (3) conservatively estimates p-values to handle sparse data and a binary response. Omitting any one of these leads to poor performance. A final step in our procedure calibrates regression predictions. With these modifications, stepwise regression predicts bankruptcy as well as, if not better than, recently developed data-mining tools. When sorted, the largest 14,000 resulting predictions hold 1,000 of the 1,800 bankruptcies hidden in a validation sample of 2.3 million observations. If the cost of missing a bankruptcy is 200 times that of a false positive, our predictions incur less than 2/3 of the costs of classification errors produced by the tree-based classifier C4.5.
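The core loop, greedy forward selection with a conservative entry test, can be sketched compactly. This is a hedged stand-in: a Bonferroni-style cut of alpha/p replaces the paper's decision-theoretic criterion and conservative p-value machinery, and the interaction expansion and calibration steps are omitted:

```python
import numpy as np
from math import erf, sqrt

def forward_stepwise(X, y, alpha=0.05):
    """Greedy forward selection on residuals; a candidate enters only if its
    two-sided p-value survives the Bonferroni-style threshold alpha / p."""
    n, p = X.shape
    selected = []
    resid = y - y.mean()
    while True:
        best_j, best_t = None, 0.0
        for j in range(p):
            if j in selected:
                continue
            x = X[:, j] - X[:, j].mean()
            denom = x @ x
            if denom == 0:
                continue
            beta = (x @ resid) / denom
            r = resid - beta * x
            se = sqrt((r @ r) / max(n - 2, 1) / denom)
            t = abs(beta) / se if se > 0 else 0.0
            if t > best_t:
                best_j, best_t = j, t
        if best_j is None:
            break
        # two-sided normal-approximation p-value for the best candidate
        pval = 2.0 * (1.0 - 0.5 * (1.0 + erf(best_t / sqrt(2.0))))
        if pval > alpha / p:
            break                                  # conservative cut: stop adding
        x = X[:, best_j] - X[:, best_j].mean()
        resid = resid - ((x @ resid) / (x @ x)) * x
        selected.append(best_j)
    return selected

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 20))
y = 3.0 * X[:, 4] + rng.normal(size=n)  # column 4 is the only true signal
chosen = forward_stepwise(X, y)
```

Dividing alpha by the number of candidates is what keeps coincidental predictors out when the candidate pool is huge, the "separating coincidental from useful predictors" problem the abstract describes.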


Conference on Learning Theory | 2007

Multi-view regression via canonical correlation analysis

Sham M. Kakade; Dean P. Foster

In the multi-view regression problem, the input variable (a real vector) can be partitioned into two different views, and it is assumed that either view of the input is sufficient to make accurate predictions -- this is essentially (a significantly weaker version of) the co-training assumption for the regression problem. We provide a semi-supervised algorithm which first uses unlabeled data to learn a norm (or, equivalently, a kernel) and then uses labeled data in a ridge regression algorithm (with this induced norm) to provide the predictor. The unlabeled data is used via canonical correlation analysis (CCA, which is closely related to PCA for two random variables) to derive an appropriate norm over functions. We are able to characterize the intrinsic dimensionality of the subsequent ridge regression problem (which uses this norm) by the correlation coefficients provided by CCA in a rather simple expression. Interestingly, the norm used by the ridge regression algorithm is derived from CCA, unlike in standard kernel methods where a special a priori norm is assumed (i.e., a Banach space is assumed). We discuss how this result shows that unlabeled data can decrease the sample complexity.
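The unlabeled-data step, computing canonical correlations between the two views, can be sketched via an SVD of the whitened cross-covariance. This shows only the CCA computation, not the induced-norm ridge regression; the small ridge term `reg` is my own numerical-stability assumption, not part of the paper's construction:

```python
import numpy as np

def cca(X1, X2, reg=1e-6):
    """Canonical correlations via SVD of the whitened cross-covariance."""
    X1 = X1 - X1.mean(axis=0)
    X2 = X2 - X2.mean(axis=0)
    n = X1.shape[0]
    C11 = X1.T @ X1 / n + reg * np.eye(X1.shape[1])
    C22 = X2.T @ X2 / n + reg * np.eye(X2.shape[1])
    C12 = X1.T @ X2 / n
    W1 = np.linalg.inv(np.linalg.cholesky(C11)).T  # whitener for view 1
    W2 = np.linalg.inv(np.linalg.cholesky(C22)).T  # whitener for view 2
    U, s, Vt = np.linalg.svd(W1.T @ C12 @ W2)
    # s holds the correlation coefficients that control the intrinsic
    # dimensionality of the induced ridge problem; W1 @ U are view-1 directions
    return np.clip(s, 0.0, 1.0), W1 @ U

rng = np.random.default_rng(2)
z = rng.normal(size=(500, 1))                    # shared latent signal
X1 = np.hstack([z, rng.normal(size=(500, 2))])   # view 1: signal + noise dims
X2 = np.hstack([z, rng.normal(size=(500, 2))])   # view 2 sees the same signal
corrs, dirs = cca(X1, X2)
```

Directions shared by both views come out with correlation near one, while view-specific noise directions have correlation near zero; it is this spectrum that the paper uses to bound the effective dimensionality of the ridge problem.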


Operations Research | 1993

A randomization rule for selecting forecasts

Dean P. Foster; Rakesh V. Vohra

We propose a randomized strategy for selecting/combining forecasts that is better than the forecasts used to produce it in a sense made precise in this paper. Unlike traditional methods this approach requires that no assumptions be made about the distribution of the event being forecasted or the error distribution and stationarity of the constituent forecasts. The method is simple and easy to implement.
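A randomized selection rule of this flavor is easy to sketch. This is a hedged stand-in, a multiplicative-weights scheme that picks one forecaster at random with probabilities that downweight accumulated squared error; the paper's actual rule is specified differently, but like it, this sketch makes no distributional or stationarity assumptions about the forecasts:

```python
import numpy as np

def select_forecasts(forecasts, outcomes, eta=0.5, seed=0):
    """At each step, randomly pick one constituent forecaster, favoring those
    with low cumulative squared error so far."""
    rng = np.random.default_rng(seed)
    T, K = forecasts.shape
    losses = np.zeros(K)
    picks, preds = [], []
    for t in range(T):
        w = np.exp(-eta * losses)          # exponential downweighting of error
        p = w / w.sum()
        k = rng.choice(K, p=p)             # randomized selection
        picks.append(k)
        preds.append(forecasts[t, k])
        losses += (forecasts[t] - outcomes[t]) ** 2
    return np.array(picks), np.array(preds)

rng = np.random.default_rng(3)
T = 200
truth = rng.normal(size=T)
good = truth + 0.1 * rng.normal(size=T)    # accurate constituent forecaster
bad = truth + 2.0 * rng.normal(size=T)     # noisy constituent forecaster
forecasts = np.column_stack([good, bad])
picks, preds = select_forecasts(forecasts, truth)
```

After an initial phase, the rule concentrates its random picks on the forecaster with the smaller cumulative error, without ever modeling the forecast errors themselves.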


SIAM Journal on Optimization | 2013

Stochastic Convex Optimization with Bandit Feedback

Alekh Agarwal; Dean P. Foster; Daniel J. Hsu; Sham M. Kakade; Alexander Rakhlin

This paper addresses the problem of minimizing a convex, Lipschitz function f over a convex, compact set χ under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value f(x) at any query point x ∈ χ. We demonstrate a generalization of the ellipsoid algorithm that incurs O(poly(d)√T) regret. Since any algorithm has regret at least Ω(√T) on this problem, our algorithm is optimal in terms of the scaling with T.


Conference on Learning Theory | 2004

Deterministic Calibration and Nash Equilibrium

Sham M. Kakade; Dean P. Foster

We provide a natural learning process in which the joint frequency of empirical play converges into the set of convex combinations of Nash equilibria. In this process, all players rationally choose their actions using a public prediction made by a deterministic, weakly calibrated algorithm. Furthermore, the public predictions used in any given round of play are frequently close to some Nash equilibrium of the game.


Journal of the American Statistical Association | 2011

VIF Regression: A Fast Regression Algorithm for Large Data

Dongyu Lin; Dean P. Foster; Lyle H. Ungar

We propose a fast and accurate algorithm, VIF regression, for doing feature selection in large regression problems. VIF regression is extremely fast; it uses a one-pass search over the predictors and a computationally efficient method of testing each potential predictor for addition to the model. VIF regression provably avoids model overfitting, controlling the marginal false discovery rate. Numerical results show that it is much faster than any other published algorithm for regression with feature selection and is as accurate as the best of the slower algorithms.
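The one-pass search can be sketched as follows. This is a hedged simplification: each predictor is tested once, in order, against the current residuals, and the cheap t-statistic is deflated by a variance inflation factor estimated on a small subsample; a fixed threshold `z_crit` stands in for the paper's alpha-investing rule that controls the marginal false discovery rate:

```python
import numpy as np

def vif_style_selection(X, y, sample_idx, z_crit=3.0):
    """One-pass feature selection with subsample-estimated VIF correction."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    resid = y - y.mean()
    selected = []
    for j in range(p):                       # single pass over the predictors
        x = Xc[:, j]
        denom = x @ x
        beta = (x @ resid) / denom
        r = resid - beta * x
        se = np.sqrt((r @ r) / (n - 2) / denom)
        if selected:
            # cheap VIF estimate: regress x on the selected columns, but only
            # on the subsample, to correct the naive t-statistic for collinearity
            S = Xc[np.ix_(sample_idx, selected)]
            xs = x[sample_idx]
            coef, *_ = np.linalg.lstsq(S, xs, rcond=None)
            r2 = 1.0 - np.sum((xs - S @ coef) ** 2) / np.sum(xs ** 2)
            vif = 1.0 / max(1.0 - r2, 1e-12)
        else:
            vif = 1.0
        if abs(beta) / se / np.sqrt(vif) > z_crit:
            selected.append(j)
            resid = r                        # update residuals and move on
    return selected

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))
y = 2.0 * X[:, 3] + rng.normal(size=300)    # column 3 is the only true signal
sel = vif_style_selection(X, y, sample_idx=np.arange(100))
```

Testing against residuals avoids refitting the full model for every candidate, and estimating the collinearity correction on a subsample is what keeps the search to a single fast pass.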

Collaboration


Dive into Dean P. Foster's collaborations.

Top Co-Authors

Robert A. Stine | University of Pennsylvania
Lyle H. Ungar | University of Pennsylvania
Sham M. Kakade | University of Washington
Rakesh V. Vohra | University of Pennsylvania
Alexander Rakhlin | University of Pennsylvania
Jordan Rodu | University of Virginia