Publication


Featured research published by Jeff Tayman.


Population Research and Policy Review | 1998

The role of population size in the determination and prediction of population forecast errors: An evaluation using confidence intervals for subcounty areas

Jeff Tayman; Edward Schafer; Lawrence R. Carter

Producers of population forecasts acknowledge the uncertainty inherent in trying to predict the future and should warn about the likely error of their forecasts. Confidence intervals represent one way of quantifying population forecast error. Most of the work in this area relates to national forecasts, although confidence intervals have also been developed for state and county forecasts. A few studies have examined subcounty forecast error; however, they measured only point estimates of error. This paper describes a technique for making subcounty population forecasts and for generating confidence intervals around their forecast error. It also develops statistical equations for calculating point estimates and confidence intervals for areas with different population sizes. We found a non-linear, inverse relationship between population size and forecast accuracy, and we demonstrate the ability to accurately predict average forecast error and confidence intervals based on this relationship.
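
The fitted equation is not given in the abstract, but the reported non-linear, inverse relationship between population size and forecast error can be sketched as a power-law fit in log-log space; the functional form, variable names, and numbers below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Hypothetical data: launch-year population size and observed absolute
# percent forecast error for a set of subcounty areas (illustrative only).
size = np.array([1_200, 3_500, 8_000, 15_000, 40_000, 90_000, 250_000])
abs_pct_error = np.array([28.0, 19.5, 14.0, 11.2, 8.1, 6.3, 4.5])

# Assume a power-law (non-linear, inverse) relation error = a * size**b,
# fitted as a straight line in log-log space.
b, log_a = np.polyfit(np.log(size), np.log(abs_pct_error), 1)

# Predicted average error for an area of a given size, with a rough
# 95% interval based on the residual spread of the log-log fit.
resid = np.log(abs_pct_error) - (log_a + b * np.log(size))
se = resid.std(ddof=2)

new_size = 20_000
point = np.exp(log_a + b * np.log(new_size))
low, high = point * np.exp(-1.96 * se), point * np.exp(1.96 * se)
print(f"predicted error ~{point:.1f}%, interval ({low:.1f}%, {high:.1f}%)")
```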


Population Research and Policy Review | 1999

On the validity of MAPE as a measure of population forecast accuracy

Jeff Tayman; David A. Swanson

The mean absolute percent error (MAPE) is the summary measure most often used for evaluating the accuracy of population forecasts. While MAPE meets many desirable criteria, we argue from both normative and relative standpoints that the widespread practice of exclusively using it for evaluating population forecasts should be changed. Normatively, we argue that MAPE does not meet the criterion of validity because, as a summary measure, it overstates the error found in a population forecast. We base this argument on logical grounds and support it empirically, using a sample of population forecasts for counties. From a relative standpoint, we examine two alternatives to MAPE, both sharing with it the important conceptual feature of using most of the information about error. These alternatives are symmetrical MAPE (SMAPE) and a class of measures known as M-estimators. The empirical evaluation suggests that M-estimators do not overstate forecast error as much as either MAPE or SMAPE and are, therefore, more valid measures of accuracy. We consequently recommend incorporating M-estimators into the evaluation toolkit. Because M-estimators do not meet the desired criterion of interpretative ease as well as MAPE does, we also suggest another approach that focuses on nonlinear transformations of the error distribution.
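
A minimal sketch of the three families of measures discussed, assuming made-up county data and using the Huber estimator as one common M-estimator of location; the paper's exact estimator and tuning choices are not reproduced here.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percent error."""
    return np.mean(np.abs(forecast - actual) / actual) * 100

def smape(actual, forecast):
    """Symmetric MAPE: the denominator averages actual and forecast values."""
    return np.mean(np.abs(forecast - actual) / ((actual + forecast) / 2)) * 100

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator of location via iteratively reweighted means;
    it downweights outlying errors instead of letting them dominate."""
    mu = np.median(x)
    mad = np.median(np.abs(x - mu))
    scale = mad / 0.6745 if mad > 0 else 1.0
    for _ in range(max_iter):
        u = np.abs(x - mu) / scale
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

# Hypothetical county populations: census counts vs. forecasts.
actual   = np.array([10_000, 25_000, 5_000, 80_000, 3_000])
forecast = np.array([10_400, 23_500, 5_900, 82_000, 5_100])
ape = np.abs(forecast - actual) / actual * 100

print(f"MAPE  = {mape(actual, forecast):.1f}%")
print(f"SMAPE = {smape(actual, forecast):.1f}%")
print(f"Huber M-estimate of APE = {huber_location(ape):.1f}%")
```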


Demography | 2003

An Evaluation of Population Projections by Age

Stanley K. Smith; Jeff Tayman

A number of studies have evaluated the accuracy of projections of the size of the total population, but few have considered the accuracy of projections by age group. For many purposes, however, the relevant variable is the population of a particular age group, rather than the population as a whole. We investigated the precision and bias of a variety of age-group projections at the national and state levels in the United States and for counties in Florida. We also compared the accuracy of state and county projections that were derived from full-blown applications of the cohort-component method with the accuracy of projections that were derived from a simpler, less data-intensive version of the method. We found that age-group error patterns are different for national projections than for subnational projections; that errors are substantially larger for some age groups than for others; that differences in errors among age groups decline as the projection horizon becomes longer; and that differences in methodological complexity have no consistent impact on the precision and bias of age-group projections.
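
The abstract does not define its precision and bias measures; a common pairing in this literature is MAPE for precision and the mean algebraic percent error (MALPE) for bias, so the sketch below assumes those, with hypothetical numbers for a single age group across several areas.

```python
import numpy as np

def mape(actual, projected):
    """Precision: average size of error regardless of direction."""
    return np.mean(np.abs(projected - actual) / actual) * 100

def malpe(actual, projected):
    """Bias: average error with its sign, so over- and under-projections offset."""
    return np.mean((projected - actual) / actual) * 100

# Hypothetical projected vs. census populations for one age group.
actual    = np.array([4_200, 9_800, 15_500, 2_100])
projected = np.array([4_600, 9_200, 16_400, 2_600])
print(f"MAPE = {mape(actual, projected):.1f}%, MALPE = {malpe(actual, projected):.1f}%")
```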


Demography | 1996

On the utility of population forecasts

Jeff Tayman; David A. Swanson

Many customers demand population forecasts, particularly for small areas. Although the forecast evaluation literature is extensive, it is dominated by a focus on accuracy. We go beyond accuracy by examining the concept of forecast utility in an evaluation of a sample of 2,709 counties and census tracts. We find that forecasters provide “value-added” knowledge for areas experiencing rapid change or areas with relatively large populations. For other areas, reduced value is more common than added value. Our results suggest that new forecasting strategies and methods, such as composite modeling, may substantially improve forecast utility.


Population Research and Policy Review | 1995

Between a Rock and a Hard Place: The Evaluation of Demographic Forecasts

David A. Swanson; Jeff Tayman

Forecasting, in general, has been described as an unavoidable yet impossible task. This irony, which comprises the ‘rock’ and the ‘hard place’ in the title, creates a high level of cognitive dissonance, which, in turn, generates stress for those both making and using forecasts that have non-trivial impacts. Why? Because the forecasted numbers, invariably accorded a high degree of precision, inexorably reveal their imprecision once the actual numbers arrive and the forecast errors are precisely measured. The current state of the art in demography for dealing with the ‘rock’ and the ‘hard place’ is a less-than-successful strategy because it is based on an acceptance of accuracy as the primary evaluation criterion, which is the source of the cognitive dissonance. One way to reduce cognitive dissonance is to change the relationship of the very cognitive elements creating it. We argue that forecast evaluations currently focused on accuracy and based on measures like RMSE and MAPE should be refocused to include utility, and we propose the ‘Proportionate Reduction in Error’ (PRE) measure for this purpose. We illustrate our proposal with examples and discuss its advantages. We conclude that including PRE as an evaluation criterion can reduce stress by reducing cognitive dissonance without either trivializing the evaluation process or substantively altering how forecasts are done and presented.
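
The PRE formula is not spelled out in the abstract; a standard form compares a forecast's error with that of a naive baseline, such as holding the launch-year population constant, which is what this sketch assumes. The data and baseline choice are illustrative.

```python
import numpy as np

def pre(actual, forecast, baseline):
    """Proportionate reduction in error: the share of the naive baseline's
    error that the forecast eliminates. Positive values indicate added value."""
    e_base = np.sum(np.abs(baseline - actual))
    e_fcst = np.sum(np.abs(forecast - actual))
    return (e_base - e_fcst) / e_base

# Hypothetical county data: launch-year population used as a naive
# "no change" baseline, a model forecast, and the later census count.
launch   = np.array([12_000, 45_000, 3_200])
forecast = np.array([13_500, 47_000, 3_900])
census   = np.array([13_200, 48_500, 3_400])
print(f"PRE = {pre(census, forecast, launch):.2f}")
```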


Demography | 2000

A Note on the Measurement of Accuracy for Subnational Demographic Estimates

David A. Swanson; Jeff Tayman; Charles F. Barr

Mean absolute percentage error (MAPE), the measure most often used for evaluating subnational demographic estimates, is not always valid. We describe guidelines for determining when MAPE is valid. Applying them to case study data, we find that MAPE understates accuracy because it is unduly influenced by outliers. To overcome this problem, we calculate a transformed MAPE (MAPE-T) using a modified Box-Cox method. Because MAPE-T is not in the same scale as the untransformed absolute percentage errors, we provide a procedure for calculating MAPE-R, a measure in the same scale as the original observations. We argue that MAPE-R is a more appropriate summary measure of average absolute percentage error when the guidelines indicate that MAPE is not valid.
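
A rough sketch of the general idea, assuming scipy's standard Box-Cox transform rather than the authors' modified method: transform the absolute percent errors so outliers carry less weight, average them in transformed space (MAPE-T), then re-express that average on the original percentage scale (MAPE-R). The data are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Hypothetical absolute percent errors with one large outlier.
ape = np.array([2.1, 3.4, 4.0, 5.2, 6.8, 7.5, 9.1, 48.0])

mape = ape.mean()                        # ordinary MAPE, pulled up by the outlier
transformed, lam = stats.boxcox(ape)     # MAPE-T is computed in this transformed space
mape_t = transformed.mean()
mape_r = inv_boxcox(mape_t, lam)         # re-expressed on the original %-error scale

print(f"MAPE = {mape:.1f}, MAPE-T = {mape_t:.2f} (lambda = {lam:.2f}), MAPE-R = {mape_r:.1f}")
```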


Population Research and Policy Review | 2009

Empirical Prediction Intervals for County Population Forecasts

Stefan Rayer; Stanley K. Smith; Jeff Tayman

Population forecasts entail a significant amount of uncertainty, especially for long-range horizons and for places with small or rapidly changing populations. This uncertainty can be dealt with by presenting a range of projections or by developing statistical prediction intervals. The latter can be based on models that incorporate the stochastic nature of the forecasting process, on empirical analyses of past forecast errors, or on a combination of the two. In this article, we develop and test prediction intervals based on empirical analyses of past forecast errors for counties in the United States. Using decennial census data from 1900 to 2000, we apply trend extrapolation techniques to develop a set of county population forecasts; calculate forecast errors by comparing forecasts to subsequent census counts; and use the distribution of errors to construct empirical prediction intervals. We find that empirically-based prediction intervals provide reasonably accurate predictions of the precision of population forecasts, but provide little guidance regarding their tendency to be too high or too low. We believe the construction of empirically-based prediction intervals will help users of small-area population forecasts measure and evaluate the uncertainty inherent in population forecasts and plan more effectively for the future.
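
A minimal sketch of the empirical approach described: pool past forecast errors for comparable counties and horizons, take percentiles of that error distribution, and attach them to a new forecast. The error values, percentile choices, and forecast below are assumptions for illustration.

```python
import numpy as np

# Hypothetical past forecast errors (percent) for similar counties at a
# 10-year horizon, computed as (forecast - census) / census * 100.
past_errors = np.array([-18.0, -9.5, -4.2, -1.0, 0.8, 2.5, 6.0, 9.7, 15.3, 24.1])

# Empirical 90% prediction interval: the 5th and 95th percentiles of past errors.
lo, hi = np.percentile(past_errors, [5, 95])

new_forecast = 52_000  # new county forecast to which the interval is attached
interval = (new_forecast * (1 + lo / 100), new_forecast * (1 + hi / 100))
print(f"90% prediction interval: {interval[0]:,.0f} to {interval[1]:,.0f}")
```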


Demography | 1989

Postcensal estimates of household income distributions.

Lois Fonseca; Jeff Tayman

This article develops and evaluates a method for deriving postcensal estimates of household income distributions for counties. A modified lognormal probability curve is used as a model of income distribution. The function is closely related to the classical lognormal model, but it contains a nonlinear component in its derivation. Simulated postcensal estimates of household income distributions are compared with 1980 census data for the counties in California. The results indicate that the modified lognormal curve approximates observed income distributions well and produces reliable postcensal estimates for areas with a wide variety of median income levels and numbers of households.
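
The modified curve itself is not reproduced here; as a rough illustration of the general approach, the sketch below fits a classical two-parameter lognormal to an assumed median household income and dispersion, then reads off household counts by income bracket. All inputs are hypothetical.

```python
import numpy as np
from scipy import stats

# Illustrative postcensal inputs for one county (not from the article).
median_income = 48_000   # estimated median household income
sigma = 0.75             # assumed dispersion of log income
households = 120_000     # estimated number of households

# Classical lognormal: the median equals exp(mu), so scale = median income.
dist = stats.lognorm(s=sigma, scale=median_income)

# Estimated household counts per income bracket.
brackets = [0, 15_000, 30_000, 50_000, 75_000, 100_000, np.inf]
for lo, hi in zip(brackets[:-1], brackets[1:]):
    share = dist.cdf(hi) - dist.cdf(lo)
    print(f"${lo:>7,.0f} - {hi:>9,.0f}: {share * households:>9,.0f} households")
```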


Population Research and Policy Review | 2011

Evaluating Population Forecast Accuracy: A Regression Approach Using County Data

Jeff Tayman; Stanley K. Smith; Stefan Rayer

Many studies have evaluated the impact of differences in population size and growth rate on population forecast accuracy. Virtually all these studies have been based on aggregate data; that is, they focused on average errors for places with particular size or growth rate characteristics. In this study, we take a different approach by investigating forecast accuracy using regression models based on data for individual places. Using decennial census data from 1900 to 2000 for 2,482 counties in the US, we construct a large number of county population forecasts and calculate forecast errors for 10- and 20-year horizons. Then, we develop and evaluate several alternative functional forms of regression models relating population size and growth rate to forecast accuracy; investigate the impact of adding several other explanatory variables; and estimate the relative contributions of each variable to the discriminatory power of the models. Our results confirm several findings reported in previous studies but uncover several new findings as well. We believe regression models based on data for individual places provide powerful but under-utilized tools for investigating the determinants of population forecast accuracy.
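
One plausible version of the regression setup described, assuming the absolute percent error is regressed on log population size and the absolute growth rate; the functional form, variables, and data are illustrative, not the study's.

```python
import numpy as np

# Hypothetical county-level data: launch-year population, prior-decade
# growth rate (%), and the absolute percent error of a 10-year forecast.
size   = np.array([2_500, 8_000, 20_000, 55_000, 150_000, 600_000])
growth = np.array([35.0, 12.0, -4.0, 6.5, 22.0, 3.0])
ape    = np.array([21.0, 11.5, 9.0, 6.8, 9.4, 3.2])

# One plausible functional form: APE ~ log(size) + |growth rate|.
X = np.column_stack([np.ones_like(size, dtype=float), np.log(size), np.abs(growth)])
coefs, *_ = np.linalg.lstsq(X, ape, rcond=None)
intercept, b_size, b_growth = coefs
print(f"APE = {intercept:.1f} + {b_size:.2f}*log(size) + {b_growth:.2f}*|growth|")
```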


Population Research and Policy Review | 1996

Forecasting, growth management and public policy decision making

Jeff Tayman

This paper's objective is to describe the interplay between forecasting and decision making. It shows how a forecast helped shape public policy and, in turn, how public policy influenced a forecast, within the context of the growth management effort underway in the San Diego region. The forecast identified economic challenges and land use issues facing the region, and public policy actions were developed to address them. Normative forecasting best describes the relationship between the forecast and these public policy decisions. This ‘active’ approach to forecasting involves first deciding what future outcome is desirable and then designing policies and actions to achieve that outcome.

Collaboration


Dive into Jeff Tayman's collaborations.

Top Co-Authors

Jack Baker
University of New Mexico

Lucky M. Tedrow
Western Washington University

Jeffrey Lin
University of California