Publication


Featured research published by Roy van der Weide.


World Bank Economic Review | 2013

Estimating Quarterly Poverty Rates Using Labor Force Surveys: A Primer

Mohamed Douidich; Abdeljaouad Ezzrari; Roy van der Weide; Paolo Verme

The paper shows how Labor Force Surveys can be combined with Household Expenditure Surveys and cross-survey imputation methods to estimate poverty rates effectively. With only two rounds of Household Expenditure Survey data for Morocco (2001 and 2007), the paper estimates quarterly poverty rates for the period 2001-2010 by imputing household expenditures into the Labor Force Surveys. The results are encouraging. The methodology is able to accurately reproduce official poverty statistics by combining current Labor Force Surveys with previous-period Household Expenditure Surveys, and vice versa. Although the focus is on head-count poverty, the method can be applied to any welfare indicator that is a function of household income or expenditure, such as the poverty gap or the Gini index of inequality. The newly produced time-series of poverty rates can help researchers and policy makers to: (a) study the determinants of poverty reduction or use poverty as an explanatory factor in cross-section and panel models; (b) forecast poverty rates based on a time-series model fitted to the data; and (c) explore the linkages between labor market conditions and poverty and simulate the effects of policy reforms or economic shocks. This is a promising research agenda that can significantly expand the toolkit of the welfare economist.
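
A minimal sketch of the cross-survey imputation step, assuming a log-expenditure model estimated on the expenditure survey and simulated into the labor force survey; the variables, model, and poverty line below are illustrative rather than the paper's actual specification.

```python
# Minimal sketch of cross-survey imputation (illustrative, not the paper's
# exact model): a log-expenditure model is fit on the Household Expenditure
# Survey (HES) and used to simulate expenditures for Labor Force Survey (LFS)
# households, from which a head-count poverty rate is computed.
import numpy as np

rng = np.random.default_rng(0)

def fit_expenditure_model(X_hes, log_exp_hes):
    """OLS of log expenditure on household covariates; returns (beta, sigma)."""
    beta, *_ = np.linalg.lstsq(X_hes, log_exp_hes, rcond=None)
    resid = log_exp_hes - X_hes @ beta
    sigma = resid.std(ddof=X_hes.shape[1])
    return beta, sigma

def impute_poverty_rate(X_lfs, beta, sigma, poverty_line, n_sims=100):
    """Simulate expenditures for LFS households and average the head count."""
    rates = []
    for _ in range(n_sims):
        draw = X_lfs @ beta + rng.normal(0.0, sigma, size=X_lfs.shape[0])
        rates.append(np.mean(np.exp(draw) < poverty_line))
    return float(np.mean(rates))

# Synthetic example with three covariates shared by both surveys.
X_hes = np.column_stack([np.ones(2000), rng.normal(size=(2000, 2))])
log_exp_hes = X_hes @ np.array([8.0, 0.4, -0.2]) + rng.normal(0, 0.5, 2000)
X_lfs = np.column_stack([np.ones(5000), rng.normal(size=(5000, 2))])

beta, sigma = fit_expenditure_model(X_hes, log_exp_hes)
print(impute_poverty_rate(X_lfs, beta, sigma, poverty_line=np.exp(7.5)))
```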


World Bank Economic Review | 2014

Inequality is bad for growth of the poor (but not for that of the rich)

Roy van der Weide; Branko Milanovic

The paper assesses the impact of overall inequality, as well as inequality among the poor and among the rich, on the growth rates along various percentiles of the income distribution. The analysis uses micro-census data from U.S. states covering the period from 1960 to 2010. The paper finds evidence that high levels of inequality reduce the income growth of the poor and, if anything, help the growth of the rich. When inequality is deconstructed into bottom and top inequality, the analysis finds that it is mostly top inequality that is holding back growth at the bottom.
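
A stylized illustration of the kind of regression involved, using synthetic data and ordinary least squares; the paper's own specification, data, and estimation strategy are considerably richer.

```python
# Illustrative regression of income growth at a low percentile on bottom and
# top inequality, using synthetic state-period data (not the paper's data
# or its actual estimates).
import numpy as np

rng = np.random.default_rng(1)
n = 500  # synthetic state-period observations

gini_bottom = rng.uniform(0.2, 0.4, n)    # inequality below the median
gini_top = rng.uniform(0.3, 0.5, n)       # inequality above the median
log_income_p10 = rng.normal(9.0, 0.3, n)  # initial income at the 10th percentile

# Synthetic "true" process: top inequality depresses growth of the poor.
growth_p10 = (0.05 - 0.10 * gini_top + 0.01 * gini_bottom
              - 0.02 * log_income_p10 + rng.normal(0, 0.02, n))

X = np.column_stack([np.ones(n), gini_bottom, gini_top, log_income_p10])
beta, *_ = np.linalg.lstsq(X, growth_p10, rcond=None)
print(dict(zip(["const", "gini_bottom", "gini_top", "log_income_p10"], beta)))
```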


Archive | 2014

Estimation of Normal Mixtures in a Nested Error Model with an Application to Small Area Estimation of Poverty and Inequality

Chris Elbers; Roy van der Weide

This paper proposes a method for estimating distribution functions that are associated with the nested errors in linear mixed models. The estimator incorporates Empirical Bayes prediction while making minimal assumptions about the shape of the error distributions. The application presented in this paper is the small area estimation of poverty and inequality, although this is by no means the only application. Monte Carlo simulations show that estimates of poverty and inequality can be severely biased when the non-normality of the errors is ignored. The bias can be as high as 2 to 3 percentage points on a poverty rate of 20 to 30 percent. Most of this bias is resolved when using the proposed estimator. The approach is applicable to both survey-to-census and survey-to-survey prediction.
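
The following Monte Carlo sketch illustrates why non-normal errors matter in this setting; it compares head-count rates under a skewed mixture error and under a normal error with matched variance. The parameter values are invented for illustration, and the proposed estimator itself is not implemented here.

```python
# Monte Carlo sketch of why non-normal errors matter in a nested error model
# y_ch = x_ch * beta + u_c + e_ch: a skewed mixture error and a normal error
# with (roughly) the same variance imply different head-count poverty rates.
# All parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_areas, n_hh = 200, 50
beta, sigma_u = 1.0, 0.3
poverty_line = 2.2  # in log welfare units

def simulate_log_welfare(mixture_errors):
    x = rng.normal(2.5, 0.5, (n_areas, n_hh))
    u = rng.normal(0, sigma_u, (n_areas, 1))          # area (cluster) effect
    if mixture_errors:
        # Two-component normal mixture: mean zero but right-skewed.
        comp = rng.random((n_areas, n_hh)) < 0.8
        e = np.where(comp, rng.normal(-0.1, 0.2, (n_areas, n_hh)),
                           rng.normal(0.4, 0.6, (n_areas, n_hh)))
    else:
        # Single normal with roughly the same variance: the normality assumption.
        e = rng.normal(0.0, 0.38, (n_areas, n_hh))
    return x * beta + u + e

rate_mixture = np.mean(simulate_log_welfare(True) < poverty_line)
rate_normal = np.mean(simulate_log_welfare(False) < poverty_line)
print(f"poverty rate with mixture errors: {rate_mixture:.3f}")
print(f"poverty rate assuming normality : {rate_normal:.3f}")
```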


Social Science Research Network | 2002

Continuous Beliefs Dynamics

Cees Diks; Roy van der Weide

We propose a general framework for studying the evolution of heterogeneous beliefs in a dynamic feedback setting. Beliefs distributions are defined on a continuous space representing the possible strategies agents can choose from. Agents base their choices on past performances. As new information becomes available, strategies are re-evaluated and the beliefs distribution is updated using a continuous choice model. This approach gives rise to price dynamics in which the beliefs distribution evolves together with realized prices. The statistical properties of the endogenous random price fluctuations are fully determined by the model. The structure of the macroscopic model depends on the class of predictors and on the performance measure used by the agents. Whenever a well-known econometric model is obtained, an economic interpretation of the model parameters can be given, as is shown here for an ARCH model. The approach is illustrated with several examples and empirical applications.
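
A toy simulation in the spirit of this framework is sketched below: a continuum of trend-extrapolating beliefs is approximated by a fine grid and re-weighted by a logit choice rule based on past forecast errors. The predictor class, parameters, and price equation are illustrative assumptions, not the authors' exact model.

```python
# Toy simulation in the spirit of continuous belief dynamics (not the authors'
# exact model): a continuum of trend-extrapolating beliefs is approximated by
# a fine grid and re-weighted each period by past forecast accuracy via a
# logit (continuous choice) rule; the price then feeds back on the beliefs.
import numpy as np

rng = np.random.default_rng(3)
b_grid = np.linspace(-1.0, 1.0, 201)  # belief space: extrapolation coefficients
intensity = 5.0                       # intensity of choice in the logit update
R = 1.05                              # gross discount rate in the price equation
T = 1000

x = np.zeros(T)  # price deviation from the fundamental
for t in range(2, T):
    # Squared forecast error of each belief for the price realized at t-1.
    errors = (x[t - 1] - b_grid * x[t - 2]) ** 2
    # Continuous choice model: better-performing beliefs gain probability mass.
    score = -intensity * errors
    weights = np.exp(score - score.max())
    weights /= weights.sum()
    # Price driven by the belief-weighted average forecast, plus exogenous news.
    x[t] = (weights @ (b_grid * x[t - 1])) / R + rng.normal(0, 0.1)

# One diagnostic for ARCH-type behavior: autocorrelation of squared deviations.
sq = x[200:] ** 2
print("autocorrelation of squared price deviations:",
      np.corrcoef(sq[:-1], sq[1:])[0, 1])
```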


Review of Income and Wealth | 2016

Is inequality underestimated in Egypt? Evidence from house prices

Roy van der Weide; Christoph Lakner; Elena Ianchovichina

Household income surveys often fail to capture top incomes, which leads to an underestimation of income inequality. A popular solution is to combine the household survey with data from income tax records, which has been found to result in significant upward corrections of inequality estimates. Unfortunately, tax records are unavailable in many countries, including most of the developing world. In the absence of data from tax records, this study explores the feasibility of using data on house prices to estimate the top tail of the income distribution. In an application to Egypt, where estimates of inequality based on household surveys alone are low by international standards, the study finds strong evidence that inequality is indeed being underestimated by a considerable margin. The Gini index for urban Egypt is found to increase from 36 to 47 after correcting for the missing top tail.
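
The mechanics of such a top-tail correction can be illustrated as follows: the top of a survey income distribution is replaced with a Pareto tail and the Gini index is recomputed. The tail index and cutoff below are assumptions standing in for the values the study would infer from house-price data.

```python
# Sketch of a top-tail correction to the Gini index: the top of the survey
# income distribution is replaced with a Pareto tail. The tail index and the
# 95th-percentile cutoff are assumptions standing in for the values the study
# would infer from house-price data.
import numpy as np

rng = np.random.default_rng(4)

def gini(y):
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    return (2 * np.arange(1, n + 1) - n - 1) @ y / (n * y.sum())

# Synthetic survey incomes that under-represent the top tail.
survey = rng.lognormal(mean=8.0, sigma=0.55, size=20000)

# Replace the top 5 percent with draws from a Pareto distribution anchored at
# the 95th percentile (alpha = 2.0 is an illustrative tail index).
cutoff = np.quantile(survey, 0.95)
alpha = 2.0
corrected = survey.copy()
top = corrected >= cutoff
corrected[top] = cutoff * (1 + rng.pareto(alpha, top.sum()))

print(f"Gini, survey only     : {gini(survey):.3f}")
print(f"Gini, Pareto-corrected: {gini(corrected):.3f}")
```

With these synthetic numbers the corrected Gini rises noticeably, mirroring the direction of the Egypt result; the magnitude depends entirely on the assumed tail index and cutoff.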


XXIV Encuentro de Economía Pública, 2017 | 2016

Unequal Opportunity, Unequal Growth

Gustavo A. Marrero; Juan Gabriel Rodríguez; Roy van der Weide

This paper argues that inequality can be both good and bad for growth, depending on which inequality and whose growth are considered. Unequal societies may be holding back one segment of the population while helping another. Similarly, high levels of income inequality may be due to a variety of different factors; some of these may be good while others may be bad for growth. The paper tests this hypothesis by “unpacking” both inequality and growth. Total inequality is decomposed into inequality of opportunity, due to observed factors that are beyond the individual's control, and residual inequality. Growth is measured at different steps of the income ladder to verify whether low-, middle-, and top-income households fare differently in societies with high (low) levels of inequality. In an application to the United States covering 1960 to 2010, the paper finds that inequality of opportunity is particularly bad for growth of the poor. When inequality of opportunity is controlled for, the importance of total income inequality is dramatically reduced. These results are robust to different measures of inequality of opportunity and econometric methods.
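
A minimal sketch of a parametric inequality-of-opportunity decomposition, comparing the inequality of incomes predicted from circumstances alone with total inequality; the circumstance variables and data below are synthetic, and the paper's own measures and methods are more elaborate.

```python
# Sketch of a parametric inequality-of-opportunity decomposition: inequality of
# incomes predicted from circumstances alone is compared with total inequality,
# using the mean log deviation (MLD). Data and circumstance variables are
# synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(5)
n = 10000

# Circumstances beyond the individual's control (names are hypothetical).
parent_edu = rng.integers(0, 4, n).astype(float)
region = rng.integers(0, 2, n).astype(float)

log_income = 9.0 + 0.20 * parent_edu + 0.15 * region + rng.normal(0, 0.45, n)
income = np.exp(log_income)

def mean_log_deviation(y):
    """MLD inequality index: log of the mean minus the mean of the logs."""
    return float(np.log(y.mean()) - np.log(y).mean())

# "Smoothed" distribution: incomes predicted from circumstances only.
X = np.column_stack([np.ones(n), parent_edu, region])
beta, *_ = np.linalg.lstsq(X, log_income, rcond=None)
smoothed = np.exp(X @ beta)

total = mean_log_deviation(income)
iop = mean_log_deviation(smoothed)  # inequality of opportunity
print(f"total inequality (MLD)    : {total:.3f}")
print(f"inequality of opportunity : {iop:.3f}")
print(f"share due to opportunity  : {iop / total:.2%}")
```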


Journal of Agricultural Economics | 2014

Importing High Food Prices by Exporting: Rice Prices in Lao PDR

Dick Durevall; Roy van der Weide

This paper shows how a developing country, Lao PDR, imports high glutinous rice prices by exporting its staple food to neighboring countries, Vietnam and Thailand. Lao PDR has extensive export controls on rice, generating a sizable difference between domestic and international prices. Controls are relaxed after good harvests, leading to a surge in exports early in the season and rapidly rising prices later in the year. There is thus a strong case for removal of trade restrictions since they give rise to price spikes, keep the long-term price of glutinous rice low, and thereby hinder increases in income from agriculture. Although this is a case study of Lao PDR, the findings may equally apply to other developing countries that export their staple food.


Archive | 2013

Cost-effective estimation of the population mean using prediction estimators

Tomoki Fujii; Roy van der Weide

This paper considers the prediction estimator as an efficient estimator for the population mean. The study may be viewed as an extension of an earlier study, which proved that the prediction estimator based on the iteratively weighted least-squares estimator outperforms the sample mean. The analysis finds that a certain moment condition must hold in general for the prediction estimator based on a Generalized Method of Moments estimator to be at least as efficient as the sample mean. In an application to cost-effective double sampling, the authors show how prediction estimators may be adopted to maximize statistical precision (minimize financial costs) under a budget constraint (statistical precision constraint). This approach is particularly useful when the outcome variable of interest is expensive to observe relative to observing its covariates.
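
A simple Monte Carlo comparison of the subsample mean with a least-squares prediction estimator is sketched below; the paper's GMM-based estimator and its moment condition are more general, and all numbers here are illustrative.

```python
# Monte Carlo sketch comparing the plain subsample mean with a least-squares
# prediction estimator: y is observed only for a subsample, x for everyone,
# and the population mean is estimated by averaging fitted values over the
# full sample. Illustrative only; the paper's GMM-based estimator is more general.
import numpy as np

rng = np.random.default_rng(6)
N, n_sub, reps = 5000, 500, 2000  # full sample, subsample with y, replications

est_sample_mean, est_prediction = [], []
for _ in range(reps):
    x = rng.normal(1.0, 1.0, N)                # cheap covariate, observed for all
    y = 1.0 + 1.0 * x + rng.normal(0, 0.5, N)  # expensive outcome, E[y] = 2.0
    sub = rng.choice(N, n_sub, replace=False)  # y is observed only here

    est_sample_mean.append(y[sub].mean())

    X_sub = np.column_stack([np.ones(n_sub), x[sub]])
    beta, *_ = np.linalg.lstsq(X_sub, y[sub], rcond=None)
    X_all = np.column_stack([np.ones(N), x])
    est_prediction.append((X_all @ beta).mean())

print("variance, subsample mean      :", np.var(est_sample_mean))
print("variance, prediction estimator:", np.var(est_prediction))
```

Because the covariate is informative here by construction, the prediction estimator shows a smaller variance than the subsample mean, which is the efficiency argument the paper formalizes through its moment condition.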


Archive | 2018

Fair Progress?: Economic Mobility across Generations around the World

Ambar Narayan; Roy van der Weide; Alexandru Cojocaru; Christoph Lakner; Silvia Redaelli; Daniel Gerszon Mahler; Rakesh Gupta Nichanametla Ramasubbaiah; Stefan Hubert Thewissen

Fair Progress? Economic Mobility across Generations around the World examines an issue that has received much attention in the developed world and, for the first time, does so with new data and analysis covering most of the world, including developing economies. The analysis examines whether those born in poverty or in prosperity are destined to remain in the same economic circumstances into which they were born and looks back over half a century to ask whether children’s lives are better or worse than their parents’ in different parts of the world. It suggests local, national, and global actions and policies that can help break the cycle of poverty, paving the way for the next generation to realize their potential and improve their lives.


Archive | 2016

Is predicted data a viable alternative to real data?

Tomoki Fujii; Roy van der Weide

It is costly to collect the household- and individual-level data that underlie official estimates of poverty and health. For this reason, developing countries often do not have the budget to update their estimates of poverty and health regularly, even though these estimates are most needed there. One way to reduce the financial burden is to replace some of the real data with predicted data. An approach referred to as double sampling collects the expensive outcome variable for a sub-sample only while collecting the covariates used for prediction for the full sample. The objective of this study is to determine whether this would indeed allow meaningful reductions in financial costs while preserving statistical precision. The study does this using analytical calculations that consider a wide range of parameter values plausible in real applications. The benefits of using double sampling are found to be modest. There are circumstances for which the gains can be more substantial, but the study conjectures that these are the exceptions rather than the rule. The recommendation is to rely on real data whenever there is a need for new data, and use the prediction estimator to leverage existing data.
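
A back-of-the-envelope version of this cost-precision comparison is sketched below, assuming a standard two-phase regression estimator and invented cost, budget, and R-squared values rather than the study's own parameters.

```python
# Back-of-the-envelope comparison of "real data only" versus double sampling,
# in the spirit of the study's analytical calculations. The unit costs, budget,
# and R-squared below are assumptions for illustration, not the study's values,
# and the variance formula is the standard two-phase regression approximation.
import numpy as np

budget = 100_000.0
c_y, c_x = 20.0, 2.0  # unit cost of the outcome vs. its covariates
sigma2 = 1.0          # variance of the outcome
r2 = 0.5              # predictive power of the covariates (R-squared)

# Option A: collect the full questionnaire (y and x) for every household.
n_full = budget / (c_y + c_x)
var_full = sigma2 / n_full

# Option B: double sampling -- covariates for N households, outcome for n <= N.
# Approximate variance of the prediction (regression) estimator under two-phase
# sampling: sigma2 * ((1 - r2) / n + r2 / N).
best = (np.inf, None, None)
for n in np.linspace(100, n_full, 400):
    N = n + (budget - n * (c_y + c_x)) / c_x  # spend what is left on covariates
    var = sigma2 * ((1 - r2) / n + r2 / N)
    best = min(best, (var, n, N))

var_ds, n_opt, N_opt = best
print(f"variance, full data only  : {var_full:.2e}  (n = {n_full:.0f})")
print(f"variance, double sampling : {var_ds:.2e}  (n = {n_opt:.0f}, N = {N_opt:.0f})")
print(f"precision gain            : {var_full / var_ds:.2f}x")
```

With these assumed numbers the precision gain from double sampling is only around a quarter, which is in the spirit of the study's broader finding that the benefits tend to be modest.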

Collaboration


Featured co-authors of Roy van der Weide.

Top Co-Authors

Cees Diks
University of Amsterdam

Tomoki Fujii
Singapore Management University

Chris Elbers
VU University Amsterdam