Benoît Le Maux
University of Rennes
Publications
Featured research published by Benoît Le Maux.
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
Multiple criteria decision analysis is devoted to the development of decision support tools for complex decisions, especially where other methods fail to consider more than one outcome of interest. The approach is very flexible, as outcomes need not be monetized and can be expressed on ordinal or numerical scales (Sect. 11.1). The analysis starts with the construction of a value tree and the identification of relevant criteria (Sect. 11.2). The approach then proceeds with gathering information about the performance of each assessed alternative against the whole set of criteria. Values are generally normalized from 0 to 1, thereby constituting what is termed a score matrix (Sect. 11.3). Numerical weights are also assigned to criteria to reflect their relative importance (Sect. 11.4). Weights and scores are then combined to arrive at a ranking or sorting of alternatives. Should a compensatory analysis be implemented, the approach would rely on aggregation methods to build a composite indicator (Sect. 11.5). Should a non-compensatory analysis be carried out, the approach would instead examine each dimension individually (Sect. 11.6). Furthermore, a sensitivity analysis of the weights and scores can be used to explore how changes in assumptions influence the results (Sect. 11.7).
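The compensatory workflow described above (normalize a score matrix, weight the criteria, aggregate into a composite indicator) can be sketched in a few lines. The criteria names, weights and raw performance values below are invented for the illustration; the chapter itself covers the method in full.

```python
# Hypothetical illustration of a compensatory MCDA: min-max normalization
# of the score matrix, then weighted-sum aggregation into a composite score.

def normalize(column):
    """Min-max normalization of one criterion's raw scores to [0, 1]."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def composite_scores(raw_matrix, weights):
    """Weighted-sum aggregation of a normalized score matrix.

    raw_matrix: dict criterion -> list of raw performances (one per alternative)
    weights:    dict criterion -> numerical weight (summing to 1)
    """
    norm = {c: normalize(vals) for c, vals in raw_matrix.items()}
    n_alt = len(next(iter(raw_matrix.values())))
    return [sum(weights[c] * norm[c][i] for c in raw_matrix)
            for i in range(n_alt)]

# Three alternatives assessed against two criteria (higher is better).
raw = {"effectiveness": [40, 70, 100], "equity": [8, 10, 2]}
w = {"effectiveness": 0.6, "equity": 0.4}
scores = composite_scores(raw, w)
best = max(range(len(scores)), key=scores.__getitem__)
```

A non-compensatory analysis would instead compare the normalized columns criterion by criterion, without letting a strong score on one dimension offset a weak score on another.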
Economics and Politics | 2009
Benoît Le Maux
This article examines how policy-makers solve problems within local representative democracies. It will be argued that politicians cannot undertake an exhaustive search of all possible policy choices; instead, they might use an incremental strategy such as the hill-climbing heuristic. These possibilities will be formalized using the median voter model as an analytical framework. The corresponding models will then be estimated over a set of French jurisdictions (the départements). The empirical results lend support to the hill-climbing model, given that: (1) for social welfare and secondary school expenditures, the influence of the past is significant; (2) a pure model of incrementalism, without any exogenous variables, is not appropriate for explaining the behavior of departmental council members; and (3) the impact of the past is more significant and stronger when expenditure levels are higher.
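The hill-climbing heuristic invoked in the article can be sketched as follows: rather than jumping straight to the (unknown) optimum, the decision-maker adjusts the status-quo expenditure by small steps as long as the move improves a payoff function. The quadratic payoff, its peak, and the step size below are illustrative assumptions, not the article's estimated model.

```python
# A minimal sketch of one-dimensional hill climbing from a status-quo policy.
# The payoff function stands in for the (unobserved) median voter's utility.

def hill_climb(payoff, start, step=1.0, max_iter=1000):
    """Greedy incremental search: move while a neighboring policy improves payoff."""
    x = start
    for _ in range(max_iter):
        if payoff(x + step) > payoff(x):
            x += step
        elif payoff(x - step) > payoff(x):
            x -= step
        else:
            break  # local optimum: no neighboring policy does better
    return x

# Suppose the median voter's payoff peaks at an expenditure level of 50.
payoff = lambda e: -(e - 50.0) ** 2
chosen = hill_climb(payoff, start=30.0)
```

The dependence of the chosen level on the starting point is what makes past expenditure significant in the estimated models.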
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
Statistical Tools for Program Evaluation
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
In many cases, the presence of confounding factors makes the identification of causal effects rather difficult. One solution to avoid potential bias is to run a randomized controlled experiment, either in the form of a clinical trial or a field experiment (Sect. 13.1). The basic tenet is to assign the subjects to a control group and a treatment group, such that they share similar characteristics on average (Sect. 13.2). The impact of an intervention is then obtained by comparing the average outcomes observed in both groups and testing whether the difference is significant (Sect. 13.3). An important issue is to assess the risks of type I and type II errors, i.e. the probabilities that the statistical test yields the wrong conclusion (Sect. 13.4). Controlling for those risks implies finding the minimum number of subjects to enroll in the experiment to achieve a given statistical power (Sect. 13.5). Another issue is the choice of an indicator (e.g., absolute risk reduction, relative risk ratio, odds ratio, number needed to treat) to summarize the successes and failures observed in each group (Sect. 13.6). The analysis can also be extended to a more general framework where the timing of event occurrence is explicitly accounted for, via the estimation of survival curves with the Kaplan-Meier approach (Sect. 13.7) and the implementation of the Mantel-Haenszel test (Sect. 13.8).
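The binary-outcome indicators listed for Sect. 13.6 are simple functions of the event counts in each group. A small sketch, with invented counts (10 events among 100 treated subjects versus 20 among 100 controls):

```python
# Absolute risk reduction (ARR), relative risk ratio (RR), odds ratio (OR)
# and number needed to treat (NNT) for a two-group binary outcome.

def risk_indicators(events_t, n_t, events_c, n_c):
    """Compare event risks between a treatment and a control group."""
    risk_t, risk_c = events_t / n_t, events_c / n_c
    arr = risk_c - risk_t                      # absolute risk reduction
    rr = risk_t / risk_c                       # relative risk ratio
    odds_t = events_t / (n_t - events_t)
    odds_c = events_c / (n_c - events_c)
    return {"ARR": arr, "RR": rr, "OR": odds_t / odds_c, "NNT": 1 / arr}

ind = risk_indicators(10, 100, 20, 100)
```

Here the treatment halves the risk (RR = 0.5) and ten subjects must be treated to prevent one event (NNT = 10).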
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
This chapter reviews the different statistical methods used to describe a sample and make inference for a larger population. Despite its apparent simplicity, one should not underestimate the importance of the task, especially in the context of public policies. Providing basic descriptive statistics to point out the issues that must be addressed is a preliminary and necessary step in program evaluation (Sect. 3.1). One-way and two-way tables summarize the data in a very efficient manner (Sect. 3.2). Bar graphs, pie charts, histograms, line graphs and radar charts can also be generated at the evaluator’s convenience (Sect. 3.3). To go further, numerical analysis rests on measures of central tendency (mode, median, and mean), and of dispersion (interquartile range, variance, standard deviation, coefficient of variation) (Sect. 3.4). The asymmetry of a distribution and its “tailedness” can be approximated by the skewness and kurtosis coefficients (Sect. 3.5). Last, in most cases, the description of a database is done in the context of a sample survey. Any generalization to the population of interest thus involves the calculation of confidence intervals (Sect. 3.6). Several examples illustrate the methods.
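The measures of Sects. 3.4 and 3.6 can be combined into a compact summary function. The sample values are fabricated, and the normal-approximation critical value 1.96 (95% level) is an illustrative simplification of the chapter's treatment of confidence intervals:

```python
# Descriptive summary (central tendency and dispersion) plus a 95%
# normal-approximation confidence interval for the mean.
import math
import statistics

def describe(sample):
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)            # sample standard deviation
    half = 1.96 * sd / math.sqrt(len(sample))
    return {"mean": mean,
            "median": statistics.median(sample),
            "sd": sd,
            "cv": sd / mean,                 # coefficient of variation
            "ci95": (mean - half, mean + half)}

summary = describe([12, 15, 11, 14, 13, 16, 10, 13])
```

For small samples the chapter's exact approach would replace 1.96 with the relevant Student-t quantile.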
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
The construction of a reliable and relevant database is a key aspect of any statistical study. Not only will misleading information create bias and mistakes (sampling, coverage or measurement errors, etc.), but it may also seriously affect public decisions if the study is used for guiding policy-makers. The sample should be sufficiently representative of the population of interest (Sect. 2.1). The time needed to collect and process the data is an important issue in this respect (Sect. 2.2). As the purpose of a survey is to obtain sincere responses from the respondents, the design of the questionnaire also has its importance, in particular the sequencing, phrasing and format of questions (Sect. 2.3). The process of data collection should not be neglected either. The analyst must decide which sampling method is most relevant (e.g., non-probability vs. probability sampling) and assess the efficacy of data collection during the survey process (Sect. 2.4). Last, the coding of variables and the investigation of measurement and nonresponse errors are essential steps of data construction (Sect. 2.5).
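One of the probability sampling designs alluded to in Sect. 2.4, proportionate stratified sampling, can be sketched briefly. The sampling frame and the urban/rural strata below are fabricated for the example:

```python
# Proportionate stratified sampling: allocate the sample size across strata
# in proportion to each stratum's share of the frame, then draw at random.
import random

def stratified_sample(frame, stratum_of, n):
    """Draw n units, allocating the sample proportionally to stratum sizes."""
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for units in strata.values():
        k = round(n * len(units) / len(frame))
        sample.extend(random.sample(units, k))
    return sample

random.seed(1)
# A frame of 100 respondents: 60 urban, 40 rural.
frame = [("urban", i) for i in range(60)] + [("rural", i) for i in range(40)]
sample = stratified_sample(frame, stratum_of=lambda u: u[0], n=10)
```

A sample of 10 then contains 6 urban and 4 rural respondents, mirroring the frame and so protecting representativeness on that characteristic.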
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
Since public projects have consequences on individual lives, the estimation of welfare changes is an essential step in the evaluation process (Sect. 6.1). To this end, this chapter gives several tools for eliciting individual preferences. The first set of methods consists of stated preferences techniques, whereby individuals declare their perceptions of the project and its consequences. Those methods include contingent valuation and discrete choice experiment. The former consists in directly asking a sample of individuals about their willingness to pay for a program (Sect. 6.2). Discrete choice experiment, on the other hand, asks the agents to compare a set of public goods or services. It estimates a multi-attribute utility function based on the idea that agents' preferences for goods depend on the characteristics those goods contain (Sect. 6.3). The second set of methods comprises revealed preferences techniques, where preferences are inferred from what is observed on existing markets. For instance, the hedonic pricing method values the implicit price of non-market goods, e.g., proximity of a school or air quality, from their impact on real estate market prices (Sect. 6.4). In the same vein, the travel cost method estimates the demand for recreational sites based on the costs incurred by people for visiting the site (Sect. 6.5). Last, the third set of methods is commonly used for the assessment of public health decisions. It aims to estimate directly the utility levels (e.g., QALY) associated with particular health states (Sect. 6.6).
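The hedonic pricing logic of Sect. 6.4 can be illustrated with a toy regression: regress housing prices on a non-market attribute and read the implicit price off the slope. The data and coefficients below are invented, and a real hedonic study would of course control for many other attributes:

```python
# Toy hedonic pricing sketch: simple OLS of price on distance to the
# nearest school, computed in closed form.

def ols_slope_intercept(x, y):
    """Closed-form simple OLS fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

distance_km = [0.5, 1.0, 1.5, 2.0, 2.5]
price_keur = [260, 250, 240, 230, 220]   # price falls as distance grows
a, b = ols_slope_intercept(distance_km, price_keur)
# b is the implicit (marginal) price of one extra kilometre of distance.
```

Here each extra kilometre from the school lowers the price by 20 (in thousands of euros), which is the implicit value households place on proximity.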
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
Program evaluation aims to capture the impact of collective projects on citizens, that is, their economic, social and environmental consequences for individual and community welfare. The task is challenging, as it is not easy to put a value on items such as health, education or changes in the environment. This chapter provides an overview of the program evaluation methodology, and offers an introduction to the present book. Program evaluation is defined as a process that consists in collecting, analyzing, and using information to assess the relevance of a policy, its effectiveness and its efficiency (Sect. 1.1). The approach usually starts with a context analysis, which first relies on descriptive and inferential statistical tools to point out issues that must be addressed, then measures welfare changes associated with the program (Sect. 1.2). Afterwards, an ex ante evaluation can be conducted to set up solutions and to select a particular strategy among the competing ones (Sect. 1.3). Once the selected strategy has been implemented, ex post evaluation techniques assess the extent to which planned outcomes have been achieved as a result of the program, ceteris paribus (Sect. 1.4). The last section of this chapter offers a brief description of how to use the book (Sect. 1.5).
Archive | 2017
Jean-Michel Josselin; Benoît Le Maux
One goal of statistical studies is to highlight associations between pairs of variables. This is particularly useful when one wants to get a clear picture of a multi-dimensional data set and motivate a specific policy intervention (Sect. 4.1). Yet, the choice of a method is not straightforward. Testing for correlation is the relevant approach to investigate a linear association between two numerical variables (Sect. 4.2). The chi-square test is an inferential test that uses data from a sample to draw conclusions about the relationship between two categorical variables (Sect. 4.3). When one variable is numerical and the other is categorical, the usual approach is to test for differences between means or to implement an analysis of variance (Sect. 4.4). When faced with more than two variables, it is also possible to provide a multidimensional representation of the problem using methods such as principal component analysis (Sect. 4.5) and multiple correspondence analysis (Sect. 4.6). The idea is to reduce the dimensionality of a data set by plotting all the observations on 2D graphs describing how observations cluster with respect to various characteristics. These groups can for instance serve to identify the beneficiaries of a particular intervention. Several examples using R-CRAN are included in this chapter to illustrate the different methods.
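Two of the bivariate tools above, the correlation coefficient (Sect. 4.2) and the chi-square statistic of a two-way table (Sect. 4.3), are short computations. The data below are fabricated, and the sketch is in Python although the book's own examples use R:

```python
# Pearson correlation and chi-square statistic computed from scratch.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two numerical variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

def chi_square(table):
    """Chi-square statistic for a contingency table given as a list of rows."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    return sum((obs - rt * ct / total) ** 2 / (rt * ct / total)
               for r, rt in zip(table, row_tot)
               for obs, ct in zip(r, col_tot))

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])      # perfectly linear pair
chi2 = chi_square([[20, 30], [30, 20]])        # invented 2x2 table
```

The chi-square statistic would then be compared with the critical value of the chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom to conclude on independence.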