Open Journal of Forestry | 2021
Calibration of a Confidence Interval for a Classification Accuracy
Abstract
Coverage of nominal 95% confidence intervals for a proportion estimated from a sample obtained under a complex survey design, or for a proportion estimated as a ratio of two random variables, can depart significantly from its target. Effective calibration methods exist for intervals for a proportion derived from a single binary study variable, but not for estimates of thematic classification accuracy. To promote the calibration of confidence intervals within the context of land-cover mapping, this study first illustrates a common problem of under- and over-coverage with standard confidence intervals, and then proposes a simple and fast calibration that will more often than not improve coverage. The demonstration uses simulated sampling from a classified map with four classes, with the reference class known for every unit in a population of 160,000 units arranged in a square array. The simulations include four common probability sampling designs for accuracy assessment and three sample sizes. Statistically significant over- and under-coverage was present in estimates of user's accuracy (UA) and producer's accuracy (PA), as well as in estimates of class area proportion. A calibration with Bayes intervals for UA and PA was most efficient with smaller sample sizes and the two cluster sampling designs.
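The coverage problem motivating the study can be illustrated with a minimal Monte Carlo sketch (not from the paper; parameter values are illustrative): the standard Wald interval for a binomial proportion is nominally 95%, yet its actual coverage can fall well short of 0.95 when the true proportion is high and the sample is small, a situation typical of user's and producer's accuracy estimates.

```python
import math
import random

def wald_coverage(p, n, trials=20000, z=1.96, seed=42):
    """Monte Carlo estimate of the actual coverage of a nominal
    95% Wald interval phat +/- z*sqrt(phat*(1-phat)/n) for a
    binomial proportion p estimated from n Bernoulli draws."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # simulate one accuracy-assessment sample of size n
        x = sum(rng.random() < p for _ in range(n))
        phat = x / n
        half = z * math.sqrt(phat * (1.0 - phat) / n)
        if phat - half <= p <= phat + half:
            hits += 1
    return hits / trials

# A class accuracy near 0.9 estimated from n = 30 reference units:
# actual coverage lands well below the nominal 0.95.
print(round(wald_coverage(0.9, 30), 3))
```

Calibration methods, including the Bayes intervals considered in the study, aim to pull this realized coverage back toward the nominal level.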