Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where R. Webster is active.

Publication


Featured research published by R. Webster.


International Journal of Geographic Information Systems | 1990

Kriging: a method of interpolation for geographical information systems

M. A. Oliver; R. Webster

Geographical information systems could be improved by adding procedures for geostatistical spatial analysis to existing facilities. Most traditional methods of interpolation are based on mathematical as distinct from stochastic models of spatial variation. Spatially distributed data behave more like random variables, however, and regionalized variable theory provides a set of stochastic methods for analysing them. Kriging is the method of interpolation deriving from regionalized variable theory. It depends on expressing spatial variation of the property in terms of the variogram, and it minimizes the prediction errors which are themselves estimated. We describe the procedures and the way we link them using standard operating systems. We illustrate them using examples from case studies, one involving the mapping and control of soil salinity in the Jordan Valley of Israel, the other in semi-arid Botswana where the herbaceous cover was estimated and mapped from aerial photographic survey.
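The procedure the abstract outlines (express spatial variation with a variogram, then krige) can be sketched minimally in one dimension. This is an illustrative sketch, not the authors' code: the spherical variogram parameters (`nugget`, `sill`, `rng`) and the observations are invented for demonstration.

```python
import numpy as np

def spherical_variogram(h, nugget=0.0, sill=1.0, rng=100.0):
    """Spherical model: semivariance rises from the nugget to the sill at the range."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)  # exact at zero separation

def ordinary_kriging(x_obs, z_obs, x0, **vgm):
    """Ordinary kriging at x0: weights minimize the prediction variance
    subject to summing to one (mu is the Lagrange multiplier)."""
    n = len(x_obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(np.abs(x_obs[:, None] - x_obs[None, :]), **vgm)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_variogram(np.abs(x_obs - x0), **vgm)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z_obs, w @ b[:n] + mu  # estimate, kriging variance

x_obs = np.array([0.0, 30.0, 70.0, 100.0])   # sample locations (m)
z_obs = np.array([4.2, 4.8, 5.5, 5.9])       # e.g. salinity readings
est, kvar = ordinary_kriging(x_obs, z_obs, 50.0, nugget=0.05, sill=1.0, rng=80.0)
```

Kriging is an exact interpolator: predicting at an observed location returns the observed value with zero kriging variance, while between observations the variance quantifies the estimated prediction error the abstract refers to.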


Advances in soil sciences (USA) | 1985

Quantitative spatial analysis of soil in the field

R. Webster

Soil scientists have recognized variation in soil from place to place for many years. They have portrayed the variation by dividing large regions into smaller parcels each of which is relatively homogeneous, and they have classified the soil to show similarities between soil in widely separated parcels. This procedure, which may be regarded as standard soil survey practice, requires appreciation of the scale of change, the abruptness or otherwise of change, the degree of correlation among different soil properties, and of relations in the landscape. Yet that appreciation has almost always been intuitive. Good soil surveyors have needed flair. Rarely have they gained their appreciation by quantitative analysis.


Computers & Geosciences | 1981

The design of optimal sampling schemes for local estimation and mapping of regionalized variables—I: Theory and method

A.B. McBratney; R. Webster; T.M. Burgess

Surveys of materials at the earth's surface, especially soil, can be planned to make the best use of the resources for survey or to achieve a certain minimum precision provided the nature of spatial dependence is known already. A method is described for designing optimal sampling schemes. It is based on the theory of regionalized variables, and assumes that spatial dependence is expressed quantitatively in the form of the semi-variogram. It assumes also that the maximum standard error of a kriged estimate is a reasonable measure of the goodness of a sampling scheme. By sampling on a regular triangular grid, the maximum standard error is kept to a minimum for any given sampling, but a square grid is approximately equivalent where variation is isotropic. Given the semi-variogram for a variable, the sampling density for any prescribed maximum standard error is determined. Where variation is geometrically anisotropic, the same method is employed to determine sample spacing in the direction of maximum change, and the grid mesh elongated in the perpendicular direction in proportion to the anisotropy ratio.
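The key idea of the method, that the kriging variance depends only on the sampling configuration and the semi-variogram, not on the data values, so a grid spacing can be chosen before sampling, can be sketched as follows. This is a simplified illustration, not the authors' algorithm: it evaluates the worst-placed point of a square grid cell from only the four surrounding nodes, and the spherical-model parameters are invented.

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def max_kriging_se(spacing, nugget, sill, rng):
    """Kriging standard error at a square-grid cell centre (the worst-placed
    point), estimated from the four surrounding grid nodes."""
    nodes = np.array([[0, 0], [spacing, 0], [0, spacing], [spacing, spacing]], float)
    target = np.array([spacing / 2.0, spacing / 2.0])
    n = len(nodes)
    A = np.ones((n + 1, n + 1))
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
    A[:n, :n] = spherical(d, nugget, sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(nodes - target, axis=1), nugget, sill, rng)
    sol = np.linalg.solve(A, b)
    return float(np.sqrt(sol[:n] @ b[:n] + sol[n]))

def spacing_for_tolerance(max_se, nugget, sill, rng, lo=1.0, hi=200.0):
    """Largest grid spacing whose worst-case standard error stays within max_se."""
    candidates = np.linspace(lo, hi, 400)
    ok = [s for s in candidates if max_kriging_se(s, nugget, sill, rng) <= max_se]
    return max(ok) if ok else None

s = spacing_for_tolerance(0.6, nugget=0.1, sill=1.0, rng=100.0)
```

Coarser grids give larger worst-case errors, so the largest spacing meeting the tolerance is the most economical scheme, which is the sense in which the sampling is "optimal".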


Geoderma | 2001

Modelling soil variation: past, present, and future

Gerard B. M. Heuvelink; R. Webster

The soil mantles the land, except where there is bare rock or ice, and it varies more or less continuously. Many of its properties change continuously in time, too. We can measure the soil at only a finite number of places and times on small supports, and any statement concerning the soil at other places or times involves prediction. Variation in soil is also complex, so complex that no description of it can be complete, and so prediction is inevitably uncertain. Soil scientists should be able to quantify this uncertainty, and manage it. This means representing the variation by models that may be in part deterministic, but cannot be wholly so; they must have some random element to represent the unpredictable variation. Here we review three families of statistically based models of soil variation that are currently in use and trace their development since the mid-1960s. In particular, we consider classification and geostatistics for modelling the spatial variation, time series analysis and physically based approaches for modelling temporal variation, and space–time Kalman filtering for predicting soil conditions in space and time simultaneously. Each of these attaches to its predictions quantitative estimates of the prediction errors. Past, present and future research has been, is, and will be directed to the development of models that diminish these errors. A challenge for the future is to investigate approaches that merge process knowledge with measurements. For soil survey, this would be achieved by integration of pedogenetic knowledge and field observations through the use of data assimilation techniques, such as the space–time Kalman filter.
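The Kalman filtering the review points to can be illustrated in its simplest scalar form: a process model propagates the soil state between times, and measurements, when available, are assimilated in proportion to their precision. This is a generic textbook sketch, not the Heuvelink and Webster formulation; the state, noise variances and observations are all invented.

```python
import numpy as np

def kalman_1d(z_meas, x0, P0, F, Q, R):
    """Scalar Kalman filter. x evolves as x_t = F * x_{t-1} + process noise (var Q);
    measurements z (var R) are assimilated when present (None = no observation)."""
    x, P = x0, P0
    states, variances = [], []
    for z in z_meas:
        x, P = F * x, F * P * F + Q          # predict with the process model
        if z is not None:
            K = P / (P + R)                  # Kalman gain: trust in data vs model
            x = x + K * (z - x)              # update the state
            P = (1.0 - K) * P                # shrink the uncertainty
        states.append(x)
        variances.append(P)
    return np.array(states), np.array(variances)

obs = [5.1, None, 5.4, 5.3, None, 5.6]       # None = a time with no measurement
xs, Ps = kalman_1d(obs, x0=5.0, P0=1.0, F=1.0, Q=0.02, R=0.1)
```

The prediction variance grows through unobserved times and shrinks at each assimilation, which is exactly the quantified uncertainty the review argues models of soil variation must carry.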


Soil Science | 1983

How many observations are needed for regional estimation of soil properties?

Alex B. McBratney; R. Webster

A common task in regional studies of soil is to determine the mean values of particular soil properties from samples. Estimates of the number of observations needed for this purpose have usually been based on classical sampling theory without regard to spatial dependence in the data. As a result they have been unduly exaggerated and have often daunted investigators from pursuing their aims. This paper demonstrates a method for determining sample size, that is, the number of observations, taking account of spatial dependence. The method depends on knowing the semivariogram for the property of interest, which is used to calculate the variances in the neighborhood of each observation point. The variances are then pooled to form the global variance from which the standard error can be calculated. The pooled value is minimized for a given sample size if all neighborhoods are of the same size, i.e., if the sampling points lie on a regular grid. If variation is isotropic, then an equilateral triangular grid is slightly better than a square one, though the latter will usually be preferred for convenience. Where there is simple anisotropy, a nonsquare rectangular grid aligned with its longer intervals in the direction of least variation is practically optimal. Examples show the relations between standard errors and sample sizes when sampling on regular grids and from which sample sizes can be chosen to achieve any desired precision. In all instances the sampling effort determined this way is less, and can be very much less, than would have been judged necessary using the classical approach.
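The contrast the paper draws can be sketched numerically: treat each grid cell's observation as representing the cell, estimate the per-cell variance from the semivariogram, and pool over the cells. A minimal Monte Carlo sketch under an invented spherical model (the paper's worked examples use real variograms) shows the spatially informed standard error falling below the classical one.

```python
import numpy as np

gen = np.random.default_rng(0)

def spherical(h, nugget, sill, rng):
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def grid_se_of_mean(n_per_side, region, nugget, sill, rng, n_mc=4000):
    """SE of the regional mean from a square grid: each observation (at its
    cell centre) stands for the cell; the cell's estimation variance is the
    mean semivariance to random points in the cell, pooled over all n cells."""
    cell = region / n_per_side
    offsets = gen.uniform(-cell / 2.0, cell / 2.0, size=(n_mc, 2))
    cell_var = spherical(np.linalg.norm(offsets, axis=1), nugget, sill, rng).mean()
    n = n_per_side ** 2
    return float(np.sqrt(cell_var / n))

def classical_se_of_mean(n, sill):
    """Classical SE, treating the n observations as independent with variance sill."""
    return float(np.sqrt(sill / n))

# 10 x 10 grid over a 1 km square, range 200 m: spatial dependence helps.
se_grid = grid_se_of_mean(10, 1000.0, nugget=0.1, sill=1.0, rng=200.0)
se_classical = classical_se_of_mean(100, 1.0)
```

Because points within a cell are correlated with their cell's observation, the pooled variance is below the full sill, so fewer observations achieve a given precision than classical theory suggests.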


Geoderma | 2000

Is soil variation random?

R. Webster

A typical geostatistical analysis of soil data proceeds on the assumption that the properties of interest are the outcomes of random processes. Is the assumption reasonable? Many factors have contributed to the soil as we see it, both in the parent material and during its formation. Each has a physical cause, each must obey the laws of physics, and each is in principle deterministic except at the sub-atomic level. The outcome must therefore be deterministic. Yet such is the complexity of the factors in combination, their variation over time, and the incompleteness of our knowledge, that the outcome, the soil, appears to us as if it were random. Only when we see the results of man's activities, such as the division of the land into fields, the imposition of irrigation, and ditches for drainage, do we recognize organized control. Clearly, the soil is not random, but except in the latter instances we are unlikely to go far wrong if we assume that it is. A second assumption underlying many geostatistical analyses is that of stationarity. We might ask if this holds. In the real world, we only ever have one realization of the random process in a particular region, and so the question has no answer. We can look to see whether regional averages are the same when we move from region to region. This means treating data from different regions as if they were different realizations of the same generating process. We should therefore change our question to ‘is a stationary model of the soil realistic?’ We can then examine the reality against the assumptions of our model. The soil is neither random nor stationary, but our models of it may be one or other or both. We should therefore ask whether our models are reasonable in the circumstances and whether they are profitable in leading to accurate predictions.


Environmental Pollution | 1994

Geostatistical analysis of soil contamination in the Swiss Jura

O. Atteia; J.-P. Dubois; R. Webster

The topsoil of a 14.5 km² region of the Swiss Jura has been surveyed to identify the distributions of trace metals in it. The soil was sampled at 366 sites selected by combining a square grid and nesting. Concentrations of seven potentially toxic metals, namely Cd, Co, Cr, Cu, Ni, Pb and Zn, were measured. Land use and geology (stratigraphy) were also recorded. Variograms were bounded in the range from 110 m to 1500 m with contributions to the variance at all distances exceeding 6 m. The variograms of Cd, Cr, Cu and Pb are dominated by short range correlation, those of Co and Ni by correlation of long range, and Zn is intermediate. The concentrations were estimated at the nodes of a fine grid by ordinary block kriging and then contoured to produce maps. The maps of Co and Ni have a coarse patchy pattern similar to that of the geology, suggesting that these metals derive from the bedrock. This is supported by analysing the variance by geology. Copper and Pb have finer patterns of distribution, and are more likely to have been added with fertilizer or manure or domestic waste. Cadmium could originate from human activities, such as smelters or fertilizer spreading, or from specific geological deposits, such as moraine.


Computers & Geosciences | 1981

The design of optimal sampling schemes for local estimation and mapping of regionalized variables—II: Program and examples

A.B. McBratney; R. Webster

A FORTRAN IV program, OSSFIM, is presented for calculating estimation variances when interpolating by kriging from regular rectangular and triangular grids of data and a previously determined semi-variogram. The variances are computed for a range of grid spacings and block sizes, and the results graphed. The user chooses a block size, and can read from the appropriate graph the sample spacing corresponding to any prescribed maximum tolerable error. This is the optimal sampling scheme. Use of the program is illustrated with two examples showing different types of variation in soil. In one, the pH of topsoil is isotropic with a spherical semi-variogram and negligible nugget variance. An equilateral triangular grid is the best sampling scheme; it is approximately 10 per cent more efficient than a square grid. In the other example, variation is linear but anisotropic with a large nugget variance. In these circumstances, a triangular grid has no advantage over a rectangular one, which should be elongated in the ratio 1.88 to 1 in the direction of minimum variation.


Mathematical Geosciences | 1989

A geostatistical basis for spatial weighting in multivariate classification

M. A. Oliver; R. Webster

Earth scientists and land managers often wish to group sampling sites that are both similar with respect to their properties and near to one another on the ground. This paper outlines the geostatistical rationale for such spatial grouping and describes a multivariate procedure to implement it. Sample variograms are calculated from the original data or their leading principal components and then the parameters of the underlying functions are estimated. A dissimilarity matrix is computed for all sampling sites, preferably using Gower's general similarity coefficient. Dissimilarities are then modified using the variogram to incorporate the form and extent of spatial variation. A nonhierarchical classification of sampling sites is performed on the leading latent vectors of the modified dissimilarity matrix by dynamic clustering to an optimum. The technique is illustrated with results of its application to soil survey data from two small areas in Britain and from a transect. In the case of the latter, results of spatially weighted classifications are compared with those of strict segmentation. An appendix lists a Genstat program for a spatially constrained classification using a spherical variogram as an example.
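One simple way to realise "dissimilarities modified using the variogram", shown here as an assumption rather than the paper's exact formula, is to scale each pairwise dissimilarity by the modelled semivariance of the sites' separation, normalised by the sill, so that near-together sites look more alike than their properties alone would suggest.

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def spatially_weighted_dissimilarity(D, coords, nugget, sill, rng):
    """Multiply each pairwise property dissimilarity by gamma(h)/sill, where h is
    the distance between the two sites; the factor is 0 at zero separation and
    1 once sites are further apart than the variogram range."""
    coords = np.asarray(coords, dtype=float)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    return D * spherical(h, nugget, sill, rng) / sill

# Three sites with identical raw dissimilarities but different separations.
D = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
coords = [[0.0, 0.0], [10.0, 0.0], [300.0, 0.0]]
Dw = spatially_weighted_dissimilarity(D, coords, nugget=0.0, sill=1.0, rng=100.0)
```

Sites 1 and 2 lie 10 m apart, well inside the 100 m range, so their weighted dissimilarity shrinks; sites 1 and 3 lie beyond the range and keep their full dissimilarity, which is what pushes a subsequent clustering toward spatially coherent groups.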


Global Change Biology | 2014

Baseline map of organic carbon in Australian soil to support national carbon accounting and monitoring under climate change.

Raphael A. Viscarra Rossel; R. Webster; Elisabeth N. Bui; Jeff Baldock

We can effectively monitor soil condition—and develop sound policies to offset the emissions of greenhouse gases—only with accurate data from which to define baselines. Currently, estimates of soil organic C for countries or continents are either unavailable or largely uncertain because they are derived from sparse data, with large gaps over many areas of the Earth. Here, we derive spatially explicit estimates, and their uncertainty, of the distribution and stock of organic C in the soil of Australia. We assembled and harmonized data from several sources to produce the most comprehensive set of data on the current stock of organic C in soil of the continent. Using them, we have produced a fine spatial resolution baseline map of organic C at the continental scale. We describe how we made it by combining the bootstrap, a decision tree with piecewise regression on environmental variables and geostatistical modelling of residuals. Values of stock were predicted at the nodes of a 3-arc-sec (approximately 90 m) grid and mapped together with their uncertainties. We then calculated baselines of soil organic C storage over the whole of Australia, its states and territories, and regions that define bioclimatic zones, vegetation classes and land use. The average amount of organic C in Australian topsoil is estimated to be 29.7 t ha⁻¹ with 95% confidence limits of 22.6 and 37.9 t ha⁻¹. The total stock of organic C in the 0–30 cm layer of soil for the continent is 24.97 Gt with 95% confidence limits of 19.04 and 31.83 Gt. This represents approximately 3.5% of the total stock in the upper 30 cm of soil worldwide. Australia occupies 5.2% of the global land area, so the total organic C stock of Australian soil makes an important contribution to the global carbon cycle, and it provides a significant potential for sequestration. As the most reliable approximation of the stock of organic C in Australian soil in 2010, our estimates have important applications. They could support Australia's National Carbon Accounting System, help guide the formulation of policy around carbon offset schemes, improve Australia's carbon balances, serve to direct future sampling for inventory, guide the design of monitoring networks and provide a benchmark against which to assess the impact of changes in land cover, land management and climate on the stock of C in Australia. In this way, these estimates would help us to develop strategies to adapt and mitigate the effects of climate change.

Collaboration


Dive into R. Webster's collaborations.

Top Co-Authors

R.M. Lark (British Geological Survey)

M. A. Oliver (University of Birmingham)

B.G. Rawlins (British Geological Survey)

B.P. Marchant (British Geological Survey)