Linda See
International Institute for Applied Systems Analysis
Publications
Featured research published by Linda See.
Environmental Modelling and Software | 2007
Christian W. Dawson; Robert J. Abrahart; Linda See
This paper presents details of an open access web site that can be used by hydrologists and other scientists to evaluate time series models. There is at present a general lack of consistency in the way in which hydrological models are assessed that handicaps the comparison of reported studies and hinders the development of superior models. The HydroTest web site provides a wide range of objective metrics and consistent tests of model performance to assess forecasting skill. This resource is designed to promote future transparency and consistency between reported models and includes an open forum that is intended to encourage further discussion and debate on the topic of hydrological performance evaluation metrics. It is envisaged that the provision of such facilities will lead to the creation of superior forecasting metrics and the development of international benchmark time series datasets.
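The objective metrics hosted on a site like this include standard time-series skill scores such as the root mean squared error, mean absolute error and Nash-Sutcliffe efficiency. The following is a minimal sketch of those three metrics in Python, with invented observed and simulated flow values; it is illustrative and not the HydroTest implementation.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def mae(obs, sim):
    """Mean absolute error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean(np.abs(sim - obs))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical hourly river flows (m3/s) and corresponding model output.
observed  = np.array([5.1, 5.3, 6.0, 8.2, 12.5, 10.1, 7.4, 6.2])
simulated = np.array([5.0, 5.6, 6.4, 7.9, 11.8, 10.6, 7.9, 6.0])

print(f"RMSE = {rmse(observed, simulated):.3f}")
print(f"MAE  = {mae(observed, simulated):.3f}")
print(f"NSE  = {nash_sutcliffe(observed, simulated):.3f}")
```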
Hydrological Processes | 2000
Robert J. Abrahart; Linda See
The forecasting power of neural network (NN) and autoregressive moving average (ARMA) models is compared. Modelling experiments were based on a 3-year period of continuous river flow data for two contrasting catchments: the Upper River Wye and the River Ouse. Model performance was assessed using global and storm-specific quantitative evaluation procedures. The NN and ARMA solutions provided similar results, although naive predictions yielded poorer estimates. The annual data were then grouped into a set of distinct hydrological event types using a self-organizing map, and two rising event clusters were modelled using the NN technique. These alternative investigations provided encouraging results.
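A rough sketch of this style of comparison is given below, using scikit-learn's MLPRegressor, statsmodels' ARIMA and a naive persistence forecast on a synthetic flow series; the data, lag structure and model settings are illustrative and not those used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
flow = 10 + np.cumsum(rng.normal(0, 0.5, 400))  # synthetic river flow series

def lagged(series, n_lags=3):
    """Build a lagged input matrix: previous n_lags values predict the next one."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = lagged(flow)
split = 300
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Lag-based neural network.
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
nn_pred = nn.predict(X_te)

# ARMA(2,1) fitted to the training period, forecast over the test period.
arma = ARIMA(flow[:split + 3], order=(2, 0, 1)).fit()
arma_pred = arma.forecast(steps=len(y_te))

# Naive (persistence) forecast: the last observed value.
naive_pred = flow[split + 2:-1]

rmse = lambda a, b: np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
print("NN   ", rmse(y_te, nn_pred))
print("ARMA ", rmse(y_te, arma_pred))
print("Naive", rmse(y_te, naive_pred))
```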
Global Change Biology | 2015
Steffen Fritz; Linda See; Ian McCallum; Liangzhi You; Andriy Bun; Elena Moltchanova; Martina Duerauer; Fransizka Albrecht; C. Schill; Christoph Perger; Petr Havlik; A. Mosnier; Philip K. Thornton; Ulrike Wood-Sichra; Mario Herrero; Inbal Becker-Reshef; Christopher O. Justice; Matthew C. Hansen; Peng Gong; Sheta Abdel Aziz; Anna Cipriani; Renato Cumani; Giuliano Cecchi; Giulia Conchedda; Stefanus Ferreira; Adriana Gomez; Myriam Haffani; François Kayitakire; Jaiteh Malanding; Rick Mueller
A new 1 km global IIASA-IFPRI cropland percentage map for the baseline year 2005 has been developed, which integrates a number of individual cropland maps at global, regional and national scales. The individual map products include existing global land cover maps such as GlobCover 2005 and MODIS v.5, regional maps such as AFRICOVER, and national maps from mapping agencies and other organizations. The different products are ranked at the national level using crowdsourced data from Geo-Wiki to create a map that reflects the likelihood of cropland. Calibration with national and subnational crop statistics was then undertaken to distribute the cropland within each country and subnational unit. The new IIASA-IFPRI cropland product has been validated using very high-resolution satellite imagery via Geo-Wiki and has an overall accuracy of 82.4%. It has also been compared with the EarthStat cropland product and shows a lower root mean square error on an independent data set collected from Geo-Wiki. The first ever global field size map was produced at the same resolution as the IIASA-IFPRI cropland map, based on interpolation of field size data collected via a Geo-Wiki crowdsourcing campaign. A validation exercise of the global field size map revealed satisfactory agreement with control data, particularly given the relatively modest size of the field size data set used to create the map. Both products are critical inputs to global agricultural monitoring in the frame of GEOGLAM and will serve the global land modelling and integrated assessment community, in particular for improving land use models that require baseline cropland information. These products are freely available for download from the http://cropland.geo-wiki.org website.
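As a toy illustration of the calibration step, the sketch below scales a small grid of cropland percentages so that the implied cropland area matches a hypothetical national statistic; the real calibration operates per country and subnational unit on 1 km cells, and all numbers here are invented.

```python
import numpy as np

cell_area_ha = 100.0                                    # e.g. a 1 km x 1 km cell
cropland_pct = np.array([[80, 60,  0],
                         [40, 20, 10],
                         [ 0,  5,  0]], dtype=float)    # prior cropland likelihood (%)

reported_cropland_ha = 150.0                            # hypothetical national statistic

current_ha = (cropland_pct / 100.0 * cell_area_ha).sum()
scale = reported_cropland_ha / current_ha
calibrated_pct = np.clip(cropland_pct * scale, 0, 100)  # keep percentages valid;
                                                        # a real calibration would redistribute any clipped excess

print("before:", current_ha, "ha   after:", (calibrated_pct / 100 * cell_area_ha).sum(), "ha")
```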
Progress in Physical Geography | 2012
Robert J. Abrahart; François Anctil; Paulin Coulibaly; Christian W. Dawson; Nick J. Mount; Linda See; Asaad Y. Shamseldin; Dimitri P. Solomatine; Elena Toth; Robert L. Wilby
This paper traces two decades of neural network rainfall-runoff and streamflow modelling, collectively termed ‘river forecasting’. The field is now firmly established and the research community involved has much to offer hydrological science. First, however, it will be necessary to converge on more objective and consistent protocols for: selecting and treating inputs prior to model development; extracting physically meaningful insights from each proposed solution; and improving transparency in the benchmarking and reporting of experimental case studies. It is also clear that neural network river forecasting solutions will have limited appeal for operational purposes until confidence intervals can be attached to forecasts. Modular design, ensemble experiments, and hybridization with conventional hydrological models are yielding new tools for decision-making. The full potential for modelling complex hydrological systems, and for characterizing uncertainty, has yet to be realized. Further gains could also emerge from the provision of an agreed set of benchmark data sets and associated development of superior diagnostics for more rigorous intermodel evaluation. To achieve these goals will require a paradigm shift, such that the mass of individual isolated activities, focused on incremental technical refinement, is replaced by a more coordinated, problem-solving international research body.
Hydrological Sciences Journal-journal Des Sciences Hydrologiques | 2000
Linda See; Stan Openshaw
This paper presents four different approaches for integrating conventional and AI-based forecasting models to provide a hybridized solution to the continuous river level and flood prediction problem. Individual forecasting models were developed on a stand-alone basis using historical time series data from the River Ouse in northern England. These include a hybrid neural network, a simple rule-based fuzzy logic model, an ARMA model and naive predictions (which use the current value as the forecast). The individual models were then integrated via four different approaches: calculation of an average, a Bayesian approach, and two fuzzy logic models, the first based purely on current and past river flow conditions and the second a fuzzification of the crisp Bayesian method. Model performance was assessed using global statistics and a more specific flood-related evaluation measure. The addition of fuzzy logic to the crisp Bayesian model yielded overall results that were superior to the other individual and integrated approaches.
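The sketch below illustrates the general idea of combining individual forecasts, using a simple average and a stand-in Bayesian-style weighting by inverse error variance; it does not reproduce the paper's exact Bayesian or fuzzy formulations, and all values are invented.

```python
import numpy as np

# Hypothetical one-step-ahead forecasts from three models for the same river level.
forecasts = {"nn": 3.42, "fuzzy_rules": 3.55, "arma": 3.30}

# Recent residuals per model (invented), used to judge past performance.
past_errors = {
    "nn":          np.array([0.10, -0.05, 0.12, 0.08]),
    "fuzzy_rules": np.array([0.25, 0.18, -0.20, 0.22]),
    "arma":        np.array([0.15, -0.12, 0.10, -0.14]),
}

# 1) Simple average of the individual forecasts.
average = np.mean(list(forecasts.values()))

# 2) Weights proportional to inverse error variance: better-performing models count more.
inv_var = {m: 1.0 / np.var(e) for m, e in past_errors.items()}
total = sum(inv_var.values())
weighted = sum(forecasts[m] * inv_var[m] / total for m in forecasts)

print(f"average forecast  = {average:.3f}")
print(f"weighted forecast = {weighted:.3f}")
```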
Hydrological Sciences Journal-journal Des Sciences Hydrologiques | 1999
Linda See; Stan Openshaw
This paper assesses one of many potential enhancements to conventional flood forecasting that can be achieved through the use of soft computing technologies. A methodology is outlined in which the forecasting data set is split into subsets before training with a series of neural networks. These networks are then recombined via a rule-based fuzzy logic model that has been optimized using a genetic algorithm. The methodology is demonstrated using historical time series data from the Ouse River catchment in northern England. The model forecasts are assessed on global performance statistics and on a more specific flood-related evaluation measure, and they are compared to benchmarks from a statistical model and naive predictions. The overall results indicate that this methodology may provide a well-performing, low-cost solution that may be readily integrated into existing operational flood forecasting and warning systems.
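A simplified sketch of this divide-and-conquer scheme follows, with k-means standing in for the data partitioning, one model per subset, and soft distance-based weights standing in for the optimized fuzzy rule base; the genetic algorithm step is omitted and all data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, (300, 2))                 # e.g. lagged river levels / rainfall
y = np.where(X[:, 0] > 5, 2.0 * X[:, 0], 0.5 * X[:, 0]) + rng.normal(0, 0.2, 300)

# Split the training data into subsets and fit one model per subset.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = [LinearRegression().fit(X[km.labels_ == k], y[km.labels_ == k])
          for k in range(km.n_clusters)]

def blended_predict(x_new):
    """Soft combination of per-subset models using inverse distance to the cluster centroids."""
    d = np.linalg.norm(km.cluster_centers_ - x_new, axis=1)
    w = 1.0 / (d + 1e-9)
    w /= w.sum()
    preds = np.array([m.predict(x_new.reshape(1, -1))[0] for m in models])
    return float(np.dot(w, preds))

print(blended_predict(np.array([7.5, 3.0])))
```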
Environmental Research Letters | 2011
Steffen Fritz; Linda See; Ian McCallum; C. Schill; Michael Obersteiner; Marijn van der Velde; Hannes Boettcher; Petr Havlik; Frédéric Achard
In the last 10 years a number of new global datasets have been created and new, more sophisticated algorithms have been designed to classify land cover. GlobCover and MODIS v.5 are the most recent global land cover products available, with GlobCover (300 m) having a finer spatial resolution than comparable products such as MODIS v.5 (500 m) and GLC-2000 (1 km). This letter shows that thematic accuracy in the cropland domain has decreased when these two latest products are compared. This disagreement is also evident spatially when examining maps of cropland and forest disagreement between GLC-2000, MODIS and GlobCover. The analysis highlights the continued uncertainty surrounding these products, with a combined forest and cropland disagreement of 893 Mha (GlobCover versus MODIS v.5). This letter suggests that data sharing efforts and the provision of more in situ data for training, calibration and validation are very important conditions for improving future global land cover products.
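Disagreement figures of this kind come from pairwise, cell-by-cell comparison of land cover maps harmonized to a common legend; a minimal sketch of that calculation, with invented grids, legend and cell size, is shown below.

```python
import numpy as np

CROP, FOREST, OTHER = 1, 2, 3
map_a = np.array([[CROP,  CROP,  FOREST],
                  [OTHER, CROP,  FOREST],
                  [OTHER, OTHER, FOREST]])
map_b = np.array([[CROP,  OTHER, FOREST],
                  [CROP,  CROP,  OTHER],
                  [OTHER, OTHER, FOREST]])
cell_area_km2 = 1.0                                    # e.g. a 1 km product

def disagreement_area(a, b, cls):
    """Total area where exactly one of the two maps assigns class `cls`."""
    return np.sum((a == cls) ^ (b == cls)) * cell_area_km2

print("cropland disagreement:", disagreement_area(map_a, map_b, CROP), "km2")
print("forest disagreement:  ", disagreement_area(map_a, map_b, FOREST), "km2")
```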
International Journal of Geographical Information Science | 2005
Steffen Fritz; Linda See
A generic problem associated with different land cover maps that cover the same geographical area is the use of different legend categories. There may be disagreement in many areas when comparing different land cover products even though the legend shows the same or very similar land cover class. To capture the uncertainty associated with both differences in the legend and the difficulty in classification when comparing two land cover maps, expert knowledge and a fuzzy logic framework are used to map the fuzzy agreement. The methodology is illustrated by comparing the Global Land Cover 2000 data set and the MODIS global land cover product. Overall accuracy measures are calculated, and the spatial fuzzy agreement between the two land cover products is provided. This approach can be used to improve the overall confidence in a land cover product, since areas of severe disagreement can be highlighted, and areas can be identified that require further attention and possible re-mapping.
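A toy version of this fuzzy agreement mapping is sketched below: expert-assigned membership values express how compatible each pair of legend classes is, and the look-up is applied cell by cell to give a spatial agreement surface. The class names and membership values here are invented, not the GLC-2000 or MODIS legends.

```python
import numpy as np

# Expert-defined agreement in [0, 1] between classes of map A (rows) and map B (columns).
classes = ["cropland", "crop/natural mosaic", "forest"]
agreement = np.array([[1.0, 0.7, 0.0],
                      [0.7, 1.0, 0.3],
                      [0.0, 0.3, 1.0]])

# Class indices per cell for the two maps being compared.
map_a = np.array([[0, 0, 2],
                  [1, 0, 2]])
map_b = np.array([[0, 1, 2],
                  [0, 2, 1]])

fuzzy_agreement = agreement[map_a, map_b]   # per-cell fuzzy agreement surface
print(fuzzy_agreement)
print("mean fuzzy agreement:", fuzzy_agreement.mean())
```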
PLOS ONE | 2013
Linda See; Alexis J. Comber; Carl F. Salk; Steffen Fritz; Marijn van der Velde; Christoph Perger; C. Schill; Ian McCallum; F. Kraxner; Michael Obersteiner
There is currently a lack of in situ environmental data for the calibration and validation of remotely sensed products and for the development and verification of models. Crowdsourcing is increasingly being seen as one potentially powerful way of increasing the supply of in situ data, but there are a number of concerns over the subsequent use of the data, in particular over data quality. This paper examined crowdsourced data from the Geo-Wiki crowdsourcing tool for land cover validation to determine whether there were significant differences in quality between the answers provided by experts and non-experts in the domain of remote sensing, and therefore the extent to which crowdsourced data describing human impact and land cover can be used in further scientific research. The results showed that there was little difference between experts and non-experts in identifying human impact, although results varied by land cover, while experts were better than non-experts in identifying the land cover type. This suggests the need to create training materials with more examples in those areas where difficulties in identification were encountered, and to offer some method for contributors to reflect on the information they contribute, perhaps by feeding back the evaluations of their contributed data or by making additional training materials available. Accuracies were also found to be higher when the volunteers were more consistent in their responses at a given location and when they indicated higher confidence, which suggests that these additional pieces of information could be used in the development of robust measures of quality in the future.
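At its core, the expert versus non-expert comparison amounts to scoring each group's answers against control points with known land cover; a minimal sketch with invented records is given below.

```python
from collections import defaultdict

# (volunteer group, volunteer answer, control answer) per classified location; all invented.
records = [
    ("expert", "forest", "forest"), ("expert", "cropland", "cropland"),
    ("expert", "cropland", "grassland"), ("non-expert", "forest", "forest"),
    ("non-expert", "forest", "cropland"), ("non-expert", "cropland", "cropland"),
    ("non-expert", "grassland", "cropland"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, answer, truth in records:
    total[group] += 1
    correct[group] += int(answer == truth)

for group in total:
    print(f"{group:10s} accuracy = {correct[group] / total[group]:.2f}  (n={total[group]})")
```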
International Journal of Applied Earth Observation and Geoinformation | 2013
Alexis J. Comber; Linda See; Steffen Fritz; Marijn van der Velde; Christoph Perger; Giles M. Foody
There is much interest in using volunteered geographic information (VGI) in formal scientific analyses. This analysis uses VGI describing land cover that was captured using a web-based interface linked to Google Earth. A number of control points, for which the land cover had been determined by experts, allowed measures of the reliability of each volunteer in relation to each land cover class to be calculated. Geographically weighted kernels were used to estimate surfaces of volunteered land cover information accuracy and then to develop spatially distributed correspondences between the volunteer land cover class and land cover from three contemporary global datasets (GLC-2000, GlobCover and MODIS v.5). Specifically, a geographically weighted approach calculated local confusion matrices (correspondences) at each location in a central African study area and generated spatial distributions of user's, producer's, portmanteau and partial portmanteau accuracies. These were used to evaluate the global datasets and to infer which of them was ‘best’ at describing Tree cover at each location in the study area. The resulting maps show where specific global datasets are recommended for analyses requiring Tree cover information. The methods presented in this research suggest that some of the concerns about the quality of VGI can be addressed through careful data collection, the use of control points to evaluate volunteer performance and spatially explicit analyses. A research agenda for the use and analysis of VGI about land cover is outlined.
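A sketch of the geographically weighted idea follows: each control point contributes to a local confusion matrix with a Gaussian kernel weight based on its distance from the target location, so nearby points count more. Locations, classes and bandwidth are illustrative, not the study's.

```python
import numpy as np

classes = ["tree", "cropland", "other"]
# (x, y, volunteered class index, reference class index) for control points; invented.
points = np.array([
    [0.0, 0.0, 0, 0], [1.0, 0.5, 0, 1], [0.5, 1.0, 1, 1],
    [3.0, 3.0, 2, 2], [2.5, 3.5, 1, 0], [4.0, 4.0, 2, 2],
])

def local_confusion(points, target, bandwidth=1.5, n_classes=3):
    """Kernel-weighted confusion matrix (rows: reference, columns: volunteered) at a target location."""
    xy, vol, ref = points[:, :2], points[:, 2].astype(int), points[:, 3].astype(int)
    d = np.linalg.norm(xy - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)            # Gaussian kernel weights
    cm = np.zeros((n_classes, n_classes))
    for weight, v, r in zip(w, vol, ref):
        cm[r, v] += weight
    return cm

cm = local_confusion(points, target=np.array([0.5, 0.5]))
overall = np.trace(cm) / cm.sum()                      # locally weighted overall accuracy
print(cm.round(3))
print("local overall accuracy:", round(overall, 3))
```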