Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Scott Sandgathe is active.

Publication


Featured research published by Scott Sandgathe.


Weather and Forecasting | 2006

Cluster analysis for verification of precipitation fields

Caren Marzban; Scott Sandgathe

Abstract A statistical method referred to as cluster analysis is employed to identify features in forecast and observation fields. These features qualify as natural candidates for events or objects in terms of which verification can be performed. The methodology is introduced and illustrated on synthetic and real quantitative precipitation data. First, it is shown that the method correctly identifies clusters that are in agreement with what most experts might interpret as features or objects in the field. Then, it is shown that the verification of the forecasts can be performed within an event-based framework, with the events identified as the clusters. The number of clusters in a field is interpreted as a measure of scale, and the final “product” of the methodology is an “error surface” representing the error in the forecasts as a function of the number of clusters in the forecast and observation fields. This allows for the examination of forecast error as a function of scale.
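
As a rough illustration of the clustering idea described above (not the authors' implementation), the sketch below clusters the wet grid points of a synthetic precipitation field and treats the number of clusters as a scale parameter. It assumes Python with NumPy and scikit-learn; the field, the 0.5 mm threshold, and the cluster counts are invented for the example.

```python
# Identify "objects" in a gridded precipitation field by clustering the
# coordinates of wet grid points. Illustrative only: the field, the wet
# threshold, and the cluster counts are hypothetical.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Synthetic 100x100 field with two rain areas plus weak noise.
ny, nx = 100, 100
y, x = np.mgrid[0:ny, 0:nx]
field = (5.0 * np.exp(-((x - 30) ** 2 + (y - 40) ** 2) / 200.0)
         + 3.0 * np.exp(-((x - 70) ** 2 + (y - 65) ** 2) / 150.0)
         + 0.1 * rng.random((ny, nx)))

# Keep only grid points above an (assumed) 0.5 mm threshold.
wet = field > 0.5
pts = np.column_stack([x[wet], y[wet]])

# The number of clusters plays the role of spatial scale: few clusters give
# large-scale objects, many clusters resolve small-scale detail.
for k in (2, 5, 10):
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(pts)
    print(f"k={k:2d}  cluster sizes: {np.bincount(labels).tolist()}")
```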


Monthly Weather Review | 2008

Cluster Analysis for Object-Oriented Verification of Fields: A Variation

Caren Marzban; Scott Sandgathe

Abstract In a recent paper, a statistical method referred to as cluster analysis was employed to identify clusters in forecast and observed fields. Further criteria were also proposed for matching the identified clusters in one field with those in the other. As such, the proposed methodology was designed to perform an automated form of what has been called object-oriented verification. Herein, a variation of that methodology is proposed that effectively avoids (or simplifies) the criteria for matching the objects. The basic idea is to perform cluster analysis on the combined set of observations and forecasts, rather than on the individual fields separately. This method will be referred to as combinative cluster analysis (CCA). CCA naturally lends itself to the computation of false alarms, hits, and misses, and therefore, to the critical success index (CSI). A desirable feature of the previous method—the ability to assess performance on different spatial scales—is maintained. The method is demonstrated on ...
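
The combinative step can be sketched as follows: cluster the forecast and observed wet points together, then label each cluster a hit, miss, or false alarm according to which field contributes points to it, and form the CSI from those counts. The point sets, the number of clusters, and the use of k-means here are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of the combinative idea: cluster forecast and observed "wet" points
# together, then call each cluster a hit, miss, or false alarm according to
# which field(s) contribute to it. Point sets and k are made up.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical rain locations (x, y) for observation and forecast.
obs = rng.normal(loc=[30, 40], scale=5, size=(300, 2))
fcst = rng.normal(loc=[38, 44], scale=5, size=(300, 2))       # displaced forecast
fcst_extra = rng.normal(loc=[80, 20], scale=4, size=(150, 2))  # spurious rain area

pts = np.vstack([obs, fcst, fcst_extra])
source = np.array([0] * len(obs) + [1] * (len(fcst) + len(fcst_extra)))  # 0=obs, 1=fcst

k = 3  # number of clusters, i.e. the spatial scale being examined
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pts)

hits = misses = false_alarms = 0
for c in range(k):
    src = source[labels == c]
    has_obs, has_fcst = (src == 0).any(), (src == 1).any()
    if has_obs and has_fcst:
        hits += 1
    elif has_fcst:
        false_alarms += 1
    else:
        misses += 1

csi = hits / (hits + misses + false_alarms)
print(f"hits={hits} misses={misses} false_alarms={false_alarms} CSI={csi:.2f}")
```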


Monthly Weather Review | 2006

MOS, Perfect Prog, and Reanalysis

Caren Marzban; Scott Sandgathe; Eugenia Kalnay

Statistical postprocessing methods have been successful in correcting many defects inherent in numerical weather prediction model forecasts. Among them, model output statistics (MOS) and perfect prog have been most common, each with its own strengths and weaknesses. Here, an alternative method (called RAN) is examined that combines the two, while at the same time utilizes the information in reanalysis data. The three methods are examined from a purely formal/mathematical point of view. The results suggest that whereas MOS is expected to outperform perfect prog and RAN in terms of mean squared error, bias, and error variance, the RAN approach is expected to yield more certain and bias-free forecasts. It is suggested therefore that a real-time RAN-based postprocessor be developed for further testing.
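
For context, a toy MOS-style postprocessor can be written as an ordinary least-squares regression of observations on model output, then applied to new forecasts. Everything below (variable names, coefficients, data) is synthetic, and the sketch illustrates generic MOS rather than the RAN scheme proposed in the paper.

```python
# Toy MOS-style postprocessor: linear regression of observations on model
# output. Illustrates the general statistical-postprocessing idea only.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training set: model output predictors and the verifying observation.
n = 500
model_t2m = rng.normal(15, 8, n)      # model 2-m temperature (deg C), hypothetical
model_wind = rng.normal(5, 2, n)      # model 10-m wind speed (m/s), hypothetical
obs_t2m = 1.5 + 0.9 * model_t2m - 0.3 * model_wind + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), model_t2m, model_wind])
coef, *_ = np.linalg.lstsq(X, obs_t2m, rcond=None)

# Apply the fitted equation to a new model forecast.
new_forecast = np.array([1.0, 20.0, 7.0])   # [intercept term, t2m, wind]
corrected = new_forecast @ coef
print("MOS coefficients:", np.round(coef, 3))
print("raw t2m forecast: 20.0  ->  corrected:", round(float(corrected), 2))
```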


Weather and Forecasting | 2009

Three Spatial Verification Techniques: Cluster Analysis, Variogram, and Optical Flow

Caren Marzban; Scott Sandgathe; Hilary Lyons; Nicholas C. Lederer

Abstract Three spatial verification techniques are applied to three datasets. The datasets consist of a mixture of real and artificial forecasts, and corresponding observations, designed to aid in better understanding the effects of global (i.e., across the entire field) displacement and intensity errors. The three verification techniques, each based on well-known statistical methods, have little in common and, so, present different facets of forecast quality. It is shown that a verification method based on cluster analysis can identify “objects” in a forecast and an observation field, thereby allowing for object-oriented verification in the sense that it considers displacement, missed forecasts, and false alarms. A second method compares the observed and forecast fields, not in terms of the objects within them, but in terms of the covariance structure of the fields, as summarized by their variogram. The last method addresses the agreement between the two fields by inferring the function that maps one to ...


Weather and Forecasting | 2009

Verification with Variograms

Caren Marzban; Scott Sandgathe

Abstract The verification of a gridded forecast field, for example, one produced by numerical weather prediction (NWP) models, cannot be performed on a gridpoint-by-gridpoint basis; that type of approach would ignore the spatial structures present in both forecast and observation fields, leading to misinformative or noninformative verification results. A variety of methods have been proposed to acknowledge the spatial structure of the fields. Here, a method is examined that compares the two fields in terms of their variograms. Two types of variograms are examined: one examines correlation on different spatial scales and is a measure of texture; the other type of variogram is additionally sensitive to the size and location of objects in a field and can assess size and location errors. Using these variograms, the forecasts of three NWP model formulations are compared with observations/analysis, on a dataset consisting of 30 days in spring 2005. It is found that within statistical uncertainty the three formu...
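
A minimal empirical variogram, gamma(h) = 0.5 E[(z(s) - z(s+h))^2] binned by separation distance h, can be computed as in the sketch below; comparing such curves for forecast and observed fields is the essence of the approach. The synthetic field, the random pair sampling, and the 2-gridpoint bins are assumptions made for the example, not the paper's settings.

```python
# Empirical (isotropic) variogram of a gridded field:
# gamma(h) = 0.5 * mean[(z(s) - z(s + h))^2], binned by separation distance h.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical smooth 64x64 field (sum of sinusoids plus noise).
n = 64
y, x = np.mgrid[0:n, 0:n]
field = np.sin(x / 8.0) + np.cos(y / 12.0) + 0.2 * rng.standard_normal((n, n))

# Randomly sample point pairs rather than enumerating all pairs.
npairs = 200_000
i = rng.integers(0, n, size=(npairs, 2))
j = rng.integers(0, n, size=(npairs, 2))
h = np.hypot(*(i - j).T)                                   # separation distance
sq = 0.5 * (field[i[:, 0], i[:, 1]] - field[j[:, 0], j[:, 1]]) ** 2

# Bin by distance to get the empirical variogram.
bins = np.arange(0, 33, 2)
which = np.digitize(h, bins)
for b in range(1, len(bins)):
    mask = which == b
    if mask.any():
        print(f"h in [{bins[b-1]:2d},{bins[b]:2d}):  gamma = {sq[mask].mean():.3f}")
```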


Bulletin of the American Meteorological Society | 2016

The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

Gerhard Theurich; Cecelia DeLuca; Timothy Campbell; Fushan Liu; K. Saint; Mariana Vertenstein; Junye Chen; R. Oehmke; James D. Doyle; Timothy R Whitcomb; Alan J. Wallcraft; Mark Iredell; Thomas L. Black; A. da Silva; T. Clune; Robert D. Ferraro; P. Li; M. Kelley; I. Aleinov; V. Balaji; N. Zadeh; Robert L. Jacob; Benjamin Kirtman; Francis X. Giraldo; D. McCarren; Scott Sandgathe; Steven E. Peckham; R. Dunlap

The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
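
The interoperability conventions amount to every component exposing the same phases (initialize, run, finalize) and exchanging named fields through shared state, with a generic driver sequencing those phases. The Python sketch below is only a conceptual illustration of that pattern; it is not the ESMF or NUOPC API, whose components are Fortran/C with far richer grid, time, and field semantics, and its names and "physics" are hypothetical.

```python
# Conceptual illustration of component-based coupling: every component
# implements the same initialize/run/finalize phases and exchanges named
# fields through a shared state, and a generic driver sequences the phases.
# This is NOT the ESMF/NUOPC API; names and physics are hypothetical.
from typing import Dict, List, Protocol

State = Dict[str, float]  # named exchange fields (stand-in for gridded fields)

class Component(Protocol):
    def initialize(self, state: State) -> None: ...
    def run(self, state: State, dt: float) -> None: ...
    def finalize(self, state: State) -> None: ...

class ToyOcean:
    def initialize(self, state: State) -> None:
        state["sst"] = 15.0                                  # export: SST
    def run(self, state: State, dt: float) -> None:
        state["sst"] += 1e-5 * state["wind_stress"] * dt     # import: wind stress
    def finalize(self, state: State) -> None:
        pass

class ToyAtmosphere:
    def initialize(self, state: State) -> None:
        state.setdefault("wind_stress", 0.0)                 # export: wind stress
    def run(self, state: State, dt: float) -> None:
        state["wind_stress"] = 0.01 * state["sst"]           # import: SST
    def finalize(self, state: State) -> None:
        pass

def drive(components: List[Component], nsteps: int, dt: float) -> State:
    """Generic driver: the same phase sequence for every component."""
    state: State = {}
    for c in components:
        c.initialize(state)
    for _ in range(nsteps):
        for c in components:
            c.run(state, dt)
    for c in components:
        c.finalize(state)
    return state

print(drive([ToyOcean(), ToyAtmosphere()], nsteps=3, dt=3600.0))
```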


Monthly Weather Review | 2008

An Object-Oriented Verification of Three NWP Model Formulations via Cluster Analysis: An Objective and a Subjective Analysis

Caren Marzban; Scott Sandgathe; Hilary Lyons

Abstract Recently, an object-oriented verification scheme was developed for assessing errors in forecasts of spatial fields. The main goal of the scheme was to allow the automatic and objective evaluation of a large number of forecasts. However, processing speed was an obstacle. Here, it is shown that the methodology can be revised to increase efficiency, allowing for the evaluation of 32 days of reflectivity forecasts from three different mesoscale numerical weather prediction model formulations. It is demonstrated that the methodology can address not only spatial errors, but also intensity and timing errors. The results of the verification are compared with those performed by a human expert. For the case when the analysis involves only spatial information (and not intensity), although there exist variations from day to day, it is found that the three model formulations perform comparably, over the 32 days examined and across a wide range of spatial scales. However, the higher-resolution model formulatio...


Weather and Forecasting | 2010

Optical Flow for Verification

Caren Marzban; Scott Sandgathe

Abstract Modern numerical weather prediction (NWP) models produce forecasts that are gridded spatial fields. Digital images can also be viewed as gridded spatial fields, and as such, techniques from image analysis can be employed to address the problem of verification of NWP forecasts. One technique for estimating how images change temporally is called optical flow, where it is assumed that temporal changes in images (e.g., in a video) can be represented as a fluid flowing in some manner. Multiple realizations of the general idea have already been employed in verification problems as well as in data assimilation. Here, a specific formulation of optical flow, called Lucas–Kanade, is reviewed and generalized as a tool for estimating three components of forecast error: intensity and two components of displacement, direction and distance. The method is illustrated first on simulated data, and then on a 418-day series of 24-h forecasts of sea level pressure from one member [the Global Forecast System (GFS)–fif...
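
The core Lucas–Kanade step can be sketched as a local least-squares solve of the optical-flow constraint Ix*u + Iy*v + It = 0 over a small window, as below. The window size, the synthetic fields, and the conditioning check are assumptions for the example; this plain version omits the generalization and the intensity component discussed in the paper.

```python
# Minimal windowed Lucas-Kanade: estimate a displacement (u, v) at each grid
# point from two fields by least squares over a local window of the
# optical-flow constraint Ix*u + Iy*v + It = 0. Fields are synthetic.
import numpy as np

def lucas_kanade(f0, f1, half_win=4):
    Iy, Ix = np.gradient(f0)          # spatial gradients of the first field
    It = f1 - f0                      # temporal difference
    ny, nx = f0.shape
    u, v = np.zeros_like(f0), np.zeros_like(f0)
    for r in range(half_win, ny - half_win):
        for c in range(half_win, nx - half_win):
            sl = (slice(r - half_win, r + half_win + 1),
                  slice(c - half_win, c + half_win + 1))
            A = np.column_stack([Ix[sl].ravel(), Iy[sl].ravel()])
            b = -It[sl].ravel()
            # Solve the 2x2 normal equations where they are well conditioned.
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e6:
                u[r, c], v[r, c] = np.linalg.solve(ATA, A.T @ b)
    return u, v

# Hypothetical test: a Gaussian "low" displaced by (3, 2) grid points.
n = 64
y, x = np.mgrid[0:n, 0:n]
def blob(cx, cy):
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 60.0)

f0, f1 = blob(30, 30), blob(33, 32)
u, v = lucas_kanade(f0, f1)
mask = f0 > 0.3                       # look at the flow near the feature
print("mean displacement near the feature:",
      round(float(u[mask].mean()), 2), round(float(v[mask].mean()), 2))
```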


Monthly Weather Review | 2014

Model Tuning with Canonical Correlation Analysis

Caren Marzban; Scott Sandgathe; James D. Doyle

Abstract Knowledge of the relationship between model parameters and forecast quantities is useful because it can aid in setting the values of the former for the purpose of having a desired effect on the latter. Here it is proposed that a well-established multivariate statistical method known as canonical correlation analysis can be formulated to gauge the strength of that relationship. The method is applied to several model parameters in the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS) for the purpose of “controlling” three forecast quantities: 1) convective precipitation, 2) stable precipitation, and 3) snow. It is shown that the model parameters employed here can be set to affect the sum, and the difference between convective and stable precipitation, while keeping snow mostly constant; a different combination of model parameters is shown to mostly affect the difference between stable precipitation and snow, with minimal effect on convective precipitation. In short, the proposed method c...
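
A sketch of the statistical core: canonical correlation analysis between a block of "model parameters" and a block of "forecast quantities", here on synthetic data with scikit-learn's CCA. The parameter count, the linear relationships, and the variable names are invented for illustration; COAMPS itself is not involved.

```python
# Canonical correlation analysis between model parameters and forecast
# quantities, on synthetic data. Variable names and relationships are made up.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)

n = 200                                   # hypothetical number of model runs
params = rng.standard_normal((n, 4))      # e.g. perturbed physics parameters

# Forecast quantities constructed to depend on combinations of the parameters.
conv_precip = 2.0 * params[:, 0] + 1.0 * params[:, 1] + 0.3 * rng.standard_normal(n)
stable_precip = 1.5 * params[:, 0] - 1.0 * params[:, 1] + 0.3 * rng.standard_normal(n)
snow = 0.5 * params[:, 2] + 0.3 * rng.standard_normal(n)
fcst = np.column_stack([conv_precip, stable_precip, snow])

cca = CCA(n_components=3).fit(params, fcst)
U, V = cca.transform(params, fcst)

# Canonical correlations: strength of each parameter/forecast mode pair.
corrs = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(3)]
print("canonical correlations:", np.round(corrs, 3))
print("parameter weights (x_weights_):\n", np.round(cca.x_weights_, 2))
```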


Monthly Weather Review | 2014

Variance-Based Sensitivity Analysis: Preliminary Results in COAMPS

Caren Marzban; Scott Sandgathe; James D. Doyle; Nicholas C. Lederer

Abstract Numerical weather prediction models have a number of parameters whose values are either estimated from empirical data or theoretical calculations. These values are usually then optimized according to some criterion (e.g., minimizing a cost function) in order to obtain superior prediction. To that end, it is useful to know which parameters have an effect on a given forecast quantity, and which do not. Here the authors demonstrate a variance-based sensitivity analysis involving 11 parameters in the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS). Several forecast quantities are examined: 24-h accumulated 1) convective precipitation, 2) stable precipitation, 3) total precipitation, and 4) snow. The analysis is based on 36 days of 24-h forecasts between 1 January and 4 July 2009. Regarding convective precipitation, not surprisingly, the most influential parameter is found to be the fraction of available precipitation in the Kain–Fritsch cumulus parameterization fed back to the grid scale...
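
The variance-based index for parameter i is S_i = Var(E[Y | X_i]) / Var(Y); the sketch below estimates first-order indices with a simple pick-and-freeze Monte Carlo scheme, using a cheap toy function in place of a COAMPS run. The toy model, parameter ranges, and sample size are assumptions; the paper's actual analysis involves 11 COAMPS parameters and real forecasts.

```python
# First-order Sobol indices via pick-and-freeze Monte Carlo, with a toy
# function standing in for the forecast model. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(5)

def toy_model(p):
    """Stand-in forecast quantity as a function of 3 'model parameters'."""
    return 4.0 * p[:, 0] + p[:, 1] ** 2 + 0.1 * np.sin(6.0 * p[:, 2])

d, n = 3, 20_000
A = rng.random((n, d))        # two independent samples of the parameter space
B = rng.random((n, d))
yA, yB = toy_model(A), toy_model(B)
var_y = np.concatenate([yA, yB]).var()

for i in range(d):
    BAi = B.copy()
    BAi[:, i] = A[:, i]       # B with parameter i "frozen" to A's values
    yBAi = toy_model(BAi)
    # Estimator of Var(E[Y | X_i]); only X_i is shared between yA and yBAi.
    s_i = np.mean(yA * (yBAi - yB)) / var_y
    print(f"S_{i} ~= {s_i:.2f}")
```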

Collaboration


Scott Sandgathe's top collaborators and their institutions.

Top Co-Authors

Caren Marzban | University of Washington
James D. Doyle | United States Naval Research Laboratory
David W. Jones | University of Washington
Hilary Lyons | University of Washington
A. da Silva | Goddard Space Flight Center
Alan J. Wallcraft | United States Naval Research Laboratory