Imke Durre
National Oceanic and Atmospheric Administration
Publication
Featured research published by Imke Durre.
Journal of Atmospheric and Oceanic Technology | 2012
Matthew J. Menne; Imke Durre; Russell S. Vose; Byron E. Gleason; Tamara G. Houston
A database is described that has been designed to fulfill the need for daily climate data over global land areas. The dataset, known as Global Historical Climatology Network (GHCN)-Daily, was developed for a wide variety of potential applications, including climate analysis and monitoring studies that require data at a daily time resolution (e.g., assessments of the frequency of heavy rainfall, heat wave duration, etc.). The dataset contains records from over 80 000 stations in 180 countries and territories, and its processing system produces the official archive for U.S. daily data. Variables commonly include maximum and minimum temperature, total daily precipitation, snowfall, and snow depth; however, about two-thirds of the stations report precipitation only. Quality assurance checks are routinely applied to the full dataset, but the data are not homogenized to account for artifacts associated with the various eras in reporting practice at any particular station (i.e., for changes in systematic...
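As a rough illustration of how such daily station records can be read, the Python sketch below parses one line of a GHCN-Daily ".dly" file. The fixed-width layout assumed here (station ID, year, month, element code, then 31 value/flag groups of eight characters) follows the dataset's published format description, but offsets and unit conventions should be checked against the current readme rather than taken from this example.

def parse_dly_line(line: str) -> dict:
    """Return station, year, month, element, and daily values from one record."""
    element = line[17:21]                 # e.g. TMAX, TMIN, PRCP, SNOW, SNWD
    # TMAX/TMIN are stored in tenths of deg C and PRCP in tenths of mm;
    # other elements (e.g. SNOW, SNWD) are left in their native units here.
    scale = 0.1 if element in ("TMAX", "TMIN", "PRCP") else 1.0
    values = []
    for day in range(31):
        offset = 21 + day * 8
        raw = int(line[offset:offset + 5])            # -9999 marks a missing day
        qflag = line[offset + 6:offset + 7].strip()   # quality-assurance flag
        values.append(None if raw == -9999 else (raw * scale, qflag))
    return {
        "station": line[0:11],
        "year": int(line[11:15]),
        "month": int(line[15:17]),
        "element": element,
        "values": values,                 # one entry per day of the month
    }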
Journal of Climate | 2006
Imke Durre; Russell S. Vose; David B. Wuertz
This paper provides a general description of the Integrated Global Radiosonde Archive (IGRA), a new radiosonde dataset from the National Climatic Data Center (NCDC). IGRA consists of radiosonde and pilot balloon observations at more than 1500 globally distributed stations with varying periods of record, many of which extend from the 1960s to present. Observations include pressure, temperature, geopotential height, dewpoint depression, wind direction, and wind speed at standard, surface, tropopause, and significant levels. IGRA contains quality-assured data from 11 different sources. Rigorous procedures are employed to ensure proper station identification, eliminate duplicate levels within soundings, and select one sounding for every station, date, and time. The quality assurance algorithms check for format problems, physically implausible values, internal inconsistencies among variables, runs of values across soundings and levels, climatological outliers, and temporal and vertical inconsistencies in temperature. The performance of the various checks was evaluated by careful inspection of selected soundings and time series. In its final form, IGRA is the largest and most comprehensive dataset of quality-assured radiosonde observations freely available. Its temporal and spatial coverage is most complete over the United States, western Europe, Russia, and Australia. The vertical resolution and extent of soundings improve significantly over time, with nearly three-quarters of all soundings reaching up to at least 100 hPa by 2003. IGRA data are updated on a daily basis and are available online from NCDC as both individual soundings and monthly means.
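The Python sketch below illustrates the flavor of two of the checks mentioned above: a plausibility test on pressure and temperature and a simple vertical-consistency test on a single sounding. The limits used are placeholder assumptions for illustration, not the thresholds used in IGRA's quality assurance.

PRESSURE_RANGE_HPA = (1.0, 1100.0)      # assumed plausibility bounds
TEMPERATURE_RANGE_C = (-120.0, 60.0)

def check_sounding(levels):
    """levels: list of (pressure_hpa, temperature_c) ordered from the surface up.
    Returns indices of suspect levels."""
    suspect = set()
    for i, (p, t) in enumerate(levels):
        # Physically implausible values
        if not (PRESSURE_RANGE_HPA[0] <= p <= PRESSURE_RANGE_HPA[1]):
            suspect.add(i)
        if not (TEMPERATURE_RANGE_C[0] <= t <= TEMPERATURE_RANGE_C[1]):
            suspect.add(i)
    # Vertical consistency: pressure must decrease with height, and temperature
    # should not jump unrealistically between adjacent levels.
    for i in range(1, len(levels)):
        p_prev, t_prev = levels[i - 1]
        p, t = levels[i]
        if p >= p_prev:
            suspect.add(i)
        if abs(t - t_prev) > 30.0:        # placeholder jump threshold
            suspect.add(i)
    return sorted(suspect)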
Journal of Applied Meteorology and Climatology | 2010
Imke Durre; Matthew J. Menne; Byron E. Gleason; Tamara G. Houston; Russell S. Vose
This paper describes a comprehensive set of fully automated quality assurance (QA) procedures for observations of daily surface temperature, precipitation, snowfall, and snow depth. The QA procedures are being applied operationally to the Global Historical Climatology Network (GHCN)-Daily dataset. Since these data are used for analyzing and monitoring variations in extremes, the QA system is designed to detect as many errors as possible while maintaining a low probability of falsely identifying true meteorological events as erroneous. The system consists of 19 carefully evaluated tests that detect duplicate data, climatological outliers, and various inconsistencies (internal, temporal, and spatial). Manual review of random samples of the values flagged as errors is used to set the threshold for each procedure such that its false-positive rate, or fraction of valid values identified as errors, is minimized. In addition, the tests are arranged in a deliberate sequence in which the performance of the later checks is enhanced by the error detection capabilities of the earlier tests. Based on an assessment of each individual check and a final evaluation for each element, the system identifies 3.6 million (0.24%) of the more than 1.5 billion maximum/minimum temperature, precipitation, snowfall, and snow depth values in GHCN-Daily as errors, has a false-positive rate of 1%–2%, and is effective at detecting both the grossest errors and more subtle inconsistencies among elements.
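A minimal Python sketch of the kind of climatological-outlier test described above is given below; it is not the operational GHCN-Daily code. A value is flagged when it departs from the station's calendar-month climatology by more than a tuned number of standard deviations. The six-sigma threshold here is a placeholder; in the paper, each test's threshold is set by manually reviewing samples of flagged values so that the false-positive rate stays low.

import statistics

def flag_outliers(values, threshold_sigma=6.0):
    """values: daily observations for one station and calendar month, pooled
    across years (None for missing). Returns indices of flagged values."""
    clean = [v for v in values if v is not None]
    if len(clean) < 30:                       # too little data to judge
        return []
    mean = statistics.fmean(clean)
    sd = statistics.pstdev(clean)
    if sd == 0:
        return []
    return [i for i, v in enumerate(values)
            if v is not None and abs(v - mean) > threshold_sigma * sd]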
Journal of Geophysical Research | 2005
Melissa Free; Dian J. Seidel; J. K. Angell; John R. Lanzante; Imke Durre; Thomas C. Peterson
A new data set containing large-scale regional mean upper air temperatures based on adjusted global radiosonde data is now available up to the present. Starting with data from 85 of the 87 stations adjusted for homogeneity by Lanzante, Klein and Seidel, we extend the data beyond 1997 where available, using a first differencing (FD) method combined with guidance from station metadata. The data set consists of temperature anomaly time series for the globe, the hemispheres, tropics (30°N–30°S) and extratropics. Data provided include annual time series for 13 pressure levels from the surface to 30 mbar and seasonal time series for three broader layers (850–300, 300–100 and 100–50 mbar). The additional years of data increase trends to more than 0.1 K/decade for the global and tropical midtroposphere for 1979–2004. Trends in the stratosphere are approximately −0.5 to −0.9 K/decade and are more negative in the tropics than for the globe. Differences between trends at the surface and in the troposphere are generally reduced in the new time series as compared to raw data and are near zero in the global mean for 1979–2004. We estimate the uncertainty in global mean trends from 1979 to 2004 introduced by the use of first difference processing after 1995 at less than 0.02–0.04 K/decade in the troposphere and up to 0.15 K/decade in the stratosphere at individual pressure levels. Our reliance on metadata, which is often incomplete or unclear, adds further, unquantified uncertainty that could be comparable to the uncertainty from the FD processing. Because the first differencing method cannot be used for individual stations, we also provide updated station time series that are unadjusted after 1997. The Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) data set will be archived and updated at NOAA’s National Climatic Data Center as part of its climate monitoring program.
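For readers unfamiliar with how figures such as the 0.1 K/decade trend above are typically derived, the Python sketch below fits an ordinary least-squares slope to an annual anomaly series and expresses it per decade. It is a generic illustration using synthetic data, not the authors' exact trend methodology.

import numpy as np

def trend_k_per_decade(years, anomalies_k):
    """Least-squares slope of temperature anomalies, expressed per decade."""
    years = np.asarray(years, dtype=float)
    anomalies = np.asarray(anomalies_k, dtype=float)
    ok = ~np.isnan(anomalies)                         # skip missing years
    slope_per_year = np.polyfit(years[ok], anomalies[ok], 1)[0]
    return slope_per_year * 10.0

# Example: a synthetic 1979-2004 series warming at roughly 0.1 K/decade.
years = np.arange(1979, 2005)
series = 0.01 * (years - 1979) + np.random.default_rng(0).normal(0, 0.1, years.size)
print(round(trend_k_per_decade(years, series), 2))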
Bulletin of the American Meteorological Society | 2013
Markus G. Donat; Lisa V. Alexander; H. Yang; Imke Durre; Russell S. Vose; John Caesar
Affiliations: Donat, Alexander, and Yang—Climate Change Research Centre, and ARC Centre of Excellence for Climate System Science, University of New South Wales, Sydney, Australia; Durre and Vose—NOAA’s National Climatic Data Center, Asheville, North Carolina; Caesar—Met Office Hadley Centre, Exeter, United Kingdom. Corresponding author: Markus Donat, Climate Change Research Centre, University of New South Wales, Sydney, Australia ([email protected]).
Journal of Applied Meteorology and Climatology | 2014
Russell S. Vose; Scott Applequist; Mike Squires; Imke Durre; Matthew J. Menne; Claude N. Williams; Chris Fenimore; Karin Gleason; Derek S. Arndt
This paper describes an improved edition of the climate division dataset for the conterminous United States (i.e., version 2). The first improvement is to the input data, which now include additional station networks, quality assurance reviews, and temperature bias adjustments. The second improvement is to the suite of climatic elements, which now includes both maximum and minimum temperatures. The third improvement is to the computational approach, which now employs climatologically aided interpolation to address topographic and network variability. Version 2 exhibits substantial differences from version 1 over the period 1895–2012. For example, divisional averages in version 2 tend to be cooler and wetter, particularly in mountainous areas of the western United States. Division-level trends in temperature and precipitation display greater spatial consistency in version 2. National-scale temperature trends in version 2 are comparable to those in the U.S. Historical Climatology Network whereas ver...
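The Python sketch below illustrates the general idea of climatologically aided interpolation: station anomalies (departures from each station's long-term mean) are interpolated to a target point and a local climatology is added back. Inverse-distance weighting is used here purely for illustration and is not necessarily the interpolation scheme used in version 2.

import math

def interpolate_anomaly(target, stations, power=2.0):
    """target: (lat, lon); stations: list of (lat, lon, value, station_climatology).
    Returns the inverse-distance-weighted anomaly at the target point."""
    num = den = 0.0
    for lat, lon, value, clim in stations:
        anomaly = value - clim                         # departure from station mean
        d = math.hypot(target[0] - lat, target[1] - lon) + 1e-6
        w = 1.0 / d ** power
        num += w * anomaly
        den += w
    return num / den

def climatologically_aided_value(target, stations, target_climatology):
    """Interpolated anomaly plus the (higher-resolution) climatology at the target."""
    return target_climatology + interpolate_anomaly(target, stations)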
Bulletin of the American Meteorological Society | 2012
Anthony Arguez; Imke Durre; Scott Applequist; Russell S. Vose; Michael F. Squires; Xungang Yin; Richard R. Heim; Timothy W. Owen
The National Oceanic and Atmospheric Administration (NOAA) released the 1981–2010 U.S. Climate Normals in July 2011, representing the latest decadal installment of this long-standing product line. Climatic averages (and other statistics) of temperature, precipitation, snowfall, and numerous derived quantities were calculated for ~9,800 stations operated by the U.S. National Weather Service (NWS). They include estimated normals, or “quasi normals,” for approximately 2,000 active short-record stations such as those in the U.S. Climate Reference Network. The 1981–2010 installment features several new products and methodological enhancements: 1) state-of-the-art temperature homogenization at the monthly scale, 2) extensive utilization of quality-controlled daily climate data, 3) new statistical approaches for calculating daily temperature normals and heating and cooling degree days, and 4) a comprehensive suite of precipitation, snowfall, and snow depth statistics. This paper provides a general overview of th...
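As a simplified illustration of two quantities mentioned above, the Python sketch below computes a 1981–2010 monthly temperature normal as the mean of the 30 monthly averages and accumulates heating and cooling degree days against the conventional 65°F base. NOAA's actual normals methodology involves additional steps (homogenization and the daily-normal estimation described in the paper) that are not reproduced here.

BASE_F = 65.0   # conventional degree-day base temperature

def monthly_normal(monthly_means_1981_2010):
    """monthly_means_1981_2010: up to 30 monthly-mean values for one calendar month."""
    vals = [m for m in monthly_means_1981_2010 if m is not None]
    return sum(vals) / len(vals)

def degree_days(daily_mean_temps_f):
    """Return (heating, cooling) degree-day totals for a sequence of days."""
    hdd = sum(max(BASE_F - t, 0.0) for t in daily_mean_temps_f)
    cdd = sum(max(t - BASE_F, 0.0) for t in daily_mean_temps_f)
    return hdd, cdd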
Journal of Applied Meteorology and Climatology | 2008
Imke Durre; Matthew J. Menne; Russell S. Vose
The evaluation strategies outlined in this paper constitute a set of tools beneficial to the development and documentation of robust automated quality assurance (QA) procedures. Traditionally, thresholds for the QA of climate data have been based on target flag rates or statistical confidence limits. However, these approaches do not necessarily quantify a procedure’s effectiveness at detecting true errors in the data. Rather, as illustrated by way of an “extremes check” for daily precipitation totals, information on the performance of a QA test is best obtained through a systematic manual inspection of samples of flagged values combined with a careful analysis of geographical and seasonal patterns of flagged observations. Such an evaluation process not only helps to document the effectiveness of each individual test, but, when applied repeatedly throughout the development process, it also aids in choosing the optimal combination of QA procedures and associated thresholds. In addition, the approac...
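The Python sketch below captures the evaluation workflow described above: draw a random sample of flagged values for manual review, then estimate the check's false-positive rate from the reviewers' verdicts. The sample size and the representation of flagged records are illustrative assumptions.

import random

def sample_for_review(flagged_records, n=100, seed=42):
    """Random sample of flagged values to be inspected manually."""
    rng = random.Random(seed)
    return rng.sample(flagged_records, min(n, len(flagged_records)))

def false_positive_rate(review_verdicts):
    """review_verdicts: booleans, True if a flagged value was judged valid."""
    return sum(review_verdicts) / len(review_verdicts)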
Climate of The Past | 2012
R. J. H. Dunn; Kate M. Willett; Peter W. Thorne; Emma V. Woolley; Imke Durre; Aiguo Dai; D. E. Parker; Russ E. Vose
This paper describes the creation of HadISD: an automatically quality-controlled synoptic resolution dataset of temperature, dewpoint temperature, sea-level pressure, wind speed, wind direction and cloud cover from global weather stations for 1973–2011. The full dataset consists of over 6000 stations, with 3427 long-term stations deemed to have sufficient sampling and quality for climate applications requiring sub-daily resolution. As with other surface datasets, coverage is heavily skewed towards Northern Hemisphere mid-latitudes. The dataset is constructed from a large pre-existing ASCII flatfile data bank that represents over a decade of substantial effort at data retrieval, reformatting and provision. These raw data have had varying levels of quality control applied to them by individual data providers. The work proceeded in several steps: merging stations with multiple reporting identifiers; reformatting to netCDF; quality control; and then filtering to form a final dataset. Particular attention has been paid to maintaining true extreme values where possible within an automated, objective process. Detailed validation has been performed on a subset of global stations and also on UK data using known extreme events to help finalise the QC tests. Further validation was performed on a selection of extreme events world-wide (Hurricane Katrina in 2005, the cold snap in Alaska in 1989 and heat waves in SE Australia in 2009). Some very initial analyses are performed to illustrate some of the types of problems to which the final data could be applied. Although the filtering has removed the poorest station records, no attempt has been made to homogenise the data thus far, due to the complexity of retaining the true distribution of high-resolution data when applying adjustments. Hence non-climatic, time-varying errors may still exist in many of the individual station records and care is needed in inferring long-term trends from these data. This dataset will allow the study of high frequency variations of temperature, pressure and humidity on a global basis over the last four decades. Both individual extremes and the overall population of extreme events could be investigated in detail to allow for comparison with past and projected climate. A version-control system has been constructed for this dataset to allow for the clear documentation of any updates and corrections in the future.
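The Python sketch below illustrates the final filtering step in spirit: keeping only stations whose synoptic record over 1973–2011 is long and complete enough for sub-daily climate work. The completeness thresholds are placeholder assumptions, not HadISD's actual selection criteria.

from datetime import datetime

def is_long_term_station(report_times, start=datetime(1973, 1, 1),
                         end=datetime(2011, 12, 31),
                         min_years_span=35, min_reports_per_year=300):
    """report_times: datetimes of a station's synoptic reports."""
    in_window = [t for t in report_times if start <= t <= end]
    if not in_window:
        return False
    span_years = (max(in_window) - min(in_window)).days / 365.25
    years_covered = {t.year for t in in_window}
    mean_reports = len(in_window) / len(years_covered)
    return span_years >= min_years_span and mean_reports >= min_reports_per_year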
Journal of Climate | 2004
Melissa Free; J. K. Angell; Imke Durre; John R. Lanzante; Thomas C. Peterson; Dian J. Seidel
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960–97 a...
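The Python sketch below implements the first-difference construction described above on a synthetic station-by-year array: year-to-year differences are computed at each station (with years dropped around suspected discontinuities represented as missing values), averaged across stations, and then accumulated into a large-scale mean anomaly series. It is a schematic illustration, not the authors' processing code.

import numpy as np

def first_difference_mean(station_series):
    """station_series: 2-D array (stations x years) with np.nan for missing or
    dropped years. Returns a large-scale mean anomaly time series."""
    diffs = np.diff(station_series, axis=1)           # year-to-year differences
    mean_diff = np.nanmean(diffs, axis=0)             # average across stations
    mean_diff = np.where(np.isnan(mean_diff), 0.0, mean_diff)
    series = np.concatenate([[0.0], np.cumsum(mean_diff)])
    return series - np.nanmean(series)                # re-center as anomalies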