Publication


Featured research published by Steven R. Hanna.


Archive | 1982

Handbook on atmospheric diffusion

Steven R. Hanna; G.A. Briggs; R.P. Hosker

Basic meteorological concepts are covered, as well as plume rise, source effects, and diffusion models. Chapters are included on cooling tower plumes and urban diffusion. Suggestions are given for calculating diffusion in special situations, such as for instantaneous releases, over complex terrain, over long distances, and during times when chemical reactions or dry or wet deposition are important.
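As a generic illustration of the kind of diffusion model covered by such a handbook (not the handbook's own notation), here is a minimal sketch of the classic Gaussian plume formula with ground reflection; the source strength Q, wind speed u, effective release height H, and dispersion coefficients sigma_y and sigma_z are assumed example inputs.

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration (mass per volume)."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 100 g/s release, 5 m/s wind, ground-level receptor on the plume centerline.
print(gaussian_plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0, sigma_y=60.0, sigma_z=30.0))
```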


Atmospheric Environment | 1989

Confidence limits for air quality model evaluations, as estimated by bootstrap and jackknife resampling methods

Steven R. Hanna

Air quality models are used to make decisions regarding the construction of industrial plants, the types of fuel that will be burnt, and the types of pollution control devices that will be used. It is important to know the uncertainties that are associated with these model predictions. Standard analytical methods found in elementary statistics textbooks for estimating uncertainties are generally not applicable, since the distributions of performance measures related to air quality concentrations are not easily transformed to a Gaussian shape. This paper suggests several possible resampling procedures that can be used to calculate uncertainties or confidence limits on air quality model performance. In these resampling methods, many new data sets are drawn from the original data set using an empirical set of rules. A few alternate forms of the so-called bootstrap and jackknife resampling procedures are tested using a concocted data set with a Gaussian parent distribution, with the result that the jackknife is the most efficient procedure to apply, although its confidence bounds are slightly overestimated. The resampling procedures are then applied to predictions by seven air quality models for the Carpinteria coastal dispersion experiment. Confidence intervals on the fractional mean bias and the normalized mean square error are calculated for each model and for differences between models. It is concluded that these uncertainties are sometimes so large for data sets consisting of about 20 elements that it cannot be stated with 95% confidence that the performance measure for the ‘best’ model is significantly different from that for another model.
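A minimal sketch of the bootstrap idea described above, assuming paired observed and predicted concentrations and the commonly used definition of fractional bias; the paper's exact resampling rules and performance-measure definitions may differ.

```python
import numpy as np

def fractional_bias(co, cp):
    """FB = (mean(Co) - mean(Cp)) / (0.5 * (mean(Co) + mean(Cp)))."""
    return (co.mean() - cp.mean()) / (0.5 * (co.mean() + cp.mean()))

def bootstrap_ci(co, cp, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence limits on FB from paired (obs, pred) samples."""
    rng = np.random.default_rng(seed)
    n = len(co)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample observation/prediction pairs with replacement
        stats.append(fractional_bias(co[idx], cp[idx]))
    return np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])

# Example with a small synthetic data set (about 20 elements, as in the paper's data sets).
rng = np.random.default_rng(1)
obs = rng.lognormal(mean=1.0, sigma=0.5, size=20)
pred = obs * rng.lognormal(mean=0.1, sigma=0.3, size=20)
print(fractional_bias(obs, pred), bootstrap_ci(obs, pred))
```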


Atmospheric Environment. Part A. General Topics | 1993

Hazardous gas model evaluation with field observations

Steven R. Hanna; Joseph C. Chang; David G. Strimaitis

Fifteen hazardous gas models were evaluated using data from eight field experiments. The models include seven publicly available models (AFTOX, DEGADIS, HEGADAS, HGSYSTEM, INPUFF, OB/DG and SLAB), six proprietary models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST and TRACE), and two “benchmark” analytical models (the Gaussian Plume Model and the analytical approximations to the Britter and McQuaid Workbook nomograms). The field data were divided into three groups—continuous dense gas releases (Burro LNG, Coyote LNG, Desert Tortoise NH3-gas and aerosols, Goldfish HF-gas and aerosols, and Maplin Sands LNG), continuous passive gas releases (Prairie Grass and Hanford), and instantaneous dense gas releases (Thorney Island freon). The dense gas models that produced the most consistent predictions of plume centerline concentrations across the dense gas data sets are the Britter and McQuaid, CHARM, GASTAR, HEGADAS, HGSYSTEM, PHAST, SLAB and TRACE models, with relative mean biases of about ±30% or less and magnitudes of relative scatter that are about equal to the mean. The dense gas models tended to overpredict the plume widths and underpredict the plume depths by about a factor of two. All models except GASTAR, TRACE, and the area source version of DEGADIS perform fairly well with the continuous passive gas data sets. Some sensitivity studies were also carried out. It was found that three of the more widely used publicly available dense gas models (DEGADIS, HGSYSTEM and SLAB) predicted increases in concentration of about 70% as roughness length decreased by an order of magnitude for the Desert Tortoise and Goldfish field studies. It was also found that none of the dense gas models that were considered came close to simulating the observed factor of two increase in peak concentrations as averaging time decreased from several minutes to 1 s. Because of their assumption that a concentrated dense gas core existed that was unaffected by variations in averaging time, the dense gas models predicted, at most, a 20% increase in concentrations for this variation in averaging time.
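For reference, a hedged sketch of performance measures of the kind used in such evaluations, expressing relative mean bias and relative scatter as the commonly used fractional bias (FB) and normalized mean square error (NMSE), plus the factor-of-two fraction (FAC2); the paper's exact formulations may differ.

```python
import numpy as np

def fb(co, cp):
    """Fractional bias; |FB| near 0.3 corresponds to a relative mean bias of roughly 30%."""
    return (co.mean() - cp.mean()) / (0.5 * (co.mean() + cp.mean()))

def nmse(co, cp):
    """Normalized mean square error; NMSE near 1 means scatter comparable to the mean."""
    return np.mean((co - cp) ** 2) / (co.mean() * cp.mean())

def fac2(co, cp):
    """Fraction of predictions within a factor of two of the observations."""
    ratio = cp / co
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Example with arbitrary paired concentrations (same units for observed and predicted).
co = np.array([10.0, 25.0, 40.0, 5.0])
cp = np.array([12.0, 18.0, 55.0, 9.0])
print(fb(co, cp), nmse(co, cp), fac2(co, cp))
```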


Atmospheric Environment | 2001

Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain

Steven R. Hanna; Zhigang Lu; H. Christopher Frey; Neil Wheeler; Jeffrey M. Vukovich; Saravanan Arunachalam; Mark E. Fernau; D. Alan Hansen

The photochemical grid model UAM-V has been used by regulatory agencies to make decisions concerning emissions controls, based on studies of the July 1995 ozone episode in the eastern US. The current research concerns the effect of uncertainties in UAM-V input variables (emissions, initial and boundary conditions, meteorological variables, and chemical reactions) on the uncertainties in UAM-V ozone predictions. Uncertainties of 128 input variables have been estimated, and most range from about 20% to a factor of two. One hundred Monte Carlo runs, each with new resampled values of each of the 128 input variables, have been made for given sets of median emissions assumptions. Emphasis is on the maximum hourly-averaged ozone concentration during the 12–14 July 1995 period. The distribution function of the 100 Monte Carlo predicted domain-wide maximum ozone concentrations is consistently close to log-normal, with a 95% uncertainty range extending over plus and minus a factor of about 1.6 from the median. Uncertainties in ozone predictions are found to be most strongly correlated with uncertainties in the NO2 photolysis rate. Also important are wind speed and direction, relative humidity, cloud cover, and biogenic VOC emissions. Differences in median predicted maximum ozone concentrations for three alternate emissions control assumptions were investigated, with the results that (1) the suggested year-2007 emissions changes would likely be effective in reducing concentrations from those for the year-1995 actual emissions, (2) an additional 50% NOx emissions reduction would likely be effective in further reducing concentrations, and (3) an additional 50% VOC emissions reduction may not be effective in further reducing concentrations.
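A minimal sketch of the Monte Carlo procedure described above: each run draws new values of the uncertain inputs from assumed distributions, a model run returns the domain-wide maximum ozone, and the spread of the maxima is summarized. The function run_uamv and the input distributions are hypothetical stand-ins, not the study's actual interface or uncertainty estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
N_RUNS = 100

def sample_inputs():
    """Draw one realization of a few uncertain inputs (illustrative distributions only)."""
    return {
        "no2_photolysis_scale": rng.lognormal(mean=0.0, sigma=0.20),
        "biogenic_voc_scale": rng.lognormal(mean=0.0, sigma=0.35),
        "wind_speed_scale": rng.lognormal(mean=0.0, sigma=0.15),
    }

def run_uamv(inputs):
    """Hypothetical stand-in returning a domain-wide maximum ozone concentration (ppb)."""
    base = 120.0
    return (base
            * inputs["no2_photolysis_scale"] ** 0.8
            * inputs["biogenic_voc_scale"] ** 0.3
            * inputs["wind_speed_scale"] ** -0.2)

maxima = np.array([run_uamv(sample_inputs()) for _ in range(N_RUNS)])
lo, med, hi = np.quantile(maxima, [0.025, 0.5, 0.975])
print(f"median {med:.0f} ppb, 95% range {lo:.0f} to {hi:.0f} ppb")
```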


Journal of Applied Meteorology | 2001

Evaluations of mesoscale models' simulations of near-surface winds, temperature gradients, and mixing depths

Steven R. Hanna; Ruixin Yang

Mesoscale meteorological models are being used to provide inputs of winds, vertical temperature and stability structure, mixing depths, and other parameters to atmospheric transport and dispersion models. An evaluation methodology is suggested and tested with simulations available from four mesoscale meteorological models (the Fifth-Generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model, the Regional Atmospheric Modeling System, the Coupled Ocean–Atmosphere Mesoscale Prediction System, and the Operational Multiscale Environmental Model with Grid Adaptivity). These models have been applied by others to time periods of several days in three areas of the United States (the Northeast, the Lake Michigan area, and central California) and in Iraq. The authors' analysis indicates that the typical root-mean-square error (rmse) of hourly averaged surface wind speed is about 2–3 m s⁻¹ over a wide range of wind speeds for the models and geographic regions studied. The rmse of surface wind direction is about 50° for wind speeds of about 3 or 4 m s⁻¹. It is suggested that these uncertainties in wind speeds and directions are primarily due to random turbulent processes that cannot be simulated by the models and to subgrid variations in terrain and land use, and therefore it is unlikely that the errors can be reduced much further. Model simulations of daytime mixing depths are shown to be often within 20% of observations. However, the models tend to predict weaker inversions than are observed in the interfacial layers capping the mixing depth. The models also underestimate the vertical temperature gradients in the lowest 100 m during the nighttime, which implies that the simulated boundary-layer stability is not as great as that observed, suggesting that the rate of vertical dispersion may be overestimated. The models would be able to simulate better the structure of shallow inversions if their vertical grid sizes were smaller.
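A short sketch of the evaluation statistics discussed above: rmse of wind speed, and rmse of wind direction with differences wrapped into the ±180° range so that, for example, 350° versus 10° counts as a 20° error. The sample values are assumed, not data from the study.

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error of a scalar quantity such as wind speed."""
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

def rmse_direction(pred_deg, obs_deg):
    """Rmse of wind direction, wrapping differences into the -180..180 degree range."""
    diff = (np.asarray(pred_deg) - np.asarray(obs_deg) + 180.0) % 360.0 - 180.0
    return np.sqrt(np.mean(diff ** 2))

print(rmse([4.0, 6.5, 3.2], [5.5, 5.0, 2.0]))        # m/s
print(rmse_direction([350.0, 95.0], [10.0, 120.0]))  # degrees (350 vs 10 counts as 20)
```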


Journal of Applied Meteorology | 1981

Lagrangian and Eulerian Time-Scale Relations in the Daytime Boundary Layer

Steven R. Hanna

Lagrangian (neutral balloon) and Eulerian (tower and aircraft) turbulence observations were made in the daytime mixed layer near Boulder, Colorado. Average sampling time was ∼25 min. The average Lagrangian time scale is ∼70 s, and the average ratio of Lagrangian to Eulerian time scales (β = T_L/T_E) is about 1.7. The ratio β is inversely proportional to the turbulence intensity i; these data support the formula β = 0.7/i. The Lagrangian time scale for the vertical component of turbulence at heights above ∼100 m is given by the formula T_L = 0.17 z_i/σ_w, where z_i is the mixing depth and σ_w is the standard deviation of vertical velocity. This formula is valid for the horizontal components of turbulence at all heights in the mixed layer. Lagrangian spectra in the inertial subrange are best represented by the formula F_L(n) = 0.2 ε n⁻².
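A worked illustration of the relations quoted above, using assumed example values for mixing depth, vertical velocity standard deviation, and turbulence intensity.

```python
# Assumed example values, not observations from the paper.
z_i = 1500.0     # mixing depth z_i (m)
sigma_w = 1.2    # standard deviation of vertical velocity (m/s)
i = 0.4          # turbulence intensity

T_L = 0.17 * z_i / sigma_w   # Lagrangian time scale (s), roughly 210 s here
beta = 0.7 / i               # ratio of Lagrangian to Eulerian time scales, about 1.8 here
print(T_L, beta)
```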


Journal of Applied Meteorology | 1989

Hybrid Plume Dispersion Model (HPDM) Development and Evaluation

Steven R. Hanna; Robert J. Paine

The Hybrid Plume Dispersion Model (HPDM) was developed for application to tall-stack plumes dispersing over nearly flat terrain. Emphasis is on convective and high-wind conditions. The meteorological component is based on observational and modeling studies of the planetary boundary layer. The dispersion estimates for the convective boundary layer (CBL) were developed from laboratory experiments and field studies and incorporate convective scaling, i.e., the convective velocity scale, w*, and the CBL height, h, which are the relevant velocity and length scales of the turbulence. The model has a separate component to handle the dispersion of highly buoyant plumes that remain near the top of the CBL and resist downward mixing. For convective conditions, the vertical concentration distribution is non-Gaussian, but for neutral and stable conditions it is assumed to be Gaussian. The HPDM performance is assessed with extensive ground-level concentration measurements around the Kincaid, Illinois, and Bul...
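A hedged sketch of the convective scaling referred to above: the convective velocity scale w* computed from the surface kinematic heat flux, the CBL height h, and the mean temperature. The numerical values are assumed examples, not inputs from the paper.

```python
# Assumed example values, not inputs from the paper.
g = 9.81          # gravitational acceleration (m/s^2)
h = 1200.0        # CBL height (m)
wtheta_s = 0.2    # surface kinematic heat flux (K m/s)
T = 300.0         # mean boundary-layer temperature (K)

w_star = (g * h * wtheta_s / T) ** (1.0 / 3.0)   # convective velocity scale (m/s)
print(w_star)   # about 2 m/s, a typical daytime value
```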


Archive | 1984

Applications in Air Pollution Modeling

Steven R. Hanna

In the first four chapters, up-to-date information was given on the meteorological structure of the planetary boundary layer (PBL). The purpose of the last three chapters was to discuss how this new knowledge of the PBL could be used to develop new and improved diffusion theories and models. Chapters 5 and 6 dealt specifically with diffusion in the convective boundary layer and the stable boundary layer, respectively. Clearly they have pre-empted much of what I might say, and I will only briefly cover these areas for the sake of completeness.


Atmospheric Environment | 2002

Comparisons of model simulations with observations of mean flow and turbulence within simple obstacle arrays

Steven R. Hanna; S Tehranian; B Carissimo; R.W Macdonald; Rainald Löhner

A three-dimensional numerical code with unstructured tetrahedral grids, the finite element flow solver (FEFLO), was used to simulate the mean flow and the turbulence within obstacle-array configurations consisting of simple cubical elements. Model simulations were compared with observations from a hydraulic water flume at the University of Waterloo. FEFLO was run in large-eddy simulation mode, using the Smagorinsky closure model, to resolve the larger scales of the flow field. There were four experimental test cases consisting of square and staggered arrays of cubical obstacles with separations of 1.5 and 0.5 obstacle heights. The mean velocity profile for the incoming neutral boundary layer was approximated by a power law, and the turbulent fluctuations in the approach flow were generated using a Monte Carlo model. The numerical simulations were able to capture, within 40% on average, the general characteristics of the mean flow and the turbulence, such as the strong mean wind shear and maximum turbulence at the elevation of the obstacles, and the nearly constant mean wind and roughly 50% reduction in turbulent velocity within the obstacle canopy. As expected, the mean wind speeds were significantly decreased (by about a factor of two or three) in the array with closer obstacle packing. It was found that a “street canyon” effect, with higher flow speeds in between the obstacles, was more obvious for the square arrays than for the staggered arrays.
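A minimal sketch of the Smagorinsky closure used in such large-eddy simulations: the subgrid eddy viscosity ν_t = (C_s Δ)² |S|, with |S| the magnitude of the resolved strain rate. The velocity-gradient values and filter width below are assumed examples, not FEFLO output.

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Subgrid eddy viscosity from a 3x3 resolved velocity-gradient tensor du_i/dx_j."""
    S = 0.5 * (grad_u + grad_u.T)          # resolved strain-rate tensor S_ij
    S_mag = np.sqrt(2.0 * np.sum(S * S))   # |S| = sqrt(2 S_ij S_ij)
    return (c_s * delta) ** 2 * S_mag

grad_u = np.array([[0.1, 0.5, 0.0],
                   [0.0, -0.1, 0.2],
                   [0.0, 0.0, 0.0]])       # assumed velocity gradients (1/s)
print(smagorinsky_nu_t(grad_u, delta=0.05))   # filter width 5 cm, roughly flume scale
```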


Journal of Applied Meteorology | 1983

Lateral Turbulence Intensity and Plume Meandering During Stable Conditions

Steven R. Hanna

There is much evidence in the literature for the presence of mesoscale lateral meanders in the stable nighttime boundary layer. These meanders result in relatively high lateral turbulence intensities and diffusion rates when averaged over an hour. Anemometer data from 17 overnight experiments at Cinder Cone Butte in Idaho are analyzed to show that the dominant period of the mesoscale meanders is about two hours. Lidar cross sections of tracer plumes from these same experiments show that the hourly average σy is often dominated by meandering. Since meandering is not always observed for given meteorological conditions, it is suggested that nighttime diffusion cannot be accurately predicted without using on-site observations of wind fluctuations. If no turbulence data are available, an empirical formula is suggested that predicts the hourly average lateral turbulence intensity as a function of wind speed and the hour-to-hour variation in wind direction.

Collaboration


Dive into Steven R. Hanna's collaborations.

Top Co-Authors

Re Britter (Massachusetts Institute of Technology)
Michael J. Brown (Los Alamos National Laboratory)
D. Alan Hansen (Electric Power Research Institute)
F.A. Gifford (Oak Ridge National Laboratory)