Tommy Wright
Oak Ridge National Laboratory
Publications
Featured research published by Tommy Wright.
Report (published Aug 1997) | 1997
Tommy Wright; Patricia S Hu; Jennifer Young; An Lu
For highway maintenance and planning purposes, it is desirable to characterize each road segment by its traffic flow [such as the annual average daily traffic (AADT) and the AADT for each vehicle class] and by the weight distribution of the vehicles that travel on it [such as the annual average daily equivalent single axle loadings (ESAL) and the annual average daily weight per vehicle for each vehicle class]. As with almost any data collection effort, the monitoring data suffer from errors from many sources. This report summarizes the results of a two-year empirical research effort, sponsored by the Federal Highway Administration, (i) to study and characterize the variability in the traffic data (volume, classification, and weight) from the continuously monitored road segments, and (ii) to study the extent to which this variability is transferred to, and affects the precision of, the data produced from the road segments that are monitored only one or two days each year. The ultimate hope is not only that states will eventually be able to publish an estimate of a characteristic such as AADT for each road segment, but also that each estimate will be accompanied by a statement of how good the estimate is, in terms of the estimated variability or precision, which will likely be expressed as a coefficient of variation (i.e., the quotient of a standard deviation and a mean). This report provides highlights of research reported in five working papers.
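The coefficient of variation mentioned above is simply the standard deviation divided by the mean. A minimal sketch, using hypothetical daily traffic counts (not data from the report):

```python
import statistics

def coefficient_of_variation(daily_counts):
    """CV = standard deviation / mean, the precision measure the report describes."""
    mean = statistics.mean(daily_counts)
    sd = statistics.pstdev(daily_counts)  # population standard deviation
    return sd / mean

# Hypothetical daily traffic counts for one road segment over a week
counts = [980, 1050, 1120, 990, 1010, 870, 860]
aadt_estimate = statistics.mean(counts)       # simple average daily traffic
cv = coefficient_of_variation(counts)         # relative precision of the counts
```

A segment with a small CV has stable day-to-day counts, so a short monitoring window estimates its AADT well; a large CV signals that one or two monitored days may be unrepresentative.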
Transportation Research Record | 1998
Patricia S Hu; Tommy Wright; Tony Esteve
Traffic characteristics, such as the annual average daily traffic (AADT) and the AADT for each vehicle class, are essential for highway maintenance and planning. In practice, selected road segments are monitored continuously every day of the year to identify their traffic characteristics. A sample of the remaining road segments is monitored for 1 or 2 days each year, and the resulting data are adjusted (by using factors based on data collected from the continuously monitored road segments) to produce estimates of annual average daily traffic characteristics. A simulation study empirically considered how the precision of an estimate from a continuously monitored site compares with the precision of an estimate from a short-term monitored site. The original estimates of traffic characteristics (i.e., AADT and AADT by vehicle class) treating the site as a continuously monitored site are on average quite close to, but smaller than, the simulated estimates treating the site as a short-term monitored site. The original estimates (continuous monitoring) appear to be more precise, on average, than the simulated estimates (short-term monitoring). This decrease in precision typically occurs for vehicle classes that account for less than 1 percent of the daily traffic volume, suggesting that these less-common vehicle classes could be combined to achieve reliable AADT estimates.
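The simulation idea can be sketched as follows: treat a year of daily counts from a continuously monitored site as the truth, then repeatedly "monitor" it for only 2 days and look at the spread of the resulting estimates. This is a simplified illustration with synthetic counts and no adjustment factors, not the study's actual procedure:

```python
import random
import statistics

random.seed(42)

# Synthetic year of daily counts: noise plus a weekday/weekend pattern
year = [1000 + random.gauss(0, 150) + (200 if d % 7 < 5 else -300)
        for d in range(365)]
true_aadt = statistics.mean(year)  # the continuous-monitoring estimate

# Simulated short-term monitoring: estimate AADT from 2 randomly chosen days,
# repeated many times to see the precision of the short-term estimator
estimates = [statistics.mean(random.sample(year, 2)) for _ in range(1000)]
short_term_mean = statistics.mean(estimates)  # close to true_aadt on average
short_term_sd = statistics.stdev(estimates)   # much larger spread per estimate
```

The short-term estimates are roughly unbiased but individually far less precise, which is the comparison the simulation study quantifies.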
Statistical Methods and the Improvement of Data Quality | 1983
Tommy Wright; How J. Tsao
The bibliography begins with a definition of frames and covers construction of the sampling frame, determination of frame units, development of a frame, validation of a frame, administration of a frame, frame maintenance procedures, area frames, imperfect frames, Szameitat and Schaeffer's model for analyzing imperfect frames, sampling from imperfect list frames, and multiple frame surveys. The bibliography covers the period from 1945 through 1981.
The American Statistician | 1990
Tommy Wright
Abstract When zero defectives are observed in a sample from a finite universe, investigators and decision makers are often tempted to (and want to) declare that there are zero defectives in the finite universe with high confidence. By focusing on the more general question of upper bounds for confidence coefficients of upper confidence bounds, we are able to show explicitly, as a special case, that our confidence that there are zero defectives in the universe when zero defectives appear in the sample is bounded by the sampling fraction. This note is meant to reassure those who seek to help others resist the aforementioned temptation.
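The bound can be checked directly. Under simple random sampling without replacement, if even one defective exists in the universe, the chance that a sample of size n misses it is (N − n)/N, so the confidence with which one can assert "zero defectives" after a clean sample cannot exceed the sampling fraction n/N. A minimal sketch:

```python
from math import comb

def prob_zero_in_sample(N, n, D):
    """P(a simple random sample of size n contains none of the D defectives)."""
    if D > N - n:
        return 0.0
    return comb(N - D, n) / comb(N, n)

N, n = 1000, 100
# If exactly one defective exists, the chance a sample of size n misses it:
miss = prob_zero_in_sample(N, n, 1)  # equals (N - n) / N
confidence_bound = 1 - miss          # equals the sampling fraction n / N
```

With a 10% sample, one can never be more than 10% confident that the universe is defect-free, no matter how clean the sample looks.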
The American Statistician | 2012
Tommy Wright
We present a surprising though obvious result that seems to have been unnoticed until now. In particular, we demonstrate the equivalence of two well-known problems—the optimal allocation of the fixed overall sample size n among L strata under stratified random sampling and the optimal allocation of the H = 435 seats among the 50 states for apportionment of the U.S. House of Representatives following each decennial census. In spite of the strong similarity manifest in the statements of the two problems, they have not been linked and they have well-known but different solutions; one solution is not explicitly exact (Neyman allocation), and the other (equal proportions) is exact. We give explicit exact solutions for both and note that the solutions are equivalent. In fact, we conclude by showing that both problems are special cases of a general problem. The result is significant for stratified random sampling in that it explicitly shows how to minimize sampling error when estimating a total T_Y while keeping the final overall sample size fixed at n; this is usually not the case in practice with Neyman allocation, where the resulting final overall sample size might be near n + L after rounding. An example reveals that controlled rounding with Neyman allocation does not always lead to the optimum allocation, that is, an allocation that minimizes variance.
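The method of equal proportions (Huntington–Hill) referenced above can be sketched as a priority-queue allocation: every state starts with one seat, and each remaining seat goes to the state with the largest priority value p / √(k(k+1)), where k is its current seat count. The three-state populations below are hypothetical, not census data:

```python
import heapq
from math import sqrt

def equal_proportions(populations, seats):
    """Method of equal proportions (Huntington-Hill) apportionment.

    Each state first receives one seat; remaining seats go, one at a time,
    to the state with the highest priority p / sqrt(k * (k + 1)),
    where k is that state's current seat count.
    """
    n_states = len(populations)
    alloc = [1] * n_states
    # Max-heap via negated priorities: (-priority, state index)
    heap = [(-p / sqrt(1 * 2), i) for i, p in enumerate(populations)]
    heapq.heapify(heap)
    for _ in range(seats - n_states):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        k = alloc[i]
        heapq.heappush(heap, (-populations[i] / sqrt(k * (k + 1)), i))
    return alloc

# Hypothetical three-state example, 10 seats
print(equal_proportions([6000, 3000, 1000], 10))  # → [6, 3, 1]
```

The paper's point is that the same priority-value mechanism, with the appropriate "size" in place of population, yields an exact optimal sample allocation among strata—unlike rounded Neyman allocation.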
Communications in Statistics-theory and Methods | 1990
Tommy Wright
There can be gains in estimation efficiency over equal probability sampling methods when one makes use of auxiliary information through probability proportional to size with replacement (ppswr) sampling methods. The usual method is simple to execute, but might lead to more than one appearance in the sample for any particular unit. When a suitable size variable x is not available, one may still know how to rank the units reasonably well relative to the unknown y values before sample selection. When such ranking is possible, we introduce a simple and efficient sampling plan using the ranks as the unknown x measures of size. The proposed sampling plan is similar to, has the simplicity of, and has no greater sampling variance than with-replacement sampling, but is without replacement.
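The usual with-replacement scheme that the paper improves upon can be sketched as follows, with ranks standing in for the unknown size measures. This illustrates only the baseline ppswr selection, not the paper's without-replacement plan; the units and ranks are hypothetical:

```python
import random

random.seed(1)

def ppswr_by_rank(ranks, m):
    """Draw m units with replacement, with selection probability
    proportional to each unit's rank (used in place of an
    unknown size measure x)."""
    units = list(ranks)
    weights = [ranks[u] for u in units]
    return random.choices(units, weights=weights, k=m)

# Hypothetical units ranked by presumed size of their unknown y-values
ranks = {"A": 1, "B": 2, "C": 3, "D": 4}
sample = ppswr_by_rank(ranks, 3)  # unit D is 4x as likely per draw as unit A
```

Because draws are with replacement, a unit can appear more than once—exactly the drawback the paper's rank-based, without-replacement plan removes while keeping the same simplicity and no greater variance.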
The American Statistician | 1992
Tommy Wright
Abstract Using an elementary but important identity, this article presents a simple proof that Pearson's correlation coefficient r is always between −1 and 1 and that all points (x_i, y_i), i = 1, …, n, fall on a straight line if and only if r² = 1. The presentation is suitable for a wide audience with minimal background.
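The article's exact identity is not reproduced here, but a standard elementary argument along these lines (an assumption about the identity used) runs as follows. With standardized values u_i = (x_i − x̄)/s_x and v_i = (y_i − ȳ)/s_y, we have (1/n)Σu_i² = (1/n)Σv_i² = 1 and r = (1/n)Σu_i v_i, so

```latex
0 \le \frac{1}{n}\sum_{i=1}^{n}(u_i \mp v_i)^2
  = \frac{1}{n}\sum_{i=1}^{n} u_i^2
    \mp \frac{2}{n}\sum_{i=1}^{n} u_i v_i
    + \frac{1}{n}\sum_{i=1}^{n} v_i^2
  = 2(1 \mp r),
```

hence −1 ≤ r ≤ 1, with r² = 1 exactly when u_i = ±v_i for every i, that is, when all points lie on one straight line.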
The American Statistician | 1983
How Tsao; Tommy Wright
Abstract A simple tool, called the maximum ratio, is suggested that serves both as a measure of closeness among comparable estimates and as the basis of a test for determining whether at least one of the estimates under comparison is more than 100α% away from the target parameter being estimated, where 0 < α < 1.
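A minimal sketch of the idea, assuming the maximum ratio is the ratio of the largest to the smallest estimate (an assumption; the paper's exact formulation may differ). If every estimate were within 100α% of the target T, all would lie in [T(1 − α), T(1 + α)], so the maximum ratio could not exceed (1 + α)/(1 − α); a larger observed ratio therefore flags at least one discrepant estimate:

```python
def maximum_ratio(estimates):
    """Ratio of largest to smallest estimate (assumed definition;
    the paper's exact formulation may differ)."""
    return max(estimates) / min(estimates)

# Three comparable estimates of the same target parameter (hypothetical)
ests = [98.0, 102.0, 105.0]
mr = maximum_ratio(ests)

# If all estimates were within 100*alpha% of the target, the maximum
# ratio could not exceed (1 + alpha) / (1 - alpha)
alpha = 0.05
threshold = (1 + alpha) / (1 - alpha)
flagged = mr > threshold  # True would suggest an estimate is off by > 100*alpha%
```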
Linear Algebra and its Applications | 1985
Tommy Wright; How Tsao
Abstract A direct alternative proof of Keyfitz's optimal solution to the problem of maximizing the probability of retention in sampling on a second occasion is given, using techniques of elementary linear algebra. The proof and comments help one to better understand Keyfitz's solution, and they clearly demonstrate that the closed-form solution of Keyfitz is one of a possible infinity of solutions offered by a linear programming approach. We also give one of those other solutions offered by linear programming, which is easy to obtain by hand calculation using only the operation of subtraction.
Journal of Quality Technology | 1985
Tommy Wright; How Tsao
Some useful results about simple random samples are given that are not generally known, or that are often assumed to be true but with some uncertainty, by the experimenter. Such results, when called to the attention of users of sampling techniques, should lead…