
Publication


Featured research published by Jay Devore.


Communications in Statistics - Theory and Methods | 1977

A note on the randomized response technique

Jay Devore

It is pointed out that the usual estimators for the parameters of a randomized response model are not, contrary to popular belief, maximum likelihood estimators.
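The note does not reproduce the model here, but the point can be illustrated with Warner's (1965) randomized response design (an assumption on my part; the sample numbers below are made up). One standard way to see why the usual moment-type estimator cannot in general be the maximum likelihood estimator: it can fall outside [0, 1], which an MLE of a proportion never does.

```python
# Sketch of the usual estimator for Warner's (1965) randomized response
# design (illustrative; the note does not specify its model). With
# probability p the respondent answers the sensitive question, otherwise
# its complement; if pi is the sensitive-trait proportion, then
# P(yes) = lam = p*pi + (1 - p)*(1 - pi).

def warner_estimate(n_yes: int, n: int, p: float) -> float:
    """Usual estimator: pi_hat = (lam_hat + p - 1) / (2p - 1), p != 0.5."""
    lam_hat = n_yes / n
    return (lam_hat + p - 1) / (2 * p - 1)

print(warner_estimate(60, 100, 0.7))  # close to 0.75

# With few "yes" responses the estimate goes negative, so it cannot
# coincide with a (range-respecting) maximum likelihood estimator:
print(warner_estimate(5, 100, 0.9))   # negative
```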


Biometrics | 1983

Probability and Statistics for Engineering and the Sciences.

G. M. Clarke; Jay Devore

1. OVERVIEW AND DESCRIPTIVE STATISTICS. Populations, Samples, and Processes. Pictorial and Tabular Methods in Descriptive Statistics. Measures of Location. Measures of Variability.
2. PROBABILITY. Sample Spaces and Events. Axioms, Interpretations, and Properties of Probability. Counting Techniques. Conditional Probability. Independence.
3. DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. Random Variables. Probability Distributions for Discrete Random Variables. Expected Values of Discrete Random Variables. The Binomial Probability Distribution. Hypergeometric and Negative Binomial Distributions. The Poisson Probability Distribution.
4. CONTINUOUS RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. Continuous Random Variables and Probability Density Functions. Cumulative Distribution Functions and Expected Values. The Normal Distribution. The Exponential and Gamma Distributions. Other Continuous Distributions. Probability Plots.
5. JOINT PROBABILITY DISTRIBUTIONS AND RANDOM SAMPLES. Jointly Distributed Random Variables. Expected Values, Covariance, and Correlation. Statistics and Their Distributions. The Distribution of the Sample Mean. The Distribution of a Linear Combination.
6. POINT ESTIMATION. Some General Concepts of Point Estimation. Methods of Point Estimation.
7. STATISTICAL INTERVALS BASED ON A SINGLE SAMPLE. Basic Properties of Confidence Intervals. Large-Sample Confidence Intervals for a Population Mean and Proportion. Intervals Based on a Normal Population Distribution. Confidence Intervals for the Variance and Standard Deviation of a Normal Population.
8. TESTS OF HYPOTHESES BASED ON A SINGLE SAMPLE. Hypotheses and Test Procedures. Tests About a Population Mean. Tests Concerning a Population Proportion. P-Values. Some Comments on Selecting a Test.
9. INFERENCES BASED ON TWO SAMPLES. z Tests and Confidence Intervals for a Difference Between Two Population Means. The Two-Sample t Test and Confidence Interval. Analysis of Paired Data. Inferences Concerning a Difference Between Population Proportions. Inferences Concerning Two Population Variances.
10. THE ANALYSIS OF VARIANCE. Single-Factor ANOVA. Multiple Comparisons in ANOVA. More on Single-Factor ANOVA.
11. MULTIFACTOR ANALYSIS OF VARIANCE. Two-Factor ANOVA with Kij = 1. Two-Factor ANOVA with Kij > 1. Three-Factor ANOVA. 2^p Factorial Experiments.
12. SIMPLE LINEAR REGRESSION AND CORRELATION. The Simple Linear Regression Model. Estimating Model Parameters. Inferences About the Slope Parameter β1. Inferences Concerning μ_{Y·x*} and the Prediction of Future Y Values. Correlation.
13. NONLINEAR AND MULTIPLE REGRESSION. Aptness of the Model and Model Checking. Regression with Transformed Variables. Polynomial Regression. Multiple Regression Analysis. Other Issues in Multiple Regression.
14. GOODNESS-OF-FIT TESTS AND CATEGORICAL DATA ANALYSIS. Goodness-of-Fit Tests When Category Probabilities Are Completely Specified. Goodness of Fit for Composite Hypotheses. Two-Way Contingency Tables.
15. DISTRIBUTION-FREE PROCEDURES. The Wilcoxon Signed-Rank Test. The Wilcoxon Rank-Sum Test. Distribution-Free Confidence Intervals. Distribution-Free ANOVA.
16. QUALITY CONTROL METHODS. General Comments on Control Charts. Control Charts for Process Location. Control Charts for Process Variation. Control Charts for Attributes. CUSUM Procedures. Acceptance Sampling.
APPENDIX TABLES. Cumulative Binomial Probabilities. Cumulative Poisson Probabilities. Standard Normal Curve Areas. The Incomplete Gamma Function. Critical Values for t Distributions. Tolerance Critical Values for Normal Population Distributions. Critical Values for Chi-Squared Distributions. t Curve Tail Areas. Critical Values for F Distributions. Critical Values for Studentized Range Distributions. Chi-Squared Curve Tail Areas. Critical Values for the Ryan-Joiner Test of Normality. Critical Values for the Wilcoxon Signed-Rank Test. Critical Values for the Wilcoxon Rank-Sum Test. Critical Values for the Wilcoxon Signed-Rank Interval. Critical Values for the Wilcoxon Rank-Sum Interval. β Curves for t Tests. Answers to Odd-Numbered Exercises. Index.


Archive | 2014

Probability with applications in engineering, science, and technology

Matthew A. Carlton; Jay Devore

Probability. Discrete Random Variables and Probability Distributions. Continuous Random Variables and Probability Distributions. Joint Probability Distributions and Their Applications. The Basics of Statistical Inference. Markov Chains. Random Processes. Introduction to Signal Processing.


Archive | 2011

Inferences Based on Two Samples

Jay Devore; Kenneth N. Berk

Chapters 8 and 9 presented confidence intervals (CIs) and hypothesis testing procedures for a single mean μ, a single proportion p, and a single variance σ². Here we extend these methods to situations involving the means, proportions, and variances of two different population distributions.
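A minimal sketch of the large-sample case for a difference of two means (the summary statistics in the example are hypothetical, not from the chapter):

```python
import math

def two_sample_z_ci(xbar1, s1, n1, xbar2, s2, n2, z=1.96):
    """Large-sample CI for mu1 - mu2:
    (xbar1 - xbar2) +/- z * sqrt(s1^2/n1 + s2^2/n2).
    z = 1.96 corresponds to roughly 95% confidence."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    diff = xbar1 - xbar2
    return diff - z * se, diff + z * se

# Hypothetical summary statistics for two independent samples:
lo, hi = two_sample_z_ci(105.0, 8.0, 50, 100.0, 9.0, 60)
```

The interval is centered at the observed difference of 5.0, and its half-width shrinks as either sample size grows.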


Archive | 2011

Regression and Correlation

Jay Devore; Kenneth N. Berk

The general objective of a regression analysis is to determine the relationship between two (or more) variables so that we can gain information about one of them through knowing values of the other(s). Much of mathematics is devoted to studying variables that are deterministically related. Saying that x and y are related in this manner means that once we are told the value of x, the value of y is completely specified.
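A least-squares sketch makes the deterministic/probabilistic contrast concrete (function name and data are illustrative, not from the chapter): fed a perfectly deterministic linear relation, the fit recovers it exactly; with noise added to y, it would only approximate it.

```python
def least_squares_fit(x, y):
    """Slope and intercept minimizing the sum of squared deviations."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sxy / sxx                # estimated slope
    b0 = ybar - b1 * xbar         # estimated intercept
    return b0, b1

# Deterministic relation y = 2 + 3x is recovered exactly:
print(least_squares_fit([0, 1, 2, 3], [2, 5, 8, 11]))  # (2.0, 3.0)
```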


Archive | 2011

Statistics and Sampling Distributions

Jay Devore; Kenneth N. Berk

This chapter helps make the transition between probability and inferential statistics. Given a sample of n observations from a population, we will be calculating estimates of the population mean, median, standard deviation, and various other population characteristics (parameters).
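A small simulation (the population and sample size below are arbitrary choices of mine) illustrates the central fact about the sampling distribution of the sample mean: its standard deviation is σ/√n.

```python
import random

def simulated_mean_sd(draw, n, reps=4000, seed=1):
    """Approximate the mean and standard deviation of the sampling
    distribution of the sample mean by repeated sampling."""
    rng = random.Random(seed)
    means = [sum(draw(rng) for _ in range(n)) / n for _ in range(reps)]
    m = sum(means) / reps
    sd = (sum((x - m) ** 2 for x in means) / (reps - 1)) ** 0.5
    return m, sd

# Uniform(0, 1) population: mu = 0.5 and sigma = 1/sqrt(12) ~ 0.289,
# so with n = 25 the theory predicts SD of the sample mean ~ 0.0577.
m, sd = simulated_mean_sd(lambda rng: rng.random(), 25)
```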


Archive | 2011

Overview and Descriptive Statistics

Jay Devore; Kenneth N. Berk

Statistical concepts and methods are not only useful but indeed often indispensable in understanding the world around us. They provide ways of gaining new insights into the behavior of many phenomena that you will encounter in your chosen field of specialization.


Archive | 2011

Continuous Random Variables and Probability Distributions

Jay Devore; Kenneth N. Berk

As mentioned at the beginning of Chap. 2, the two important types of random variables are discrete and continuous. In this chapter, we study the second general type of random variable that arises in many applied problems. Sections 3.2 and 3.3 present the basic definitions and properties of continuous random variables, their probability distributions, and their various expected values. The normal distribution, arguably the most important and useful model in all of probability and statistics, is introduced in Sect. 3.4. Sections 3.5 and 3.6 discuss some other continuous distributions that are often used in applied work. In Sect. 3.7, we introduce a method for assessing whether given sample data is consistent with a specified distribution. Section 3.8 presents methods for obtaining the distribution of a rv Y from the distribution of X when the two are related by some equation Y = h(X). The last section of this chapter is dedicated to the simulation of continuous rvs.
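As one concrete instance of the simulation methods mentioned above, here is the inverse-CDF (inverse transform) approach; the exponential example is my choice, not necessarily the chapter's.

```python
import math
import random

def exponential_inverse_cdf(u, lam):
    """F^{-1}(u) = -ln(1 - u)/lam for the exponential(lam) distribution."""
    return -math.log(1.0 - u) / lam

def simulate_exponential(lam, n, seed=0):
    """Inverse transform sampling: if U ~ Uniform(0, 1), then
    F^{-1}(U) has the distribution with cdf F."""
    rng = random.Random(seed)
    return [exponential_inverse_cdf(rng.random(), lam) for _ in range(n)]

xs = simulate_exponential(2.0, 20000)
# The sample mean should be close to the theoretical mean 1/lam = 0.5.
```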


Archive | 2011

Statistical Intervals Based on a Single Sample

Jay Devore; Kenneth N. Berk

A point estimate, because it is a single number, by itself provides no information about the precision and reliability of estimation. Consider, for example, using the statistic X̄ to calculate a point estimate for the true average breaking strength (g) of paper towels of a certain brand, and suppose that x̄ = 9322.7. Because of sampling variability, it is virtually never the case that x̄ = μ.
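The intervals this chapter develops address exactly this. A sketch of the standard one-sample formula follows; only x̄ = 9322.7 comes from the text, while the sample standard deviation, sample size, and critical value are hypothetical illustration values.

```python
import math

def mean_ci(xbar, s, n, crit):
    """xbar +/- crit * s/sqrt(n); crit is t_{alpha/2, n-1}
    (or z_{alpha/2} for a large-sample interval)."""
    half = crit * s / math.sqrt(n)
    return xbar - half, xbar + half

# x-bar from the text; s = 350, n = 36, and crit ~ 2.030
# (approximately t_{.025, 35}) are hypothetical values.
lo, hi = mean_ci(9322.7, 350.0, 36, 2.030)
```

Unlike the bare point estimate, the interval's width directly communicates the precision of the estimation.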


The American Statistician | 2006

Statistics for Business and Economics (9th ed.). David R. Anderson, Dennis J. Sweeney, and Thomas A. Williams

Jay Devore

Chapter 9 (Multiple Sources of Variation) covers three main topics. The first, split plot designs, addresses an important subject that was missing from the first edition. It recognizes the fact that many industrial experiments involve multiple sources of error, either of necessity or by design. Many of these experiments involve experimental factors, such as a reactor temperature, that are difficult to change. Recognition of these situations up front, in the design, and afterward in correctly analyzing the results, is critical. The authors do an excellent job of elucidating these issues by example, and, as is the case throughout, supplement the appropriate ANOVA analyses with corresponding graphical ANOVAs. The remaining sections of this chapter, on variance components analysis based on nested designs, and transmission of error calculations, essentially reproduce corresponding sections of Chapter 17 of the original book. Chapter 10 (Least Squares and Why We Need Designed Experiments) covers essentially the same ground as Chapter 14 of the original, including the basic theory of linear regression, the normal equations, the ANOVA and lack-of-fit tests, confidence regions, sequential sums of squares, and orthogonalization. Section 10.3, The Origin of Experimental Design, contains an excellent discussion of the shortcomings and pitfalls of analyzing historical data. The chapter concludes with a good illustration of nonlinear regression. Appendixes cover a number of useful topics, including the geometry of least squares, matrix formulation of the normal equations, and weighted least squares. Chapters 11 (Modeling, Geometry, and Experimental Design) and 12 (Some Applications of Response Surface Methods) together provide a greatly expanded treatment of response surface methodology compared to the first edition (Chapter 15), with about 100 pages now devoted to this subject compared to the original 30.
There is much here of interest, including the geometry of quadratic polynomials, canonical analysis, and the design information function. Interestingly, the authors dismiss alphabet-optimality (e.g., D-optimality) with a single sentence: “The attempt to compress the information about a design into a single number seems mistaken”—a point well taken, though perhaps a mite too harsh. A few more words of elaboration might have been illuminating. Both central composite designs (CCDs) and Box-Behnken designs are discussed thoroughly. An excellent section on sequential design strategies describes a number of approaches to sequential assembly of designs, including the use of orthogonally blocked CCDs. An extended description of the well-known paper helicopter workshop provides a beautiful illustration of how the sequential approach to experimentation can lead to unexpectedly good solutions. Also noteworthy is a very good discussion on detecting active and inactive factor spaces, and how these can be exploited for multiple response optimization. In short, these two chapters offer a treasure trove of insight and good advice on response surface methods. If there is one area of industrial experimentation that has seen a blossoming of interest and development since the original publication of the book, it is the area of robust process and product design. This is the subject of Chapter 13 (Designing Robust Products and Processes: An Introduction), which consists of essentially two parts. The first, titled Environmental Robustness, deals with variation, or noise, coming from environmental factors. Examples are given of the use of split plot designs, with the process factors taking the role of whole plot factors, and the environmental factors the role of subplot factors. Significant interactions between process and environmental factors are exploited to determine process settings that are robust to the environmental factors.
The second part of the chapter, titled Robustness to Component Variation, deals with the transmitted variation due to variation in maintaining process factor settings. A number of approaches are described, including the use of propagation of error formulas (for known response functions), Taguchi’s inner- and outer-array designs, and a response surface approach. Weaknesses of some of Taguchi’s signal-to-noise ratios are pointed out, but overall, the discussion is refreshingly free of dogmatic statements. Though the chapter is relatively brief, it provides a good overview of the subject. Chapter 14 (Process Control, Forecasting, and Time Series) pulls together two topics, process control and time series, which received brief discussions in the first edition (Section 17.1 and Chapter 18). Although these topics are somewhat peripheral to the central theme of the book, the authors have chosen to expand their treatment somewhat. The process control discussion is broken down in turn into Process Monitoring (read “statistical process control”) and Process Adjustment (read “automated process control”). As tools for statistical process control, Shewhart, exponentially weighted moving average (EWMA), and cumulative sum (CUSUM) charts are all briefly outlined (although the simplified version of CUSUM presented here is not the usual two-parameter version most widely used). The EWMA serves as a bridge to the time series and forecasting discussion, where it is shown that the EWMA is the optimal one-step-ahead forecast for a first-order integrated moving average process. There is also a brief discussion of seasonal models. The book concludes with Chapter 15 (Evolutionary Process Operation), a brief discussion of EVOP that expands slightly on two examples in Chapter 11 of the first edition.
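The EWMA recursion the review refers to can be sketched as follows (a generic illustration of mine, not code or notation from the book under review):

```python
def ewma_forecasts(series, lam):
    """One-step-ahead EWMA forecasts: f_{t+1} = lam*y_t + (1 - lam)*f_t,
    initialized at the first observation. For a first-order integrated
    moving average process this recursion gives the optimal
    one-step-ahead forecast."""
    f = series[0]
    forecasts = []
    for y in series:
        forecasts.append(f)           # forecast of y made from the past
        f = lam * y + (1 - lam) * f   # update after observing y
    return forecasts

# A constant series is forecast perfectly:
print(ewma_forecasts([5.0, 5.0, 5.0], 0.3))  # [5.0, 5.0, 5.0]
```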
In summary, this new edition of Statistics for Experimenters manages to retain both the style and content that made the original version outstanding, while still incorporating a substantial number of upgrades and additions. Among the more noteworthy of these are the following: a more orderly organization; improved approach to graphical ANOVA; enhanced treatment of orthogonal screening designs, including Plackett-Burman designs and Bayesian analysis; stronger emphasis on sequential approaches to experimentation; new treatments of split plot designs and designs for robustness; and a substantially expanded treatment of response surface methods. I feel duty-bound to report that, whereas the content of the book is of superlative quality, the same, unfortunately, cannot be said of its proofreading. Not only does the text have more than its share of relatively innocuous, though annoying, typos, but more seriously, there are a distressing number of more substantive editing glitches that can leave the reader confused or, even worse, misled. To cite just a few examples: (1) There are references at several points in the book to tables in the back that are nonexistent. Apparently, these tables were inadvertently omitted. (2) In Table 4.9 on page 158, illustrating a decomposition of a Latin square, I found three errors, the most egregious being the first column of plus signs, which should actually be equals signs. (3) On page 421, the first formula on the page appears to be totally extraneous, while the second one is missing a sigma-squared on the right-hand side. (4) Careless editing has resulted in nonsense sentences, such as the one on the bottom of page 296. (5) On page 527, a mathematical derivation of the fact that the average prediction variance, averaged over the design points, is pσ²/n, is correct up to the last line, where it suddenly concludes with the erroneous statement that the variance of the average prediction is pσ²/n.
Poor proofreading notwithstanding (one would hope that future printings will correct these glitches), I have to conclude by saying that I think this book belongs on the shelf of every industrial statistician. There is much wisdom and depth here, and the improvements embodied in this new edition are substantial enough to recommend it even to those who already possess the first edition. I also intend to continue to recommend this book as a reference of first choice to my consulting clients, as well as students in my Strategy of Experimentation courses, who are interested in expanding their knowledge of statistical methods and statistical thinking.

Collaboration


Dive into Jay Devore's collaborations.

Top Co-Authors

Kenneth N. Berk (Illinois State University)
Matthew A. Carlton (California Polytechnic State University)
Richard A. Johnson (University of Wisconsin-Madison)
Jan Kmenta (University of Michigan)