Publication


Featured research published by J. Douglas Barrett.


Journal of Business Ethics | 2000

Corporate Philanthropy, Criminal Activity, and Firm Reputation: Is There a Link?

Robert J. Williams; J. Douglas Barrett

This study examined the influence of corporate giving programs on the link between certain categories of corporate crime and corporate reputation. Specifically, firms that violate EPA and OSHA regulations should, to some extent, experience a decline in their reputations, while firms that contribute to charitable causes should see their reputations enhanced. The results of this study support both of these contentions. Further, the results suggest that corporate giving significantly moderates the link between the number of EPA and OSHA violations committed by a firm and its reputation. Thus, while a firm's reputation can be diminished through its violation of various government regulations, the extent of the decline in reputation may be significantly reduced through charitable giving.


Technometrics | 1995

A probabilistic and statistical view of fuzzy methods

Michael Laviolette; John W. Seaman; J. Douglas Barrett; William H. Woodall

Fuzzy set theory has primarily been associated with control theory and with the representation of uncertainty in applications in artificial intelligence. More recently, fuzzy methods have been proposed as alternatives to traditional statistical methods in statistical quality control, linear regression, and forecasting, among other areas. We review some basic concepts of fuzzy methods, point out some philosophical and practical problems, and offer simpler alternatives based on traditional probability and statistical theory. Applications in control theory and statistical quality control serve as our primary examples.
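The flavor of the "simpler alternatives" the authors advocate can be conveyed with a small sketch (my own illustration, not from the paper): a triangular fuzzy membership function for a vague concept is renormalized and read as an ordinary probability distribution, after which familiar summaries such as the mean and variance follow directly. The function name and all numbers below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the probabilistic reading of a fuzzy membership function.
# The triangular "membership" of the vague concept "roughly 10" is renormalized
# so it sums to one and is then treated as a discrete probability distribution.

def triangular_membership(x, a=5.0, b=10.0, c=15.0):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.where(
        (x > a) & (x <= b), (x - a) / (b - a),
        np.where((x > b) & (x < c), (c - x) / (c - b), 0.0),
    )

x = np.linspace(0.0, 20.0, 2001)
mu = triangular_membership(x)

# Probabilistic reading: normalize the memberships into a distribution,
# then compute standard statistical summaries.
p = mu / mu.sum()
mean = np.sum(x * p)
variance = np.sum((x - mean) ** 2 * p)
print(f"mean = {mean:.2f}, variance = {variance:.2f}")  # symmetric triangle: mean 10
```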


Journal of Sports Economics | 2008

Differences in the Success of NFL Coaches by Race: A Different Perspective

Keith D. Malone; Jim F. Couch; J. Douglas Barrett

Madden analyzed data on coaching records in the National Football League during the 1990-2002 seasons. The overall conclusion is that African American coaches are held to higher employment standards, with respect to winning, than White coaches. Madden's model used complete seasons coached and omitted partial seasons. Here, the data are analyzed analogously, but partial seasons are included. The inclusion of partial seasons results in a lack of significance of race as a factor in firing. Furthermore, there are other potential explanations of the historically low percentage of African American coaches in the National Football League. These are discussed herein.


Technometrics | 2007

Diagnosis and Fault-Tolerant Control

J. Douglas Barrett

One of the major concerns in engineering is that the systems we design behave reasonably well in practice. We deal with imperfect models, model uncertainties, uncertainties in the interaction with the environment, and the finite dependability of hardware and software components. There are various methods used in the engineering fields that account for such difficulties, including methods that check the dependability of designs by simulation and formal verification, methods for uncertainties in models such as worst-case analysis and Monte Carlo analysis, and specific methods that can deal with certain faulty situations, such as self-stabilization in computer science and error-correcting codes in coding theory.

In control systems, we have disciplines such as robust control, adaptive control, and fault-tolerant control. While these three disciplines have similar goals, a careful look reveals the differences. Following the authors of the book, we notice that fault-tolerant control “aims at changing the control law so as to cancel the effects of the faults and/or to attenuate them to an acceptable level.” Compared to disturbances and model uncertainties, faults are more severe changes that cannot be suppressed by a fixed controller. This distinguishes fault-tolerant control from robust control, in which a fixed controller is designed as a tradeoff between performance and robustness. Further, the principle of adaptive control is “particularly efficient only for plants described by linear models with slowly varying parameters,” whereas fault-tolerant control must also deal with systems exhibiting nonlinear behavior, and faults typically involve sudden parameter changes.

A further distinction can be made between traditional fault tolerance and model-based fault-tolerant control, the latter being the approach taken in the book. Traditional fault tolerance improves the dependability of the system through physical redundancy: a component is replaced with a component of the same type when it fails. Model-based fault-tolerant control achieves dependability by means of analytical redundancy: in case of faults, changes are made in the control law, and possibly also in the plant, by means of reconfiguration.

The field of fault-tolerant control is relatively new. Two surveys of the field are [2] and [3]. Note that fault diagnosis, which has been studied extensively in the literature, is required for the implementation of fault-tolerant control. The book puts together several fault-tolerant control and fault diagnosis approaches, with an emphasis on the work of the authors. Application examples are also given, allowing the reader to compare the approaches proposed in the book. There is another book on fault-tolerant control in the literature [1], which complements the material of this book with methods for systems with Markovian parameters.

The book is organized in ten chapters and six appendices. Chapters 1–3 are introductory, presenting an overview of the main ideas of the book, examples, and the various types of models used in the book. Chapters 4 and 5 present methods applicable at a higher level of abstraction, in which the analytical details of the plant model are absent. Chapters 6 and 7 address fault diagnosis and fault-tolerant control
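As a toy illustration of the analytical-redundancy idea sketched in the review (my own sketch, not taken from the book), the following simulates a scalar plant whose only sensor fails partway through the run; a simple model-based residual detects the fault, and the feedback is reconfigured to use the model prediction instead of the faulty measurement. The plant parameters, gain, fault time, and threshold are all assumed for illustration.

```python
import numpy as np

# Toy model-based fault-tolerant control loop: diagnose a sensor fault from a
# residual, then reconfigure the controller to an analytical backup signal.
rng = np.random.default_rng(1)

a, b = 0.9, 0.5          # plant: x_{k+1} = a*x_k + b*u_k + noise
k_gain = 1.0             # proportional feedback toward the setpoint
setpoint = 1.0
fault_at = 60            # sensor output freezes at zero from this step on
threshold = 0.3          # residual threshold for declaring a sensor fault

x, x_model = 0.0, 0.0    # true state and model-predicted state
fault_declared = False

for k in range(120):
    y = 0.0 if k >= fault_at else x + 0.02 * rng.standard_normal()  # sensor reading
    residual = abs(y - x_model)            # fault diagnosis: compare sensor to model
    if residual > threshold:
        fault_declared = True
    feedback = x_model if fault_declared else y   # reconfiguration: analytical redundancy
    u = k_gain * (setpoint - feedback)
    x = a * x + b * u + 0.01 * rng.standard_normal()   # true plant update
    x_model = a * x_model + b * u                       # model prediction update
    if k in (0, 59, 60, 119):
        print(f"step {k:3d}  true x = {x:5.2f}  sensor y = {y:5.2f}  fault declared = {fault_declared}")
```

Without the reconfiguration step, the controller would keep reacting to the frozen sensor and drive the plant far from the setpoint; with it, regulation continues using the model prediction.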


Public Finance Review | 2004

Alabama’s Enterprise Zones: Designed to Aid the Needy?

Jim F. Couch; J. Douglas Barrett

The Alabama Enterprise Zone Act established 27 zones across the state. The act provides incentives to business firms that locate within an approved zone in an effort to attract desperately needed jobs. Zone status was purported to be based on economic need as measured by five variables. In this article, the selection process is closely examined. The authors find that the selection criteria were frequently ignored in favor of political concerns. The results suggest that a policy designed to help needy individuals in the state was convoluted by the political process.


Technometrics | 2007

Taguchi's Quality Engineering Handbook

J. Douglas Barrett

points, decisions are made based on results for the study endpoints. In clinical trials, the decisions are usually whether to stop the trial because the efficacy and safety of the treatment can be confirmed already, the safety risks are too great, or the treatment is very unlikely to achieve its therapeutic goal (called stopping for futility). Rules for stopping the trial are made prior to collecting any data. Such rules, called stopping rules, are typically formally defined in a protocol that is completed and approved prior to the start of the trial. Adaptive procedures add the following features to the possible decisions at the interim analyses: (1) the addition or deletion of trial arms in a multiple-armed clinical trial, (2) an increase or decrease in the total sample size at the end of the study (based on interim estimates of variability and/or other assumed parameters, e.g., effect size), and (3) other changes to the design (such as changes to the inclusion/exclusion criteria for the study subjects).

Statisticians in the pharmaceutical and medical device industries, as well as at the National Institutes of Health (NIH) and other medical research institutes, will find this book invaluable. Given that most readers of Technometrics are statisticians and practitioners in the physical, chemical, or engineering sciences, they may not find it as immediately applicable as a biostatistician would.

Prior to the development of group sequential procedures, there were sequential procedures. Sequential procedures are just like group sequential procedures except that the interim analyses occur after each newly observed data point. These sequential methods were developed (both theory and applications) by Abraham Wald in the United States and George Barnard in the United Kingdom in the 1940s as part of the war effort during World War II. The motivating application was reliability testing of military products such as ammunition: there was a desire to determine that the ammunition was safe and reliable without wasting a lot of ammunition in testing. The same reasoning could apply to any product that requires destructive testing to determine its reliability and is expensive or time-consuming to produce. After the war, practical application was hindered by the need to make, and the difficulty of making, real-time decisions after every sample. Group sequential methods made the whole idea of sequential testing or monitoring much more useful. The reliability applications may be of interest to the general Technometrics reader, but this book and the text by Jennison and Turnbull (2000) include only clinical trial applications.

The authors of the text under review are among the top researchers in the field, and this text by Proschan, Lan, and Wittes is very well written and provides thorough and nearly complete coverage of the latest developments in group sequential methods. It also contains a chapter on adaptive sample size methods (Chap. 11). These methods are a subset of the adaptive procedures and include Stein’s method and others for constructing two-stage designs to deal with nuisance parameters. Among other sample size adjustment methods, the authors include adjusting the sample size based on an interim assessment of the effect size. The few topics in group sequential methods that are not covered in detail are outlined in Chapter 12 (titled “Topics Not Covered”).

The text by Jennison and Turnbull (2000) was the first major text on group sequential methods. It came out in 2000 and is considered by many to be a classic on the subject. Both Jennison and Turnbull are well-known statisticians, and they have published widely in the statistics and biostatistics literature. These two texts cover mostly the same topics, are both very current, and both give examples in clinical trials. So a reader like me, who already owns a copy of Jennison and Turnbull, might ask what the added value of purchasing Proschan, Lan, and Wittes would be. I would give the following reasons:


Stochastic Models | 1999

A simple probabilistic representation of the product-sum fuzzy logic controller

J. Douglas Barrett; William H. Woodall

The product-sum fuzzy logic controller (FLC) is a tool widely used in many engineering applications. Barrett and Woodall [2] showed that the product-sum FLC can be represented probabilistically. Here we offer an example of this representation using a single-input, single-output controller and a multiple-input, single-output controller. Advantages of the probabilistic approach are given.
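To make the representation concrete, here is a minimal sketch (my own, not the authors' code) of a single-input, single-output controller. With one input, the product-inference step is trivial, so the sketch focuses on the key point: the normalized rule activations can be read as a discrete probability distribution over the rule consequents, and the crisp output is then a conditional expectation. The membership functions, rule consequents, and scaling below are illustrative assumptions.

```python
import numpy as np

# Hypothetical single-input, single-output controller: rule activations from
# the input memberships are normalized and treated as probabilities over the
# rule consequents, so the defuzzified output is an expected value.

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

ANTECEDENTS = [(-2.0, -1.0, 0.0), (-1.0, 0.0, 1.0), (0.0, 1.0, 2.0)]  # "negative", "zero", "positive" error
CONSEQUENT_CENTERS = np.array([1.0, 0.0, -1.0])  # representative control action for each rule

def product_sum_flc(error):
    """Probabilistic reading of a product-sum FLC with a single input."""
    activations = np.array([tri(error, *abc) for abc in ANTECEDENTS])
    if activations.sum() == 0.0:
        return 0.0  # input outside the covered range; no rule fires
    weights = activations / activations.sum()       # normalized activations = probabilities
    return float(weights @ CONSEQUENT_CENTERS)      # expected consequent = crisp output

for e in (-0.75, 0.0, 0.4):
    print(f"error = {e:5.2f}  control output = {product_sum_flc(e):5.2f}")
```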


Archive | 2012

The Market for NFL Coaches and Managers

Keith D. Malone; Jim F. Couch; J. Douglas Barrett

Everyone in an organization plays a role in the success, or lack thereof, in achieving stated objectives. Sports are no exception in this regard. Players, coaches, trainers, front offices, owners, and other staff each contribute to the ultimate results, whether they be winning, making money, or both. It is not uncommon for debates to ensue across the country regarding which group is ultimately the most responsible for the success or failure of the team. In this chapter, the contribution of coaches is considered.


Technometrics | 2008

Process Control Performance Assessment: From Theory to Implementation

J. Douglas Barrett

permanent SAS data sets; creating formats; performing conditional and iterative processing; working with dates, arrays, and numeric and character functions; and creating subsets and combined SAS data sets.

Part 3—Presenting and Summarizing Your Data consists of seven chapters, including 107 example programs and 52 self-test problems. This section presents ways to display and customize the data via basics like PROC PRINT and PROC SORT, as well as PROC REPORT, PROC TABULATE, a brief introduction to ODS, and PROC GCHART. It also discusses basic summary options such as PROC MEANS and PROC FREQ.

Part 4—Advanced Topics consists of six chapters, including 74 example programs and 46 self-test problems. These topics include using advanced INPUT techniques, advanced features of user-defined formats and informats, and restructuring SAS data sets. Cody also introduces the use of the SAS macro language and SQL, but quickly recommends other references for further study.

While preparing this review, I lent my copy of Learning SAS by Example to a friend, who promptly adopted it for his SAS programming class. Now, approximately a month later, he is pleased with it as a textbook and I am pleased with it as both a reference and a tutorial. I would definitely recommend this book for each of its target audiences.


Technometrics | 2007

Advanced Fuzzy Logic Technologies in Industrial Applications

J. Douglas Barrett

The question of whether or not this book is appropriate must be answered by the prospective reader. In doing so, he or she must ask three questions: Do I need to learn about FTC? How badly do I need to learn about it? And is my engineering control (and math) background strong? If your answers are yes, badly, and yes, then this is the book for you. Any other set of answers, and the advice here is that there are many other quite admirable endeavors in life. This book is as advertised. If you have a strong background in control theory and are interested in fault-tolerant control, it offers a comprehensive package. If you have already studied fault-tolerant control and need a reference work, this is a good one. However, if you find the topic of interest but have no background in engineering control, then you have some catch-up work to do before tackling this text. Good luck.

Collaboration


Dive into J. Douglas Barrett's collaboration.

Top Co-Authors

Jim F. Couch
University of North Alabama

Keith D. Malone
University of North Alabama

Michael Laviolette
Missouri University of Science and Technology