Publication


Featured research published by Charles C. Brown.


Journal of Chronic Diseases | 1976

How many controls

Mitchell H. Gail; Roger R. Williams; David P. Byar; Charles C. Brown

A common question in clinical and epidemiologic research is, 'How many controls are needed for this study?' Some retrospective studies can be strengthened by using more controls than cases, and some prospective clinical studies can be improved by unequal allocation of subjects into treatment and control groups. We shall confine our discussion of 'How many controls?' to studies comparing responses in only two groups. The first group (G1) may be subjected to a new treatment (T1) while the second (G2) is given conventional or control therapy (T2), and the response may be quantitative, such as the lowering of blood pressure, or qualitative, such as whether or not the subject survives. In retrospective studies G1 is typically a group of 'cases' and G2 a group of matched or unmatched controls, and the response is whether or not each subject was exposed to a possible etiologic agent. Most such studies allocate equal numbers of subjects to G1 and G2, namely n1 = n2 = n, and formulae, graphs, and tables are available for determining the sample size n1 = n2 required to have a given probability (power) of detecting a prespecified treatment effect at significance level α when equal allocation is used. Pertinent references include [1, pp. 15–31], [2, p. 19], [3], [4, pp. 479–482] and [5, pp. 111–114 and 221–222]. We now discuss three situations in which unequal allocation may be preferred. In prospective clinical trials involving treatments with comparable risk, the investigator may have discretion over the numbers treated in each group. Then other factors, such as the relative monetary costs of the two treatments, or the relative inconvenience or discomfort to patients, can influence the experimental design. If it costs r times as much to study a subject in G1 as in G2, and if one wishes to minimize the cost of the experiment while maintaining the same power (the power is 1 − β, where β is the probability of type II error as in [6, p. 279]), one should allocate √r subjects to G2 for each subject allocated to G1. This result, which we call the 'square root rule', has been discussed by Cochran [7, p. 145] and Nam [8]. We generalize the square root rule to the case where the response variable has different variances in each group. This generalized square
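
The square root rule mentioned in this abstract lends itself to a short calculation. The sketch below is not from the paper: it assumes a two-sided z-test for the difference of two means with known group standard deviations and per-subject costs, and the function name and interface are illustrative only.

```python
# Hedged sketch of the "square root rule" for unequal allocation described above.
# Assumptions (not from the paper's text): a two-sided z-test for the difference of
# two means with known standard deviations sd1, sd2 and per-subject costs c1, c2.
from math import sqrt, ceil
from scipy.stats import norm

def optimal_allocation(delta, sd1, sd2, c1, c2, alpha=0.05, power=0.80):
    """Return (n1, n2) minimizing total cost c1*n1 + c2*n2 while keeping
    the stated power to detect a true difference `delta`."""
    # Generalized square root rule: n2/n1 = (sd2/sd1) * sqrt(c1/c2).
    k = (sd2 / sd1) * sqrt(c1 / c2)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Sample size for group 1 given the allocation ratio k = n2/n1.
    n1 = (z ** 2) * (sd1 ** 2 + sd2 ** 2 / k) / delta ** 2
    return ceil(n1), ceil(k * n1)

# Example: equal variances, studying a G1 subject costs r = 4 times a G2 subject,
# so the rule allocates sqrt(4) = 2 controls per treated subject.
print(optimal_allocation(delta=0.5, sd1=1.0, sd2=1.0, c1=4.0, c2=1.0))
```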


Biometrics | 1975

On the Use of Indicator Variables for Studying the Time-Dependence of Parameters in a Response-Time Model

Charles C. Brown

The study of the dependence of response-time data on a multivariate regressor variable in the presence of arbitrary censoring has been approached in a number of ways. The exponential regression model proposed by Feigl and Zelen [1965] and extended by Zippin and Armitage [1966] and by Mantel and Myers [1971] to the case of arbitrarily right censored data relates the reciprocal of the exponential parameter, i.e. the expected survival time, to a linear function of the regressor variables. Later, Glasser [1967] proposed an exponential model in which the logarithm of the exponential parameter was assumed to be a linear function of the regressor variables. In both formulations the rather stringent assumption of a constant hazard may be dropped by the assumption of a more general response-time distribution such as the Weibull, gamma or Gompertz, each of which contains the exponential as a special case. The nonparametric model proposed by Cox [1972] admits an arbitrary response-time distribution and, for discrete data, becomes a logistic regression model. An alternative version of Cox's discrete model has been proposed by Kalbfleisch and Prentice [1973]. These approaches have the advantage of not specifying the hazard function in advance and, as such, are more robust than the above parametric methods. Their major drawback, however, is the computational difficulty in the presence of tied response times. In many practical situations the data are recorded in such a way as to make this a very real problem, one serious enough to imply that an alternative procedure may be desirable. This logistic regression model was also used by Myers et al. [1973] in conjunction with the assumption of a constant hazard. The model they considered incorporated concomitant information by assuming that the probability of responding within a unit time period followed a logistic regression function, while the actual time to response followed a particular distributional form. They chose a form which assumed a time-independent risk of responding: the exponential for a continuous time process or the geometric for discrete time. This approach was extended by Hankey and Mantel [1974] by the addition of a time function to the logistic regression function. This time function was approximated by a low order polynomial. Inherent in these exponential and logistic regression models is the assumption that the effects of the covariates are independent of time. The exponential model of Feigl and Zelen relates the expected survival time to the concomitant information and, since the exponential distribution is without memory, the expected remaining survival time given survival up to some time point T has the same relationship to the concomitant information no matter what the value of T. The logistic regression methods that have been proposed allow the underlying hazard to be a function of time but the relative effects of the covariates
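
As a rough illustration of the exponential regression formulation attributed to Glasser above (log hazard linear in the covariates), the following sketch fits that model by maximum likelihood under right censoring. It is a minimal sketch with assumed variable names, not code from the paper; time-dependence of a covariate effect could then be examined by adding interactions with indicator variables for follow-up intervals, which is broadly the idea the title refers to.

```python
# Minimal sketch (not the paper's code) of Glasser's exponential regression model:
# log(lambda_i) = x_i' beta, fitted by maximum likelihood with right censoring.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, x, t, d):
    """x: (n, p) covariate matrix, t: follow-up times, d: 1 = event, 0 = censored."""
    log_hazard = x @ beta
    # Exponential model: log L = sum(d_i * x_i'beta) - sum(t_i * exp(x_i'beta))
    return -(np.sum(d * log_hazard) - np.sum(t * np.exp(log_hazard)))

def fit_exponential_regression(x, t, d):
    beta0 = np.zeros(x.shape[1])
    return minimize(neg_log_likelihood, beta0, args=(x, t, d), method="BFGS").x
```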


Biometrics | 1983

The Statistical Comparison of Relative Survival Rates

Charles C. Brown

A statistical procedure for comparing the survival of two or more groups of patients adjusted for normal mortality expectation, i.e. for calculating relative survival, is proposed. The method is shown to correspond to some commonly used procedures for comparing unadjusted survival; it provides an improvement over these procedures in many situations, even when the normal mortality expectations for the patient groups are the same. An example of its use is given.
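
A generic illustration of the relative survival quantity this abstract refers to: the observed survival proportion divided by the survival expected from normal population mortality. This is not the comparison procedure proposed in the paper, and the numbers below are invented.

```python
import numpy as np

observed = np.array([0.90, 0.78, 0.70, 0.64, 0.60])   # patient survival, years 1-5
expected = np.array([0.98, 0.96, 0.94, 0.92, 0.90])   # from population life tables
relative = observed / expected
print(relative.round(2))   # e.g. 5-year relative survival of about 0.67
```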


Biometrics | 1981

Carcinogenic Risk Assessment: A Guide to the Literature

Daniel Krewski; Charles C. Brown

An overview of the problems involved in assessing the carcinogenic potential of environmental chemicals is presented. Statistical aspects of the safety evaluation process are noted and appropriate references to the literature are provided.

1. Introduction. As a result of the increasing awareness of the potential health hazards of environmental chemicals, considerable effort is being devoted to the identification and regulation of those chemicals which are carcinogenic. While the primary concern of this research is ultimately human health, information on the carcinogenic potential of chemical substances is necessarily derived mainly from bioassays conducted with animal models. The carcinogenicity of a substance is established when its administration to test animals in an adequately designed and conducted laboratory experiment results in an increased incidence or decreased latent period of one or more types of neoplasia, when compared to control animals maintained under identical conditions but not exposed to the compound under study. In this paper, a guide to the statistical literature on carcinogenic risk assessment using animal models is presented. Included are sections on the general principles of carcinogen bioassay, statistical analysis of screening bioassays, quantitative risk assessment, and regulatory considerations. Some references have been included in more than one section, where appropriate. The practical aspects of conducting an adequate and valid carcinogen bioassay are discussed in the references in §2. Statistical procedures for the analysis of screening bioassays designed to detect carcinogenic compounds may be found in the references in §3. Although simple binomial comparisons of the tumor incidence rates observed in the control and test groups may be appropriate for conventional bioassay designs (§3.1), other procedures are required for two-generation studies where the litter rather than the individual animal may be the appropriate experimental unit for purposes of statistical analysis (§3.2). Time-adjusted analysis may be used whenever it is desirable to take into account the time at which lesions were observed (§3.3). Such an analysis may be


Biometrics | 1986

Logistic regression methods for retrospective case-control studies using complex sampling procedures.

Thomas R. Fears; Charles C. Brown

There are a number of possible designs for case-control studies. The simplest uses two separate simple random samples, but an actual study may use more complex sampling procedures. Typically, stratification is used to control for the effects of one or more risk factors in which we are interested. It has been shown (Anderson, 1972, Biometrika 59, 19-35; Prentice and Pyke, 1979, Biometrika 66, 403-411) that the unconditional logistic regression estimators apply under stratified sampling, so long as the logistic model includes a term for each stratum. We consider the case-control problem with stratified samples and assume a logistic model that does not include terms for strata, i.e., for fixed covariates the (prospective) probability of disease does not depend on stratum. We assume knowledge of the proportion sampled in each stratum as well as the total number in the stratum. We use this knowledge to obtain the maximum likelihood estimators for all parameters in the logistic model including those for variables completely associated with strata. The approach may also be applied to obtain estimators under probability sampling.
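
The abstract notes that knowledge of the sampling fractions makes all parameters of the logistic model estimable. A much simpler, well-known special case of that idea is sketched below, under assumptions not taken from the paper: an ordinary prospective logistic fit to case-control data recovers the slope coefficients, and known case and control sampling fractions allow the intercept to be corrected. The authors' estimator goes further (stratified samples, covariates fixed within strata), so this is only an illustration; variable names and the use of statsmodels are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def case_control_logistic(X, y, f_case, f_control):
    """y: 1 = case, 0 = control; f_case, f_control: known sampling fractions."""
    design = sm.add_constant(X)                    # intercept is the first column
    result = sm.Logit(y, design).fit(disp=False)
    params = result.params.copy()
    # Remove the shift that case-control sampling induces in the intercept.
    params[0] -= np.log(f_case / f_control)
    return params
```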


Biometrics | 1981

Exact significance levels for multiple binomial testing with application to carcinogenicity screens.

Charles C. Brown; Thomas R. Fears

A simple experimental design consisting of one control group and one or more treatment groups is considered. Relevant research often focuses on the presence or absence of any of several characteristics in the treatment group(s). The statistical analysis frequently includes the comparison of the control group with each treatment group by the use of Fisher-Irwin exact tests for each of many 2 x 2 tables. The multiplicity of comparisons has given rise to concern that individual Fisher-Irwin tests could seriously overstate the experimental evidence in some situations. This paper provides a method for calculating the exact permutational probability of at least one significant Fisher-Irwin test when only one treatment group and one control group are used. For multiple-treatment-group designs, upper and lower bounds on the probability are provided. Emphasis is given throughout to carcinogenesis screening experiments and an example of such an experiment is provided.
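
The quantity the paper computes exactly can be approximated by simulation, which may help make it concrete. The sketch below is not the authors' method: it conditions on the total number of tumor-bearing animals and estimates, by random permutation, the probability that at least one one-sided Fisher exact test comparing a treatment group with the control is significant. Group sizes, alpha, and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.stats import fisher_exact

def prob_any_significant(group_sizes, total_tumors, alpha=0.05, n_sim=20000, seed=0):
    """group_sizes[0] is the control group; the rest are treatment groups."""
    rng = np.random.default_rng(seed)
    sizes = np.asarray(group_sizes)
    n_total = sizes.sum()
    hits = 0
    for _ in range(n_sim):
        # Permute the tumor-bearing animals at random among all groups.
        tumors = np.zeros(n_total, dtype=int)
        tumors[rng.choice(n_total, size=total_tumors, replace=False)] = 1
        starts = np.cumsum(sizes) - sizes
        counts = [tumors[s:s + n].sum() for s, n in zip(starts, sizes)]
        c_pos, c_n = counts[0], sizes[0]
        for t_pos, t_n in zip(counts[1:], sizes[1:]):
            table = [[t_pos, t_n - t_pos], [c_pos, c_n - c_pos]]
            if fisher_exact(table, alternative="greater")[1] <= alpha:
                hits += 1
                break
    return hits / n_sim

# Example: one control and two treatment groups of 50 animals, 15 tumors in total.
print(prob_any_significant([50, 50, 50], total_tumors=15))
```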


Annals of the New York Academy of Sciences | 1975

FROM MOUSE TO MAN—OR HOW TO GET FROM THE LABORATORY TO PARK AVENUE AND 59TH STREET*

Marvin A. Schneiderman; Nathan Mantel; Charles C. Brown

Where does this all leave us? It leaves us able to develop rather good animal data at dose levels that do not really interest us. That is a first-class highway that takes us where we do not want to go. It leaves us unlikely to be able to develop good data at realistic doses. To extrapolate animal results to man exposed at these realistic doses today requires assuming a mathematical model of dose-response in the animal and conservative use of this model. Then we have to jump from one species to another in ignorance of the terrain of the landing site, i.e., the many species differences. However, we will have knowledge of some important species similarities and that makes the jump a lot less dangerous. With respect to costs and benefits we are just beginning to understand some of the implications of the arithmetic. We have begun to see that there are few, if any, good ways of totaling the costs or computing the benefits. Cost-benefit may be another blind alley. Tomorrow and the next day we must do the appropriate research on species differences in metabolism and in the mathematics of the modeling and extrapolation--as a minimum. The socially related issues, such as what is an acceptable risk, what are the costs, what are the benefits, must be discussed in the open, freely. This implies recognizing that someone's costs may be someone else's benefits. (Our medical costs are our physicians' source of living.) The inputs to the cost-benefit algebra are not well worked out. Our ways of working must include the adversary approach as well as the pleasanter way of cooperation. And today, we must get to precautionary decisions for man's safety and health based on the road maps from animal data--inadequate as they are. We have gotten to the neighborhood of Park Avenue and 59th Street and we can probably one day get to a lot of other places.


Toxicological Sciences | 1984

Determining “safe” levels of exposure: Safety factors or mathematical models?

Daniel Krewski; Charles C. Brown; Duncan J. Murdoch

The object of regulatory toxicology is to determine safe levels of human exposure to toxicants present in the environment. The traditional safety factor approach is compared to more recent mathematical modeling techniques, outlining the underlying assumptions and statistical properties of each procedure. Several linear extrapolation procedures are examined in detail using computer simulation, along with the impact of nonlinear kinetics on the extrapolation process.
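
As a deliberately simplified illustration of the kind of linear extrapolation procedure mentioned here, not of the specific procedures the paper examines: the excess risk observed at a low experimental dose is carried linearly through the origin down to an "acceptable" risk level. All numbers below are invented.

```python
def linear_extrapolation_dose(dose, excess_risk, acceptable_risk=1e-6):
    """Virtually safe dose under linear-through-origin extrapolation."""
    slope = excess_risk / dose          # excess risk per unit dose
    return acceptable_risk / slope

# Example: 2% excess tumor risk observed at 10 mg/kg/day.
print(linear_extrapolation_dose(dose=10.0, excess_risk=0.02))  # ~5e-4 mg/kg/day
```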


Biometrics | 1981

Optimal designs for the analysis of interactive effects of two carcinogens or other toxicants

Jürgen Wahrendorf; Reinhard Zentgraf; Charles C. Brown

In this paper we consider the design of animal experiments conducted to test for interaction between two carcinogens or other toxicants. We examine 2 x 2 designs which contain an untreated control group, two groups treated with a single dose of each toxic agent alone, and one group treated with both agents together. Optimal rules for allocating animals to the experimental groups are derived on the basis of expected response rates for acute toxicity studies, and these rules are then extended to long-term carcinogenesis studies by considering times-to-tumor under an assumption of proportional hazards. Unbalanced designs with more animals in the combination group than in the control group are shown to provide a gain in efficiency of about 20% over balanced designs.
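
The flavor of unbalanced allocation can be shown with a simple Neyman-type calculation. The sketch below is an assumption-laden stand-in for the paper's derivation: it measures interaction by the additive contrast p11 - p10 - p01 + p00 of the four group response probabilities and allocates animals in proportion to sqrt(p(1 - p)) to minimize the variance of that contrast for a fixed total number of animals.

```python
import numpy as np

def optimal_2x2_allocation(p00, p10, p01, p11, n_total):
    """Anticipated response rates: control, agent A alone, agent B alone, A + B."""
    p = np.array([p00, p10, p01, p11])
    weights = np.sqrt(p * (1 - p))           # Neyman-type allocation weights
    n = n_total * weights / weights.sum()
    return dict(zip(["control", "agent A", "agent B", "A + B"],
                    np.round(n).astype(int)))

# Example: anticipated response rates of 5%, 20%, 20%, and 60% with 200 animals;
# the combination group receives the most animals, the control group the fewest.
print(optimal_2x2_allocation(0.05, 0.20, 0.20, 0.60, n_total=200))
```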


Annals of the New York Academy of Sciences | 1979

THRESHOLDS FOR ENVIRONMENTAL CANCER: BIOLOGIC AND STATISTICAL CONSIDERATIONS

Marvin A. Schneiderman; Pierre Decouflé; Charles C. Brown

Why the interest in thresholds in cancer? The answer seems obvious. If there are thresholds, and if we can find them, we can establish a safe level of an offending agent, and exposures below this level will cause no harm to anyone. There are also nonbiologic reasons why the threshold concept is important. There is the political use (or nonuse) of the threshold concept. At least with respect to food additives, no threshold for a carcinogen is permitted. This concept has led to the Delaney clause, or amendment, to the Food and Drug Administration laws and is a source of argument, controversy, charges of poor science, and the basis for many words being added to the so-called literature. Finally, if we can establish thresholds, we will not have to worry about low-dose extrapolation, choice of models, mouse-to-man conversions, and all those other difficulties. If this is so obvious, why does there seem to be so much conflict about this concept? Some people think there must (or at least should) be thresholds for carcinogenesis. First, there seem to be thresholds for all kinds of other toxicities. Some minimum number of molecules is probably needed to affect some minimum number of cells, and without this minimum number of cells affected, no deleterious effect will occur. This is probably because human (and other) bodies have superb repair mechanisms, and damages, even serious ones, can be ameliorated. Broken bones knit. Wounds heal (although not always without leaving a scar). There is recovery from most disease, most infectious disease anyway. There is another argument for a threshold, or for what some people have called a “practical threshold.” Druckrey [2] demonstrated that the median time to appearance of tumors in his laboratory animals was inversely related to dose. Thus, lowering the dose extended the median time or perhaps the “latent period.” From this it has been argued that if the dose were sufficiently low, the median time to appearance of cancer, or the time when half of the individuals destined to develop cancer actually developed it, would be so far beyond the normal life-span of humans as to be practically impossible to achieve. For example, a dose capable of producing cancer at a median age of, say, 320 years should be well below a “practical threshold,” and that dose should be fully safe. We will deal with this suggestion later, remarking here only that there seem to be important issues not initially perceived in the extrapolation of the Druckrey concept to humans. Each agent seems to be dealt with in isolation. The possible additivity of dose is not considered, nor is there any consideration of the distribution of times to appearance but only discussion of median times. The proportion of exposed individuals developing cancer from which the extrapolation is made is not dealt with, nor are the related problems of genetic similarity and diversity within the exposed population.

Collaboration


Dive into Charles C. Brown's collaborations.

Top Co-Authors

Nathan Mantel (George Washington University)

David P. Byar (National Institutes of Health)

Mitchell H. Gail (National Institutes of Health)

Jürgen Wahrendorf (German Cancer Research Center)