Jerome Cornfield
National Institutes of Health
Publications
Featured research published by Jerome Cornfield.
Journal of Chronic Diseases | 1967
Jeanne Truett; Jerome Cornfield; William B. Kannel
INTRODUCTION It is the function of longitudinal studies, like that of coronary heart disease in Framingham [1], to investigate the effects of a large variety of variables, both singly and jointly, on the risk of developing a disease. The traditional analytic method of the epidemiologist, multiple cross-classification, quickly becomes impracticable as the number of variables to be investigated increases. Thus, if 10 variables are under consideration, and each variable is to be studied at only three levels, e.g. serum cholesterol of less than 225 mg/100 ml, 225-274, and 275 and over, there would be 59,049 cells in the multiple cross-classification. Even with only 10 cases for the denominator of the rate for each cell, a cohort of approximately 600,000 persons would be required. Study populations of this size are not often available, and one is consequently led to seek a more powerful form of analysis than inspection of the results of a multiple cross-classification. One such method was suggested by Cornfield [2]. He considered the case of k variables, say x1, x2, ..., xk, and assumed that the multivariate frequency distributions of those who would (CHD) and those who would not (NCHD) develop the disease could be represented by two known mathematical functions, say f1(x1, ..., xk) and f2(x1, ..., xk). In that case the probability P(x1, ..., xk) that an individual characterized by the variable values x1, ..., xk would develop the disease is given by Bayes' theorem, as sketched below.
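A minimal reconstruction of the expression the abstract truncates, assuming f1 and f2 denote the CHD and non-CHD densities and writing P1 and P2 (symbols introduced here, not necessarily the paper's) for the corresponding prior proportions of the two groups:

```latex
P(x_1,\dots,x_k)
  = \frac{P_1\, f_1(x_1,\dots,x_k)}
         {P_1\, f_1(x_1,\dots,x_k) + P_2\, f_2(x_1,\dots,x_k)}
```

When f1 and f2 are taken to be multivariate normal with a common covariance matrix, this posterior probability reduces to a logistic function of a linear combination of the x's, the multivariate logistic risk function fitted in the Framingham analysis.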
Journal of the American Statistical Association | 1950
Jerome Cornfield; Nathan Mantel
Abstract The estimation of the parameters of dosage response curves by the standard probit method is an iterative process beginning with approximations to the parameters and using one or more cycles of computations to “improve” these estimates until they converge. The present paper gives a table and method for computing the maximum likelihood solution which converges more rapidly than the standard probit method. A procedure is presented for obtaining more accurate initial approximations, and the problem of the bias of the maximum likelihood estimates in small samples is considered.
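For readers who want to see the shape of such an iteration, here is a minimal sketch (not the paper's tabular method) of a Fisher-scoring maximum likelihood fit of a probit dosage-response curve; the dose-mortality numbers and variable names are invented for the example.

```python
# Illustrative sketch only: iterative maximum likelihood fitting of a probit
# dosage-response curve P(response) = Phi(a + b*log10(dose)) by Fisher scoring.
# The data and starting values below are made up for the example.
import numpy as np
from scipy.stats import norm

def probit_mle(dose, n_subjects, n_responding, a=0.0, b=1.0, n_iter=25):
    x = np.log10(dose)
    for _ in range(n_iter):
        eta = a + b * x
        p = norm.cdf(eta)
        phi = norm.pdf(eta)
        # Score (gradient) of the binomial log-likelihood with respect to (a, b)
        w = n_responding - n_subjects * p
        score = np.array([np.sum(w * phi / (p * (1 - p))),
                          np.sum(w * phi * x / (p * (1 - p)))])
        # Expected (Fisher) information matrix
        iw = n_subjects * phi**2 / (p * (1 - p))
        info = np.array([[np.sum(iw),     np.sum(iw * x)],
                         [np.sum(iw * x), np.sum(iw * x**2)]])
        a, b = np.array([a, b]) + np.linalg.solve(info, score)
    return a, b

# Made-up dose-mortality data: dose, number treated, number responding
dose = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
n = np.array([40, 40, 40, 40, 40])
r = np.array([2, 8, 19, 31, 38])
print(probit_mle(dose, n, r))
```

Each cycle "improves" the current intercept and slope estimates in the sense the abstract describes; the paper's contribution was a table and procedure that reach convergence in fewer such cycles, together with more accurate starting approximations.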
Archives of Biochemistry and Biophysics | 1956
Piero Gullino; Milton Winitz; Sanford M. Birnbaum; Jerome Cornfield; M.Clyde Otey; Jesse P. Greenstein
The l- and d-forms of arginine·HCl, histidine·HCl, isoleucine, alloisoleucine, leucine, lysine·HCl, methionine, phenylalanine, threonine, allothreonine, tryptophan, and valine were administered intraperitoneally to rats, and the LD50, LD99.9, and LD99.99 levels were evaluated with the aid of statistical computations. Among the l-amino acids, alloisoleucine was the least toxic and tryptophan the most toxic, whereas among the d-amino acids, although alloisoleucine was still the least toxic, arginine·HCl was the most toxic. With the exception of tryptophan, whose l-isomer was about three times more toxic than the d-isomer, the differences in toxicity between the l- and d-isomers of the amino acids were relatively small, the d-isomers at the LD50 level being generally less than or equal to the corresponding l-isomers in toxicity. Mixtures of the ten essential l-amino acids possessed a toxicity considerably less than that calculated from the mean of the toxicities of the individual components. This was found to be due to the presence of l-arginine·HCl, for a mixture of nine l-amino acids from which l-arginine was excluded possessed a toxicity not far from that calculated from the mean of the toxicities of the individual components. This protective effect of l-arginine was further demonstrated by adding it to a lethal mixture of nine l-amino acids, which caused a reduction in mortality from 100 to 24%. Mixtures of the d-isomers of the ten essential amino acids possessed a toxicity less than that calculated from the mean of the toxicities of the individual components, but no one component of this mixture could be implicated in this general effect.
Journal of the American Statistical Association | 1966
Jerome Cornfield
Abstract Consider someone who sets out to collect data sequentially in such a way as to disprove a hypothesis, H 0, about the value of θ, the mean of a normal distribution. Observation is continued as long as the posterior probability of H 0 exceeds α1 and stopped when it falls below it. It is shown that if a non-zero prior probability, p, is assigned to H 0 (0 ≤ α1 < p) and the remaining prior probability is spread over alternative values of θ, then the probability that H 0 will eventually be rejected when true is bounded strictly below one. With a non-zero assignment of p it is therefore not possible to sample to a foregone conclusion. The Wald 3-decision scheme is shown to be equivalent to a particular choice of α1 together with an assignment of prior probability to the two alternatives θ = θ0 + Δ and θ = θ0 − Δ. Anomalous features of this scheme, particularly for the case of unknown σ, are shown to be a consequence of the unrealistic nature of the alternatives assumed. It is suggested that meaningful assignments of p are possible in practice by considering the minimum num...
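A minimal sketch of the posterior-probability calculation that drives such a stopping rule, under assumptions chosen here for illustration (known σ, prior mass p on the point null H0: θ = θ0, and the remaining prior spread as N(θ0, τ²) over the alternatives); the paper's own prior assignment and bound are more general.

```python
# Illustrative only: posterior probability of a point null H0: theta = theta0
# for normal data with known sigma, when prior mass p sits on H0 and the rest
# is spread as N(theta0, tau^2) over the alternatives. Observation would
# continue only while this probability exceeds alpha1.
import numpy as np
from scipy.stats import norm

def posterior_prob_H0(xbar, n, sigma, theta0, p, tau):
    se = sigma / np.sqrt(n)
    # Marginal density of the sample mean under H0 and under the alternative
    m0 = norm.pdf(xbar, loc=theta0, scale=se)
    m1 = norm.pdf(xbar, loc=theta0, scale=np.sqrt(se**2 + tau**2))
    return p * m0 / (p * m0 + (1 - p) * m1)

# Example: 50 observations, sample mean 0.3, sigma = 1, H0: theta = 0
print(posterior_prob_H0(xbar=0.3, n=50, sigma=1.0, theta0=0.0, p=0.5, tau=1.0))
```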
American Journal of Cardiology | 1976
Richard O. Russell; Roger E. Moraski; Nicholas T. Kouchoukos; Robert B. Karp; John A. Mantle; Charles E. Rackley; Leon Resnekov; Raul E. Falicov; Jafar Al-Sadir; Harold L. Brooks; Constantine E. Anagnostopoulos; John J. Lamberti; Michael J. Wolk; Thomas Killip; Paul A. Ebert; Robert A. Rosati; N. Oldham; B. Mittler; Robert H. Peter; C. R. Conti; Richard S. Ross; Robert K. Brawley; G. Plotnick; Vincent L. Gott; James S. Donahoo; Lewis C. Becker; Adolph M. Hutter; Roman W. DeSanctis; Herman K. Gold; Robert C. Leinbach
A preliminary report is presented of a prospective randomized trial conducted by eight cooperative institutions under the auspices of the National Heart and Lung Institute to compare the effectiveness of medical and surgical therapy in the management of the acute stages of unstable angina pectoris. To date 150 patients have been included in the randomized trial, 80 assigned to medical and 70 to surgical therapy; the clinical presentation, coronary arterial anatomy and left ventricular function in the two groups are similar. Some physicians have been reluctant to prescribe medical or surgical therapy by a random process, and the ethical basis of the trial has been questioned. Since there are no hard data regarding the natural history and outcome of therapy for unstable angina pectoris, randomization appears to provide a rational way of selecting therapy. Furthermore, subsets of patients at high risk may emerge during the process of randomization. The design of this randomized trial is compared with that of another reported trial. Thus far, the study has shown that it is possible to conduct a randomized trial in patients with unstable angina pectoris, and that the medical and surgical groups have been similar in relation to the variables examined. The group as a whole presented with severe angina pectoris, either as a crescendo pattern or as new onset of angina at rest, and 84 percent had recurrence of pain while in the coronary care unit and receiving vigorous medical therapy. It is anticipated that sufficient patients will have been entered into the trial within the next 12 months to determine whether medical or surgical therapy is superior in the acute stages of unstable angina pectoris.
American Journal of Cardiology | 1975
Hubert V. Pipberger; Donald McCaughan; David Littmann; Hanna A. Pipberger; Jerome Cornfield; Rosalie A. Dunn; Charles D. Batchlor; Alan S. Berson
An electrocardiographic computer program based on multivariate analysis of orthogonal leads (Frank) was applied to records transmitted daily by telephone from the Veterans Administration Hospital, West Roxbury, Mass., to the Veterans Administration Hospital, Washington, D. C. A Bayesian classification procedure was used to compute probabilities for all diagnostic categories that might be encountered in a given record. Computer results were compared with interpretations of conventional 12 lead tracings. Of 1,663 records transmitted, 1,192 were selected for the study because the clinical diagnosis in these cases could be firmly established on the basis of independent, nonelectrocardiographic information. Twenty-one percent of the records were obtained from patients without evidence of cardiac disease and 79 percent from patients with various cardiovascular illnesses. Diagnostic electrocardiographic classifications were considered correct when in agreement with documented clinical diagnoses. Of the total sample of 1,192 recordings, 86 percent were classified correctly by computer as compared with 68 percent by conventional 12 lead electrocardiographic analysis. Improvement in diagnostic recognition by computer was most striking in patients with hypertensive cardiovascular disease or chronic obstructive lung disease. The multivariate classification scheme functioned most efficiently when a problem-oriented approach to diagnosis was simulated. This was accomplished by a simple method of adjusting prior probabilities according to the diagnostic problem under consideration.
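As a rough sketch of the prior-adjusted Bayesian classification idea described above (not the actual VA program), assuming Gaussian class-conditional densities and using invented categories, features, and numbers:

```python
# Illustrative only: Bayes classification in which posterior probabilities for
# each diagnostic category are recomputed after the prior probabilities are
# adjusted to reflect the clinical question being asked.
import numpy as np
from scipy.stats import multivariate_normal

def posteriors(x, class_params, priors):
    """class_params: {category: (mean_vector, covariance_matrix)}."""
    likes = {c: multivariate_normal.pdf(x, mean=m, cov=S)
             for c, (m, S) in class_params.items()}
    weighted = {c: priors[c] * likes[c] for c in likes}
    total = sum(weighted.values())
    return {c: w / total for c, w in weighted.items()}

# Placeholder two-feature model with three made-up categories
params = {
    "normal":      (np.array([0.0, 0.0]), np.eye(2)),
    "hypertrophy": (np.array([1.5, 0.5]), np.eye(2)),
    "infarction":  (np.array([-1.0, 2.0]), np.eye(2)),
}
x = np.array([1.2, 0.4])
general_priors = {"normal": 0.6, "hypertrophy": 0.2, "infarction": 0.2}
problem_priors = {"normal": 0.3, "hypertrophy": 0.6, "infarction": 0.1}  # problem-oriented adjustment
print(posteriors(x, params, general_priors))
print(posteriors(x, params, problem_priors))
```

Switching from the general priors to the problem-oriented priors changes the posterior probabilities without touching the likelihood model, which is the kind of adjustment the last sentence describes.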
Computers and Biomedical Research | 1973
Jerome Cornfield; Rosalie A. Dunn; Charles D. Batchlor; Hubert V. Pipberger
Abstract A computer program for the automatic analysis of the electrocardiogram is described with details about wave recognition, measurements, and the calculation of the posterior probabilities of each diagnosis. The advantages and disadvantages of this multivariate statistical procedure are discussed. Classification statements result from separate analyses of the QRS-T complex, P wave, and ST segment. Criteria for inclusion of the patients forming the data base are given, with tables of measurement means and standard deviations, as well as an investigation of agreement between calculated and observed probabilities. Estimated error rates are given in the form of misclassification matrices, computed from large numbers of tracings collected by a Veterans Administration Cooperative Study, where the correct diagnoses are taken from clinical, laboratory, and autopsy information.
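For the error rates mentioned above, a brief sketch (with invented labels) of how a misclassification matrix is tallied from documented versus computer-assigned diagnoses:

```python
# Illustrative only: tabulating a misclassification (confusion) matrix from
# lists of true and predicted diagnostic categories; the labels are invented.
from collections import Counter

true_dx = ["normal", "infarction", "normal", "hypertrophy", "infarction"]
pred_dx = ["normal", "infarction", "hypertrophy", "hypertrophy", "normal"]

counts = Counter(zip(true_dx, pred_dx))
categories = sorted(set(true_dx) | set(pred_dx))
for t in categories:
    # Row: counts of each predicted category for this true category
    print(t, [counts[(t, p)] for p in categories])
```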
Journal of Clinical Investigation | 1962
Leroy E. Duncan; Jerome Cornfield; Katherin Buck
According to one of the common theories (1, 2) of the genesis of atherosclerosis, that disease arises as a complication of the passage of plasma water and its solutes through arterial tissue. In this passage the plasma constituents are believed to move from the arterial lumen across the intimal endothelium and on through the arterial tissue. The low density lipoproteins are thought to share in this general movement of plasma constituents through arterial wall but to have characteristics that lead to their being trapped in the arterial intima where they decompose, leaving lipid deposits that eventually form atheromata. Because of this, it appears likely that information about the movement through arterial wall of plasma proteins, and especially of plasma lipoproteins, may increase our understanding of the genesis of atherosclerosis. Studies of the movement of the lipoproteins into arterial wall have proved difficult to perform and interpret (3) because the lipoproteins contain both lipid and protein moieties and because their lipid components exchange freely between the various lipoproteins (4) and with the lipids of cells (5). It therefore seemed advantageous to develop information about the passage of a simple plasma protein like albumin through arterial wall because such studies are relatively easy to interpret and can be carried out relatively rapidly to form a background for the more involved study of the passage of lipoproteins into arterial tissue. Our studies suggest that such a background will be pertinent. This conclusion is based on the similarities between the entrance rates of albumin into canine aortic wall, the entrance of labeled cholesterol into canine aortic wall, and the accumulation of cholesterol in canine aortic wall in experimental atherosclerosis. Albumin enters the inner layer of the aortic wall with a gradient of rates (6). The rate of entrance is fastest in the proximal aorta and becomes progressively less rapid down the length of the aorta. Labeled plasma cholesterol enters the aortic wall with a similar rate gradient (7). This similarity suggests that the plasma lipoproteins of the normal dog also enter aortic wall with a gradient of rates, but the lability of cholesterol as a label for lipoproteins renders such a conclusion tentative. Early in the development of experimental canine atherosclerosis the accumulation of cholesterol in the inner layer of the aortic wall forms a gradient (8) similar to the gradient of the entrance rate of albumin. This suggests that some common factor is involved in the entrance of albumin and the accumulation of cholesterol, and is compatible with the hypothesis that low density lipoproteins enter aortic wall with a gradient of rates and thus give rise to the observed gradient of cholesterol concentrations. Alternative explanations are of course possible. In addition, the fact that later in the course of experimental canine atherosclerosis the gradient disappears and the concentration of cholesterol comes to be highest in the abdominal aorta makes the interpretation of the later course of the disease highly speculative. 
Despite the foregoing qualifications, the observed similarities of the entrance rate of labeled albumin in normal dogs, the entrance rate of labeled cholesterol in normal dogs, and the rate of accumulation of cholesterol in dogs on an atherogenic regimen do suggest that information about the movement of albumin through arterial wall will provide a pertinent background to the study of the movement of lipoproteins through that tissue. Studies already carried out have shown the existence of the gradient of rates and have demonstrated that the gradient is not due to the pulsatile nature of blood pressure or flow, since it is partially preserved in vitro under static conditions (9). The present paper reports studies
Journal of the American Statistical Association | 1955
Max Halperin; Samuel W. Greenhouse; Jerome Cornfield; Julia Zalokar
Abstract Tables of upper and lower limits on the upper 5 per cent and 1 per cent points of the distribution of the studentized maximum absolute deviate in normal samples are presented. The method of computation and the reliability of the tables are described, and approximations which may be used to supplement the table are derived and discussed. Examples of the use of the tables are given, with special attention devoted to their use for multiple significance testing on a set of means. (The authors acknowledge the valuable computing assistance of Mrs. Ina R. Hughes.)
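A rough sketch of the statistic the tables cover, with a Monte Carlo estimate standing in for the exact tabulated percentage points; the within-sample standard deviation is used for studentization in this sketch, and the sample size, significance level, and data are illustrative only (the paper's exact definition and degrees of freedom may differ).

```python
# Illustrative only: the studentized maximum absolute deviate
#   max_i |x_i - xbar| / s
# and a Monte Carlo approximation of its upper 5 per cent point for normal
# samples of size n, in place of the exact table limits.
import numpy as np

def max_abs_deviate(x):
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def upper_point(n, alpha=0.05, reps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    stats = [max_abs_deviate(rng.standard_normal(n)) for _ in range(reps)]
    return np.quantile(stats, 1 - alpha)

n = 10
crit = upper_point(n)
sample = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 12.4]
stat = max_abs_deviate(sample)
print(crit, stat, stat > crit)  # flag the largest deviation if it exceeds the critical value
```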
Journal of the American Statistical Association | 1969
Jerome Cornfield; Max Halperin; Samuel W. Greenhouse
Abstract An adaptive procedure for sequential clinical trials is considered, in which an increasing proportion of patients is assigned to the better of two treatments as evidence for it accumulates. The procedure considered is a multi-stage application of an optimum two-stage procedure. In the first stage of the two-stage procedure patients are assigned to each of the two treatments, and in the second all are assigned to the apparently better of the two. The optimum two-stage procedure is one in which the proportions assigned to each of the treatments in the first stage, 2pθ and 2p(1 − θ), are such as to minimize a certain natural cost function, which depends on information available before the first stage and on the total number of patients to be treated. Multi-stage application consists of recalculating the optimum two-stage θ at each point in the sequential stream of patients and interpreting it as the proportion of the next "small" batch of patients to be assigned to treatment 1. This multi-stag...
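A loose sketch of the multi-stage idea, with a caveat: the optimal two-stage θ in the paper minimizes a cost function not reproduced in this abstract, so the fraction assigned to treatment 1 below simply tracks the posterior probability (Beta-Bernoulli model, invented response rates) that treatment 1 is better, purely as a stand-in.

```python
# Illustrative only: a multi-stage adaptive allocation in which the fraction of
# each new batch assigned to treatment 1 is recalculated from the data so far.
# Here that fraction is the posterior probability that treatment 1 has the
# higher success rate -- a stand-in for the paper's optimal two-stage theta.
import numpy as np

rng = np.random.default_rng(1)
true_rates = [0.55, 0.40]          # unknown to the experimenter
successes = np.zeros(2)
failures = np.zeros(2)

def prob_treatment1_better(draws=5000):
    t1 = rng.beta(1 + successes[0], 1 + failures[0], draws)
    t2 = rng.beta(1 + successes[1], 1 + failures[1], draws)
    return np.mean(t1 > t2)

batch_size = 20
for batch in range(10):
    theta = prob_treatment1_better()            # recalculated before each batch
    n1 = int(np.round(theta * batch_size))      # patients in this batch given treatment 1
    for arm, n in ((0, n1), (1, batch_size - n1)):
        outcomes = rng.random(n) < true_rates[arm]
        successes[arm] += outcomes.sum()
        failures[arm] += n - outcomes.sum()
    print(f"batch {batch}: theta={theta:.2f}, assigned {n1} of {batch_size} to treatment 1")
```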