Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Timothy A. Denton is active.

Publication


Featured research published by Timothy A. Denton.


American Heart Journal | 1990

Fascinating rhythm: A primer on chaos theory and its application to cardiology

Timothy A. Denton; George A. Diamond; Richard H. Helfant; Steven S. Khan; Hrayr S. Karagueuzian

Nonlinear dynamics is an exciting new way of looking at peculiarities that in the past have been ignored or explained away. We have attempted to give a general introduction to the basics of the mathematics, applications to cardiology, and a brief review of the new tools needed to use the concepts of nonlinear mathematics. The careful mathematical approach to problems in cardiac electrical dynamics and blood flow is opening a window on behaviors and mechanisms previously inaccessible.


Journal of Neurochemistry | 1987

A dopaminergic cell line variant resistant to the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine.

Timothy A. Denton; Bruce D. Howard

1‐Methyl‐4‐phenyl‐1,2,3,6‐tetrahydropyridine (MPTP) is known to cause parkinsonism by killing dopaminergic neurons; the toxic substance is a metabolite, 1‐methyl‐4‐phenylpyridinium ion (MPP+). PC12 cells, which are dopaminergic, are killed in culture by MPTP and MPP+ but at concentrations much higher than that required to kill affected neurons in vivo. However, at low concentrations (10–100 μM), MPP+ caused an increased production of lactate by PC12 cells. MPP+‐treated PC12 cells exhibited decreased mitochondrial respiration. Mitochondria from the treated cells respired normally in the presence of added succinate but not β‐hydroxybutyrate, a finding indicating that MPP+ inhibits the oxidation of some substrates selectively. MPP+ was more effective in killing the cells when glycolysis was reduced with 2‐deoxyglucose or by lowering the glucose content of the culture medium. Under these conditions, MPP+ inhibited ATP synthesis and depleted cellular stores of ATP. A PC12 variant that is even more resistant to MPTP and MPP+ than are wild‐type cells has been isolated. The MPTP‐resistant variant is also more resistant to the lethal effects of oligomycin, antimycin A, and rotenone. This variant exhibited altered lactate production and mitochondrial respiration. It is suggested that some brain neurons that accumulate MPP+ without being killed by it may also have an energy metabolism somewhat different from that of more sensitive neurons.


Annals of Internal Medicine | 1993

Alternative Perspectives on the Biased Foundations of Medical Technology Assessment

George A. Diamond; Timothy A. Denton

"The truths which are ultimately accepted as the first principles of a science, are really the last results of metaphysical analysis." (J. S. Mill, Utilitarianism)

Health care costs currently account for 14% of the U.S. Gross National Product (an increase from 9.4% in 1980) and are increasing at three times the rate of the Consumer Price Index [1]. The high cost of medical technology is only one of many reasons for these increases. Nevertheless, most physicians think that unnecessary use of medical technology has contributed to the rising cost of health care [2]; similar views are voiced both by the general public and by elected officials [3]. In response, increasing attention is directed at the process of medical technology assessment. This process, however, is built on a foundation of metaphysical assumptions about study design, data analysis, and practical clinical application [4]. We analyze the rational basis for several of these assumptions and discuss their implications for health care policy. Our goal is to optimize the relevance of technology assessment to the care of the individual patient.

Efficacy versus Effectiveness

New medical technology, whether drug or device, is usually evaluated under optimal conditions: in highly selected patient populations, by the best trained physicians, and in academic centers of excellence. As a result, the diffusion of technology from the investigational laboratory to clinical practice is fueled more by the promise of performance than by performance itself; despite this imbalance, both characteristics are important. The drug, device, or procedure must first have utility among a group of patients in an ideal setting (efficacy), but it must also have utility for the individual patient in a realistic clinical setting (effectiveness) [5]. Reports of efficacy get published in journal articles, but many of these articles fail to emphasize the complex learning curve and limited range of applicability associated with a new technology, factors that are critical to the technology's effectiveness. Clinicians need to know more about the way a technology works in the real world, populated by real doctors, real patients, and real problems, than they do about the way it works in the utopian world described in a journal article [4, 6]. We shall illustrate this distinction with a relatively common example: the analysis of crossovers in a randomized clinical trial.

Crossovers

Unintentional treatment crossover in a clinical trial occurs when a patient allocated to one treatment group receives the treatment intended for the other group. In a trial of coronary artery bypass surgery, for example, a patient assigned to receive medical treatment might decide to have surgery during the follow-up period because of worsening symptoms, or a patient assigned to surgery might refuse the procedure and opt for medical treatment instead. Crossovers such as these are common in trials of coronary artery bypass surgery. In the Veterans Affairs (VA) Study [7], 17% of patients allocated to medical treatment had surgery during the next 21 months, and 6% of patients allocated to surgical treatment during that same period refused the procedure. The cumulative crossover to surgery in the VA trial increased to 30% after 8 years and to 38% after 11 years [8]. In the European Coronary Surgery Study (ECSS) [9], crossover rates after 5 years were 24% for medical allocations and 7% for surgical allocations. In the Coronary Artery Surgery Study (CASS) [10], crossover rates after 5 years were 24% for medical allocations and 8% for surgical allocations.

Crossover rates of this magnitude raise serious concerns about bias. In CASS, noncompliant patients crossing from medical allocation to surgical treatment were more symptomatic and had more anatomic disease than did their compliant counterparts. The annual crossover rates for those in the medical group with single-, double-, and triple-vessel disease were 2.0%, 4.2%, and 7.6%, respectively (P < 0.001) [11]. In contrast, patients crossing from surgical allocation to medical treatment had fewer symptoms and less anatomic disease; 15% of patients with single-vessel disease refused surgery and crossed to medical therapy, whereas only 3% with triple-vessel disease did so (P = 0.02) [10]. By inference, then, treatment crossover in the CASS was related to the magnitude of myocardial ischemia and the risk for subsequent coronary events. This bias favored medical treatment, because sicker patients allocated to medical treatment tended to cross over to surgery (thereby decreasing the proportion of events among those actually receiving medical treatment), whereas healthier patients allocated to surgical treatment tended to refuse surgery (thereby increasing the proportion of events among those actually receiving surgical treatment). Because conclusions from clinical trials influence medical practice [11] and because this bias can lead to erroneous conclusions, the analysis and interpretation of such trials remain controversial.

Several analytical methods can be used to deal with the problem. The first method is analysis by treatment received: patients are grouped according to the actual treatment administered, regardless of the initial assignment. The second method is analysis by treatment assigned: patients are grouped according to the initial treatment allocation (the intention to treat), regardless of the actual treatment administered. These alternatives are the core of a long-standing debate between clinicians and statisticians. The clinicians argue that analysis by treatment assigned is unrealistic and misleading; they claim that patients not receiving a treatment should not be analyzed as if they had. The statisticians argue that analysis by treatment received is naive and improper, because it is not the administration, but the allocation, of treatment that has been randomized; they maintain that a randomized trial should analyze only randomized groups. This debate can be resolved by recognizing that each analytic approach is best suited to a particular kind of trial [12, 13].

Explanatory Trials

According to Schwartz and Lellouch [12], an explanatory trial is aimed at efficacy and understanding. The explanatory trial seeks to verify a biological hypothesis such as "Treatment A is better than Treatment B." If we are interested in comparing the effect of coronary artery bypass surgery and coronary angioplasty on long-term survival, we might restrict the study to older patients with multivessel disease in whom survival differences will be easier to show during a reasonably short follow-up period. This design has two advantages. First, the investigator has precise control of the risk for wrongly concluding that A and B are different when they actually are not (the conventional false-positive or Type-I error) by choosing the threshold for statistical significance. Second, the investigator can also control the risk for wrongly concluding that A and B are not different when they actually are (the conventional false-negative or Type-II error) by determining the number of patients in the study. The disadvantage, however, is that the sample might be defined in a limited manner that restricts the inferential power of the conclusions (for example, with respect to younger patients or those with single-vessel disease). The goal of an explanatory trial requires that one compare groups defined by the treatment they actually received. Accordingly, treatment crossovers must be excluded from analysis, even though such exclusions will bias the analysis whenever the crossovers do not occur randomly. As a result, the explanatory power of such (pseudorandomized) trials is compromised, and they can no longer be relied on to determine whether one treatment is better than the other.

Pragmatic Trials

A pragmatic trial is aimed at effectiveness and decision [12]. The pragmatic trial seeks to define the utility of choosing among available alternatives, thereby minimizing the probability of administering the inferior treatment: Should we recommend Treatment A or Treatment B? The disadvantage of the pragmatic trial is that it must be done in a large sample that is broadly representative with respect to the decision if the results are to be extrapolated. Its advantage is that the crossover problem is circumvented by definition. Because the result of recommending Treatment A is compared with that of recommending Treatment B, rather than the result of administering Treatment A with that of administering Treatment B, the data can be analyzed with respect to the initial therapeutic allocation (treatment assigned) without regard for actual therapeutic administration (treatment received). Thus, although it has been claimed that analysis by treatment assigned reduces the clinical relevance of the findings [14], Schwartz and Lellouch [12] conclude that this analysis is precisely that of interest in practice.

Computer Simulation of Crossover

Ferguson and coworkers [15] recently did a series of computer simulations to quantify the effect of crossover from one treatment arm to another during a hypothetical randomized clinical trial comparing the event-free survival of patients who had medical versus surgical treatment of coronary artery disease. The simulation was designed so that the two treatments were actually equally effective. First, a logistic prediction model was constructed from an actual data set to predict 62 cardiac events among 598 patients who had rest-exercise radionuclide angiography (based on age, gender, resting left ventricular ejection fraction, exercise duration, and maximum exercise-induced electrocardiographic ST-segment depression). Patients were randomly assigned to one of the two hypothetical treatment arms: surgery and medicine. Each patient was then selected for crossover to the other arm based on a crossover index defined as the product of a uniform random number and the probability…
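To make the crossover bias concrete, the following minimal Monte Carlo sketch works in the spirit of the simulation described above; it is not Ferguson and coworkers' model, and the risk distribution, crossover rules, thresholds, and sample size are illustrative assumptions. The two arms are constructed to be equally effective, yet risk-related crossover makes the treatment-received comparison favor medicine, as described for CASS.

```python
# Minimal sketch (assumed parameters, not the published simulation) of how
# risk-related crossover biases an as-treated comparison even when the two
# treatments are, by construction, equally effective.
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

# Each simulated patient has an individual event risk that is identical under
# either treatment (mean risk about 0.2; the Beta(2, 8) choice is arbitrary).
risk = rng.beta(2, 8, size=n)

assigned = rng.integers(0, 2, size=n)   # 0 = medicine, 1 = surgery
received = assigned.copy()

# Crossover rule in the spirit of the text's "crossover index": a uniform
# random number scaled by the patient's risk (for medical assignees) or by one
# minus the risk (for surgical assignees). Thresholds are illustrative.
u = rng.uniform(0, 1, size=n)
to_surgery  = (assigned == 0) & (u * risk > 0.15)        # sicker patients cross to surgery
to_medicine = (assigned == 1) & (u * (1 - risk) > 0.80)  # healthier patients refuse surgery
received[to_surgery] = 1
received[to_medicine] = 0

# Outcomes depend only on risk, never on treatment.
events = rng.uniform(0, 1, size=n) < risk

def event_rate(mask):
    return events[mask].mean()

print("By treatment assigned (intention to treat):")
print(f"  medicine {event_rate(assigned == 0):.3f}   surgery {event_rate(assigned == 1):.3f}")
print("By treatment received:")
print(f"  medicine {event_rate(received == 0):.3f}   surgery {event_rate(received == 1):.3f}")
```

Under this construction the intention-to-treat comparison shows roughly equal event rates, while the as-treated comparison shows a spurious advantage for medicine, because high-risk patients have migrated out of the medicine-received group and into the surgery-received group.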


Biochemical and Biophysical Research Communications | 1984

Inhibition of dopamine uptake by N-methyl-4-phenyl-1,2,3,6-tetrahydropyridine, a cause of parkinsonism

Timothy A. Denton; Bruce D. Howard

N-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine has been reported to cause parkinsonism in man and monkeys, producing behavioral effects within 5 min of administration. The compound reversibly and competitively inhibited (IC50 = 2 μM) dopamine uptake into PC12, a clonal line of rat pheochromocytoma cells that store and secrete dopamine and acetylcholine. Uptake of choline and 2-deoxyglucose was not affected. Prolonged exposure to the compound was lethal to PC12 cells; survivors of this treatment lost the ability to store dopamine and acetylcholine and to extend neurites upon incubation with nerve growth factor.


American Journal of Cardiology | 1995

Prior Restraint: A Bayesian Perspective on the Optimization of Technology Utilization for Diagnosis of Coronary Artery Disease

George A. Diamond; Timothy A. Denton; Daniel S. Berman; Ishac Cohen

In conclusion, from a Bayesian perspective at least one third of patients with suspected coronary artery disease are inappropriately referred for scintigraphic diagnostic testing. Referral strategies such as those described in this report may be a powerful mechanism for encouraging more appropriate technology utilization while simultaneously controlling costs, and are therefore deserving of a formal prospective demonstration trial. However, since only half of the patients currently being tested are referred for diagnostic purposes, analogous strategies must be developed with respect to prognostic and therapeutic evaluation.
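As a rough illustration of the Bayesian reasoning behind such referral strategies (not a reproduction of the models in the paper), the sketch below applies Bayes' theorem to a dichotomous diagnostic test; the sensitivity, specificity, and pretest probabilities are assumed values chosen only to show why testing adds little when the pretest probability is already very low or very high.

```python
# Bayes' theorem for a dichotomous diagnostic test. The numbers below are
# illustrative assumptions, not sensitivities or probabilities from the paper.

def posttest_probability(pretest, sensitivity, specificity, positive_result=True):
    """Posterior probability of disease after a positive or negative test."""
    if positive_result:
        p_result_disease = sensitivity           # true-positive rate
        p_result_no_disease = 1.0 - specificity  # false-positive rate
    else:
        p_result_disease = 1.0 - sensitivity     # false-negative rate
        p_result_no_disease = specificity        # true-negative rate
    numerator = pretest * p_result_disease
    return numerator / (numerator + (1.0 - pretest) * p_result_no_disease)

# Assumed test: 90% sensitivity, 80% specificity.
for pretest in (0.05, 0.50, 0.90):
    pos = posttest_probability(pretest, 0.90, 0.80, positive_result=True)
    neg = posttest_probability(pretest, 0.90, 0.80, positive_result=False)
    print(f"pretest {pretest:.2f}: posttest {pos:.2f} if positive, {neg:.2f} if negative")
```

With these assumed operating characteristics, a positive result in a 5% pretest patient still leaves the posttest probability below 20%, and a negative result in a 90% pretest patient leaves it above 50%; in both cases the test changes management little, which is the kind of argument a Bayesian referral strategy makes before the study is ordered.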


Computers in Biology and Medicine | 1991

Can the analytic techniques of nonlinear dynamics distinguish periodic, random and chaotic signals?

Timothy A. Denton; George A. Diamond

Recent advances in the mathematical discipline of nonlinear dynamics have led to its use in the analysis of many biologic processes, but the ability of the tools of nonlinear dynamic analysis to identify chaotic behavior has not been determined. We analyzed a series of signals (periodic, chaotic, and random) with five tools of nonlinear dynamics. Periodic signals included sine, square, triangular, sawtooth, modulated sine, and quasiperiodic waves, generated at multiple amplitudes and frequencies. Chaotic signals were generated by solving sets of nonlinear equations, including the logistic map, Duffing's equation, the Lorenz equations, and the Silnikov attractor. Random signals were both discontinuous and continuous. Gaussian noise was added to some signals at magnitudes of 1, 2, 5, 10, and 20% of the signal's amplitude. Each signal was then subjected to the tools of nonlinear dynamics (phase plane plot, return map, Poincaré section, correlation dimension, and spectral analysis) to determine the relative ability of each to characterize the underlying system as periodic, chaotic, or random. In the absence of noise, phase plane plots and return maps were the most sensitive detectors of chaotic and periodic processes. Spectral analysis could determine whether a process was periodic or quasiperiodic, but could not distinguish between chaotic and random signals. Correlation dimension was useful for determining the overall complexity of a signal, but could not be used in isolation to identify a chaotic process. Noise at any level effaced the structure of the phase plane plot. Return maps were relatively immune to noise at levels of up to 5%. Spectral analysis and correlation dimension were insensitive to noise. Accordingly, we recommend that unknown signals be subjected to all of the techniques to increase the accuracy of identification of the underlying process. Based on these data, we conclude that no single test is sufficiently sensitive or specific to categorize an unknown signal as chaotic.
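As a small illustration of one of the tools compared above, the sketch below builds a return map (x[n+1] versus x[n]) for a periodic, a chaotic, and a random test signal and summarizes it with a simple bin-and-spread statistic; the test signals, bin count, and statistic are assumptions for illustration, not the authors' implementation.

```python
# Return map: for a deterministic signal the points (x[n], x[n+1]) lie near a
# thin curve, while for uncorrelated noise they scatter across the plane. The
# spread of x[n+1] within narrow bins of x[n] is a crude way to quantify this.
import numpy as np

def logistic_map(n, r=4.0, x0=0.3):
    """Chaotic signal: iterate x -> r * x * (1 - x)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def return_map_spread(x, bins=50):
    """Mean standard deviation of x[n+1] within narrow bins of x[n]."""
    xn, xn1 = x[:-1], x[1:]
    edges = np.linspace(xn.min(), xn.max(), bins + 1)
    idx = np.clip(np.digitize(xn, edges) - 1, 0, bins - 1)
    spreads = [xn1[idx == b].std() for b in range(bins) if np.sum(idx == b) > 5]
    return float(np.mean(spreads))

n = 5000
signals = {
    "periodic (sine)": np.sin(2 * np.pi * 0.013 * np.arange(n)),
    "chaotic (logistic map)": logistic_map(n),
    "random (uniform noise)": np.random.default_rng(0).uniform(0, 1, n),
}
for name, sig in signals.items():
    print(f"{name:24s} return-map spread = {return_map_spread(sig):.4f}")
```

The two deterministic signals yield small spreads because their return-map points hug a thin curve, whereas the noise fills the plane and yields a spread close to its overall standard deviation; this echoes the abstract's finding that return maps detect deterministic structure while other tools, such as spectral analysis, are needed to separate periodic from chaotic behavior.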


Current Opinion in Cardiology | 1994

Long-term survival after coronary artery bypass grafting.

Steven S. Khan; Timothy A. Denton; Jack M. Matloff

The most quoted long-term outcome studies from the coronary artery bypass surgery literature were performed in the 1970s, and these trials—the Coronary Artery Surgery Study, the Veterans Administration Study, and the European Cooperative Study—added significantly to our knowledge of the efficacy of bypass surgery. However, important studies are still being performed and are refining our knowledge of long-term outcomes. This review covers early factors that affect long-term outcome, and recent information concerning particular subgroups of patients undergoing bypass surgery. In addition, new information is becoming available about the relative roles of percutaneous transluminal coronary angioplasty and coronary artery bypass grafting, and these studies are also discussed.


Neurochemistry International | 1986

Binding and uptake of MPTP by preparations of human and animal brain

Timothy A. Denton; Bruce D. Howard

1-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) is known to cause parkinsonism in man and animals, producing acute behavioral effects within minutes of administration. This syndrome has been attributed to specific effects on dopaminergic systems. MPTP blocked the binding of haloperidol to membranes from rat and human brain (IC50 = 2.5 μM), but it did not block the binding of flupenthixol to these membranes. These results indicate that MPTP is a ligand for D-2 dopamine receptors but not for D-1 dopamine receptors. Synaptosomes from rat, mouse, or guinea-pig corpus striatum or from monkey caudate nucleus exhibited little ability to take up MPTP from the incubation medium; the synaptosomes took up at least 20-50 times more dopamine than MPTP. These results indicate that MPTP could cause acute effects by binding to dopamine receptors and that the specific toxicity MPTP exerts on dopaminergic neurons is not primarily based on specific uptake of MPTP into these neurons.


Journal of Electrocardiology | 1990

Comprehensive electrocardiology: theory and practice in health and disease

Timothy A. Denton; Thomas Peter


Journal of the American College of Cardiology | 1995

Incremental prognostic value of exercise thallium-201 myocardial single-photon emission computed tomography late after coronary artery bypass surgery

Walter Palmas; Scott Bingham; George A. Diamond; Timothy A. Denton; Hosen Kiat; John D. Friedman; Debra Scarlata; Jamshid Maddahi; Ishac Cohen; Daniel S. Berman

Collaboration


Dive into Timothy A. Denton's collaborations.

Top Co-Authors

George A. Diamond

Cedars-Sinai Medical Center

Steven S. Khan

Cedars-Sinai Medical Center

Jack M. Matloff

Cedars-Sinai Medical Center

Daniel S. Berman

Cedars-Sinai Medical Center

Walter Palmas

University of California

Boris Kogan

University of California

Debra Scarlata

University of California

Eckart Fleck

Humboldt State University
