Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Constantine Gatsonis is active.

Publication


Featured research published by Constantine Gatsonis.


The New England Journal of Medicine | 2011

Reduced lung-cancer mortality with low-dose computed tomographic screening.

Denise R. Aberle; Amanda M. Adams; Christine D. Berg; William C. Black; Jonathan D. Clapp; Richard M. Fagerstrom; Ilana F. Gareen; Constantine Gatsonis; Pamela M. Marcus; JoRean D. Sicks

BACKGROUND The aggressive and heterogeneous nature of lung cancer has thwarted efforts to reduce mortality from this cancer through the use of screening. The advent of low-dose helical computed tomography (CT) altered the landscape of lung-cancer screening, with studies indicating that low-dose CT detects many tumors at early stages. The National Lung Screening Trial (NLST) was conducted to determine whether screening with low-dose CT could reduce mortality from lung cancer. METHODS From August 2002 through April 2004, we enrolled 53,454 persons at high risk for lung cancer at 33 U.S. medical centers. Participants were randomly assigned to undergo three annual screenings with either low-dose CT (26,722 participants) or single-view posteroanterior chest radiography (26,732). Data were collected on cases of lung cancer and deaths from lung cancer that occurred through December 31, 2009. RESULTS The rate of adherence to screening was more than 90%. The rate of positive screening tests was 24.2% with low-dose CT and 6.9% with radiography over all three rounds. A total of 96.4% of the positive screening results in the low-dose CT group and 94.5% in the radiography group were false positive results. The incidence of lung cancer was 645 cases per 100,000 person-years (1060 cancers) in the low-dose CT group, as compared with 572 cases per 100,000 person-years (941 cancers) in the radiography group (rate ratio, 1.13; 95% confidence interval [CI], 1.03 to 1.23). There were 247 deaths from lung cancer per 100,000 person-years in the low-dose CT group and 309 deaths per 100,000 person-years in the radiography group, representing a relative reduction in mortality from lung cancer with low-dose CT screening of 20.0% (95% CI, 6.8 to 26.7; P=0.004). The rate of death from any cause was reduced in the low-dose CT group, as compared with the radiography group, by 6.7% (95% CI, 1.2 to 13.6; P=0.02). CONCLUSIONS Screening with the use of low-dose CT reduces mortality from lung cancer. (Funded by the National Cancer Institute; National Lung Screening Trial ClinicalTrials.gov number, NCT00047385.).
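
As a quick arithmetic sketch (not the trial's actual person-time analysis), the headline ratios can be reproduced from the rates quoted in the abstract above:

```python
# Quick arithmetic sketch, not the trial's actual analysis: reproduce the
# headline ratios from the rates reported in the NLST abstract above.

# Lung-cancer incidence, cases per 100,000 person-years
incidence_ldct = 645       # low-dose CT group (1060 cancers)
incidence_cxr = 572        # chest radiography group (941 cancers)
print(f"Incidence rate ratio: {incidence_ldct / incidence_cxr:.2f}")   # ~1.13

# Lung-cancer mortality, deaths per 100,000 person-years
mortality_ldct = 247
mortality_cxr = 309
reduction = 1 - mortality_ldct / mortality_cxr
print(f"Relative mortality reduction: {reduction:.1%}")   # ~20%, matching the reported 20.0% up to rounding
```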


Clinical Chemistry | 2003

The STARD Statement for Reporting Studies of Diagnostic Accuracy: Explanation and Elaboration

Patrick M. Bossuyt; Johannes B. Reitsma; David E. Bruns; Constantine Gatsonis; Paul Glasziou; Les Irwig; David Moher; Drummond Rennie; Henrica C.W. de Vet; Jeroen G. Lijmer

The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in the study and to evaluate the generalizability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding, and dissemination of the checklist. The document contains a clarification of the meaning, rationale, and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart, and this explanation and elaboration document should be useful resources to improve reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in health care.


Annals of Internal Medicine | 2003

The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration.

Patrick M. Bossuyt; Johannes B. Reitsma; David E. Bruns; Constantine Gatsonis; Paul Glasziou; Les Irwig; David Moher; Drummond Rennie; Henrica C.W. de Vet; Jeroen G. Lijmer

Introduction

In studies of diagnostic accuracy, results from one or more tests are compared with the results obtained with the reference standard on the same subjects. Such accuracy studies are a vital step in the evaluation of new and existing diagnostic technologies (1, 2). Several factors threaten the internal and external validity of a study of diagnostic accuracy (3-8). Some of these factors have to do with the design of such studies, others with the selection of patients, the execution of the tests, or the analysis of the data. In a study involving several meta-analyses, a number of design deficiencies were shown to be related to overly optimistic estimates of diagnostic accuracy (9). Exaggerated results from poorly designed studies can trigger premature adoption of diagnostic tests and can mislead physicians into incorrect decisions about the care of individual patients. Reviewers and other readers of diagnostic studies must therefore be aware of the potential for bias and a possible lack of applicability. A survey of studies of diagnostic accuracy published in four major medical journals between 1978 and 1993 revealed that the methodological quality was mediocre at best (8). Furthermore, this review showed that information on key elements of design, conduct, and analysis of diagnostic studies was often not reported (8). To improve the quality of reporting of studies of diagnostic accuracy, the Standards for Reporting of Diagnostic Accuracy (STARD) initiative was started. The objective of the STARD initiative is to improve the quality of reporting of studies of diagnostic accuracy. Complete and accurate reporting allows the reader to detect the potential for bias in the study and to judge the generalizability and applicability of the results. For this purpose, the STARD project group has developed a single-page checklist. Where possible, the decision to include items in the checklist was based on evidence linking these items to bias, variability in results, or limitations of the applicability of results to other settings. The checklist can be used to verify that all essential elements are included in the report of a study. This explanatory document aims to facilitate the use, understanding, and dissemination of the checklist. The document contains a clarification of the meaning, rationale, and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The first part of this document contains a summary of the design and terminology of diagnostic accuracy studies. The second part contains an item-by-item discussion with examples.

Studies of Diagnostic Accuracy

Studies of diagnostic accuracy have a common basic structure (10). One or more tests are evaluated, with the purpose of detecting or predicting a target condition. The target condition can refer to a particular disease, a disease stage, a health status, or any other identifiable condition within a patient, such as staging a disease already known to be present, or a health condition that should prompt clinical action, such as the initiation, modification, or termination of treatment. Here, "test" refers to any method for obtaining additional information on a patient's health status. This includes laboratory tests, imaging tests, function tests, pathology, history, and physical examination. In a diagnostic accuracy study, the test under evaluation, referred to here as the index test, is applied to a series of subjects. The results obtained with the index test are compared with the results of the reference standard, obtained in the same subjects. In this framework, the reference standard is the best available method for establishing the presence or absence of the target condition. The reference standard can be a single test, or a combination of methods and techniques, including clinical follow-up of tested subjects. The term accuracy refers to the amount of agreement between the results from the index test and those from the reference standard. Diagnostic accuracy can be expressed in a number of ways, including sensitivity-specificity pairs, likelihood ratios, diagnostic odds ratios, and areas under ROC [receiver-operating characteristic] curves (11, 12).

Study Question, Design, and Potential for Bias

Early in the evaluation of a test, the author may simply want to know if the test is able to discriminate. The appropriate early question may be "Do the test results in patients with the target condition differ from the results in healthy people?" If preliminary studies answer this question affirmatively, the next study question is, "Are patients with specific test results more likely to have the target disorder than similar patients with other test results?" The usual study design to answer this is to apply the index test and the reference standard to a number of patients who are suspected of the target condition. Some study designs are more prone to bias and have a more limited applicability than others. In this article, the term bias refers to the difference between the observed measures of test performance and the true measures. No single design is guaranteed to be both feasible and able to provide valid, informative, and relevant answers with optimal precision to all study questions. For each study, the reader must judge the relevance, the potential for bias, and the limitations to applicability, making full and transparent reporting critical. For this reason, checklist items refer to the research question that prompted the study of diagnostic accuracy and ask for an explicit and complete description of the study design and results.

Variability

Measures of test accuracy may vary from study to study. Variability may reflect differences in patient groups, differences in setting, differences in definition of the target condition, and differences in test protocols or in criteria for test positivity (13). For example, bias may occur if a test is evaluated under circumstances that do not correspond to those of the research question. Examples are evaluating a screening test for early disease in patients with advanced stages of the disease and evaluating a physician's office test device in the specialty department of a university hospital. The checklist contains a number of items to make sure that a study report contains a clear description of the inclusion criteria for patients, the testing protocols and the criteria for positivity, as well as an adequate account of subjects included in the study and their results. These items will enable readers to judge if the study results apply to their circumstances.

Items in the Checklist

The next section contains a point-by-point discussion of the items on the checklist. The order of the items corresponds to the sequence used in many publications of diagnostic accuracy studies. Specific requirements made by journals could lead to a different order.

Item 1. Identify the Article as a Study of Diagnostic Accuracy (Recommend MeSH Heading "Sensitivity and Specificity")

Example (an excerpt from a structured abstract): "Purpose: To determine the sensitivity and specificity of computed tomographic colonography for colorectal polyp and cancer detection by using colonoscopy as the reference standard" (14). Electronic databases have become indispensable tools to identify studies. To facilitate retrieval of their study, authors should explicitly identify it as a report of a study of diagnostic accuracy. We recommend the use of the term diagnostic accuracy in the title or abstract of a report that compares the results of one or more index tests with the results of a reference standard. In 1991, the National Library of Medicine's MEDLINE database introduced a specific keyword (MeSH heading) for diagnostic studies: "Sensitivity and Specificity". Using this keyword to search for studies of diagnostic accuracy remains problematic (15-19). In a selected set of MEDLINE journals covering publications from 1992 through 1995, the use of the MeSH heading "Sensitivity and Specificity" identified only 51% of all studies of diagnostic accuracy and incorrectly identified many articles that were not reports of studies on diagnostic accuracy (18). In the example, the authors used the more general term "Performance Characteristics of CT Colonography" in the title. The purpose section of the structured abstract explicitly mentions sensitivity and specificity. The MEDLINE record for this paper contains the MeSH heading "Sensitivity and Specificity".

Item 2. State the Research Questions or Study Aims, Such as Estimating Diagnostic Accuracy or Comparing Accuracy between Tests or across Participant Groups

Example: "Invasive x-ray coronary angiography remains the gold standard for the identification of clinically significant coronary artery disease ... A noninvasive test would be desirable. Coronary magnetic resonance angiography performed while the patient is breathing freely has reached sufficient technical maturity to allow more widespread application with a standardized protocol. Therefore, we conducted a study to determine the [accuracy] of coronary magnetic resonance angiography in the diagnosis of native-vessel coronary artery disease" (20). The Helsinki Declaration states that biomedical research involving people should be based on a thorough knowledge of the scientific literature (21). In the introduction of scientific reports, authors describe the scientific background, previous work on the subject, the remaining uncertainty, and, hence, the rationale for their study. Clearly specified research questions help the readers to judge the appropriateness of the study design and data analysis. A single general description, such as "diagnostic value" or "clinical usefulness", is usually not very helpful to the readers. In the example, the authors use the introduction section of their paper to describe the potential of coronary magnetic resonance angiography as a non-invasive alternative to conventional x-ray angiography in the diagn...
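
The excerpt above lists the usual ways of expressing diagnostic accuracy. As a minimal sketch with hypothetical 2x2 counts (not data from any study cited here), the paired measures can be computed directly from a cross-classification of index-test results against the reference standard:

```python
# Minimal sketch: common diagnostic accuracy measures computed from a
# hypothetical 2x2 cross-classification of index-test results against the
# reference standard. The counts are illustrative, not from any cited study.

tp, fp, fn, tn = 90, 30, 10, 170   # hypothetical cell counts

sensitivity = tp / (tp + fn)                    # P(index test + | condition present)
specificity = tn / (tn + fp)                    # P(index test - | condition absent)
lr_positive = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_negative = (1 - sensitivity) / specificity   # negative likelihood ratio
dor = lr_positive / lr_negative                 # diagnostic odds ratio = (tp*tn)/(fp*fn)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"LR+={lr_positive:.2f}, LR-={lr_negative:.2f}, DOR={dor:.1f}")
```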


Annals of Internal Medicine | 2008

Systematic reviews of diagnostic test accuracy

Mariska M.G. Leeflang; Jonathan J Deeks; Constantine Gatsonis; Patrick M. Bossuyt

Diagnosis is a critical component of health care, and clinicians, policymakers, and patients routinely face a range of questions regarding diagnostic tests. They want to know whether testing improves outcome; what test to use, purchase, or recommend in practice guidelines; and how to interpret test results. Well-designed diagnostic test accuracy studies can help in making these decisions, provided that they transparently and fully report their participants, tests, methods, and results as facilitated, for example, by the STARD (Standards for Reporting of Diagnostic Accuracy) statement (1). That 25-item checklist was published in many journals and is now adopted by more than 200 scientific journals worldwide. As in other areas of science, systematic reviews and meta-analysis of accuracy studies can be used to obtain more precise estimates when small studies addressing the same test and patients in the same setting are available. Reviews can also be useful to establish whether and how scientific findings vary by particular subgroups, and may provide summary estimates with a stronger generalizability than estimates from a single study. Systematic reviews may help identify the risk for bias that may be present in the original studies and can be used to address questions that were not directly considered in the primary studies, such as comparisons between tests. The Cochrane Collaboration is the largest international organization preparing, maintaining, and promoting systematic reviews to help people make well-informed decisions about health care (2). The Collaboration decided in 2003 to make preparations for including systematic reviews of diagnostic test accuracy in their Cochrane Database of Systematic Reviews. To enable this, a working group (Appendix) was formed to develop methodology, software, and a handbook. The first diagnostic test accuracy review was published in the Cochrane Database in October 2008. In this paper, we review recent methodological developments concerning problem formulation, location of literature, quality assessment, and meta-analysis of diagnostic accuracy studies by using our experience from the work on the Cochrane Handbook. The information presented here is based on the recent literature and updates previously published guidelines by Irwig and colleagues (3).

Definition of the Objectives of the Review

Diagnostic test accuracy refers to the ability of a test to distinguish between patients with disease (or, more generally, a specified target condition) and those without. In a study of test accuracy, the results of the test under evaluation, the index test, are compared with those of the reference standard determined in the same patients. The reference standard is an agreed-on and accurate method for identifying patients who have the target condition. Test results are typically categorized as positive or negative for the target condition. By using such binary test outcomes, the accuracy is most often expressed as the test's sensitivity (the proportion of patients with positive results on the reference standard that are also positive on the index test) and specificity (the proportion of patients with negative results on the reference standard that are also negative on the index test). Other measures have been proposed and are in use (4-6). It has long been recognized that test accuracy is not a fixed property of a test. It can vary between patient subgroups, with their spectrum of disease, with the clinical setting, or with the test interpreters, and may depend on the results of previous testing. For this reason, inclusion of these elements in the study question is essential. In order to make a policy decision to promote use of a new index test, evidence is required that using the new test increases test accuracy over other testing options, including current practice, or that the new test has equivalent accuracy but offers other advantages (7-9). As with the evaluation of interventions, systematic reviews need to include comparative analyses between alternative testing strategies and should not focus solely on evaluating the performance of a test in isolation. In relation to the existing situation, 3 possible roles for a new test can be defined: replacement, triage, and add-on (7). If a new test is to replace an existing test, then comparing the accuracy of both tests on the same population and with the same reference standard provides the most direct evidence. In triage, the new test is used before the existing test or testing pathway, and only patients with a particular result on the triage test continue the testing pathway. When a test is needed to rule out disease in patients who then need no further testing, a test that gives a minimal proportion of false-negative results and thus a relatively high sensitivity should be used. Triage tests may be less accurate than existing ones, but they have other advantages, such as simplicity or low cost. A third possible role of a new test is add-on. The new test is then positioned after the existing testing pathway to identify false-positive or false-negative results after the existing pathway. The review should provide data to assess the incremental change in accuracy made by adding the new test. An example of a replacement question can be found in a systematic review of the diagnostic accuracy of urinary markers for primary bladder cancer (10). Clinicians may use cytology to triage patients before they undergo invasive cystoscopy, the reference standard for bladder cancer. Because cytology combines high specificity with low sensitivity (11), the goal of the review was to identify a tumor marker with sufficient accuracy to either replace cytology or be used in addition to cytology. For a marker to replace cytology, it has to achieve equally high specificity with improved sensitivity. New markers that are sensitive but not specific may have roles as adjuncts to conventional testing. The review included studies in which the test under evaluation (several different tumor markers and cytology) was evaluated against cystoscopy or histopathology. Included studies compared 1 or more of the markers, cytology only, or a combination of markers and cytology. Although information on accuracy can help clinicians make decisions about tests, good diagnostic accuracy is a desirable but not sufficient condition for the effectiveness of a test (8). To demonstrate that using a new test does more good than harm to patients tested, randomized trials of test-and-treatment strategies and reviews of such trials may be necessary. However, with the possible exception of screening, in most cases such randomized trials are not available, and systematic reviews of test accuracy may provide the most useful evidence available to guide clinical and health policy decision making and to use as input for decision and cost-effectiveness analysis (12).

Identification and Selection of Studies

Identifying test accuracy studies is more difficult than searching for randomized trials (13). There is not a clear, unequivocal keyword or indexing term for an accuracy study in literature databases comparable with the term "randomized controlled trial". The Medical Subject Heading "Sensitivity and Specificity" may look suitable but is inconsistently applied in most electronic bibliographic databases. Furthermore, data on diagnostic test accuracy may be hidden in studies that did not have test accuracy estimation as their primary objective. This complicates the efficient identification of diagnostic test accuracy studies in electronic databases, such as MEDLINE. Until indexing systems properly code studies of test accuracy, searching for them will remain challenging and may require additional manual searches, such as screening reference lists. In the development of a comprehensive search strategy, review authors can use search strings that refer to the test(s) under evaluation, the target condition, and the patient description, or a subset of these. For tests with a clear name that are used for a single purpose, searching for publications in which those tests are mentioned may suffice. For other reviews, adding the patient description may be necessary, although this is also often poorly indexed. A search strategy in MEDLINE should contain both Medical Subject Headings and free-text words. A search strategy for articles about tests for bladder cancer, for example, should include as many synonyms for bladder cancer as possible in the search strategy, including neoplasm, carcinoma, transitional cell, and hematuria. Several methodological electronic search filters for diagnostic test accuracy studies have been developed, each attempting to restrict the search to articles that are most likely to be test accuracy studies (13-16). These filters rely on indexing terms for research methodology and text words used in reporting results, but they often miss relevant studies and are unlikely to decrease the number of articles one needs to screen. Therefore, they are not recommended for systematic reviews (17, 18). The incremental value of searching in languages other than English and in the gray literature has not yet been fully investigated. In systematic reviews of intervention studies, publication bias is an important and well-studied form of bias in which the decision to report and publish studies is linked to their findings. For clinical trials, the magnitude and determinants of publication bias have been identified by tracing the publication history of cohorts of trials reviewed by ethics committees and research boards (19). A consistent observation has been that studies with significant results are more likely to be published than studies with nonsignificant findings (19). Investigating publication bias for diagnostic tests is problematic, because many studies are done without ethical review or study registration; therefore, identification of cohorts of studies from registration to final publication status i...
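
The per-study quantities such a review extracts are the 2x2 counts of index-test results against the reference standard. Below is a purely illustrative sketch with hypothetical counts; a real Cochrane review would pool studies with a bivariate or hierarchical summary-ROC random-effects model rather than reading studies one at a time:

```python
# Illustrative sketch only: per-study sensitivity and specificity with exact
# (Clopper-Pearson) 95% confidence intervals for hypothetical 2x2 counts.
# A Cochrane diagnostic test accuracy review would pool such studies with a
# bivariate or hierarchical summary-ROC random-effects model instead.
from scipy.stats import beta

def exact_ci(x, n, level=0.95):
    """Clopper-Pearson interval for a binomial proportion x out of n."""
    a = (1 - level) / 2
    lower = beta.ppf(a, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - a, x + 1, n - x) if x < n else 1.0
    return lower, upper

# Hypothetical per-study counts: (true positives, false positives,
# false negatives, true negatives)
studies = {"Study A": (45, 12, 5, 88), "Study B": (30, 20, 10, 140)}

for name, (tp, fp, fn, tn) in studies.items():
    sens, sens_ci = tp / (tp + fn), exact_ci(tp, tp + fn)
    spec, spec_ci = tn / (tn + fp), exact_ci(tn, tn + fp)
    print(f"{name}: sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f}), "
          f"specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```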


The New England Journal of Medicine | 2012

CT Angiography for Safe Discharge of Patients with Possible Acute Coronary Syndromes

Harold I. Litt; Constantine Gatsonis; Brad Snyder; Harjit Singh; Chadwick D. Miller; Daniel W. Entrikin; James M. Leaming; Laurence J. Gavin; Charissa Pacella; Judd E. Hollander

BACKGROUND Admission rates among patients presenting to emergency departments with possible acute coronary syndromes are high, although for most of these patients, the symptoms are ultimately found not to have a cardiac cause. Coronary computed tomographic angiography (CCTA) has a very high negative predictive value for the detection of coronary disease, but its usefulness in determining whether discharge of patients from the emergency department is safe is not well established. METHODS We randomly assigned low-to-intermediate-risk patients presenting with possible acute coronary syndromes, in a 2:1 ratio, to undergo CCTA or to receive traditional care. Patients were enrolled at five centers in the United States. Patients older than 30 years of age with a Thrombolysis in Myocardial Infarction risk score of 0 to 2 and signs or symptoms warranting admission or testing were eligible. The primary outcome was safety, assessed in the subgroup of patients with a negative CCTA examination, with safety defined as the absence of myocardial infarction and cardiac death during the first 30 days after presentation. RESULTS We enrolled 1370 subjects: 908 in the CCTA group and 462 in the group receiving traditional care. The baseline characteristics were similar in the two groups. Of 640 patients with a negative CCTA examination, none died or had a myocardial infarction within 30 days (0%; 95% confidence interval [CI], 0 to 0.57). As compared with patients receiving traditional care, patients in the CCTA group had a higher rate of discharge from the emergency department (49.6% vs. 22.7%; difference, 26.8 percentage points; 95% CI, 21.4 to 32.2), a shorter length of stay (median, 18.0 hours vs. 24.8 hours; P<0.001), and a higher rate of detection of coronary disease (9.0% vs. 3.5%; difference, 5.6 percentage points; 95% CI, 0 to 11.2). There was one serious adverse event in each group. CONCLUSIONS A CCTA-based strategy for low-to-intermediate-risk patients presenting with a possible acute coronary syndrome appears to allow the safe, expedited discharge from the emergency department of many patients who would otherwise be admitted. (Funded by the Commonwealth of Pennsylvania Department of Health and the American College of Radiology Imaging Network Foundation; ClinicalTrials.gov number, NCT00933400.).
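
The reported confidence interval for the zero-event primary outcome is consistent with an exact binomial (Clopper-Pearson) upper bound for 0 events among the 640 patients with a negative CCTA examination; a short check, offered as an illustration rather than the trial's prespecified method:

```python
# Exact binomial (Clopper-Pearson) upper bound for zero events, as a check
# on the reported "0% (95% CI, 0 to 0.57)" among 640 patients with a
# negative CCTA examination. Illustrative only.
n = 640
upper = 1 - 0.025 ** (1 / n)   # upper limit of a two-sided exact 95% CI when x = 0
print(f"Upper 95% bound for 0/{n}: {upper:.2%}")   # ~0.57%
```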


Clinical Chemistry | 2015

STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies

Patrick M. Bossuyt; Johannes B. Reitsma; David E. Bruns; Constantine Gatsonis; Paul Glasziou; Les Irwig; Jeroen G. Lijmer; David Moher; Drummond Rennie; Henrica C.W. de Vet; Herbert Y. Kressel; Nader Rifai; Robert M. Golub; Douglas G. Altman; Lotty Hooft; Daniël A. Korevaar; Jérémie F. Cohen

Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.


The New England Journal of Medicine | 1990

Comparison of magnetic resonance imaging and ultrasonography in staging early prostate cancer: Results of a multi-institutional cooperative trial

Matthew D. Rifkin; Elias A. Zerhouni; Constantine Gatsonis; Leslie E. Quint; David M. Paushter; Jonathan I. Epstein; Ulrike M. Hamper; Patrick C. Walsh; Barbara J. McNeil

Abstract Background. In 1987, a cooperative study group consisting of five institutions was formed to determine the relative benefits of magnetic resonance imaging (MRI) and endorectal (transrectal) ultrasonography in evaluating patients with clinically localized prostate cancer (stage Ta or Tb). Methods. Over a period of 15 months, 230 patients were entered into the study and evaluated with identical imaging techniques. We compared imaging results with information obtained at the time of surgery and on pathological analysis. Results. MRI correctly staged 77 percent of cases of advanced disease and 57 percent of cases of localized disease; the corresponding figures for ultrasonography were 66 and 46 percent (P not significant). These figures did not vary significantly between readers; moreover, simultaneous interpretation of MRI and ultrasound scans did not improve accuracy. In terms of detecting and localizing lesions, MRI identified only 60 percent of all malignant tumors measuring more than 5 mm on pat...


Journal of the American Statistical Association | 1997

Statistical methods for profiling providers of medical care: Issues and applications

Sharon-Lise T. Normand; Mark E. Glickman; Constantine Gatsonis

Abstract Recent public debate on costs and effectiveness of health care in the United States has generated a growing emphasis on “profiling” of medical care providers. The process of profiling involves comparing resource use and quality of care among medical providers to a community or a normative standard. This is valuable for targeting quality improvement strategies. For example, hospital profiles may be used to determine whether institutions deviate in important ways in the process of care they deliver. In this article we propose a class of performance indices to profile providers. These indices are based on posterior tail probabilities of relevant model parameters that indicate the degree of poor performance by a provider. We apply our performance indices to profile hospitals on the basis of 30-day mortality rates for a cohort of elderly heart attack patients. The analysis used data from 96 acute care hospitals located in one state and accounted for patient and hospital characteristics using a hierarc...
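
The performance index described in the abstract is a posterior tail probability. The sketch below is a heavily simplified, empirical-Bayes beta-binomial stand-in with simulated counts for 96 hospitals (the covariate-adjusted hierarchical regression of the paper is not reproduced); it only illustrates what "posterior probability that a hospital's mortality exceeds a threshold" means operationally:

```python
# Heavily simplified illustration, not the paper's model: an empirical-Bayes
# beta-binomial stand-in for a posterior tail-probability performance index,
# P(hospital-specific mortality > threshold | data), using simulated counts
# for 96 hospitals (the number of hospitals analyzed in the paper).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
n_patients = rng.integers(50, 400, size=96)     # hypothetical AMI cohort sizes
true_rates = rng.beta(18, 82, size=96)          # hypothetical 30-day mortality rates
deaths = rng.binomial(n_patients, true_rates)

# Crude common Beta prior by moment-matching the observed hospital rates
p = deaths / n_patients
m, v = p.mean(), p.var()
k = m * (1 - m) / v - 1
a0, b0 = m * k, (1 - m) * k

# Posterior for each hospital is Beta(a0 + deaths, b0 + survivors); the
# performance index is the posterior probability of exceeding a threshold,
# here 1.25 times the overall mean rate.
threshold = 1.25 * m
tail_prob = beta.sf(threshold, a0 + deaths, b0 + (n_patients - deaths))

worst = np.argsort(tail_prob)[::-1][:5]
for i in worst:
    print(f"Hospital {i}: P(rate > {threshold:.3f}) = {tail_prob[i]:.2f}")
```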


Radiology | 2008

Diagnostic Accuracy of Digital versus Film Mammography: Exploratory Analysis of Selected Population Subgroups in DMIST

Etta D. Pisano; R. Edward Hendrick; Martin J. Yaffe; Janet K. Baum; Suddhasatta Acharyya; Jean Cormack; Lucy A. Hanna; Emily F. Conant; Laurie L. Fajardo; Lawrence W. Bassett; Carl J. D'Orsi; Roberta A. Jong; Murray Rebner; Anna N. A. Tosteson; Constantine Gatsonis

PURPOSE To retrospectively compare the accuracy of digital versus film mammography in population subgroups of the Digital Mammographic Imaging Screening Trial (DMIST) defined by combinations of age, menopausal status, and breast density, by using either biopsy results or follow-up information as the reference standard. MATERIALS AND METHODS DMIST included women who underwent both digital and film screening mammography. Institutional review board approval at all participating sites and informed consent from all participating women in compliance with HIPAA were obtained for DMIST and this retrospective analysis. Areas under the receiver operating characteristic curve (AUCs) for each modality were compared within each subgroup evaluated (age <50 vs 50-64 vs ≥65 years, dense vs nondense breasts at mammography, and pre- or perimenopausal vs postmenopausal status for the two younger age cohorts [10 new subgroups in toto]) while controlling for multiple comparisons (P < .002 indicated a significant difference). All DMIST cancers were evaluated with respect to mammographic detection method (digital vs film vs both vs neither), mammographic lesion type (mass, calcifications, or other), digital machine type, mammographic and pathologic size and diagnosis, existence of prior mammographic study at time of interpretation, months since prior mammographic study, and compressed breast thickness. RESULTS Thirty-three centers enrolled 49,528 women. Breast cancer status was determined for 42,760 women, the group included in this study. Pre- or perimenopausal women younger than 50 years who had dense breasts at film mammography comprised the only subgroup for which digital mammography was significantly better than film (AUCs, 0.79 vs 0.54; P = .0015). Breast Imaging Reporting and Data System-based sensitivity in this subgroup was 0.59 for digital and 0.27 for film mammography. AUCs were not significantly different in any of the other subgroups. For women aged 65 years or older with fatty breasts, the AUC showed a nonsignificant tendency toward film being better than digital mammography (AUCs, 0.88 vs 0.70; P = .0025). CONCLUSION Digital mammography performed significantly better than film for pre- and perimenopausal women younger than 50 years with dense breasts, but film tended nonsignificantly to perform better for women aged 65 years or older with fatty breasts.
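
The comparison in DMIST is between areas under the ROC curve for the two modalities. A minimal sketch of computing an AUC from hypothetical probability-of-malignancy scores follows (illustrative only; it does not reproduce the trial's paired-design analysis or its multiple-comparison adjustment):

```python
# Minimal sketch: ROC AUC, the summary measure DMIST compared between digital
# and film mammography, computed here from hypothetical reader scores.
# Illustrative only; the trial's paired-design analysis and multiplicity
# control are not reproduced.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
cancer = rng.integers(0, 2, size=500)   # hypothetical truth from biopsy/follow-up

# Hypothetical probability-of-malignancy scores for the two modalities,
# with digital given slightly better separation for illustration
score_digital = np.clip(0.35 * cancer + rng.normal(0.4, 0.2, 500), 0, 1)
score_film = np.clip(0.20 * cancer + rng.normal(0.4, 0.2, 500), 0, 1)

print("AUC, digital:", round(roc_auc_score(cancer, score_digital), 2))
print("AUC, film:   ", round(roc_auc_score(cancer, score_film), 2))
```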


Journal of the American Academy of Child and Adolescent Psychiatry | 1993

Suicidal Behaviors and Childhood-Onset Depressive Disorders: A Longitudinal Investigation

Maria Kovacs; David B. Goldston; Constantine Gatsonis

In this longitudinal study, the rates and correlates of suicidal ideation and suicide attempts were determined among outpatient youths with depressive disorders and youths with other psychiatric disorders. At study entry, about 66% of the subjects evidenced suicidal ideation and 9% had already attempted suicide. The rate of ideation remained fairly stable over time, whereas the rate of attempts reached 24% by the average age of 17 years. Major depressive and dysthymic disorders were associated with significantly higher rates of suicidal behaviors than were adjustment disorder with depressed mood and nondepressive disorders. In the presence of affective disorders, comorbid conduct and/or substance use disorders further increased the risk of suicide attempts.

Collaboration


Dive into Constantine Gatsonis's collaborations.

Top Co-Authors

David Moher
Ottawa Hospital Research Institute

Etta D. Pisano
Medical University of South Carolina

Drummond Rennie
American Medical Association