Publication


Featured research published by Tony Tse.


The New England Journal of Medicine | 2011

The ClinicalTrials.gov Results Database — Update and Key Issues

Deborah A. Zarin; Tony Tse; Rebecca J. Williams; Robert M. Califf; Nicholas C. Ide

BACKGROUND The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update of relevant policies, and show how the data can be used to gain insight into the state of clinical research. METHODS We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010. RESULTS As of September 27, 2010, ClinicalTrials.gov received approximately 330 new and 2000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2178 results records available as of September 2010. From a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting. Of results records available publicly, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry record outcome measures, 61% lacked specificity in describing the metric used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 (median, 1; range, 1 to 25). Of these trials, 24% reported results for 90% or less of their participants. CONCLUSIONS ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community to submit accurate, informative data.
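
The kind of tabulation reported above (e.g., the share of results records listing more than two or more than five primary outcome measures) can be illustrated with a short sketch. This is a hypothetical example, not the authors' analysis code; the record structure and field names (such as primary_outcome_measures) are invented for illustration.

```python
def summarize_primary_outcomes(results_records):
    """Share of results records reporting more than two / more than five primary outcome measures."""
    counts = [len(r.get("primary_outcome_measures", [])) for r in results_records]
    n = len(counts)
    if n == 0:
        return {"more_than_two": 0.0, "more_than_five": 0.0}
    return {
        "more_than_two": sum(c > 2 for c in counts) / n,   # fraction with >2 primary outcomes
        "more_than_five": sum(c > 5 for c in counts) / n,  # fraction with >5 primary outcomes
    }

# Toy records, not real trial data:
records = [
    {"nct_id": "NCT00000001", "primary_outcome_measures": ["HbA1c at 12 weeks"]},
    {"nct_id": "NCT00000002", "primary_outcome_measures": ["OS", "PFS", "ORR"]},
]
print(summarize_primary_outcomes(records))
```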


BMJ | 2012

Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis

Joseph S. Ross; Tony Tse; Deborah A. Zarin; Hui Xu; Lei Zhou; Harlan M. Krumholz

Objective To review patterns of publication of clinical trials funded by US National Institutes of Health (NIH) in peer reviewed biomedical journals indexed by Medline. Design Cross sectional analysis. Setting Clinical trials funded by NIH and registered within ClinicalTrials.gov (clinicaltrials.gov), a trial registry and results database maintained by the US National Library of Medicine, after 30 September 2005 and updated as having been completed by 31 December 2008, allowing at least 30 months for publication after completion of the trial. Main outcome measures Publication and time to publication in the biomedical literature, as determined through Medline searches, the last of which was performed in June 2011. Results Among 635 clinical trials completed by 31 December 2008, 294 (46%) were published in a peer reviewed biomedical journal, indexed by Medline, within 30 months of trial completion. The median period of follow-up after trial completion was 51 months (25th-75th centiles 40-68 months), and 432 (68%) were published overall. Among published trials, the median time to publication was 23 months (14-36 months). Trials completed in either 2007 or 2008 were more likely to be published within 30 months of study completion compared with trials completed before 2007 (54% (196/366) v 36% (98/269); P<0.001). Conclusions Despite recent improvement in timely publication, fewer than half of trials funded by NIH are published in a peer reviewed biomedical journal indexed by Medline within 30 months of trial completion. Moreover, after a median of 51 months after trial completion, a third of trials remained unpublished.
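
As a rough illustration of the time-to-publication tabulation this abstract describes, the sketch below computes the share of trials published within 30 months of completion and the median time to publication. The trial dates are made up for illustration; this is not the authors' analysis code or data.

```python
from datetime import date
from statistics import median

def months_between(start, end):
    """Approximate number of months between two dates."""
    return (end - start).days / 30.44

# (completion date, publication date or None if unpublished) -- hypothetical values
trials = [
    (date(2007, 3, 1), date(2009, 1, 15)),
    (date(2008, 6, 1), None),
    (date(2006, 11, 1), date(2010, 8, 30)),
]

published = [(c, p) for c, p in trials if p is not None]
n_within_30 = sum(1 for c, p in published if months_between(c, p) <= 30)

print(f"published within 30 months of completion: {n_within_30}/{len(trials)}")
if published:
    med = median(months_between(c, p) for c, p in published)
    print(f"median months to publication (published trials): {med:.1f}")
```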


Chest | 2009

Reporting “Basic Results” in ClinicalTrials.gov

Tony Tse; Rebecca J. Williams; Deborah A. Zarin

Growing awareness of selective publication of research studies (“publication bias”) and the selective reporting of outcomes in publications (“outcome reporting bias”)1 has led policymakers to call for increased “clinical trial transparency” through the public disclosure of key information about clinical trials.2 A US federal law3 enacted in 2007 mandates registration and results reporting for certain clinical trials of drugs, biological products, and devices, regardless of study funding source, at ClinicalTrials.gov, an online registry and results database operated by the National Library of Medicine of the National Institutes of Health (NIH). Table 1 summarizes the goals of the results database. The theoretical benefits and limitations of non-peer-reviewed public results databases have been discussed elsewhere.4–6 This article summarizes the current requirements for reporting results and describes how to submit results data elements to the ClinicalTrials.gov results database. These tips for creating clear, understandable entries may be kept as a resource for use when preparing to enter results data for a clinical trial. A previous “Medical Writing Tips” article focused on trial registration.7
Table 1. Results Database Purposes for Various Groups


Canadian Medical Association Journal | 2010

Registration of observational studies: Is it time?

Rebecca J. Williams; Tony Tse; William R. Harlan; Deborah A. Zarin

Observational studies form an important part of the medical evidence base, particularly for assessing rare adverse events and long-term effectiveness of medications and devices.[1] However, observational studies, like interventional studies (clinical trials), are subject to publication bias and …


The New England Journal of Medicine | 2015

The Proposed Rule for U.S. Clinical Trial Registration and Results Submission

Deborah A. Zarin; Tony Tse; Jerry Sheehan

This article calls attention to an open public comment period regarding a proposed new rule from the Department of Health and Human Services that affects clinical trial registration and results reporting.


JAMA Internal Medicine | 2013

Time to publication among completed clinical trials.

Joseph S. Ross; Marian Mocanu; Julianna F. Lampropulos; Tony Tse; Harlan M. Krumholz

To the Editor: Prior studies have shown that 25% to 50% of clinical trials are never published.1–4 However, among those published, we know little about the length of time required for publication in the peer-reviewed biomedical literature after study completion. Ioannidis previously demonstrated that a sample of randomized phase 2 and 3 trials conducted between 1986 and 1996 required nearly 2.5 years for publication,5 while our more recent study of National Institutes of Health (NIH)-funded trials found that the average time to publication was almost two years.4 We sought to determine time to publication for a recent and representative sample of trials published in 2009.


PLOS ONE | 2015

Terminated Trials in the ClinicalTrials.gov Results Database: Evaluation of Availability of Primary Outcome Data and Reasons for Termination

Rebecca J. Williams; Tony Tse; Katelyn DiPiazza; Deborah A. Zarin

Background Clinical trials that end prematurely (or “terminate”) raise financial, ethical, and scientific concerns. The extent to which the results of such trials are disseminated and the reasons for termination have not been well characterized. Methods and Findings A cross-sectional, descriptive study of terminated clinical trials posted on the ClinicalTrials.gov results database as of February 2013 was conducted. The main outcomes were to characterize the availability of primary outcome data on ClinicalTrials.gov and in the published literature and to identify the reasons for trial termination. Approximately 12% of trials with results posted on the ClinicalTrials.gov results database (905/7,646) were terminated. Most trials were terminated for reasons other than accumulated data from the trial (68%; 619/905), with an insufficient rate of accrual being the lead reason for termination among these trials (57%; 350/619). Of the remaining trials, 21% (193/905) were terminated based on data from the trial (findings of efficacy or toxicity) and 10% (93/905) did not specify a reason. Overall, data for a primary outcome measure were available on ClinicalTrials.gov and in the published literature for 72% (648/905) and 22% (198/905) of trials, respectively. Primary outcome data were reported on the ClinicalTrials.gov results database and in the published literature more frequently (91% and 46%, respectively) when the decision to terminate was based on data from the trial. Conclusions Trials terminate for a variety of reasons, not all of which reflect failures in the process or an inability to achieve the intended goals. Primary outcome data were reported most often when termination was based on data from the trial. Further research is needed to identify best practices for disseminating the experience and data resulting from terminated trials in order to help ensure maximal societal benefit from the investments of trial participants and others involved with the study.
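
A minimal sketch of the kind of grouping reported above: counting reasons for termination and tallying availability of posted primary outcome data by termination category. The records, field names, and category labels are hypothetical and only illustrate the shape of such an analysis.

```python
from collections import Counter, defaultdict

# Hypothetical terminated-trial records (field names are illustrative, not the ClinicalTrials.gov schema)
terminated_trials = [
    {"reason": "insufficient accrual", "data_driven": False, "primary_outcome_posted": True},
    {"reason": "efficacy or toxicity finding", "data_driven": True, "primary_outcome_posted": True},
    {"reason": None, "data_driven": False, "primary_outcome_posted": False},
]

# Count stated reasons for termination
reason_counts = Counter(t["reason"] or "not specified" for t in terminated_trials)

# Availability of primary outcome data, split by whether termination was based on trial data
posted_by_group = defaultdict(lambda: [0, 0])  # group -> [posted, total]
for t in terminated_trials:
    group = "data-driven" if t["data_driven"] else "other or unspecified"
    posted_by_group[group][0] += int(t["primary_outcome_posted"])
    posted_by_group[group][1] += 1

print(reason_counts)
for group, (posted, total) in posted_by_group.items():
    print(f"{group}: {posted}/{total} with primary outcome data posted")
```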


The New England Journal of Medicine | 2017

Update on Trial Registration 11 Years after the ICMJE Policy Was Established.

Deborah A. Zarin; Tony Tse; Rebecca J. Williams; Thiyagu Rajakannan

More than a decade ago, the International Committee of Medical Journal Editors established a policy requiring trial registration. This article reviews the current state of this endeavor.


International Conference on Biological and Medical Data Analysis | 2005

A text corpora-based estimation of the familiarity of health terminology

Qing T. Zeng; Eunjung Kim; Jonathan Crowell; Tony Tse

In a pilot effort to improve health communication, we created a method for measuring the familiarity of various medical terms. To obtain term familiarity data, we recruited 21 volunteers who agreed to take medical terminology quizzes containing 68 terms. We then created predictive models for familiarity based on term occurrence in text corpora and readers' demographics. Although the sample size was small, our preliminary results indicate that predicting the familiarity of medical terms based on an analysis of the frequency in text corpora is feasible. Further, individualized familiarity assessment is feasible when demographic features are included as predictors.
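
The prediction idea sketched in this abstract, estimating whether a reader knows a term from its corpus frequency plus reader demographics, can be illustrated with a toy logistic regression. The data points, feature choices, and model below are assumptions for illustration only, not the authors' actual model or data.

```python
import math
from sklearn.linear_model import LogisticRegression

# Features: [log corpus frequency, reader years of education]; labels: quiz answer correct (1) or not (0).
# All values are invented for illustration.
X = [
    [math.log(50000), 12],   # very common term, e.g. "fever"
    [math.log(40000), 16],
    [math.log(120), 12],
    [math.log(15), 16],
    [math.log(8), 16],
    [math.log(3), 12],       # rare in general-language corpora
]
y = [1, 1, 1, 1, 0, 0]

model = LogisticRegression().fit(X, y)

# Estimated probability that a reader with 12 years of education knows a term seen 100 times in the corpus
print(model.predict_proba([[math.log(100), 12]])[0, 1])
```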


PLOS Medicine | 2016

Sharing Individual Participant Data (IPD) within the Context of the Trial Reporting System (TRS)

Deborah A. Zarin; Tony Tse

Deborah Zarin and Tony Tse of ClinicalTrials.gov consider how sharing individual participant data can and cannot help improve the reporting of clinical trials.

Collaboration


Dive into Tony Tse's collaborations.

Top Co-Authors

Deborah A. Zarin (National Institutes of Health)
Nicholas C. Ide (National Institutes of Health)
Rebecca J. Williams (National Institutes of Health)
Alla Keselman (National Institutes of Health)
Allen C. Browne (National Institutes of Health)
Graciela Rosemblat (National Institutes of Health)
Donald A. B. Lindberg (National Institutes of Health)