Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Steven K. Cheng is active.

Publication


Featured research published by Steven K. Cheng.


JAMA Internal Medicine | 2013

Characteristics of Oncology Clinical Trials: Insights From a Systematic Analysis of ClinicalTrials.gov

Bradford R. Hirsch; Robert M. Califf; Steven K. Cheng; Asba Tasneem; John Horton; Karen Chiswell; Kevin A. Schulman; David M. Dilts; Amy P. Abernethy

IMPORTANCE Clinical trials are essential to cancer care, and data about the current state of research in oncology are needed to develop benchmarks and set the stage for improvement. OBJECTIVE To perform a comprehensive analysis of the national oncology clinical research portfolio. DESIGN All interventional clinical studies registered on ClinicalTrials.gov between October 2007 and September 2010 were identified using Medical Subject Heading terms and submitted conditions. They were reviewed to validate classification, subcategorized by cancer type, and stratified by design characteristics to facilitate comparison across cancer types and with other specialties. RESULTS Of 40 970 interventional studies registered between October 2007 and September 2010, a total of 8942 (21.8%) focused on oncology. Compared with other specialties, oncology trials were more likely to be single arm (62.3% vs 23.8%; P < .001), open label (87.8% vs 47.3%; P < .001), and nonrandomized (63.9% vs 22.7%; P < .001). There was moderate but significant correlation between number of trials conducted by cancer type and associated incidence and mortality (Spearman rank correlation coefficient, 0.56 [P = .04] and 0.77 [P = .001], respectively). More than one-third of all oncology trials were conducted solely outside North America. CONCLUSIONS AND RELEVANCE There are significant variations between clinical trials in oncology and other diseases, as well as among trials within oncology. The differences must be better understood to improve both the impact of cancer research on clinical practice and the use of constrained resources.
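
A moderate correlation like the one reported above can be computed with standard statistical tooling. The sketch below is illustrative only: it applies SciPy's spearmanr function to made-up per-cancer-type counts, not to the registry data analyzed in the study.

    # Illustrative sketch: Spearman rank correlation between the number of
    # registered trials per cancer type and that cancer type's incidence.
    # The counts below are hypothetical placeholders, not the study data.
    from scipy.stats import spearmanr

    trials_per_type = [820, 610, 450, 390, 240, 180, 150, 120, 90, 60]
    incidence_per_type = [230000, 225000, 150000, 95000, 60000, 55000,
                          42000, 30000, 22000, 12000]

    rho, p_value = spearmanr(trials_per_type, incidence_per_type)
    print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")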


Clinical Cancer Research | 2010

A Sense of Urgency: Evaluating the Link between Clinical Trial Development Time and the Accrual Performance of Cancer Therapy Evaluation Program (NCI-CTEP) Sponsored Studies

Steven K. Cheng; Mary S. Dietrich; David M. Dilts

Purpose: Postactivation barriers to oncology clinical trial accruals are well documented; however, potential barriers prior to trial opening are not. We investigate one such barrier: trial development time. Experimental Design: National Cancer Institute Cancer Therapy Evaluation Program (CTEP)–sponsored trials for all therapeutic, nonpediatric phase I, I/II, II, and III studies activated between 2000 and 2004 were investigated for an 8-year period (n = 419). Successful trials were those achieving 100% of minimum accrual goal. Time to open a study was the calendar time from initial CTEP submission to trial activation. Multivariate logistic regression analysis was used to calculate unadjusted and adjusted odds ratios (OR), controlling for study phase and size of expected accruals. Results: Among the CTEP-approved oncology trials, 37.9% (n = 221) failed to attain the minimum accrual goals, with 70.8% (n = 14) of phase III trials resulting in poor accrual. A total of 16,474 patients (42.5% of accruals) were accrued to studies that were unable to achieve the projected minimum accrual goal. Trials requiring less than 12 months of development were significantly more likely to achieve accrual goals (OR, 2.15; 95% confidence interval, 1.29-3.57; P = 0.003) than trials with the median development time of 12 to 18 months. Trials requiring a development time of greater than 24 months were significantly less likely to achieve accrual goals (OR, 0.40; 95% confidence interval, 0.20-0.78; P = 0.011) than trials with the median development time. Conclusions: A large percentage of oncology clinical trials do not achieve minimum projected accruals. Trial development time appears to be one important predictor of the likelihood of successfully achieving the minimum accrual goals. Clin Cancer Res; 16(22); 5557–63. ©2010 AACR.
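
As a rough illustration of the type of analysis described above (not the authors' code), adjusted odds ratios for achieving the minimum accrual goal can be estimated with a multivariate logistic regression. The file name, column names, and category labels below are hypothetical assumptions.

    # Illustrative sketch: adjusted odds ratios (OR) for achieving the minimum
    # accrual goal, controlling for study phase and expected accrual size.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Expected columns (hypothetical): achieved_goal (0/1), dev_time_cat,
    # phase, expected_accrual
    trials = pd.read_csv("ctep_trials.csv")

    # Reference category: the median development time of 12 to 18 months.
    model = smf.logit(
        "achieved_goal ~ C(dev_time_cat, Treatment('12-18 months')) "
        "+ C(phase) + expected_accrual",
        data=trials,
    ).fit()

    ci = model.conf_int()
    odds_ratios = pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
        "p_value": model.pvalues,
    })
    print(odds_ratios)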


Journal of Clinical Oncology | 2009

Steps and Time to Process Clinical Trials at the Cancer Therapy Evaluation Program

David M. Dilts; Alan Sandler; Steven K. Cheng; J. Crites; L. Ferranti; Amy Wu; Shanda Finnigan; Steven Friedman; Margaret Mooney; Jeffrey Abrams

PURPOSE To examine the processes and document the calendar time required for the National Cancer Institute's Cancer Therapy Evaluation Program (CTEP) and Central Institutional Review Board (CIRB) to evaluate and approve phase III clinical trials. METHODS Process steps were documented by (1) interviewing CTEP and CIRB staff regarding the steps required to activate a trial from initial concept submission to trial activation by a cooperative group, (2) reviewing standard operating procedures, and (3) inspecting trial records and documents for selected trials to identify any additional steps. Calendar time was collected from initial concept submission to activation using retrospective data from the CTEP Protocol and Information Office. RESULTS At least 296 distinct processes are required for phase III trial activation: at least 239 working steps, 52 major decision points, 20 processing loops, and 11 stopping points. Of the 195 trials activated during the January 1, 2000, to December 31, 2007, study period, a sample of 167 (85.6%) was used for gathering timing data. The median calendar time from initial formal concept submission to CTEP to trial activation by a cooperative group was 602 days (interquartile range, 454 to 861 days). This time has not changed significantly over the past 8 years, and the time required to activate a clinical trial is highly variable. CONCLUSION Because of this complexity, the overall development time for phase III clinical trials is lengthy, process laden, and highly variable. To streamline the process, a solution must be sought that includes all parties involved in developing trials.


Academic Medicine | 2011

The prevalence and economic impact of low-enrolling clinical studies at an academic medical center

Darlene Kitterman; Steven K. Cheng; David M. Dilts; Eric S. Orwoll

Purpose The authors assessed the prevalence and associated economic impact of low-enrolling clinical studies at a single academic medical center. Method The authors examined all clinical studies receiving institutional review board (IRB) review between FY2006 and FY2009 at Oregon Health & Science University (OHSU) for recruitment performance and analyzed them by type of IRB review (full-board, exempt, expedited), funding mechanism, and academic unit. Low-enrolling studies were defined as those with zero or one participant at the time of study termination. The authors calculated the costs associated with IRB review, financial setup, contract negotiation, and department study start-up activities, and the total economic impact on OHSU of low-enrolling studies for FY2009. Results A total of 837 clinical studies were terminated during the study period, 260 (31.1%) of which were low-enrolling. A greater proportion of low-enrolling studies were government funded than industry funded (P = .006). The authors found significant differences among the various academic units with respect to percentages of low-enrolling studies (from 10% to 67%). The uncompensated economic impact of low-enrolling studies was conservatively estimated to be nearly $1 million for FY2009. Conclusions A substantial proportion of clinical studies incurred high institutional and departmental expense but resulted in little scientific benefit. Although a certain percentage of low-enrolling studies can be expected in any research organization, the overall number of such studies must be managed to reduce the aggregate costs of conducting research and to maximize research opportunities. Effective, proactive interventions are needed to address the prevalence and impact of low enrollment.
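
For context, the funding-mechanism comparison reported above (P = .006) is the kind of result a simple test of independence on a 2x2 table yields. The counts below are invented for illustration and are not the OHSU data.

    # Illustrative sketch: chi-square test comparing the proportion of
    # low-enrolling studies between government- and industry-funded studies.
    # The 2x2 counts are hypothetical.
    from scipy.stats import chi2_contingency

    #                    low-enrolling, adequately enrolling
    government_funded = [90, 160]
    industry_funded = [70, 220]

    chi2, p_value, dof, expected = chi2_contingency([government_funded,
                                                     industry_funded])
    print(f"chi-square = {chi2:.2f}, P = {p_value:.4f}")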


Clinical Cancer Research | 2010

Phase III Clinical Trial Development: A Process of Chutes and Ladders

David M. Dilts; Steven K. Cheng; Joshua S. Crites; Alan Sandler; James H. Doroshow

Purpose: The Institute of Medicine report on cooperative groups and the National Cancer Institute (NCI) report from the Operational Efficiency Working Group both recommend changes to the processes for opening a clinical trial. This article provides evidence for the need for such changes by completing the first comprehensive review of all the time and steps required to open a phase III oncology clinical trial and discusses the effect of time to protocol activation on subject accrual. Methods: The Dilts and Sandler method was used at four cancer centers, two cooperative groups, and the NCI Cancer Therapy Evaluation Program. Accrual data were also collected. Results: Opening a phase III cooperative group therapeutic trial requires 769 steps, 36 approvals, and a median of approximately 2.5 years from formal concept review to study opening. Time to activation at one group ranged from 435 to 1,604 days, and time to open at one cancer center ranged from 21 to 836 days. At centers, group trials are significantly more likely to have zero accruals (38.8%) than nongroup trials (20.6%; P < 0.0001). Of the closed NCI Cancer Therapy Evaluation Program–approved phase III clinical trials from 2000 to 2007, 39.1% resulted in <21 accruals. Conclusions: The length, variability, and low accrual results demonstrate the need for the NCI clinical trials system to be reengineered. Improvements will be of only limited effectiveness if done in isolation; there is a need to return to the collaborative spirit with all parties creating an efficient and effective system. Recommendations put forth by the Institute of Medicine and Operational Efficiency Working Group reports, if implemented, will aid this renewal. Clin Cancer Res; 16(22); 5381–9. ©2010 AACR.


Clinical Cancer Research | 2008

Development of Clinical Trials in a Cooperative Group Setting: The Eastern Cooperative Oncology Group

David M. Dilts; Alan Sandler; Steven K. Cheng; J. Crites; L. Ferranti; Amy Wu; Robert Gray; Jean MacDonald; Donna Marinucci; Robert L. Comis

Purpose: We examine the processes and document the calendar time required to activate phase II and III clinical trials by an oncology group: the Eastern Cooperative Oncology Group (ECOG). Methods: Setup steps were documented by (a) interviewing ECOG headquarters and statistical center staff and committee chairs, (b) reviewing standard operating procedure manuals, and (c) inspecting study records, documents, and e-mails to identify additional steps. Calendar time was collected for each major process for each study in this set. Results: Twenty-eight phase III studies were activated by ECOG during the January 2000 to July 2006 study period. We examined a sample of 16 of those studies in detail. More than 481 distinct processes were required for study activation: 420 working steps, 61 major decision points, 26 processing loops, and 13 stopping points. The median calendar time to activate a trial in the phase III subset was 783 days (range, 285-1,542 days) from executive approval and 808 days (range, 435-1,604 days) from initial conception of the study. Data were collected for all phase II and phase III trials activated and completed during this time period (n = 52), for which development time represented 43.9% and 54.1% of the total trial time, respectively. Conclusion: The steps required to develop and activate a clinical trial may require as much time as, or more than, the actual completion of a trial. The data show that, to improve the activation process, research should be directed toward streamlining both internal and external groups and processes.


Clinical Cancer Research | 2012

The importance of doing trials right while doing the right trials.

David M. Dilts; Steven K. Cheng

Effort is being expended in investigating efficiency measures (i.e., doing trials right) through achievement of accrual and endpoint goals for clinical trials. It is time to assess the impact of such trials on meeting the critical needs of cancer patients by establishing effectiveness measures (i.e., doing the right trials). Clin Cancer Res; 18(1); 3–5. ©2011 AACR.


Translational Research in Biomedicine | 2013

Building expertise in translational processes through partnerships with schools of business

Steven K. Cheng; David M. Dilts

Translational research in medicine is facing burdens stemming from an increase in the complexity of science, an increase in partnerships across national and international collaborations, and a reduction in the finite resources available to support all research endeavors. Schools of business offer unique perspectives on translational processes because they address global challenges through research and teaching to transform ideas into successful practice-changing innovations. While there are multiple approaches to investigating translational processes using business management tools, this chapter focuses on three representative lenses: (1) process flows for mass customization, (2) the knowledge supply chain, and (3) strategic management. Each lens has the potential to significantly streamline translational processes in healthcare for efficiency and effectiveness.


Journal of Clinical Oncology | 2012

Where you stand depends on where you sit: Alignment of quality measures to strategic intentions.

David M. Dilts; Christopher M. Peters; Steven K. Cheng; Steven Stadum

Background: Identifying quality measurements for discovering and delivering effective cancer care is dependent on the strategic vision of the cancer center and its respective units; yet each unit encompasses a distinctive set of competencies. We propose a framework for evaluating and structuring the alignment of quality and unit strategy measures within the overall vision of a cancer center. Methods: Prospective analysis was conducted with structured interviews across leadership and key opinion leaders (1) at the executive level, (2) within cancer center units (including basic science, cores, clinical research, and oncology service lines), and (3) across key hospital and research integration points. Quality and strategic areas reviewed were success measures, strategic focus, competitive dimensions, key strategies, and significant barriers. Risk of bias was accounted for through interrater reliability across the consensus of three independent reviewers. Measures of consistency were quantitatively assessed through triangulation of multiple perspectives based on roles and responsibilities. Results: Sixty-one individuals were interviewed, generating 875 responses across the five areas. Responses were grouped into 303 categories and then into 32 specific perspectives. These perspectives were linked with each of the five quality and strategic categories. Assessment and prioritization of quality and strategic measures were calculated based on tabulation of responses. Conclusions: While the overarching strategy is well understood throughout the cancer center, quality and strategic measures must be developed to integrate the activities of each functional unit across the cancer center in support of overall goals. The framework for aligning quality measures to strategy helps centers develop, align, and prioritize quality measures across the institution.
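
The interrater-reliability step described above could, for example, be summarized with an agreement statistic such as Fleiss' kappa; the sketch below uses fabricated ratings from three reviewers and is not drawn from the study.

    # Illustrative sketch: agreement among three independent reviewers assigning
    # interview responses to categories, summarized with Fleiss' kappa.
    # The ratings matrix is a hypothetical placeholder.
    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # rows = interview responses; columns = category chosen by each of 3 reviewers
    ratings = np.array([
        [0, 0, 0],
        [1, 1, 2],
        [2, 2, 2],
        [0, 1, 0],
        [1, 1, 1],
        [2, 0, 2],
    ])

    table, _ = aggregate_raters(ratings)  # per-response counts of reviewers per category
    print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")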


Clinical Cancer Research | 2011

Clinical Trial Development as a Predictor of Accrual Performance—Response

Steven K. Cheng; David M. Dilts


Collaboration


Dive into Steven K. Cheng's collaborations.

Top Co-Authors

Amy Wu (Vanderbilt University)
J. Crites (Vanderbilt University)