Jay P. Siegel
Johnson & Johnson
Publications
Featured research published by Jay P. Siegel.
The New England Journal of Medicine | 2012
Roderick J. A. Little; Ralph B. D'Agostino; Michael L. Cohen; Kay Dickersin; Scott S. Emerson; John T. Farrar; Constantine Frangakis; Joseph W. Hogan; Geert Molenberghs; Susan A. Murphy; James D. Neaton; Andrea Rotnitzky; Daniel O. Scharfstein; Weichung J. Shih; Jay P. Siegel; Hal S. Stern
Missing data in clinical trials can have a major effect on the validity of the inferences that can be drawn from the trial. This article reviews methods for preventing missing data and, failing that, dealing with data that are missing.
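A minimal sketch of the kind of model-based multiple imputation the review covers, using scikit-learn's IterativeImputer on simulated trial data. This is not code from the article; all variable names and numbers are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),          # hypothetical 0/1 arm indicator
    "baseline": rng.normal(50.0, 10.0, n),       # hypothetical baseline score
    "outcome": rng.normal(45.0, 12.0, n),        # hypothetical final score
})
df.loc[rng.random(n) < 0.2, "outcome"] = np.nan  # ~20% missing outcomes

# Draw several completed datasets; only point estimates are averaged here
# (a full analysis would also pool variances with Rubin's rules).
effects = []
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
    treated = completed["treatment"] >= 0.5
    effects.append(completed.loc[treated, "outcome"].mean()
                   - completed.loc[~treated, "outcome"].mean())

print(f"Multiply imputed treatment-effect estimate: {np.mean(effects):.2f}")
```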
Statistics in Medicine | 2012
Roderick J. A. Little; M. L. Cohen; Kay Dickersin; Scott S. Emerson; John T. Farrar; James D. Neaton; Weichung Joe Shih; Jay P. Siegel; Hal S. Stern
This article summarizes recommendations on the design and conduct of clinical trials from a National Research Council study on missing data in clinical trials. Key findings of the study are that (a) substantial missing data is a serious problem that undermines the scientific credibility of causal conclusions from clinical trials; (b) the assumption that analysis methods can compensate for substantial missing data is not justified; hence (c) clinical trial design, including the choice of key causal estimands, the target population, and the length of the study, should include limiting missing data as one of its goals; (d) missing-data procedures should be discussed explicitly in the clinical trial protocol; (e) clinical trial conduct should take steps to limit the extent of missing data; (f) there is no universal method for handling missing data in the analysis of clinical trials, so methods should be justified on the plausibility of the underlying scientific assumptions; and (g) when alternative assumptions are plausible, sensitivity analysis should be conducted to assess the robustness of findings to these alternatives. This article focuses on the panel's recommendations on the design and conduct of clinical trials to limit missing data. A companion paper addresses the panel's findings on analysis methods.
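Recommendation (g) calls for sensitivity analysis under alternative assumptions. Below is a minimal tipping-point sketch in that spirit, assuming a continuous outcome and dropout only in the treated arm; the data and shift values are entirely hypothetical, not the panel's code. Missing treated-arm outcomes are imputed at increasingly unfavorable values until the conclusion flips.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 150)              # hypothetical control outcomes
treated = rng.normal(0.4, 1.0, 150)              # hypothetical treated outcomes
treated[rng.random(150) < 0.15] = np.nan         # 15% dropout in treated arm

observed = treated[~np.isnan(treated)]
n_missing = int(np.isnan(treated).sum())

for delta in np.arange(0.0, 2.01, 0.25):
    # Impute dropouts at the observed mean shifted downward by delta.
    imputed = np.full(n_missing, observed.mean() - delta)
    full = np.concatenate([observed, imputed])
    t, p = stats.ttest_ind(full, control)
    flag = "significant" if p < 0.05 else "NOT significant"
    print(f"delta={delta:.2f}  p={p:.4f}  ({flag})")
```

The smallest delta at which significance is lost is the tipping point; whether that shift is clinically plausible is then a scientific judgment, not a statistical one.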
Clinical Trials | 2004
David L. DeMets; Robert M. Califf; Dennis O. Dixon; Susan S. Ellenberg; Thomas R. Fleming; Peter Held; Desmond G. Julian; Richard S. Kaplan; Robert J. Levine; James D. Neaton; Milton Packer; Stuart J. Pocock; Frank Rockhold; Belinda Seto; Jay P. Siegel; Steve Snapinn; David C. Stump; Robert Temple; Richard J. Whitley
As clinical trials have emerged as the major research method for evaluating new interventions, the process for monitoring intervention safety and benefit has also evolved. The Data Monitoring Committee (DMC) has become the standard approach to implementing this responsibility for many Phase III trials. Recent draft guidelines from the Food and Drug Administration (FDA) on the operation of DMCs have raised issues that need further clarification or discussion, especially for industry-sponsored trials. These include when DMCs are needed, the role of the independent statistician who supports the DMC, and sponsor participation at DMC meetings. This paper provides an overview of these issues, based on discussions at the January 2003 workshop sponsored by the Duke Clinical Research Institute.
Clinical Trials | 2005
Norris E Alderson; Gregory Campbell; Ralph B. D'Agostino; Susan S. Ellenberg; Stacy Lindborg; Robert T. O'Neill; Don Rubin; Jay P. Siegel
Dr Alderson: I am Norris Alderson. I am Associate Commissioner for Science at FDA, and I had the privilege of working with a Planning Committee to arrange for this workshop. I was the only non-statistician in the group, I must tell you, and this was really an experience for me. I want to introduce this panel because I think it is unique in that it has representatives from FDA, industry, and academia. They are Dr Susan Ellenberg from CBER, Dr Jay Siegel from Centocor, Professor Don Rubin from Harvard University, Dr Gregory Campbell from CBER, Dr Stacy Lindborg from Eli Lilly, Dr Robert O'Neill from CDER, and Professor Ralph D'Agostino from Boston University. Our task when we set up this panel was to give this group the opportunity to summarize, from their perspective, what they have heard, and also to think about what is next. Speaking on behalf of the Planning Committee, we are interested in questions, as well as comments from the audience, on the use of Bayesian methods as a tool, particularly in the context of the critical path initiative, to reduce the time for review and approval of new public health products.
Clinical Trials | 2007
Jay P. Siegel
From a medical perspective, the potential to target therapies to subpopulations most likely to benefit and/or least likely to exhibit toxicity is very attractive. From a business perspective, the potential is increasingly recognized for targeting to benefit drug development through, inter alia, a greater probability of success, generation of intellectual property and an improved risk-benefit profile leading to product differentiation, improved market penetration and support of formulary and pricing decisions. The science and technology to allow targeting of therapies have rapidly advanced and continue to do so. The focus of this paper is on issues in developing therapies using genomic targeting, but the basic principles apply to the development of therapies using other approaches to targeting as well.

Given this potential of targeted therapy, many have predicted that we will soon see an explosion of targeted therapy and personalized medicine. However, to realize the full potential of targeted therapy, the approach to developing drugs must be appropriately adapted. In order to launch a new drug with a validated and commercially available pharmacogenomic-targeting diagnostic test, careful planning must begin during the drug discovery process. It is critical to understand key pathways involved in disease pathophysiology, and in drug pharmacodynamics, toxicity, pharmacokinetics and metabolism, in order to identify one or a few genes or markers of interest. A limited number of targeting hypotheses must then be identified. Because drug response may be a complex trait determined by multiple genes, each hypothesis may involve more than one gene. Also, if not already available, a rapid outcome measure (clinical or pharmacodynamic) appropriate for use in early evaluation of the targeting hypothesis must be identified. Utilizing this information, one must then plan clinical development in a manner that will lead to the identification of a central targeting hypothesis, confirmation of that hypothesis, and timely development and validation of a suitable test.

Two alternatives to this prospective approach have yielded limited success in developing genomic-targeted drugs: broad pharmacogenomic testing and informal approaches. Broad pharmacogenomic testing (testing hundreds or thousands of genes and exploring a nearly limitless number of gene combinations) can be of substantial value, for example, in understanding the drug, in identifying and later developing new indications and in developing new drugs for the same indication. However, it will rarely lead to successful and timely targeting of the drug under study. For broad testing of many genes to lead to targeting, one must accumulate and mine large amounts of data, generate hypotheses (typically many hypotheses emerge from such testing), test the hypotheses in new patient groups to narrow the hypotheses, confirm those results in new patients, and develop, standardize and validate diagnostic tests for commercial use. This process requires many serial studies, typically extending over a period well in excess of the typical drug development period. Even when specific genes of potential interest are identified and tested early in development, a successful approach to targeted development requires careful planning.
Lack of careful planning can lead to many pitfalls, including, at any stage of development, inadequate sample/study sizes to address pharmacogenomic hypotheses, inadequate sample handling or availability, acquisition of the wrong samples or incorrectly collected samples, uninterpretable data, irreproducible results, untimely data or inability to validate assay upscaling and improvement. Two of the critical aspects of well-planned development are highlighted in the following sections: planning for validation and evolution of the assay, and planning hypothesis-driven development.
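As a hypothetical illustration of what hypothesis-driven development can look like at the analysis stage (my sketch, not code from the paper), the following fits a single pre-specified treatment-by-marker interaction in a logistic model with statsmodels; the data and all variable names are simulated and illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),     # hypothetical randomized arm
    "marker_pos": rng.integers(0, 2, n),  # hypothetical genomic marker status
})
# Simulate benefit concentrated in marker-positive patients.
logit_p = -0.5 + 1.2 * df["treated"] * df["marker_pos"]
df["response"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

# The interaction term carries the targeting hypothesis: does treatment
# effect differ by marker status?
model = smf.logit("response ~ treated * marker_pos", data=df).fit(disp=0)
print(model.summary())
print(f"Interaction p-value: {model.pvalues['treated:marker_pos']:.4f}")
```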
Clinical Trials | 2013
David L. DeMets; Janet Wittes; Jay P. Siegel
Dan Sargent: I would recommend powering the trial based on just the treatment effect in the subgroup, but then enrolling patients regardless of biomarker status and, in secondary analyses, looking for efficacy more broadly. The optimal strategy depends on the prevalence of the marker. For a low-prevalence marker (5% or 10%), the aforementioned is probably not a good strategy (in that case an enrichment design is likely the only option), but for a moderate-prevalence marker (40% or 50%), I think such a strategy makes sense. This strategy can yield a signal of no activity from the biomarker-negative patients strong enough that the phase III strategy is clear, or conversely, it can demonstrate that the biomarker does not matter and that you should do an unselected phase III trial.
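The prevalence arithmetic behind this advice is easy to make concrete. A back-of-the-envelope sketch (my illustration, not Sargent's calculation): size the marker-positive comparison with a standard two-sample normal approximation, then scale total enrollment by 1/prevalence, since all comers are enrolled. The standardized effect of 0.5 is an assumption.

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
effect_size = 0.5  # assumed standardized effect in marker-positive patients

# Two-sided two-sample normal approximation: n per arm = 2 * (z / delta)^2
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = 2 * (z / effect_size) ** 2  # marker-positive patients per arm

for prevalence in (0.05, 0.10, 0.40, 0.50):
    total = 2 * n_per_arm / prevalence  # all-comers enrollment needed
    print(f"prevalence={prevalence:.0%}: ~{total:.0f} patients enrolled overall")
```

Under these assumptions, a 5% marker requires roughly 2,500 all-comers to get about 63 marker-positives per arm, while a 50% marker needs only about 250, which is why enrichment dominates at low prevalence.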
Drug Information Journal | 2012
Andrea C. Masciale; Patricia L. DeSantis; Jay P. Siegel
The Prescription Drug User Fee Act of 1992 (PDUFA) established pharmaceutical review performance goals and authorized the US Food and Drug Administration (FDA) to collect user fees in conjunction with pharmaceutical marketing applications. There have been 3 subsequent reauthorizations of PDUFA; the most recent, referred to as PDUFA IV, was enacted with the Food and Drug Administration Amendments Act of 2007. PDUFA IV is set to expire on September 30, 2012, and it is expected that another reauthorization (herein referred to as PDUFA V) will be enacted before PDUFA IV expires. Industry and FDA, with stakeholder input, have held technical discussions to develop and agree upon performance goals for PDUFA V, which are proposed for congressional consideration. The discussions took place amid concerns that drug approvals were taking longer under PDUFA IV than under previous PDUFA programs. This article presents an analysis of the FDA’s Center for Drug Evaluation and Research application approval data, assessing changes in time from submission to approval and identifying and addressing hypotheses regarding the causes of those changes. The analyses support the potential for the proposed goals and process changes in the PDUFA V agreement to lead to improvements in overall approval time.
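A hypothetical sketch of the kind of cohort comparison the article describes: median months from submission to approval, grouped by PDUFA era. The file name, column names, and cohort cut points are assumptions for illustration, not the authors' dataset.

```python
import pandas as pd

# Assumed input: one row per approved application with submission and
# approval dates (hypothetical file and schema).
apps = pd.read_csv("cder_approvals.csv",
                   parse_dates=["submission_date", "approval_date"])
apps["months_to_approval"] = (
    (apps["approval_date"] - apps["submission_date"]).dt.days / 30.44
)
apps["pdufa_cohort"] = pd.cut(
    apps["submission_date"].dt.year,
    bins=[1992, 1997, 2002, 2007, 2012],
    labels=["PDUFA I", "PDUFA II", "PDUFA III", "PDUFA IV"],
)
print(apps.groupby("pdufa_cohort", observed=True)["months_to_approval"]
          .agg(["median", "count"]))
```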
Clinical Trials | 2012
Susan S. Ellenberg; Bryan Luce; Thomas R. Fleming; Jay P. Siegel; Brian L. Strom; Miguel A. Hernán; Robert Temple; Dave Sackett; Cheryl Bourguignon; Justin E. Bekelman; Donald A. Berry; Matthew Rotelli; David Judkins; Sandy Schwartz; Steve Goodman
Susan Ellenberg: Our four distinguished panelists are going to comment on what they knew before they came here, what they have heard since they have been here or what they anticipate other people are going to say later on. Our distinguished panelists are Bryan Luce, United BioSource Corporation; Brian Strom, the George Pepper Professor of Public Health and Preventive Medicine at the University of Pennsylvania’s Perelman School of Medicine; Jay Siegel, Chief Biotechnology Officer and Head of Global Regulatory Affairs for Pharmaceuticals at Johnson and Johnson; and Tom Fleming, Professor of Biostatistics at the University of Washington.
Archive | 1997
Susan S. Ellenberg; Jay P. Siegel
The powerful methods for analyzing survival data (particularly censored survival data) that were introduced in the late 1960s and early 1970s represented major advances in the statistical assessment of such data. It is important to recognize, however, that these methods may not always be the most appropriate for addressing the fundamental study question in every situation in which time-to-event techniques could be applied. In this paper, several such situations are considered, and the particular implications for regulatory decision-making are discussed.
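For readers who want to see the machinery under discussion, here is a minimal sketch, assuming the lifelines package and simulated exponential event times (not code from the paper), of a Kaplan-Meier fit and a log-rank comparison under administrative censoring.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
t_ctrl = rng.exponential(10.0, 100)  # hypothetical event times, control arm
t_trt = rng.exponential(14.0, 100)   # hypothetical event times, treatment arm

# Administrative censoring at 24 months: record min(time, 24) and
# whether the event was actually observed.
obs_ctrl, e_ctrl = np.minimum(t_ctrl, 24.0), t_ctrl <= 24.0
obs_trt, e_trt = np.minimum(t_trt, 24.0), t_trt <= 24.0

kmf = KaplanMeierFitter()
kmf.fit(obs_trt, event_observed=e_trt, label="treatment")
print(f"Median survival (treatment): {kmf.median_survival_time_:.1f} months")

result = logrank_test(obs_ctrl, obs_trt,
                      event_observed_A=e_ctrl, event_observed_B=e_trt)
print(f"log-rank p-value: {result.p_value:.4f}")
```

The paper's point is precisely that a significant log-rank test like this one answers a question about hazard differences over time, which is not always the fundamental study question a regulator needs answered.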
The New England Journal of Medicine | 2002
Jay P. Siegel