
Publication


Featured research published by Wenle Zhao.


Hepatology | 2014

Detection of anti‐isoniazid and anti–cytochrome P450 antibodies in patients with isoniazid‐induced liver failure

Imir G. Metushi; Corron Sanders; Wei-Chen Lee; Anne M. Larson; Iris Liou; Timothy J. Davern; Oren K. Fix; Michael L. Schilsky; Timothy M. McCashland; J. Eileen Hay; Natalie Murray; A. Obaid S Shaikh; Andres T. Blei; Daniel Ganger; Atif Zaman; Steven Han; Robert J. Fontana; Brendan M. McGuire; Raymond T. Chung; Alastair D. Smith; Robert S. Brown; Jeffrey S. Crippin; Edwin Harrison; Adrian Reuben; Santiago Munoz; Rajender Reddy; R. Todd Stravitz; Lorenzo Rossaro; Raj Satyanarayana; Tarek Hassanein

Isoniazid (INH)‐induced hepatotoxicity remains one of the most common causes of drug‐induced idiosyncratic liver injury and liver failure. This form of liver injury is not believed to be immune‐mediated because it is not usually associated with fever or rash, does not recur more rapidly on rechallenge, and previous studies have failed to identify anti‐INH antibodies (Abs). In this study, we found Abs present in sera of 15 of 19 cases of INH‐induced liver failure. Anti‐INH Abs were present in 8 sera; 11 had anti–cytochrome P450 (CYP)2E1 Abs, 14 had Abs against CYP2E1 modified by INH, 14 had anti‐CYP3A4 Abs, and 10 had anti‐CYP2C9 Abs. INH was found to form covalent adducts with CYP2E1, CYP3A4, and CYP2C9. None of these Abs were detected in sera from INH‐treated controls without significant liver injury. A range of antidrug and autoimmune Abs has been observed in other forms of drug‐induced liver injury that are presumed to be immune mediated. Conclusion: These data provide strong evidence that INH induces an immune response that causes INH‐induced liver injury. (Hepatology 2014;59:1084–1093)


Pharmaceutical Statistics | 2012

Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness

Wenle Zhao; Yanqiu Weng; Qi Wu; Yuko Y. Palesch

To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures of imbalance and three measures of randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, the performances of the 14 randomization designs fall within a closed region whose upper boundary (worst case) is given by Efron's biased coin design (BCD) and whose lower boundary (best case) is given by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide smaller imbalance and higher randomness than designs close to the upper boundary. Our results suggest that randomization designs can be optimized through quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popular permuted block design, Efron's BCD, and Wei's urn design.
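The imbalance-randomness trade-off described above can be illustrated with a short simulation. The sketch below is not the paper's code; the design parameters (big stick boundary b = 3, Efron biasing probability p = 2/3) and the "guess the lagging arm" strategy used to estimate the correct guess probability are illustrative assumptions.

```python
import random

def big_stick(n, b=3):
    """Big stick design (Soares and Wu): simple randomization unless the
    absolute imbalance reaches the boundary b, then force the lagging arm."""
    d, seq = 0, []
    for _ in range(n):
        if d >= b:
            a = -1                        # force treatment B
        elif d <= -b:
            a = 1                         # force treatment A
        else:
            a = random.choice([1, -1])
        d += a
        seq.append(a)
    return seq

def efron_bcd(n, p=2/3):
    """Efron's biased coin design: favor the lagging arm with probability p."""
    d, seq = 0, []
    for _ in range(n):
        if d == 0:
            a = random.choice([1, -1])
        elif random.random() < p:
            a = -1 if d > 0 else 1        # lagging arm
        else:
            a = 1 if d > 0 else -1
        d += a
        seq.append(a)
    return seq

def evaluate(design, n=50, reps=20000):
    """Estimate the maximum absolute imbalance and the correct guess (CG)
    probability under the 'guess the lagging arm' strategy."""
    worst, correct = 0, 0
    for _ in range(reps):
        d = 0
        for a in design(n):
            guess = random.choice([1, -1]) if d == 0 else (-1 if d > 0 else 1)
            correct += guess == a
            d += a
            worst = max(worst, abs(d))
    return worst, correct / (n * reps)

for name, fn in (("big stick", big_stick), ("Efron's BCD", efron_bcd)):
    worst, cg = evaluate(fn)
    print(f"{name}: max |imbalance| = {worst}, CG probability ~ {cg:.3f}")
```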


Contemporary Clinical Trials | 2011

Block urn design — A new randomization algorithm for sequential trials with two or more treatments and balanced or unbalanced allocation

Wenle Zhao; Yanqiu Weng

The permuted block design is the most popular randomization method used in clinical trials, especially for trials with more than two treatments and unbalanced allocation, because of its consistent imbalance control and simplicity of implementation. However, the risk of selection bias caused by its high proportion of deterministic assignments is a cause for concern. Efron's biased coin design and Wei's urn design provide better allocation randomness without deterministic assignments, but they do not consistently control treatment imbalances. Alternative randomization designs with improved performance have been proposed over the past few decades, including Soares and Wu's big stick design, which has high allocation randomness but is limited to two-treatment balanced allocation, and Berger's maximal procedure, which has high allocation randomness and the potential to cover more general trial scenarios but lacks an explicit function for the conditional allocation probability and is more complex to implement than most other designs. The block urn design proposed in this paper combines the advantages of existing randomization designs while overcoming their limitations. Statistical properties of the new algorithm are assessed and compared to currently available designs via analytical and computer simulation approaches. The results suggest that the block urn design simultaneously provides consistent imbalance control and high allocation randomness, and that it can be easily implemented for sequential clinical trials with two or more treatments and balanced or unbalanced allocation.
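For the two-arm, 1:1 special case, the block urn mechanism can be read as an urn that is partially replenished each time a minimal balanced set (one assignment of each treatment) is completed. The code below is a minimal sketch of that reading; the tolerance parameter lam and the ball-counting bookkeeping are my notation for illustration, not the paper's general multi-treatment, unbalanced-allocation algorithm.

```python
import random

def block_urn_two_arm(n, lam=2):
    """Two-arm, 1:1 block urn design sketch. The active urn holds
    lam + k - n_A balls for arm A and lam + k - n_B for arm B, where
    k = min(n_A, n_B) counts the completed minimal balanced sets.
    Drawing without replacement caps the absolute imbalance at lam
    while leaving more assignments random than a permuted block of
    size 2 * lam would."""
    n_a = n_b = 0
    seq = []
    for _ in range(n):
        k = min(n_a, n_b)
        balls_a = lam + k - n_a
        balls_b = lam + k - n_b
        if random.random() < balls_a / (balls_a + balls_b):
            seq.append("A")
            n_a += 1
        else:
            seq.append("B")
            n_b += 1
    return seq

seq = block_urn_two_arm(24, lam=2)
print("".join(seq), "| final imbalance:", seq.count("A") - seq.count("B"))
```

With lam = 2 the absolute imbalance never exceeds 2, matching the cap of a permuted block of size 4, while (per the abstract's claim) fewer assignments are deterministic.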


Annals of Emergency Medicine | 2012

An overview of the adaptive designs accelerating promising trials into treatments (ADAPT-IT) project.

William J. Meurer; Roger J. Lewis; Danilo Tagle; Michael D. Fetters; Laurie J. Legocki; Scott M. Berry; Jason T. Connor; Valerie Durkalski; Jordan J. Elm; Wenle Zhao; Shirley M. Frederiksen; Robert Silbergleit; Yuko Y. Palesch; Donald A. Berry; William G. Barsan

Randomized clinical trials, which aim to determine the efficacy and safety of drugs and medical devices, are a complex enterprise with myriad challenges, stakeholders, and traditions. Although the primary goal is scientific discovery, clinical trials must also fulfill regulatory, clinical, and ethical requirements. Innovations in clinical trials methodology have the potential to improve the quality of knowledge gained from trials, the protection of human subjects, and the efficiency of clinical research. Adaptive clinical trial methods represent a broad category of innovations intended to address a variety of long-standing challenges faced by investigators, such as sensitivity to previous assumptions and delayed identification of ineffective treatments. The implementation of adaptive clinical trial methods, however, requires greater planning and simulation compared with a more traditional design, along with more advanced administrative infrastructure for trial execution. The value of adaptive clinical trial methods in exploratory phase (phase 2) clinical research is generally well accepted, but the potential value and challenges of applying adaptive clinical trial methods in large confirmatory phase clinical trials are relatively unexplored, particularly in the academic setting. In the Adaptive Designs Accelerating Promising Trials Into Treatments (ADAPT-IT) project, a multidisciplinary team is studying how adaptive clinical trial methods could be implemented in planning actual confirmatory phase trials in an established, National Institutes of Health-funded clinical trials network. The overarching objectives of ADAPT-IT are to identify and quantitatively characterize the adaptive clinical trial methods of greatest potential value in confirmatory phase clinical trials and to elicit and understand the enthusiasms and concerns of key stakeholders that influence their willingness to try these innovative strategies.


Statistical Methods in Medical Research | 2015

Minimal sufficient balance—a new strategy to balance baseline covariates and preserve randomness of treatment allocation

Wenle Zhao; Michael D. Hill; Yuko Y. Palesch

In many clinical trials, baseline covariates could affect the primary outcome. Commonly used strategies to balance baseline covariates include stratified constrained randomization and minimization. Stratification is limited to a few categorical covariates, minimization lacks randomness in treatment allocation, and both apply only to categorical covariates. As a result, serious imbalances can occur in important baseline covariates not included in the randomization algorithm. Furthermore, randomness of treatment allocation can be significantly compromised by the high proportion of deterministic assignments associated with stratified block randomization and minimization, potentially resulting in selection bias. Serious baseline covariate imbalances and selection biases often contribute to controversial interpretation of trial results; the National Institute of Neurological Disorders and Stroke recombinant tissue plasminogen activator (NINDS rt-PA) Stroke Trial and the Captopril Prevention Project are two examples. In this article, we propose a new randomization strategy, termed minimal sufficient balance randomization, which both prevents serious imbalances in all important baseline covariates, categorical and continuous, and preserves the randomness of treatment allocation. Computer simulations are conducted using data from the NINDS rt-PA Stroke Trial. Serious imbalances in four continuous and one categorical covariate are prevented at a small cost in treatment allocation randomness. A scenario of simultaneously balancing 11 baseline covariates is explored with similarly promising results. The proposed minimal sufficient balance randomization algorithm can be easily implemented in computerized central randomization systems for large multicenter trials.
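A toy version conveys the mechanism: screen each covariate for serious imbalance between arms, and bias the allocation only when at least one covariate calls for it. In the sketch below the Welch t-test, the alpha = 0.05 screening threshold, and the 2/3 biasing probability are illustrative assumptions rather than the published algorithm's exact choices.

```python
import random
from statistics import mean

from scipy import stats

def msb_assign(hist, new_cov, alpha=0.05, bias=2/3):
    """Minimal sufficient balance, toy two-arm sketch for continuous
    covariates. hist maps each arm to the covariate vectors of subjects
    already randomized; new_cov is the incoming subject's vector.

    Each covariate is screened with a Welch t-test. Only covariates
    showing a serious imbalance (p < alpha) cast a vote for the arm
    that would shrink the between-arm mean difference. If the votes
    favor one arm, that arm is used with probability `bias`; otherwise
    the assignment is a fair coin toss, preserving randomness."""
    votes = {"A": 0, "B": 0}
    if len(hist["A"]) > 1 and len(hist["B"]) > 1:
        for j, x in enumerate(new_cov):
            a = [v[j] for v in hist["A"]]
            b = [v[j] for v in hist["B"]]
            if stats.ttest_ind(a, b, equal_var=False).pvalue < alpha:
                gap_if_a = abs(mean(a + [x]) - mean(b))
                gap_if_b = abs(mean(a) - mean(b + [x]))
                votes["A" if gap_if_a < gap_if_b else "B"] += 1
    if votes["A"] == votes["B"]:
        arm = random.choice("AB")
    else:
        favored = "A" if votes["A"] > votes["B"] else "B"
        other = "B" if favored == "A" else "A"
        arm = favored if random.random() < bias else other
    hist[arm].append(new_cov)
    return arm

hist = {"A": [], "B": []}
for _ in range(200):                       # e.g. age and a severity score
    msb_assign(hist, [random.gauss(65, 12), random.gauss(14, 5)])
print(len(hist["A"]), "vs", len(hist["B"]), "subjects per arm")
```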


Contemporary Clinical Trials | 2011

Quantifying the cost in power of ignoring continuous covariate imbalances in clinical trial randomization.

Jody D. Ciolino; Wenle Zhao; Renee Martin; Yuko Y. Palesch

Motivated by potentially serious imbalances of continuous baseline covariates in clinical trials, we investigated the cost in statistical power of ignoring the balance of these covariates in treatment allocation design for a logistic regression model. Based on data from a clinical trial of acute ischemic stroke treatment, computer simulations were used to create scenarios varying from the best possible baseline covariate balance to the worst possible imbalance, with multiple balance levels between the two extremes. The likelihood of each scenario occurring under simple randomization was evaluated, and the power of the main effect test for treatment was examined. Our simulation results show that the worst possible imbalance is highly unlikely, but it can still occur under simple random allocation. Moreover, the loss of power can be nontrivial when the balance of important continuous covariates is ignored at the design stage, even if those covariates are adjusted for in the analysis. This situation, although unlikely, is more serious for trials with small sample sizes and for covariates with large influence on the primary outcome. These results suggest that known prognostic continuous covariates should be balanced at the design phase of a clinical trial even when adjustment for them is planned at the analysis phase.
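The power cost is easy to reproduce in miniature. The simulation below is a sketch with all parameters assumed for illustration (n = 200, a single standard-normal covariate, effects of 0.6 and 0.8 on the logit scale): it fits a covariate-adjusted logistic model and shows the treatment test losing power as the covariate is shifted between arms, even though the covariate is adjusted for in the analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def power(n=200, sims=2000, beta_trt=0.6, beta_x=0.8, shift=0.0):
    """Estimate the power of the treatment Wald test in a covariate-adjusted
    logistic model when a prognostic continuous covariate x differs between
    arms by `shift` standard deviations (shift = 0 means perfect balance)."""
    hits = 0
    for _ in range(sims):
        trt = np.repeat([0.0, 1.0], n // 2)
        x = rng.standard_normal(n) + shift * trt     # induced imbalance
        prob = 1.0 / (1.0 + np.exp(-(-0.5 + beta_trt * trt + beta_x * x)))
        y = (rng.random(n) < prob).astype(float)
        X = sm.add_constant(np.column_stack([trt, x]))
        fit = sm.Logit(y, X).fit(disp=0)
        hits += fit.pvalues[1] < 0.05                # treatment effect test
    return hits / sims

for shift in (0.0, 0.5, 1.0):
    print(f"covariate shift of {shift} SD -> power ~ {power(shift=shift):.2f}")
```

The power drops as the shift grows because the imbalance makes the treatment indicator and the covariate collinear, inflating the standard error of the treatment coefficient even in a correctly specified adjusted model.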


Clinical Trials | 2010

A web-based clinical trial management system for a sham-controlled multicenter clinical trial in depression

Valerie Durkalski; Wenle Zhao; Catherine Dillon; Jaemyung Kim

Background Clinical trial investigators and sponsors invest vast amounts of resources and energy into conducting trials and often face daily challenges with data management, project management, and data quality control. Rather than waiting months for study progress reports, investigators need the ability to use real-time data for the coordination and management of study activities across all study team members, including site investigators, oversight committees, data and safety monitoring boards, and medical safety monitors. Web-based data management systems are beginning to meet this need; what distinguishes one system from another are user requirements and cost. Purpose To illustrate the development and implementation of a web-based data and project management system for a multicenter clinical trial designed to test the superiority of repetitive transcranial magnetic stimulation versus sham for the treatment of patients with major depression. Methods The authors discuss the reasons for not using a commercially available system for this study and describe the approach to developing their own web-based system for the OPT-TMS study. Timelines, effort, system architecture, and lessons learned are shared with the hope that this information will direct clinical trial researchers and software developers towards more efficient, user-friendly systems. Results The developers use a combination of generic and custom application code to allow the flexibility to adapt the system to the needs of the study. Features of the system include central participant registration and randomization; secure data entry at the site; a participant progress/study calendar; safety data reporting; device accounting; monitor verification; and user-configurable generic reports alongside built-in customized reports. Limitations Hard coding was more time-efficient for addressing project-specific issues than creating a generic code application. As a consequence of this strategy, the required maintenance of the system is increased and the value of reusing the system for other trials is reduced. Conclusion Web-based central computerized systems offer time-saving, secure options for managing clinical trial data. The choice between a commercially available system and an internally developed system is determined by the requirements of the study and its users; pros and cons of both approaches are discussed. If the intention is to use the system for various trials (single- and multi-center, phases I to III) across various therapeutic areas, then the overall design should be a generic structure that simplifies the general application with minimal loss of functionality.


Stroke | 2015

Imaging in StrokeNet: Realizing the Potential of Big Data

David S. Liebeskind; Gregory W. Albers; Karen Crawford; Colin P. Derdeyn; Mark S. George; Yuko Y. Palesch; Arthur W. Toga; Steven Warach; Wenle Zhao; Thomas G. Brott; Ralph L. Sacco; Pooja Khatri; Jeffrey L. Saver; Steven C. Cramer; Steven L. Wolf; Joseph P. Broderick; Max Wintermark

Imaging of stroke and neurovascular disorders has profoundly enhanced clinical practice and related research during the past 40 years, since the introduction of computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography enabled mapping of the brain. Various imaging techniques have been developed to study stroke pathophysiology and inform medical decision-making in prevention, prehospital care, acute monitoring of revascularization, the subacute ICU course, and recovery settings. The technology to acquire such imaging with sophisticated scanners, software for rapid postprocessing and analysis, computer vision methods, and telemedicine platforms that instantly beam such information around the world now warrants reconsideration of the potential of stroke imaging in the era of big data. These dramatic changes in neuroimaging, and the vast potential to catapult stroke care, depend on large-scale, multi-institutional research initiatives to establish their role; such initiatives require that the current infrastructure and philosophy of translational research be modernized to incorporate these advances. In this position paper, we describe the historical context, conceptual framework, current issues, logical analyses for strategic planning, and proposed aims of future stroke imaging initiatives to advance data science with the recently established National Institutes of Health (NIH) StrokeNet.1 The StrokeNet consists of 25 regional stroke center hubs, each associated with a group of spoke hospitals capable of conducting stroke research. The network will be responsible for conducting future multicenter NIH stroke trials and represents an ideal setting to capture large volumes of invaluable neuroimaging data. Our perspective contrasts with the limited translational research use of imaging in most previous stroke trials, recognizing a unique opportunity to maximize data science and leverage this landmark NIH investment to transform stroke trials of prevention, acute treatment, and recovery. The tools already exist for widespread acquisition and transmission of image data, systematic real-time …


Statistics in Medicine | 2014

A better alternative to stratified permuted block design for subject randomization in clinical trials.

Wenle Zhao

Stratified permuted block randomization has been the dominant covariate-adaptive randomization procedure in clinical trials for several decades. Its high probability of deterministic assignments and limited covariate-balancing capacity are well recognized; the popularity of this sub-optimal method is largely due to its simplicity of implementation and the lack of better alternatives. Proposed in this paper is a two-stage covariate-adaptive randomization procedure that uses the block urn design or the big stick design in stage one to restrict the treatment imbalance within each covariate stratum, and uses the biased-coin minimization method in stage two to control imbalances in the distribution of additional covariates that are not included in the stratification algorithm. Analytical and simulation results show that the new randomization procedure significantly reduces the probability of deterministic assignments and improves the covariate-balancing capacity compared with traditional stratified permuted block randomization.
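A two-arm toy version of the two-stage procedure might look like the sketch below: stage one applies a big stick boundary to the within-stratum imbalance, and stage two, when no forced assignment is needed, applies biased-coin minimization to the marginal imbalances of the additional covariate factors. The boundary b = 3 and biasing probability 0.8 are illustrative assumptions, not values from the paper.

```python
import random

def two_stage_assign(strata, stratum, margins, factors, b=3, p_min=0.8):
    """Two-stage covariate-adaptive randomization, toy two-arm sketch.

    Stage 1 (big stick): if |n_A - n_B| within the subject's stratum has
    reached the boundary b, force the lagging arm.
    Stage 2 (biased-coin minimization): otherwise total the marginal
    imbalances over the additional covariate factor levels and favor the
    arm with the smaller total with probability p_min."""
    cnt = strata.setdefault(stratum, {"A": 0, "B": 0})
    d = cnt["A"] - cnt["B"]
    if abs(d) >= b:
        arm = "B" if d > 0 else "A"                  # stage 1: forced
    else:
        score = {}
        for cand in "AB":
            score[cand] = sum(
                abs(m["A"] + (cand == "A") - m["B"] - (cand == "B"))
                for m in (margins.setdefault(f, {"A": 0, "B": 0})
                          for f in factors))
        if score["A"] == score["B"]:
            arm = random.choice("AB")
        else:
            best = min(score, key=score.get)
            other = "B" if best == "A" else "A"
            arm = best if random.random() < p_min else other
    cnt[arm] += 1
    for f in factors:
        margins.setdefault(f, {"A": 0, "B": 0})[arm] += 1
    return arm

strata, margins = {}, {}
for _ in range(60):
    stratum = random.choice(["site1", "site2"])      # stratification factor
    levels = [f"age:{random.choice(['<65', '>=65'])}",
              f"sex:{random.choice(['F', 'M'])}"]    # additional covariates
    two_stage_assign(strata, stratum, margins, levels)
print(strata)
```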


Academic Emergency Medicine | 2010

Step‐forward Randomization in Multicenter Emergency Treatment Clinical Trials

Wenle Zhao; Jody D. Ciolino; Yuko Y. Palesch

The authors present a new centralized randomization method for multicenter emergency treatment clinical trials. With this step-forward method, the treatment randomization for the next subject is performed immediately after the enrollment of the current subject. This design ensures that a treatment assignment is ready for each subject at the point of study enrollment while simultaneously providing effective control over treatment assignment balance and covariate distributions. The authors also describe the procedures of the step-forward randomization method along with its implementation in two National Institute of Neurological Disorders and Stroke-funded multicenter acute stroke trials, one double-blind and one open-label. Advantages and limitations are presented based on the experience gained in these two trials.
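The core of the step-forward idea is that an assignment is always pre-generated, so it can be handed out with zero delay at enrollment. Below is a toy single-site sketch; the big stick balancing rule and boundary b = 3 are illustrative assumptions, whereas the published method operates across sites within a central system.

```python
import random

class StepForwardRandomizer:
    """Toy single-site sketch of step-forward randomization.

    The assignment for the *next* subject is always generated in advance,
    so it is ready the instant a subject is enrolled; after handing it
    out, the next assignment is generated immediately, using the balance
    information current at that moment."""

    def __init__(self, b=3):
        self.b = b                          # big stick boundary (assumed)
        self.d = 0                          # running A-minus-B imbalance
        self.next_assignment = self._draw()

    def _draw(self):
        if self.d >= self.b:
            return "B"                      # force the lagging arm
        if self.d <= -self.b:
            return "A"
        return random.choice("AB")

    def enroll(self):
        arm = self.next_assignment          # ready with zero delay
        self.d += 1 if arm == "A" else -1
        self.next_assignment = self._draw() # step forward immediately
        return arm

r = StepForwardRandomizer()
print("".join(r.enroll() for _ in range(20)))
```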

Collaboration


Dive into Wenle Zhao's collaboration.

Top Co-Authors

Yuko Y. Palesch
Medical University of South Carolina

Valerie Durkalski
Medical University of South Carolina

Renee Martin
Medical University of South Carolina

Catherine Dillon
Medical University of South Carolina

Edward C. Jauch
Medical University of South Carolina

Jaemyung Kim
Medical University of South Carolina