Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Steven J. Novick is active.

Publication


Featured research published by Steven J. Novick.


Journal of Pharmacological and Toxicological Methods | 2013

Development of a high-throughput electrophysiological assay for the human ether-à-go-go related potassium channel hERG

Daniel J. Gillie; Steven J. Novick; Brian T. Donovan; Lisa A. Payne; Claire Townsend

INTRODUCTION Drug-induced prolongation of the QT interval via block of the hERG potassium channel is a major cause of attrition in drug development. The advent of automated electrophysiology systems has enabled the detection of hERG block earlier in drug discovery. In this study, we have evaluated the suitability of a second-generation automated patch clamp instrument, the IonWorks Barracuda, for the characterization of hERG biophysics and pharmacology. METHODS All experiments were conducted with cells stably expressing hERG. Recordings were made in perforated patch mode either on a conventional patch clamp setup or on the IonWorks Barracuda. On the latter, all recordings were population recordings in 384-well patch plates. RESULTS hERG channels activated with a V1/2 of −3.2 ± 1.6 mV (n=178) on the IonWorks Barracuda versus −11.2 ± 6.1 mV (n=9) by manual patch clamp. On the IonWorks Barracuda, seal resistances and currents were stable (<30% change) with up to six cumulative drug additions and 1-min incubations per addition. Over 27 experiments, an average of 338 concentration-response curves were obtained per experiment (96% of the 352 test wells on each plate). hERG pharmacology was examined with a set of 353 compounds that included well-characterized hERG blockers. Astemizole, terfenadine and quinidine inhibited hERG currents with IC50 values of 159 nM, 224 nM and 2 μM, respectively (n=51, 10 and 18). This set of compounds was also tested on the PatchXpress automated electrophysiology system. We determined through statistical methods that the two automated systems provided equivalent results. DISCUSSION Evaluating drug effects on hERG channels is best performed by electrophysiological methods. hERG activation and pharmacology on the IonWorks Barracuda automated electrophysiology platform were in good agreement with published electrophysiology results. Therefore, the IonWorks Barracuda provides an efficient way to study hERG biophysics and pharmacology.
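
The IC50 values quoted above come from fitting concentration-response curves. As a generic illustration of that kind of fit, not the authors' actual analysis pipeline, a four-parameter logistic (Hill) model can be fit with SciPy; the data, parameter names, and starting values below are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill_4pl(conc, bottom, top, ic50, hill):
        # Four-parameter logistic: current (% of control) as a function of blocker concentration
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

    # Hypothetical normalized hERG tail-current data (% of control) at eight concentrations (M)
    conc = np.array([1e-9, 3e-9, 1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 3e-6])
    current = np.array([98.0, 95.0, 88.0, 72.0, 50.0, 28.0, 12.0, 5.0])

    # Starting guesses: no block at low concentrations, full block at high, IC50 near mid-range
    p0 = [0.0, 100.0, 1e-7, 1.0]
    params, _ = curve_fit(hill_4pl, conc, current, p0=p0, maxfev=10000)
    bottom, top, ic50, hill = params
    print(f"estimated IC50 = {ic50:.2e} M, Hill slope = {hill:.2f}")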


Journal of Biomolecular Screening | 2015

The Acute Extracellular Flux (XF) Assay to Assess Compound Effects on Mitochondrial Function

Ruolan Wang; Steven J. Novick; James B. Mangum; Kennedy L. Queen; David A. Ferrick; George W. Rogers; Julie B. Stimmel

Numerous investigations have linked mitochondrial dysfunction to adverse health outcomes and drug-induced toxicity. The pharmaceutical industry is challenged with identifying mitochondrial liabilities earlier in drug development and thereby reducing late-stage attrition. Consequently, there is a demand for reliable, higher-throughput screening methods for assessing the impact of drug candidates on mitochondrial function. The extracellular flux (XF) assay described here is a plate-based method in which galactose-conditioned HepG2 cells were acutely exposed to test compounds, then real-time changes in the oxygen consumption rate and extracellular acidification rate were simultaneously measured using a Seahorse Bioscience XF-96 analyzer. The acute XF assay was validated using marketed drugs known to modulate mitochondrial function, and data analysis was automated using a spline curve fitting model developed at GlaxoSmithKline. We demonstrate that the acute XF assay is a robust, sensitive screening platform for evaluating drug-induced effects on mitochondrial activity in whole cells.
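
The spline curve-fitting model mentioned above was developed in-house and is not described in the abstract; as a loose sketch of the general idea only, a smoothing spline can be used to compare respiration before and after compound injection. All time points and oxygen consumption rate (OCR) values below are invented for illustration.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Hypothetical OCR readings (pmol O2/min) over time (min); test compound injected at t = 20 min
    time = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40, 45], dtype=float)
    ocr = np.array([210, 208, 212, 209, 207, 150, 120, 110, 108, 106], dtype=float)

    # Cubic smoothing spline through the time course
    spline = UnivariateSpline(time, ocr, k=3, s=len(time))

    # Compare smoothed OCR just before injection with the post-injection plateau
    baseline = float(spline(15.0))
    post_drug = float(spline(40.0))
    print(f"OCR change after injection: {100.0 * (post_drug - baseline) / baseline:.1f}%")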


Journal of Virology | 2009

Pharmacovirological Impact of an Integrase Inhibitor on Human Immunodeficiency Virus Type 1 cDNA Species In Vivo

Christine Goffinet; Ina Allespach; Lena Oberbremer; Pamela L. Golden; Scott A. Foster; Brian A. Johns; Steven J. Novick; Karen Chiswell; Edward P. Garvey; Oliver T. Keppler

Clinical trials of the first approved integrase inhibitor (INI), raltegravir, have demonstrated a drop in the human immunodeficiency virus type 1 (HIV-1) RNA loads of infected patients that was unexpectedly more rapid than that with a potent reverse transcriptase inhibitor, and apparently dose independent. These clinical outcomes are not understood. In tissue culture, although their inhibition of integration is well documented, the effects of INIs on levels of unintegrated HIV-1 cDNAs have been variable. Furthermore, there has been no report to date on an INI's effect on these episomal species in vivo. Here, we show that prophylactic treatment of transgenic rats with the strand transfer INI GSK501015 reduced levels of viral integrants in the spleen by up to 99.7%. Episomal two-long-terminal-repeat (LTR) circles accumulated up to sevenfold in this secondary lymphoid organ, and this inversely correlated with the impact on the proviral burden. In contrast to raltegravir's dose-ranging study in HIV patients, titration of GSK501015 in HIV-infected animals demonstrated dependence of the INI's antiviral effect on its serum concentration. Furthermore, the in vivo 50% effective concentration calculated from these data best matched GSK501015's in vitro potency when serum protein binding was accounted for. Collectively, this study demonstrates a titratable, antipodal impact of an INI on integrated and episomal HIV-1 cDNAs in vivo. Based on these findings and known biological characteristics of viral episomes, we discuss how integrase inhibition may result in additional indirect antiviral effects that contribute to more rapid HIV-1 decay in HIV/AIDS patients.


Journal of Biopharmaceutical Statistics | 2015

Dissolution Curve Comparisons Through the F2 Parameter, a Bayesian Extension of the f2 Statistic

Steven J. Novick; Yan Shen; Harry Yang; John J. Peterson; Dave LeBlond; Stan Altan

Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is for justifying a biowaiver for post-approval changes which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to or in many cases superior to the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
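
For reference, the classical f2 statistic that the F2 parameter extends is 50·log10(100/√(1 + mean squared difference)) between reference and test profiles measured at common time points. A minimal computation, with hypothetical dissolution profiles rather than data from the paper, looks like this:

    import numpy as np

    def f2_similarity(reference, test):
        # Classical f2 similarity factor for two dissolution profiles (percent dissolved)
        reference = np.asarray(reference, dtype=float)
        test = np.asarray(test, dtype=float)
        mean_sq_diff = np.mean((reference - test) ** 2)
        return 50.0 * np.log10(100.0 / np.sqrt(1.0 + mean_sq_diff))

    # Hypothetical percent-dissolved profiles at 15, 30, 45, and 60 minutes
    reference_profile = [35, 60, 80, 92]
    test_profile = [32, 57, 78, 90]
    print(f"f2 = {f2_similarity(reference_profile, test_profile):.1f}")  # f2 >= 50 is the usual similarity criterion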


Journal of Biopharmaceutical Statistics | 2015

Testing assay linearity over a pre-specified range

Harry Yang; Steven J. Novick; David LeBlond

Validation of linearity is a regulatory requirement. Although many methods have been proposed, they suffer from several deficiencies, including difficulty in setting fit-for-purpose acceptable limits, dependency on the concentration levels used in the linearity experiment, and challenges in implementation for statistically lay users. In this article, a statistical procedure for testing linearity is proposed. The method uses a two one-sided test (TOST) of equivalence to evaluate the bias that can result from approximating a higher-order polynomial response with a linear function. By using orthogonal polynomials and generalized pivotal quantity analysis, the method provides a closed-form solution, thus making linearity testing easy to implement.
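
The paper's generalized pivotal quantity solution is not reproduced here, but the underlying TOST idea can be sketched roughly: fit a quadratic with an orthogonalized design, then test whether the curvature term is small relative to a pre-specified bias limit. The data and the limit θ below are hypothetical.

    import numpy as np
    from scipy import stats

    # Hypothetical linearity experiment: nominal concentrations and measured responses
    x = np.array([10, 25, 50, 75, 100, 125, 150], dtype=float)
    y = np.array([9.8, 25.3, 49.5, 74.9, 101.2, 124.6, 151.1])

    # Quadratic fit with orthonormal columns, so the curvature term is estimated
    # independently of the intercept and slope
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    Q, _ = np.linalg.qr(X)
    beta, *_ = np.linalg.lstsq(Q, y, rcond=None)
    dof = len(x) - X.shape[1]
    sigma = np.sqrt(np.sum((y - Q @ beta) ** 2) / dof)  # unit-norm columns => SE of each coefficient is sigma

    # TOST: is the curvature contribution within +/- theta (a pre-specified acceptable bias)?
    theta = 1.0
    p_lower = 1.0 - stats.t.cdf((beta[2] + theta) / sigma, dof)
    p_upper = stats.t.cdf((beta[2] - theta) / sigma, dof)
    print(f"curvature term = {beta[2]:.3f}, TOST p-value = {max(p_lower, p_upper):.4f}")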


Journal of Biomolecular Screening | 2014

The Effect of Initial Purity on the Stability of Solutions in Storage

Ioana Popa-Burke; Steven J. Novick; Charles A. Lane; Robin Hogan; Pedro Torres-Saavedra; Brian Hardy; Brenda Ray; Melissa Lindsay; Iris V. Paulus; Luke A. D. Miller

Many modern compound-screening technologies are highly miniaturized, resulting in longer-lasting solution stocks in compound management laboratories. As the ages of some stocks stretch into years, it becomes increasingly important to ensure that the DMSO solutions remain of high quality. It can be a burden to check the quality of a large library of compound solutions continuously, and so a study was devised to link the effects of initial compound purity and physicochemical properties of the compounds with the current purity of DMSO solutions. Approximately 5000 compounds with initial purity of at least 80% were examined. Storage conditions were held or observed to be relatively constant and so were eliminated as potential predictors. This allowed the evaluation of the effects of other factors on the stability of solutions, such as initial purity, number of freeze-thaw cycles, age of the solution, and multiple calculated physicochemical parameters. Of all the factors investigated, initial purity was the only one that had a clear effect on stability. None of the other parameters investigated (physicochemical properties, number of freeze-thaw cycles, age of solutions) had a statistically significant effect on stability.


Journal of Chemometrics | 2013

Directly testing the linearity assumption for assay validation

Steven J. Novick; Harry Yang

The ICH Q2(R1) (International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use) guideline for testing linearity in validation of analytical procedures suggests that “linearity should be evaluated by visual inspection of a plot of signals as a function of analyte concentration or content.” The EP6-A guideline recommends more quantitative methods that compare straight-line and higher-order polynomial curve fits. In this paper, a new equivalence test is proposed to compare the quality of a straight-line fit to that of a higher-order polynomial. By using orthogonal polynomials and generalized pivotal quantity analysis, one may estimate the probability of equivalence between a straight line and a polynomial curve fit either in the assay signal space (the Y values) or in the concentration space (the X values). In the special case of the linear-to-quadratic polynomial comparison, an equivalence test may be constructed via a two one-sided t-test.
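
As a rough companion to this idea, and not the generalized pivotal quantity procedure itself, one can quantify how far a quadratic fit departs from the straight-line fit over the validated range, in both the signal (Y) and the concentration (X) space. The calibration data below are hypothetical.

    import numpy as np

    # Hypothetical calibration data: analyte concentration and assay signal
    x = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
    y = np.array([2.1, 4.0, 10.2, 19.8, 40.5, 99.0, 201.5])

    # Straight-line and quadratic fits
    line = np.polynomial.Polynomial.fit(x, y, deg=1)
    quad = np.polynomial.Polynomial.fit(x, y, deg=2)

    # Largest disagreement between the two fits over the working range,
    # in signal units (Y space) and, via the slope, in concentration units (X space)
    grid = np.linspace(x.min(), x.max(), 200)
    diff_y = np.max(np.abs(quad(grid) - line(grid)))
    slope = line.convert().coef[1]
    print(f"max |quadratic - line| = {diff_y:.3f} signal units ~ {diff_y / abs(slope):.3f} concentration units")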


Statistics in Biopharmaceutical Research | 2012

A Bayesian Approach to Show Assay Equivalence with Replicate Measurements Over a Specified Response Range

Steven J. Novick; Karen Chiswell; John J. Peterson

Drug discovery scientists routinely develop and use in-vitro assays; for example, to identify “hits,” or to quantify the efficacious concentrations of compounds in a lead series. New and improved assays are developed to replace existing ones as the new assays may be cheaper, faster, or easier to use. An existing assay typically cannot be replaced until the new format is determined to produce equivalent measurements to the original on a test set of compounds with a diverse range of activity. In this article we propose two definitions for assessing assay equivalence across a range of responses, and apply Bayesian methods to estimate the probability of assay equivalence. Data are modeled via orthogonal regression for the case where the relative variability of the two assays is unknown a priori, and replicate measurements for each assay and compound are sufficient to identify the full set of model parameters in a likelihood model. The article reports results of a simulation experiment to explore the performance of the two metrics for testing assay equivalence under a variety of experimental designs and model parameter settings. These metrics measure similarity over a predefined assay range. This range provides a practical and focused measure of similarity which also has the effect of rendering the resulting measures robust to various distribution forms over the range of interest. The two metrics are also applied to a real data example to test equivalence between two in-vitro assays.
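
The orthogonal-regression backbone of the model can be illustrated, in a non-Bayesian form and with an assumed error-variance ratio rather than one estimated from replicates, by a Deming fit of hypothetical potency values from the two assays.

    import numpy as np

    def deming_fit(x, y, delta=1.0):
        # Deming regression: measurement error in both assays, with error-variance
        # ratio delta = var(y errors) / var(x errors); delta = 1 is orthogonal regression
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                 + 4 * delta * sxy ** 2)) / (2 * sxy)
        intercept = y.mean() - slope * x.mean()
        return slope, intercept

    # Hypothetical mean pIC50s of the same compounds measured on the old and new assay
    old_assay = [5.1, 5.8, 6.4, 7.0, 7.6, 8.3, 8.9]
    new_assay = [5.0, 5.9, 6.5, 6.9, 7.7, 8.4, 9.0]
    slope, intercept = deming_fit(old_assay, new_assay)
    print(f"new ~= {intercept:.2f} + {slope:.2f} * old")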


Journal of Biomolecular Screening | 2013

Analysis of Compound Weighing Precision in Drug Discovery

Ioana Popa-Burke; Steven J. Novick; Melissa Mantilla; Keith McGrath

Early drug discovery laboratories often call for the precise weighing of 1- to 5-mg solids into 4- to 5-g glass vials. For the balance used in this study (Mettler Toledo XP205), the manufacturer rates its accuracy at ±0.01 mg over the working range of 1 mg to 220 g and its precision or repeatability at 0.015 mg for 10-g weights. The manufacturer ratings were confirmed using standard steel weights, but these calibrators do not well represent the weighing precision of drug compound. For example, when pre-taring a 4- to 5-g vial on the balance and then weighing 1- to 5-mg calibration weights, although no bias was observed, precision dropped appreciably. When measuring solid sample in the range of 1 to 5 mg, deviation of the measured weight from the actual (true) weight was even worse, in the range of ±20% to 50%. Balance settings and environmental factors exert a strong influence on weighing precision. Although most environmental factors, such as air draughts, temperature, vibrations, and levelness, can be optimized to the extent practical in laboratory settings, problems due to static electricity are often overlooked. By controlling static electricity, we demonstrate how we optimized the process to where measurements were within ±10% of actual weight when weighing solid sample in the range of 2 to 5 mg and ±20% when weighing 1 mg into a 4- to 5-g vial. Our weighing process and method to calculate actual weight are given in detail.


Statistics in Biopharmaceutical Research | 2012

A Generalized Pivotal Quantity Approach to the Parametric Tolerance Interval Test for Dose Content Uniformity Batch Testing

Richard A. Lewis; Steven J. Novick

The Food and Drug Administration (FDA) has proposed a parametric tolerance interval test (PTIT) for batch-release testing of inhalation devices. The proposed test examines dose uniformity based on several inhalation units from a batch, with two observations per unit. An underlying assumption is that the observations are a random sample from a univariate normal distribution. Because there are two observations per unit, it may be more appropriate to model the data as stemming from a bivariate normal distribution. We take a bivariate approach and use generalized confidence interval methodology to derive a parametric tolerance interval for the distribution of doses within a batch. We then use Monte Carlo simulation to compare results based on this bivariate approach with those based on the FDA-proposed PTIT.
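
The paper's bivariate generalized pivotal quantity interval is more involved, but the univariate normal tolerance interval that a PTIT builds on can be sketched with Howe's approximation. The coverage and confidence levels and the batch data below are illustrative, not the regulatory settings.

    import numpy as np
    from scipy import stats

    def howe_tolerance_factor(n, coverage=0.90, confidence=0.95):
        # Approximate two-sided normal tolerance factor k (Howe's method):
        # mean +/- k*sd is claimed to cover `coverage` of the batch with `confidence`
        nu = n - 1
        z = stats.norm.ppf((1 + coverage) / 2)
        chi2 = stats.chi2.ppf(1 - confidence, nu)
        return z * np.sqrt(nu * (1 + 1.0 / n) / chi2)

    # Hypothetical delivered-dose results (percent of label claim) from one batch
    rng = np.random.default_rng(1)
    doses = rng.normal(loc=100.0, scale=4.0, size=20)
    k = howe_tolerance_factor(len(doses))
    m, s = doses.mean(), doses.std(ddof=1)
    print(f"tolerance interval: ({m - k * s:.1f}%, {m + k * s:.1f}%) of label claim")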

Collaboration


Dive into Steven J. Novick's collaborations.

Top Co-Authors

Brenda Ray (Research Triangle Park)