David C. Norris
Ohio State University
Publication
Featured research published by David C. Norris.
Health Affairs | 2014
Dolores Acevedo-Garcia; Nancy McArdle; Erin Hardy; Unda Ioana Crisan; Bethany Romano; David C. Norris; Mikyung Baek; Jason Reece
Improving neighborhood environments for children through community development and other interventions may help improve children's health and reduce inequities in health. A first step is to develop a population-level surveillance system of children's neighborhood environments. This article presents the newly developed Child Opportunity Index for the 100 largest US metropolitan areas. The index examines the extent of racial/ethnic inequity in the distribution of children across levels of neighborhood opportunity. We found that high concentrations of black and Hispanic children in the lowest-opportunity neighborhoods are pervasive across US metropolitan areas. We also found that 40 percent of black and 32 percent of Hispanic children live in very low-opportunity neighborhoods within their metropolitan area, compared to 9 percent of white children. This inequity is greater in some metropolitan areas, especially those with high levels of residential segregation. The Child Opportunity Index provides perspectives on child opportunity at the neighborhood and regional levels and can inform place-based community development interventions and non-place-based interventions that address inequities across a region. The index can also be used to meet new community data reporting requirements under the Affordable Care Act.
Housing Policy Debate | 2016
Dolores Acevedo-Garcia; Nancy McArdle; Erin Hardy; Keri Nicole Dillman; Jason Reece; Unda Ioana Crisan; David C. Norris; Theresa L. Osypuk
Abstract We use the Location Affordability Index (LAI) and the newly developed Child Opportunity Index (COI) to assess, for the first time, the tradeoff between neighborhood opportunity and housing/transportation affordability facing low-income renter families in the 100 largest metropolitan areas. In addition to describing the opportunity/affordability relationship, we explore the level of balance between neighborhoods’ relative cost burden and their corresponding opportunity levels to determine whether children of different racial/ethnic groups are more (or less) likely to experience cost-opportunity imbalance. Our multilevel analyses show that housing affordability is largely accounted for by the neighborhood opportunity structure within each metropolitan area. The metropolitan characteristics examined account for only a small proportion of the between-metro variance in the opportunity/affordability gradient for housing, presumably because the neighborhood opportunity structure already reflects metro area factors such as fragmentation and segregation. On the other hand, transportation affordability shows a weaker association with neighborhood opportunity. The COI/LAI association is much weaker for transportation than for housing, and a large part of the variation in the transportation gradient occurs at the metropolitan area level, not the neighborhood level. Sprawl is particularly associated with transportation affordability, with lower sprawl areas having lower transportation-cost burden. We discuss the implications of the empirical findings for defining affordability in housing assistance programs. We recommend that housing policy for low-income renter families adopt an expanded notion of affordability (housing, transportation, and opportunity) and explicitly consider equity (e.g. cost-opportunity imbalance) in the implementation of this expanded affordability definition.
F1000Research | 2017
David C. Norris
Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent ‘confirmatory’ Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of ‘the’ maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as ‘dose-finding’, but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug’s population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm³ using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace ‘the’ MTD with an individualized concept of MTDi. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents.
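The Newton-Raphson titration heuristic described in this abstract can be sketched in a few lines. The log-log linear response model below, the starting dose, the slope seed, and all patient parameters are illustrative assumptions, not the paper's docetaxel-based pharmacokinetic/pharmacodynamic simulation:

```python
import math

TARGET_ANC = 500.0  # target neutrophil nadir, cells/mm^3

def nadir(dose, alpha=10.0, beta=1.2):
    """Assumed patient response: log(nadir) = alpha - beta*log(dose).
    (Hypothetical linearized model; parameter values are arbitrary.)"""
    return math.exp(alpha - beta * math.log(dose))

def titrate(observe, dose0=50.0, slope0=1.0, n_cycles=6):
    """Titrate dose per cycle so the observed nadir approaches TARGET_ANC.

    Linearizing via x = log(dose), y = log(nadir), each cycle takes a
    Newton-type step x <- x + (y - log(target))/slope, refreshing the
    slope estimate by secant differences once two observations exist.
    Returns the per-cycle (dose, nadir) history.
    """
    x, slope = math.log(dose0), slope0
    x_prev = y_prev = None
    history = []
    for _ in range(n_cycles):
        y = math.log(observe(math.exp(x)))   # observe this cycle's nadir
        history.append((math.exp(x), math.exp(y)))
        if x_prev is not None and x != x_prev:
            slope = -(y - y_prev) / (x - x_prev)  # secant slope update
        x_prev, y_prev = x, y
        x += (y - math.log(TARGET_ANC)) / slope   # Newton step
    return history
```

With a seeded slope that is too shallow or too steep, the early steps under- or overshoot, which is the tuning behavior the abstract alludes to; the secant update then recovers the true slope.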
bioRxiv | 2017
David C. Norris
Background Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent registration trials risk suboptimal dosing that compromises statistical power and lowers the probability of technical success (PTS) for the investigational drug. While much methodological progress has been made toward adaptive dose-finding, and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of ‘the’ maximum tolerated dose (MTD). But a new methodology, Dose Titration Algorithm Tuning (DTAT), now holds forth the promise of individualized ‘MTDi’ dosing. Relative to such individualized dosing, current ‘one-size-fits-all’ dosing practices amount to a constraint that imposes costs on society. This paper estimates the magnitude of these costs. Methods Simulated dose titration as in (Norris 2017) is extended to 1000 subjects, yielding an empirical MTDi distribution to which a gamma density is fitted. Individual-level efficacy, in terms of the probability of achieving remission, is assumed to be an Emax-type function of dose relative to MTDi, scaled (arbitrarily) to identify MTDi with the LD50 of the individual’s tumor. (Thus, by construction, 50% of the population achieve remission under individualized dosing in this analysis.) Current practice is modeled such that all patients receive a first-cycle dose at ‘the’ MTD, and those for whom MTDi < MTDthe experience a ‘dose-limiting toxicity’ (DLT) that aborts subsequent cycles. Therapy thus terminated is assumed to confer no benefit. Individuals for whom MTDi ≥ MTDthe tolerate a full treatment course, and achieve remission with probability determined by the Emax curve evaluated at MTDthe/MTDi. A closed-form expression is obtained for the population remission rate, and maximized numerically over MTDthe as a free parameter, thus identifying the best result achievable under one-size-fits-all dosing. A sensitivity analysis is performed, using both a perturbation of the assumed Emax function, and an antipodal alternative specification. Results Simulated MTDi follow a gamma distribution with shape parameter α ≈ 1.75. The population remission rate under one-size-fits-all dosing at the maximizing value of MTDthe proves to be a function of the shape parameter—and thus the coefficient of variation (CV)—of the gamma distribution of MTDi. Within a plausible range of CV(MTDi), one-size-fits-all dosing wastes approximately half of the drug’s population-level efficacy. In the sensitivity analysis, sensitivity to the perturbation proves to be of second order. The alternative exposure-efficacy specification likewise leaves all results intact. Conclusions The CV of MTDi determines the efficacy lost under one-size-fits-all dosing at ‘the’ MTD. Within plausible ranges for this CV, failure to individualize dosing can effectively halve a drug’s value to society. In a competitive environment dominated by regulatory hurdles, this may reduce the value of shareholders’ investment in the drug to zero.
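The comparison this abstract describes can be sketched by Monte Carlo rather than in closed form. Only the gamma shape of 1.75 comes from the abstract; the particular Emax form (a simple Hill function with EC50 at MTDi, so that dosing each patient at MTDi yields 50% remission), the scale, and the dose grid are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Only the gamma shape (1.75) comes from the abstract; scale is arbitrary.
mtd_i = rng.gamma(shape=1.75, scale=1.0, size=100_000)

def p_remission(dose_ratio):
    """Assumed Emax-type efficacy in dose relative to MTDi: a Hill function
    with EC50 = MTDi, so dosing at one's own MTDi yields exactly 0.5."""
    return dose_ratio / (dose_ratio + 1.0)

def population_rate(mtd_the):
    """Remission rate under one-size-fits-all dosing at mtd_the: patients
    with MTDi < mtd_the suffer a DLT and are assumed to get no benefit;
    the rest are under-dosed at the ratio mtd_the/MTDi."""
    tolerated = mtd_i >= mtd_the
    return np.where(tolerated, p_remission(mtd_the / mtd_i), 0.0).mean()

# Numerically maximize over a dose grid to find the best 'the' MTD.
grid = np.linspace(0.05, 5.0, 200)
best = max(population_rate(d) for d in grid)
individualized = p_remission(1.0)          # everyone dosed at their own MTDi
efficiency_lost = 1.0 - best / individualized
```

Under these assumptions the maximized one-size-fits-all rate falls well below the 50% achieved by individualized dosing, consistent with the abstract's claim that roughly half the drug's population-level efficacy is wasted.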
JAMA | 2014
David C. Norris
In the trial reported by Dr Petersen and colleagues,1 financial incentives to physicians and practice teams were found to exert an “unexpectedly” impermanent effect on an outcome measure that combined blood pressure control with appropriate clinical response to uncontrolled blood pressure. In their study protocol,2 Petersen et al indicated the combined measure would be used partly as a precaution against “gaming” by physicians of a purely process of care measure. In making this argument, the authors appeared to presume that blood pressure measures are themselves immune to manipulation. Were this presumption proven false, this trial’s dependence on endogenously measured blood pressures would compromise its internal validity.
bioRxiv | 2018
David C. Norris
Background I have previously evaluated the efficiency of one-size-fits-all dosing for single agents in oncology (Norris 2017b). By means of a generic argument based on an Emax-type dose-response model, I showed that one-size-fits-all dosing may roughly halve a drug’s value to society. Since much of the past decade’s ‘innovation’ in oncology dose-finding methodology has involved the development of special methods for combination therapies, a generalization of my earlier investigations to combination dosing seems called for. Methods Fundamental to my earlier work was the premise that optimal dose is a characteristic of each individual patient, distributed across the population like any other physiologic characteristic such as height. I generalize that principle here to the 2-dimensional setting of combination dosing with drugs A and B, using a copula to build a bivariate joint distribution of (MTDi,A, MTDi,B) from single-agent marginal densities of MTDi,A and MTDi,B, and interpolating ‘toxicity isocontours’ in the (a, b)-plane between the respective monotherapy intercepts. Within this framework, three distinct notional toxicities are elaborated: one specific to drug A, a second specific to drug B, and a third ‘nonspecific’ toxicity clinically attributable to either drug. The dose-response model of (Norris 2017b) is also generalized to this 2-D scenario, with the addition of an interaction term to provide for a complementary effect from combination dosing. A population of 1,000 patients is simulated, and used as a basis to evaluate population-level efficacy of two pragmatic dose-finding designs: a dose-titration method that maximizes dose-intensity subject to tolerability, and the well-known POCRM method for 1-size-fits-all combination-dose finding. Hypothetical ‘oracular’ methods are also evaluated, to define theoretical upper limits of performance for individualized and 1-size-fits-all dosing respectively. Results In our simulation, pragmatic titration attains 89% efficiency relative to theoretically optimal individualized dosing, whereas POCRM attains only 55% efficiency. The passage from oracular individualized dosing to oracular 1-size-fits-all dosing incurs an efficiency loss of 33%, while the parallel passage (within the ‘pragmatic’ realm) from titration to POCRM incurs a loss of 38%. Conclusions In light of the 33% figure above, the greater part of POCRM’s 38% efficiency loss relative to titration appears attributable to POCRM’s 1-size-fits-all nature, rather than to any pragmatic difficulties it confronts. Thus, appeals to pragmatic considerations would seem neither to justify the decision to use 1-size-fits-all dose-finding designs, nor to excuse their inefficiencies.
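The copula construction of a bivariate (MTDi,A, MTDi,B) distribution can be sketched as follows. A Gaussian copula with gamma marginals stands in for the paper's construction; the correlation, the gamma parameters, and the linear form of the toxicity isocontour are all illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000
rho = 0.5  # illustrative copula correlation between the two drugs' MTDs

# Gaussian copula: correlated standard normals -> uniforms -> gamma marginals.
# The gamma shapes/scales are placeholders, not the paper's estimates.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = stats.norm.cdf(z)                       # each column ~ Uniform(0, 1)
mtd_a = stats.gamma.ppf(u[:, 0], a=1.75, scale=1.0)
mtd_b = stats.gamma.ppf(u[:, 1], a=1.75, scale=1.5)

# The simplest linear 'toxicity isocontour' through the two monotherapy
# intercepts: combination dose (a, b) is tolerable for patient i iff
# a/MTDi_A + b/MTDi_B <= 1.
def tolerable(a, b):
    return a / mtd_a + b / mtd_b <= 1.0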
Drug Discovery Today | 2018
David C. Norris
Failure to individualize drug dosing may waste 50% of the value of pharmaceutical innovation coming off the bench, driving the unacceptable failure rates of drug development programs and unsustainable drug costs. An immense opportunity is thus presented to investors in pharmaceutical innovation who are willing to develop and field innovative Phase 1 trial methodologies that solve this problem. The principle of Dose Titration Algorithm Tuning (DTAT) offers a reasoned strategy for accomplishing this.
bioRxiv | 2017
David C. Norris
Background Coherence notions have a long history in statistics, as rhetorical devices that support the critical examination of statistical doctrines and practices. Within the special domain of dose-finding methodology, a widely discussed coherence criterion has been advanced as a means to guard the conceptual integrity of formal dose-finding designs from ad hoc tinkering. This is not, however, the only possible coherence criterion relevant to dose finding. Indeed, a new coherence criterion emerges naturally when the near-universal practice of cohort-wise dose escalation is examined from a clinical perspective. Methods The practice of enrolling drug-naive patients into an escalation cohort is considered from a realistic perspective that acknowledges patients’ heterogeneity with respect to pharmacokinetics and pharmacodynamics. A new coherence criterion thereby emerges, requiring that an escalation dose be tried preferentially in participants who have already tolerated a lower dose, rather than in new enrollees who are drug-naive. The logical implications of this ‘precautionary coherence’ (PC) criterion are worked out in the setting of a 3+3 design. A ‘3+3/PC’ design that satisfies this criterion is described and visualized. A simulation study is performed, evaluating the long-run performance of this new design, relative to optimal 1-size-fits-all dosing. Results Under the PC criterion, the 3+3 dose-escalation design necessarily transmutes into a dose titration design. Two simple rules suffice to enable abandonment of low starting doses, and termination of escalation. The process of conducting the 3+3/PC trial itself models the application of a dose titration algorithm (DTA) that carries over readily into clinical care. The 3+3/PC trial also yields an interval-censored ‘dose-survival curve’ having a semantics that should prove familiar to oncology trialists. Simulated 3+3/PC trials yield DTAs over a median of 6 dose levels, achieving 50% improved population-level efficacy compared to optimal 1-size-fits-all dosing. Conclusions Dose individualization can be accomplished within a trial conducted along ‘algorithmic’ lines resembling those of the inveterate 3+3 design. The dose-survival curve arising from this ‘3+3/PC’ design has semantics that should prove familiar and conceptually accessible to oncology trialists, and also seems capable of supporting more formal statistical treatments of the design. In the presence of sufficient heterogeneity in individualized optimal dosing, a 3+3/PC trial outperforms any conceivable 1-size-fits-all dose-finding design. This fact eliminates the rationale for the latter designs, and should put an end to the further development and promulgation of 1-size-fits-all dose finding.
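The core of the precautionary-coherence idea, and the dose-survival curve it yields, can be sketched minimally: patients reach a dose level only after tolerating every lower level, so each patient's highest tolerated level interval-censors their MTDi. The dose ladder, the lognormal MTDi distribution, and the omission of the full design's cohort bookkeeping and stopping rules are all simplifying assumptions:

```python
import random

random.seed(0)
DOSE_LEVELS = [12.5, 25, 50, 100, 200, 400]  # illustrative geometric ladder

# Each simulated patient tolerates doses up to an individual MTDi, drawn
# here from an arbitrary lognormal (not the paper's simulation model).
patients = [random.lognormvariate(4.0, 0.8) for _ in range(120)]

def titrate_up(mtd_i):
    """Precautionary-coherence titration: escalate one level per cycle,
    only after tolerating the level below.  Return the highest tolerated
    level, or None if even the starting dose causes a DLT.  The true MTDi
    is thereby interval-censored between `top` and the next level up."""
    top = None
    for d in DOSE_LEVELS:
        if d > mtd_i:        # DLT at this level aborts further escalation
            break
        top = d
    return top

# Dose-survival curve: fraction of patients tolerating each dose level.
tops = [titrate_up(m) for m in patients]
survival = [sum(1 for t in tops if t is not None and t >= d) / len(tops)
            for d in DOSE_LEVELS]
```

By construction the curve is monotone non-increasing, and reading it at any candidate one-size-fits-all dose shows the fraction of patients that dose would over- or under-treat.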
F1000Research | 2016
David C. Norris; Andrew Wilson
In a 2014 report on adolescent mental health outcomes in the Moving to Opportunity for Fair Housing Demonstration (MTO), Kessler et al. reported that, at 10- to 15-year follow-up, boys from households randomized to an experimental housing voucher intervention experienced 12-month prevalence of post-traumatic stress disorder (PTSD) at several times the rate of boys from control households. We reanalyze this finding here, bringing to light a PTSD outcome imputation procedure used in the original analysis, but not described in the study report. By bootstrapping with repeated draws from the frequentist sampling distribution of the imputation model used by Kessler et al., and by varying two pseudorandom number generator seeds that fed their analysis, we account for several purely statistical components of the uncertainty inherent in their imputation procedure. We also discuss other sources of uncertainty in this procedure that were not accessible to a formal reanalysis.
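The uncertainty-propagation scheme described above can be sketched with synthetic data. Redrawing the imputation model's coefficients from their estimated sampling distribution, and varying the imputation draw's own seed, together yield a distribution of imputed prevalences; everything below (the logistic model, its estimates, and the data) is a hypothetical stand-in, not the Kessler et al. model:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 500
x = rng.normal(size=n)                       # a single synthetic predictor
beta_hat = np.array([-1.0, 0.6])             # assumed coefficient estimates
se = np.array([0.15, 0.12])                  # assumed standard errors

def impute_prevalence(beta, seed):
    """Impute binary outcome indicators under coefficient vector beta,
    with the stochastic imputation draw controlled by its own seed,
    then return the imputed 12-month prevalence."""
    local = np.random.default_rng(seed)
    p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))
    return local.binomial(1, p).mean()

# Vary both the coefficient draw (sampling-distribution bootstrap) and the
# imputation seed, then summarize the purely statistical uncertainty.
draws = [impute_prevalence(beta_hat + se * rng.normal(size=2), seed)
         for seed in range(1000)]
lo, hi = np.percentile(draws, [2.5, 97.5])
```

The spread of (lo, hi) captures only the statistical components of imputation uncertainty; as the abstract notes, other sources of uncertainty in the procedure are not accessible to this kind of formal reanalysis.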
JAMA | 2014
David C. Norris
Dr Kessler and colleagues1 reported that, at follow-up 10 to 15 years later, boys from households that received housing vouchers in the Moving to Opportunity Demonstration experienced 12-month prevalence of posttraumatic stress disorder (PTSD) at several times the rate of boys from control households. Part of the explanation for this intriguing finding may lie in a phenomenon described in the Birmingham Youth Violence Study, in which “high levels of community violence exposure attenuated the relationships between home and school violence and adjustment, perhaps reflecting desensitization to violence or a process whereby community levels of violence establish ‘norms’ that affect the interpretation and impact of violence in other settings.”2