Publications


Featured research published by Nina Engelhardt.


The Journal of Clinical Pharmacology | 1998

Pharmacokinetics and Pharmacodynamics of Diphenhydramine 25 mg in Young and Elderly Volunteers

Joseph M. Scavone; David J. Greenblatt; Jerold S. Harmatz; Nina Engelhardt; Richard I. Shader

Thirty‐seven young and elderly male and female volunteers 21 to 76 years of age received a single 25‐mg oral dose of diphenhydramine or matching placebo in a double‐blind, randomized, two‐way crossover study. Plasma diphenhydramine concentrations, self‐ratings of sedation, mood, and autonomic effects, performance on the digit‐symbol substitution test (DSST), and heart rate were determined for 24 hours after administration. Information acquisition and recall were tested at 2.5 and 24 hours after administration. Age and gender did not significantly influence diphenhydramine peak plasma concentration, time of peak concentration, elimination half‐life, area under the plasma concentration curve, or apparent oral clearance. Effects on psychomotor performance, sedation, mood, and memory did not differ between diphenhydramine and placebo in either group. Thus, the pharmacokinetics of single 25‐mg oral doses of diphenhydramine are not influenced by age or gender. This dose of diphenhydramine produces essentially undetectable pharmacodynamic effects in both the young and elderly.
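The parameters reported here (peak concentration, time of peak, elimination half-life, area under the curve, apparent oral clearance) are standard noncompartmental quantities. A minimal sketch of how they are typically derived, using hypothetical concentration-time values rather than data from the study:

```python
# Noncompartmental PK parameters from a concentration-time profile.
# All values below are hypothetical, not data from the study.
import numpy as np

times = np.array([0.5, 1, 2, 3, 4, 6, 8, 12, 24])       # hours post-dose
conc = np.array([18, 35, 52, 48, 41, 30, 21, 11, 3.0])  # plasma conc., ng/mL

cmax = conc.max()            # peak plasma concentration
tmax = times[conc.argmax()]  # time of peak concentration

# AUC(0-24 h) by the trapezoidal rule
auc = float(np.sum(np.diff(times) * (conc[:-1] + conc[1:]) / 2))

# Terminal elimination half-life from a log-linear fit of the last points
slope, _ = np.polyfit(times[-4:], np.log(conc[-4:]), 1)
half_life = np.log(2) / -slope

cl_f = 25_000_000 / auc  # apparent oral clearance for a 25-mg dose, in mL/h

print(f"Cmax={cmax} ng/mL at {tmax} h, AUC={auc:.0f} ng*h/mL, "
      f"t1/2={half_life:.1f} h, CL/F={cl_f / 1000:.1f} L/h")
```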


Journal of Clinical Psychopharmacology | 2004

Rater training in multicenter clinical trials: issues and recommendations.

Kenneth A. Kobak; Nina Engelhardt; Janet B. W. Williams; Joshua D. Lipsitz

The growing rate of failed clinical trials in neuroscience has led to increased attention to methodologic factors that may contribute to this failure. An issue that has been largely overlooked is that of rater training and rater competency. Given that scores on clinician-administered symptom rating scales form the foundation on which the success of a study is built, it is surprising that so little attention has been paid to this issue. Several issues related to rater training and competency warrant examination: who is qualified to rate, what components training should include, how and when training should be provided, and, perhaps most important, whether rater training is effective.

1. Who is qualified to administer outcome measures in clinical trials? There is great variety in the backgrounds and experience of persons administering rating scales in clinical trials conducted in the United States, ranging from psychiatrists to study coordinators with bachelor's degrees (often in fields unrelated to psychiatry) and little, if any, clinical experience. Academic credentials alone fail to ensure competence in this area, because few formal academic programs include training on the clinician-administered rating scales typically used in clinical trials. Most training that does occur takes place at the investigative site.

Despite the lack of empirical data supporting acquisition of a specific set of rater skills, we believe the following general skills are essential to conduct a competent clinical interview using clinician-administered symptom rating scales:

Conceptual understanding. Raters should have didactic training in psychopathology (particularly in the disorder of interest) so that they have a good conceptual understanding of the constructs being evaluated. An example of such training would be a course in psychopathology that covers current theories of depression, including diagnostic constructs and criteria.

Clinical experience. Raters should have enough clinical experience with patients who have the disorder being evaluated, at all levels of severity, to recognize and judge the severity of each symptom rated in the scale (e.g., "if he does not know what retardation is, he will be unable to recognize it when it is present and unable to rate it"). Unfortunately, some raters get little training before seeing patients in clinical trials and often learn on clinical trial patients, with clinical trial data. Often, it is their first exposure to patients with the disorder being studied.


Clinical Pharmacology & Therapeutics | 1993

Cognitive effects of β‐adrenergic antagonists after single doses: Pharmacokinetics and pharmacodynamics of propranolol, atenolol, lorazepam, and placebo

David J. Greenblatt; Joseph M. Scavone; Jerold S. Harmatz; Nina Engelhardt; Richard I. Shader

The behavioral effects of two β‐adrenergic receptor antagonists, selected to represent differing lipophilicity, were evaluated in a double‐blind, single‐dose, parallel‐group study. A group of 55 healthy volunteers (mean age, 28 years) received single oral doses of placebo, atenolol (50 mg), propranolol (40 mg), or lorazepam (2 mg). Plasma drug concentrations, self‐ratings of sedation and mood, observer ratings of sedation, and performance on the Digit Symbol Substitution Test (DSST) were assessed at multiple times during 24 hours after drug administration. Information acquisition and recall were tested at 3 and 24 hours after drug administration. Lorazepam significantly increased sedation and fatigue, impaired DSST performance, and impaired memory. The time course of these changes was highly consistent with plasma lorazepam concentrations. In contrast, atenolol and propranolol produced at most small changes in self‐ratings and observer ratings and did not alter DSST performance or memory. Under experimental conditions that are sensitive to the depressant effects of a typical benzodiazepine, single doses of atenolol and propranolol produced no meaningful changes, compared with placebo.
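The parallel-group design described above lends itself to a simple between-group comparison of change scores. A hedged sketch of that kind of analysis with made-up DSST values; a plain two-sample t-test stands in for whatever model the authors actually used:

```python
# Drug-vs-placebo comparison of DSST change from baseline (made-up numbers;
# negative values indicate impairment).
import numpy as np
from scipy import stats

placebo = np.array([-1, 0, 2, -2, 1, 0, -1, 1, 0, -1])
lorazepam = np.array([-9, -12, -7, -10, -8, -11, -6, -9, -13])

t_stat, p_value = stats.ttest_ind(lorazepam, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```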


Journal of Clinical Psychopharmacology | 2006

Rating the raters: Assessing the quality of Hamilton Rating Scale for Depression clinical interviews in two industry-sponsored clinical drug trials

Nina Engelhardt; Alan Feiger; Kenneth O. Cogger; Dawn Sikich; David J. DeBrota; Joshua D. Lipsitz; Kenneth A. Kobak; Kenneth R. Evans; William Z. Potter

Objective: The quality of clinical interviews conducted in industry-sponsored clinical drug trials is an important but frequently overlooked variable that may influence the outcome of a study. We evaluated the quality of Hamilton Rating Scale for Depression (HAM-D) clinical interviews performed at baseline in 2 similar multicenter, randomized, placebo-controlled depression trials sponsored by 2 pharmaceutical companies. Methods: A total of 104 audiotaped HAM-D clinical interviews were evaluated by a blinded expert reviewer for interview quality using the Rater Applied Performance Scale (RAPS). The RAPS assesses adherence to a structured interview guide, clarification of and follow-up to patient responses, neutrality, rapport, and adequacy of information obtained. Results: HAM-D interviews were brief and cursory and the quality of interviews was below what would be expected in a clinical drug trial. Thirty-nine percent of the interviews were conducted in 10 minutes or less, and most interviews were rated fair or unsatisfactory on most RAPS dimensions. Conclusions: Results from our small sample illustrate that the clinical interview skills of raters who administered the HAM-D were below what many would consider acceptable. Evaluation and training of clinical interview skills should be considered as part of a rater training program.
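For readers unfamiliar with how such summary figures are tallied, a small sketch using invented per-interview data; the 0-3 RAPS scoring convention below is an assumption, not taken from the paper:

```python
# Tallying interview-quality summaries of the kind reported (invented data).
import numpy as np

durations = np.array([8, 12, 9, 15, 10, 7, 22, 11, 9, 14])  # minutes
print(f"{100 * np.mean(durations <= 10):.0f}% of interviews took <= 10 minutes")

# One score per interview and dimension: 0 (unsatisfactory) to 3 (excellent)
raps = {
    "adherence": [1, 0, 2, 1, 1, 0, 2, 1, 0, 1],
    "clarification/follow-up": [0, 1, 1, 0, 2, 1, 1, 0, 1, 1],
    "neutrality": [2, 1, 2, 1, 1, 2, 1, 1, 2, 1],
}
for dim, scores in raps.items():
    pct_low = 100 * np.mean(np.array(scores) <= 1)  # fair or unsatisfactory
    print(f"{dim}: {pct_low:.0f}% rated fair or unsatisfactory")
```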


The Journal of Clinical Pharmacology | 1991

Validity of Self‐Reports of Caffeine Use

John S. Kennedy; Lisa L. von Moltke; Jerold S. Harmatz; Nina Engelhardt; David J. Greenblatt

The relationship between self‐reports of caffeine ingestion on two occasions and measured plasma concentrations of caffeine and its major metabolites was examined. A subject population [25 men and 25 women, age 20–45 years (mean: 28.7 yr)] that was enrolled in a benzodiazepine pharmacokinetic study underwent general medical screening on two occasions, each including detailed caffeine histories. Before beginning their scheduled study, plasma samples were obtained and evaluated by HPLC for caffeine, paraxanthine, theophylline, and theobromine. These values were compared with estimates of caffeine consumption in mg/day generated from both histories. There was no significant difference between plasma levels of caffeine, metabolites, or caffeine plus metabolites for categories corresponding to reports of low, intermediate or high caffeine use. A self‐reported caffeine consumption of greater than 300 mg/day (high) did correlate, however, with a significant smoking history. The authors conclude that self‐reports of caffeine ingestion do not accurately reflect acute exposure, and that if caffeine use is of importance in a given setting, reports should be confirmed by biochemical means.
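The comparison described, measured plasma levels across self-reported intake categories, maps onto a one-way ANOVA. A minimal sketch with invented concentrations; the category cutoffs are assumptions:

```python
# Plasma caffeine + metabolites (mg/L) across self-reported intake categories.
# One-way ANOVA; all values and cutoffs below are invented.
from scipy import stats

low = [0.4, 1.1, 0.2, 2.3, 0.8]     # reported < 100 mg/day
middle = [0.9, 0.3, 1.8, 0.6, 2.1]  # reported 100-300 mg/day
high = [1.2, 0.5, 2.6, 0.7, 1.5]    # reported > 300 mg/day

f_stat, p_value = stats.f_oneway(low, middle, high)
print(f"F = {f_stat:.2f}, p = {p_value:.2f}")  # a large p echoes the null finding
```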


Therapeutic Innovation & Regulatory Science | 2018

Risk-Based Data Monitoring: Quality Control in Central Nervous System (CNS) Clinical Trials

Cynthia McNamara; Nina Engelhardt; William Z. Potter; Christian Yavorsky; Matthew Masotti; Guillermo Di Clemente

Monitoring the quality of clinical trial efficacy outcome data has received increased attention in the past decade, with regulatory guidance encouraging it to be conducted proactively, and remotely. However, the methods utilized to develop and implement risk-based data monitoring (RBDM) programs vary, and there is a dearth of published material to guide these processes in the context of central nervous system (CNS) trials. We reviewed regulatory guidance published within the past 6 years, generic white papers, and studies applying RBDM to data from CNS clinical trials. Methodologic considerations and system requirements necessary to establish an effective, real-time risk-based monitoring platform in CNS trials are presented. Key RBDM terms are defined in the context of CNS trial data, such as “critical data,” “risk indicators,” “noninformative data,” and “mitigation of risk.” Additionally, potential benefits of, and challenges associated with implementation of data quality monitoring are highlighted. Application of methodological and system requirement considerations to real-time monitoring of clinical ratings in CNS trials has the potential to minimize risk and enhance the quality of clinical trial data.
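As one concrete illustration of a "risk indicator" in this sense, a sketch that flags implausibly short rating interviews and near-zero within-site score variance; every threshold and field name here is an assumption, not something specified in the paper or any particular platform:

```python
# Two toy risk indicators for clinical-rating data (thresholds/fields assumed).
from collections import defaultdict
import statistics

visits = [
    {"site": "A", "duration_min": 6, "total_score": 24},
    {"site": "A", "duration_min": 25, "total_score": 19},
    {"site": "B", "duration_min": 24, "total_score": 21},
    {"site": "B", "duration_min": 23, "total_score": 21},
]

# Indicator 1: implausibly short interviews
for v in visits:
    if v["duration_min"] < 10:
        print(f"site {v['site']}: {v['duration_min']}-minute interview flagged")

# Indicator 2: near-zero score variance within a site (noninformative data)
by_site = defaultdict(list)
for v in visits:
    by_site[v["site"]].append(v["total_score"])
for site, scores in by_site.items():
    if len(scores) > 1 and statistics.pstdev(scores) < 1.0:
        print(f"site {site}: low score variability {scores} flagged")
```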


Journal of Psychiatric Research | 2014

Effects of structured interview guides and rater monitoring in clinical trials.

Joshua D. Lipsitz; Kenneth A. Kobak; Janet B. W. Williams; Nina Engelhardt

We read with interest the recent study by Khan and colleagues examining the "magnitude of placebo response and response variance in antidepressant clinical trials using structured, taped, and appraised rater interviews compared to traditional rating interviews" (Khan et al., 2014). We are encouraged to see experienced investigators examining the effects of monitoring and feedback (Kobak et al., 2006), including use of the RAPS scale (Lipsitz et al., 2004).

In this study, the authors compared different approaches to assessment with the Montgomery-Åsberg Depression Rating Scale in placebo-controlled antidepressant trials, as implemented by multiple raters at a single site, and found a larger magnitude of decrease and greater response variance in the placebo group when raters used structured interviews with external monitoring. The fact that raters within a single site showed greater placebo change in studies using a structured interview guide provides only limited evidence without the corresponding change in the active drug comparator. It may well be that a structured interview guide and external monitoring track overall change more accurately, for both drug and placebo. The more critical issue is the relative size of the change scores for drug and placebo, which was not reported.

It is important to note that the external monitoring and feedback (i.e., "outside appraisal") methodology was developed with the goal of improving reliability at the trial level, that is, improving the overall reliability of raters from multiple clinical sites. This strategy emerged directly from experience with conventional training exercises (e.g., group ratings at startup meetings), in which the most prominent differences in ratings seemed to occur across raters from different sites. This makes intuitive sense, since raters at a particular site typically learn about the scale from, and review initial cases with, the same senior personnel and are likely to be more closely calibrated relative to their counterparts at other sites.

It is an interesting question whether this external monitoring and feedback approach might also improve reliability internally, across a small number of raters within a single site, beyond their initial calibration. The results of this study suggest that the effects of monitoring as measured within each site are complex, at least in the early stages of implementing such a monitoring procedure (the total number of monitored cases in this analysis was only 34) and at a highly experienced clinical site such as that of Khan and colleagues. Given that assessments in this sample differed with regard to both use of structured interview guides and external monitoring, it is impossible to attribute the difference to one factor or the other. It is important to distinguish, conceptually and technically, the distinct methodologies discussed in this report.
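The letter's central quantitative point, that drug-placebo separation rather than placebo change alone determines a trial's sensitivity, reduces to a standardized difference of change scores. A sketch with hypothetical numbers:

```python
# Standardized drug-placebo separation (Cohen's d) of change scores.
# All means, SDs, and sample sizes below are hypothetical.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical MADRS improvement (points from baseline) in each arm
d = cohens_d(m1=12.0, s1=8.0, n1=50,  # active drug
             m2=9.0, s2=8.5, n2=50)   # placebo
print(f"drug-placebo separation: d = {d:.2f}")
```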


Archives of General Psychiatry | 1989

Pharmacokinetic Determinants of Dynamic Differences Among Three Benzodiazepine Hypnotics: Flurazepam, Temazepam, and Triazolam

David J. Greenblatt; Jerold S. Harmatz; Nina Engelhardt; Richard I. Shader


Journal of Psychiatric Research | 2006

Enriched rater training using Internet-based technologies: a comparison to traditional rater training in a multi-site depression trial.

Kenneth A. Kobak; Nina Engelhardt; Joshua D. Lipsitz


Journal of Clinical Psychopharmacology | 2005

A new approach to rater training and certification in a multicenter clinical trial

Kenneth A. Kobak; Joshua D. Lipsitz; Janet B. W. Williams; Nina Engelhardt; Kevin M. Bellew

Collaboration


Dive into Nina Engelhardt's collaborations.

Top Co-Authors

Kenneth A. Kobak
University of Wisconsin-Madison

Joshua D. Lipsitz
Ben-Gurion University of the Negev

Alan Feiger
University of Colorado Boulder

William Z. Potter
National Institutes of Health