Publication


Featured research published by Donald Showalter.


Applied Psychological Measurement | 1985

The effect of number of rating scale categories on levels of interrater reliability: A Monte Carlo investigation

Domenic V. Cicchetti; Donald Showalter; Peter Tyrer

A computer simulation study was designed to investigate the extent to which the interrater reliability of a clinical scale is affected by the number of categories or scale points (2, 3, 4, ..., 100). Results indicate that reliability increases steadily up to 7 scale points, beyond which no substantial increases occur, even when the number of scale points is increased to as many as 100. These findings hold under the following conditions: (1) the research investigator has insufficient a priori knowledge to use as a reliable guideline for deciding on an appropriate number of scale points to employ, and (2) the dichotomous and ordinal categories being considered all have an underlying metric or continuous scale format.
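The plateau described in the abstract can be illustrated with a small simulation in the same spirit. This is only a sketch: the latent-trait model, the noise level, and the equal-width binning over plus/minus three standard deviations are assumptions of this example, not the authors' exact design.

```python
import random

def discretize(x, k, lo=-3.0, hi=3.0):
    """Map a continuous value onto one of k equal-width categories (1..k)."""
    x = min(max(x, lo), hi - 1e-9)
    return int((x - lo) / (hi - lo) * k) + 1

def pearson(xs, ys):
    """Pearson correlation, used here as a simple reliability index."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def simulated_reliability(k, n=2000, error_sd=0.6, seed=42):
    """Correlation between two raters who observe the same latent trait
    plus independent noise, then score it on a k-point scale."""
    rng = random.Random(seed)
    a, b = [], []
    for _ in range(n):
        latent = rng.gauss(0, 1)
        a.append(discretize(latent + rng.gauss(0, error_sd), k))
        b.append(discretize(latent + rng.gauss(0, error_sd), k))
    return pearson(a, b)

for k in (2, 3, 5, 7, 10, 100):
    print(k, round(simulated_reliability(k), 3))
```

Because the same seed is reused across values of k, the only thing that changes is the coarseness of the scale; the simulated reliability rises sharply from 2 to 7 points and barely moves between 7 and 100, matching the pattern the paper reports.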


Child Neuropsychology | 1995

Diagnosing Autism using ICD-10 criteria: A comparison of neural networks and standard multivariate procedures

Domenic V. Cicchetti; Fred R. Volkmar; Ami Klin; Donald Showalter

Abstract In a sample of 976 consecutive cases derived from the recent world-wide Field Trial of Autism and other Pervasive Developmental Disorders, we tested the accuracy of the 15 ICD-10 criteria for the diagnosis of Autism by comparing neural network models (NN) to more conventional multivariate competitors, namely, linear and quadratic discriminant function analyses and logistic regression. NNs were less accurate than their competitors, both in cross-validation results and in levels of shrinkage from training to test conditions. The clinical research implications of these results are discussed.


Educational and Psychological Measurement | 1988

A Computer Program for Determining the Reliability of Dimensionally Scaled Data when the Numbers and Specific Sets of Examiners may Vary at Each Assessment

Domenic V. Cicchetti; Donald Showalter

Using a variant of the intraclass correlation coefficient (ICC), this program computes the reliability of dimensionally scaled variables when both the number and specific set of judges vary from one assessment to the next.
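The article does not reproduce the specific ICC variant the program implements, but a one-way random-effects ICC likewise tolerates a different set, and a different number, of judges at each assessment, and can serve as a sketch of the idea. The use of Searle's k0 adjustment for unbalanced designs is an assumption of this example.

```python
def icc_oneway(ratings):
    """One-way random-effects intraclass correlation for unbalanced data.

    ratings: list of lists; ratings[i] holds the scores given to target i
    by its (possibly different, possibly differently many) judges.
    Because judges need not overlap across targets, judge effects are
    treated as random and nested within targets.
    """
    n = len(ratings)
    counts = [len(r) for r in ratings]
    N = sum(counts)
    grand = sum(sum(r) for r in ratings) / N
    means = [sum(r) / len(r) for r in ratings]
    ss_between = sum(k * (m - grand) ** 2 for k, m in zip(counts, means))
    ss_within = sum(sum((x - m) ** 2 for x in r)
                    for r, m in zip(ratings, means))
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (N - n)
    # Effective judges-per-target for unbalanced designs (Searle's k0)
    k0 = (N - sum(k * k for k in counts) / N) / (n - 1)
    return (ms_between - ms_within) / (ms_between + (k0 - 1) * ms_within)
```

For example, `icc_oneway([[1, 1], [2, 2, 2], [3, 3]])` returns 1.0, since every target's judges agree perfectly even though the judge counts differ.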


Psychiatry Research-neuroimaging | 1997

A new method for assessing interexaminer agreement when multiple ratings are made on a single subject: applications to the assessment of neuropsychiatric symptomatology

Domenic V. Cicchetti; Donald Showalter; Robert A. Rosenheck

A new method is introduced for assessing levels of interexaminer agreement when multiple ratings are made on a single subject, with an application in psychiatric research. It is designed to provide an overall level of interexaminer agreement and separate indices of agreement for each examiner. These indices are based on biostatistical and clinical criteria to determine whether the ratings of any given examiner are appreciably higher or lower than the group average, or a consensus diagnosis. A number of examples, from ongoing psychiatric research, are provided to illustrate conditions favoring the application of the new methodology. Finally, the necessary software for performing the analyses is available to clinical investigators with interest in this area of assessment.


Psychiatry Research-neuroimaging | 2009

Evaluating the reliability of multiple assessments of PTSD symptomatology: Multiple examiners, one patient

Domenic V. Cicchetti; Alan Fontana; Donald Showalter

The objective of this investigation was to assess the inter-examiner reliability of PTSD symptomatology among 12 clinical examiners who independently evaluated a single Vietnam-era veteran, using videotaped clinician interviews with the Clinician Administered PTSD Scale-1 (CAPS-1). A second patient was utilized for cross-validation purposes. Data were analyzed using a specially designed Kappa statistic. In previous reliability assessments of the CAPS-1, a pair of examiners assessed multiple patients and demonstrated evidence of high reliability and validity. As in those assessments, reliability was assessed for both frequency and intensity of PTSD symptomatology in both patients. Results indicated that the reliability levels of the CAPS-1 were consistently and almost exclusively in the excellent to perfect range of inter-examiner agreement, based upon both global assessments and symptom-by-symptom comparisons. The results of this investigation are interpreted in the broader framework of their applicability to assessing inter-examiner agreement in clinical trials or other large multi-site studies.


Psychiatry Research-neuroimaging | 1997

A computer program for assessing interexaminer agreement when multiple ratings are made on a single subject.

Domenic V. Cicchetti; Donald Showalter

This report describes a computer program for applying a new statistical method for determining levels of agreement, or reliability, when multiple examiners evaluate a single subject. The statistics that are performed include the following: an overall level of agreement, expressed as a percentage, that takes into account all possible levels of partial agreement; the same statistical approach for deriving a separate level of agreement of every examiner with every other examiner; and tests of the extent to which a given examiner's rating (say, a symptom score of three on a five-category ordinal rating scale) deviates from the group or overall average rating. These deviation scores are interpreted as standard Z statistics. Finally, both statistical and clinical criteria are provided to evaluate levels of interexaminer agreement.
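The three outputs the abstract lists can be sketched as follows. The linear partial-credit weighting and the plain Z formula are illustrative assumptions; the published program's exact criteria may differ.

```python
from statistics import mean, stdev

def partial_agreement(a, b, k):
    """Agreement between two ratings on a k-point ordinal scale with
    linear partial credit: 1.0 for an exact match, shrinking toward 0
    as the ratings diverge."""
    return 1 - abs(a - b) / (k - 1)

def overall_agreement(ratings, k):
    """Average partial agreement over every pair of examiners rating
    the same single subject, expressed as a percentage."""
    pairs = [(i, j) for i in range(len(ratings))
             for j in range(i + 1, len(ratings))]
    return 100 * sum(partial_agreement(ratings[i], ratings[j], k)
                     for i, j in pairs) / len(pairs)

def examiner_deviations(ratings):
    """Z score of each examiner's rating against the examiner group's
    mean, flagging examiners who rate appreciably higher or lower."""
    m, s = mean(ratings), stdev(ratings)
    return [(x - m) / s for x in ratings]
```

With, say, `ratings = [3, 3, 4, 3, 5]` on a five-point scale, `overall_agreement` reports the group-level figure and `examiner_deviations` singles out the examiner who rated 5 as the largest positive deviation.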


Educational and Psychological Measurement | 1990

A Computer Program for Calculating Subject-by-Subject Kappa or Weighted Kappa Coefficients

Domenic V. Cicchetti; Donald Showalter; Paul L. McCarthy

This computer program calculates individual subject kappa or weighted kappa coefficients for each of the following three types of categorical data: (a) nominal (dichotomous/polychotomous), (b) ordinal (dichotomous/continuous), and (c) mixed scales of measurement (containing both nominal and ordinal features). Additional output includes criteria for determining levels of both statistical and clinical significance as well as specific tests of examiner bias.
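The program produces subject-by-subject coefficients; the sketch below instead shows the underlying statistic, Cohen's weighted kappa over a whole sample, computed for two raters on a k-category ordinal scale with the standard linear or quadratic disagreement weights. Categories here are coded 0..k-1, an assumption of this example.

```python
def weighted_kappa(r1, r2, k, weight="linear"):
    """Cohen's weighted kappa for two raters on a k-category ordinal
    scale.  r1, r2: parallel lists of category codes in 0..k-1."""
    n = len(r1)
    # Observed proportion for each pair of categories
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1 / n
    # Marginal proportions for each rater
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        # Agreement weight: 1 on the diagonal, decreasing with distance
        d = abs(i - j) / (k - 1)
        return 1 - (d * d if weight == "quadratic" else d)

    po = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)
```

Perfect agreement yields 1.0 regardless of the weighting scheme; maximally opposed ratings on the scale's endpoints drive the coefficient toward -1.0.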


Educational and Psychological Measurement | 1984

A Computer Program for Assessing the Reliability of Nominal Scales Using Varying Sets of Multiple Raters

Domenic V. Cicchetti; James Didriksen; Donald Showalter

This program computes multiple judge reliability levels under the following conditions: different sets of judges perform the ratings; the number of judges is a constant; and the scale of measurement is nominal.
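The conditions listed (a nominal scale, a constant number of judges, but possibly different judges for each subject) are exactly the setting of Fleiss' kappa. Whether the program computes precisely this statistic is an assumption of this sketch, but it illustrates the computation.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa: chance-corrected agreement among a fixed number of
    raters per subject, where the raters need not be the same people.

    counts: list of rows, one per subject; counts[i][j] is the number of
    raters who assigned subject i to nominal category j.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_cats = len(counts[0])
    # Proportion of all assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (n_subjects * n_raters)
           for j in range(n_cats)]
    # Per-subject agreement: fraction of rater pairs that agree
    p_i = [(sum(c * c for c in row) - n_raters)
           / (n_raters * (n_raters - 1)) for row in counts]
    p_bar = sum(p_i) / n_subjects        # mean observed agreement
    p_e = sum(p * p for p in p_j)        # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)
```

For example, `fleiss_kappa([[3, 0], [0, 3], [3, 0]])` (three judges, unanimous on every subject) returns 1.0, while a table in which every subject splits the judges evenly falls below zero.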


Archive | 1996

Success by Regular Classroom Teachers in Implementing a Model Elementary School AIDS Education Curriculum

David J. Schonfeld; Ellen C. Perrin; Marcia Quackenbush; Linda L. O’Hare; Donald Showalter; Domenic V. Cicchetti

The study that will be described in this chapter is the third phase of a five-year project funded by the National Institute of Mental Health (School-based AIDS Education & Children's Health Concepts MH47251) to investigate the process of conceptual development by which healthy elementary school-age children acquire an understanding of the concepts of health as related to HIV infection and AIDS, and the efficacy of school-based education in promoting the acquisition of these concepts. The first phase of the study involved the collection of normative data regarding children's health concepts and AIDS through the administration of a semi-structured interview (ASK—AIDS Survey for Kids) to a cross-sectional sample (N=361) of elementary school-age children attending regular education classes in four public schools in New Haven, Connecticut (U.S.A.). This phase provided the data necessary for the creation of a developmentally based curriculum and the standardization of the research interview (ASK) that was utilized as the principal outcome measure for subsequent phases (Schonfeld, Johnson, Perrin, O'Hare & Cicchetti, 1993).


International Journal of Statistics in Medical Research | 2015

Establishing Reliability When Multiple Examiners Evaluate a Single Case-Part II: Applications to Symptoms of Post-Traumatic Stress Disorder (PTSD)

Domenic V. Cicchetti; Alan Fontana; Donald Showalter

In an earlier article, the authors assessed the clinical significance of each of 19 Clinician Administered PTSD Scale items and composite scores (CAPS-1) [1] when 12 clinicians evaluated a Vietnam era veteran. A second patient was also evaluated by the same 12 clinicians and used for cross-validation purposes [2]. The objectives of this follow-up research are: (1) to describe and apply novel bio-statistical methods for establishing the statistical significance of these reliability estimates when the same 12 examiners evaluated each of the two Vietnam era patients. This approach is also utilized within the broader contexts of the idiographic and nomothetic conceptualizations of science, and the interplay between statistical and clinical or practical significance; (2) to detail the steps for applying the new methodology; and (3) to investigate whether the quality of the symptoms (frequency, intensity), item content, or specific clinician affects the levels of rater reliability. The more typical (nomothetic) reliability research design focuses on group averages and broader principles related to biomedical issues, rather than on the individual case (the idiographic approach). Both research designs (idiographic and nomothetic) have been incorporated in this follow-up research endeavor.

Collaboration


Dive into Donald Showalter's collaboration.

Top Co-Authors
Dennis S. Charney

Icahn School of Medicine at Mount Sinai
