Kevin C. H. Parker
Kingston General Hospital
Publications
Featured research published by Kevin C. H. Parker.
Cognitive Psychology | 1990
Kenneth S. Bowers; Glenn Regehr; Claude G. Balthazard; Kevin C. H. Parker
Most recent work concerned with intuition has emphasized the errors of intuitive judgment in the context of justification. The present research instead views intuition as informed judgment in the context of discovery. Two word tasks and a gestalt closure task were developed to investigate this concept of intuition. Two of these tasks demonstrated that people could respond discriminatively to coherence that they could not identify, and a third task demonstrated that this tacit perception of coherence guided people gradually to an explicit representation of it in the form of a hunch or hypothesis. While such hunches may surface quite suddenly into consciousness, we propose that the underlying cognitive processes which produce them are more continuous than discontinuous in nature. Specifically, we argue that clues to coherence automatically activate the problem solver's relevant mnemonic and semantic networks. Eventually the level of patterned activation is sufficient to cross a threshold of consciousness, and at that point, it is represented as a hunch or hypothesis. The largely unconscious processes involved in generating hunches are quite different from the conscious processes required to test them—thereby vindicating the classical distinction between the context of discovery and the context of justification.
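As a rough, hypothetical illustration of the threshold account sketched in this abstract (not a model presented by the authors), the Python snippet below accumulates activation from successive clues until it crosses an arbitrary consciousness threshold; every parameter value is an assumption chosen only to make the gradual buildup visible.

import random

# Toy accumulator (illustrative only): clues gradually add activation to a
# semantic-network node; a "hunch" surfaces once activation crosses a threshold.
def hunch_trial(n_clues=20, gain=0.12, decay=0.05, threshold=1.0, seed=0):
    """Return the clue index at which activation first crosses the threshold, or None."""
    rng = random.Random(seed)
    activation = 0.0
    for i in range(1, n_clues + 1):
        clue_strength = rng.uniform(0.5, 1.5)   # clues vary in informativeness
        activation += gain * clue_strength      # gradual, continuous buildup
        activation *= (1.0 - decay)             # mild decay between clues
        if activation >= threshold:
            return i                            # the hunch becomes conscious here
    return None

if __name__ == "__main__":
    print("Hunch surfaced after clue:", hunch_trial())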
Psychological Bulletin | 1988
Kevin C. H. Parker; R. Karl Hanson; John Hunsley
We estimated the average reliability, stability, and validity of the Minnesota Multiphasic Personality Inventory (MMPI), Rorschach Inkblot Test, and Wechsler Adult Intelligence Scale (WAIS) from articles published in the Journal of Personality Assessment and the Journal of Clinical Psychology between 1970 and 1981.
Clinical Psychology Review | 2000
Leslie Atkinson; Angela Paglia; Jennifer Coolbear; Alison Niccols; Kevin C. H. Parker; Sharon Guger
This meta-analysis addresses the association between attachment security and each of three maternal mental health correlates. The meta-analysis is based on 35 studies, 39 samples, and 2,064 mother-child pairs. Social-marital support (r = .14; based on 16 studies involving 17 samples and 902 dyads), stress (r = .19; 13 studies, 14 samples, and 768 dyads), and depression (r = .18; 15 studies, 19 samples, and 953 dyads) each proved significantly related to attachment security. All constructs showed substantial variance in effect size. Ecological factors and approach to measuring support may explain the heterogeneity of effect sizes within the social-marital support literature. Effect sizes for stress varied according to the time between assessment of stress and assessment of attachment security. Among studies of depression, clinical samples yielded significantly larger effect sizes than community samples. We discuss these results in terms of measurement issues (specifically, overreliance on self-report inventories) and in terms of the need to study the correlates of change in attachment security, rather than just the correlates of attachment security per se.
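For readers unfamiliar with how study-level correlations such as those above are combined, the sketch below shows sample-size-weighted pooling via Fisher's z transformation; it is our own minimal illustration with placeholder values, not the authors' analysis or data.

import math

def pooled_r(studies):
    """studies is a list of (r, n) pairs; returns the weighted mean correlation."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)        # Fisher r-to-z transformation
        w = n - 3                # conventional inverse-variance weight for z
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean z to r

# Hypothetical example: three small studies relating a correlate to attachment security
print(round(pooled_r([(0.10, 60), (0.18, 120), (0.15, 90)]), 3))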
Journal of Social and Personal Relationships | 2000
Leslie Atkinson; Alison Niccols; Angela Paglia; Jennifer Coolbear; Kevin C. H. Parker; Lori Poulton; Sharon Guger; Gill Sitarenios
This meta-analysis of maternal sensitivity and infant/toddler attachment security includes 41 studies with 2243 dyads. Its purpose is to explore the impact of time between assessments of maternal sensitivity and attachment security on the strength of association between these two constructs. We also examined the interrelationships between this moderator variable and other moderators identified in the literature, such as age and risk status of the sample. We found an overall effect size of r = .27 linking sensitivity to security. However, time between assessment of sensitivity and attachment security moderates this effect size, such that: (1) effect sizes decrease dramatically as one moves from concurrent to nonconcurrent assessments, and (2) temporally distant assessments are a sufficient condition for small effect size; that is, if the time between assessments is large, then a relatively small effect size linking sensitivity and attachment is certain. We also found that time between sensitivity and attachment assessments may account for earlier findings indicating that effect sizes linking sensitivity to security differ according to age of child and sample risk status. Findings are discussed in terms of internal working models and environmental stability.
Criminal Justice and Behavior | 2007
Stephen Butler; Pasco Fearon; Leslie Atkinson; Kevin C. H. Parker
This study presents data from 85 young offenders referred for court-ordered mental health assessments. A model of interactive risk was tested, in which parent-child relationships, social-contextual adversity, and antisocial thinking were predicted to be associated with aggressive and delinquent behavior in a multiplicative fashion. For aggression, strong associations were found with parent-adolescent alienation, but there were no interactions with social-contextual risk or antisocial thinking. For delinquency, parent-adolescent relationship quality interacted with both social-contextual risk and antisocial thinking. Better parent-adolescent trust-communication was associated with an attenuated effect of social-contextual risk and antisocial thinking on delinquency. Greater parent-adolescent alienation, however, was associated with relatively high levels of delinquent behavior irrespective of social-contextual risk, whereas adolescents reporting less attachment-alienation showed greater delinquency as social-contextual risk increased.
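The multiplicative-risk model described above amounts to a moderated regression with product terms. The following sketch, using simulated data rather than the study's actual dataset or code, shows the general form: delinquency regressed on relationship quality, contextual risk, and their interaction.

import numpy as np

rng = np.random.default_rng(0)
n = 85
quality = rng.normal(size=n)   # parent-adolescent relationship quality (standardized)
risk = rng.normal(size=n)      # social-contextual adversity (standardized)

# Simulate an attenuating interaction: risk matters less when relationship quality is high.
delinq = 0.4 * risk - 0.3 * quality - 0.25 * quality * risk + rng.normal(size=n)

# Ordinary least squares with an interaction (product) term
X = np.column_stack([np.ones(n), quality, risk, quality * risk])
beta, *_ = np.linalg.lstsq(X, delinq, rcond=None)
for name, b in zip(["intercept", "quality", "risk", "quality x risk"], beta):
    print(f"{name:>15}: {b:+.2f}")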
Journal of Clinical Psychology | 1988
John Hunsley; R. Karl Hanson; Kevin C. H. Parker
A sample of MMPI research published between 1970 and 1981 was analyzed to yield reliability and stability estimates for the MMPI scales. In fundamental agreement with previous research, moderately high levels of reliability and stability were found for all scales. Reliability values ranged from .71 to .84; stability values ranged from .63 to .86. These findings are based on thousands of adult subjects from college, psychiatric, medical, alcohol or drug rehabilitation, and prison populations. The present scale estimates have wide generalizability and, therefore, should be of value to clinicians and researchers in various settings.
Psychological Science | 1999
Kevin C. H. Parker; John Hunsley; R. Karl Hanson
We published “MMPI, Rorschach, and WAIS: A Meta-Analytic Comparison of Reliability, Stability, and Validity” 10 years ago (Parker, Hanson, & Hunsley, 1988). Little has changed since then to make us alter our fundamental conclusion: “The MMPI [Minnesota Multiphasic Personality Inventory] and Rorschach are both valid, stable, and reliable under certain circumstances. When either test is used in the manner for which it was designed and validated, its psychometric properties are likely to be adequate for either clinical or research purposes” (p. 373). The article has been cited by Rorschach proponents as having settled the question of Rorschach validity (e.g., Shontz & Green, 1992). Our research cannot support this conclusion about the Rorschach, or about the MMPI or Wechsler Adult Intelligence Scale (WAIS) for that matter. Our results provided some limited evidence to support the validity of the Rorschach. Rather than global statements about the whole test, meta-analytic reviews of individual Rorschach scales are needed (cf. Wood, Nezworski, & Stejskal, 1996); such reviews have yet to be completed. Although Garb, Florio, and Grove (1998) compared the validity of the MMPI and Rorschach, they did not bring new data to the problem; they used our old data in a way we deliberately avoided.

There is one particular statement by Garb et al. (1998) that we wish to address: “Second, and even more damaging, in many of the studies coded by Parker et al., a small effect size meant that a test was valid” (pp. 402–403). We wish to make two points in response. First, effect size had no part in the coding process. We coded measurement category on the basis of the narrative material in the introduction and method sections before inspecting the numeric data. If a convergent-validity effect size was small, we knew nothing about the effect size until after the measurement category was coded. Second, there is a particular design that was confusing to code. When a subject is tested twice with the same test (or equivalent tests), there are two components of variance that can be computed: the difference between the scores and the correlation between the scores. Because we coded every statistic we found, we coded both. The correlation between the scores fell into a reliability code, and the difference between the scores fell into a validity code. The code was discriminant validity if the authors expected no difference, convergent validity if the authors expected some difference, and unknown validity if the authors had no stated expectation. Each of these three codings can be seen in examples cited by Garb et al.

The data from Griffin, Finch, Edwards, and Kendall (1976) are repeated measures data of the type just described. The correlation between an MMPI scale and its equivalent Mini-Mult scale was a part-whole correlation, coded as reliability. Griffin et al. identified that they were modifying the Mini-Mult in response to findings of significant differences between equivalent scales. A priori, we did not know if they would succeed and eliminate the difference or fail and retain it, so the expected findings were uncertain, and we coded them as “unknown validity” (“exploratory studies” in Garb’s renaming). Another example is seen in the study by Hersen and Greaves (1971), who examined the number of Rorschach responses after experimental manipulations designed to produce a difference in scores. They expected that data for subjects unaware of the manipulation and for untreated control subjects would be the same, and these data were coded (on paper) as discriminant-validity data. The differences between aware, treated subjects and control subjects and between aware, treated subjects and unaware, treated subjects were coded as convergent validity. There were two data-entry errors for this study that Garb et al. identified accurately, but their effects were negligible. Because we were using median data points, the corrected median for this study shifts slightly, from .11 to .16; the mean for Rorschach t tests goes from .08 to .09 in Table 2 of our 1988 article.

Garb et al. criticized inclusion of studies in which Rorschach scales were used as the validating criterion for other Rorschach scales (e.g., Last & Weiss, 1976). We consistently included studies in which scales were used as validating criteria for another scale from the same instrument, regardless of whether the scales were drawn from the WAIS, the MMPI (e.g., Griffin et al., 1976), or the Rorschach, as long as the scales were indeed different. We agree with Garb et al. that method variance probably inflated effect sizes in such cases. But there is no evidence that this inflation varied systematically from MMPI to WAIS to Rorschach.

Garb et al. found a significant difference between convergent-validity results for the Rorschach and MMPI when the effect of type of statistic was ignored. Their finding of a difference probably reflects a true difference in the distribution of scores, but what is the meaning of the difference? Cohen (1983) described the predictable impact of using coarse statistics. We found a pattern of results congruent with Cohen’s predictions (shown in our Table 2). We cannot prove that this is the cause of the difference, but we think it unwise to ignore the confounding effect of class of statistic (r vs. F vs. t) in these data, regardless of the cause of the confound.

We believe that Garb et al. took these data beyond where they can go. We lack confidence in the conclusion Garb et al. drew not because we think their analyses are flawed, but because we think our data set cannot support the detail of their analyses. Suppose we accept their conclusion. What does it mean if “the Rorschach is not as valid as the MMPI” (p. 404)? Does it mean the Rorschach is invalid? No. Based on such limited data, any unqualified claim of invalidity (or validity) cannot be supported. Garb et al. raised the central issue of the utility of the Rorschach in their last paragraph. The cost of MMPI training, administration, and
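To make the coding rule concrete, the sketch below shows the two statistics available from a twice-tested sample: the correlation between administrations (which fell into a reliability code) and the paired t on the difference (which fell into a validity code), together with the standard conversion of t to an r-type effect size. The data are simulated placeholders; this is not the original coding procedure or any cited study's data.

import math
import numpy as np

rng = np.random.default_rng(1)
n = 40
time1 = rng.normal(50, 10, size=n)
time2 = time1 + rng.normal(1.5, 5, size=n)   # second administration of the same test

# Component 1: correlation between administrations -> reliability code
r_reliability = np.corrcoef(time1, time2)[0, 1]

# Component 2: paired t on the difference -> a validity code; whether it is
# convergent, discriminant, or unknown depends on the authors' stated expectation
diff = time2 - time1
t = diff.mean() / (diff.std(ddof=1) / math.sqrt(n))

# Common conversion of t to an r-type effect size (df = n - 1 for a paired t)
r_from_t = t / math.sqrt(t**2 + (n - 1))

print(f"reliability r = {r_reliability:.2f}, t = {t:.2f}, effect-size r = {r_from_t:.2f}")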
Psychological Assessment | 1994
Kevin C. H. Parker; Leslie Atkinson
The Wechsler Intelligence Scale for Children – Third Edition (WISC-III; Wechsler, 1991) manual incorporates a detailed and careful series of factor analyses. It recommends using approximations of the Verbal Comprehension, Perceptual Organization, Freedom From Distractibility, and Processing Speed factor scores. These approximations are simple sums of the scores of the subtests that load most highly on a factor. These simple-sum factor estimates suffer from reduced factorial specificity. The simple estimates share substantially more variance with the factor of General Intelligence, or the g factor, and less variance with the other unrotated factor than the best estimates of the factor
Psychological Assessment | 1995
Kevin C. H. Parker; Leslie Atkinson
Standard procedures for estimating factor scores for the Wechsler Adult Intelligence Scale-Revised (WAIS-R; D. Wechsler, 1981) involve equally weighted sums of the subtests that load most highly on the factor being estimated. We argue that factor scores derived in this manner lack discriminant validity; they are strongly biased toward g (the first unrotated factor) and away from the other 2 unrotated factors. If regression-like weights are applied to all of the WAIS-R subtests and the products are summed, the resulting differentially weighted factors give results that show similar convergent validity and much greater discriminant validity with respect to the original factors.
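A minimal sketch of the contrast drawn above, under textbook factor-analytic assumptions: unit-weighted sums of the highest-loading subtests versus regression-type (Thurstone) factor score estimates, which weight every subtest by W = R⁻¹Λ (R is the subtest correlation matrix, Λ the loading matrix, called L in the code). The loading matrix and simulated data are illustrative placeholders, not the actual WAIS-R values.

import numpy as np

rng = np.random.default_rng(2)
n_people = 200

# Placeholder standardized loading matrix (6 subtests x 2 factors)
L = np.array([[0.7, 0.2],
              [0.6, 0.3],
              [0.8, 0.1],
              [0.2, 0.7],
              [0.3, 0.6],
              [0.1, 0.8]])

# Correlation matrix implied by an orthogonal-factor model, with unit diagonal
R = L @ L.T
np.fill_diagonal(R, 1.0)

# Simulated standardized subtest scores consistent with R
Z = rng.multivariate_normal(np.zeros(L.shape[0]), R, size=n_people)

# Unit-weighted estimates: sum the subtests loading most highly on each factor
unit_scores = np.column_stack([Z[:, :3].sum(axis=1), Z[:, 3:].sum(axis=1)])

# Regression (Thurstone) estimates: weight every subtest by W = R^-1 L
W = np.linalg.solve(R, L)
reg_scores = Z @ W

# Agreement between the two estimates for each factor
for k in range(L.shape[1]):
    r = np.corrcoef(unit_scores[:, k], reg_scores[:, k])[0, 1]
    print(f"factor {k + 1}: unit vs. regression estimate r = {r:.2f}")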
The Canadian Journal of Psychiatry | 1992
Kevin C. H. Parker; Arthur P. Froese
This paper describes a number of steps we have initiated to study our chronic waiting-list problems. We describe a program of monthly data collection that has enabled us to document the effectiveness of some strategies and predictive variables. During 1989 the data were supplemented with information collected by a questionnaire mailed to every other referral. We found that an initial response to the questionnaire was a powerful predictor of successfully kept first appointments six to 12 months later. The significance of these differences, the impact of our tracking procedures, and the underlying issues and causes are discussed, along with some strategies for addressing them.