
Publication


Featured research published by Merton S. Krause.


Journal of Consulting and Clinical Psychology | 1998

Exploring individual change.

Merton S. Krause; Kenneth I. Howard; Wolfgang Lutz

In the analysis of the impact of clinical interventions, the received wisdom has been that posttreatment scores, with pretreatment scores equated by random assignment or statistically partialed out, should be used to evaluate treatment outcomes. However, posttreatment scores are not generally more reliable than, nor equivalent to, change scores, even with pretreatment scores partialed out of both. Moreover, there are data-analytic methods that indicate how individual patients change, in terms of response curves over time, rather than indicate only how much groups change on the average. These methods take researchers back to the individual data that they ought to use for choosing the specific models of change to be used. To maximize relevance for clinical practice, the results of treatment research should always be reported at this most disaggregated or individual change level, as well as, when appropriate, at more aggregated statistical levels.
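As an illustrative sketch (invented numbers, not the paper's data), the point about disaggregation can be shown in a few lines: a group-average response curve can be flat while every individual response curve changes markedly.

```python
# Illustrative sketch: group-average change can mask distinct individual
# response curves. Half the patients improve steadily, half deteriorate;
# the group mean is identical at every session.
import statistics

sessions = range(5)
improvers = [[50 - 5 * t for t in sessions] for _ in range(10)]  # scores fall (improvement)
decliners = [[50 + 5 * t for t in sessions] for _ in range(10)]  # scores rise (deterioration)
patients = improvers + decliners

group_means = [statistics.mean(p[t] for p in patients) for t in sessions]
print(group_means)                      # flat: "no average change"
print(improvers[0], decliners[0])       # yet every individual curve changes
```

Reporting only `group_means` would suggest the treatment did nothing, while the disaggregated curves show two opposite patterns of individual change.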


Psychological Bulletin | 1997

Trials and Tribulations in the Meta-Analysis of Treatment Differences: Comment on Wampold et al. (1997)

Kenneth I. Howard; Merton S. Krause; Stephen M. Saunders; S. Mark Kopta

A fair test of the Dodo bird conjecture that different psychotherapies are equally effective would entail separate comparisons of every pair of therapies. A meta-analysis of overall effect size for any particular set of such pairs is only relevant to the Dodo bird conjecture when the mean absolute value of differences is 0. The limitations of the underlying randomized clinical trials and the problem of uncontrolled causal variables make clinically useful treatment differences unlikely to be revealed by such heterogeneous meta-analyses. To enhance implications for practice, the authors recommend an intensified focus on patient-treatment interactions, cost-effectiveness variables, and separate meta-analyses for each pair of treatments.
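A toy calculation (with assumed pairwise differences, not values from Wampold et al.) illustrates why the plain mean of differences is the wrong summary: pairwise effect-size differences can average to zero even though no pair of treatments is equal, which is why the mean absolute value is the quantity that matters.

```python
# Hypothetical pairwise effect-size differences for three treatment pairs.
pair_differences = [0.5, -0.3, -0.2]

mean_diff = sum(pair_differences) / len(pair_differences)
mean_abs_diff = sum(abs(d) for d in pair_differences) / len(pair_differences)

print(mean_diff)      # 0.0 -- an overall meta-analysis sees "no difference"
print(mean_abs_diff)  # ~0.33 -- yet every individual pair differs
```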


Journal of Clinical Psychology | 1999

Between-group psychotherapy outcome research and basic science revisited.

Merton S. Krause; Kenneth I. Howard

Case studies involving the measurement of every plausibly causal variable and every important outcome variable and covering the widest possible range of cases in terms of these variables are the highest priority for psychotherapy research. Such case studies looked at together will give us the best initial understanding of what variables are probably causal and what treatments yield the best results for particular kinds of patients, therapists, and settings. The accumulation of such case studies will show us where we would benefit by doing comparative controlled experiments of distinct therapies or by employing optimum-seeking designs for a particular therapy. Collaboration by the practitioner community will be needed to do this. The truly difficult and necessary work of applied psychotherapy research still lies ahead of us, hardly touched.


Psychotherapy | 2007

Empirically certified treatments or therapists: The issue of separability.

Merton S. Krause; Wolfgang Lutz; Stephen M. Saunders

Forms of psychotherapy treatment are not neatly separable from one another in actual practice. They differ behaviorally in what they emphasize, but nevertheless they overlap and so cannot be unambiguously compared for effectiveness. Furthermore, forms of psychotherapy are not separable in practice from the therapists who apply them, so apparent differences in effectiveness between forms of treatment are always confounded by differences in effectiveness between therapists. Therapists, however, are separable from one another, and it is therapists not treatment forms that actually treat patients. Therefore, what should primarily be given preference in practice is not treatments empirically certified on the basis of their results in randomized clinical trials but psychotherapists empirically certified to practice on the basis of their results in actual practice.


Psychotherapy | 2006

How we really ought to be comparing treatments for clinical purposes.

Merton S. Krause; Wolfgang Lutz

Unless clinical trial comparison groups have been disaggregated into their various outcome-relevant sorts of cases, researchers are forced to choose treatments for subsequent individual patients on only the evidence of outcome comparisons between groups of unknown, and probably unequal, mixes of differently outcome-relevant sorts of prior cases.


Psychotherapy Research | 2011

The role of sampling in clinical trial design

Merton S. Krause; Wolfgang Lutz; Jan R. Boehnke

A treatment's recovery rate is the percentage of clients who received the treatment and recovered. This rate is not logically interpretable as the personal probability of recovery of any individual client assigned to the treatment unless the rate is 0% or 100%. Clinical trials therefore need to be designed to help us learn to distinguish, before treatment, the sorts of clients who recover in response to each available form of treatment from those who do not. This requires developing sufficiently comprehensive sampling of clients and client covariates as part of the design of clinical trials, which would be achieved more reliably and efficiently were there centralized programmatic planning and coordination of the development of these aspects of clinical trial design.
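A minimal sketch with made-up subgroup numbers illustrates the logical point: one pooled recovery rate can arise from subgroups with very different recovery probabilities, so the pooled rate is not any individual client's probability of recovery.

```python
# Two assumed client subgroups with different recovery probabilities
# pooled into a single trial arm.
subgroup_a = {"n": 50, "p_recover": 0.9}   # e.g. mild cases (assumed)
subgroup_b = {"n": 50, "p_recover": 0.3}   # e.g. severe cases (assumed)

expected_recoveries = (subgroup_a["n"] * subgroup_a["p_recover"]
                       + subgroup_b["n"] * subgroup_b["p_recover"])
pooled_rate = expected_recoveries / (subgroup_a["n"] + subgroup_b["n"])

print(pooled_rate)  # 0.6 -- but no client here has a 0.6 chance of recovery
```

Without covariates that distinguish the subgroups before treatment, the trial can only report the 60% rate, which describes neither sort of client.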


Psychotherapy Research | 2009

What should be used for baselines against which to compare treatments’ effectiveness?

Merton S. Krause; Wolfgang Lutz

None of the kinds of control groups commonly relied on in randomized clinical trials can provide a clinically ethical and meaningful lower baseline against which to demonstrate that any specific form of psychotherapy is worth providing. The outcome of the natural course of a malady is what logically sets the lower-bound baseline for how effective a treatment for that malady ought to be, but the natural courses of psychological maladies are now generally neither ethically nor feasibly determinable. So it is the available ethically adequate treatments for a psychological malady that must properly define the lower baseline against which to compare new treatments for effectiveness. Optimal mental health is the obvious upper-bound baseline, and treatments ought always to be evaluated against it as well, to show how ineffective they are.


Psychotherapy Research | 2016

Case sampling for psychotherapy practice, theory, and policy guidance: Qualities and quantities

Merton S. Krause

Random sampling of cases is usually infeasible for psychotherapy research, so opportunistic and purposive sampling must be used instead. Such sampling does not justify generalizations from sample to population-distribution statistics, but it does justify reporting what independent-variable value configurations are associated with what dependent-variable value configurations. This allows only the generalization that these associations occur at least that frequently in the population sampled from, which is enough for suggesting and testing some psychotherapy theories and informing some psychotherapy practice. Although psychotherapy practice is a longitudinal process, formal psychotherapy outcome research is so far most feasible and most widely done in the form of two-phase cross-sectional input-outcome studies. Thus, the analysis of sampling for psychotherapy research here will be in terms of the independent- and dependent-variable value configurations produced in such two-phase studies.


Psychotherapy Research | 2018

Mathematical expression and sampling issues of treatment contrasts: Beyond significance testing and meta-analysis to clinically useful research synthesis

Merton S. Krause

The more two treatments' outcome distributions overlap, the more ambiguity there is about which would be better for some clients. Effect-size and t statistics ignore this ambiguity because they indicate nothing about the contrasted treatments' outcome ranges, although the wider these ranges are, the smaller these statistics become and the more influences other than the given treatments matter for outcomes. Treatment contrast data analysis logically requires valid measurement of all the influences on outcomes. Each influence, measured or not, is somehow sampled in every treatment contrast, and the nature of this sampling affects the contrast's two outcome distributions. Sampling also affects replications of a treatment contrast; proper meta-analysis logically requires sampling that produces the same statistically expected outcome distributions for each replicate. Because scientific human psychology is most fundamentally about individual persons and cases, rather than aggregations of persons or cases, contrasted treatments' outcome distributions ought eventually to be disaggregated to whatever input-dimension gradation configurations collapse their ranges to zero, through jointly taking account of every influence on outcomes. Only then are the data about individual persons or cases and so relevant to psychotherapy theory.
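For two equal-variance normal outcome distributions, the overlapping coefficient can be computed as 2·Φ(−|d|/2), where d is the standardized mean difference. A short sketch (an illustrative calculation, not the paper's) shows how much overlap survives even a conventionally "large" effect size.

```python
# Overlapping coefficient for two equal-variance normal distributions
# separated by a standardized mean difference d.
from statistics import NormalDist

def overlap_from_d(d):
    """Proportion of the two normal densities' area that overlaps."""
    return 2 * NormalDist().cdf(-abs(d) / 2)

for d in (0.2, 0.5, 0.8):
    print(d, round(overlap_from_d(d), 2))
# even at d = 0.8 ("large"), roughly two thirds of the distributions overlap
```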


Methodology: European Journal of Research Methods for The Behavioral and Social Sciences | 2009

Reversion Toward the Mean Independently of Regression Toward the Mean

Merton S. Krause

There is another important artifactual contributor to the apparent improvement of persons subjected to an experimental intervention which may be mistaken for regression toward the mean. This is the phenomenon of random error and extreme selection, which does not at all involve the population regression of posttest on pretest scores but involves a quite different and independent reversion of subjects’ scores toward the population mean. These two independent threats to the internal validity of intervention evaluation studies, however, can be detected and differentiated on the sample data of such studies.
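The error-plus-extreme-selection mechanism the abstract describes can be simulated directly (all parameters assumed for illustration): subjects selected for extreme observed pretest scores revert toward the population mean on retest even with no intervention at all.

```python
# Simulation sketch: observed score = true score + random measurement error.
# Selecting the highest pretest scorers and re-measuring them (fresh error,
# no treatment) pulls their mean back toward the population mean of 100.
import random
random.seed(1)

population = [random.gauss(100, 10) for _ in range(10_000)]   # true scores
pretest = [t + random.gauss(0, 10) for t in population]       # true + error
selected = sorted(range(len(population)), key=lambda i: pretest[i])[-500:]

pre_mean = sum(pretest[i] for i in selected) / len(selected)
post_mean = sum(population[i] + random.gauss(0, 10) for i in selected) / len(selected)

print(round(pre_mean, 1), round(post_mean, 1))  # posttest mean is closer to 100
```

The apparent "improvement" from `pre_mean` to `post_mean` is produced entirely by error and extreme selection, which is why such artifacts must be ruled out when evaluating interventions.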
