
Publication


Featured research published by Matthias Perleth.


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2012

GRADE-Leitlinien: 3. Bewertung der Qualität der Evidenz (Vertrauen in die Effektschätzer) ☆

Joerg J. Meerpohl; Gero Langer; Matthias Perleth; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Holger J. Schünemann

This article introduces the GRADE approach to rating the quality of evidence. GRADE specifies four categories (high, moderate, low, and very low) that are applied to a body of evidence, not to individual studies. In the context of a systematic review, quality reflects our confidence that the estimates of the effect are correct. In the context of recommendations, quality reflects our confidence that the effect estimates are adequate to support a particular recommendation. Randomised trials begin as high quality evidence, observational studies as low quality. Quality as used in GRADE means more than risk of bias and so may also be compromised by imprecision, inconsistency, indirectness of study results, and publication bias. In addition, several factors can increase our confidence in an estimate of effect. GRADE provides a systematic approach for considering and reporting each of these factors. GRADE separates the process of assessing quality of evidence from the process of making recommendations. Judgments about the strength of a recommendation depend on more than just the quality of evidence.
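The rating scheme the abstract describes (randomised trials start high, observational studies start low, and a body of evidence is then rated down or up across four categories) can be sketched in a few lines. This is an illustrative assumption of how the levels combine, not an official GRADE tool; the function name and numeric encoding are invented for demonstration.

```python
# Illustrative sketch of the GRADE rating logic; the function and the
# numeric level encoding are assumptions for demonstration only.

LEVELS = ["very low", "low", "moderate", "high"]

def rate_evidence(study_design, downgrades=0, upgrades=0):
    """Rate a body of evidence (not an individual study).

    downgrades: total levels subtracted for risk of bias, imprecision,
                inconsistency, indirectness, and publication bias.
    upgrades:   total levels added, e.g. for a large effect.
    """
    start = 3 if study_design == "randomised" else 1  # high vs. low
    level = min(max(start - downgrades + upgrades, 0), len(LEVELS) - 1)
    return LEVELS[level]

print(rate_evidence("randomised", downgrades=2))   # -> low
print(rate_evidence("observational", upgrades=1))  # -> moderate
```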


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2012

[GRADE guidelines: 1. Introduction - GRADE evidence profiles and summary of findings tables].

Gero Langer; Joerg J. Meerpohl; Matthias Perleth; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Holger J. Schünemann

This article is the first of a series providing guidance for the use of the GRADE system of rating quality of evidence and grading strength of recommendations in systematic reviews, health technology assessments, and clinical practice guidelines addressing alternative management options. The GRADE process begins with asking an explicit question, including specification of all important outcomes. After the evidence has been collected and summarised, GRADE provides explicit criteria for rating the quality of evidence that include study design, risk of bias, imprecision, inconsistency, indirectness, and magnitude of effect. Recommendations are characterised as strong or weak (alternative terms: conditional or discretionary) according to the quality of the supporting evidence and the balance between desirable and undesirable consequences of the alternative management options. GRADE suggests summarising evidence in succinct, transparent, and informative Summary of Findings tables that show the quality of evidence and the magnitude of relative and absolute effects for each important outcome and/or as evidence profiles that provide, in addition, detailed information about the reason for the quality of evidence rating. Subsequent articles in this series will address GRADE's approach to formulating questions, assessing quality of evidence, and developing recommendations.


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2012

GRADE-Leitlinien: 4. Bewertung der Qualität der Evidenz – Studienlimitationen (Risiko für Bias)

Joerg J. Meerpohl; Gero Langer; Matthias Perleth; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Holger J. Schünemann

In the GRADE approach, randomised trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if most of the relevant evidence comes from studies that suffer from a high risk of bias. Well-established limitations of randomised trials include failure to conceal allocation, failure to blind, loss to follow-up, and failure to appropriately consider the intention-to-treat principle. More recently recognised limitations include stopping early for apparent benefit and selective reporting of outcomes according to the results. Key limitations of observational studies include use of inappropriate controls and failure to adequately adjust for prognostic imbalance. Risk of bias may vary across outcomes (e.g., loss to follow-up may be far less for all-cause mortality than for quality of life), a consideration that many systematic reviews ignore. In deciding whether to rate down for risk of bias, whether for randomised trials or observational studies, authors should not take an approach that averages across studies. Rather, for any individual outcome, when there are some studies with a high risk and some with a low risk of bias, they should consider including only the studies with a lower risk of bias.


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2012

GRADE Leitlinien: 6. Einschätzung der Qualität der Evidenz – Unzureichende Präzision ☆

Michael Kulig; Matthias Perleth; Gero Langer; Joerg J. Meerpohl; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Holger J. Schünemann

GRADE suggests that examination of 95% confidence intervals (CIs) provides the optimal primary approach to decisions regarding imprecision. For practice guidelines, rating down the quality of evidence (i.e., confidence in estimates of effect) is required when clinical action would differ if the upper versus the lower boundary of the CI represented the truth. An exception to this rule occurs when an effect is large, and consideration of CIs alone suggests a robust effect, but the total sample size is not large and the number of events is small. Under these circumstances, one should consider rating down for imprecision. To inform this decision, one can calculate the number of patients required for an adequately powered individual trial (termed the optimal information size or OIS). For continuous variables, we suggest a similar process, initially considering the upper and lower limits of the CI, and subsequently calculating an OIS. Systematic reviews require a somewhat different approach. If the 95% CI excludes a relative risk (RR) of 1.0 and the total number of events or patients exceeds the OIS criterion, precision is adequate. If the 95% CI includes appreciable benefit or harm (we suggest an RR of under 0.75 or over 1.25 as a rough guide), rating down for imprecision may be appropriate even if OIS criteria are met.
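The OIS mentioned in the abstract is the sample size of a single adequately powered trial. A minimal sketch, assuming the standard two-proportion sample-size formula with alpha = 0.05 and 80% power (the z-values 1.96 and 0.84); the function name and example risks are illustrative:

```python
import math

# Illustrative OIS calculation using the conventional two-proportion
# sample-size formula; z_alpha and z_beta correspond to a two-sided
# alpha of 0.05 and 80% power. The example risks are made up.

def ois_two_proportions(p_control, p_treatment, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size for a binary outcome."""
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    delta = p_control - p_treatment
    n_per_group = (z_alpha + z_beta) ** 2 * variance / delta ** 2
    return math.ceil(n_per_group)

# e.g. control risk 20%, anticipated treatment risk 15%
print(ois_two_proportions(0.20, 0.15))  # -> 902 per group
```

If the pooled number of patients in a meta-analysis falls short of such a figure, rating down for imprecision would be considered under the guidance above.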


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2012

GRADE Leitlinien: 5. Einschätzung der Qualität der Evidenz – Publikationsbias

Alexandra Nolting; Matthias Perleth; Gero Langer; Joerg J. Meerpohl; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Holger J. Schünemann

In the GRADE approach, randomized trials are classified as high quality evidence and observational studies as low quality evidence, but both can be rated down if a body of evidence is associated with a high risk of publication bias. Even when individual studies included in best-evidence summaries have a low risk of bias, publication bias can result in substantial overestimates of effect. Authors should suspect publication bias when available evidence comes from a number of small studies, most of which have been commercially funded. A number of approaches based on examination of the pattern of data are available to help assess publication bias. The most popular of these is the funnel plot; all, however, have substantial limitations. Publication bias is likely frequent, and caution in the face of early results, particularly with a small sample size and number of events, is warranted.
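One pattern-based approach of the kind the abstract mentions is Egger's regression test for funnel-plot asymmetry. A minimal sketch, assuming effect sizes on a log scale with their standard errors; the function name and any input numbers are illustrative, and as the abstract notes, such tests have substantial limitations:

```python
# Sketch of Egger's regression test: regress the standardized effect
# (effect / SE) on precision (1 / SE). A non-zero intercept suggests
# funnel-plot asymmetry, which may (but need not) reflect publication
# bias. Plain least squares is computed by hand to stay dependency-free.

def egger_intercept(effects, std_errors):
    """Return the Egger regression intercept for a set of studies."""
    y = [e / se for e, se in zip(effects, std_errors)]   # standardized effects
    x = [1 / se for se in std_errors]                    # precisions
    x_mean = sum(x) / len(x)
    y_mean = sum(y) / len(y)
    slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) \
        / sum((xi - x_mean) ** 2 for xi in x)
    return y_mean - slope * x_mean
```

With a perfectly symmetric funnel (identical true effects at every precision) the intercept is zero; small studies showing systematically larger effects pull it away from zero.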


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2012

GRADE Leitlinien: 8. Einschätzung der Qualität der Evidenz – Indirektheit☆

Andrej Rasch; Matthias Perleth; Gero Langer; Joerg J. Meerpohl; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Holger J. Schünemann

Direct evidence comes from research that directly compares the interventions in which we are interested when applied to the populations in which we are interested and measures outcomes important to patients. Evidence can be indirect in one of four ways. First, patients may differ from those of interest (the term applicability is often used for this form of indirectness). Second, the intervention tested may differ from the intervention of interest. Decisions regarding indirectness of patients and interventions depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect. Third, outcomes may differ from those of primary interest, for instance, surrogate outcomes that are not themselves important but are measured in the presumption that changes in the surrogate reflect changes in an outcome important to patients. A fourth type of indirectness, which is conceptually different from the first three, occurs when clinicians must choose between interventions that have not been tested in head-to-head comparisons. Making comparisons between treatments under these circumstances requires specific statistical methods and will be rated down in quality by one or two levels depending on the extent of differences between the patient populations, co-interventions, measurements of the outcome, and the methods of the trials of the candidate interventions against some other comparator.
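One specific statistical method of the kind the abstract refers to is the Bucher adjusted indirect comparison: when interventions A and B have each been compared only with a common comparator C, the A-versus-B effect is estimated by subtracting the two log effect estimates, and the variances add. The function and the example numbers below are illustrative:

```python
import math

# Sketch of the Bucher adjusted indirect comparison. Inputs are log
# relative risks (or log odds ratios) and their standard errors; the
# variances of the two direct comparisons add, so the indirect
# estimate is always less precise than either direct one.

def bucher_indirect(log_rr_ac, se_ac, log_rr_bc, se_bc):
    """Indirect A-vs-B estimate via the common comparator C."""
    log_rr_ab = log_rr_ac - log_rr_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)  # variances add
    return log_rr_ab, se_ab
```

The widened standard error of the indirect estimate is one concrete reason such evidence is rated down by one or two levels under this guidance.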


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2012

GRADE Leitlinien: 7. Einschätzung der Qualität der Evidenz – Inkonsistenz

Matthias Perleth; Gero Langer; Joerg J. Meerpohl; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Holger J. Schünemann

This article deals with inconsistency of relative, rather than absolute, treatment effects in binary/dichotomous outcomes. A body of evidence is not rated up in quality if studies yield consistent results, but may be rated down in quality if inconsistent. Criteria for evaluating consistency include similarity of point estimates, extent of overlap of confidence intervals, and statistical criteria including tests of heterogeneity and I². To explore heterogeneity, systematic review authors should generate and test a small number of a priori hypotheses related to patients, interventions, outcomes, and methodology. When inconsistency is large and unexplained, rating down quality for inconsistency is appropriate, particularly if some studies suggest substantial benefit and others no effect or harm (rather than only large versus small effects). Apparent subgroup effects may be spurious. Credibility is increased if subgroup effects are based on a small number of a priori hypotheses with a specified direction; subgroup comparisons come from within rather than between studies; tests of interaction generate low p-values; and the subgroup effect has a biological rationale.
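The I² statistic named in the abstract can be computed from Cochran's Q with inverse-variance weights. A minimal sketch, assuming fixed-effect pooling of effect estimates (e.g. log relative risks); the function name and any example values are illustrative:

```python
# Sketch of the I-squared heterogeneity statistic: the percentage of
# variability in effect estimates attributable to heterogeneity rather
# than chance. Q is Cochran's heterogeneity statistic under
# inverse-variance (fixed-effect) weighting.

def i_squared(effects, std_errors):
    """Return I-squared (0-100) for a set of study effect estimates."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
```

Identical study estimates give I² of 0; widely divergent, precisely estimated effects push I² toward 100, the situation in which rating down for inconsistency is considered.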


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2013

GRADE-Leitlinien: 9. Heraufstufen der Qualität der Evidenz☆

Christina Kien; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Joerg J. Meerpohl; Maria Flamm; Gero Langer; Matthias Perleth; Holger J. Schünemann

The most common reason for rating up the quality of evidence is a large effect. GRADE suggests considering rating up quality of evidence one level when methodologically rigorous observational studies show at least a two-fold reduction or increase in risk, and rating up two levels for at least a five-fold reduction or increase in risk. Systematic review authors and guideline developers may also consider rating up quality of evidence when a dose-response gradient is present, and when all plausible confounders or biases would decrease an apparent treatment effect, or would create a spurious effect when results suggest no effect. Other considerations include the rapidity of the response, the underlying trajectory of the condition and indirect evidence.
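The thresholds in the abstract (at least a two-fold change in risk for one level, at least a five-fold change for two levels) can be expressed directly in code. The thresholds come from the article; the function itself is an illustrative assumption and ignores the other considerations (dose-response, plausible confounding) that the article also lists:

```python
# Sketch of the large-effect rating-up rule: a two-fold reduction or
# increase in risk suggests rating up one level, a five-fold change
# two levels. The function is illustrative only and applies to
# methodologically rigorous observational studies.

def upgrade_for_large_effect(relative_risk):
    """Suggested levels to rate up for magnitude of effect."""
    rr = relative_risk
    fold_change = max(rr, 1 / rr)  # treat RR 0.2 like RR 5
    if fold_change >= 5:
        return 2
    if fold_change >= 2:
        return 1
    return 0
```

For example, an RR of 0.2 (a five-fold risk reduction) and an RR of 5 both suggest rating up two levels under this rule.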


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2012

GRADE-Leitlinien: 2. Formulierung der Fragestellung und Entscheidung über wichtige Endpunkte ☆

Gero Langer; Joerg J. Meerpohl; Matthias Perleth; Gerald Gartlehner; Angela Kaminski-Hartenthaler; Holger J. Schünemann


Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen | 2008

Health Technology Assessment: Mehr als die Bewertung von Kosten und Nutzen?

Ansgar Gerhardus; Matthias Perleth

Collaboration


Dive into Matthias Perleth's collaborations.

Top Co-Authors


Marcial Velasco Garrido

Technical University of Berlin
