

Publication


Featured research published by André Knottnerus.


Journal of Clinical Epidemiology | 2012

Informed consent forms fail to reflect best practice.

Peter Tugwell; André Knottnerus; Leanne Idzerda

Informed consent is a critical component of all clinical research; however, this process is often not appropriately administered with research participants. Informed consent has typically emphasized the provision of information over support for people making a difficult decision. In a study by Brehaut et al, informed consent documents were assessed for the degree to which they conform to the International Patient Decision Aid Standards for supporting decision making. They found that informed consent documents do not meet most validated standards for encouraging good decision making.

Vested interests' influence on guidelines does not always stem from industry involvement. Norris et al examined the relationship between guideline panel members' conflicts of interest and guideline recommendations for mammography screening in asymptomatic women. They found that a guideline with at least one author who is a radiologist was more likely to recommend routine screening. In addition, the odds of a recommendation in favor of routine screening were related to the number of recent publications on breast disease diagnosis and treatment by the lead guideline author. The authors conclude that recommendations regarding mammography screening may reflect the specialty and intellectual interests of the guideline authors.

Akl et al propose a new strategy for managing conflicts of interest within the context of guideline development: giving primary responsibility to methodologists and making the content experts members of the guideline committee, a reversal of the dominant power structure. Following a series of semi-structured interviews with both methodologists and content experts, the authors found that methodologists believe this change will lead to more rigorous guidelines, while the content experts worried that the methodologists' lack of content knowledge could weaken the guidelines. In an interesting response to this article, Sniderman and Furberg contest this change in structure and offer their own thoughts on addressing the issue of conflict of interest.

Bias at the level of the systematic review is also alive and well. Panagiotou and Ioannidis investigated whether interpretation bias might be an issue in meta-analyses. They found that, when interpreting meta-analyses that included their own study, authors who had published significant results were more likely to believe that a strong association existed compared with methodologists with no vested interests.


Journal of Clinical Epidemiology | 2011

Comparative effectiveness reviews need to pay as much attention to external validity as to internal validity risks of bias

Peter Tugwell; André Knottnerus; Leanne Idzerda

This month we present the second part of the extremely interesting series from the Agency for Healthcare Research and Quality (AHRQ) on Comparative Effectiveness Reviews (CERs) [1]. As Atkins et al state in the opening paragraph of their article, the defining characteristic of comparative effectiveness research is that it includes ‘‘the conduct and synthesis of research comparing the benefits and harms of different interventions in real world settings with the purpose of determining which interventions are most effective for which patients in real world settings under specific circumstances.’’ A CER must therefore make judgments about whether the available research evidence reflects ‘real world’ practice, and should make clear for which patients or populations, and under which circumstances, the review’s conclusions can be used to make clinical or policy decisions. Their article describes a systematic approach for identifying, reporting, and synthesizing information to allow consistent and transparent consideration of the applicability of the evidence in a systematic review (SR) using the PICO[S] framework (Population, Intervention, Comparator, Outcome, and Setting domains).

Relevo and Balshem discuss the challenges of the complex search methodologies required to identify evidence for CERs. The search methods they describe attempt to overcome the bias inherent in the publication and distribution of clinical evidence. Bibliographic databases and search strategies are discussed, with special emphasis placed on searching for observational studies and harms data. Other techniques described include obtaining summary reports from regulatory agencies, the use of key articles, citation tracking, hand searching, and personal communications.

Norris et al address the controversial issue of whether and when to include observational studies in systematic reviews of interventions. The controversy centers on the fact that systematic reviewers disagree about the ability of observational studies to answer questions about the intended effects of interventions, yet many decision makers, clinicians, and consumer groups now expect, and even require, the inclusion of such evidence. Norris et al conclude that because it is unusual to find sufficient evidence from randomized controlled trials (RCTs) to answer all the key questions, particularly concerning the balance of benefits and harms, CERs should also routinely consider the inclusion of observational designs, and they propose a framework for doing so. Members of AHRQ and the Campbell and Cochrane Collaborations


Journal of Clinical Epidemiology | 2012

Why are reporting guidelines not more widely used by journals?

Peter Tugwell; André Knottnerus; Leanne Idzerda

Reporting guidelines have become almost an industry in themselves. Vandenbroucke [1] raised some important issues when the JCE co-published the STREGA guidelines in 2009: what exactly should the role of publication guidelines be, and who needs them? An article in this issue by Larson and Cortazal contributes to this debate by providing an update on the development and adoption of general publication guidelines for various study designs. They provide examples of guidelines adapted for specific topics and recommend next steps. To assess the extent to which guidelines are being used and cited, they searched PubMed for the years after the first publication of each guideline through December 2010. A useful summary table shows the number of times that published guidelines have been cited, ranging from 2 to 565 citations. The authors recommend more aggressive promotion of guideline adoption among journals, educating peer reviewers in their use, and incorporating guideline use into the curricula of medical, nursing, and public health schools.

In an Invited Commentary, Gartlehner raises a number of important issues on the applicability of systematic reviews, stimulated by a recent article by Koppenaal et al [2], who proposed an adaptation of the PRECIS (Pragmatic-Explanatory Continuum Indicator Summary) instrument.

Elsewhere in this issue, Wong and Callaham provide the first update since 1998 on the self-reported knowledge, skills, and competencies of medical editors. The authors surveyed the clinical journals with the highest citation rates for editor demographics, training, experience, industry ties, views on publication ethics, and involvement with scientific publication organizations. They found that most editors of major clinical medical journals had training in medical editing topics, saw ethical issues regularly, and were aware of scientific publication organizations. On the other hand, their knowledge (assessed by multiple-choice questions based on well-known sources such as COPE, ICMJE, and WAME) of four commonly encountered publication ethics topics (authorship, peer review, plagiarism, and conflict of interest) was surprisingly poor. The authors conclude that more attention is needed to improve training for medical editors, and they suggest that there should be a single gold standard for publication ethics rather than the current situation of multiple source documents from different organizations. Should the term ‘minimally clinically important difference’ estimates be changed to ‘smallest worthwhile effect’ and be restricted to benefit-harm trade-off methods in


Journal of Clinical Epidemiology | 2011

GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology.

Gordon H. Guyatt; Andrew D Oxman; Holger J. Schünemann; Peter Tugwell; André Knottnerus


Journal of Clinical Epidemiology | 2008

STROBE--a checklist to Strengthen the Reporting of Observational Studies in Epidemiology.

André Knottnerus; Peter Tugwell


Journal of Clinical Epidemiology | 2011

Updating systematic reviews--when and how?

Peter Tugwell; André Knottnerus; Leanne Idzerda


Journal of Clinical Epidemiology | 2011

How can clinical epidemiology better support evidence-based guidelines and policies in low-income countries?

Peter Tugwell; André Knottnerus; Leanne Idzerda


Journal of Clinical Epidemiology | 2012

What is ‘best evidence’?

Peter Tugwell; André Knottnerus; Leanne Idzerda


Journal of Clinical Epidemiology | 2004

The new look and focus of JCE

André Knottnerus; Peter Tugwell


Journal of Clinical Epidemiology | 2013

Are we doing enough to ensure quality of trials?

Peter Tugwell; André Knottnerus; Leanne Idzerda

Collaboration



Top Co-Authors

Andrew D Oxman

Norwegian Institute of Public Health
