Federico Soldani
Harvard University
Publications
Featured research published by Federico Soldani.
Acta Psychiatrica Scandinavica | 2003
S. Nassir Ghaemi; Federico Soldani
The basic method of meta-analysis is to combine data from different studies by assigning weights to those studies by sample size and by information (effect estimates and estimates of error). The result is a common odds ratio and confidence interval that sums all studies, rather than a crude average or a vote count (e.g. six studies were positive, three were negative). Consequently, meta-analysis has the potential to bring summarizing clarity to a confusing literature (1). Sometimes meta-analysis identifies a finding that was not statistically recognizable in the individual studies, usually because of small sample size (type II error). Other times, as in the paper by Tondo et al. in this issue (2), meta-analysis refutes a common clinical belief (in this case, the belief that anticonvulsants are more effective than lithium for rapid-cycling bipolar disorder, RCBD). In fact, this meta-analysis suggests that RCBD is a globally treatment-refractory condition, in which treatment response (to whatever mood stabilizer) is worse than in non-rapid-cycling bipolar disorder.

Unfortunately, meta-analysis seems a bit esoteric. In fact, it is simply a quantitative analogue of what clinicians and researchers do when evaluating the scientific literature: in light of their experience and of the published evidence, they come to believe that a treatment is more or less effective. Instead of personal judgment alone, or a merely narrative review of what has been published, a systematic review attempts to assess all the available evidence in a way that others can reproduce; a meta-analysis simply adds a numerical dimension where appropriate, without devaluing the narrative approach. Meta-analytical summaries do not constitute meta-physical truths, but they seek to present the most objective way of reviewing the evidence. Previous articles in this journal are good examples of the benefits of this technique (3, 4).

Yet meta-analysis has its critics, primarily because of the risk of three biases: the garbage in, garbage out problem, where poorly performed individual studies are seen as not worthy of assessment in systematic reviews; concerns about heterogeneity, where studies with different methods are seen as too different to allow quantitative summing; and publication bias (the file drawer problem), where negative studies are less likely to be published (5). It is often argued that, because of these biases, meta-analysis of treatment studies should be limited to randomized clinical trials (RCTs), as in the Cochrane Collaboration. Although RCTs represent the most reliable way to establish a cause–effect relationship, we believe that this perspective represents a reification of the concept of randomization.

The problem of RCBD is exactly the type of question that demonstrates the need to apply meta-analysis, carefully and cautiously, to observational data. There are basically no randomized data to meta-analyze, and it is likely that there never will be, because of the expense and difficulty of studying this population. Yet there are many clinical misconceptions, which meta-analysis clears up by correcting two methodological errors. First, there is the apples and oranges error, where one compares results from different studies, based on different samples, instead of only making comparisons within samples.
In this meta-analysis, the reader is guided to the four comparative studies in which lithium and anticonvulsants were compared in the same population, and no differences exist. Secondly, there is a lack of recognition of the concept of levels of evidence. As discussed in the evidence-based medicine (EBM) literature, this concept allows one to weigh the evidence, putting more weight on more rigorous, larger, better designed studies. This central classification, based primarily on the epidemiological concept of validity and secondarily on power (sample size), constitutes an indispensable framework for understanding the quality of the inferences we draw. In Table 1, we offer our version of the five levels of evidence, derived from the work of leaders in the EBM movement (6), as adapted by us for the state of the psychiatric literature. The data suggesting benefit with anticonvulsants involve level III …
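The inverse-variance pooling described at the start of this editorial can be made concrete in a few lines. The following Python sketch uses made-up odds ratios and standard errors for three hypothetical studies (not data from the Tondo et al. meta-analysis) to show the standard fixed-effect computation: each study's log odds ratio is weighted by the reciprocal of its squared standard error, and the pooled estimate and 95% confidence interval are recovered by exponentiation.

import math

# Hypothetical (odds_ratio, SE_of_log_OR) pairs -- illustrative values only.
studies = [(1.8, 0.40), (1.2, 0.25), (2.1, 0.55)]

weights = [1.0 / se ** 2 for _, se in studies]   # precision weights: larger, more precise studies count more
pooled_log_or = sum(w * math.log(or_) for (or_, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))        # standard error of the pooled log odds ratio

lower = math.exp(pooled_log_or - 1.96 * pooled_se)
upper = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"pooled OR = {math.exp(pooled_log_or):.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")

Because the weights are reciprocals of squared standard errors, the most informative studies dominate the sum, which is exactly the weighting "by sample size and by information" that the editorial describes; a random-effects model would additionally inflate the variances to absorb between-study heterogeneity.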
Acta Psychiatrica Scandinavica | 2005
Federico Soldani; S. Nassir Ghaemi; Ross J. Baldessarini
Readers of the literature on experimental therapeutics in general medicine, and perhaps particularly in psychiatry, are familiar with a standard dilemma: one study appears one month with result X, and another study of the same topic appears the next month with result not-X. Over the years, research about almost every type of treatment becomes inundated with confusing and conflicting reports. A primary reason for this problem, we submit, is a prevalent lack of consideration, by both researchers and clinicians, of important aspects of scientific method as applied to clinical medicine, especially the methods of epidemiology and biostatistics (1, 2).

Stimulated by this impression, we are studying selected topics in the recent literature on psychiatric experimental therapeutics. This Clinical Knowledge and Research Methodology Audit (ClinKARMA) Project explores how clinical expertise and study methodologies are integrated in current psychiatric therapeutics research, so as to increase the chances of providing valid and clinically meaningful conclusions. Our initial impression from this work is that insufficient attention to methods, and insufficient constraint in formulating conclusions, contributes to confusion, as well as to uncritical acceptance of propositions that sometimes appear to be driven by marketing preferences in pharmaceutically funded studies (3), or by the clinical or personal preferences of particular investigators. We are persuaded that greater attention to methods can greatly improve these circumstances and contribute to efforts to build a rational, evidence-based psychiatric therapeutics.

In Table 1, we summarize common problems that we have encountered in the methodological quality of psychiatric research on treatment. In this issue (4), we report on a survey of a random sample of approximately 20% of studies on therapeutics research about bipolar disorder in the five most-cited psychiatric journals within a recent 5-year period. The findings strongly suggest that consideration of methods, and the application of sound principles in formulating conclusions and inferences, was remarkably limited in this sample; our informal surveys of the broader research literature in psychiatry suggest that the problem is widespread.

We propose that a path to methodological improvement in research reports on psychiatric therapeutics should consider integration, causation, terminology, and sources. First, integration of epidemiological and statistical methods with clinical expertise seems essential. In general, epidemiological methods seek to limit problems of bias or systematic error, whereas statistics seeks to limit the risk of chance findings or random error; these disciplines deal primarily with study design and data analysis, respectively. Although the two disciplines have developed their own identities, the importance of integrating their methods with psychiatric knowledge cannot be overemphasized. Such integration may have been limited in the past by the separation of experts in research methods from investigators with clinical expertise, and through a lack of shared approaches, models, and language. Even in today's relatively sophisticated clinical research environment, methodologists sometimes are consulted after data collection rather than early in the planning and design of studies. Secondly, epidemiology involves the study of the causes of disease, not merely surveys to estimate the distributions of disorders (5).
Moreover, epidemiological techniques extend to the analysis of a range of causal relations in medical research. Most treatment studies involve a causal question: does treatment A cause outcome B, and adverse effects C and D? Epidemiologists address such questions by way of careful study design and pertinent data analysis. In assessing the validity and reproducibility of any study's findings, a useful mnemonic for both investigators and general readers is the three Cs of Abramson and Abramson (6), dealing with confounding bias, chance, and causation (Table 2). Confounding bias is a type of systematic error, and chance or random error can produce spurious outcomes. Only after passing these two hurdles can one begin to think about causation. Too often, as reported in …
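The confounding hurdle of the three Cs can be illustrated with a small simulation. In this hypothetical Python sketch (the scenario and all probabilities are invented for illustration), illness severity drives both treatment choice and poor outcome, so a crude comparison makes treatment look harmful, while stratifying on the confounder shows no treatment effect at all.

import random

random.seed(0)
n = 100_000
# severity -> [untreated outcomes, treated outcomes]
strata = {0: [[], []], 1: [[], []]}

for _ in range(n):
    severe = random.random() < 0.5                         # confounder
    treated = random.random() < (0.7 if severe else 0.3)   # severity drives who gets treated
    poor = random.random() < (0.6 if severe else 0.2)      # outcome depends on severity only
    strata[severe][treated].append(poor)                   # bools index as 0/1

def risk(outcomes):
    return sum(outcomes) / len(outcomes)

crude_treated = strata[0][1] + strata[1][1]
crude_untreated = strata[0][0] + strata[1][0]
print(f"crude risk: treated {risk(crude_treated):.2f} vs untreated {risk(crude_untreated):.2f}")
for s in (0, 1):
    print(f"severity={s}: treated {risk(strata[s][1]):.2f} vs untreated {risk(strata[s][0]):.2f}")

The crude comparison shows roughly 0.48 versus 0.32 risk of a poor outcome, a spurious association produced entirely by the confounder; within each severity stratum the risks are equal. Only after confounding and chance are addressed in this way can one begin to reason about causation, the third C.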
Acta Psychiatrica Scandinavica | 2005
Federico Soldani; S. N. Ghaemi; Ross J. Baldessarini
Objective: To assess frequencies of types of publications about bipolar disorder (BD) and evaluate methodological quality of treatment studies.
Annals of General Psychiatry | 2008
Evangelia M. Tsapakis; Federico Soldani; Leonardo Tondo; Ross J. Baldessarini
Bipolar Disorders | 2003
S. Nassir Ghaemi; Douglas J. Hsu; Federico Soldani; Frederick K. Goodwin
Depression and Anxiety | 2004
Paolo Cassano; Lorenzo Lattanzi; Federico Soldani; Serena Navari; Giulia Battistini; Alfredo Gemignani; Giovanni B. Cassano
Australian and New Zealand Journal of Psychiatry | 2005
Federico Soldani; Patrick F. Sullivan; Nancy L. Pedersen
American Journal of Psychiatry | 2004
Federico Soldani; S. Nassir Ghaemi; Leonardo Tondo; Hagop S. Akiskal; Frederick K. Goodwin
Evidence-based Mental Health | 2004
Federico Soldani; S. N. Ghaemi