Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniele Fanelli is active.

Publication


Featured research published by Daniele Fanelli.


PLOS ONE | 2009

How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data

Daniele Fanelli

The frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct, is a matter of controversy. Many surveys have asked scientists directly whether they have committed or know of a colleague who committed research misconduct, but their results appeared difficult to compare and synthesize. This is the first meta-analysis of these surveys. To standardize outcomes, the number of respondents who recalled at least one incident of misconduct was calculated for each question, and the analysis was limited to behaviours that distort scientific knowledge: fabrication, falsification, “cooking” of data, etc. Survey questions on plagiarism and other forms of professional misconduct were excluded. The final sample consisted of 21 surveys that were included in the systematic review, and 18 in the meta-analysis. A pooled weighted average of 1.97% (N = 7, 95% CI: 0.86–4.45) of scientists admitted to having fabricated, falsified or modified data or results at least once (a serious form of misconduct by any standard) and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self-report surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than by others. Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.
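
The pooled estimates above come from a meta-analysis of survey proportions. As a minimal illustrative sketch (not the paper's actual code; the survey counts below are invented), admission rates can be logit-transformed, pooled with DerSimonian-Laird random-effects weights, and back-transformed:

```python
import numpy as np

def pool_proportions(events, totals):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    logit = np.log(p / (1 - p))                    # logit transform
    var = 1 / events + 1 / (totals - events)       # variance of logit(p)
    w = 1 / var                                    # fixed-effect weights
    fixed = np.sum(w * logit) / np.sum(w)
    q = np.sum(w * (logit - fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)        # between-survey variance
    w_re = 1 / (var + tau2)                        # random-effects weights
    pooled = np.sum(w_re * logit) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    inv = lambda x: 1 / (1 + np.exp(-x))           # back-transform
    return inv(pooled), inv(pooled - 1.96 * se), inv(pooled + 1.96 * se)

# Hypothetical surveys: respondents admitting misconduct / total respondents
est, lo, hi = pool_proportions([4, 9, 2], [210, 480, 150])
print(f"pooled admission rate: {est:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```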


Scientometrics | 2012

Negative results are disappearing from most disciplines and countries

Daniele Fanelli

Concerns that the growing competition for funding and citations might distort science are frequently discussed, but have not been verified directly. Of the hypothesized problems, perhaps the most worrying is a worsening of positive-outcome bias. A system that disfavours negative results not only distorts the scientific literature directly, but might also discourage high-risk projects and pressure scientists to fabricate and falsify their data. This study analysed over 4,600 papers published in all disciplines between 1990 and 2007, measuring the frequency of papers that, having declared to have “tested” a hypothesis, reported positive support for it. The overall frequency of positive supports grew by over 22% between 1990 and 2007, with significant differences between disciplines and countries. The increase was stronger in the social and some biomedical disciplines. The United States published, over the years, significantly fewer positive results than Asian countries (particularly Japan) but more than European countries (particularly the United Kingdom). Methodological artefacts cannot explain away these patterns, which support the hypotheses that research is becoming less pioneering and/or that the objectivity with which results are produced and published is decreasing.
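
The trend reported above can be illustrated as a logistic regression of outcome (1 = positive support) on publication year. This is a hedged sketch on simulated data; the dataset and coefficient are invented, not the study's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
year = rng.integers(1990, 2008, size=500)          # fake publication years
# simulate a rising probability of a positive result over time
p = 1 / (1 + np.exp(-(0.05 * (year - 1990) - 0.2)))
positive = rng.binomial(1, p)

X = sm.add_constant(year - 1990)                   # years since 1990
fit = sm.Logit(positive, X).fit(disp=False)
print(f"odds of a positive result multiply by {np.exp(fit.params[1]):.3f} per year")
```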


PLOS ONE | 2010

Positive results increase down the Hierarchy of the Sciences

Daniele Fanelli

The hypothesis of a Hierarchy of the Sciences, with physical sciences at the top, social sciences at the bottom, and biological sciences in between, is nearly 200 years old. This order is intuitive and reflected in many features of academic life, but whether it reflects the “hardness” of scientific research (i.e., the extent to which research questions and results are determined by data and theories as opposed to non-cognitive factors) is controversial. This study analysed 2,434 papers published in all disciplines that declared to have tested a hypothesis. It was determined how many papers reported “positive” (full or partial) or “negative” support for the tested hypothesis. If the hierarchy hypothesis is correct, then researchers in “softer” sciences should have fewer constraints on their conscious and unconscious biases, and therefore report more positive outcomes. Results confirmed the predictions at all levels considered: discipline, domain and methodology broadly defined. Controlling for observed differences between pure and applied disciplines, and between papers testing one or several hypotheses, the odds of reporting a positive result were around 5 times higher among papers in the disciplines of Psychology and Psychiatry and Economics and Business compared with Space Science, 2.3 times higher in the domain of the social sciences compared with the physical sciences, and 3.4 times higher in studies applying behavioural and social methodologies to people compared with physical and chemical studies on non-biological material. In all comparisons, biological studies had intermediate values. These results suggest that the nature of hypotheses tested, and the logical and methodological rigour employed to test them, vary systematically across disciplines and fields, depending on the complexity of the subject matter and possibly other factors (e.g., a field's level of historical and/or intellectual development). On the other hand, these results support the scientific status of the social sciences against claims that they are completely subjective, by showing that, when they adopt a scientific approach to discovery, they differ from the natural sciences only by a matter of degree.
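
The discipline comparisons above are odds ratios from logistic regression. The following is only a sketch of that kind of analysis on fabricated data: the discipline labels come from the abstract, but the simulated odds are assumptions for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
base_odds = {"Space Science": 1.0, "Physical Sciences": 1.5,
             "Biology": 2.5, "Psychology": 5.0}    # assumed, for simulation
rows = []
for d, odds in base_odds.items():
    p = odds / (1 + odds)                          # odds -> probability
    for _ in range(300):
        rows.append({"discipline": d, "positive": rng.binomial(1, p)})
df = pd.DataFrame(rows)

# treatment coding with Space Science as the reference category
fit = smf.logit("positive ~ C(discipline, Treatment('Space Science'))",
                data=df).fit(disp=False)
print(np.exp(fit.params))                          # odds ratios vs reference
```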


Science Translational Medicine | 2016

What does research reproducibility mean?

Steven N. Goodman; Daniele Fanelli; John P. A. Ioannidis

The language and conceptual framework of “research reproducibility” are nonstandard and unsettled across the sciences. In this Perspective, we review an array of explicit and implicit definitions of reproducibility and related terminology, and discuss how to avoid potential misunderstandings when these terms are used as a surrogate for “truth.”


Proceedings of the National Academy of Sciences of the United States of America | 2013

US studies may overestimate effect sizes in softer research

Daniele Fanelli; John P. A. Ioannidis

Many biases affect scientific research, causing a waste of resources, posing a threat to human health, and hampering scientific progress. These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States (US). Here, we extracted 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis. We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such “US effect” and were subject mainly to sampling variance and small-study effects, which were stronger for non-US countries. Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.
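
The core measurement described above, how far each primary outcome deviates from the summary effect of its meta-analysis, can be sketched as follows. Column names and values are hypothetical, and a plain mean stands in for the paper's more elaborate synthesis:

```python
import pandas as pd

df = pd.DataFrame({
    "meta_id": [1, 1, 1, 2, 2],                    # meta-analysis membership
    "effect":  [0.30, 0.55, 0.10, -0.20, -0.05],   # invented effect sizes
    "us_author": [True, False, True, False, True],
})

# summary effect per meta-analysis (unweighted mean here, for illustration)
summary = df.groupby("meta_id")["effect"].transform("mean")
df["deviation"] = df["effect"] - summary

print(df.groupby("us_author")["deviation"].describe())
```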


PLOS Biology | 2015

Meta-research: Evaluation and Improvement of Research Methods and Practices

John P. A. Ioannidis; Daniele Fanelli; Debbie Drake Dunne; Steven N. Goodman

As the scientific enterprise has grown in size and diversity, we need empirical evidence on the research process to test and apply interventions that make it more efficient and its results more reliable. Meta-research is an evolving scientific discipline that aims to evaluate and improve research practices. It includes thematic areas of methods, reporting, reproducibility, evaluation, and incentives (how to do, report, verify, correct, and reward science). Much work is already done in this growing field, but efforts to date are fragmented. We provide a map of ongoing efforts and discuss plans for connecting the multiple meta-research efforts across science worldwide.


Nature | 2013

Redefine misconduct as distorted reporting

Daniele Fanelli

Against an epidemic of false, biased and falsified findings, the scientific community's defences are weak. Only the most egregious cases of misconduct are discovered and punished. Subtler forms slip through the net, and there is no protection from publication bias. Delegates from around the world will discuss solutions to these problems at the 3rd World Conference on Research Integrity (wcri2013.org) in Montreal, Canada, on 5–8 May.

Common proposals, debated in Nature and elsewhere, include improving mentorship and training, publishing negative results, reducing the pressure to publish, pre-registering studies, teaching ethics and ensuring harsh punishments. These are important, but they overestimate the benefits of correcting scientists' minds. We often forget that scientific knowledge is reliable not because scientists are more clever, objective or honest than other people, but because their claims are exposed to criticism and replication. The key to protecting science, therefore, is to strengthen self-correction. Publication, peer review and misconduct investigations should focus less on what scientists do, and more on what they communicate.

What is wrong with current approaches? By defining misconduct in terms of behaviours, as all countries do at present, we have to rely on whistle-blowers to discover it, unless the fabrication is so obvious as to be apparent from papers. It is rare for misconduct to have witnesses, and surveys suggest that when people do know about a colleague's misbehaviour, they rarely report it. Investigators, then, face the arduous task of reconstructing what a scientist did, establishing that the behaviour deviated from accepted practices and determining whether such deviation expressed an intention to deceive. Only the most clear-cut cases are ever exposed.

Take the scandal of Diederik Stapel, the Dutch star psychologist who last year was revealed to have been fabricating papers for almost 20 years. How was this possible? First, Stapel insisted on collecting data by himself, which kept away potential whistle-blowers. Second, researchers had no incentive to replicate his experiments, and when they did, they lacked sufficient information to explain discrepancies. This was mainly because, third, Stapel was free to omit from papers details that would have revealed lies and statistical flaws.

In tackling these issues, a good start would be to redefine misconduct as distorted reporting: ‘any omission or misrepresentation of the information necessary and sufficient to evaluate the validity and significance of research, at the level appropriate to the context in which the research is communicated’. Some might consider this too broad. But it is no more so than the definition of falsification used by the US Office of Science and Technology Policy: “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record”. Unlike this definition, however, mine points unambiguously to misconduct whenever there is a mismatch between what was reported and what was done. Authors should be held accountable for what they write, and for recording what they did.

But who decides what information is necessary and sufficient? That would be experts in each field, who should prepare and update guidelines. This might seem daunting, but such guidelines are already being published for many biomedical techniques, thanks to initiatives such as the EQUATOR Network (equator-network.org) or Minimum Information for Biological and Biomedical Investigations (mibbi.sourceforge.net).

The main task of journal editors and referees would then be to ensure that researchers comply with reporting requirements. They would point authors to the appropriate guidelines, perhaps before the study had started, and make sure that all the requisite details were included. If authors refused or were unable to comply, their paper (or grant application or talk) would be rejected. The publication would indicate which set or sets of guidelines were followed.

By focusing on reporting practices, the community would respect scientific autonomy but impose fairness. A scientist should be free to decide, for example, that ‘fishing’ for statistical significance is necessary. However, guidelines would require a list of every test used, allowing others to infer the risk of false positives. Carefully crafted guidelines could make fabrication and plagiarism more difficult, by requiring the publication of verifiable details. And they could help to uncover questionable practices such as ghost authorship, exploiting subordinates, post hoc hypotheses or dropping outliers.

Graduate students could, in addition to learning the guidelines, train by replicating published studies. Special research funds could be reserved for independent replications of unchallenged claims. The current defence against misconduct is prepared for the wrong sort of attack: the community tries to regulate research like any other profession, but it is different. The reliability of scientific ‘products’ is ensured not by individual practice, but by collective dialogue.


Proceedings of the National Academy of Sciences of the United States of America | 2017

Meta-assessment of bias in science

Daniele Fanelli; Rodrigo Costas; John P. A. Ioannidis

Significance: Science is said to be suffering a reproducibility crisis caused by many biases. How common are these problems, across the wide diversity of research fields? We probed for multiple bias-related patterns in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was on average relatively small. However, we consistently observed that small, early, highly cited studies published in peer-reviewed journals were likely to overestimate effects. We found little evidence that these biases were related to scientific productivity, and we found no difference between biases in male and female researchers. However, a scientist's early-career status, isolation, and lack of scientific integrity might be significant risk factors for producing unreliable results.

Numerous biases are believed to affect the scientific literature, but their actual prevalence across disciplines is unknown. To gain a comprehensive picture of the potential imprint of bias in science, we probed for the most commonly postulated bias-related patterns and risk factors in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was overall relatively small. However, we consistently observed a significant risk of small, early, and highly cited studies to overestimate effects and of studies not published in peer-reviewed journals to underestimate them. We also found at least partial confirmation of previous evidence suggesting that US studies and early studies might report more extreme effects, although these effects were smaller and more heterogeneously distributed across meta-analyses and disciplines. Authors publishing at high rates and receiving many citations were, overall, not at greater risk of bias. However, effect sizes were likely to be overestimated by early-career researchers, those working in small or long-distance collaborations, and those responsible for scientific misconduct, supporting hypotheses that connect bias to situational factors, lack of mutual control, and individual integrity. Some of these patterns and risk factors might have modestly increased in intensity over time, particularly in the social sciences. Our findings suggest that, besides routine caution that published small, highly cited, and earlier studies may yield inflated results, the feasibility and costs of interventions to attenuate biases in the literature might need to be discussed on a discipline-specific and topic-specific basis.
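
One of the bias-related patterns probed above, the small-study effect, is conventionally tested by regressing effect sizes on their standard errors (an Egger-type test), where a positive slope suggests smaller studies report larger effects. A minimal sketch on simulated data, not the authors' analysis:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = rng.integers(20, 400, size=40)                 # fake study sizes
se = 1 / np.sqrt(n)                               # standard errors
effect = 0.1 + 1.5 * se + rng.normal(0, se)       # built-in small-study bias

X = sm.add_constant(se)
fit = sm.OLS(effect, X).fit()
print(f"slope on SE: {fit.params[1]:.2f} (p = {fit.pvalues[1]:.3f})")
```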


PLOS ONE | 2015

Misconduct Policies, Academic Culture and Career Stage, Not Gender or Pressures to Publish, Affect Scientific Integrity

Daniele Fanelli; Rodrigo Costas; Vincent Larivière

The honesty and integrity of scientists is widely believed to be threatened by pressures to publish, unsupportive research environments, and other structural, sociological and psychological factors. Belief in the importance of these factors has inspired major policy initiatives, but evidence to support them is either non-existent or derived from self-reports and other sources that have known limitations. We used a retrospective study design to verify whether risk factors for scientific misconduct could predict the occurrence of retractions, which are usually the consequence of research misconduct, or corrections, which are honest rectifications of minor mistakes. Bibliographic and personal information were collected on all co-authors of papers that were retracted or corrected in 2010–2011 (N=611 and N=2226 papers, respectively) and authors of control papers matched by journal and issue (N=1181 and N=4285 papers, respectively), and were analysed with conditional logistic regression. Results, which avoided several limitations of past studies and are robust to different sampling strategies, support the notion that scientific misconduct is more likely in countries that lack research integrity policies, in countries where individual publication performance is rewarded with cash, in cultures and situations where mutual criticism is hampered, and in the earliest phases of a researcher's career. The hypothesis that males might be prone to scientific misconduct was not supported, and the widespread belief that pressures to publish are a major driver of misconduct was largely contradicted: high-impact and productive researchers, and those working in countries in which pressures to publish are believed to be higher, are less likely to produce retracted papers, and more likely to correct them. Efforts to reduce and prevent misconduct, therefore, might be most effective if focused on promoting research integrity policies, improving mentoring and training, and encouraging transparent communication amongst researchers.
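
The matched case-control design can be illustrated with conditional logistic regression, which conditions out each matched journal/issue set. This sketch uses fabricated toy data and hypothetical covariate names, not the study's variables:

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(3)
rows = []
for s in range(200):                               # 200 matched sets
    for retracted in (1, 0):                       # one case, one control
        rows.append({
            "set": s,                              # matched journal/issue
            "retracted": retracted,
            "early_career": rng.binomial(1, 0.6 if retracted else 0.4),
            "cash_incentive": rng.binomial(1, 0.3 if retracted else 0.2),
        })
df = pd.DataFrame(rows)

fit = ConditionalLogit(df["retracted"],
                       df[["early_career", "cash_incentive"]],
                       groups=df["set"]).fit()
print(np.exp(fit.params))                          # odds ratios
```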


Journal of Insect Physiology | 2001

Nestmate recognition in Parischnogaster striatula (Hymenoptera Stenogastrinae): visual and olfactory recognition cues

P. Zanetti; Francesca R. Dani; S Destri; Daniele Fanelli; Alessandro Massolo; Gloriano Moneti; G Pieraccini; Stefano Turillazzi

The recognition of nestmates from alien individuals is a well-known phenomenon in social insects. In the stenogastrine wasp Parischnogaster striatula, we investigated the ability of females to recognize nestmates and the cues on which such recognition is based. Recognition of nestmates was observed in naturally occurring interactions between wasps approaching a nest and the resident females on that nest. This recognition was confirmed in experiments in which nestmates or alien conspecifics were presented to resident females. In naturally occurring interactions, nestmates generally approach their nest with a direct flight, while aliens usually hover in front of the nest before landing. In experiments in which wasps were presented close to the nest, resident females generally antennated them, indicating that chemical cues are involved. In further experiments, dead alien individuals that had been washed in hexane and then coated with cuticular extracts were recognized by colonies, giving further evidence that chemical cues mediate nestmate recognition. Epicuticular lipids, known to be nestmate recognition cues in social insects, were chemically analysed by GC-MS for 44 P. striatula females from two different populations (13 different colonies). Discriminant analysis was performed on the data for the lipid mixture composition. The discriminant model showed that, in the samples from these two populations, 68.2% and 81.9% of the specimens could be correctly assigned to their colony.
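
The colony-assignment step can be sketched with linear discriminant analysis scored by leave-one-out cross-validation; the composition data below are randomly generated for illustration, not the GC-MS measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(4)
n_colonies, wasps_per_colony, n_compounds = 6, 7, 10
# each colony gets its own mean lipid profile plus individual noise
X = np.vstack([
    rng.normal(loc=rng.normal(0, 1, n_compounds), scale=0.5,
               size=(wasps_per_colony, n_compounds))
    for _ in range(n_colonies)
])
y = np.repeat(np.arange(n_colonies), wasps_per_colony)

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"correctly assigned to colony: {scores.mean():.1%}")
```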

Collaboration


Dive into Daniele Fanelli's collaborations.

Top Co-Authors

Ferric C. Fang

University of Washington


P. Zanetti

University of Florence


Rita Cervo

University of Florence
