Publication Bias



Publication bias results in it being easier to find studies with a ‘positive’ result

What is publication bias?

Systematic reviews aim to find and assess for inclusion all high quality studies addressing the question of the review. But finding all studies is not always possible and we have no way of knowing what we have missed. Does it matter if we miss some of the studies? It will certainly matter if the studies we have failed to find differ systematically from the ones we have found. Not only will we have less information available than if we had all the studies, but we might come up with the wrong answer if the studies we have are unrepresentative of all those that have been done.

We have good reason to be concerned about this, as many researchers have shown that those studies with significant, positive, results are easier to find than those with non-significant or ‘negative’ results. The subsequent over-representation of positive studies in systematic reviews may mean that our reviews are biased toward a positive result.

Publication bias is just one of a group of related biases, collectively termed reporting bias, that can lead to over-representation of significant or positive studies in systematic reviews. We have quite a lot of evidence that these biases exist, so it is fair to assume that most systematic reviews will be subject to reporting bias to some extent.

Publication bias and other related biases can be summarised as statistically significant, ‘positive’ results being:

  • more likely to be published (publication bias)
  • more likely to be published rapidly (time lag bias)
  • more likely to be published in English (language bias)
  • more likely to be published more than once (multiple publication bias)
  • more likely to be cited by others (citation bias)

All of these reporting biases make positive studies easier to find than those with non-significant results, something that we can try to minimise by extensive searching.
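The effect these biases have on a review can be illustrated with a small simulation (standard library only; the setup and numbers are illustrative, not from any real review). Every simulated trial estimates a true effect of zero, but if only the trials that reach conventional statistical significance in the positive direction get "published", the published literature shows a clear effect that does not exist:

```python
import random
import statistics

random.seed(0)

def simulate_study(true_effect=0.0, n=50):
    """Simulate one two-arm trial; return (effect estimate, z statistic)."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / n + statistics.variance(control) / n) ** 0.5
    return diff, diff / se

studies = [simulate_study() for _ in range(2000)]
all_effects = [d for d, z in studies]
# Only studies that cross the conventional significance threshold get 'published'.
published = [d for d, z in studies if z > 1.96]

print(f"mean effect, all studies:       {statistics.mean(all_effects):+.3f}")
print(f"mean effect, 'published' only:  {statistics.mean(published):+.3f}")
```

The mean across all simulated studies sits near zero, while the mean of the "published" subset is clearly positive, which is exactly why extensive searching for unpublished and non-significant studies matters.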

Managing publication bias

If we accept that a review will almost certainly be subject to publication bias to some extent, we are left with the problem of estimating how big a problem it is in your review, and what to do about it. There are several methods for gauging this; the one available in RevMan is the funnel plot. It is provided in RevMan as a tool for reviewers, but does not appear on The Cochrane Library. This means you should use the funnel plot option to investigate the presence of publication bias in your review and then address it in the Discussion section of the text of your review. If you suspect there may be a problem, you need to bear this in mind when making your conclusions and recommendations: the likeliest scenario is that the results of your review are biased toward the positive.

Scientists usually communicate their newest findings by publishing results as scientific papers in journals that are often available online (albeit frequently at a price), ensuring rapid sharing of the latest knowledge.

However, negative findings – those that do not agree with what the researchers hypothesised – are often overlooked, discouraged, or simply never submitted for publication.

Yet negative findings can save researchers valuable time and resources by preventing them from repeating experiments that have already been done, so it is important that all results, whatever the outcome, are published (Dual Diagnosis).

Published vs. Unpublished Studies

“Published” means that the research has appeared in a peer-reviewed journal. Studies are more likely to be published if they have positive findings, support previously accepted hypotheses, and can potentially attract citations for the journal (e.g. if they have striking findings). Studies are less likely to be published if they do not add to previously published data or if they refute a previously published hypothesis.

Around 50% of studies are estimated to be unpublished. Generally, those studies are more likely to have less significant or negative results; that does not mean the results are invalid – simply that journals are less likely to publish an article, or will delay publication, if a treatment is shown to have no effect. For instance, a major study which showed a deworming programme in India was ineffective for reducing mortality or improving weight gain was delayed from publication for eight years (Hawkes).


This “swept under the carpet” phenomenon occurs when negative results are withheld from publication, which may be deliberate or unintentional. As well as fraud, study sponsors may offer incentives in a deliberate effort to alter findings, and journal editors may be more inclined to publish studies that will sell copies or bring other benefits.

Related Biases

Publication bias refers to a whole study being excluded. Related biases include:

  • Citation bias: literature sources are found by scanning reference lists of published articles, so less-cited sources are more likely to be left out of a meta-analysis.
  • Dissemination bias: when the nature or direction of a study’s results affects how fully it is reported.
  • Grey-literature bias: disregarding literature that is harder to locate, such as government reports or unpublished scientific studies.
  • Language bias: the exclusion of foreign-language studies from your analysis.
  • Media attention bias: studies that appear in the news are more likely to be included in reviews than those that do not.
  • Outcome-reporting bias: when positive outcomes are more likely to be included in a meta-analysis than negative outcomes; negative outcomes can also be misrepresented as positive ones.
  • Time-lag bias: studies with significant results have a shorter average time to publication (4.7 years) than those with non-significant results (8.0 years).