Why small meta-analyses miss publication bias
P-value driven methods were underpowered to detect publication bias: analysis of Cochrane review meta-analyses.
Surprising Findings
Bias tests become statistically significant as more studies are included, even when there is no actual publication bias.
Many readers assume a smaller, more significant p-value means stronger evidence of bias, but here the significance grows purely with the number of studies, not with the size of any underlying asymmetry. This contradicts the intuitive interpretation of p-values.
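One way to see this is a minimal simulation sketch (not taken from the paper), assuming an Egger-style regression test built with numpy and statsmodels, and a small, fixed funnel asymmetry that is not caused by selective publication (it could reflect heterogeneity, for instance). The planted asymmetry never changes, yet the test's p-value shrinks as the number of studies grows.

```python
# Sketch: Egger-style intercept test p-values shrink with the number of
# studies even though the planted funnel asymmetry is held fixed.
# Parameter values are illustrative, not from the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulate_meta(k, asymmetry=0.3, true_effect=0.2):
    """Simulate k studies with a small, fixed small-study effect."""
    se = rng.uniform(0.05, 0.5, size=k)                        # study standard errors
    theta = true_effect + asymmetry * se + rng.normal(0, se)   # observed effects
    return theta, se

def egger_pvalue(theta, se):
    """Egger's regression test: intercept of (effect / SE) on precision."""
    z = theta / se
    precision = 1.0 / se
    fit = sm.OLS(z, sm.add_constant(precision)).fit()
    return fit.pvalues[0]                                      # p-value for the intercept

for k in (5, 10, 20, 50, 100):
    pvals = [egger_pvalue(*simulate_meta(k)) for _ in range(2000)]
    print(f"k={k:3d}  median Egger p-value: {np.median(pvals):.3f}")
```

The median p-value falls steadily as k increases even though nothing about the individual studies has changed, which is the sense in which the p-value tracks study count rather than the amount of asymmetry.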
Practical Takeaways
Be skeptical of publication bias tests in meta-analyses with fewer than 20 studies.
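A hedged power sketch of this takeaway, again assuming an Egger-style regression test and an illustrative suppression mechanism (the methods, thresholds, and effect sizes used in the paper may differ): simulate meta-analyses in which non-significant studies are selectively dropped, then count how often the test flags the resulting asymmetry for different numbers of included studies.

```python
# Sketch: how often an Egger-style test detects simulated publication bias
# (selective suppression of non-significant studies) as the number of
# retained studies varies. All numbers are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def biased_meta(k, true_effect=0.2):
    """Draw studies until k 'published' ones are collected; non-significant
    studies are published with only 30% probability."""
    thetas, ses = [], []
    while len(thetas) < k:
        se = rng.uniform(0.1, 0.5)
        theta = true_effect + rng.normal(0, se)
        significant = abs(theta / se) > 1.96
        if significant or rng.random() < 0.3:      # suppression mechanism
            thetas.append(theta)
            ses.append(se)
    return np.array(thetas), np.array(ses)

def egger_detects(theta, se, alpha=0.10):
    z, precision = theta / se, 1.0 / se
    fit = sm.OLS(z, sm.add_constant(precision)).fit()
    return fit.pvalues[0] < alpha

for k in (5, 10, 20, 40):
    hits = sum(egger_detects(*biased_meta(k)) for _ in range(1000))
    print(f"k={k:2d}  detection rate under simulated bias: {hits/1000:.2f}")
```

In a simulation like this the detection rate is lowest for small k, which is the practical reason to distrust a "no bias detected" result from a test applied to a handful of studies.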
Publication
Journal: Journal of Clinical Epidemiology
Year: 2019
Authors: L. Furuya-Kanamori, Chang Xu, Lifeng Lin, T. Doan, H. Chu, L. Thalib, S. Doi
Claims (4)
Putting together lots of small studies gives a better idea of what's really going on in the whole population than looking at just one small study (see the pooling sketch after this list).
The more studies a meta-analysis includes, the more likely its bias tests are to come back 'statistically significant', even if there is no real bias, simply because there are more studies.
Common statistical tests for checking whether a review is missing some results don't work as well when the review contains only a few studies; they missed problems about a third more often when only a few studies were included.
When researchers combine only a few studies in a review, common statistical tests often miss signs that some results were left unpublished, making it look as if there is no bias when there may actually be.
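A small sketch of the first claim, using made-up numbers rather than data from the review: fixed-effect, inverse-variance pooling of several small studies produces a combined estimate whose standard error is smaller than that of any single contributing study.

```python
# Sketch: inverse-variance (fixed-effect) pooling of several small studies.
# The pooled standard error is smaller than any individual study's, which is
# why a meta-analysis estimates the population effect more precisely than a
# single small study. All numbers below are illustrative.
import numpy as np

effects = np.array([0.31, 0.12, 0.25, 0.40, 0.18])   # per-study effect estimates
ses     = np.array([0.20, 0.25, 0.22, 0.30, 0.24])   # per-study standard errors

weights = 1.0 / ses**2                                # inverse-variance weights
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled_effect:.3f}")
print(f"pooled SE:     {pooled_se:.3f}  (smallest single-study SE: {ses.min():.2f})")
```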