
  • Debate
  • Open Access
  • Open Peer Review

Fake facts and alternative truths in medical research

BMC Medical Ethics (BMC series – open, inclusive and trusted) 2018 19:4

  • Received: 15 August 2017
  • Accepted: 16 January 2018
  • Published:



Abstract

Background

Fake news and alternative facts have become commonplace in these so-called “post-factual times.” What about medical research: are scientific facts fake as well? Many recent disclosures have fueled the claim that scientific facts are suspect and that science is in crisis. Scientists appear to engage in facting interests instead of revealing interesting facts. This can be observed in what has been called polarised research, where some researchers continuously publish positive results while others publish negative results on the same issue – even when based on the same data. In order to identify and address this challenge, the objective of this study is to investigate how polarised research produces “polarised facts.” Mammography screening for breast cancer is used as an example.

Main body

The main benefit of mammography screening is reduced breast cancer mortality, while the main harm is overdiagnosis and subsequent overtreatment. Accordingly, the Overdiagnosis to Mortality Reduction Ratio (OMRR) is an estimate of the risk-benefit ratio of mammography screening. As intense interests and strong opinions are involved in debates on mammography screening, one could expect polarisation in published results on OMRR. A literature search identifies eight studies publishing results for OMRR and reveals that OMRR varies 25-fold, from 0.4 to 10. Two experts in polarised research were asked to rank the attitudes to mammography screening of the corresponding authors of the identified publications. The results show a strong correlation between the OMRR and the authors’ attitudes to screening (R = 0.9).


Conclusion

Mammography screening for breast cancer appears to be an exemplary field of strongly polarised research. This is but one example of how scientists’ strong professional interests can polarise research. Instead of revealing interesting facts, researchers may come to fact interests. In order to avoid this and sustain trust in science, researchers should disclose professional, and not only financial, interests when submitting and publishing research.


Keywords

  • Conflict of interest
  • Polarized research
  • Mammography screening
  • Breast cancer
  • Overdiagnosis
  • Mortality


“Science is built of facts the way a house is built of bricks: but an accumulation of facts is no more science than a pile of bricks is a house” (Henri Poincaré).

Fake news and alternative facts have become commonplace in these so-called “post-factual times.” What about medical research? Are scientific facts fake as well? A wide range of scientific results have been shown to be false [1]. Even much-cited studies don’t hold up and are hard to replicate [2–7]. Initially strong effects of clinical interventions reported in highly cited articles are frequently contradicted [8]. Scientific results are fashioned by who finances research [9] and by researchers’ ties to industry [10]. Spoof research is frequently accepted [11], and scientific truth and objectivity are challenged [12, 13]. All this fuels the claim that scientific facts are suspect and that science is in crisis [14].

One source of crisis in science is that facts are based on confirmative empirical testing [15], or that research hypotheses, models, and approaches are directed by strong interests. The latter can be observed in polarised fields of research. Polarisation occurs when “reputable scientists hold radically opposed views leading to the segregation of the scientific community into groups in part constituted by their opposition to other groups in the field. Polarisation goes beyond mere disagreement. It occurs when researchers begin (1) to self-identify as proponents of a particular position that needs to be strongly defended beyond what is supported by the data and (2) to discount arguments and data that would normally be taken as important in a scientific debate” [16]. In polarised research, scientists come to engage in facting interests instead of revealing interesting facts.

Main text

How then are we to identify and address such “polarised facts?” One approach is to reveal polarised research fields and to put polarisation on par with other forms of conflicts of interest in scientific publishing. Let me use mammography screening as an example to illustrate how polarised facts can be investigated. In this field there are two main points of disagreement: a) What is the benefit of mammography screening, e.g., in terms of reduced breast cancer mortality? and b) What is the harm of this type of screening, e.g., in terms of overdiagnosis? Some researchers tend to claim that the mortality reduction is high and the overdiagnosis rate low [17]; others claim that the mortality reduction is moderate and overdiagnosis high [18]. What is at stake is the risk/benefit ratio in a utilitarian perspective. Hence, one way to illustrate the polarisation in this field is to scrutinize the divergence in the Overdiagnosis to Mortality Reduction Ratio (OMRR), that is, the rate of overdiagnosis divided by the rate of mortality reduction. “Overdiagnosis is the term used when a condition is diagnosed that would otherwise not go on to cause symptoms or death” [19]. Mortality from breast cancer is defined as deaths with breast cancer coded as the underlying cause of death, and mortality reduction is defined in terms of reduced breast cancer mortality in a screened group compared to a non-screened group in the assessment of a screening program.
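Since the OMRR is a simple ratio of two rates, the computation can be sketched in a few lines. The following is a minimal illustration in Python, using rate pairs as reported in Table 1; the function name `omrr` is mine, not the study's:

```python
def omrr(overdiagnosis_rate: float, mortality_reduction_rate: float) -> float:
    """Overdiagnosis to Mortality Reduction Ratio: the rate of overdiagnosis
    divided by the rate of reduced breast cancer mortality in the screened group."""
    if mortality_reduction_rate <= 0:
        raise ValueError("mortality reduction rate must be positive")
    return overdiagnosis_rate / mortality_reduction_rate

# Rate pairs as reported in Table 1:
print(omrr(4, 8))   # EUROSCREEN group: 0.5
print(omrr(10, 1))  # Cochrane Collaboration: 10.0
```

A higher OMRR means more overdiagnosed cases per breast cancer death averted, i.e., a less favourable risk/benefit ratio.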

Accordingly, the research questions of this brief study are: What is the OMRR in publicly funded mammography screening programs for women aged 50–69 years? How is this related to the corresponding authors’ attitudes towards screening? A straightforward literature search identifies eight studies that have addressed the first question. The studies and their results are shown in Table 1.
Table 1

Overdiagnosis to mortality reduction ratio (OMRR) for various studies and the corresponding authors’ attitudes to mammography screening as assessed by experts in polarised research (1: Very negative to screening, 2: Negative to screening, 3: Neutral to screening, 4: Positive to screening, 5: Very positive to screening)

Institution/corresponding author | OMRR | Attitudes to screening | References
EUROSCREEN group/Dr. Eugenio Paci | 4:8 = 0.5 | | [25, 26]
Florentine screening program/Dr. Eugenio Paci | 6:10 = 0.6 | |
The Norwegian Research Council (NRC)/Professor Roar Johnsen | 5:1 = 5 | |
The Norwegian Breast Cancer Screening Program (NBCSP)/Professor Solveig Hofvind | 17:10 = 1.7 | |
Cochrane Collaboration/Director Peter Gøtzsche | 10:1 = 10 | | [30, 31]
The Swedish Two-County randomized trial of mammographic screening for breast cancer | 4.3:8.8 = 0.5 | |
The UK Breast Screening Programme in England/Dr. Prue C Allgood | 2.3:5.7 = 0.4 | |
Marmot report (UK)/Professor Sir Michael Marmot | 3:1 = 3 | |
U.S. Preventive Services Task Force (USPSTF)/Dr. Albert L. Siu | 19:7 = 2.71 | |

In order to assess the researchers’ attitudes to screening, specific questions suggested for identifying “polarised conflict of interest” were adapted to this particular case and sent to the corresponding authors of the identified publications. However, the corresponding authors found it difficult to answer the questions. As expected, “researchers within a polarised group in a polarised field may not themselves be able to identify the field as polarised or see themselves as belonging to a polarised group”. In order to overcome this problem, two experts on polarised conflict of interest were asked to classify the corresponding authors of the identified publications. Inclusion criteria for these experts were that they were experts on science ethics in general and polarised research in particular; exclusion criteria were involvement in mammography screening programs or their primary evaluations. The research question and the included articles were not revealed to them. For a description of the literature search, the questions to the authors, the questions to the experts, and a discussion of the applied methods, see Additional file 1. The classification of the corresponding authors’ attitudes is given in Table 1.

The correlation between the OMRR and the authors’ attitudes to screening as assessed by experts in polarised research was strong (R = 0.9). The scatter plot is shown in Fig. 1.
Fig. 1

Scatter plot of the relationship between OMRR and attitudes to screening. (1: Very negative to screening, 2: Negative to screening, 3: Neutral to screening, 4: Positive to screening, 5: Very positive to screening)
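The reported correlation is a standard Pearson coefficient; a minimal sketch of how such a value is computed follows, in Python. Only the OMRR values below come from Table 1; the attitude scores are hypothetical placeholders on the 1–5 scale, chosen solely to illustrate the computation, and are not the study's actual expert ratings:

```python
from math import sqrt

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# OMRR values from Table 1; the attitude scores are HYPOTHETICAL
# illustrations on the 1-5 scale, not the study's expert ratings.
omrr_values = [0.5, 0.6, 5.0, 1.7, 10.0, 0.5, 0.4, 3.0, 2.71]
attitudes = [4, 4, 2, 4, 1, 5, 4, 3, 3]
print(pearson_r(omrr_values, attitudes))  # illustrative value only
```

Note that with this coding (a higher score meaning more positive to screening), a strong association appears as a negative coefficient, since authors rated positive to screening report low OMRR; the study reports the strength of the association as R = 0.9.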

This indicates that research results in this field are strongly formed by professional interests and attitudes to screening. The same effect of “facted interests” may be observed in other fields of polarised research, and in-depth studies are needed and encouraged.

Of course, personal interest can be a good thing in science. It can motivate important and ground-breaking research. However, it can also bias judgments, cloak influences, direct methodological choices, and skew the presentation of results. Moreover, framed facts can influence important health policy decisions. Accordingly, it is crucial to acknowledge that this represents “genuine conflicts of interest” threatening “the objectivity of science” [12] and trust in science. This can be done by a) making researchers state their “polarised conflict of interest” when submitting manuscripts, b) making reviewers explicitly assess polarisation, and c) applying external experts to assess polarisation when reviewers (and/or editors) are too ingrained in the research to make the assessment.

Polarisation may be a general trend resulting from disagreements on research methodology or on the assessment of evidence (according to GRADE or other frameworks). However, it may also result from self-interest [20], intellectual laziness [21, 22], mental shortcuts, or hyper-partisanship [23]. Moreover, emotional conflicts of interest are more difficult to handle than financial conflicts [24].

While philosophers of science and sociologists have long revealed the challenges of value-laden facts and underscored the constitutive value of disinterestedness in science [12], it is high time we scientists acknowledge this in practice.


Conclusion

Scientists appear to engage in facting interests as much as in revealing interesting facts. Published research on mammography screening for breast cancer illustrates the problem of science being directed by strong professional interests, where some researchers continuously publish positive results while others publish negative results on the same issue – even when based on the same data. Analysing this as polarised research may provide a way to address an important issue threatening to undermine trust in scientific results and medical researchers. Hence, editors should a) make researchers state their “polarised conflict of interest” when submitting manuscripts, b) make reviewers explicitly assess polarisation, and c) apply external experts to assess polarisation when reviewers (and/or editors) are too ingrained in the research to make the assessment.

How exactly to assess polarised conflict of interest may need more elaboration and collaborative work. However, Table 2 suggests some questions to ask when assessing polarised conflict of interest. This is a first step illustrating methodological and empirical feasibility.
Table 2

Relevant questions to ask when assessing polarised conflict of interest

Editors




Is the topic or the field of the submitted manuscript subject to significant controversy (with respect to methods, results, conclusions, or recommendations)?

Which are the groups (the “poles”) and what do they disagree on?

Where does the manuscript lie with respect to these groups (poles)?

Do the suggested or considered reviewers belong to the same pole as the authors?

Can you find qualified reviewers that are independent of the identified groups?

Do the authors state their polarised conflict of interest?

Do you or co-editors have a specific stance on the controversy? If yes, how will you handle this? (stating conflict of interest, using alternative editors etc)

Reviewers and Editors

Based on your expertise in this field, are there groups with competing views on methods, theories, outcomes, and/or policies in the field (of the manuscript)? (Polarisation awareness)

If yes, do you and the author(s) belong to the same group? (Polarisation identification)

Based on your reading of the manuscript, if the results, conclusion or recommendations of the study were the opposite (data and methods being the same) would you assess the manuscript differently? (Own stance in polarisation)

Authors


“If the results of your current (well planned and well conducted) project point in the opposite direction of the results of your previous research on this topic, would your first reaction be to reanalyse the data and reconsider your methods, or to reconsider your previous conclusions?” (Result polarisation)

“If your findings were the exact same as the opposing researchers in this field of research, would your policy recommendations be any different from the recommendations of the opposing group?” (Interpretation polarisation)

When outcome measures calculated from your results (e.g., risk/benefit ratios) depend on the methods, models, or evidence criteria that you use, would you still use the same methods, models, or evidence criteria if the outcome measures were very different (opposing)? (Methods polarisation)

Is your institution, department, or organization providing services related to your research? If yes, do you find it appropriate to proclaim “nothing to declare” in the conflict of interest statement? (Affiliation polarisation)



Abbreviations

EUROSCREEN: The European Screening Network
NBCSP: The Norwegian Breast Cancer Screening Program
NRC: The Norwegian Research Council
OMRR: Overdiagnosis to Mortality Reduction Ratio
USPSTF: U.S. Preventive Services Task Force



Acknowledgements

I am most thankful for the responses from the authors of the cited studies and to the two experts in polarised research for their help with classifying the research. I am also thankful for thoughtful comments and suggestions from the reviewers.


Funding

I have not received any external funding for this research.

Availability of data and materials

All applied data are available in the publication.

Author's contributions

I am the sole author of this text and have written the whole text for which I carry the full responsibility.

Ethics approval and consent to participate

All named contributors provided written consent to participate in the study.

Consent for publication

All authors consented to having their data (their opinion on screening) published.

Competing interests

I have no conflict of interest to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

The Institute for the Health Sciences, Norwegian University of Science and Technology (NTNU), PO Box 1, N-2802 Gjøvik, Norway
Centre for Medical Ethics, University of Oslo, Oslo, Norway


References

  1. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124.
  2. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10(9):712.
  3. Begley CG, Ioannidis JP. Reproducibility in science. Circ Res. 2015;116(1):116–26.
  4. Begley CG. Reproducibility: six red flags for suspect work. Nature. 2013;497(7450):433–4.
  5. Mobley A, Linder SK, Braeuer R, Ellis LM, Zwelling L. A survey on data reproducibility in cancer research provides insights into our limited ability to translate findings from the laboratory to the clinic. PLoS One. 2013;8(5):e63221.
  6. Errington TM, Iorns E, Gunn W, Tan FE, Lomax J, Nosek BA. An open investigation of the reproducibility of cancer biology research. eLife. 2014;3:e04333.
  7. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716.
  8. Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA. 2005;294(2):218–28.
  9. Bhandari M, Busse JW, Jackowski D, Montori VM, Schunemann H, Sprague S, Mears D, Schemitsch EH, Heels-Ansdell D, Devereaux PJ. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ. 2004;170(4):477–80.
  10. Ahn R, Woodbridge A, Abraham A, Saba S, Korenstein D, Madden E, Boscardin WJ, Keyhani S. Financial ties of principal investigators and randomized controlled trial outcomes: cross sectional study. BMJ. 2017;356:i6770.
  11. Bohannon J. Who's afraid of peer review? Science. 2013;342(6154):60–5.
  12. Ziman J. Is science losing its objectivity? Nature. 1996;382(6594):751–4.
  13. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(3):485–514.
  14. Benessia A, Funtowicz S, Giampietro M, Pereira ÂG, Ravetz J, Saltelli A, Strand R, van der Sluijs JP. Science on the verge. The rightful place of science series. Tempe, AZ and Washington, DC: Consortium for Science, Policy & Outcomes; 2016.
  15. Popper K. Science: conjectures and refutations. In: McGrew T, Alspector-Kelly M, Allhoff F, editors. The philosophy of science: an historical anthology. Oxford: Wiley; 2009. p. 471–88.
  16. Ploug T, Holm S. Conflict of interest disclosure and the polarisation of scientific communities. J Med Ethics. 2015;41(4):356–8.
  17. Duffy SW, Tabar L, Olsen AH, Vitak B, Allgood PC, Chen TH, Yen AM, Smith RA. Absolute numbers of lives saved and overdiagnosis in breast cancer screening, from a randomized trial and from the Breast Screening Programme in England. J Med Screen. 2010;17(1):25–30.
  18. Gøtzsche PC, Jørgensen KJ, Zahl P-H, Mæhlen J. Why mammography screening has not lived up to expectations from the randomised trials. Cancer Causes Control. 2012;23(1):15–21.
  19. Welch HG, Black WC. Overdiagnosis in cancer. J Natl Cancer Inst. 2010;102(9):605–13.
  20. Moore DA, Loewenstein G. Self-interest, automaticity, and the psychology of conflict of interest. Soc Justice Res. 2004;17(2):189–202.
  21. Earp BD, Hauskeller M. Binocularity in bioethics—and beyond: a review of Erik Parens, Shaping Our Selves: On Technology, Flourishing, and a Habit of Thinking. Am J Bioeth. 2016;16(2):W3–W6.
  22. Parens E. Shaping our selves: on technology, flourishing, and a habit of thinking. USA: Oxford University Press; 2014.
  23. Earp BD. The unbearable asymmetry of bullshit. Health Watch. 2016;101:4–5.
  24. Brawley OW, O'Regan RM. Breast cancer screening: time for rational discourse. Cancer. 2014;120(18):2800–2.
  25. Paci E. Summary of the evidence of breast cancer service screening outcomes in Europe and first estimate of the benefit and harm balance sheet. J Med Screen. 2012;19(Suppl 1):5–13.
  26. Paci E, Broeders M, Hofvind S, Puliti D, Duffy SW, EUROSCREEN Working Group. European breast cancer service screening outcomes: a first balance sheet of the benefits and harms. Cancer Epidemiol Biomark Prev. 2014;23(7):1159–63.
  27. Puliti D, Miccinesi G, Zappa M, Manneschi G, Crocetti E, Paci E. Balancing harms and benefits of service mammography screening programs: a cohort study. Breast Cancer Res. 2012;14(1):1.
  28. The Research Council of Norway. Research-based evaluation of the Norwegian Breast Cancer Screening Program. Final report. Oslo: The Research Council of Norway; 2015.
  29. Hofvind S, Roman M, Sebuodegard S, Falk RS. Balancing the benefits and detriments among women targeted by the Norwegian breast cancer screening program. J Med Screen. 2016;23(4):203–9.
  30. Gøtzsche P, Nielsen M. Screening for breast cancer with mammography. Cochrane Database Syst Rev. 2011.
  31. Gøtzsche PC, Jørgensen KJ. Screening for breast cancer with mammography. Cochrane Database Syst Rev. 2013;(6).
  32. Independent UK Panel on Breast Cancer Screening. The benefits and harms of breast cancer screening: an independent review. Lancet. 2012;380(9855):1778–86.
  33. US Preventive Services Task Force. Final recommendation statement: breast cancer: screening. Rockville, MD: USPSTF; 2016.


© The Author(s). 2018