
Fake facts and alternative truths in medical research

Abstract

Background

Fake news and alternative facts have become commonplace in these so-called “post-factual times.” What about medical research: are scientific facts fake as well? Many recent disclosures have fueled the claim that scientific facts are suspect and that science is in crisis. Scientists appear to engage in facting interests instead of revealing interesting facts. This can be observed in what has been called polarised research, where some researchers continuously publish positive results while others publish negative results on the same issue – even when based on the same data. In order to identify and address this challenge, the objective of this study is to investigate how polarised research produces “polarised facts.” Mammography screening for breast cancer is used as an example.

Main body

The main benefit of mammography screening is reduced breast cancer mortality, while the main harm is overdiagnosis and subsequent overtreatment. Accordingly, the Overdiagnosis to Mortality Reduction Ratio (OMRR) is an estimate of the risk–benefit ratio of mammography screening. As debates on mammography screening involve intense interests as well as strong opinions, one could expect polarisation in published results on OMRR. A literature search identifies 8 studies publishing results for OMRR and reveals that OMRR varies 25-fold, from 0.4 to 10. Two experts in polarised research were asked to rank the corresponding authors of the identified publications according to their attitudes to mammography screening. The results show a strong correlation between the OMRR and the authors’ attitudes to screening (R = 0.9).

Conclusion

Mammography screening for breast cancer appears to be an exemplary field of strongly polarised research. This is but one example of how scientists’ strong professional interests can polarise research. Instead of revealing interesting facts, researchers may come to fact interests. In order to avoid this and to sustain trust in science, researchers should disclose professional, and not only financial, interests when submitting and publishing research.


Background

“Science is built of facts the way a house is built of bricks; but an accumulation of facts is no more science than a pile of bricks is a house” (Henri Poincaré).

Fake news and alternative facts have become commonplace in these so-called “post-factual times.” What about medical research? Are scientific facts fake as well? A wide range of scientific results have been shown to be false [1]. Even much-cited studies do not hold up and are hard to replicate [2,3,4,5,6,7]. Initially strong effects of clinical interventions reported in highly cited articles are frequently contradicted [8]. Scientific results are shaped by who finances the research [9] and by researchers’ ties to industry [10]. Spoof research is frequently accepted [11], and scientific truth and objectivity are challenged [12, 13]. All this fuels the claim that scientific facts are suspect and that science is in crisis [14].

One source of crisis in science is that facts are based on confirmative empirical testing [15], or that research hypotheses, models, and approaches are directed by strong interests. The latter can be observed in polarised fields of research. Polarisation occurs when “reputable scientists hold radically opposed views leading to the segregation of the scientific community into groups in part constituted by their opposition to other groups in the field. Polarisation goes beyond mere disagreement. It occurs when researchers begin (1) to self-identify as proponents of a particular position that needs to be strongly defended beyond what is supported by the data and (2) to discount arguments and data that would normally be taken as important in a scientific debate” [16]. In polarised research, scientists come to engage in facting interests instead of revealing interesting facts.

Main text

How then are we to identify and address such “polarised facts?” One approach is to reveal polarised research fields and to put polarisation on a par with other forms of conflicts of interest in scientific publishing. Let me use mammography screening as an example to illustrate how polarised facts can be investigated. In this field there are two main points of disagreement: a) What is the benefit of mammography screening, e.g., in terms of reduced breast cancer mortality, and b) what is the harm of this type of screening, e.g., in terms of overdiagnosis? Some researchers tend to claim that the mortality reduction is high while the overdiagnosis rate is low [17], whereas others claim that the mortality reduction is moderate while overdiagnosis is high [18]. What is at stake is the risk–benefit ratio in a utilitarian perspective. Hence, one way to illustrate the polarisation in this field is to scrutinize the divergence in the Overdiagnosis to Mortality Reduction Ratio (OMRR), that is, the rate of overdiagnosis divided by the rate of mortality reduction. “Overdiagnosis is the term used when a condition is diagnosed that would otherwise not go on to cause symptoms or death” [19]. Mortality from breast cancer is defined as deaths with breast cancer coded as the underlying cause of death, and mortality reduction is defined as the reduction in breast cancer mortality in a screened group compared with a non-screened group in the assessment of a screening program.
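Spelled out, the measure is a simple ratio. The worked example below is purely illustrative; the numbers are not taken from any of the studies discussed here.

```latex
\[
\mathrm{OMRR} = \frac{\text{rate of overdiagnosis}}{\text{rate of breast cancer mortality reduction}}
\]
% Purely illustrative: if a program were estimated to overdiagnose 4 women
% and to avert 2 breast cancer deaths per 1000 women screened, then
\[
\mathrm{OMRR} = \frac{4}{2} = 2,
\]
% i.e., two women would be overdiagnosed for every breast cancer death averted.
```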

Accordingly, the research questions of this brief study are: What is the OMRR in publicly funded mammography screening programs for women aged 50–69 years? How is it related to the corresponding authors’ attitudes towards screening? A straightforward literature search identifies 8 studies that have addressed the first question. The studies and their results are shown in Table 1.

Table 1 Overdiagnosis to mortality reduction ratio (OMRR) for the included studies and the corresponding authors’ attitudes to mammography screening as assessed by experts in polarised research (1: Very negative to screening, 2: Negative to screening, 3: Neutral to screening, 4: Positive to screening, 5: Very positive to screening)

In order to assess the researchers’ attitudes to screening, specific questions suggested to identify “polarised conflict of interest” were adapted to this particular case and sent to the corresponding authors of the identified publications. However, the corresponding authors found it difficult to answer the questions. As expected, “researchers within a polarised group in a polarised field may not themselves be able to identify the field as polarised or see themselves as belonging to a polarised group”. In order to overcome this problem, two experts on polarised conflict of interest were asked to classify the corresponding authors of the identified publications. The inclusion criterion for these experts was expertise in science ethics in general and in polarised research in particular; the exclusion criterion was prior involvement in mammography screening programs or their primary evaluations. The research question and the included articles were not revealed to the experts. For a description of the literature search, the questions to the authors, and the questions to the experts, as well as a discussion of the applied methods, see Additional file 1. The classification of the corresponding authors is given in Table 1.
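As a minimal sketch of how two experts’ 1–5 ratings might be combined into a single attitude score per corresponding author, consider the snippet below. The study labels and ratings are invented for illustration, and the paper itself does not specify an aggregation rule.

```python
# Minimal sketch: combining two experts' attitude ratings on the 1-5 scale.
# The study labels and ratings are invented for illustration; they are not
# the actual Table 1 data, and averaging is an assumed aggregation rule.

ratings = {
    "study_A": (1, 2),  # (expert 1, expert 2)
    "study_B": (5, 4),
    "study_C": (3, 3),
}

for study, (r1, r2) in ratings.items():
    mean_attitude = (r1 + r2) / 2   # combined attitude score
    disagreement = abs(r1 - r2)     # simple inter-rater agreement check
    print(f"{study}: mean attitude {mean_attitude:.1f}, disagreement {disagreement}")
```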

The correlation between the OMRR and the authors’ attitudes to screening, as assessed by the experts in polarised research, was strong (R = 0.9). The scatter plot is shown in Fig. 1.

Fig. 1 Scatter plot of the relationship between OMRR and attitudes to screening (1: Very negative to screening, 2: Negative to screening, 3: Neutral to screening, 4: Positive to screening, 5: Very positive to screening)
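The correlation itself is straightforward to reproduce in principle. The sketch below computes a Pearson correlation between attitude scores and OMRR values; the eight data pairs are invented placeholders rather than the values from Table 1, and the sign of the coefficient depends on how the attitude scale is coded.

```python
# Minimal sketch of the correlation analysis (requires Python 3.10+).
# The (attitude, OMRR) pairs are invented placeholders standing in for the
# eight studies in Table 1; they are NOT the published values.
from statistics import correlation

attitude = [1, 1, 2, 3, 3, 4, 5, 5]               # expert-rated attitude (1-5)
omrr = [10.0, 8.0, 5.0, 2.5, 2.0, 1.0, 0.5, 0.4]  # overdiagnosis / mortality reduction

r = correlation(attitude, omrr)  # Pearson's r
print(f"Pearson R = {r:.2f}")    # strong association; the sign follows the scale coding
```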

This indicates that research results in this field are strongly shaped by professional interests and attitudes to screening. The same effect of “facted interests” may be observed in other fields of polarised research, and in-depth studies are needed and encouraged.

Of course, personal interest can be a good thing in science. It can motivate important and ground-breaking research. However, it can also bias judgments, cloak influences, direct methodological choices, and skew the presentation of results. Moreover, framed facts can influence important health policy decisions. Accordingly, it is crucial to acknowledge that this represents “genuine conflicts of interest” threatening “the objectivity of science” [12] and trust in science. This can be done by a) making researchers state their “polarised conflict of interest” when submitting manuscripts, b) making reviewers explicitly assess polarisation, and c) applying external experts to assess polarisation when reviewers (and/or editors) are too ingrained in the research to make the assessment.

Polarisation may be a general trend resulting from disagreements on research methodology or on the assessment of evidence (according to GRADE or other frameworks). However, it may also result from self-interest [20], intellectual laziness [21, 22], mental shortcuts, or hyper-partisanship [23]. Moreover, emotional conflicts of interest are more difficult to handle than financial conflicts [24].

While philosophers of science and sociologists have long revealed the challenges of value-laden facts and underscored the constitutive value of disinterestedness in science [12], it is high time we scientists acknowledge this in practice.

Conclusion

Scientists appear to engage in facting interests as much as in revealing interesting facts. Published research on mammography screening for breast cancer illustrates the problem of science being directed by strong professional interests, where some researchers continuously publish positive results while others publish negative results on the same issue – even when based on the same data. Analysing this as polarised research may provide a way to address an important issue that threatens to undermine trust in scientific results and medical researchers. Hence, editors should a) make researchers state their “polarised conflict of interest” when submitting manuscripts, b) make reviewers explicitly assess polarisation, and c) apply external experts to assess polarisation when reviewers (and/or editors) are too ingrained in the research to make the assessment.

Exactly how to assess polarised conflict of interest may need further elaboration and collaborative work. However, Table 2 suggests some questions to ask when assessing polarised conflict of interest. This study is a first step illustrating methodological and empirical feasibility.

Table 2 Relevant questions to ask when assessing polarised conflict of interest

Abbreviations

EUROSCREEN:

The European screening network

NBCSP:

The Norwegian breast cancer screening program

NRC:

The Norwegian research council

OMRR:

Overdiagnosis to mortality reduction ratio

USPSTF:

U.S. Preventive Services Task Force

References

  1. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124.

  2. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10(9):712.

  3. Begley CG, Ioannidis JP. Reproducibility in science. Circ Res. 2015;116(1):116–26.

  4. Begley CG. Reproducibility: six red flags for suspect work. Nature. 2013;497(7450):433–4.

  5. Mobley A, Linder SK, Braeuer R, Ellis LM, Zwelling L. A survey on data reproducibility in cancer research provides insights into our limited ability to translate findings from the laboratory to the clinic. PLoS One. 2013;8(5):e63221.

  6. Errington TM, Iorns E, Gunn W, Tan FE, Lomax J, Nosek BA. An open investigation of the reproducibility of cancer biology research. eLife. 2014;3:e04333.

  7. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716.

  8. Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA. 2005;294(2):218–28.

  9. Bhandari M, Busse JW, Jackowski D, Montori VM, Schunemann H, Sprague S, Mears D, Schemitsch EH, Heels-Ansdell D, Devereaux PJ. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ. 2004;170(4):477–80.

  10. Ahn R, Woodbridge A, Abraham A, Saba S, Korenstein D, Madden E, Boscardin WJ, Keyhani S. Financial ties of principal investigators and randomized controlled trial outcomes: cross sectional study. BMJ. 2017;356:i6770.

  11. Bohannon J. Who's afraid of peer review? Science. 2013;342(6154):60–5.

  12. Ziman J. Is science losing its objectivity? Nature. 1996;382(6594):751–4.

  13. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(3):485–514.

  14. Benessia A, Funtowicz S, Giampietro M, Pereira ÂG, Ravetz J, Saltelli A, Strand R, van der Sluijs JP. Science on the verge. The Rightful Place of Science series. Tempe, AZ, and Washington, DC: Consortium for Science, Policy & Outcomes; 2016.

  15. Popper K. Science: conjectures and refutations. In: McGrew T, Alspector-Kelly M, Allhoff F, editors. The philosophy of science: an historical anthology. Oxford: Wiley; 2009. p. 471–88.

  16. Ploug T, Holm S. Conflict of interest disclosure and the polarisation of scientific communities. J Med Ethics. 2015;41(4):356–8.

  17. Duffy SW, Tabar L, Olsen AH, Vitak B, Allgood PC, Chen TH, Yen AM, Smith RA. Absolute numbers of lives saved and overdiagnosis in breast cancer screening, from a randomized trial and from the breast screening programme in England. J Med Screen. 2010;17(1):25–30.

  18. Gøtzsche PC, Jørgensen KJ, Zahl P-H, Mæhlen J. Why mammography screening has not lived up to expectations from the randomised trials. Cancer Causes Control. 2012;23(1):15–21.

  19. Welch HG, Black WC. Overdiagnosis in cancer. J Natl Cancer Inst. 2010;102(9):605–13.

  20. Moore DA, Loewenstein G. Self-interest, automaticity, and the psychology of conflict of interest. Soc Justice Res. 2004;17(2):189–202.

  21. Earp BD, Hauskeller M. Binocularity in bioethics—and beyond: a review of Erik Parens, Shaping our selves: on technology, flourishing, and a habit of thinking. Am J Bioeth. 2016;16(2):W3–W6.

  22. Parens E. Shaping our selves: on technology, flourishing, and a habit of thinking. USA: Oxford University Press; 2014.

  23. Earp BD. The unbearable asymmetry of bullshit. Health Watch. 2016;101:4–5.

  24. Brawley OW, O'Regan RM. Breast cancer screening: time for rational discourse. Cancer. 2014;120(18):2800–2.

  25. Paci E. Summary of the evidence of breast cancer service screening outcomes in Europe and first estimate of the benefit and harm balance sheet. J Med Screen. 2012;19(Suppl 1):5–13.

  26. Paci E, Broeders M, Hofvind S, Puliti D, Duffy SW, EUROSCREEN Working Group. European breast cancer service screening outcomes: a first balance sheet of the benefits and harms. Cancer Epidemiol Biomark Prev. 2014;23(7):1159–63.

  27. Puliti D, Miccinesi G, Zappa M, Manneschi G, Crocetti E, Paci E. Balancing harms and benefits of service mammography screening programs: a cohort study. Breast Cancer Res. 2012;14(1):1.

  28. The Research Council of Norway. Research-based evaluation of the Norwegian Breast Cancer Screening Program. Final report. Oslo: The Research Council of Norway; 2015.

  29. Hofvind S, Roman M, Sebuodegard S, Falk RS. Balancing the benefits and detriments among women targeted by the Norwegian breast cancer screening program. J Med Screen. 2016;23(4):203–9.

  30. Gøtzsche PC, Nielsen M. Screening for breast cancer with mammography. Cochrane Database Syst Rev. 2011.

  31. Gøtzsche PC, Jørgensen KJ. Screening for breast cancer with mammography. Cochrane Database Syst Rev. 2013;(6). https://doi.org/10.1002/14651858.CD001877.pub5.

  32. Independent UK Panel on Breast Cancer Screening. The benefits and harms of breast cancer screening: an independent review. Lancet. 2012;380(9855):1778–86.

  33. US Preventive Services Task Force. Final recommendation statement: breast cancer: screening. Rockville, MD: USPSTF; 2016.


Acknowledgements

I am most thankful for the responses from the authors of the cited studies, and to the two experts in polarised research for their help with classifying the research. I am also thankful for thoughtful comments and suggestions from the reviewers.

Funding

I have not received any external funding for this research.

Availability of data and materials

All applied data are available in the publication.

Author information


Contributions

I am the sole author of this text and have written the whole text, for which I carry full responsibility.

Corresponding author

Correspondence to Bjørn Hofmann.

Ethics declarations

Ethics approval and consent to participate

All named contributors provided written consent to participate in the study.

Consent for publication

All authors consented to having their data (their opinion on screening) published.

Competing interests

I have no conflict of interest to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Detailed information on the method and a discussion of the method and results. (PDF 145 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.



Cite this article

Hofmann, B. Fake facts and alternative truths in medical research. BMC Med Ethics 19, 4 (2018). https://doi.org/10.1186/s12910-018-0243-z
