- Research article
Expectations for methodology and translation of animal research: a survey of health care workers
BMC Medical Ethics, volume 16, Article number: 29 (2015)
Health care workers (HCW) often perform, promote, and advocate the use of public funds for animal research (AR); therefore, an awareness of the empirical costs and benefits of animal research is an important issue for HCW. We aimed to determine what HCW consider should be acceptable standards of AR methodology and of the rate of translation to humans.
After development and validation, an e-mail survey was sent to all pediatricians and all pediatric intensive care unit nurses and respiratory therapists (RTs) affiliated with a Canadian university. We presented questions about demographics, the methodology of AR, and expectations from AR. Responses of pediatricians and nurses/RTs were compared using the Chi-square test, with P < .05 considered significant.
Response rates were 44/114 (39%) for pediatricians and 69/120 (58%) for nurses/RTs. Asked about methodological quality, most respondents expect that: AR is done to high quality; costs and difficulty are not acceptable justifications for low quality; findings should be reproducible between laboratories and between strains of the same species; and guidelines for AR funded with public money should be consistent with these expectations. Asked about benefits of AR, most thought that there are sometimes/often large benefits to humans from AR, and disagreed that “AR rarely produces benefit to humans.” Asked about expectations of translation to humans (of toxicity, carcinogenicity, teratogenicity, and treatment findings), most: expect translation >40% of the time; thought that misleading AR results should occur <21% of the time; and said that if translation were to occur <20% of the time, they would be less supportive of AR. There were few differences between pediatricians and nurses/RTs.
HCW have high expectations for the methodological quality of AR, and for the rate at which its findings translate to humans. These expectations are higher than the empirical data suggest have been achieved. Unless these areas of AR significantly improve, HCW support of AR may be tenuous.
Biomedical animal research (AR) involves some harm to sentient animals, including distress (due to confinement, boredom, isolation, and fear), pain, and early death [1-3]. AR is said to be morally permissible because the balance of these costs (harms to the animals) and benefits (to human medical care, quality of life, and survival) is favorable. It is generally assumed that the benefits to human medicine are great. An awareness of the empirical costs and benefits of AR is an important issue in medicine for several reasons. Health care workers (HCW) often perform (and are expected to perform) AR, promote AR directly with trainees and indirectly as role models, and advocate for the use of public funds (from granting agencies and charitable foundations) toward medically related AR.
There is a growing literature that raises concerns about the empirical practice of AR in at least two domains. First, the methodological quality of AR is often poor in both experimental design and animal welfare aspects [6-12]. AR publications rarely report the use of eligibility criteria, randomization, allocation concealment, blinding, sample size calculation, primary outcome specification, and study replication [6-10,13,14]. AR publications rarely report performing a systematic review to determine the necessity of the research project, rarely report the use of continuous monitoring of the level of anesthesia or pain control, and often do not report the use of acceptable methods of euthanasia [7,11,12]. Second, the translation rate from AR to humans has been disappointing [15-18]. Extensive AR in the fields of sepsis [19-21], stroke [22,23], spinal cord injury [24,25], traumatic brain injury, cancer [27,28], degenerative neurological diseases [29,30], acquired immunodeficiency syndrome, asthma, and other fields [15-18] has translated to humans in 0-5% of cases. Pharmaceutical companies have found that, of drugs that work well in AR and progress to human clinical trials, only ≤8% are found safe and effective enough for market approval [33,34]. AR to determine the toxicology [17,35-37], carcinogenicity [17,37-39], and teratogenicity [17,40] of drugs or compounds is no more accurate than chance, with concordance rates between species generally <40%.
Since most AR is funded by public money through government and charitable granting agencies, it is important to know the public perception of, and the level of public support for, AR. Surveys of the public find that the majority are ‘conditional acceptors’ of AR; they accept the practice because of the promise of cures and treatments for life-threatening and debilitating human diseases, so long as animal welfare is at least minimally considered and protected. To our knowledge, no survey has asked for the details of this conditional acceptance of AR. In this survey we asked HCW directly what the minimal acceptable standards of AR methodology might be, and what the minimal acceptable rate of translation of AR to human treatments might be. This is important in order to determine how strong the support for the empirical practice of AR is, and how AR could be improved to increase the level of support. We found that HCW have high expectations for the methodological quality of AR, and for the rate at which its findings translate to humans.
All pediatricians and pediatric intensive care unit nurses and respiratory therapists (RTs) who are affiliated with one Canadian university were e-mailed the survey using an electronic, secure, survey distribution and collection system (REDCap, Research Electronic Data Capture). A cover letter stated that “we very much value your opinion on this important issue” and that the survey was anonymous and voluntary. We offered the incentive that if the response rate was at least 70% we would donate $1000 to the Against Malaria Foundation or the PICU Social Committee. Non-responders were sent the survey by e-mail at 3-week intervals for 3 additional mailings.
We followed published recommendations. To generate the items for the questionnaire, we searched Medline from 1980 to 2012 for articles about the methodology and translation of AR. This was followed by collaborative creation of the background section and survey questions by the authors. Content and construct validation were done using a table of specifications filled out by experts, including two ethics/philosophy professors and two pediatricians. Face and content validation were done by pilot testing of the survey by non-medical, university-educated lay people (n = 9), pediatricians (n = 2), pediatric intensive care nurses (n = 2), and an ethics professor (n = 1). Each pilot test was followed by a semi-structured interview by one of the authors to ensure clarity, realism, validity, and ease of completion. A published clinical sensibility tool was used for the expert and pilot testing. After minor modifications, the survey was approved by all the authors.
The background section stated: “In this survey, ‘animals’ means: mammals, such as mice, rats, dogs, and cats. It has been estimated that over 100 million animals are used in the world for research each year. There are many good reasons to justify AR, which is the topic of this survey. Nevertheless, some people argue that these animals are harmed in experimentation, because their welfare is worsened. In this survey, ‘harmful’ means such things as: pain, suffering (disease/injury, boredom, fear, confinement), and early death. This survey is about how AR should be performed. We value your opinion on the very important issue of the methodology of AR.”
We presented demographic questions, 15 questions that asked respondents “about the methods of AR that are commonly discussed by animal researchers”, 4 questions that asked the respondent “to consider what you think the benefits to humans are as a result of AR”, and 8 questions that asked the respondent “for your opinions about what you expect from AR paid for with public funds (for example, funding by government using tax dollars, or charitable foundations using donations).” Response choices included scales of “strongly agree, agree, undecided, disagree, strongly disagree”, “nearly always, often, sometimes, not often, almost never”, and “5-20%, 21-40%, 41-60%, 61-80%, over 80%”, depending on the type of question. All questions are shown in Tables 1, 2, 3, and 4.
The study was approved by Health Research Ethics Board 2 of our university (study ID Pro00039590), and return of a survey was considered consent to participate.
The web-based tool (REDCap) allows anonymous survey responses to be collected, and later downloaded into an SPSS database for analysis. The proportions of respondents with different answers were expressed as percentages. The responses of the two predefined groups, pediatricians and pediatric intensive care unit nurses/RTs, were compared using the Chi-square statistic, with P ≤ .05 after Bonferroni correction for multiple comparisons considered significant.
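To make the comparison concrete, the 2 × 2 case (pediatricians vs. nurses/RTs, agree vs. other) can be sketched in plain Python. The counts and the 27-test Bonferroni divisor below are hypothetical illustrations, not values taken from our tables; for a 2 × 2 table (1 degree of freedom), the chi-square p-value can be computed exactly with the standard-library `erfc` function.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    observed = [a, b, c, d]
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def p_value_df1(stat):
    """Upper-tail p-value for chi-square with 1 degree of freedom,
    via the identity P(X > x) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(stat / 2))

# Hypothetical counts (NOT from the paper): of 44 pediatricians,
# 36 agree and 8 do not; of 69 nurses/RTs, 62 agree and 7 do not.
stat = chi_square_2x2(36, 8, 62, 7)
p = p_value_df1(stat)

# Bonferroni correction: with, say, 27 survey questions compared,
# the per-test significance threshold becomes 0.05 / 27.
alpha = 0.05 / 27
print(f"chi2 = {stat:.3f}, p = {p:.3f}, significant: {p <= alpha}")
```

With these illustrative counts the difference is far from the Bonferroni-adjusted threshold, mirroring the paper's finding of few group differences.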
Pediatricians
Forty-eight pediatricians responded, but only 44/114 (39%) answered more than the demographic questions. Demographics are given in Table 1.
Expectations regarding methodology of AR
The majority of respondents agreed that: anesthetic use should be monitored during surgery (100%), pain should be monitored after this surgery even over-night (91%), and experimenters in a research study should have similar training on the procedures involved (97%) (Table 2). The majority disagreed that it is acceptable: to use less humane methods of euthanasia to reduce costs or improve results (82% and 52%, respectively), to use animals when alternatives are available (73%), to do an animal experiment without a systematic literature review (100%), and to do an animal experiment using suboptimal methods (including randomization, blinding, and primary outcome specification) in order to save costs (82-93%). Only a minority of respondents agreed that failed animal models of a disease should continue to be used (30%), or that stressed animals should be used (37%). Finally, the majority agreed that guidelines consistent with these responses should be required for publicly funded AR (95%).
Perceptions of human benefits from AR
Most respondents believe that discoveries from AR sometimes or often lead to a treatment for human disease directly (77%) or indirectly (84%), and that researchers sometimes or often claim large benefits from AR (91%) (Table 3). The majority did not agree (84%) with the statement that “AR rarely produces benefits to humans.”
Expectations for translation to humans from AR paid for with public funding
The majority of respondents think that drugs tested on animals should correctly predict the following for humans at least 41% of the time: adverse reactions (69% of respondents), disease treatment (62% of respondents), carcinogenicity or teratogenicity (74% of respondents), and treatment of stroke, severe infection, cancer, brain or spinal cord injury (59% of respondents). The majority also expected that replication of AR findings in second laboratories or other strains of the animal should occur at least 61% of the time (95% and 68% of respondents respectively). The majority agreed that misleading (in terms of human benefit and/or harm) animal experiments should occur at most 40% of the time (86% of respondents). Finally, when asked to “assume drugs studied in animals accurately predict effects in humans less than 20% of the time. If this were true, it would significantly reduce your support for animal research”, 40% disagreed (Table 4).
Pediatric Intensive Care nurses and RTs
Sixty-nine of 120 (58%) responded; 52 (75%) nurses and 16 (25%) RTs. Demographics are given in Table 1.
Expectations regarding methodology of AR
The majority of respondents agreed that: anesthetic use should be monitored during surgery (98%), pain should be monitored after this surgery even over-night (96%), and experimenters in a research study should have similar training on the procedures involved (96%) (Table 2). The majority disagreed that it is acceptable: to use less humane methods of euthanasia to reduce costs or improve results (87% and 81%, respectively), to use animals when alternatives are available (88%), to do an animal experiment without a systematic literature review (96%), and to do an animal experiment using suboptimal methods (including randomization, blinding, and primary outcome specification) in order to save costs (87-95%). Only a minority of respondents agreed that failed animal models of a disease should continue to be used (27%), or that stressed animals should be used (19%). Finally, the majority agreed that guidelines consistent with these responses should be required for publicly funded AR (91%).
Perceptions of the benefits to humans from AR
Most respondents believe that discoveries from AR sometimes or often lead to a treatment for human disease directly (84%) or indirectly (88%), and that researchers sometimes or often claim large benefits from AR (97%) (Table 3). The majority did not agree (87%) with the statement that “AR rarely produces benefits to humans.”
Expectations for translation to humans from AR paid for with public funding
The majority of respondents think that drugs tested on animals should correctly predict the following for humans at least 41% of the time: adverse reactions (85% of respondents), disease treatment (82% of respondents), carcinogenicity or teratogenicity (89% of respondents), and treatment of stroke, severe infection, cancer, brain or spinal cord injury (88% of respondents). The majority also expected that replication of AR findings in second laboratories or other strains of the animal should occur at least 61% of the time (92% and 83% of respondents respectively). The majority agreed that misleading (in terms of human benefit and/or harm) animal experiments should occur at most 40% of the time (84% of respondents). Finally, when asked to “assume drugs studied in animals accurately predict effects in humans less than 20% of the time. If this were true, it would significantly reduce your support for animal research”, only 6% disagreed (Table 4).
Differences between pediatricians and nurses/RTs
There were few statistically significant differences. Nurses more often responded that drugs for stroke, severe infection, cancer, brain or spinal cord injury should work in humans. Nurses were more uncertain whether AR “rarely produces benefits to humans”, and would be less supportive of AR if it accurately predicted effects in humans <20% of the time.
There are several important findings from this survey. First, most HCW respondents expect that AR is done with high methodological quality, and that costs and difficulty are not acceptable justifications for lower quality. Most expect that guidelines for AR funded with public money should be consistent with these expectations. Second, most respondents thought that there are either sometimes or often large benefits to humans from AR. Most disagreed that “AR rarely produces benefit to humans.” Third, most respondents expect that AR findings should translate to humans at least 41% of the time, with many expecting this at least 61% of the time. This includes AR findings of adverse events (toxicity), carcinogenicity and teratogenicity, and disease treatments. The majority thought misleading AR results should occur no more often than 20% of the time. If translation from AR to humans was to occur <20% of the time, most would be less supportive of AR. Finally, most respondents expect that AR findings should be reproducible between laboratories and between strains of the same species. There are important implications of these findings for public and HCW acceptance of AR (Table 5).
Previous public surveys have generally asked only whether people support AR for human benefit, and have not asked people to evaluate the details of their expectations of AR. For example, the Eurobarometer asks whether “scientists should be allowed to experiment on animals like dogs and monkeys if this can help sort out human health problems”; in 2010, 44% of Europeans responded ‘agree’ and 37% ‘disagree’. This support for AR was linked with “greater appreciation of the contributions of science to the quality of life” and “an omnipotent vision of science”. In the UK, the 2012 Ipsos MORI survey determined that most (85%) are ‘conditional acceptors’ of AR; people accept AR “so long as it is for medical research purposes”, “for life-threatening diseases”, “so long as there is no unnecessary suffering”, or “where there is no alternative”, considering AR a “necessary evil” for human benefit. In the United States, the 2011 Gallup Values and Beliefs survey found that, when asked whether medical testing on animals is morally acceptable or morally wrong, 43% (and 54% of young adults 18-29 yr old) responded ‘morally wrong’. In a survey in Sweden that included patients with rheumatoid arthritis and scientific expert members of research ethics boards, most respondents agreed to AR for at least some types of biomedical research. Support was highest for AR into “fatal diseases” (83.1%) and diseases with “insufficient treatment options” (82.1%). In a UK survey of scientists promoting AR, lay public, and animal welfarists, support for AR (mean (SD) on a 7-point Likert scale) was 5.33 (1.46), 3.57 (1.70), and 1.48 (0.87), respectively. Scientists and lay public supported animal use only for “medical research”, and not for dissection, personal decoration, or entertainment. These surveys suggest people support AR on the understanding that it is necessary to provide significant benefit for humans with severe diseases, and is done to high ethical standards. However, none asked for the level of detail sought in our survey.
Some qualitative research also suggests there is conditional public acceptance of AR based on a utilitarian analysis of costs (to animals) and benefits (to humans) [49,50]. This conditional acceptance is usually based on the assumption that regulation has assured that AR is done to high animal welfare standards and is of high scientific validity and merit (i.e., high quality research, leading to human benefit and cures), and that there are no alternative research methods [49-51]. Scientists understand this role of regulation as leading to societal acceptance of AR, and see regulation as legitimating AR practice [51-53]. However, our survey suggests that this trust in regulation may be misplaced, because regulation does not result in AR that meets HCW expectations for animal welfare, methodological quality, human benefit, or rates of translation to human medicine and cures (Table 5). Moreover, these studies showed that the public is far less accepting of the use of genetically modified animals in research, based on a deontological approach in which this AR is seen as ‘wrong’ [49,50]. We did not ask about the common use of genetically modified animals in AR, and therefore may have underestimated HCW expectations of AR.
There are two main explanations for the poor predictive accuracy of AR for humans. First, it is possible that the poor methodological quality of AR has resulted in a biased literature that has led to many human trials based on inappropriate data. Second, it is possible that animal models are not good ‘causal analogical models’; they are not useful to extrapolate findings to humans because there are major causal disanalogies between species [54,55]. Animal models are based on this reasoning: when an animal model is similar to the human with respect to traits/properties a,b,c [e.g. fever, hypotension, and kidney injury in sepsis], and when the animal model is found to have property d [e.g. response to protein-C treatment], then it is inferred that the human also likely has property d. This ‘causal analogy’ assumes that there are few causal disanalogies: few properties e,f,g that are unique to either the animal or human and that interact causally with the common properties a,b,c. However, animals are evolved complex systems; they have a myriad of interacting modules at hierarchical levels of organization. As a result of this complexity, animals have emergent properties [e.g. animal traits/functions, like property d] that are dependent on initial conditions [e.g. gene expression profiles, the context of the organism, like properties a,b,c, and e,f,g]. In complex systems [e.g. animals], very small differences in initial conditions [e.g. properties e,f,g specific to a species/strain] can result in dramatic differences in response to the same perturbation [e.g. drug, treatment, or disease leading to property d] [54-58]. There are abundant empirical data finding major causal disanalogies between animal species: differences in gene expression at baseline and in response to perturbations, and in disease susceptibilities [59-62]. Thus, complexity science suggests there may be an in-principle limitation on the ability of AR to predict human responses.
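The sensitivity to initial conditions invoked in this argument can be illustrated with a toy computation. The logistic map below is a standard minimal chaotic system, not a biological model; the “species” labels are purely an analogy for how a tiny difference in starting conditions (properties e,f,g) can yield very different outcomes (property d).

```python
def logistic_trajectory(x0, r=3.9, steps=30):
    """Iterate the logistic map x -> r*x*(1-x), a minimal chaotic
    system, and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)   # "species A" initial condition
b = logistic_trajectory(0.400001)   # "species B", differing by only 1e-6
gap = [abs(x - y) for x, y in zip(a, b)]

# The trajectories start nearly identical but diverge sharply.
print(f"initial gap {gap[0]:.1e}, largest gap over 30 steps {max(gap):.3f}")
```

An initial difference of one part in a million grows by orders of magnitude within a few dozen iterations, which is the qualitative behavior the complexity-science critique attributes to inter-species disanalogies.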
Our survey suggests that these competing explanations must be sorted out to determine whether translation can meet public expectations in weighing the costs and benefits of AR.
This study has several limitations. Response rates for pediatricians and nurses/RTs were 39% and 58% respectively; thus we cannot rule out biased participation in the survey. Statements presented needed to be short and concise, and this may have left out important details that would have influenced the understanding of, and response to, the text. The moderate sample size from a single university limits the generalizability of our results. Nevertheless, this is the first survey we are aware of that asks any group not simply whether they support AR, but to consider in detail their expectations for the methodology and translation of AR. Strengths of this study include the rigorous survey development process and the inclusion of the most common critiques of the empirical practice of AR. Future study should determine the generalizability of our results.
We found that HCW respondents had high expectations for the methodological quality of AR, and for the translation of findings from AR to human responses to drugs and disease. These expectations are far higher than the empirical data suggest have been achieved. This disconnect between HCW expectations of AR and the empirical reality of AR suggests that, if HCW were better informed, they would likely withdraw their conditional support of AR. Improved methodological quality is an achievable goal if it is prioritized by researchers, reviewers, editors, and funders. Whether methodologically optimal AR can achieve better human translation to meet HCW expectations is an open question.
National Research Council Committee on Recognition and Alleviation of Distress in Laboratory Animals. Recognition and alleviation of distress in laboratory animals. Washington, DC: National Academies Press; 2008.
National Research Council Committee on recognition and alleviation of pain in laboratory animals. Recognition and alleviation of pain in laboratory animals. Washington DC: National Academies Press; 2009.
Rollin BE. Animal research: a moral science. EMBO Rep. 2007;8(6):521–5.
Bass R. Lives in the balance: utilitarianism and animal research. In: Garrett JR, editor. The ethics of animal research: exploring the controversy. Cambridge, MA: MIT Press; 2012. p. 81–105.
Matthews RAJ. Medical progress depends on animal models- doesn’t it? J R Soc Med. 2008;101(2):95–8.
Bara M, Joffe AR. The methodological quality of animal research in critical care: the public face of science. Ann Intensive Care. 2014;4:26.
Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, Fry D, et al. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS One. 2009;4(11):e7284.
Sena E, van der Worp B, Howells D, Macleod M. How can we improve the preclinical development of drugs for stroke? Trends Neurosci. 2007;30:433–9.
Baginskait J. Scientific Quality Issues in the Design and Reporting of Bioscience Research: A Systematic Study of Randomly Selected Original In Vitro, In Vivo and Clinical Study Articles Listed in the PubMed Database. In CAMARADES Monogr 2012. [http://www.dcn.ed.ac.uk/camarades/files/Camarades%20Monograph%20201201.pdf ].
Baker D, Lidster K, Sottomayor A, Amor S. Two years later: journals are not yet enforcing the ARRIVE guidelines on reporting standards for pre-clinical animal studies. PLoS Biol. 2014;12(1):e1001756.
Bara M, Joffe AR. The ethical dimension in published animal research in critical care: the public face of science. Crit Care. 2014;18(1):R15.
Carbone L. Pain in laboratory animals: the ethical and regulatory imperatives. PLoS One. 2011;6:e21578.
Scott S, Kranz JE, Cole J, Lincecum JM, Thompson K, Kelly N, et al. Design, power, and interpretation of studies in the standard murine model of ALS. Amyotroph Lateral Scler. 2008;9:4–15.
Steward O, Popovich PG, Dietrich WD, Kleitman N. Replication and reproducibility in spinal cord injury research. Exp Neurol. 2012;233:597–605.
Horrobin DF. Modern biomedical research: an internally self-consistent universe with little contact with medical reality. Nat Rev Drug Discov. 2003;2:151–4.
Pippin JJ. Animal research in medical sciences: seeking a convergence of science, medicine, and animal law. South Texas Law Rev. 2013;54:469–511.
Shanks N, Greek R, Greek J. Are animal models predictive for humans? Phil Ethics Humanities Med. 2009;4:2.
Pound P, Bracken MB. Is animal research sufficiently evidence based to be a cornerstone of biomedical research? BMJ. 2014;348:g3387.
Dyson A, Singer M. Animal models of sepsis: why does preclinical efficacy fail to translate to the clinical setting? Crit Care Med. 2009;37(Suppl):S30–7.
Opal SM, Patrozou E. Translational research in the development of novel sepsis therapeutics: logical deductive reasoning or mission impossible? Crit Care Med. 2009;37(Suppl):S10–5.
Fink MP. Animal models of sepsis. Virulence. 2013;5(1):143–53.
Jauch EC, Saver JL, Adams HP, Bruno Jr A, Connors JJ, Demaerschalk BM, et al. Guidelines for the early management of patients with acute ischemic stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2013;44:870–947.
Sutherland BA, Minnerup J, Balami JS, Arba F, Buchan AM, Kleinschnitz C. Neuroprotection for ischemic stroke: translation from the bench to the bedside. Int J Stroke. 2012;7:407–18.
Akhtar AZ, Pippin JJ, Sandusky CB. Animal models of spinal cord injury: a review. Rev Neurosci. 2009;19:47–60.
Domingo A, Al-Yahya AA, Asiri Y, Eng JJ, Lam T. A systematic review of the effects of pharmacological agents on walking function in people with spinal cord injury. J Neurotrauma. 2012;29:865–79.
Xiong Y, Mahmood A, Chopp M. Animal models of traumatic brain injury. Nat Rev Neurosci. 2013;14:128–42.
Begley CG, Ellis LM. Drug development: raise standards for preclinical cancer research. Nature. 2012;483:531–3.
Hutchinson L, Kirk R. High drug attrition rates- where are we going wrong? Nat Rev Clin Oncol. 2011;8:189.
Ransohoff RM. Animal models of multiple sclerosis: the good, the bad and the bottom line. Nat Neurosci. 2012;15:1074–7.
Geerts H. Of mice and men: bridging the translational disconnect in CNS drug discovery. CNS Drugs. 2009;23(11):915–26.
Hatziioannou T, Evans DT. Animal models for HIV/AIDS research. Nat Rev Microbiol. 2012;10:852–67.
Holmes AM, Solari R, Holgate ST. Animal models of asthma: value, limitations and opportunities for alternative approaches. Drug Discov Today. 2011;16(15–16):659–70.
Pammolli F, Magazzini L, Riccaboni M. The productivity crisis in pharmaceutical R&D. Nat Rev Drug Discov. 2011;10:428–38.
DiMasi JA, Feldman L, Seckler A, Wilson A. Trends in risks associated with new drug development: success rates for investigational drugs. Clin Pharmacol Ther. 2010;87(3):272–7.
Fourches D, Barnes JC, Day NC, Bradley P, Reed JZ, Tropsha A. Chemoinformatics analysis of assertions mined from literature that describe drug-induced liver injury in different species. Chem Res Toxicol. 2010;23:171–83.
Hartung T. Toxicology for the twenty-first century. Nature. 2009;460:208–12.
Knight A. The costs and benefits of animal experiments. UK: Palgrave Macmillan; 2011.
Card JW, Fikree H, Haighton LA, Lee-Brotherton V, Wan J, Sangster B. Lack of human tissue-specific correlations for rodent pancreatic and colorectal carcinogens. Reg Toxicol Pharm. 2012;64:442–58.
Knight A, Bailey J, Balcombe J. Animal carcinogenicity studies: 1. Poor human predictivity. Altern Lab Anim. 2006;34:19–27.
Greek R, Shanks N, Rice MJ. The history and implications of testing thalidomide on animals. J Philos Sci Law. 2011;11:1–32. http://jpsl.org/archives/history-and-implications-testing-thalidomide-animals/.
Ipsos MORI. Views on the use of animals in scientific research. London: Department for Business, Innovation & Skills; 2012.
Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research Electronic Data Capture (REDCap)- a meta-data driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81.
Burns KEA, Duffett M, Kho ME, Meade MO, Adhikari NKJ, Sinuff T, et al. A guide for the design and conduct of self-administered surveys of clinicians. CMAJ. 2008;179(3):245–52.
European Commission. Eurobarometer: Science and Technology report. June 2010: 60–64. [http://ec.europa.eu/public_opinion/archives/ebs/ebs_340_en.pdf]
Crettaz von Roten F. European attitudes towards animal research: overview and consequences for science. Sci Technol Soc. 2009;14(2):349–64.
Goodman JR, Borch CA, Cherry E. Mounting opposition to vivisection. Contexts. 2012;11(2):68–9.
Masterton M, Renberg T, Sporrong SK. Patients’ attitudes towards animal testing: ‘To conduct research on animals is, I suppose, a necessary evil’. BioSocieties. 2014;9:24–41.
Knight S, Vrij A, Bard K, Brandon D. Science versus human welfare? Understanding attitudes toward animal use. J Soc Issues. 2009;65(3):463–83.
Ormandy EH, Schuppli CA, Weary DM. Public attitudes toward the use of animals in research: effects of invasiveness, genetic modification and regulation. Anthropozoos. 2013;26(2):165–84.
Macnaghten P. Animals in their nature: a case study on public attitudes to animals, genetic modification and ‘nature’. Sociology. 2004;38(3):533–51.
Hobson-West P. Ethical boundary-work in the animal research laboratory. Sociology. 2012;46(4):649–63.
Hobson-West P. The role of ‘public opinion’ in the UK animal research debate. J Med Ethics. 2010;36:46–9.
Hobson-West P. What kind of animal is the ‘Three Rs’? ATLA. 2009;37(Suppl2):95–9.
Greek R, Rice MJ. Animal models and conserved processes. Theor Biol Med Model. 2012;9:40.
Greek R, Hansen LA. Questions regarding the predictive value of one evolved complex adaptive system for a second: exemplified by the SOD1 mouse. Prog Biophys Mol Biol. 2013;113:231–53.
Ahn AC, Tewari M, Poon C, Phillips RS. The limits of reductionism in medicine: could systems biology offer an alternative? PLoS Med. 2006;3(6):e208.
Mazzocchi F. Complexity in biology. EMBO Rep. 2008;9(1):10–4.
Wagner A. Causality in complex systems. Biol Philos. 1999;14:83–101.
Seok J, Warren S, Cuenca AG, Mindrinos MN, Baker HV, Xu W, et al. Genomic responses in mouse models poorly mimic human inflammatory diseases. Proc Natl Acad Sci U S A. 2013;110:3507–12.
Gentile LF, Nacionales DC, Lopez C, Vanzant E, Cuenca A, Cuenca AG, et al. A better understanding of why murine models of trauma do not recapitulate the human syndrome. Crit Care Med. 2014;42(6):1406–13.
Brawand D, Soumillon M, Necsulea A, Julien P, Csardi G, Harrigan P, et al. The evolution of gene expression levels in mammalian organs. Nature. 2011;478:343–8.
Varki NM, Strobert E, Dick Jr EJ, Benirschke K, Varki A. Biomedical differences between human and nonhuman hominids: potential roles for uniquely human aspects of sialic acid biology. Annual Rev Pathol Mechanisms Dis. 2011;6:365–93.
Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8(6):e1000412.
Institute for Laboratory Animal Research: National Research Council. Guidance for the Description of Animal Research in Scientific Publications. Washington DC: National Academy of Sciences; 2011.
Canadian Council on Animal Care in Science. CCAC Guidelines on: Animal use Protocol Review. Ottawa: Canadian Council on Animal Care; 1997 http://www.ccac.ca/Documents/Standards/Guidelines/Protocol_Review.pdf.
Woloshin S, Schwartz LM, Casella SL, Kennedy AT, Larson RJ. Press releases by academic medical centers: not so academic? Ann Intern Med. 2009;150:613–8.
Contopoulos-Ioannidis DG, Ntzani EE, Ioannidis JPA. Translation of highly promising basic science research into clinical applications. Am J Med. 2003;114:477–84.
Hackam DG, Redelmeier DA. Translation of research evidence from animals to humans. JAMA. 2006;296(14):1731–2.
Knight A. Systematic reviews of animal experiments demonstrate poor contributions to human healthcare. Rev Recent Clin Trials. 2008;3(2):89–96.
MB was supported for this research by a summer studentship from Alberta Innovates Health Solutions; the funding agency had no role in the design and conduct of the study; collection, management, analysis or interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
The authors declare that they have no competing interests.
ARJ contributed to conception and design, acquisition, analysis and interpretation of data, and drafted the paper, and had final approval of the version to be published. MB contributed to design, acquisition and interpretation of data, and revising the manuscript critically for intellectual content, and had final approval of the version to be published. NA contributed to design, and interpretation of data, and revising the manuscript critically for intellectual content, and had final approval of the version to be published. NN contributed to design, and interpretation of data, and revising the manuscript critically for intellectual content, and had final approval of the version to be published. ARJ had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. All authors agree to be accountable for all aspects of the work.