
  • Study protocol
  • Open Access

Stakeholder views regarding ethical issues in the design and conduct of pragmatic trials: study protocol

Stuart G. Nicholls (1, corresponding author), Kelly Carroll (1), Jamie C. Brehaut (1, 13), Charles Weijer (2), Spencer P. Hey (3, 4), Cory E. Goldstein (5), Merrick Zwarenstein (6), Ian D. Graham (1, 13), Joanne E. McKenzie (7), Lauralyn McIntyre (1), Vipul Jairath (8, 9), Marion K. Campbell (10), Jeremy M. Grimshaw (1, 11), Dean A. Fergusson (1, 12) and Monica Taljaard (1, 13)
BMC Medical Ethics 2018;19:90

https://doi.org/10.1186/s12910-018-0332-z

  • Received: 7 August 2018
  • Accepted: 8 November 2018

Abstract

Background

Randomized controlled trial (RCT) designs exist on an explanatory-pragmatic spectrum, depending on the degree to which a study aims to address a question of efficacy or effectiveness. As conceptualized by Schwartz and Lellouch in 1967, an explanatory approach to trial design emphasizes hypothesis testing about the mechanisms of action of treatments under ideal conditions (efficacy), whereas a pragmatic approach emphasizes testing the effectiveness of two or more available treatments in real-world conditions. Interest in, and the number of, pragmatic trials have grown substantially in recent years, with increased recognition by funders and stakeholders worldwide of the need for credible evidence to inform clinical decision-making. This increase has been accompanied by the emergence of learning healthcare systems and an increasing focus on patient-oriented research. However, pragmatic trials raise ethical challenges that have not yet been fully identified or adequately characterized. The present study aims to explore the views of key stakeholders with respect to ethical issues raised by the design and conduct of pragmatic trials. It is embedded within a large, four-year project that seeks to develop guidance for the ethical design and conduct of pragmatic trials. As a first step, this study will address important gaps in the current empirical literature by identifying a comprehensive range of ethical issues arising from the design and conduct of pragmatic trials. By opening up a broad range of topics for consideration within our parallel ethical analysis, we will extend the current debate, which has largely emphasized issues of consent, to the range of ethical considerations that may flow from specific design choices.

Methods

We will conduct semi-structured interviews with key stakeholders (e.g. trialists, methodologists, lay members of study teams, bioethicists, and research ethics committee members) across multiple jurisdictions, identified on the basis of their known experience and/or expertise with pragmatic trials.

Discussion

We expect that the study outputs will be of interest to a wide range of knowledge users including trialists, ethicists, research ethics committees, journal editors, regulators, healthcare policymakers, research funders and patient groups. All publications will adhere to the Tri-Agency Open Access Policy on Publications.

Background

Providing the best available care to patients is the backbone of medical practice, and a practical consequence of the ethical principles of non-maleficence and beneficence [1, 2]. Ideally, medical care should be grounded in valid evidence of benefit, safety and cost-effectiveness [3]. A gold standard study design for providing evidence of effectiveness is the randomized controlled trial (RCT) [4].

Randomized controlled trials differ in their purpose and scope and, accordingly, can take different methodological approaches. Trials that focus on elucidating a mechanism of action or efficacy under highly controlled conditions are often described as explanatory or mechanistic trials (herein ‘explanatory RCTs’) [5–8]. Conversely, trials that measure intervention effectiveness in a real-world setting, and that aim to provide information pertinent to a healthcare decision, have been described as “practical” [6, 9] or pragmatic (herein ‘pragmatic RCTs’) [5, 10–13]. In reality, individual trials will lie somewhere along the spectrum from more explanatory to more pragmatic, depending on the degree to which their aim(s) and study design align with the study of efficacy or effectiveness.1 To help trialists match their design decisions to the purpose of the trial, tools such as the Pragmatic-Explanatory Continuum Indicator Summary (PRECIS) and its update PRECIS-2 [14] have been developed. PRECIS-2 proposes nine discrete domains in which trialists can make design decisions when designing pragmatic RCTs. These domains are summarized in Table 1. In general, the design of an RCT should be driven by its intended purpose [6, 9].
Table 1 PRECIS-2 domains and descriptors

Eligibility: To what extent are the participants in the trial similar to those who would receive this intervention if it were part of usual care? A more pragmatic trial would have criteria that ensure participants are essentially identical to those in usual care; a more explanatory approach would have lots of exclusions (e.g. those who don’t comply, respond to treatment, or are not at high risk for the primary outcome, or who are children or elderly), or would use many selection tests not used in usual care.

Recruitment: How much extra effort is made to recruit participants over and above what would be used in the usual care setting to engage with patients? For example, a very pragmatic trial may recruit through usual appointments or clinics; a very explanatory trial may use targeted invitation letters, advertising in newspapers and on radio, incentives, and other routes that would not be used in usual care.

Setting: How different is the setting of the trial from the usual care setting? For example, a very pragmatic trial may use settings identical to usual care; a very explanatory trial may include a single centre, or only specialised trial or academic centres.

Organisation: How different are the resources, provider expertise and organisation of care delivery in the intervention arm of the trial from those available in usual care? For example, a very pragmatic trial may use an organisation identical to usual care; a very explanatory trial may increase staffing levels, give additional training, require more than the usual experience or certification, and increase resources.

Flexibility (delivery): How different is the flexibility in how the intervention is delivered from the flexibility likely in usual care? For example, a very pragmatic trial may allow the same flexibility as usual care, permitting healthcare professionals to modify delivery of the intervention; a very explanatory trial may include a strict protocol, monitoring and measures to improve compliance, with specific advice on allowed co-interventions and complications.

Flexibility (adherence): How different is the flexibility in how participants must adhere to the intervention from the flexibility likely in usual care? For example, a very pragmatic trial may involve no more than the usual encouragement to adhere to the intervention; a very explanatory approach may involve exclusion based on adherence, and measures to improve adherence if found wanting.

Follow-up: How different are the intensity of measurement and follow-up of participants in the trial from the likely follow-up in usual care? For example, a very pragmatic trial may have no more follow-up than would be the case in usual care; a very explanatory approach may have more frequent or longer visits, unscheduled visits triggered by a primary outcome or intervening event, and more extensive data collection.

Primary outcome: To what extent is the trial’s primary outcome relevant to participants? For example, a very pragmatic trial would have an outcome of obvious importance to participants; a very explanatory trial may use a surrogate or physiological outcome, central adjudication, or assessment expertise that is not available in usual care, or may measure the outcome at an earlier time than in usual care.

Primary analysis: To what extent are all data included in the analysis of the primary outcome? For example, a very pragmatic trial would use intention to treat with all available data; a very explanatory analysis may exclude ineligible post-randomisation participants, or include only completers or those following the treatment protocol.

Adapted from Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147 and https://www.precis-2.org/Help/Documentation/HowTo

While the notion of more explanatory and more pragmatic attitudes within trials was first raised over 50 years ago [5], interest in pragmatic RCTs has grown substantially in recent years. Bibliometric analyses illustrate year-on-year increases in the number of publications with pragmatic trials as their topic [15, 16]. Further, a recent Delphi survey of 48 UK Clinical Trials Units (CTUs) found that over 40% of CTU Directors ranked pragmatic trials as a critical topic of interest [17]. Similarly, in a prioritization exercise among researchers and methodologists working on trials in low- and middle-income countries, 84% of respondents indicated that methodological research regarding pragmatic trials was a critical research topic [18].

There are several reasons for this increased interest. First, funding agencies such as the Patient-Centered Outcomes Research Institute (PCORI) and the Canadian Institutes of Health Research (CIHR), among others, have moved toward funding more pragmatic research questions. Second, there may be a lack of evidence for established practices – a form of “self-evident evidence paradox” [19] – that is now coming under closer scrutiny. Third, there are now many new or emerging trial designs that capitalize on methodological and statistical innovations as well as the expanding availability of routinely collected health data. These may enhance opportunities for pragmatism in key domains of design such as the identification and recruitment of participants, the intensity of follow-up required, and the collection of outcome data. Examples of designs that may lend themselves to more pragmatic approaches include novel cluster randomized trials (such as cluster crossovers [20, 21] and stepped wedge trials), the cohort multiple randomized controlled trial design [22], and registry-based randomized controlled trials [23]. Fourth, the cost and logistical complexity of traditional explanatory trials have motivated funders and researchers to identify potential ways to reduce trial costs, with pragmatic RCTs that leverage existing infrastructure – such as health administrative data – seen as one way to do this [24–26]. Finally, the selection of more patient-focused trial outcomes aligns closely with increasing interest in, and funding for, patient engagement and patient-oriented research through national strategies such as the Canadian Strategy for Patient-Oriented Research (SPOR) [27] and the US Patient-Centered Outcomes Research Institute (PCORI) [28] (see Table 2 for examples).
Table 2 Statements regarding pragmatic trials by funding agencies

Canadian Institutes of Health Research (Canada), 2016: "[innovative Clinical Trials (iCT)] methods can reduce the cost of conducting trials, reduce the amount of time needed to answer research questions, and increase the relevance of research findings to patients, health care providers and policy makers. Adopting these alternative designs can maximize the use of existing knowledge and data. Some examples of iCTs include: Pragmatic trials [...];" https://www.researchnet-recherchenet.ca/rnr16/vwOpprtntyDtls.do?prog=2471&view=search&terms=pragmatic&incArc=true&org=CIHR&fromYear=2005&toYear=2022&type=EXACT&resultCount=25&next=1

Medical Research Council (UK), 2018: “Effectiveness Trials - Studies designed to produce research information about the effectiveness, costs, and broader impact of health technologies for those who use, manage and provide care in the NHS are supported by the Health Technology Assessment (HTA) programme, funded by NIHR and managed by NETCC.” https://mrc.ukri.org/funding/science-areas/translation/translation-and-clinical-trials/

NCCIH (US), 2017: “The projects must be pragmatic trials rather than explanatory trials. […] The Collaboratory supports pragmatic trials. A pragmatic trial is primarily designed to determine the effects of an intervention under the usual, real-world conditions in which it will be applied. The approach, including study design, is kept as simple as possible without sacrificing scientific rigor.” https://nccih.nih.gov/news/events/telecon/pragmatic-CT-webinar-rm-16-019

PCORI (US), 2018: “[…] applicants should design trials so that they address practical comparative questions faced by patients and clinicians—to include broader and more diverse populations—and can be conducted in real-world clinical and diverse health-system settings. Such trials are often referred to as “pragmatic clinical trials” because they are intended to provide information that healthcare providers can adopt directly.” https://www.pcori.org/sites/default/files/PCORI-PFA-2018-Cycle-1-Pragmatic-Studies.pdf

Ethical issues in pragmatic RCTs

Pragmatic trial designs may, however, also raise new ethical challenges. Consider the design domain of recruitment: a more pragmatic RCT may attempt to recruit participants by utilizing clinical staff during routine clinical encounters. This has the potential advantages of lowering trial cost while increasing expediency, but it also has the potential to blur the line between clinical practice and research and may raise concerns regarding the voluntariness of patient informed consent. On the other hand, a more explanatory trial with dedicated research staff responsible for recruiting patients is likely more costly, time consuming, and may result in greater recruitment challenges, but at the same time may provide a clearer separation between research and clinical care, and thus, more independence in the consent process. Further, recruitment undertaken by researchers may also provide more clarity about the ethical guidelines that should be adhered to in the research as opposed to a situation in which healthcare professionals recruit participants within routine clinical practice.

Other examples include the domain of eligibility criteria: populations such as pregnant women or older patients are routinely excluded from clinical trials [9, 14, 29–31], and are thus exposed to risks in their clinical care brought about by a lack of evidence-informed practice. These populations would benefit from the improved evidence gained through broader inclusion criteria within pragmatic RCTs. However, their inclusion may raise concerns as to whether additional protections (such as closer monitoring) need to be in place during the trial and, if so, when and how potentially vulnerable populations should be identified and what appropriate responses might be.

Finally, consider the domain of data collection: pragmatic trials commonly utilize routinely collected data for outcome assessment. The systems that generate these data were not designed to capture data for research purposes, and questions have been raised regarding the ability of electronic health record systems to comply with requirements set out in international research standards such as Good Clinical Practice [32–34].

There is now an emerging body of empirical research exploring these and other ethical challenges in more pragmatic RCTs [35–41]. Kalkman and colleagues [35], for example, identified four key themes through interviews with key stakeholders: that less controlled experimental conditions create safety concerns for enrolled patients; that comparison of an intervention against a comparator that constitutes suboptimal usual care may compromise clinical equipoise; that consent processes may be modified, but the circumstances and extent of modification are contested; and that minimal interference with real-world practice drives trial arms toward equivalence (and thus the trial may not find a statistically or clinically significant result).

In particular, questions of consent in pragmatic trials have been consistently raised as topics of interest and have been studied through vignette-based research with healthcare professionals and patients [36, 37], interviews with physicians [39], deliberative engagement activities [42] and surveys [41]. For example, in a study using case vignettes, respondents preferred specific disclosure with options to either opt in or opt out of research over a general policy in which patients were informed broadly that a healthcare system engages in Comparative Effectiveness Research (CER) [37, 42]. Similarly, a study examining the nature of CER as research or practice explored whether physicians have a duty to participate in quality improvement (QI) CER as well as whether patient consent is required for physician-targeted interventions [38]. Other studies have explored attitudes toward consent under different trial scenarios; this work found that attitudes toward consent differed not only by the type of intervention, but also with design elements such as randomization and the degree to which the trial design departed from routine clinical care [39].

Limits of the existing literature

Despite this emerging body of evidence, the current literature has several limitations. First, few empirical studies have been grounded in the actual experiences of participants with the design or conduct of trials that are more pragmatic. Further, studies have tended to compartmentalize designs into either pragmatic or explanatory categories. Given that trials are generally considered to exist on a continuum from more explanatory to more pragmatic, it makes little sense to try to isolate ‘pragmatic’ RCTs and identify ‘their’ ethical issues. Instead, it would seem more valuable to identify ethical issues that may emerge from particular trial designs, domains or dimensions of trial pragmatism. Second, the literature is dominated by discussions set within the US healthcare context, resulting in appeals mainly to US regulations [43–45]. Third, few studies explicitly address the governance of pragmatic trials or the perspectives of the research ethics committees that review these trials. Fourth, studies have tended to focus either on broad concepts such as the Learning Healthcare System [46, 47] or on a very limited number of ethical issues. As such, debates have tended either to lack concrete application or to be overly narrow and deep. For example, there has been a great deal of focus on consent [42, 46, 48–50], and insufficient attention paid to the ethical issues that arise from decisions regarding the pragmatic or explanatory nature of individual trial design elements.

There is, we believe, a need for constructive ethical guidance relating to more pragmatic design choices within RCTs. Identifying key ethical considerations and the ways in which they are aligned with pragmatic design decisions is an essential first step on the path to developing guidance for researchers, research ethics committees and other key stakeholders in the design and conduct of more pragmatic RCTs. It is crucial that identification of such ethical issues is grounded in the experiences of those directly involved in the design, implementation, and evaluation of more pragmatic RCTs.

Aim

The present study aims to explore the views of pragmatic trial experts and key stakeholders (for example, trialists, ethicists, methodologists, chairs of research ethics committees, health system leaders, quality improvement experts, and patient representatives on research study teams) to generate a thorough understanding, from a variety of perspectives, of the types of ethical issues arising in the practice of pragmatic trials. It is the first of five planned studies embedded within a large, four-year, multi-aim project that seeks ultimately to develop ethical guidance for the design and conduct of pragmatic trials. The protocol for the full project is published elsewhere [51]. Within the present study we will:
  1. Explore the experiences of key stakeholders regarding ethical issues that arise in the design and conduct of pragmatic RCTs;
  2. Identify ethical issues that arise from taking more pragmatic (as opposed to explanatory) approaches to design elements;
  3. Elicit perspectives regarding the appropriate ethical oversight of RCTs and how this may differ between trials that are more pragmatic or more explanatory.

The results from this study will be used to formulate a typology of ethical issues arising from more pragmatic trials, to be addressed in the conceptual work within the larger project. It will also inform the development of data extraction items for a planned review of published pragmatic trials, questionnaire items for a planned survey with trialists and research ethics committees, and discussion items for focus group discussions with trial participants.

Methods/design

The methods will involve semi-structured interviews with key stakeholders with experience or exposure to trials reflecting more pragmatic designs. Interviewees will be identified using a purposive sampling strategy, augmented through snowball sampling [52, 53].

Participants

Participants will reflect a broad range of stakeholders in the design and conduct of clinical trials generally, and pragmatic RCTs in particular. Specifically, we will recruit participants within two broad groups of stakeholders: trial experts and lay members of study teams.

Trial experts

Trial experts will include trialists, ethicists, methodologists, chairs of research ethics committees, health system leaders, and quality improvement experts with experience of pragmatic trials. To be considered eligible, potential interviewees must be recognized experts in trials that are more pragmatic (e.g., have been an investigator on multiple RCTs considered to be pragmatic in design, have published papers addressing the ethical challenges in more pragmatic RCTs, have been engaged in work regarding the methodological development of more pragmatic approaches to RCTs or major trial designs, or have been engaged in the governance or oversight of RCTs considered to be more pragmatic in nature). Potential interviewees will be selected across a broad range of jurisdictions and clinical areas to reflect a range of experiences.

Lay members of study teams

Lay members of study teams (i.e., members of study teams who have lived experience of the condition under study or who represent the groups involved in the study or broader community at large) will be eligible if they have played a role in the development or implementation of a specific trial deemed to be more pragmatic in design or have been engaged in larger study teams to enhance the design, conduct, or implementation of trials that are more pragmatic in design.

The increasing inclusion of lay members in study teams of pragmatic trials reflects the underlying premise that trials that are more pragmatic should address outcomes and study questions relevant to patients and clinical practice. While previous studies of ethical issues raised by pragmatic trials have included patients and members of the public [37, 42], we are deliberately targeting lay members of study teams who are more likely to have been exposed to a pragmatic RCT and to have given consideration to implications of study design choices. Furthermore, the inclusion of lay members of study teams will improve the saliency of the study question and the breadth of experience upon which the participant could draw.

Identification and recruitment

Trial experts

An initial sample of trial experts will be identified through our extensive investigator networks, publications, and association with known centers conducting trials that may be more pragmatic in nature. Potential participants will be selected to ensure a breadth of experience and representation from across the identified range of stakeholder groups. We will also ensure representation from Canada, the US, the UK, France, Australia and low- and middle-income countries. We chose these jurisdictions because they represent countries in which the vast majority of pragmatic trials are conducted, they have a rich history in the methodological development of pragmatic trial designs, and our team members have connections with experts and research ethics organizations in these countries, which will help facilitate participation. Depending on the emerging themes, we may purposively sample additional stakeholders to ensure that a diversity of opinion is sought.

Initial contact (and subsequent follow up if necessary) with trial experts will be made via email by the study team. The initial contact will introduce the study design and purpose and inquire about the potential informant’s willingness to participate. If the potential interviewee is willing to participate, the interviewer (SN, KC) will arrange a time for the interview. Following confirmation of an interview date, an overview of the interview structure will be sent to participants together with a summary of the PRECIS-2 domains. On the agreed date, the consent form will be reviewed, and verbal consent will be obtained to proceed with the interview.

Lay members

To begin with, lay members of study teams will be identified via existing funded pragmatic trials or studies addressing the design, conduct or implementation of pragmatic trials which include lay members as part of the study team. Examples include trials which, due to funding requirements, require patient engagement (e.g. Ontario SPOR Support Unit Impact Awards, Patient-Centered Outcomes Research Institute (PCORI)) or through existing lists (e.g. PRECIS-2 list of evaluated trials: https://www.precis-2.org/Trials), or from investigator networks. This initial list will be supplemented through snowball sampling.

The principal investigators for the identified studies will be approached via email and asked to either provide the contact information of the lay members involved in their trial, or to forward a study invitation and consent form on behalf of our investigator team. If the lay member of the study team is willing to participate, the interviewer (SN, KC) will arrange a time for the telephone interview. On the agreed date, the consent form will be reviewed, and verbal consent will be obtained to proceed with the interview. An overview of the recruitment process is presented in Fig. 1.
Fig. 1 Interview recruitment process

In all cases, potential interviewees will receive a copy of the consent form which will include an email address and phone number that potential interviewees can use to contact the study team if they are interested in participating. All participants will be offered an honorarium ($100 CAD) in recognition of their participation.

Interview guide and data collection processes

Data will be collected through semi-structured interviews. Interviews with stakeholders will comprise three main sections: (i) experiences with pragmatic trials, including experiences of ethical issues; (ii) perceptions of ethical issues relevant to pragmatic trials; and (iii) perspectives on oversight and regulation. Interview guides were developed based on a review of the literature as well as existing tools, such as PRECIS-2. Draft interview guides were prepared by the team and pilot tested on three pragmatic trial experts. Given the differing populations, separate interview guides were generated for the trial experts and lay members of study teams (See Additional files 1 and 2 for interview schedules).

Interviews will be conducted by one of two members of the team (SN/KC). Both interviewers have experience conducting qualitative interviews, and pilot interviews were conducted with both team members present to ensure familiarity with the interview guide and consistency of approach. The interviewers will give prompts and ask clarifying questions in addition to the questions in the interview protocol. Interviews will be conducted either in person or over the telephone, depending on participant availability and logistics. We anticipate that interview sessions will take approximately 1 h.

In all cases, interviews will be audio-recorded with consent, transcribed verbatim by a professional transcription service, and imported into qualitative data analysis software (QSR International’s NVivo 10 qualitative data analysis Software [54]) for analysis. If a participant wishes to take part but does not wish the interview to be audio-recorded, then written notes will be taken. During the process of transcription, data will be de-identified and interview transcripts will be assigned a unique participant ID. Copies of transcribed interviews will be made available to interviewees for additional comments.

Sample size

Qualitative approaches necessarily involve small samples owing to the complex nature of the data generated and the costs incurred in their collection and analysis [52, 55]. Following established qualitative research methods, our target sample size is the number of interviews required to achieve saturation (i.e., the point at which new interviews cease to provide fresh information) [52, 56–58]. While approximate, our estimates are based on the experience of the team [59, 60]. We anticipate that 12–20 interviews per stakeholder group (trial experts and lay members of study teams) will be required before data saturation is reached [56], and hence a total of n = 24–40 interviews. However, as saturation of topics is the stated end-point, additional interviews may be required (and will be undertaken as necessary).

Analysis

The examination of the transcripts will follow a thematic analysis approach [61, 62]. Under this methodology, textual data contained within transcripts are coded and labeled in an inductive manner. Using the constant comparison technique, data analysis will occur in parallel with the conduct of interviews, allowing the interview guide to evolve, emergent themes to be integrated into future interviews, and these issues to be explored in greater depth. Comparisons will be made within and across interviews, allowing for the revision, combination or separation of codes in light of new data [63–65]. Analysis will be facilitated by qualitative data analysis software (QSR International’s NVivo 10 [54]) to assist with the collation and management of codes and themes. After an initial phase of open coding, individual codes will be grouped into overarching themes or constructs through a process of data reduction.

Specifically, the analyses will consider: how pragmatic RCTs are conceptualized and which aspects are identified as defining components of trials that are more pragmatic in design; ethical considerations in the design of trials that are more pragmatic in nature; aspects of trial design that generate ethical discussion; and considerations in the oversight or regulation of trials that are more pragmatic in nature.

Interviews will be coded independently by two researchers (SN, KC), who will then discuss their coding between themselves before presenting their analyses to the broader team for comments and further discussion. This process of dual coding has been suggested as a qualitative counterpart to traditionally quantitative notions of inter-rater reliability [66].

Discussion

This study will address important gaps in the current empirical literature by identifying a comprehensive range of ethical issues pertaining to pragmatic RCTs. Interviews will be conducted with a broad range of stakeholders including trialists, ethicists, methodologists, chairs of research ethics committees, health system leaders, and lay members of study teams. The interview guide was designed to be grounded in the experiences of participants who are actively engaged in the design, conduct, or evaluation of more pragmatic RCTs, which will help ensure that the results are applicable to clinical research and practice rather than based on hypothetical scenarios. Furthermore, by opening up a broad range of topics for consideration within our parallel ethical analysis, we will extend the current debate beyond consent to the range of ethical considerations that may flow from specific design choices. As such, the larger program of work that this exploratory study informs [51] promises to provide practical advice and guidance to a range of stakeholders in the design and conduct of RCTs, including researchers, research ethics committee members, and regulators.

A main operational issue in the present study will be the identification and recruitment of lay members of study teams who have had exposure to pragmatic RCTs, and thus experience upon which to draw during the interview. To this end, we have established connections with other stakeholder groups and identified funded trials that are pragmatic in nature and that, owing to funding requirements, require patient engagement. This will ensure that all lay members of study teams who are approached have experience of an RCT that is more pragmatic in design.

We thus expect that the study outputs will be of interest to a wide range of knowledge users including trialists, ethicists, research ethics chairs, journal editors, regulators, healthcare policymakers, research funders and patient groups.

Footnotes
1

While we acknowledge that trials exist on a spectrum, for simplicity we use ‘explanatory RCTs’ to refer to those RCTs that are more explanatory and ‘pragmatic RCTs’ to refer to those that are more pragmatic.

 

Abbreviations

CIHR: Canadian Institutes of Health Research

PCORI: Patient-Centered Outcomes Research Institute

RCT: Randomized controlled trial

REC: Research ethics committee

SPOR: Strategy for Patient-Oriented Research

Declarations

Acknowledgements

N/A

Funding

This work is supported by the Canadian Institutes of Health Research through the Project Grant competition (competitive, peer reviewed), award number PJT-153045. Jeremy Grimshaw holds a Canada Research Chair in Health Knowledge Transfer and Uptake and a CIHR Foundation Grant (FDN-143269). Charles Weijer holds a Canada Research Chair in Bioethics. Joanne McKenzie is supported by an Australian National Health and Medical Research Council Career Development Fellowship (1143429). Vipul Jairath holds a personal Endowed Chair at Western University (John and Susan McDonald Endowed Chair). Marion Campbell is based in the Health Services Research Unit, which is core-funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorates. Ian Graham is a CIHR Foundation Grant recipient (FDN# 143237).

Availability of data and materials

N/A

Authors’ contributions

MT, CW, JMG and DAF conceived the overall project idea and co-led the funding application with substantial contributions from all authors. SN led the development of the initial study protocol and interview guides and wrote the initial draft of the manuscript with substantial input from all authors. KC coordinated the study and led the ethics submission. JB, MZ, IDG, JEM, LM, MKC and VJ tested the interview guide and contributed to its refinement. CW, SPH and CEG provided ethical insights to the development of the interview guide. DAF contributed senior leadership of the project. All authors contributed critical revisions and approved the final version of the manuscript.

Authors’ information

Stuart G. Nicholls is a Senior Clinical Research Associate at the Ottawa Hospital Research Institute (OHRI).

Kelly Carroll is a Research Coordinator in the Clinical Epidemiology Program at the OHRI.

Jamie C. Brehaut is Senior Scientist, Clinical Epidemiology Program, OHRI and Associate Professor, School of Epidemiology and Public Health, University of Ottawa.

Charles Weijer is Canada Research Chair in Bioethics, and Professor in the Departments of Philosophy, Medicine and Epidemiology and Biostatistics at Western University.

Spencer Hey is a Faculty Member, Harvard Center for Bioethics; Research Scientist, Program on Regulation, Therapeutics, and Law (PORTAL), Brigham and Women’s Hospital.

Cory E. Goldstein is a doctoral student in Philosophy and a resident member of the Rotman Institute of Philosophy at Western University.

Merrick Zwarenstein is Professor, Centre for Studies in Family Medicine & Department of Family Medicine, Schulich School of Medicine & Dentistry, Western University.

Ian D. Graham is Senior Scientist, Clinical Epidemiology Program, OHRI; Professor, School of Epidemiology and Public Health, University of Ottawa; Adjunct Professor, School of Nursing, Queen’s University; and Honorary Professor, Deakin University School of Nursing and Midwifery, Melbourne, Australia. He is a Fellow of the Canadian Academy of Health Sciences, the New York Academy of Medicine and the Royal Society of Canada.

Joanne McKenzie is a biostatistician and Senior Research Fellow, School of Public Health and Preventive Medicine, Monash University.

Lauralyn McIntyre is Critical Care Physician at The Ottawa Hospital; Senior Scientist in the Clinical Epidemiology Program at the Ottawa Hospital Research Institute; and, Associate Professor in the Department of Medicine at the University of Ottawa.

Vipul Jairath is the John and Susan McDonald chair in inflammatory bowel disease, Division of Gastroenterology, Western University; and, Director of Medical Research and Development at Robarts Clinical Trials.

Marion Campbell is Professor of Health Services Research in the Health Services Research Unit (HSRU), University of Aberdeen.

Jeremy M. Grimshaw is Senior Scientist, Clinical Epidemiology Program, OHRI; Full Professor, Department of Medicine, University of Ottawa; Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake.

Dean A. Fergusson is Senior Scientist, Clinical Epidemiology Program, OHRI; Director, Clinical Epidemiology Program, OHRI; Full Professor, Departments of Medicine, Surgery, & the School of Epidemiology and Public Health, University of Ottawa; Endowed Chair, Clinical Epidemiology Program, OHRI/University of Ottawa; Scientific Co-Lead, Ontario SPOR SUPPORT Unit, Government of Ontario and the Canadian Institutes of Health Research.

Monica Taljaard is a biostatistician and Senior Scientist, Clinical Epidemiology Program, OHRI; and, Associate Professor, School of Epidemiology and Public Health, University of Ottawa.

Ethics approval and consent to participate

Potential participants will be sent an invitation letter, information sheet, and consent form. All participants will provide an initial consent to arrange an interview, either in person or in writing. Consent will be re-affirmed verbally from all participants on the date of the interview. This study received ethics approval from the Ottawa Health Science Network Research Ethics Board #20170435-01H.

Consent for publication

All authors have approved the manuscript for publication.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1)
Clinical Epidemiology Program, Ottawa Hospital Research Institute, Civic Campus, 1053 Carling Avenue, Civic Box 693, Admin Services Building, ASB 2-013, Ottawa, ON, K1Y 4E9, Canada
(2)
Rotman Institute of Philosophy, Western University, London, ON, Canada
(3)
Center for Bioethics, Harvard Medical School, Boston, MA, USA
(4)
Program on Regulation, Therapeutics and Law (PORTAL), Brigham and Women’s Hospital, Boston, MA, USA
(5)
Rotman Institute of Philosophy, Western University, London, ON, Canada
(6)
Centre for Studies in Family Medicine, Department of Family Medicine, Schulich School of Medicine & Dentistry, Western University, London, ON, Canada
(7)
School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
(8)
Department of Medicine, Division of Gastroenterology, Western University, London, ON, Canada
(9)
Department of Epidemiology and Biostatistics, Western University, London, ON, Canada
(10)
Health Services Research Unit, University of Aberdeen, Aberdeen, UK
(11)
Department of Medicine, University of Ottawa, Ottawa, ON, Canada
(12)
Faculty of Medicine, University of Ottawa, 501 Smyth Road, Box 201B, Ottawa, ON, K1H 8L6, Canada
(13)
School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada

References

  1. de Melo-Martin I, Ho A. Beyond informed consent: the therapeutic misconception and trust. J Med Ethics. 2008;34:202–5. https://doi.org/10.1136/jme.2006.019406.
  2. Pullman D, Wang X. Adaptive designs, informed consent, and the ethics of research. Control Clin Trials. 2001;22:203–10.
  3. Raymond J, Darsaut TE, Altman DG. Pragmatic trials can be designed as optimal medical care: principles and methods of care trials. J Clin Epidemiol. 2014;67:1150–6. https://doi.org/10.1016/j.jclinepi.2014.04.010.
  4. Friedman LM, Furberg CD, DeMets DL. Fundamentals of clinical trials. 3rd ed. New York: Springer; 1998.
  5. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Clin Epidemiol. 2009;62:499–505. https://doi.org/10.1016/j.jclinepi.2009.01.012.
  6. Karanicolas PJ, Montori VM, Devereaux PJ, Schunemann H, Guyatt GH. A new “mechanistic-practical” framework for designing and interpreting randomized trials. J Clin Epidemiol. 2009;62:479–84. https://doi.org/10.1016/j.jclinepi.2008.02.009.
  7. Luce BR, Drummond M, Jonsson B, Neumann PJ, Schwartz JS, Siebert U, et al. EBM, HTA, and CER: clearing the confusion. Milbank Q. 2010;88:256–76. https://doi.org/10.1111/j.1468-0009.2010.00598.x.
  8. Ford I, Norrie J. Pragmatic trials. N Engl J Med. 2016;375:454–63. https://doi.org/10.1056/NEJMra1510059.
  9. Tunis S, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290:1624–32.
  10. Oxman AD, Lombard C, Treweek S, Gagnier JJ, Maclure M, Zwarenstein M. A pragmatic resolution. J Clin Epidemiol. 2009;62:495–8. https://doi.org/10.1016/j.jclinepi.2008.08.014.
  11. Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials. 2009;10:37. https://doi.org/10.1186/1745-6215-10-37.
  12. Zwarenstein M, Treweek S. What kind of randomized trials do we need? CMAJ. 2009;180:998–1000. https://doi.org/10.1503/cmaj.082007.
  13. Zwarenstein M, Treweek S. What kind of randomised trials do patients and clinicians need? EBM. 2009;14:101–3.
  14. Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147. https://doi.org/10.1136/bmj.h2147.
  15. Chalkidou K, Tunis S, Whicher D, Fowler R, Zwarenstein M. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials. 2012;9:436–46.
  16. Patsopoulos NA. A pragmatic view on pragmatic trials. Dialogues Clin Neurosci. 2011;13:217–24.
  17. Tudur Smith C, Hickey H, Clarke M, Blazeby J, Williamson P. The trials methodological research agenda: results from a priority setting exercise. Trials. 2014;15:32.
  18. Rosala-Hallas A, Bhangu A, Blazeby J, Bowman L, Clarke M, Lang T, et al. Global health trials methodological research agenda: results from a priority setting exercise. Trials. 2018;19. https://doi.org/10.1186/s13063-018-2440-y.
  19. Wilcken B. Evaluating outcomes of newborn screening programs. Southeast Asian J Trop Med Public Health. 2003;34:13–8.
  20. Giraudeau B, Ravaud P, Donner A. Sample size calculation for cluster randomized cross-over trials. Stat Med. 2008;27:5578–85. https://doi.org/10.1002/sim.3383.
  21. Parienti JJ, Kuss O. Cluster-crossover design: a method for limiting clusters level effect in community-intervention studies. Contemp Clin Trials. 2007;28:316–23. https://doi.org/10.1016/j.cct.2006.10.004.
  22. van der Velden JM, Verkooijen HM, Young-Afat DA, Burbach JP, van Vulpen M, Relton C, et al. The cohort multiple randomized controlled trial design: a valid and efficient alternative to pragmatic trials? Int J Epidemiol. 2017;46:96–102. https://doi.org/10.1093/ije/dyw050.
  23. Lauer MS, D'Agostino RB. The randomized registry trial - the next disruptive technology in clinical research? N Engl J Med. 2013;369:1579–81. https://doi.org/10.1056/NEJMp1310771.
  24. James S, Rao SV, Granger CB. Registry-based randomized clinical trials - a new clinical trial paradigm. Nat Rev Cardiol. 2015;12:312–6.
  25. Dember LM, Archdeacon P, Krishnan M, Lacson E Jr, Ling SM, Roy-Chaudhury P, et al. Pragmatic trials in maintenance dialysis: perspectives from the kidney health initiative. J Am Soc Nephrol. 2016;27:2955–63. https://doi.org/10.1681/ASN.2016030340.
  26. Li G, Sajobi TT, Menon BK, Korngut L, Lowerison M, James M, et al. Registry-based randomized controlled trials - what are the advantages, challenges, and areas for future research? J Clin Epidemiol. 2016;80:16–24. https://doi.org/10.1016/j.jclinepi.2016.08.003.
  27. Canada's Strategy for Patient-Oriented Research. http://www.cihr-irsc.gc.ca/e/44000.html. Accessed 14 November 2018.
  28. Our Story. https://www.pcori.org/about-us/our-story. Accessed 14 November 2018.
  29. Anderson EE, DuBois JM. IRB decision-making with imperfect knowledge: a framework for evidence-based research ethics review. J Law Med Ethics. 2012;40:951–69.
  30. Lipman PD, Loudon K, Dluzak L, Moloney R, Messner D, Stoney CM. Framing the conversation: use of PRECIS-2 ratings to advance understanding of pragmatic trial design domains. Trials. 2017;18:532. https://doi.org/10.1186/s13063-017-2267-y.
  31. Lee PY, Alexander KP, Hammill BG, Pasquali SK, Peterson ED. Representation of elderly persons and women in published randomized trials of acute coronary syndromes. JAMA. 2001;286:708–13.
  32. Fiore LD, Lavori PW. Integrating randomized comparative effectiveness research with patient care. N Engl J Med. 2016;374:2152–8. https://doi.org/10.1056/NEJMra1510057.
  33. Irving E, van den Bor R, Welsing P, Walsh V, Alfonso-Cristancho R, Harvey C, et al. Series: pragmatic trials and real world evidence: paper 7. Safety, quality and monitoring. J Clin Epidemiol. 2017;91:6–12. https://doi.org/10.1016/j.jclinepi.2017.05.004.
  34. Mentz RJ, Hernandez AF, Berdan LG, Rorick T, O'Brien EC, Ibarra JC, et al. Good clinical practice guidance and pragmatic clinical trials: balancing the best of both worlds. Circulation. 2016;133:872–80. https://doi.org/10.1161/CIRCULATIONAHA.115.019902.
  35. Kalkman S, van Thiel GJ, Grobbee DE, Meinecke AK, Zuidgeest MG, van Delden JJ, et al. Stakeholders' views on the ethical challenges of pragmatic trials investigating pharmaceutical drugs. Trials. 2016;17:419. https://doi.org/10.1186/s13063-016-1546-3.
  36. Weir CR, Butler J, Thraen I, Woods PA, Hermos J, Ferguson R, et al. Veterans healthcare administration providers' attitudes and perceptions regarding pragmatic trials embedded at the point of care. Clin Trials. 2014;11:292–9.
  37. Whicher D, Kass N, Faden R. Stakeholders' views of alternatives to prospective informed consent for minimal-risk pragmatic comparative effectiveness trials. J Law Med Ethics. 2015;43:397–409.
  38. Whicher D, Kass N, Saghai Y, Faden R, Tunis S, Pronovost P. The views of quality improvement professionals and comparative effectiveness researchers on ethics, IRBs, and oversight. J Empir Res Hum Res Ethics. 2015;10:132–44. https://doi.org/10.1177/1556264615571558.
  39. Topazian RJ, Bollinger J, Weinfurt KP, Dvoskin R, Matthews D, DeCamp M, et al. Physicians' perspectives regarding pragmatic clinical trials. J Comp Eff Res. 2016;5:499–506.
  40. Weinfurt KP, Bollinger JM, Brelsford KM, Crayton TJ, Topazian RJ, Kass NE, et al. Patients' views concerning research on medical practices: implications for consent. AJOB Empir Bioeth. 2016;7:76–91. https://doi.org/10.1080/23294515.2015.1117536.
  41. Ali J, Califf R, Sugarman J. Anticipated ethics and regulatory challenges in PCORnet: the National Patient-Centered Clinical Research Network. Account Res. 2016;23:79–96. https://doi.org/10.1080/08989621.2015.1023951.
  42. Kass N, Faden R, Fabi RE, Morain S, Hallez K, Whicher D, et al. Alternative consent models for comparative effectiveness studies: views of patients from two institutions. AJOB Empir Bioeth. 2016;7:92–105. https://doi.org/10.1080/23294515.2016.1156188.
  43. Lantos JD, Wendler D, Septimus E, Wahba S, Madigan R, Bliss G. Considerations in the evaluation and determination of minimal risk in pragmatic clinical trials. Clin Trials. 2015;12:485–93. https://doi.org/10.1177/1740774515597687.
  44. O'Rourke PP, Carrithers J, Patrick-Lake B, Rice TW, Corsmo J, Hart R, et al. Harmonization and streamlining of research oversight for pragmatic clinical trials. Clin Trials. 2015;12:449–56. https://doi.org/10.1177/1740774515597685.
  45. McKinney RE Jr, Beskow LM, Ford DE, Lantos JD, McCall J, Patrick-Lake B, et al. Use of altered informed consent in pragmatic clinical research. Clin Trials. 2015;12:494–502. https://doi.org/10.1177/1740774515597688.
  46. Faden RR, Beauchamp TL, Kass NE. Informed consent, comparative effectiveness, and learning health care. N Engl J Med. 2014;370:766–8.
  47. Faden RR, Kass NE, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep. 2013;Spec No:S16–27. https://doi.org/10.1002/hast.134.
  48. Kalkman S, Kim SYH, van Thiel G, Grobbee DE, van Delden JJM. Ethics of informed consent for pragmatic trials with new interventions. Value Health. 2017;20:902–8. https://doi.org/10.1016/j.jval.2017.04.005.
  49. Kalkman S, van Thiel G, Zuidgeest MGP, Goetz I, Pfeiffer BM, Grobbee DE, et al. Challenges of informed consent for pragmatic trials. J Clin Epidemiol. 2017. https://doi.org/10.1016/j.jclinepi.2017.03.019.
  50. Lignou S. Informed consent in cluster randomised trials: new and common ethical challenges. J Med Ethics. 2017. https://doi.org/10.1136/medethics-2017-104249.
  51. Taljaard M, Weijer C, Grimshaw JM, Brehaut JC, Campbell MK, Carroll K, et al. Developing a framework for the ethical design and conduct of pragmatic trials to improve patient health and health system outcomes: study protocol for a mixed methods study. Trials. 2018;19:525. https://doi.org/10.1186/s13063-018-2895-x.
  52. Bowling A. Research methods in health. 2nd ed. Maidenhead: Open University Press; 2004.
  53. Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2000;23:334–40.
  54. QSR International Pty Ltd. NVivo qualitative data analysis software. 11th ed; 2017.
  55. Mason J. Qualitative researching. London: Sage Publications; 1996.
  56. Patton M. Qualitative research and evaluation methods. 3rd ed. Thousand Oaks: Sage Publications; 2002.
  57. Lincoln Y, Guba E. Naturalistic inquiry. New York: Sage; 1985.
  58. Krueger RA, Casey MA. Focus groups: a practical guide for applied research. 3rd ed. Thousand Oaks: Sage; 2000.
  59. McAllister M, Payne K, Macleod R, Nicholls S, Dian D, Davies L. Patient empowerment in clinical genetics services. J Health Psychol. 2008;13:895–905. https://doi.org/10.1177/1359105308095063.
  60. McRae A, Bennett C, Belle Brown J, Weijer C, Boruch R, Brehaut J, et al. Researchers' perceptions of ethical challenges in cluster randomized trials: a qualitative analysis. Trials. 2013;14:1.
  61. Boyatzis RE. Transforming qualitative information. Thousand Oaks: Sage Publications; 1998.
  62. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.
  63. Corbin JM, Strauss AL. Basics of qualitative research. 3rd ed. Thousand Oaks: Sage Publications; 2008.
  64. Strauss AL. Qualitative analysis for social scientists. Cambridge: Cambridge University Press; 1996.
  65. Fielding NG, Lee RL. Computer analysis and qualitative research. London: Sage Publications; 1998.
  66. Armstrong D, Gosling A, Weinman J, Marteau T. The place of inter-rater reliability in qualitative research: an empirical study. Sociology. 1997;31:597–606. https://doi.org/10.1177/0038038597031003015.

Copyright

© The Author(s). 2018
