
Redundant trials can be prevented if the EU Clinical Trials Regulation is applied duly

Abstract

The problem of wasteful clinical trials has been debated relentlessly in the medical community. To a significant extent, it is attributed to redundant trials – studies that are carried out to address questions that can be answered satisfactorily on the basis of existing knowledge and accessible evidence from prior research. This article presents the first evaluation of the potential of the EU Clinical Trials Regulation 536/2014, which entered into force in 2014 but is expected to become applicable at the end of 2021, to prevent such trials. Having reviewed the provisions related to trial authorisation, we propose how certain regulatory requirements for the assessment of trial applications can and should be interpreted and applied by national research ethics committees and other relevant authorities in order to avoid redundant trials and, most importantly, preclude the unnecessary recruitment of trial participants and their unjustified exposure to health risks.


Keypoints

What is known?

  • The problem of wasteful clinical trials has been exposed by empirical studies and debated intensely within the medical community.

  • To a significant extent, wasteful research can be attributed to redundant trials – ie trials that intend to address questions that can be answered satisfactorily on the basis of evidence gathered in earlier studies.

What does this article add?

  • Certain provisions under the EU Clinical Trials Regulation, which is expected to become applicable in 2021/2022, can and should be interpreted and applied in a way that empowers the institutions responsible for trial authorisation – research ethics committees (RECs) and national competent authorities (NCAs) – to play a more prominent role in preventing redundant trials.

What is proposed?

Applicants for the trial authorisation shall

  • justify a newly proposed trial by demonstrating that it addresses an outstanding clinical uncertainty in light of the available evidence relevant to the research question and the outcome of interest at issue; and

  • show how the synthesis of earlier research informed the design of a proposed trial.

  • Where no systematic review exists, applicants should make their best effort to identify and synthesise knowledge gained in prior studies.

  • Research Ethics Committees and drug regulatory authorities need to be properly staffed to effectively reduce redundant interventional studies.

Background

The problem of wasteful – ie unregistered [1, 2], biased [3], unreported [4, 5], unpublished [6, 7], clinically irrelevant, inadequately designed, or otherwise wasteful [8,9,10,11,12,13,14,15,16,17,18,19] – trials has been debated relentlessly in the medical research community. In 2009, Iain Chalmers and Paul Glasziou made the staggering claim that up to 85% of clinical trials can cumulatively be considered wasteful ([19], p. 88). By ‘waste’, the authors broadly refer to deficiencies in the ways randomised trials are designed, conducted, analysed, reported, regulated, and managed.

In general terms, a trial can be deemed to be wasteful if it does not produce new robust medical knowledge that can justify health risks borne by study participants, research efforts of investigators, and the allocated financial and other resources. Earlier research has analysed the causes and scope of the problem. The critical question is what can be done in order to eliminate or, at least, alleviate it. While opinions differ as to who – regulators [11], investigators [16], funders [16], health care professionals [20], ethics committees and journals [21], methodologists and medical statisticians [22] – should take the lead, it is clear that a unified and systematic approach needs to be implemented at all levels of decision-making.

This article focuses on the issue of redundant randomised clinical trials (RCTs) – ie trials that do not contribute to the stock of biomedical knowledge relevant for clinical practice in a way that would justify the risks and costs involved. The problem is often attributed to insufficient consideration of earlier findings. While redundancy can be difficult to detect, it is highly important that such trials are precluded at the stage of the trial application – prior to the enrolment of study participants.

Commentators have long advocated – albeit with little hope [23, 24] – that greater scrutiny should be exercised with regard to applications for clinical trials, especially as far as their justification vis-à-vis prior research and their relevance for clinical practice are concerned. The purpose of this article is to examine the potential of the EU Regulation 536/2014 [25] (hereinafter the EU Clinical Trials Regulation) to tackle the problem of research redundancy. While the Regulation was adopted and entered into force in 2014, it will become applicable only upon the publication by the European Commission of the notice confirming the full functionality of the EU portal and the EU database ([25], Article 99). Currently, full functionality of the EU portal is expected at the end of 2021 or in early 2022. In this context, the present analysis is particularly timely.

In what follows, we describe the problem of redundancy in clinical trials and review the earlier discourse in the medical community and the findings of empirical studies on this subject. Upon identifying key provisions under the EU Clinical Trials Regulation that are closely related to the justification of a trial against the background of prior research, we assess whether they can be leveraged to eliminate redundancy. Further, we propose an interpretation that can be instrumental in preventing redundant trials and discuss critical factors in applying our recommendations. We conclude by reinforcing the idea that, while the EU Clinical Trials Regulation strives to promote the competitiveness of European clinical research, it is methodological quality and ethical integrity that should be viewed as its core aspects.

Main text

The problem of redundant trials

When is a trial considered to be redundant?

A universally accepted definition of a ‘redundant trial’ hardly exists. Redundancy occurs if a trial intends to investigate a question that can be ‘answered satisfactorily with existing evidence’ ([19], p. 87), or where the outcome of interest does not involve genuine, clinically relevant uncertainty. According to the International Ethical Guidelines for Health-related Research Involving Humans, such studies, even if rigorously designed, ‘lack social value’ because the research question at issue has already been ‘successfully addressed in prior research’ ([26], p. 2). Commentators have also referred to redundant trials as ‘unnecessary duplication of research efforts’ ([9], p. 159; [17], p. 4). A clarification is necessary here: in a strict sense, a duplicative trial is one that tests an identical medicinal product for an identical condition as an earlier trial. The only case in which such duplication can be warranted is when two phase III trials are conducted to gather substantial evidence on efficacy and safety, as typically requested by drug authorities such as the European Medicines Agency in the European Union and the Food and Drug Administration in the U.S. Beyond this requirement, there is no regulatory need for duplication. The possibility of a trial being conducted for the purpose of generic drug approval can also be excluded, since in most, if not all, jurisdictions generic drugs are exempted from full-scale clinical trials in order to be authorised for marketing.

Furthermore, redundant trials should not, by any means, be confused with phase IV studies that intend to further refine the dosage recommendation or the understanding of the benefit-risk relationship in general or specific populations, and/or to identify less typical adverse reactions of medicinal products approved for marketing. It is important to emphasise that the problem of redundancy is not phase-specific but case-specific. Based on the empirical studies on this issue (partially summarised in Table 1), the concept of redundancy ought to be understood broadly in relation to therapeutic subcategories. This by no means implies that further studies investigating new interventions in a given therapeutic class or subclass are redundant per se in situations where a standard treatment exists. Quite the contrary: there can be numerous aspects that might need to be investigated in further studies. The decisive factor is not whether a standard treatment exists, but how a newly proposed trial is designed – in particular, the extent to which evidence from prior studies related to the research question and the outcome of interest has been taken into consideration, and whether the subsequent trial intends to address genuine uncertainty and novel aspects of a treatment. Having said that, it is worth noting that, in the case of phase IV trials, the problem in practice might be the reverse: there is likely to be an underproduction of comparative evidence rather than an excess of studies on the therapeutic use of treatments in the post-marketing authorisation phase.

Table 1 An overview of studies measuring the scale of redundancy in RCTs

While it appears straightforward that a new trial should be initiated only if it is ‘necessary to address relevant uncertainty about the effects of one or more forms of health care’ ([37], p. 1391), evidence suggests that this fundamental principle of scientific research has often been neglected, and that trials continue being conducted – overall involving a large number of patients – long after the beneficial effect of a treatment has been established (see Table 1). Remarkably, in some cases, studies were ‘claimed to be the first trial even when many trials preceded them’ ([24], p. 54).

The cause of the problem is often attributed to the insufficient consideration of findings of earlier research and, especially, systematic reviews. Commentators have long argued that clinical trials ‘should begin and end with systematic reviews of relevant evidence’ [38], and that ‘research funders and regulators should demand that proposals for additional primary research are justified by systematic reviews showing what is already known, and increase funding for the required syntheses of existing evidence’ ([9], p. 156). Yet, evidence suggests that only a small fraction of RCTs explicitly reference systematic reviews as the justification for undertaking a new trial [36]. Even though the non-citation of relevant systematic reviews, in and of itself, does not render a trial redundant, it does raise a question as to what knowledge base supports the research hypothesis.

Needless to say, redundant trials are, first and foremost, unethical as they unjustifiably expose patients to health risks ([26], p. 88). They also violate the scientific principle that ‘the progress depends on new research being carried out and interpreted in the context of systematic reviews of all other relevant and reliable evidence’ ([39], para. 6.C.20.2). The opportunity cost of such studies corresponds to knowledge gaps that remain unaddressed [40,41,42], as well as inefficiencies in the allocation of resources due to missed opportunities to make the design of subsequent trials more informed and targeted ([29], p. 984).

Even though redundancy can hardly be quantified, several studies summarised in Table 1 attempted to measure the scope of the problem.

Earlier proposals and initiatives

The issue of the under-use of systematic reviews in the planning and design of new trials, and the need for greater scrutiny of trial applications in this regard, have been discussed in the medical community at least since the late 1980s. Highlighting the need for a thorough examination of the existing evidence when new trials are planned and designed, Carpenter refers to an example reported in 1989, where studies had continued to investigate the effect of prophylactic antibiotics on the risk of infection after caesarean section for nearly two decades after the beneficial effect of antibiotics had been established ([43], p. 222). In 1993, Herxheimer put forward the proposal that a clinical trial ‘should be accompanied by a thorough review of all previous trials that have examined the same and closely related questions’ ([44], p. 211). In 1996, Savulescu, Chalmers and Blunt alleged that RECs ‘are behaving unethically by endorsing new research which is unnecessary’ ([37], p. 1390). They insisted that proposals for new trials should be supported by ‘scientifically defensible reviews of the results of relevant existing research’ (ibid., p. 1391). Commentators have been sceptical, however, as to whether this requirement can be effectively enforced. For instance, Robinson and Goodman observe that there are ‘no barriers to funding, conducting, or publishing an RCT without proof that the prior literature had been adequately searched and evaluated’ ([24], p. 54, emphasis added), and that institutional review boards have ‘neither the capacity nor the charge to second-guess a researcher’s claim that a new RCT is needed’ (ibid.).

An early attempt to institutionalise the requirement to submit systematic reviews can be traced to 1997 when the Danish national REC reportedly adopted a guidance requiring ‘applicants for ethical approval to show that they have carried out a full systematic review of the relevant scientific literature before the study will be approved’ ([45], p. 1189). According to Goldbeck-Wood, the initiative was driven by then Chairman of the Danish national REC Povl Riis, who believed that ‘me too studies’ are ‘unethical, because they randomise patients to receive a placebo intervention or drug, when an active drug is already known to exist [and] also waste valuable research funds without adding any new information, and drain the precious resource of appropriate control groups’ [ibid]. The provision, however, has not survived to date, and the current Danish Act on Research Ethics Review of Health Research Projects does not explicitly mention systematic reviews but only lists among the conditions for the authorisation that a proposed research project ‘should lead to new knowledge or investigate existing knowledge, which justifies the implementation of the research project’ ([46], section 18(1)(3)).

In 2014, the UK Health Research Authority developed a guidance ‘Specific questions that need answering when considering the design of clinical trials’ [47]. The authors of the guidance emphasise that the trial design ‘should be underpinned by a systematic review of the existing evidence, which should be reported in the protocol’ ([47], p. 2).

As an example of an editorial policy, The Lancet introduced a requirement that, as of 1 January 2015, authors submitting research papers to any journal within The Lancet group must include a section ‘Research in Context’, in which they have to describe all evidence, as well as its sources, that was taken into consideration prior to undertaking the study and indicate what value their findings can add to the existing evidence. Furthermore, the explanatory paper [48] accompanying the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) Statement [49] recommends that relevant evidence such as systematic reviews should be cited in the protocol to support a proposed trial.

Given that publication and reporting guidance documents are not legally binding, the only instrument that can perform the ‘gatekeeping’ function is the regulatory authorisation of trials, in particular, requirements regarding the quality and the assessment of trial applications. While current practices of examining trial applications vary among the EU Member States, a consistent and harmonised approach is crucial for ensuring research quality throughout the Union. In what follows, we examine whether the EU Clinical Trials Regulation [25] can provide a relevant legal basis to support the above-mentioned initiatives in a unified way.

Specific provisions that can be leveraged against redundant trials

The revision of the EU Clinical Trials Directive [50] mainly pursued the following objectives: first, to modernise the regulatory framework for the submission, assessment, and regulatory follow-up of applications for clinical trials; second, to adapt regulatory requirements to practical considerations and needs; third, to address the global dimension of clinical trials when ensuring compliance with good clinical practice (GCP) ([51], p. 29–31). Towards those ends, the EU Clinical Trials Regulation introduced a streamlined application procedure through the EU portal, the concept of ‘low-intervention’ studies, and the EU database for clinical trial data.

Even though the revision of the EU Clinical Trials Directive did not tackle the issue of ‘wasteful research’, some provisions under the adopted EU Clinical Trials Regulation – if applied and enforced appropriately – can be instrumental in this regard. In particular, the new requirements for the publication of trial data and the establishment of the new EU database under Article 81 can reduce waste resulting from the lack of transparency. As far as the issue of research redundancy is concerned, relevant provisions can be found among the requirements for the trial authorisation – in particular, those related to the methodology of a proposed study, which are summarised in Table 2.

Table 2 An overview of the provisions under the EU Clinical Trials Regulation related to the justification of a clinical trial in light of the prior research

Several limitations of the identified provisions can be pointed out. Most importantly, neither systematic reviews nor a critical assessment of earlier studies is explicitly required. The scientific background has to be provided in the form of references to relevant literature and studies. A mere referencing of literature is, however, insufficient, as it does not involve the analysis and synthesis of the evidence. Nor is it clear whether and how the referenced literature actually informed the design of a proposed trial. While ‘the scientific context’ and ‘the current state of scientific knowledge’ have to be presented, the question arises as to how they ought to be assessed, especially in terms of quality and completeness. The scope of the regulatory mandate of the institutions in charge of trial authorisation appears to be ambiguous in this regard. If approached formalistically, a critical analysis of whether the proposed study, as designed, is necessary and justified might be missing.

Next, only data on the investigational medicinal product is explicitly required to be submitted within the investigational medicinal product dossier. If, for instance, a new beta blocker were to be tested in patients with myocardial infarction, a systematic review of the evidence on the available beta blockers in patients with myocardial infarction – and, most importantly, on whether a substantially efficacious product or method of treatment already exists – might be neglected. Even though the EU Clinical Trials Regulation promulgates the overarching principle of ensuring data reliability and robustness ([25], Articles 3 and 6(1)(b)(i)), this principle corresponds to the internal validity of a trial, ie whether the trial is designed in a way that answers its research question reliably ([52], p. 8:2). Whether a study asks a clinically relevant question to begin with, however, lies outside the concept of internal validity.

At the same time, it is important to highlight that several provisions broaden the scope of the prior research that ought to be taken into account, such as the requirements to submit data on other relevant products within the investigator’s brochure and findings from other relevant studies within the trial protocol. The benefit of the broad language of these provisions – in particular, the requirements concerning the relevant ‘scientific context’, ‘relevant literature and data’ and ‘trial appropriateness’ – is twofold: first, such language enlarges the scope of the scientific background that needs to be taken into consideration when a new trial is planned, designed and assessed, extending it beyond prior evidence on the investigational product alone; second, it leaves leeway for interpretation. Accordingly, the effectiveness of the EU Clinical Trials Regulation in addressing the problem of redundancy critically depends on how these requirements are applied by the institutions involved in trial authorisation.

The proposed interpretation and justification

In view of the above-outlined considerations, we propose that the requirements for trial applications and their assessment – in particular, under Article 6(1)(b)(i) second indent and Article 25(1)(a) of the EU Clinical Trials Regulation – shall be interpreted and applied as:

  a) a duty on applicants for the trial authorisation to justify the need to conduct a new trial by demonstrating that it addresses an outstanding clinical uncertainty based on the critical assessment of the accessible relevant evidence from earlier research, including in the form of systematic reviews; and

  b) a duty on the institutions in charge of the authorisation of clinical trials – typically, NCAs and RECs – to require and critically examine such justification.

In what follows, we explain the main rationales supporting the proposal.

Preventing unnecessary trials as a matter of protection of the well-being of trial participants

The proposal is supported by the teleological method of interpretation that construes the meaning of legal provisions in light of the underlying policy objectives and principles. Ethical integrity and scientific quality of clinical trials are the core values that lie at the heart of the EU Clinical Trials Regulation. The protection of the rights, safety, dignity and well-being of subjects, as well as ensuring reliability and robustness of clinical trial data constitute the main objective and the general principle of conducting interventional studies ([25], Recital 85, Article 3). These values correspond to the universal ethical principles proclaimed under the 1964 Declaration of Helsinki [53] and the overarching notion of good clinical practice. The latter is defined under the 1996 Guideline of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) as ‘[a] standard for the design, conduct, performance, monitoring, auditing, recording, analyses, and reporting of clinical trials that provides assurance that the data and reported results are credible and accurate, and that the rights, integrity, and confidentiality of trial subjects are protected’ ([54], para. 1.24). In fact, the wording of the objectives and principles under the EU Clinical Trials Regulation ([25], Recital 85, Article 3) is a slight paraphrase of the definition of GCP under the ICH Guideline.

The fundamental ethical principle of medical research in humans posits that health risks borne by study subjects can only be justified by knowledge that could not be otherwise obtained ([26], p. 1). Accordingly, it appears straightforward that preventing redundant trials shall be viewed as a matter of protection of the rights, safety and well-being of trial participants and, thus, come under the purview of RECs.

The necessity of a trial as a matter of the ethical assessment

Ambiguity can arise as to which institutions should examine the necessity of a trial vis-à-vis prior knowledge. Article 4 of the EU Clinical Trials Regulation stipulates that a trial application shall be subject to both ethical and scientific review. The division of tasks and responsibilities between RECs and NCAs is, however, not harmonised in this regard. Notably, the participation of RECs in the evaluation of aspects related to the trial methodology, including the trial relevance and scientific context, is optional. Article 4 reads:

The ethical review shall be performed by an ethics committee in accordance with the law of the Member State concerned. The review by the ethics committee may encompass aspects addressed in Part I of the assessment report for the authorisation of a clinical trial as referred to in Article 6 and in Part II of that assessment report as referred to in Article 7 as appropriate for each Member State concerned (emphasis added).

In addition, Recital 18 of the EU Clinical Trials Regulation states that the Member States have full discretion ‘to determine the appropriate body or bodies to be involved in the assessment of the application to conduct a clinical trial and to organise the involvement of ethics committees’.

The current situation varies among the EU Member States in this regard: in some countries, such as Greece [55], RECs are in charge of only Part II of the assessment report (ie ethical aspects, such as informed consent and compensation issues, according to Article 7 of the EU Clinical Trials Regulation), while Part I (the scientific assessment pursuant to Article 6 of the EU Clinical Trials Regulation) is evaluated by other competent bodies. In other countries, such as Germany, RECs make the overall assessment, including scientific quality, legitimacy and ethical justifiability [56, 57].

It is important to emphasise that the distinction between the ethical and scientific review is notional only. The two aspects cannot be separated because scientifically unsound research involving humans is unethical as it exposes trial participants to unjustified health risk ‘for no purpose’ ([26], p. 88). As stressed by the Guidelines of the Council for International Organizations of Medical Sciences (CIOMS), RECs ‘must […] recognize that the scientific validity of the proposed research is essential for its ethical acceptability’ and, therefore, ‘must either carry out a proper scientific review, verify that a competent expert body has determined the research to be scientifically sound, or consult with competent experts to ensure that the research design and methods are appropriate’ ([26], p. 88). The UNESCO Universal Declaration on Bioethics and Human Rights states: ‘Independent, multidisciplinary and pluralist ethics committees should be established, promoted and supported at the appropriate level in order to [inter alia] assess the relevant ethical, legal, scientific and social issues related to research projects involving human beings’ ([58], article 19(a), emphasis added).

As a legally non-binding but norm-setting instrument laying down ethical principles for conducting medical research in humans, the 2012 Guide for Research Ethics Committee Members of the Council of Europe [39] makes an explicit reference to systematic reviews in relation to the scientific quality and justification of a trial as a prerequisite for its authorisation. In particular, it states that RECs ‘must be satisfied about the scientific quality of the research proposal’ ([39], para. 5.A.1.1), that they ‘should pay particular attention to the scientific justification for the proposed research [in order to] help prevent inappropriate research’, and that systematic reviews of research results, in animals as well as in human beings, are ‘especially important’ in that regard ([39], para. 6.C.1). Furthermore, the Guide mentions that the ‘aim of and justification for the research based on the most up-to-date review of scientific evidence’ ([39], para. 6.C) should be stated in the description of the study, which is provided to and examined by RECs.

In sum, there is no doubt that RECs have every reason and the discretion to evaluate the necessity of a newly proposed trial – and, towards this goal, to require systematic reviews – as part of the ethical assessment of trial applications.

A mere reference to a systematic review is not sufficient

Systematic reviews can aid in designing, conducting, and analysing clinical trials in numerous ways, including by informing the choice of comparator, sample size calculation, eligibility criteria, and the selection and definition of the trial outcomes [31, 59,60,61,62]. Thus, apart from referencing relevant systematic reviews, it is important that a trial application demonstrate how they informed the design of the proposed trial. The study conducted by Habre et al. [23] illustrates this point. While a systematic review published in 2000 had already questioned the necessity of performing new trials to identify another analgesic intervention to prevent pain from propofol injection, 136 trials were subsequently conducted. Remarkably, the authors were unable to identify significant differences between the designs of the trials citing and not citing the systematic review at issue ([23], p. 5). Thus, without an explicit explanation, it cannot be taken for granted that the referenced literature in fact informed the design of a new study.
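To make the sample size point concrete, the following sketch (a minimal illustration in Python; the effect size and standard deviation are invented for demonstration and are not taken from any cited study) shows how a pooled estimate from a systematic review could feed into the planning of a new two-arm trial, using the standard normal-approximation formula for comparing two means.

```python
from scipy.stats import norm

def sample_size_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-arm parallel trial
    comparing the means of a continuous outcome (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = norm.ppf(power)            # value corresponding to the desired power
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
    return int(n) + 1                   # round up to whole participants

# Hypothetical planning scenario: a systematic review suggests a pooled mean
# difference of 4 points (SD about 10), whereas a single optimistic trial
# suggested 8 points. The assumed effect drives the result quadratically.
print(sample_size_per_arm(delta=4, sd=10))  # about 99 participants per arm
print(sample_size_per_arm(delta=8, sd=10))  # about 25 participants per arm
```

The point of the illustration is simply that the assumed treatment effect enters the calculation quadratically; anchoring it in the synthesised evidence rather than in a single favourable study therefore directly affects whether a proposed trial is adequately powered or needlessly large.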

Where to draw the line?

The proposal to apply the requirement of justifying new trials against the background of relevant systematic reviews more stringently can be viewed as an ‘obvious remedy’ ([24], p. 54). Several critical aspects require, nevertheless, further methodological reflection.

Redundancy vs. genuine uncertainty

In some cases, redundancy can be evident, especially where the robust efficacy of a treatment has been clearly demonstrated, such as in the astounding cases described by Habre et al. [23], where 136 trials involving 19,778 patients were conducted after a systematic review had questioned the necessity of conducting new studies; by Fergusson et al. [21], where 52 RCTs were conducted to address a question of efficacy that had been established definitively by prior research; or by Ker et al. [30], where trials were conducted 10 years after reliable evidence had confirmed the treatment effect at issue. In some situations, an outcome of interest can be derived by way of scientific inference, whereby biological insights into the mechanism of action of a treatment and knowledge of relevant factors can inform the decision-making ([63], p. 6). Whether scientific inference can be justified has to be assessed on a case-by-case basis.

However, in other cases, there can be a fine line between instances where a research question can be answered conclusively on the basis of the accessible evidence and instances where clinical uncertainty may still persist. Even if the efficacy of a treatment has been demonstrated, subsequent studies can be justified if they seek to examine uncertain aspects of a treatment, optimise the dosage regimen, or find a more favourable benefit-risk ratio. Such differences vis-à-vis prior research should be reflected in the objectives of a newly proposed RCT and in the way it is designed, especially as far as the definition of the outcome of interest and the endpoints is concerned. For instance, when commenting on the study by Fergusson et al., Augoustides and Fleisher suggested that the intention to find a more cost-effective dosage regimen, or a better benefit-risk balance, is a possible reason for redundancy in trials ([64], p. 231–2). That, however, was not the case: the follow-on trials examined by Fergusson et al. displayed homogeneity in terms of their objectives and outcome measures ([21], p. 224).

Replicability vs. redundancy

While the recommendation that replication ‘to check the validity of previous research is justified, but unnecessary duplication is unethical’ ([65], p. 2) is sensible, in many cases it might be difficult to draw the line between redundancy and the need to ensure generalizability. The latter means that study results can be applied in contexts or populations other than the original ones. Generalizability relates to the replicability of medical knowledge, ie the ability to replicate the data from an earlier study by following the same procedures ([66], p. 4). Both replicability and generalizability are difficult to achieve in RCTs due to the biological variability of study subjects and diseases.

Notably, the study by Ker and Roberts [63] found that concerns about the generalizability of the results of earlier studies – including due to changes in patient characteristics – are often indicated as the main motivation for new trials. The authors assume that awareness of systematic reviews confirming a reliable demonstration of a treatment effect can, in fact, stimulate an increase rather than a decrease in trial activity, as investigators would be motivated to confirm the treatment effect in a different population. In such cases, potential redundancy can only be detected through a thorough analysis of whether the generalizability and replicability of findings from earlier relevant studies can be called into question. Such an assessment, in turn, crucially depends on the accessibility of the trial protocols of prior studies. Access to non-summary-level clinical trial data has been, and still is, challenging in many jurisdictions. At the same time, policy measures such as the establishment of the EU Clinical Trial database, the new transparency requirements providing for the publication of clinical study reports ([25], Article 81), the policy of the European Medicines Agency on access to trial data [67], as well as the publication policies of medical journals [68] can, to a significant extent, alleviate the problem and enable secondary data analyses.

Reliability of data from earlier studies and systematic reviews – a vicious cycle?

Another critical factor is the reliability of data from earlier studies and the quality of systematic reviews. Commentators have pointed out the problem of the exponential production of redundant, potentially conflicting and misleading systematic reviews and meta-analyses [69, 70]. Some have argued that many systematic reviews ‘fail to provide a complete and up-to-date synthesis of evidence’, and that ‘failure to rigorously synthesize the totality of relevant evidence may have a detrimental effect on treatment decisions and future research planning’ ([71], p. 2; [72, 73]). The low quality of meta-analyses has even been viewed as a more significant cause of redundant research than the failure to appraise the existing evidence ([63], p. 1).

In light of these allegations, one may question whether investigators of newly proposed trials might be better off not relying on conclusions drawn from the synthesis of the reported data. As a safeguard that could alleviate such concerns, at least to some extent, systematic reviews referenced in trial applications should demonstrate adherence to recognised quality standards and methodological guidance, such as the guidelines developed by the Cochrane Collaboration [74], as well as to established publication standards, guidelines, and principles [75, 76]. For instance, the study by Sun et al. [77] showed that, since the publication of the PRISMA Statement, the quality of reporting of systematic reviews and meta-analyses in the area of nursing interventions in patients with Alzheimer’s disease has improved. Besides, various analytical methods, such as cumulative network meta-analysis [78,79,80], and analytical tools [81] can assist trialists in identifying relevant prior studies and managing exponentially growing trial data [82] and, ultimately, ‘prevent experimentation with an unnecessarily large number of participants’ ([79], p. 1).
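The logic of a cumulative meta-analysis, referred to above, can be illustrated with a minimal sketch (invented trial data; simple fixed-effect, inverse-variance pairwise pooling rather than the network variant cited): trials are added in chronological order and the pooled estimate is recomputed after each addition, which makes visible the point from which the evidence was already conclusive and further near-identical trials would have added little.

```python
import math

# Hypothetical trials in chronological order: (year, effect estimate, standard error).
# Negative effects favour the treatment; all values are invented for illustration.
trials = [
    (2001, -0.10, 0.20),
    (2003, -0.30, 0.15),
    (2005, -0.25, 0.12),
    (2008, -0.28, 0.10),
    (2012, -0.27, 0.08),
]

weight_sum = 0.0
weighted_effects = 0.0
for year, effect, se in trials:
    w = 1.0 / se ** 2                       # inverse-variance weight
    weight_sum += w
    weighted_effects += w * effect
    pooled = weighted_effects / weight_sum  # fixed-effect pooled estimate
    pooled_se = math.sqrt(1.0 / weight_sum)
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    settled = hi < 0                        # 95% CI excludes 'no effect'
    print(f"after {year}: pooled effect {pooled:.2f} "
          f"(95% CI {lo:.2f} to {hi:.2f})"
          f"{' <- evidence already conclusive' if settled else ''}")
```

With these invented numbers, the confidence interval excludes the null from the third trial onward; in a real assessment, an analogous (and methodologically far more careful) synthesis is what would allow a REC or NCA to ask whether a newly proposed trial still addresses genuine uncertainty.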

The ‘best-in-class’ strategy vs. redundancy

To a large extent, follow-on trials are driven by the so-called ‘best-in-class’ [83, 84] competitive strategy of drug companies, directed at the development of drug improvements. The ‘best-in-class’ competitive strategy implies that pharmaceutical companies aim at the improvement of drugs with a particularly advantageous economic profile, while the ‘first-in-class’ strategy pursues the development of ‘breakthrough’ drugs ([83], p. 12). Competition by drug improvements includes the development of new formulations, modes of administration, and combinations of active ingredients with known therapeutic activity ([84], p. 49). The critical question is: at what point should research and development efforts addressing a particular condition cease and be diverted to unresolved clinical uncertainties, especially if a substantially efficacious treatment has already been identified among the alternatives?

The need for follow-on RCTs can only be determined on a case-by-case basis. While, in some situations, follow-on drugs might feature higher efficacy, reduced side effects, or a more convenient regimen ([85], p. 34–35), in other cases they represent modifications of existing medicines that are so minor, and for which the clinical need is so small, that the clinical benefit would not outweigh the costs of conducting trials.

In any event, the need for a new RCT should be considered carefully, as some clinical uncertainties can, under certain conditions, be adequately investigated without randomisation, eg where there is a good understanding of the mode of action, such as with β-blockers, ACE inhibitors, or statins. There is a long-standing discussion in the medical research community concerning the conditions under which a randomised trial is definitely needed [86, 87]. At the same time, evidence shows that regulatory approval of pharmaceuticals without randomised controlled studies is nowadays commonly granted by agencies such as the European Medicines Agency and the U.S. Food and Drug Administration [88].

In situations where a new trial is conducted in the presence of an established treatment, it would be logical to expect that the investigational product be compared with that reference treatment whenever ethically acceptable. When a randomised trial is planned, it has to be thoroughly examined whether access to the best effective treatment is limited (eg by a placebo control) beyond what is acceptable under the current ethical standards established by the Declaration of Helsinki [53]. Studies report disturbing evidence that randomised placebo-controlled trials continue to be the dominant study design for assessing pharmacological interventions [89]. Apart from the obvious ethical concerns, studies in which the use of placebo is unjustified can represent a significant source of waste, as they do not generate knowledge regarding the comparative benefits and risks of medical interventions [90].

It is important to emphasise that the use of placebo can be justified only where there is a genuine uncertainty as to whether one treatment is superior to placebo (the ethical principle of clinical equipoise) ([91], p. 141). The CIOMS International Ethical Guidelines for Health-related Research Involving Humans state that, as a general rule, study participants in the control group of a trial must receive an established effective intervention, where an established effective intervention exists for the condition under investigation ([26], p. 9, 15, 17). According to the ICH Guideline, for serious diseases, if a therapeutic treatment which has been proved to be efficacious by superiority trials exists, a placebo-controlled trial can be deemed unethical and ‘the scientifically sound use of an active treatment as a control should be considered’ ([92], p. 14). Accordingly, there is no doubt that the choice of control should constitute a part of the ethical assessment of trial applications.

Is it feasible at all?

When responding to the article by Savulescu, Chalmers and Blunt [37] calling on RECs to consider more critically the need for new studies, a representative of a REC contended that ‘it is unreasonable and unrealistic’ to expect the quality of medical research to be improved ‘entirely through the mechanism of review by local research ethics committees’ ([93], p. 676). This remark is accurate insofar as many RECs, as well as NCAs, lack human resources in the fields of research methodology and biostatistics. The Report of the European Commission [94] shows that, in many Member States, the number of quality and clinical assessors involved in the assessment of clinical trials at the NCAs is extremely limited ([94], p. 22–23). Regardless of whether these assessors are physicians, pharmacologists, toxicologists, pharmacists or, in some rare cases, biostatisticians, the workforce is highly disproportionate to the workload ([94], p. 28–29). In view of this, it might simply be unfeasible for RECs to conduct an in-depth evaluation of the quality, relevance, and completeness of the submitted systematic reviews or other background literature.

Thus, in reality, RECs ‘seem to stand alone, with limited resources and sometimes not enough scientific credit or knowledge to identify, and stop, the performance of irrelevant research’ ([23], p. 4). While this situation is regrettable, the point is that it should be viewed and remedied as a problem of insufficient human resources, not as the absence of a legal basis for requiring stronger justification of new trials in light of prior research.

Conclusions

In line with the claim made in the title, we have shown that the EU Clinical Trials Regulation clearly has the potential to reduce redundant RCTs effectively. The extent to which it can do so depends on how the requirements for trial authorisation are interpreted and applied by the institutions concerned. The recommendations proposed in this article for exercising greater stringency as to the justification of new trials in light of prior research can provide further guidance and be instrumental in this regard. The proposal is supported by the fundamental objectives and underlying principles of conducting research in humans that are promoted by the EU Clinical Trials Regulation – namely, the protection of the well-being of trial participants and the assurance of data reliability and robustness – as well as by the overarching concept of good clinical practice. To the extent that these principles are applied by investigators, sponsors, RECs and other institutions involved in the authorisation and monitoring of clinical trials, the analysed regulatory provisions can be effective in tackling the problem of research redundancy.

The review of the EU Clinical Trials Regulation will not take place before 2024 and is expected to address the regulatory impact on scientific and technological progress and on the competitiveness of European clinical research ([25], Article 97). In view of the foregoing, we argue that it is research quality that should be viewed as the principal component of competitiveness, and that the revision of the EU Clinical Trials Regulation should strengthen the methodological aspects of trial planning and design addressed by the present analysis. Until then, the institutions responsible for trial authorisation should rely on the regulatory provisions regarding the preparation and assessment of trial applications, identified and analysed in this article, as the legal basis for examining more stringently the necessity of new studies and their methodological quality.

Availability of data and materials

Not applicable.

Abbreviations

CIOMS:

The Council for International Organizations of Medical Sciences

EU:

The European Union

GCP:

Good clinical practice

ICH:

The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use

IPD:

Individual patient data

NCA:

National competent authority

RCT:

Randomised clinical trial

REC:

Research ethics committee

SR:

Systematic review

UNESCO:

The United Nations Educational, Scientific and Cultural Organization

References

  1. The World Health Organisation, trial registration. Why is trial registration important? In: international clinical trials registry platform (ICTRP). http://www.who.int/ictrp/trial_reg/en/. Accessed 23 Jul 2020.

  2. Killeen S, Sourallous P, Hunter IA, Hartley JE, Grady HL. Registration rates, adequacy of registration, and a comparison of registered and published primary outcomes in randomized controlled trials published in surgery journals. Ann Surg. 2014;259(1):193–6.

  3. Gøtzsche PC. Reference Bias in reports of drug trials. Br Med J (Clin Res Ed). 1987;295:654.

  4. Goldacre B, DeVito NJ, Heneghan C, Irving F, Bacon S, Fleminger J. Compliance with requirement to report results on the EU clinical trials register: cohort study and web resource. BMJ. 2018;362:k3218.

  5. Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291(20):2457–65.

  6. Yilmaz T, Jutten RJ, Santos CY, Hernandez KA, Snyder PJ. Discontinuation and nonpublication of interventional clinical trials conducted in patients with mild cognitive impairment and Alzheimer's disease. Alzheimers Dement (N Y). 2018;4:161–4.

  7. Jones CW, Handler L, Crowell KE, Keil LG, Weaver MA, Platts-Mills TF. Non-publication of large randomized clinical trials: cross sectional analysis. BMJ. 2013;347:f6104.

  8. Clarke M. Doing new research? Don’t forget the old. Nobody should do a trial without reviewing what is known. PLoS Med. 2004;1:100–2.

  9. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.

  10. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

  11. Al-Shahi Salman R, Beller E, Kagan J, Hemminki E, Phillips RS, Savulescu J. Increasing value and reducing waste in biomedical research regulation and management. Lancet. 2014;383(9912):176–85.

  12. Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gøtzsche PC. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

  13. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383(9913):267–76.

  14. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JP. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101–4.

  15. Moher D, Glasziou P, Chalmers I, Nasser M, Bossuyt PMM, Korevaar DA. Increasing value and reducing waste in biomedical research: who’s listening? Lancet. 2016;387(10027):1573–86.

  16. Flohr C, Weidinger S. Research waste in atopic eczema trials-just the tip of the iceberg. J Invest Dermatol. 2016;136(10):1930–3.

  17. Clarke M, Brice A, Chalmers I. Accumulating research: a systematic account of how cumulative meta-analyses would have provided knowledge, improved health, reduced harm and saved resources. PLoS One. 2014;9(7):e102670.

  18. Storz-Pfennig P. Potentially unnecessary and wasteful clinical trial research detected in cumulative meta-epidemiological and trial sequential analysis. J Clin Epidemiol. 2016;82:61–70.

  19. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

  20. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290(12):1624–32.

  21. Fergusson D, Glass KC, Hutton B, Shapiro S. Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding? Clin Trials. 2005;2(3):218–29.

  22. Yordanov Y, Dechartres A, Porcher R, Boutron I, Altman DG, Ravaud P. Avoidable waste of research related to inadequate methods in clinical trials. BMJ. 2015;350:h809.

  23. Habre C, Tramèr MR, Pöpping DM, Elia N. Ability of a meta-analysis to prevent redundant research: systematic review of studies on pain from propofol injection. BMJ. 2014;348:g5219.

  24. Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann intern med. 2011;154(1):50–5.

  25. Regulation 536/2014/EU of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC [2014] OJ L158.

  26. The Council for International Organizations of Medical Sciences (CIOMS) in collaboration with the World Health Organization (WHO). International ethical guidelines for health-related research involving humans. 4th ed. CIOMS; 2016. https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf. Accessed 23 Jul 2020.

  27. Lau J, Antman EM, Jimenez-Silva J, Kupelnick B, Mosteller F, Chalmers TC. Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med. 1992;327(4):248–54.

  28. Cooper NJ, Jones DR, Sutton AJ. The use of systematic reviews when designing studies. Clin Trials. 2005;2(3):260–4.

  29. Goudie AC, Sutton AJ, Jones DR, Donald A. Empirical assessment suggests that existing evidence could be used more fully in designing randomized controlled trials. J Clin Epidemiol. 2010;63(9):983–91.

  30. Ker K, Edwards P, Perel P, Shakur H, Roberts I. Effect of tranexamic acid on surgical bleeding: systematic review and cumulative meta-analysis. BMJ. 2012;344:e3054.

  31. Jones AP, Conroy E, Williamson PR, Clarke M, Gamble C. The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials. BMC Med Res Methodol. 2013;13:50.

  32. Clayton GL, Smith IL, Higgins JPT, Mihaylova B, Thorpe B, Cicero R. The INVEST project: investigating the use of evidence synthesis in the design and analysis of clinical trials. Trials. 2017;18(1):219.

  33. Tierney JF, Pignon JP, Gueffyier F, Clarke M, Askie L, Vale CL. How individual participant data meta-analyses have influenced trial design, conduct, and analysis. J Clin Epidemiol. 2015;68(11):1325–35.

  34. De Meulemeester J, Fedyk M, Jurkovic L, et al. Many randomized clinical trials may not be justified: a cross-sectional analysis of the ethics and science of randomized clinical trials. J Clin Epidemiol. 2018;97:20–5.

  35. Blanco-Silvente L, Castells X, Garre-Olmo J, et al. Study of the strength of the evidence and the redundancy of the research on pharmacological treatment for Alzheimer’s disease: a cumulative meta-analysis and trial sequential analysis. Eur J Clin Pharmacol. 2019;75:1659–67.

  36. Walters C, Torgerson T, Fladie I, Clifton A, Meyer C, Vassar M. Are randomized controlled trials being conducted with the right justification? J Evid Based Med. 2020:1–2.

  37. Savulescu J, Chalmers I, Blunt J. Are research ethics committees behaving unethically? Some suggestions for improving performance and accountability. BMJ. 1996;313(7069):1390–3.

  38. Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet. 2010;376(9734):20–1.

  39. The Council of Europe. Guide for research ethics committee members. Council of Europe; 2012.

  40. Crowe S, Fenton M, Hall M, Cowan K, Chalmers I. Patients’, clinicians’ and the research communities’ priorities for treatment research: there is an important mismatch. Res Involv Engagem. 2015;1:2.

  41. Ospina NS, Rodriguez-Gutierrez R, Brito JP, Young WF, Montori VM. Is the endocrine research pipeline broken? A systematic evaluation of the Endocrine Society clinical practice guidelines and trial registration. BMC Med. 2015;13:187.

  42. Tallon D, Chard J, Dieppe P. Relation between agendas of the research community and the research consumer. Lancet. 2000;355(9220):2037–40.

  43. Carpenter LM. Is the study worth doing? Lancet. 1993;342(8865):221–3.

  44. Herxheimer A. Clinical trials: two neglected ethical issues. J Med Ethics. 1993;19(4):211–8.

  45. Goldbeck-Wood S. Denmark takes a lead on research ethics. BMJ. 1998;316:1189.

  46. Act on research ethics review of Health Research projects of 13 November 2018. http://en.nvk.dk/rules-and-guidelines/act-on-research-ethics-review-of-health-research-projects. Accessed 23 Jul 2020.

  47. Clark T, Davies H, Mansmann U. Five questions that need answering when considering the design of clinical trials. Trials. 2014;15:286.

  48. Chan AW, Tetzlaff JM, Gøtzsche PC, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ. 2013;346.

  49. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158(3):200–7.

  50. Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001 on the approximation of the laws, regulations and administrative provisions of the Member States relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use [2001] OJ L121.

  51. The European Commission. Commission staff working document. Impact assessment report on the revision of the “clinical trials directive” 2001/20/EC accompanying the document proposal for a regulation of the European Parliament and of the council on clinical trials on medicinal products for human use, and repealing directive 2001/20/EC. SWD (2012) 200 final. 17 July 2012. Vol I. https://ec.europa.eu/health/sites/health/files/files/clinicaltrials/2012_07/impact_assessment_part1_en.pdf. Accessed 23 July 2020.

  52. Higgins JPT, Altman DG, Sterne JAC. Assessing risk of bias in included studies. In: Higgins JPT, Churchill R, Chandler J, Cumpston MS, editors. Cochrane handbook for systematic reviews of interventions. Version 5.2.0. Cochrane; 2017. p. 8:1–8:73. https://training.cochrane.org/cochrane-handbook-systematic-reviews-interventions.

  53. The World Medical Association. WMA Declaration of Helsinki – Ethical principles for medical research involving human subjects. Adopted by the 18th WMA General Assembly, Helsinki, Finland, 1964.

  54. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. Harmonised tripartite guideline. Guideline for good clinical practice. E6(R1). 1996.

  55. European network of research ethics committees, National Information: Greece. Short description of RECs system. http://www.eurecnet.org/information/greece.html. Accessed 24 Jul 2020.

  56. Hasford J. The impact of the EU regulation 536/2014 on the tasks and functioning of ethics committees in Germany. Bundesgesundheitsbl. 2017;60:830–5.

  57. Doppelfeld E, Hasford J. Medizinische Ethikkommissionen in der Bundesrepublik Deutschland: Entstehung und Einbindung in die medizinische Forschung. Bundesgesundheitsbl. 2019;62:682-9.

  58. Universal declaration on bioethics and human rights of 19 October 2005.

  59. Clarke M. Partially systematic thoughts on the history of systematic reviews. Syst Rev. 2018;7:176.

  60. Chalmers I. Adrian Grant’s pioneering use of evidence synthesis in perinatal medicine, 1980–1992. Reprod Health. 2018;15(1):79.

  61. Bath PM, Gray LJ. Systematic reviews as a tool for planning and interpreting trials. Int J Stroke. 2009;4(1):23–7.

  62. Sutton AJ, Cooper NJ, Jones DR, Lamber PC, Thompson JR, Abrams KR. Evidence-based sample size calculations for future trials based on results of current meta-analyses. Stat Med. 2007;26:2479–500.

  63. Ker K, Roberts I. Exploring redundant research into the effect of tranexamic acid on surgical bleeding: further analysis of a systematic review of randomised controlled trials. BMJ Open. 2015;5(8):e009460.

  64. Augoustides JG, Fleisher LA. Comment on: Fergusson D, Glass KC, Hutton B, Shapiro S. Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding? Clin Trials. 2005;2:231–2.

  65. Health Research Authority. Guidance: specific questions that need answering when considering the design of clinical trials. http://www.hnehealth.nsw.gov.au/working-together/Documents/HRA%20Guide.pdf. Accessed 23 Jul 2020.

  66. Bollen K, Cacioppo JT, Kaplan RM, Krosnick JA, Olds JL. Social, behavioral, and economic sciences perspectives on robust and reliable science. Report of the Subcommittee on Replicability in Science, Advisory Committee to the National Science Foundation Directorate for Social, Behavioral, and Economic Sciences. https://www.nsf.gov/sbe/AC_Materials/SBE_Robust_and_Reliable_Research_Report.pdf. Accessed 23 Jul 2020.

  67. European Medicines Agency. European Medicines Agency policy on publication of clinical data for medicinal products for human use. EMA/144064/2019. 21 March 2019. https://www.ema.europa.eu/en/documents/other/european-medicines-agency-policy-publication-clinical-data-medicinal-products-human-use_en.pdf.

  68. International Committee of Medical Journal Editors. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. December 2019. http://www.icmje.org/icmje-recommendations.pdf.

  69. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(3):485–514.

  70. Katsura M, Kuriyama A, Tada M, Yamamoto K, Furukawa TA. Redundant systematic reviews on the same topic in surgery: a study protocol for a meta-epidemiological investigation. BMJ Open. 2017;7(8):e017411.

  71. Créquit P, Trinquart L, Yavchitz A, Ravaud P. Wasted research when systematic reviews fail to provide a complete and up-to-date evidence synthesis: the example of lung cancer. BMC Med. 2016;14:8.

  72. Roberts I, Ker K. How systematic reviews cause research waste. Lancet. 2015;386(10003):1536.

  73. Helfer B, Prosser A, Samara MT, Geddes JR, Cipriani A, Davis JM. Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination. BMC Med. 2015;13:82.

  74. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane handbook for systematic reviews of interventions version 6.0 (updated 2019). Cochrane, 2019. www.training.cochrane.org/handbook. Accessed 23 Jul 2020.

  75. CONSORT. Transparent reporting of trials. http://www.consort-statement.org/. Accessed 23 July 2020.

  76. PRISMA. Transparent reporting of systematic reviews and meta-analyses. http://prisma-statement.org/. Accessed 23 July 2020.

  77. Sun X, Zhou X, Yu Y, Liu H. Exploring reporting quality of systematic reviews and meta-analyses on nursing interventions in patients with Alzheimer’s disease before and after PRISMA introduction. BMC Med Res Methodol. 2018;18:154.

  78. Vandvik PO, Brignardello-Petersen R, Guyatt GH. Living cumulative network meta-analysis to reduce waste in research: a paradigmatic shift for systematic reviews? BMC Med. 2016;14:59.

  79. Salanti G, Nikolakopoulou A, Sutton AJ, Reichenbach S, Trelle S, Naci H, Egger M. Planning a future randomized clinical trial based on a network of relevant past trials. Trials. 2018;19:365.

  80. Nikolakopoulou A, Mavridis D, Salanti G. Using conditional power of network meta-analysis (NMA) to inform the design of future clinical trials. Biom J. 2014;56(6):973–90.

  81. Treweek S, Altman DG, Bower P, et al. Making randomised trials more efficient: report of the first meeting to discuss the trial forge platform. Trials. 2015;16:261.

  82. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9).

  83. DiMasi JA, Paquette C. The economics of follow-on drug research and development: trends in entry rates and the timing of development. Pharmacoeconomics. 2004;22(2 Suppl 2):1–14.

  84. The European Commission. Pharmaceutical sector inquiry. Final report. 2009. https://ec.europa.eu/competition/sectors/pharmaceuticals/inquiry/staff_working_paper_part1.pdf. Accessed 23 Jul 2020.

  85. Petrova E. Innovation in the pharmaceutical industry: the process of drug discovery and development. In: Ding M, Eliashberg J, Stremersch S, editors. Innovation and Marketing in the Pharmaceutical Industry. New York: Springer; 2014. p. 19–81.

  86. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342:1887–92.

  87. Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ. 1996;312(7040):1215–8.

  88. Hatswell AJ, Baio G, Berlin JA, Irs A, Freemantle N. Regulatory approval of pharmaceuticals without a randomised controlled study: analysis of EMA and FDA approvals 1999–2014. BMJ Open. 2016;6:e011666.

  89. Hester LL, Poole C, Suarez EA, Der JS, Anderson OG, Almon KG. Publication of comparative effectiveness research has not increased in high-impact medical journals, 2004–2013. J Clin Epidemiol. 2017;84:185–7.

  90. Ioannidis JPA. Why most clinical research is not useful. PLoS Med. 2016;13(6):e1002049. https://doi.org/10.1371/journal.pmed.1002049.

  91. Freedman B. Equipoise and the ethics of clinical research. N Engl J Med. 1987;317:141–5.

  92. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. Harmonised Tripartite Guideline. Statistical Principles for Clinical Trials. E9. 1998.

  93. Pierce E. Are research ethics committees behaving unethically? Committees are now being expected to do everything. BMJ. 1997;314(7081):676.

  94. The European Commission. Commission staff working document. Impact assessment report on the revision of the “Clinical Trials Directive” 2001/20/EC accompanying the document Proposal for a Regulation of the European Parliament and of the Council on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC. SWD (2012) 200 final. 17 Jul 2012. Vol II. https://ec.europa.eu/health/sites/health/files/files/clinicaltrials/2012_07/impact_assessment_part2_en.pdf. Accessed 23 Jul 2020.

Acknowledgements

The opinions expressed in this manuscript do not necessarily reflect those of the Permanent Working Party of Research Ethics Committees of Germany or the Max Planck Institute for Innovation and Competition.

Funding

Both authors declare that no funding was received in relation to the manuscript. Open Access funding enabled and organized by Projekt DEAL.

Author information

Contributions

Both authors contributed substantially to the conception, drafting and revision of the manuscript. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Daria Kim.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Both authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Kim, D., Hasford, J. Redundant trials can be prevented, if the EU clinical trial regulation is applied duly. BMC Med Ethics 21, 107 (2020). https://doi.org/10.1186/s12910-020-00536-9

Keywords