
Table 1 An overview of studies measuring the scale of redundancy in RCTs

From: Redundant trials can be prevented, if the EU clinical trial regulation is applied duly


Lau et al. (1992) [27]

Objective: To demonstrate that ‘searching and monitoring the clinical literature and performing cumulative meta-analyses can […] supply practitioners and policy makers with up-to-date information on emerging and established [medical] advances’ ([27], p. 248).

Method: Cumulative meta-analyses of clinical trials that evaluated 15 treatments and preventive measures for acute myocardial infarction (a minimal computational sketch of cumulative meta-analysis follows the table).

Sample: Trials conducted between 1959 and 1988 that investigated intravenous streptokinase as thrombolytic therapy for acute myocardial infarction.

Results: A consistent, statistically significant reduction in total mortality was achieved in 1973 upon the completion of eight trials involving 2432 patients; the 25 subsequent trials, which enrolled 34,542 patients, had little or no effect on the odds ratio establishing efficacy.

Conclusions: Clinical trials are ‘part of a continuum, and those that have gone before must be considered when new ones are planned’ ([27], p. 253).

Fergusson et al. (2005) [21]

Objective: To evaluate the impact of systematic reviews of RCTs on the design of subsequent trials.

Method: Cumulative meta-analyses of all RCTs of aprotinin using placebo controls or no active control treatment. Data collected for each trial included its primary outcomes and objectives, the presence of a systematic review as part of the background and/or rationale for the study, and the number of previously published RCTs cited.

Sample: All RCTs of aprotinin conducted between 1987 and 2002 reporting an endpoint of perioperative transfusion.

Results: 64 RCTs meeting the selection criteria were identified, with trial sizes ranging between 20 and 1784 participants. A cumulative meta-analysis showed that aprotinin significantly decreased the need for perioperative transfusion, stabilizing at an odds ratio of 0.25 by the 12th study, published in 1992. Thereafter, the upper limit of the confidence interval did not exceed 0.65, and results were similar in all subgroups. Citation of previous RCTs was low: on average, only 20% of relevant prior trials were cited. Only 7 of 44 subsequent reports referenced the largest trial, which was 28 times larger than the median trial size.

Conclusions: Investigators evaluating aprotinin ‘were not adequately citing previous research, resulting in a large number of RCTs being conducted to address efficacy questions that prior trials had already definitively answered’ ([21], p. 218).

Cooper, Jones, Sutton (2005) [28]

Objective: To assess the extent to which Cochrane systematic reviews are taken into account in the design of new trials.

Method: A survey of the authors of studies newly included in updated Cochrane reviews. Authors were asked whether they had used the 1996 Cochrane review or other reviews in designing their trials.

Sample: All studies included in the 2002 and 2003 updates of Cochrane reviews first published in 1996 (33 Cochrane reviews overall).

Results: Of the 32 authors of eligible studies newly included in the updated Cochrane reviews, 24 responded. Eleven respondents were aware of the relevant Cochrane review at the time of designing their study. In eight cases the design of the new study had been influenced by a review; in two of these it was the relevant Cochrane review.

Conclusions: Cochrane and other systematic reviews are used only to a rather limited extent in the design of new studies ([28], p. 260).

Goudie et al. (2010) [29]

Objective: To define the extent to which previous trials were considered in the design of new trials (e.g. in the calculation of the sample size).

Method: Assessment of a sample of RCTs to establish whether the authors had considered previous trials when designing their own.

Sample: 27 RCTs published in leading medical journals in 2007.

Results: Only a small fraction of the trials in the analysed sample referenced the relevant meta-analyses and related their results to previous research.

Conclusions: Previous evidence from trials ‘is not used (or not reported to be used) as extensively as it could be in justifying, designing, and reporting RCTs’ ([29], p. 984).

Robinson and Goodman (2011) [24]

Objective: To evaluate the extent to which reports of RCTs cite prior trials addressing the same interventions.

Method: Meta-analyses published in 2004 that combined four or more trials were identified; within each meta-analysis, the extent to which each trial report cited the trials that preceded it by more than one year was assessed.

Sample: 227 meta-analyses comprising 1523 trials across various health care disciplines published from 1963 to 2004.

Results: Fewer than 25% of the eligible prior RCTs were cited. The percentage of ‘ignored RCTs [was] increasing as the number of those RCTs increased, [while] the proportion of trials citing no prior evidence stayed constant as the evidence accumulated’ ([24], p. 54). Even reports that did cite individual trials in the introduction and discussion sections did not integrate their findings with those of the cited trials. In several cases, the investigators ‘claimed to be the first trial even when many trials preceded them’ (ibid).

Conclusions: Further research is needed to explore the explanations for and consequences of the under-citation of earlier research. ‘Potential implications [of under-citation] include ethically unjustifiable trials, wasted resources, incorrect conclusions, and unnecessary risks for trial participants’ ([24], p. 50).

Ker et al. (2012) [30]

Objective: To assess the effect of tranexamic acid on blood transfusion, thromboembolic events, and mortality in surgical patients.

Method: Systematic review and meta-analysis.

Sample: RCTs comparing tranexamic acid with no tranexamic acid or placebo in surgical patients; 129 trials, totalling 10,488 patients, carried out between 1972 and 2011, were included.

Results: ‘A statistically significant effect of tranexamic acid on blood transfusion was first observed after publication of the third trial in 1993. Although subsequent trials have increased the precision of the point estimate, no substantive change has occurred in the direction or magnitude of the treatment effect.’ ([30], p. 3)

Conclusions: ‘Reliable evidence that tranexamic acid reduces blood transfusion in surgical patients has been available for many years. […] those planning further placebo controlled trials should … focus their efforts on resolving the uncertainties about the effect of tranexamic acid on thromboembolic events and mortality.’ ([30], p. 3)

Jones et al. (2013) [31]

Objective: To examine how systematic reviews of earlier trials had been used to inform the design of new RCTs.

Method: Review of RCTs with regard to the following parameters: justification of the treatment comparison; choice of frequency or dose; selection (or definition) of outcomes; recruitment and consent rates; sample size (margin of equivalence or non-inferiority, size of difference, control group event rate, measure of variability, and adjustment for loss to follow-up); length of follow-up; withdrawals; missing data; and adverse events.

Sample: Documentation related to RCTs funded under the UK National Institute for Health Research Health Technology Assessment programme in 2006, 2007 and 2008, comprising applications for funding and project descriptions of 48 RCTs.

Results: About half of the examined applications for funding actually used the cited review to inform the trial design, in particular the selection and definition of outcomes, the calculation of the sample size, and the duration of follow-up.

Conclusions: Guidelines were proposed for applicants and funders on how systematic reviews can be used to optimise the design and planning of new RCTs.

Clarke, Brice, Chalmers (2014) [17]

Objective: To provide ‘the most comprehensive collection of cumulative meta-analysis of studies of healthcare interventions’, and to explore that cumulative evidence in the context of unnecessary duplication of research efforts ([17], p. 2).

Method: A systematic review of the findings of cumulative meta-analyses of all studies examining the effects of clinical interventions published between 1992 and 2012 and accessible through PubMed, MEDLINE, EMBASE, the Cochrane Methodology Register and Science Citation Index.

Sample: 50 eligible reports including over 1500 cumulative meta-analyses.

Results: Four cumulative meta-analyses showed ‘how replications have challenged initially favourable results where the early trials were favourable but not statistically significant’ ([17], p. 3). Two showed ‘how replications have sometimes challenged initially unfavourable results’ (ibid). 22 demonstrated that ‘a systematic review of existing research would have reduced uncertainty about an intervention’ (ibid). Some trials were ‘much too small’ to resolve the uncertainties exposed by the cumulative meta-analyses (ibid).

Conclusions: ‘… had researchers assessed systematically what was already known, some beneficial and harmful effects of treatments could have been identified earlier and might have prevented the conduct of the new trials. This would have led to the earlier uptake of effective health and social care interventions in practice, less exposure of trial participants to less effective treatments, and reduced waste resulting from unjustified research.’ ([17], p. 4)

Habre et al. (2014) [23]

Objective: To examine the effect on the design of subsequent trials of a 2000 systematic review of interventions preventing pain from propofol injection (the Picard review), which provided a clear research agenda; to examine whether the design of trials that cited the 2000 review differed from those that did not cite it; and to establish whether the number of new trials published each year had decreased.

Method: A comparison of the characteristics and design of trials published before and after the 2000 Picard review, which questioned the need for further trials to identify yet another analgesic intervention to prevent pain from propofol injection. Parameters under comparison included blinding methods, the inclusion of a paediatric population, and the use of the most efficacious known intervention as a comparator.

Sample: All RCTs investigating interventions to prevent pain from propofol injection in humans conducted and published after the Picard review.

Results: 136 new trials were conducted after the systematic review had questioned the need for new studies. Only 36.0% of the new trials could be considered clinically relevant, in that they used the most efficacious intervention as a comparator or included a paediatric population, as recommended by the review.

Conclusions: The impact of the Picard systematic review on the design of subsequent research was low. The number of trials published per year had not decreased, and the most efficacious intervention was used only marginally.

Clayton et al. (2015) [32]

Objective: To summarise the current use of evidence synthesis in trial design and analysis, to capture the opinions of trialists and methodologists on such use, and to understand potential barriers.

Method: A survey collecting views and experiences on the use of evidence synthesis in trial design and analysis.

Sample: 638 participants of the International Clinical Trials Methodology Conference.

Results: The response rate was only 17%. Respondents acknowledged that they had not been ‘using evidence syntheses as often as they felt they should’ ([32], p. 1). 42 of 84 relevant respondents confirmed using meta-analyses to inform whether a trial is needed, while 62 of 84 stated that this was desirable. Notably, only 6% of relevant respondents had applied earlier relevant evidence to inform sample size calculations, while 22% supported doing so. The main perceived barrier to greater use of evidence synthesis in trial design or analysis was ‘time constraints, followed by a belief that the new trial was the first in the area’ ([32], p. 6).

Conclusions: Further research and training on how to synthesise and incorporate results from earlier trials can help ‘ensure the best use of relevant external evidence in the design, conduct and analysis of clinical trials’ ([32], p. 10).

Tierney et al. (2015) [33]

Objective: To identify the impact of individual patient data (IPD) meta-analyses on subsequent research, in terms of the selection of comparators and participants, sample size calculations, and the analysis and interpretation of subsequent trials, as well as the conduct and analysis of ongoing trials.

Method: Potential examples of the impact of IPD meta-analyses on trials were identified at an international workshop attended by individuals with experience in the conduct of IPD meta-analyses and knowledge of trials in their respective clinical areas. Relevant trial protocols, publications, and Web sites were examined to verify the impacts of the IPD meta-analyses.

Sample: 52 examples of IPD meta-analyses thought to have had a direct impact on the design or conduct of subsequent trials.

Results: After screening the relevant trial protocols and publications, 28 instances were identified in which IPD meta-analyses had clearly impacted on trials. They had influenced the selection of comparators and participants, sample size calculations, the analysis and interpretation of subsequent trials, and the conduct and analysis of ongoing trials, sometimes in ways that would not be possible with systematic reviews of aggregate data. Additional potential ways in which IPD meta-analyses could influence trials were identified in the course of the analysis.

Conclusions: IPD meta-analysis ‘could be better used to inform the design, conduct, analysis, and interpretation of trials’ ([33], p. 1326).

Storz-Pfennig (2016) [18]

Objective: To identify and estimate the extent to which potentially unnecessary clinical trials in major clinical areas might have been conducted.

Method: A cumulative meta-analysis and trial sequential analysis of a sample of Cochrane Collaboration systematic reviews were conducted to determine at what point the evidence was sufficient to reach a reliable conclusion. Trials published thereafter were considered potentially unnecessary and therefore wasteful. A sensitivity analysis was conducted to establish whether the findings could be explained by a delayed perception of published findings when new trials were planned.

Sample: 13 comparisons in major medical fields, including cardiovascular disease, depression, dementia, leukemia and lung cancer.

Results: In eight of the 13 comparisons, the meta-analysis detected potentially unnecessary research, with between 12% and 89% of all trial participants enrolled in trials that might not have been needed. In three of these cases, with high proportions (69–89%) of potentially unnecessary research, the finding was unchanged upon sensitivity analysis.

Conclusions: ‘The reasonableness of claims to relevance of additional trials needs to be much more carefully evaluated in the future. Cumulative, information size based analysis might be included in systematic reviews. Research policies to prevent unnecessary research from being done need to be developed.’ ([18], p. 62)

De Meulemeester et al. (2018) [34]

Objective: To test the hypothesis that the majority of a sample of recently published RCTs would not explicitly incorporate the scientific criterion of addressing a persisting uncertainty established through a systematic review.

Method: Cross-sectional analysis of all RCTs published in the New England Journal of Medicine and the Journal of the American Medical Association in 2015. The identified articles and protocols were reviewed, inter alia, for: a clearly stated central hypothesis; indications of evidentiary uncertainty; and a meta-analysis or systematic review supporting the hypothesis or study question.

Sample: 208 RCT articles and 199 protocols met the inclusion criteria.

Results: The majority of RCTs (56%) did not meet the criteria of having a clear hypothesis and demonstrating, through a systematic review, that uncertainty around that hypothesis exists.

Conclusions: RCTs that do not meet these criteria can be scientifically, and therefore ethically, unjustified. The authors recommend replacing the criteria of “equipoise,” “clinical equipoise,” and “lack of consensus” with the requirement that RCTs have a clearly stated, meaningful hypothesis around which uncertainty has been established through a systematic review of the literature.

Blanco-Silvente et al. (2019) [35]

Objective: To examine the strength of the available evidence on the efficacy, safety and acceptability of cholinesterase inhibitors (ChEIs) and memantine for Alzheimer’s disease (AD), and to determine the number of redundant trials conducted after the authorisation of ChEIs and memantine as the current pharmacological treatments for AD.

Method: A cumulative meta-analysis with a trial sequential analysis, in which the primary outcomes were cognitive function assessed with the ADAS-cog or SIB scales, discontinuation due to adverse events, and discontinuation for any reason. The redundancy of post-authorisation clinical trials was studied by determining the novel aspects of each study with respect to patient, intervention, comparator and trial outcome characteristics. Two criteria of trial futility, lenient and strict, were used.

Sample: 63 RCTs (16,576 patients) comprising placebo-controlled, double-blind, parallel-design trials with a minimum duration of 12 weeks that had investigated the effects of donepezil, galantamine, rivastigmine or memantine, in monotherapy or in combination with a ChEI, at the doses approved by the Food and Drug Administration or the European Medicines Agency, in patients with AD.

Results: It was conclusive that neither ChEIs nor memantine achieved clinically significant improvement in cognitive function. In relation to safety, there was sufficient evidence to conclude that donepezil caused a clinically relevant increase in dropouts due to adverse events, whereas the evidence was inconclusive for the remaining interventions. Regarding acceptability, it was conclusive that no ChEI improved treatment discontinuation, while this remained uncertain for memantine. The proportion of redundant trials was 5.6% under the lenient criteria and 42.6% under the strict criteria.

Conclusions: The evidence showed conclusively that neither ChEIs nor memantine achieve clinically significant symptomatic improvement in AD, and that the acceptability of ChEIs is unsatisfactory. Although the evidence on the safety of pharmacological interventions for AD and on the acceptability of memantine is inconclusive, no further RCTs are needed, as their efficacy is not clinically relevant. Redundant trials were identified, but their number depends on the futility criteria used.

Walters et al. (2020) [36]

Objective: To determine to what extent systematic reviews were cited as justification for conducting phase III trials published in high-impact journals.

Method: Analysis of all phase III RCTs published between 1 January 2016 and 31 August 2018 in the New England Journal of Medicine, the Lancet, and JAMA, with particular regard to references to systematic reviews (SRs) as justification for conducting the RCT in the introduction, methods, and discussion/conclusion sections. The strength of justification was classified as follows: (1) the authors explicitly stated that an SR had established the need for the trial; (2) the authors discussed an SR in a way from which it could be inferred that the SR provided the necessary justification; and (3) the authors made no mention of using an SR as the basis for conducting the trial.

Sample: 665 RCTs were retrieved, of which 637 were included; these cited a total of 728 systematic reviews.

Results: Fewer than 7% of the analysed RCTs published in the three high-impact general medicine journals explicitly cited a systematic review as the basis for undertaking the trial.

Conclusions: Trialists should be required to present relevant systematic reviews to ethics committees to demonstrate that the existing evidence on the research question is insufficient. The elimination of research waste is both a scientific and an ethical responsibility.
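
Several of the studies above (Lau et al., Fergusson et al., Ker et al., Clarke et al., Storz-Pfennig) rest on cumulative meta-analysis: the pooled effect is re-estimated each time a new trial is added, and redundancy is suspected once the confidence interval has stabilised and excluded no effect. As a minimal sketch of that technique, the Python fragment below pools 2x2 trial results on the log odds ratio scale with a fixed-effect (inverse-variance) model. The trial data, the Woolf variance formula, and the 0.5 continuity correction are illustrative assumptions for this sketch; they are not the data or the (more elaborate) models, such as trial sequential analysis, used in the cited studies.

```python
import math

# Hypothetical (events, group size) pairs per trial, in chronological order.
# These numbers are invented for illustration; they are NOT from any cited study.
trials = [
    # (events_treatment, n_treatment, events_control, n_control)
    (12, 100, 20, 100),
    (30, 250, 45, 250),
    (18, 150, 26, 148),
    (55, 500, 70, 495),
    (9, 80, 15, 82),
]

def log_odds_ratio(a, n1, c, n2):
    """Woolf log odds ratio and its variance for one 2x2 table.

    a, c: event counts in the treatment/control groups; n1, n2: group sizes.
    A 0.5 continuity correction is applied when any cell is zero.
    """
    b, d = n1 - a, n2 - c
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return log_or, var

# Fixed-effect (inverse-variance) cumulative pooling: after each trial,
# re-estimate the pooled odds ratio from all trials accumulated so far.
sum_w = sum_wy = 0.0
for k, (a, n1, c, n2) in enumerate(trials, start=1):
    y, v = log_odds_ratio(a, n1, c, n2)
    w = 1 / v
    sum_w += w
    sum_wy += w * y
    pooled = sum_wy / sum_w
    se = math.sqrt(1 / sum_w)
    lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
    flag = "  <- 95% CI excludes 1" if hi < 1 or lo > 1 else ""
    print(f"after trial {k}: OR = {math.exp(pooled):.2f} "
          f"(95% CI {lo:.2f}-{hi:.2f}){flag}")
```

Run on data like the above, the script prints one line per added trial; the point at which the interval first excludes 1 and subsequent trials merely narrow it is the pattern that Lau et al. and Ker et al. report for streptokinase and tranexamic acid respectively.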