Background
Systematic reviews of randomised controlled trials (RCTs) are considered the gold standard for evaluating the effectiveness and harms of interventions [1]. However, the results of many completed RCTs are not published, which leads to reduced power and potential publication bias in reviews [2‐4]. Moreover, peer-reviewed publications are not always an accurate reflection of how trials were planned, conducted and analysed. A lack of transparency or missing information on harms is common [3, 5].
One potential source of unpublished data is clinical study reports (CSRs): extensive reports prepared by pharmaceutical companies and submitted to regulatory authorities as part of an application for marketing authorisation. The structure of CSRs is outlined in a guideline from the International Conference on Harmonisation [6]. Access to CSRs has historically been difficult [7], but in 2015 the European Medicines Agency (EMA) launched an initiative (Policy 0070) to increase the transparency of information on medicinal drugs by providing access to CSRs submitted to the agency. However, the EMA has not published any CSRs since December 4, 2018, when the initiative was paused indefinitely during the EMA's move to Amsterdam [8]. Although several systematic reviews have included CSRs [9‐12] and a questionnaire study found that respondents consider CSRs valuable for systematic reviews [13], a 2014 study found that most systematic reviews continue to rely on publications as the primary source of data [14].
Several studies have compared reporting in publications, trial registries and CSRs; for example, one study found that CSRs had higher reporting quality than registry reports and publications [15], a finding confirmed in several other studies [16‐20]. However, no study has systematically compared the reporting of harms in trial registries and publications with a large sample of recent CSRs from oncological trials.
Targeted therapy and immunotherapy have revolutionised the care of most patients with cancer. Several of these drugs have recently been approved by the US Food and Drug Administration and the EMA, and evaluating their harms is essential. Thus, we aimed to compare the delay in access to results of oncological RCTs, the completeness of reporting of harm data and discrepancies in the harm data reported between three sources: CSRs available on the EMA website, clinical trial registries and journal publications.
Methods
We identified all trials evaluating targeted therapy and immunotherapy for cancer available on the EMA's clinical data website and retrieved the related CSRs. Then, we systematically searched for the related records in clinical trial registries and the related publications. Finally, we compared the delay in access to results, the completeness of reporting of harm data and discrepancies in the harm data reported between the three sources.
Identification of trials
In November 2019, we used the EMA's clinical data website (https://clinicaldata.ema.europa.eu) to identify all submissions for marketing authorisation or extension of indication under EMA Policy 0070. We updated the search in June 2020 and identified no new submissions. For all submissions, we extracted the product name, active substance, marketing authorisation holder and Anatomical Therapeutic Chemical (ATC) code. We selected the ATC codes for monoclonal antibodies (L01XC) and protein kinase inhibitors (L01XE), corresponding to targeted therapy and immunotherapy.
Once we had identified all eligible active substances, we downloaded all documents from the EMA website (i.e., CSRs and related documents) and used these to create a list of all trials submitted to the EMA. We included phase II, II/III or III RCTs that were part of a submission for a targeted therapy or immunotherapy. We excluded trials that compared only different dosages of the same treatment.
Two reviewers (ASP-M and PC) independently identified trials from the documents for one-quarter of the eligible active substances. Because there were no discrepancies, the remaining trials were identified by one reviewer (ASP-M).
One reviewer (ASP-M) systematically searched ClinicalTrials.gov and the European Clinical Trials Register (EUCTR) by using (1) the trial registry identifier or ID number if mentioned in the CSR or (2) the name of the experimental drug (or its international non-proprietary name). If we were still unable to identify the corresponding trial, we used other keywords (e.g. treatment comparator and indication). The records identified were systematically checked by a second reviewer (PC). Then we checked whether results were posted on the trial registries identified. If the study was registered in both registries, we extracted data from both.
Identification of results publications for identified RCTs
We first searched for citations listed in trial registries. For ClinicalTrials.gov, the only registry to give access to citations, we used the citations listed under "publication of results" and "publications automatically indexed to this study by ClinicalTrials.gov identifier (NCT Number)". We included all publications reporting results for the identified trial. We excluded reviews and publications presenting pooled analyses of several trials without data from the individual trial. If no publications were indexed in the registry record, we searched MEDLINE and EMBASE by using the name of the experimental drug, treatment comparator, indication and name of the principal investigator.
For each trial, we extracted information from the CSR available on the EMA website, the clinical trial registry records (both ClinicalTrials.gov and EUCTR) and all related publications. The extracted information was entered in a data extraction spreadsheet. Two reviewers (ASP-M and PC) independently extracted data for 10% of trials. Because there were only minor disagreements, one reviewer (ASP-M) extracted the data for the remaining trials. All extractions were then checked by a second reviewer (PC), and all discrepancies were resolved by discussion.
We extracted the following information for each trial:
1) General characteristics: name of trial, name of studied drug, clinical development phase, condition, number of centres, number of arms, number of participants randomised, whether the trial was a non-inferiority trial, the primary outcome, funding and whether the trial was blinded.
2) Delay in access to trial results: The CSRs included in this project are released under EMA Policy 0070 [21], which dictates that clinical data submitted to the EMA as part of a marketing authorisation application or a post-authorisation procedure shall be released once the concerned procedure (hereafter, EMA procedure) has been finalised. We recorded the date of finalisation of the procedure for all included submissions by using the European Commission's register [22] and determined the delay between the finalisation of the procedure and publication of the CSRs on the EMA website.
To determine the time between completion of the study and release of results in each source, we recorded the primary trial completion date (i.e. the date of the last participant’s final follow-up visit for measurement of the primary outcome) from ClinicalTrials.gov. If this was not available, we checked the other sources (CSRs, publications, and EUCTR) for a primary completion date. We also recorded for each source the date when the results were released and available. For trials with multiple publications, we used the earliest publication date. We then calculated the delay between primary trial completion date and availability of results for each source.
3) Completeness of reporting of harm data and discrepancies in harm data: From all three data sources, we extracted the following information for each trial: number of patients randomised; whether a definition of the safety population was provided; number of patients in the safety population; threshold for reporting adverse events (e.g. 10%, 5% or none); number of patients experiencing at least one adverse event; total number of adverse events; number of patients experiencing at least one serious adverse event; total number of serious adverse events; number of patients experiencing at least one adverse event judged to be grade 3–5 according to the Common Terminology Criteria for Adverse Events (CTCAE); total number of adverse events judged to be grade 3–5 according to the CTCAE; number of patients discontinuing the trial due to adverse events; number of deaths due to adverse events; and whether a description of the process for determining whether a death was due to adverse events, including whether the person(s) making the judgement were blinded, was provided. For all variables, we recorded the numbers per arm. Some sources reported CTCAE grade 3–4 events rather than grade 3–5 events. If the number of grade 5 events was reported separately, we added the numbers. If the number of grade 5 events was not reported, we still recorded a "yes" for this item, extracted the number of grade 3–4 events and noted this.
Analysis
We compared reporting of the variables defined above in CSRs with that in clinical trial registries and publications, separately. We performed Kaplan-Meier analyses of the delay from the primary trial completion date to the publication of the CSR, the first posting of results in trial registries and the first publication of results in a medical journal. If a trial's results had not been released in a given source, we calculated the delay between the primary completion date and June 29, 2020, and considered the trial right-censored. For numerical variables reported in at least two of the data sources, we examined whether the numbers reported were the same. For this analysis, we pooled results from ClinicalTrials.gov and EUCTR; if results were available from both registries, we used the data from ClinicalTrials.gov for the analysis of discrepancies.
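The delay computation and censoring logic can be illustrated with a minimal Kaplan-Meier sketch in Python. The trial dates below are hypothetical, and the actual analysis was run with standard statistical software on the extracted data; this sketch only shows how a delay is censored at the cutoff date and how the survival curve and median are derived.

```python
from datetime import date

CUTOFF = date(2020, 6, 29)  # censoring date used in the study

def delay_years(completion, released):
    """Delay from primary completion to result release, in years.
    Results never released are right-censored at the cutoff date."""
    end = released if released is not None else CUTOFF
    return (end - completion).days / 365.25, released is not None

def kaplan_meier(times_events):
    """Kaplan-Meier survival curve for right-censored delays.

    `times_events` is a list of (delay, observed) pairs; returns the
    step function as a list of (time, survival_probability) points."""
    data = sorted(times_events)
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        events = censored = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            if data[i][1]:
                events += 1
            else:
                censored += 1
            i += 1
        if events:
            surv *= (n_at_risk - events) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= events + censored
    return curve

# Hypothetical trials: (primary completion date, CSR release date or None)
trials = [
    (date(2016, 3, 1), date(2018, 9, 1)),
    (date(2017, 1, 15), date(2019, 6, 30)),
    (date(2018, 5, 1), None),          # no CSR released: right-censored
    (date(2015, 11, 1), date(2017, 2, 1)),
]
curve = kaplan_meier([delay_years(c, r) for c, r in trials])
# Median delay: first time point at which survival drops to <= 0.5
median = next(t for t, s in curve if s <= 0.5)
```

Censored trials reduce the number at risk without producing a step in the curve, which is why an unpublished trial still informs the estimate up to the cutoff date.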
Patient involvement
No patients were involved in the planning or conduct of this study.
Discussion
Our study shows that data on harms in RCTs evaluating targeted therapy and immunotherapy for cancer are reported more frequently and in more detail in CSRs than in registries and publications. However, reporting is not perfect. CSRs were missing for five (12%) trials. Three of these trials were ongoing at the time of submission of documents to the EMA, which might explain the missing CSRs; the two remaining trials were completed before submission, and it is unclear why their CSRs were missing. Furthermore, important data were incompletely reported; for example, the total number of serious adverse events and the total number of all adverse events were available in only 9/37 (24%) and 12/37 (32%) CSRs, respectively. Although data should be available on the date the EMA procedure is completed, we found a median of 1.21 years between finalisation of the procedure and publication of the CSRs on the website. Additionally, the delay from primary completion of a trial until results were available was longer for CSRs than for the other sources. We also found discrepancies in harms data between CSRs and the other sources. One possible explanation is different selection criteria for which events to include in which reports; for example, ClinicalTrials.gov uses a 5% threshold for non-serious adverse events, and similar thresholds are commonly used in journal publications. Routinely reporting all events, without thresholds, would improve reporting and could resolve some of these discrepancies.
Our results are consistent with other findings. In 2013, Wieseler et al. examined a sample of 86 trials with both a CSR and a publicly available source of data and found that serious adverse events, adverse events and withdrawals due to adverse events were more frequently reported in CSRs than in the other sources [16]. In 2014, Maund et al. found that for nine antidepressant trials, CSRs were a more reliable source of information on harms than journal articles [17]. In 2016, two reports described the reporting of harms of orlistat in CSRs and journal publications: both concluded that reporting of harms was more extensive in CSRs [18, 19]. In 2019, a study compared six CSRs for gabapentin and two for quetiapine with publications and found that the CSRs reported all adverse events, whereas no publication did [20].
Our finding of non-publication of RCTs and poor reporting of harms in publications is also in accordance with previous findings [113]. The CONSORT statement outlines items that should be reported in journal publications describing RCTs and is endorsed by 585 journals [114]. However, the CONSORT statement has only one item addressing harms. An extension for harms exists; among the items it outlines are the number of participants who discontinued due to harms, the frequency of all adverse events with separate information about severity, and the number of both affected participants and events [115]. Unfortunately, to our knowledge, adherence to the CONSORT harms extension is not mandated by any journal, and of the Lancet, New England Journal of Medicine, BMJ and JAMA, only the Lancet makes specific reference to the extension in its guidance to authors [116]. Had the extension been endorsed by the journals included in this study, and thus been followed, publications would likely have fared much better.
The substantial delay between completion of trials and availability of CSRs is an important barrier to access to all data. Several teams have highlighted the need to access trial results through posting on trial registries. For example, a report from TranspariMED states that in a 2013 cohort of cancer drug trials, two-thirds of trials had not posted results to ClinicalTrials.gov 3 years after completion [117]. Interventions have been proposed to improve access to results [118], and similar strategies to increase access and reduce delays for CSRs should be developed.
Our study is the first to compare reporting of harms in CSRs released under EMA policy 0070 with publications and trial registries for oncological trials. The automatic release of the CSRs might have led to better reporting of harms in other sources of data, but this does not seem to be the case. Additionally, we systematically examined predefined variables in a relatively large sample of trials.
Our study has some limitations. First, we focused on oncology trials, and our findings might not be applicable to other fields of medicine. However, our results, together with results from previous studies, suggest that the reporting of harms is better in CSRs than in trial registries and journal publications across specialities. Second, we examined only two clinical trial registries, and more information might be available from other registries; however, ClinicalTrials.gov and EUCTR are two of the most widely used registries, and information available elsewhere is unlikely to substantially alter our conclusions.
Our study has important implications for both research and practice. Our results suggest that any systematic review or other assessment of harms associated with oncological treatments would have to rely on CSRs to reach the soundest conclusions. An assessment relying only on publications and trial registries can include only a subset of the available data. This is problematic for several reasons, namely reduced power to detect differences between groups and the possibility that the data reported in registries and publications differ systematically from the data not reported. Additionally, we have shown marked discrepancies between data reported in CSRs and other sources, especially for withdrawals due to adverse events; we therefore consider results based on CSRs more reliable. However, while we believe that CSRs are currently the most reliable source of data on harms, using them might not yet be feasible. First, even though CSRs contained more data than the other sources, important information was still missing from a substantial number of CSRs. Second, we identified a substantial delay between the completion of trials and the availability of a CSR. To address missing data in CSRs, we suggest that regulatory authorities impose stricter requirements on the quality of submitted CSRs. The issue of delayed access is complicated by the fact that the EMA does not release CSRs until the procedure for which they were submitted is completed; however, we also identified a substantial delay from completion of the procedure to availability of the CSR. Reducing this delay to the absolute minimum would substantially reduce the overall delay. Currently, the EMA is not releasing any CSRs because of the agency's move to Amsterdam, and in a reply to an open letter from IQWiG and Cochrane, the agency would not commit to resuming publication of CSRs [119].
In addition, improving reporting of harms in journal publications would be valuable, as this is a very accessible source of information and as publications are often available earlier than CSRs. By actively enforcing the CONSORT statement and the extension for harms, journals could help improve the reporting of harms. Also, the CONSORT extension for harms, which was released in 2004, could be updated to better reflect new opportunities for data-sharing and new knowledge on reporting of harms.
Moreover, current estimates of the harms of oncological treatments based on published data might not be accurate and might be unable to inform clinical practice. Because oncological treatments are generally toxic, the harms profile is an important piece of information for assessing the benefit/harm balance, and truly informed consent is only possible if the estimate of harms is accurate.
We suggest that future studies comparing reporting of harms in different sources focus on more detailed aspects of harms reporting, e.g. whether information on adverse events at the System Organ Class and Preferred Term levels is available.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.