Background
Methods
Sample
Coding of outcome reporting
Letter preparation and submission
Data analysis
Results
Workflow
Outcome reporting quality
| Category | Measure | Annals | BMJ | JAMA | Lancet | NEJM | Total |
|---|---|---|---|---|---|---|---|
| Basic information | Number of trials included | 5 | 3 | 13 | 24 | 22 | 67 |
| | Journal listed as “endorsing CONSORT” | Yes | Yes | Yes | Yes | Yes | All |
| Protocol availability | Pre-trial protocol with pre-specified outcomes available? | 0 | 0 | 7 | 3 | 19 | 29 |
| | Percentage of pre-trial protocols available | 0.0% | 0.0% | 53.9% | 12.5% | 86.4% | 43.3% |
| Missing primary outcomes | Trials with any unreported primary outcomes | 4 | 2 | 2 | 4 | 1 | 13 |
| | Percentage of trials with any unreported primary outcomes | 80.0% | 66.7% | 15.4% | 16.7% | 4.5% | 19.4% |
| Primary outcomes | Total number of primary outcomes pre-specified | 9 | 4 | 22 | 34 | 28 | 97 |
| | Number of primary outcomes correctly reported as primary outcomes | 4 | 1 | 18 | 24 | 27 | 74 |
| | Percentage of primary outcomes correctly reported | 44.4% | 25.0% | 81.8% | 70.6% | 96.4% | 76.3% |
| | Number of primary outcomes reported anywhere | 7 | 1 | 18 | 24 | 28 | 78 |
| | Percentage of primary outcomes reported anywhere | 77.8% | 25.0% | 81.8% | 70.6% | 100.0% | 80.4% |
| Secondary outcomes | Total number of secondary outcomes pre-specified | 49 | 36 | 111 | 218 | 404 | 818 |
| | Number of secondary outcomes correctly reported as secondary outcomes | 15 | 26 | 78 | 141 | 190 | 450 |
| | Percentage of secondary outcomes correctly reported | 30.6% | 72.2% | 70.3% | 64.7% | 47.0% | 55.0% |
| | Number of secondary outcomes reported anywhere | 15 | 26 | 78 | 142 | 190 | 451 |
| | Percentage of secondary outcomes reported anywhere | 30.6% | 72.2% | 70.3% | 65.1% | 47.0% | 55.1% |
| Novel outcomes | Number of novel outcomes reported without declaration | 32 | 25 | 53 | 192 | 63 | 365 |
| | Mean number of novel outcomes reported without declaration, per trial | 6.4 | 8.3 | 4.1 | 8.0 | 2.9 | 5.4 (95% CI 1.2–10.6) |
| | Percentage of novel outcomes declared as novel | 5.9% | 0.0% | 39.1% | 9.4% | 3.1% | 13.7% (95% CI 0.0–42.5%) |
Letter publication rates
| | Annals | BMJ | JAMA | Lancet | NEJM | Total |
|---|---|---|---|---|---|---|
| Letters required | 5 | 2 | 11 | 20 | 20 | 58 |
| Percentage of letters required | 100.0% | 66.7% | 84.6% | 83.3% | 90.9% | 86.6% (95% CI 78.4–94.7%) |
| Letters published | 5 | 2 | 0 | 16 | 0 | 23 |
| Percentage of letters published | 100.0% | 100.0% | 0.0% | 80.0% | 0.0% | 39.7% (95% CI 27.0–53.4%) |
| Mean publication delay for published letters | 0 days (online) | 0 days (online) | n/a | 150 days | n/a | 104 days (median 99 days, range 0–257 days) |
Coding amendments
Themes in responses from journals
| Theme and subthemes | Quote | Issue |
|---|---|---|
| Conflicts with CONSORT | | |
| Failure to recognise that post-commencement changes are acceptable under CONSORT but should be declared in the paper reporting the results of the trial | “On the basis of our long experience reviewing research articles, we have learned that pre-specified outcomes or analytic methods can be suboptimal or wrong”; “Although pre-specification is important in science, it is not an altar at which to worship… [COMPare’s] assessments appear to be based on the premise that trials are or can be perfectly designed at the outset… and that any changes investigators make to a trial protocol or analytic procedures after the trial start date indicate bad science” (Annals Editors critique, 01/03/16). | COMPare uses CONSORT as the gold standard. CONSORT item 6b requires that trial reports declare and explain “any changes to trial outcomes after the trial commenced, with reasons” in the paper reporting the results of the trial. Changes are not forbidden; however, they should be declared in the trial report. |
| Stating that outcome switching doesn’t matter if the main results of the study are unlikely to be affected | “We reviewed materials associated with the articles and concluded that the information reported in the articles accurately represented the scientific and clinical intent detailed in the protocols... We found no inconsistencies between the audited articles and their related protocols that would justify changes in trial interpretation, corrections, or warnings to readers” (Trial 45, Annals, 06/04/16). | CONSORT requires all outcomes to be correctly reported; it does not distinguish between circumstances when this would, or would not, affect the overall interpretation of the intervention being trialled. It is unlikely that all outcome misreporting would change the direction or size of an overall finding; however, a culture of permissiveness around correct outcome reporting does permit misrepresentation more broadly. |
| Statement describing journal practices that contradict CONSORT guidance | “We view each piece individually and add the data as appropriate based on the judgment of the peer reviewers, the statistical reviewers, and the editors” (NEJM emails 1, 17/11/15). | CONSORT item 6b requires that trial reports declare and explain “any changes to trial outcomes after the trial commenced, with reasons” in the paper reporting the results of the trial. |
| Statement that failure to report pre-specified secondary outcomes is not of interest | “We will not ordinarily consider letters that simply... point out unpublished secondary outcomes” (JAMA emails, 09/12/15). | |
| Denial of endorsing CONSORT, despite appearing on CONSORT’s list of endorsing journals | “The New England Journal of Medicine finds some aspects of CONSORT useful but we do not, and never have, required authors to comply with CONSORT” (NEJM emails 1, 17/11/15). | |
| Timing of pre-specification | | |
| Dismissal of pre-commencement registry data as “out of date” | “The initial trial registry data… often include outdated … entries” (Annals Editors critique, 01/03/16); “Registries... do not routinely monitor whether the data in the registry match the protocol, and may not be updated when the protocol changes. We therefore rely primarily on the protocol” (Annals Editors critique, 01/03/16). | The statement that registry data are “outdated” may reflect a broader misunderstanding about the need for outcomes to be pre-specified pre-commencement. Where the registry entry is the only accessible source of pre-specified outcomes, discrepancies should be declared as per CONSORT 6b. Even if there is a contemporaneous protocol that is not publicly accessible, the pre-specified outcomes in this protocol should match its registry entry; if not, then there are two sets of discrepant pre-specified outcomes, which requires declaration and discussion. Of note, for one trial [Trial 57, Annals, 03/05/16], we found three different sets of pre-specified outcomes in two registries (EUCTR and ClinicalTrials.gov) and one protocol from the same time period. |
| Stating or implying that pre-specification after trial commencement is acceptable | “We disagree with COMPare’s contention that registry data are superior to protocol information because of the timing of the former ...” (Trial 45, Annals, 06/04/16). | COMPare used pre-commencement outcomes from registry data only as a last resort when they were not available from a pre-commencement protocol. Pre-specification of outcomes should take place before trial commencement. CONSORT item 6b requires that trial reports declare and explain “any changes to trial outcomes after the trial commenced, with reasons” in the paper reporting the results of the trial. |
| Registries | | |
| Dismissal of registry data as unreliable | “We check the registries, but as both authors’ responses attest, registry information can be incomplete or lack sufficient detail” (Trial 45, Annals, 06/04/16); “The initial trial registry data… often include... vague or erroneous entries” (Annals Editors critique, 01/03/16); “Registries include only extracted information” (Annals Editors critique, 01/03/16). | Publicly accessible trial registries are a cornerstone of trial transparency. Trialists are legally required to correctly register their trials; pre-specified outcomes are a required component under WHO guidance on trial registration; and ICMJE member journals commit to ensuring that trials are appropriately registered. Where the only source of pre-commencement outcomes contains information so imprecise that correct outcome reporting cannot be assessed, we suggest that “inadequately pre-specified outcomes” be noted in the paper reporting the trial’s results, as this presents a similar risk of bias to misreporting of clearly pre-specified outcomes. |
| Stating that discrepancies between outcomes pre-specified in a registry entry and those reported in the paper are the fault of the registry | “Inaccuracies in the trial registration documents are more of an issue for the individuals overseeing the trial registries” (JAMA emails, 09/12/15); “We will not ordinarily consider letters that simply note discrepancies with the trial registration” (JAMA emails, 09/12/15). | It is the responsibility of the journal and trialist to ensure that a trial is correctly reported, with discrepancies against outcomes pre-specified prior to commencement declared as per CONSORT 6b. If there are discrepancies between the outcomes pre-specified and the outcomes reported in the paper, then the paper is discrepant, not the source of pre-specified outcomes. If the pre-specified outcomes on a registry are inconsistent with those in a contemporaneous protocol, then there are multiple sets of pre-specified outcomes and therefore the outcomes have not been correctly pre-specified: this should be noted in the results manuscript. |
| Rhetoric | | |
| Stating that space constraints prevent all pre-specified outcomes from being reported | “Space constraints for articles published in the Journal do not allow for all secondary and other outcomes to be reported” (NEJM emails 1, 21/11/15). | The claim that space constraints prevent all pre-specified outcomes from being reported conflicts with the finding of COMPare, and prior research on outcome misreporting, that non-pre-specified additional outcomes were routinely added, in large numbers: a mean of 5.4 novel non-pre-specified outcomes were added per trial in COMPare (range 2.9–8.3 by journal). |
| | JAMA: “authors are not always required to report all secondary outcomes and all pre-specified exploratory or other outcomes in a single publication, as it is not always feasible given the length restrictions to include all outcomes in the primary report” (JAMA emails, 09/12/15). | |
| General statement about supporting goals of COMPare | “Though we share COMPare’s overarching goals to assure the validity and reporting quality of biomedical studies, we do not agree with their approach” (Trial 44, Annals, 15/12/16). | All such statements were accompanied by caveats, statements that explicitly or implicitly undermined the journals’ commitment to CONSORT, or incorrect statements about specific data points. |
| | “While the goal of the COMPare project (http://www.compare-trials.org) is noble, my colleagues and I have outlined concerns with COMPare’s approach (1)” (Trial 45, Annals, 06/04/16). | |
| Statements about journal processes | | |
| Statement that authors are required to declare changes to outcomes | “When the review process generates requests for authors to report outcomes not specified in the protocol or the authors choose themselves to present such outcomes, we ask authors to indicate these as post hoc or exploratory analyses” (Annals Editors critique, 12/02/16). | We cannot verify whether Annals ask authors to do this; however, we can confirm that trials reported in Annals are routinely non-compliant with CONSORT, a finding which is consistent with previous research. COMPare found that, in Annals trials, only 6% of novel non-pre-specified outcomes added to trial reports were correctly indicated by the Annals manuscript as novel; a mean of 6.4 novel undeclared outcomes were added per trial; 44% of primary outcomes were correctly reported; and 31% of secondary outcomes were correctly reported. |
| | “To be consistent with CONSORT recommendations, we ask authors to describe, either in the manuscript or in an appendix, any major differences between the trial registry and protocol, including changes to trial endpoints or procedures” (Annals Editors critique, 01/03/16). | |
| Statement that journal has a process to ensure correct outcome reporting | “We carefully check for discrepancies between the protocol and the manuscript” (JAMA emails, 09/12/15). | We cannot verify JAMA’s internal processes; however, we can confirm that trials reported in JAMA are routinely non-compliant with CONSORT, a finding which is consistent with previous research. COMPare found that, in JAMA trials, 39% of novel outcomes added to trial reports were correctly indicated as novel; a mean of 4.1 novel undeclared outcomes were added per trial; only 82% of primary outcomes were correctly reported; and 70% of secondary outcomes were correctly reported. |
| | “We agree that it is important for researchers to pre-specify primary and secondary outcomes before conducting a trial and to report outcomes accurately in their publications. In fact, we carefully monitor this during editorial review” (JAMA emails, 09/12/15). | |
| Placing responsibility on others (for example, trialists or reader) | | |
| Stating that readers can see for themselves whether outcomes reported are discrepant with those pre-specified | NEJM: “Any interested reader can compare the published article, the trial registration and the protocol (which was published with the article) with the reported results to view discrepancies” (NEJM emails 1, 21/11/15). | COMPare found that accessing documents and assessing trials for correct outcome reporting took between 1 and 7 hours per trial. |
| Passing responsibility to trialists rather than journals or editors | The Lancet published 15 out of 20 letters, mostly with accompanying responses from trialists: the majority of author responses expressed further misunderstandings about what constitutes correct outcome reporting, as reported in the accompanying paper on trialists’ responses. The Lancet made no comment themselves [all correspondence]. We asked the journal to clarify their position in our follow-up correspondence: “Since The Lancet have a longstanding positive commitment to improving reporting standards, lead the REWARD campaign on research integrity, and endorse CONSORT, we would welcome their perspective on why undeclared outcome switching in PETIT2 (and others) was apparently not addressed prior to publication; whether they now view outcome switching as acceptable; or whether they disagree that it has happened here”. We received no reply and our letter was not published (Trial 9, Lancet, 05/02/16). | Where a journal is listed as endorsing the CONSORT guidelines on trial reporting, it is reasonable to expect that they will take responsibility for ensuring that trials are reported consistently with these guidelines. |
| Placing responsibility on trial registry staff | “Inaccuracies in the trial registration documents are more of an issue for the individuals overseeing the trial registries” (JAMA emails, 09/12/15). | As above, if there are discrepancies between the outcomes pre-specified and the outcomes reported in the paper, then the paper, not the source of pre-specified outcomes, is discrepant. |
NEJM quote | Issue |
---|---|
“[The criticism by COMPare that] AEs leading to discontinuation [were] not correctly reported... is false. Protocol indicates safety and tolerability as second of 2 primary objectives, and registration lists incidence of AEs leading to discontinuation as 1 of 2 primary outcome measures. First line of Table 3 and first sentence of Safety section (p. 2604) reports that 1 of 624 patient treated with sofosbuvir-velpatasvir discontinued due to AE” (NEJM first comments on trial 22 (1)). | NEJM are incorrect. The outcome in question was pre-specified as a primary outcome but incorrectly reported by NEJM as a secondary outcome. COMPare therefore coded it as reported, but incorrectly reported. This is clearly denoted in the COMPare assessment sheet for this trial, and the COMPare letter reads, “There were 2 prespecified primary outcomes, of which one is reported in the paper; while one is incorrectly reported as a secondary outcome”. |
“[The criticism by COMPare that] Secondary outcome SVR [was] not reported in publication... is false. This is reported in Table 2. The COMPARE reviewers may not appreciate that SVR4 (sustained virologic response week 4) is equivalent to HCV RNA <15 IU/mL at week 4, which is reported in Table 2. HCV RNA <15 IU/mL is the lower limit of detection of the assay, as indicated in the Table footnote” (NEJM first comments on trial 22 (2)). | This is invalid. COMPare correctly coded this outcome as missing. Table 2 does report HCV RNA <15 IU/mL at “week 4”, but this was week 4 during treatment (which was 12 weeks long); SVR4 is sustained virologic response at week 4 post-treatment. Hence, we correctly concluded that HCV RNA <15 IU/mL at week 4 post-treatment (SVR4) was not reported in the publication. It appears that NEJM editors did not realise that SVR4 refers to 4 weeks post-treatment rather than the 4th week of treatment; this explains both the misreporting of this outcome in NEJM and the error in their review of the letter from COMPare.
“[The criticism by COMPare that] proportion with HCV RNA < LLOQ on treatment [was] not reported in publication… is false. The COMPARE reviewer may not appreciate that “HCV RNA < LLOQ” is equivalent to “HCV RNA < 15 IU/mL”. Table 2 reports HCV RNA < 15 IU/mL during treatment” (NEJM first comments on trial 22 (4)). | This is invalid. The time point for this outcome was given in the registry entry as “up to 8 weeks”, and results were reported in NEJM only for 2 and 4 weeks. We therefore concluded that the pre-specified outcome was not reported. The fact that this discrepancy relates only to the time point is made explicit in the letter submitted by COMPare to NEJM, which states that the outcome “is not reported at the pre-specified timepoint, but is reported at two novel time-points”. Because of variation in clinical presentation over time, and the attendant risk of selective reporting, under CONSORT each separate time point at which an outcome is measured is regarded as a separate outcome. |
“[The criticism by COMPare that] HCV RNA change from baseline [was] not reported in publication… is false. The change in HCV RNA from baseline is conveyed by reporting the mean HCV RNA at baseline (Table 1) and the rates of HCV RNA < 15 IU/mL (Table 2). Table S4 [of the trial report manuscript] reports the HCV RNA levels for the 2 patients who [experienced] virologic failure” (NEJM first comments on trial 22 (5)). | This is invalid and represents a concerning approach to reporting pre-specified outcomes. NEJM suggests that readers calculate the results for a pre-specified outcome themselves. In addition, “HCV RNA change from baseline” cannot be calculated from the numbers reported. Mean baseline HCV RNA is reported. Mean follow-up HCV RNA is not reported. Table 2 reports only the proportion of patients with HCV RNA < LLOQ (undetectably low). |
“[The criticism by COMPare that] proportion with virologic failure [was] not reported in publication… is false. This is reported in Table 2 which reports virologic failure during treatment (0 patients) and virologic failure after treatment (1 patient)” (NEJM first comments on trial 22 (6)). | This is invalid. COMPare coded this outcome as “correctly reported”. This is clear on the assessment sheet. |
| Theme | Quote | Issue |
|---|---|---|
| Misrepresentation of COMPare’s methods | COMPare’s method is a “simple check for an exact word match between outcomes entered in a registry and those reported in a manuscript, but that oversimplifies a highly nuanced process” (Annals to BMJ). | This is untrue. COMPare did not seek literal word matches: each pre-specified outcome was manually checked and re-checked, as per previous research on outcome misreporting, using CONSORT as gold standard. |
| | “The initial trial registry data… serve as COMPare’s ‘gold standard’” (Annals Editors critique, 01/03/16). | This is untrue. As explained in our publicly accessible protocol, COMPare used the registry entry only as a last resort where there was no pre-commencement protocol publicly available, as CONSORT 6b requires that changes after commencement be noted in the trial report. Notably, no Annals trial had a publicly accessible pre-commencement protocol. |
| Stating that COMPare correspondence and raw data sheets contained insufficient information | “In addition, some of the information in your letters is vague, containing only numbers and not specific outcomes, making it difficult to understand the specific issues or reply to them. Moreover, the last 2 paragraphs of the letters you have submitted, concerning CONSORT and the COMPare project, are identical” (JAMA emails, 09/12/15). | All correction letters linked to the COMPare online repository where all underlying raw data sheets were shared in full, specifying in detail each pre-specified primary and secondary outcome, whether and how each pre-specified outcome was reported, each additional non-pre-specified outcome reported, and whether each non-pre-specified outcome added was correctly declared as non-pre-specified. This JAMA letter was received halfway through the COMPare study period. To address the reasons given for letter rejection, despite word length limits imposed by JAMA for correspondence, all subsequent JAMA letters had no repetition and extensive detail within the text on specific misreported outcomes. However, none of these subsequent letters was published and we received no further replies. |
| Warning readers against COMPare’s assessments | “Until the COMPare Project’s methodology is modified to provide a more accurate, complete and nuanced evaluation of published trial reports, we caution readers and the research community against considering COMPare’s assessments as an accurate reflection of the quality of the conduct or reporting of clinical trials” (Annals Editors critique, 01/03/16), (Trial 25, Annals, 14/12/15), (Trial 44, Annals, 15/12/16), (Trial 45, Annals, 15/12/15), (Trial 68, Annals, 30/12/15). | All Annals’ critiques on matters of fact were incorrect; Annals rejected replies demonstrating this to readers. Following this comment posted by Annals under all COMPare correspondence, no trialists engaged with any of our evidence of their failure to correctly report pre-specified outcomes. We regarded Annals’ advising authors not to engage with reasonable professional criticism of their methods and results as a breach of ICMJE guidance on correspondence. |
| Claim that COMPare coding incorrect on specific outcomes | NEJM gave journalists a detailed review of COMPare’s assessment of one trial, which NEJM stated had identified six errors in COMPare’s assessment. This was reviewed, and NEJM were wrong on all six counts; full details are presented in the table above and in the correspondence appendix (NEJM first comments on trial 22). Another NEJM review of a COMPare letter was also factually wrong on all three issues it raised (NEJM second comments on trial 22 (2)). | The editors were wrong on all nine issues raised. The document they sent exemplified misunderstandings around the importance of reporting all pre-specified time points for each pre-specified outcome. |
Narrative account of individual journals’ responses
October–December 2015: Following submission, Annals accepted all COMPare letters as online comments only. Reading these requires registration for an Annals user account.
14 December 2015: Annals editors published an 850-word critique of COMPare as an online comment on the CASCADE trial, which later appeared as a full-page article in print in the March 1 edition and as a standalone online letter. This piece has no named authors. It contained various incorrect and internally inconsistent statements on outcome pre-specification and reporting, as documented in Table 3 and online [20]. Annals declined to publish a response from COMPare in print or below the standalone online letter (Trial 5, Annals, 14/12/15). In their critique, Annals stated that they prefer to use protocols over trial registries and that registries often contain “outdated, vague or erroneous entries” (Table 3). However, pre-trial protocols were not available for any of the Annals trials assessed by COMPare. From February to April 2016, the official Annals social media account claimed, incorrectly, to have fully published COMPare correspondence on four occasions [Annals tweets] after their non-publication of COMPare responses was reported elsewhere [21].
14–30 December 2015: Annals editors posted an identical comment beneath four out of five COMPare comments on misreported trials in Annals: “we caution readers and the research community against considering COMPare’s assessments as an accurate reflection of the quality of the conduct or reporting of clinical trials... we do not believe that COMPare’s comments on [trial] merit a response”. In our view, this conflicts with ICMJE guidelines: “The authors of articles discussed in correspondence ... have a responsibility to respond to substantial criticisms of their work using those same mechanisms and should be asked by editors to respond” [22]. Following this comment from Annals editors, no authors engaged on the issue of whether they had reported their pre-specified outcomes. In March 2016, Annals clarified that their comment was not intended to dissuade authors from replying to COMPare comments. No trial authors have replied on the concerns raised about their outcome reporting since Annals’ initial comment.
1 March 2016: Two COMPare comments were published in print in Annals, with responses from an author and the editors (mentioned above). Both of these responses contained further errors and misunderstandings in relation to the outcomes published in the trial report; Annals declined to publish subsequent COMPare correspondence pointing out these issues. | |
18 March 2016: After Annals editors stated that protocols, rather than registry entries, should be used to assess outcome reporting fidelity, COMPare requested the protocol for Trial 45 (Everson et al.) from the lead author: this protocol was not published, but the “Reproducible Research Statement” in Annals stated that it was available on request. We received a reply from Gilead Sciences stating that the protocol is confidential [Everson emails, Annals]. COMPare raised concerns about this in a further online comment to Annals [Trial 45, Annals, 19/04/16]; Annals subsequently issued a correction to the Reproducible Research Statement.
19 April 2016: Annals changed its “Instructions to Authors” to require, for all future trials, submission of the protocol for publication alongside the trial report. Annals told journalists that this change was planned and predated COMPare’s concerns [23].