A decade after the FDAAA, 74% of phase III trials published in high-impact journals contained some discrepancy between the information and results reported in ClinicalTrials.gov and those in the corresponding publication, marking only a slight decrease from studies of a decade prior [2, 3]. Such high rates of discordance suggest that the challenge of providing clear and consistent trial information and results across public sources of information has yet to be addressed. Most concerning were the inconsistencies observed in the reporting of primary efficacy endpoint results and serious adverse events. Although the magnitude of these discrepancies was often small, such as minor differences in cohort characteristics (e.g., in reported mean age), they raise questions as to which source is correct. Potential explanations include that the publications may have reported on a differently defined cohort than the original trial, or that trials were published before additional study observations accrued or after statistical analyses were refined and ClinicalTrials.gov was not subsequently updated. Although inconsistencies between registered and published primary outcomes were recently observed among a broad sample of clinical trials [4], our findings are particularly concerning because we focused on phase III trials published in high-impact journals, which are the trials that likely have the greatest influence on clinical care and are used in clinical practice guidelines.
Our study was limited to an 18-month sample of phase III trials that registered and reported results in ClinicalTrials.gov and were published in high-impact journals. Thus, it is likely that our study examined the trials following best practices with respect to result reporting, making our estimates of reporting discrepancies conservative. Investigators and sponsors who report results to ClinicalTrials.gov, in addition to publishing their study in the highest-impact journals, are more likely to adhere to best practices than those who fail to report results to ClinicalTrials.gov. Nevertheless, these findings underscore the need to monitor the concordance of clinical trial information and results reported across these sources. We propose a three-pronged approach to ensuring harmonized reporting of results: a checklist for investigators to use to ensure congruent reporting before submission to a journal, an acknowledgement in the submitted manuscript of any differences that investigators recognize, and a post-submission check by the journal editors. Sponsors and investigators face several challenges to accurate and consistent result reporting, including a high rate of research staff turnover, a lack of staff dedicated to monitoring result reporting at many academic institutions and smaller companies, and poor knowledge of FDA and NIH reporting requirements. A checklist, similar to those applied in surgical settings, may provide a systematized procedure for investigators to monitor accurate reporting to ClinicalTrials.gov throughout the trial process. Additionally, investigators who recognize differences between the results in their manuscript and those in ClinicalTrials.gov should explain these differences in the study publication. Finally, journal editors, upon receiving a submission, should request that the trial sponsors provide a link to the corresponding ClinicalTrials.gov entry and an itemized confirmation of consistency across the most important trial features (such as the four we examined in this study).