Published in: Trials 1/2019

Open Access 01.12.2019 | Letter

Comments on “Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals”

Authors: Rohan Kumar Ochani, Asim Shaikh, Naser Yamani


Abstract

Randomized controlled trials are considered the gold standard in assessing treatment regimens, and since the abstract may be the only part of a paper that a physician reads, accurate reporting of data in abstracts is essential. The CONSORT checklist for abstracts was designed to standardize data reporting; however, for papers submitted to anesthesiology journals, the level of adherence to the CONSORT checklist for abstracts is unknown. Therefore, we commend Janackovic and Puljak for their efforts in determining the adherence of reports of trials in the highest-impact anesthesiology journals between 2014 and 2016. The results of their study are extremely important; however, we believe that the study had some methodological limitations, which we discuss in this manuscript.
Notes
An author's reply to this comment is available online at https://doi.org/10.1186/s13063-019-3858-6.

Dear Editor,
The importance of adhering to the Consolidated Standards of Reporting Trials (CONSORT) checklist when reporting randomized controlled trials cannot be overstated, as the results of a trial can strongly influence clinical practice [1]. This is especially true for abstracts, since busy clinicians often rely solely on the abstract. Hence, we commend Janackovic and Puljak for their efforts in determining the adherence of trial reports to the CONSORT checklist for abstracts in the highest-impact anesthesiology journals between 2014 and 2016 [2]. The results of their study are extremely important; however, we believe that the study has some methodological limitations.
First, the study calculated an overall total adherence score for all trials, with each item in the checklist scored as “yes,” “no,” or “unclear.” The study therefore assigned equal weight to every item on the CONSORT checklist. We believe that scoring each item identically is not the best approach, as some items, such as randomization, blinding, and reporting of the primary outcome, evidently carry far more importance than others, such as the contact details of the authors [3]. Furthermore, the total adherence score is heavily influenced by the few items with extreme results. In the study, “interventions,” “objective,” “outcome,” and “conclusions” all had scores of over 90%, whereas “source of funding” had a score of only 0.2%. We suspect that these values had a profound impact on the total adherence score.
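A minimal sketch of this point, using invented per-item adherence rates and an assumed weighting scheme (neither is the study's actual data), shows how near-ceiling items dominate an unweighted mean while a weighted score shifts when methodological items are prioritized:

```python
# Hypothetical per-item adherence rates (fraction of abstracts reporting each
# item). Item names mirror the CONSORT-for-abstracts checklist, but the
# numbers are illustrative only.
rates = {
    "interventions": 0.95,
    "objective": 0.93,
    "outcome": 0.92,
    "conclusions": 0.94,
    "randomization": 0.30,
    "blinding": 0.25,
    "funding": 0.002,
}

# Unweighted mean: every item counts equally, so the four near-ceiling items
# pull the total upward despite poor reporting of randomization and blinding.
unweighted = sum(rates.values()) / len(rates)

# Assumed weights prioritizing methodological items; any such scheme is a
# judgment call, not part of the original study.
weights = {
    "interventions": 1, "objective": 1, "outcome": 2, "conclusions": 1,
    "randomization": 3, "blinding": 3, "funding": 1,
}
weighted = sum(rates[k] * weights[k] for k in rates) / sum(weights.values())

print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
```

With these illustrative numbers the unweighted mean sits noticeably above the weighted one, which is exactly the distortion described above.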
Second, the study states that “two authors independently screened bibliographic results.” An inter-rater reliability test, such as Cohen’s kappa, would have been of great benefit here. Multiple individuals collecting the same type of data often reach different conclusions, and variables prone to inter-rater error are common throughout the clinical literature [4]. Therefore, while resolving discrepancies via discussion may have produced a consensus, an inter-rater reliability test would have quantified the level of agreement and identified which variables were susceptible to error. The study does not report the level of agreement achieved.
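For two raters, Cohen's kappa is straightforward to compute from the observed agreement and the agreement expected by chance; a self-contained sketch, using hypothetical screening decisions rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement, from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical independent screening decisions for ten abstracts.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))
```

Here the raters agree on 8 of 10 abstracts (p_o = 0.8), but kappa is considerably lower because much of that agreement is expected by chance; reporting only a consensus would hide this.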
Finally, the study compares the total adherence scores obtained for each journal and states which had the highest and lowest scores. However, journals can have very different reporting criteria and policies for certain items [5]. Some journals require that certain items be reported in the full text rather than the abstract, and vice versa. Moreover, there can be discrepancies between an abstract and its corresponding full text [6]. Therefore, comparing journals on their total adherence scores may be misguided. Comparing individual checklist items between journals, especially important items such as allocation concealment, would perhaps be more effective at highlighting significant inadequacies in adherence to CONSORT.

Acknowledgements

Not applicable.

Competing interests

The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Falci SG, Marques LS. CONSORT: when and how to use it. Dental Press J Orthod. 2015;20(3):13–5.
2. Janackovic K, Puljak L. Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals. Trials. 2018;19(1):591.
3. Bridgman S, Engebretsen L, Dainty K, Kirkley A, Maffulli N; ISAKOS Scientific Committee. Practical aspects of randomization and blinding in randomized clinical trials. Arthroscopy. 2003;19(9):1000–6.
4. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012;22(3):276–82.
5. Shawwa K, Kallas R, Koujanian S, et al. Requirements of clinical journals for authors’ disclosure of financial and non-financial conflicts of interest: a cross sectional study. PLoS One. 2016;11(3):e0152301.
6. Li G, Abbade LPF, Nwosu I, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.
Metadata
Title: Comments on “Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals”
Authors: Rohan Kumar Ochani, Asim Shaikh, Naser Yamani
Publication date: 01.12.2019
Publisher: BioMed Central
Published in: Trials / Issue 1/2019
Electronic ISSN: 1745-6215
DOI: https://doi.org/10.1186/s13063-019-3857-7