
‘Spin’ in reports of clinical research
Kamal R Mahtani

Nuffield Department of Primary Care Health Sciences, Centre for Evidence-Based Medicine, University of Oxford, Radcliffe Observatory Quarter, Oxford, UK

Correspondence to: Dr Kamal R Mahtani, Nuffield Department of Primary Care Health Sciences, Centre for Evidence-Based Medicine, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK; kamal.mahtani@phc.ox.ac.uk


Introduction

For many researchers, the number of publications and the impact those publications make are the usual currency for measuring professional worth. Furthermore, researchers are increasingly discussing their work in public through mainstream and social media as more of these opportunities arise. With this increased exposure may come a temptation for researchers to report their findings more favourably than they deserve, that is, to add some spin. However, according to the EQUATOR network, such practice constitutes misleading reporting, and the network highlights as particularly bad practice the misinterpretation of study findings, for example, presenting a study more positively than the actual results reflect or downplaying harms.1

Prevalence of spin in clinical research

An analysis of 72 randomised controlled trials that reported statistically ‘non-significant’ results for their primary outcomes found that more than 40% of the trials had some form of spin, defined by the authors as the “use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically non-significant difference for the primary outcome, or to distract the reader from statistically non-significant results”.2 The analysis identified a number of spin strategies. Among the most common were focusing the reporting on statistically significant results from other analyses, that is, not the primary outcomes, or focusing on another study objective to distract the reader from a statistically non-significant result. Another analysis, this time of 107 randomised controlled trials in oncology, similarly found that nearly half of the trials demonstrated some form of spin in either the abstract or the main text.3

Systematic reviews of primary research should, in theory at least, address some of these problems. By seeking the totality of available evidence, interpreting the impact of bias and then synthesising the evidence into a usable form, they are powerful tools for informing clinical decisions. However, not all systematic reviews are equal. Non-Cochrane systematic reviews are twice as likely as Cochrane reviews to have positive conclusion statements.4 Furthermore, when matched to an equivalent Cochrane review on the same topic, non-Cochrane reviews are more likely to report larger effect sizes with lower precision.5 In both cases, these findings may well reflect the extent to which methodological complexity is ignored or sidestepped in poorer quality reviews.

Thus, not all systematic reviews are equal, and neither are they exempt from spin. A review of systematic reviews of psychological therapies found spin in 27 of the 95 included reviews (28%).6 Another recent study identified 39 different types of spin that may be found in a systematic review, 13 of which were specific to reports of systematic reviews and meta-analyses.7 When Cochrane systematic review editors and methodologists were asked to rank the most severe types of spin found in the abstracts of reviews, their top three were (1) recommendations for clinical practice not supported by the findings in the conclusion, (2) a misleading title and (3) selective reporting.

Impacts of spin from clinical research

Spin may influence how clinicians interpret information. A randomised controlled trial allocated 150 clinicians to assess a sample of cancer-related abstracts containing spin and another 150 clinicians to assess the same abstracts with the spin removed.8 Although the absolute effect size was small, the study found that the presence of spin made clinicians significantly more likely to report that the treatment was beneficial. Paradoxically, the study also showed that spin caused clinicians to rate the study as less rigorous and made them more likely to want to review the full-text article.

Dissemination of research findings to the public, for example through mainstream media, can also be a source of added spin. An analysis of 498 scientific press releases from the EurekAlert! database identified 70 that referred to two-arm, parallel-group randomised controlled trials.9 Spin, which included a tendency to place more emphasis on the beneficial effects of a treatment, was identified in 33 (47%) of these press releases. Furthermore, the authors of the analysis found that the main factor associated with spin in a press release was the presence of spin in the abstract conclusion.

Motivation for adding spin

This is a complex area, to which more relevant research might add clarity. A desire to demonstrate ‘impact’ from one's work may certainly be one driver. Another may be wishful thinking, the desire to have a treatment that works.10 Other proposed mechanisms include (1) ignorance of scientific standards, (2) young researchers' imitation of previous practice, (3) unconscious prejudice or (4) wilful intent to influence readers.7

Pre-existing conflicts of interest (COI) will almost certainly have some bearing on the presence of spin. As an example, an overview of systematic reviews examined whether financial COI influenced the overall conclusions of systematic reviews of the relationship between consumption of sugar-sweetened beverages (SSBs) and weight gain or obesity.11 Of the included studies, 5/6 systematic reviews that disclosed some form of financial COI with the food industry reported no association between consumption of SSBs and weight gain. In contrast, 10/12 reviews that reported no potential COI found that consumption of SSBs could be a risk factor for weight gain.

While a great deal of discussion focuses on financial COI, far less attention is paid to the problems that arise from ‘non-financial’ COI (NFCOI) or ‘private interests’.12 For systematic reviews, these types of conflicts have been defined as ‘a set of circumstances that creates a risk that the primary interest—the quality and integrity of the systematic review—will be unduly influenced by a secondary or competing interest that is not mainly financial’.13 Examples of NFCOI include strongly held personal beliefs (eg, leading to a possible ‘allegiance bias’), personal relationships, a desire for career advancement or (increasingly possible now) a greater media profile.13,14 All of these have the potential to affect professional judgement and may generate a message that does not fairly reflect the true research findings.

Unfortunately, a significant proportion of clinical research is still littered with various types of bias, which can distort treatment decisions and waste valuable resources.15,16 The added bias of spin, whether motivated by financial, personal or intellectual COI, or even plain ignorance, further complicates the problem. Those who produce research evidence, as well as those who read it, must remain vigilant.

Acknowledgments

The author extends thanks to Jeff Aronson, Meena Mahtani and Annette Plüddemann for helpful comments on an earlier draft.
