Nomenclature
International rapid reviews vary widely in terms of the language used to describe these reviews, timeframes to complete them, content of the reviews, and methods. Various terms associated with rapid or accelerated methods for conducting reviews found within the literature include: 'rapid review,' 'rapid health technology assessment,' 'rapid response,' 'ultra rapid review,' 'rapid evidence assessment,' 'technotes,' 'succinct timely evaluated evidence review,' and 'rapid and systematic reviews.' These rapid reviews vary in the length of time taken to conduct literature reviews and synthesis, with timeframes ranging from one to nine months. Some reviews called themselves 'rapid,' yet used timeframes similar to those of traditional systematic reviews or were unclear about steps taken to accelerate their approach. Many studies failed to acknowledge the length of time taken to conduct the reviews. Some organizations that conduct rapid reviews have made available general guidelines about their rapid review products and processes. For example, the National Institute for Health and Clinical Excellence (NICE) has developed guidelines for rapid response products or health technology assessments that are usually completed within approximately nine months [7,8]. Garces briefly described the rapid review process used by the Canadian Agency for Drugs and Technologies in Health, stating that these reviews provide enhanced rigour beyond health technology inquiries, usually take four months to complete once the scope of review is defined, and follow a format similar to their full health technology assessments [9].
Methodological Approaches
Rapid reviews employ a variety of methodologies and vary in the depth of description of the methods used to make the processes rapid. Very few reviews explicitly address the questions of what was lost or what bias was introduced by using these methods. Numerous examples of rapid review methods were found; exemplars were chosen to demonstrate maximal variability in the methods used for rapid reviews found within this literature search (Table S1, Additional file 1). We considered framing the various rapid review methods in the context of time taken to complete the syntheses; however, many did not report this information, and the time required to conduct reviews is also dependent on staff availability. Instead, Table S1 (Additional file 1) has been organized in terms of the implications of the methodological shortcuts taken, from minimal to significant levels of bias potentially introduced that would impact estimates of effectiveness as a consequence of the methodological approach. While we have suggested the implications of choosing the various methods, we acknowledge that the evidence, direction, and magnitude of any risk of bias cannot truly be assessed if methods have not been fully described [10]. Moreover, although a decision was made to structure the table based on potential for bias, part of this assessment is inevitably subjective because there is no way to quantify the relative impact of some methodological decisions (e.g., exclusion by study design versus failure to include grey literature).
Many reviews introduced restrictions at the literature-searching and retrieval stages. Several searches were truncated to include only readily accessible published literature, including limitations by language and date of publication, or by number of electronic databases searched. Others conducted systematic searches of published literature, yet limited searches of unpublished literature. One rapid review narrowed its search in terms of geographical context and setting (i.e., primary healthcare) to ensure that evidence could be readily applied to the context of interest [11]. Some acknowledged that their literature review and search term selection were not iterative processes, so some relevant references may have been missed [12]. Several others acknowledged restricted timeframes for articles to be retrieved and assessed, and limited ability to follow up with authors and industry contacts to clarify information presented [12-16]. Some rapid reviews streamlined systematic review methods at later stages in the process, including during title and abstract review, full text review, data extraction, and quality assessment phases.
Comparisons of rapid versus traditional methods
A review comparing rapid versus full systematic reviews found that overall conclusions did not vary greatly in cases where both rapid and full systematic reviews were conducted [17]. In terms of content, however, full reviews were more likely than rapid reviews to report clinical outcomes, economic factors, and social issues. Systematic reviews were also more likely to provide greater depth of information and detail in recommendations. Given these differences between systematic and rapid reviews, it is suggested that rapid reviews may be useful to answer certain types of questions, but are not viable alternatives to full reviews. Based on Cameron's inventory of current rapid review methods, it is also suggested that while standardization of rapid review methods may not be appropriate, it is important that transparency of methods be achieved [17]. Watt et al. found that although the scope of rapid reviews is limited, they can provide adequate advice for clinical and policy decisions [18]. Watt et al. also acknowledged that rapid reviews may not be appropriate for all healthcare or technology assessments. In a review of health technology assessments (HTAs) in the United States, Eisenberg and Zarin discussed increased pressure by Medicare to conduct assessments within shortened timeframes (approximately 45 days) while maintaining transparency and scientific rigour [19]. Eisenberg and Zarin identified a number of concerns associated with rapid HTAs, including: the complex nature of many questions; the scarcity of methodological and content knowledge for many rapid HTA topics; the challenges associated with synthesizing studies of lesser quality; and the need for methodological transparency to enhance the scientific credibility of the rapid HTA process.
In a methodology discussion paper, Burls et al. also stated the need for transparency of methods used, particularly in the absence of standardized methods for thorough yet non-systematic literature searches [13]. The discussion paper also recommended minimum reporting standards related to rapid review methods. Oxman, Schunemann, and Fretheim recommended that rapid reviews should be explicit in terms of methods, limitations, and biases, and should also state the need for follow-up with a full systematic review [5].
Few articles explicitly summarized or focused on rapid review methodologies. Elliott et al. provided details about the rapid response process for NICE in the United Kingdom [7]. Updated and revised guidelines have recently been published by NICE [8]. Its rapid review process included: a six- to nine-month timeframe; a needs assessment to provide a clear understanding of the issue; an initially broad literature search to develop scope; consultation with key stakeholders to refine and focus the scope; guidance development over four months; and peer review or public consultation about the results of the draft summary report [7].
The Magenta Book: Guidance notes for policy evaluation and analysis, by the Government Social Research Unit in England, discussed rapid evidence assessments that fall methodologically between health technology assessments and systematic reviews and are completed within two to three months [20]. These rapid reviews synthesize available evidence using 'fairly comprehensive' search strategies and sift out poor-quality evidence, but do not exhaustively search published and grey literature.
Publication bias
Bias can be introduced in many ways through the methodological approach to study location and selection [22]. Butler et al. outlined methods used in rapid evidence assessments (REAs), and acknowledged that selection bias, publication bias, and language of publication bias may be introduced when using literature that is readily accessible to a researcher [23]. Within their REAs, exhaustive database searching, hand searching, and grey literature searching are not initially undertaken. Furthermore, it was suggested that the shortened timeframe associated with REAs increased the risk of publication bias [24].
Topfer et al. compared literature searches within the MEDLINE and EMBASE electronic databases and found that the greatest yield of relevant resources came from combined searches of the databases, because each identified resources not found in the other [24]. Topfer et al. acknowledged that more comprehensive search strategies come with increased time and cost for reviewers. While Royle and Milne also found that additional database searching produced additional trials, these represented only a small percentage of the overall number of trials [25].
Sampson et al. found that searching MEDLINE but not EMBASE has the potential to impact meta-analysis effect size estimates, suggesting a potential for database bias [26]. Royle and Waugh compared the cost-effectiveness of various literature retrieval strategies and found diminishing marginal returns with increased database searching [27]. Instead, Royle and Waugh recommended that, when timeframes are restricted, hand searching of relevant reference lists and consultation with experts about missed articles may be more effective than exhaustive database searching. Oxman, Schunemann, and Fretheim supported this recommendation and also suggested that when conducting rapid assessments with limited resources, priority should be placed on quality assessment over extensive literature searching. In addition, contacting experts and hand searching reference lists should be given priority over additional database searching [3].
Doust et al. compared the sensitivity and precision of search strategies using bibliographic databases versus hand searching for references [28]. This study highlighted the potential for increased accuracy, but decreased practicality, in hand searching a large number of journals. Doust et al. recommended using 'snowballing' techniques, and also having two reviewers screen citation lists to maximize the sensitivity of bibliographic searching. Hopewell et al. compared hand searching versus electronic searching and found that a combination of these approaches provides the most comprehensive results when searching published literature [29]. Hopewell et al. found that hand searching provided greater search yields than electronic searching alone, and suggested this is likely related to the indexing of terms within the databases [29]. Langham, Thompson, and Rowan compared hand searching versus MEDLINE searching of the emergency medicine literature, with similar conclusions: hand searching is better than electronic searching, but a dual approach to literature searching should be employed [30]. The accuracy of hand searching is, however, dependent upon the knowledge and expertise of those conducting the searches.
McManus et al. reviewed the importance of contacting experts in literature searching, indicating that electronic searching may locate only one-half of relevant studies, and that 24% of relevant studies may be missed by not contacting experts [31]. Contacting experts is particularly important in fields lacking well-defined specialist literature, because hand searching is often focused on such specialist literature. Savoie et al. studied the sensitivity and precision of extended search methods, and found that searching beyond electronic databases, with specialized databases and trial registries, was most effective for identifying relevant randomized controlled trials [32]. In addition, Edwards et al. examined the accuracy and reliability of reviewers in screening records, and found that while a single reviewer is likely to identify the majority of relevant records, having a second reviewer maximizes inclusion and can increase the records identified by an average of 9% [33].
Small and unpublished study effects
A few studies addressed the impact of grey literature on treatment effect within meta-analyses. A Cochrane review of the impact of grey literature in meta-analyses of randomized controlled trials found that the inclusion of grey literature decreased publication bias and provided more conservative treatment effects than when grey literature was excluded [29]. Hopewell et al. reported results consistent with this; published trials were typically larger and showed greater treatment effects than those found within grey literature [34]. McAuley et al. found that exclusion of grey literature could lead to inflated effectiveness estimates, and suggested that meta-analyses should seek to include all grey and unpublished reports that meet study inclusion criteria [35]. In contrast, Sterne, Gavaghan, and Egger examined the impact of small study effects on meta-analyses, and found that the inclusion of smaller studies may increase treatment effects and introduce bias due to their potentially lower methodological quality [36].
Language of publication bias
Other studies addressed the impact of other languages of publication on treatment effects and conclusions in meta-analyses [37,38]. Juni et al. found that including non-English studies typically involved greater effort to locate them, as well as cost and time to translate, but that their exclusion led to more conservative treatment effect estimates [37]. Juni et al. also concluded that the need to include non-English studies may depend on the topic of the review, and on whether relevant studies within the specialty literature are predominately published in English. In contrast to these findings, Moher et al. found that language-restricted meta-analyses did not differ significantly in intervention effectiveness estimates when compared to language-inclusive meta-analyses [38,39].
Egger et al. suggested that if the content area of a review is housed primarily within published literature, then a review based on a search restricted to English-language studies will likely produce similar results to one without language restrictions [40]. Lawson et al. found that systematic reviews that did not restrict searches by language tended to be more comprehensive in their searches and inclusion of relevant literature [41]. They did, however, find that systematic review results can be influenced by restricting languages if language of publication is associated with study quality [42]. The influence of language is also dependent upon whether the review is based on conventional medicine or complementary and alternative medicine [42]. It has also been suggested that, depending on the content area of the planned review, investigators need to consider the literature search and the level of comprehensiveness of searching necessary [40]. For example, the methodological quality of harder-to-find studies also needs to be considered, as they may be of lower methodological quality and may actually increase bias by their inclusion [40]. In contrast, specific to other-language trials, Moher et al. found no difference in trial quality and reporting between English and other-language trials, and suggested that inclusion of other languages can increase precision and reduce language of publication bias [43]. An additional consideration beyond language is the country or location of study publication. Vickers et al. found that some countries publish higher proportions of positive results (i.e., publication bias), which may have implications for rapid review results if a search is limited by publication location [44].