Background
Objectives
Methods
Review protocol
Searches
Electronic searches
Searching other resources
Study inclusion and exclusion criteria
Primary outcomes
Secondary outcomes
- Perceived relevance of systematic review summaries
- Perceived credibility of the summaries
- Perceived usefulness and usability of systematic review summaries
  - Perceptions and attitudes regarding the specific components of the summaries and their usefulness
- Understandability of summaries
- Desirability of summaries (e.g., layout, selection of images) [5]
Potential effect modifiers and reasons for heterogeneity
Study quality assessment
Data extraction strategy
Data synthesis and presentation
Results
Review statistics
Results of the search
Study ID | Methods | Participants | Intervention description | Outcomes |
---|---|---|---|---|
Brownson 2011 [23] | RCT | Legislative staff members (e.g., committee staff), state legislators, and executive branch administrators (e.g., division directors, program heads) | 4 different policy briefs on mammography screening to reduce breast cancer mortality - Data-focused brief with state-level data - Data-focused brief with local-level data - Story-focused brief with state-level data - Story-focused brief with local-level data. Each participant was emailed 1 of the 4 briefs. | Self-reported understandability (3 measures assessing whether the information was presented clearly, in an attractive way, and held the reader’s attention) and credibility (2 measures assessing whether the information in the brief was believable and accurate) |
Carrasco-Labra 2016 [30] | RCT | Health care professionals, guideline developers, and researchers who use and/or develop systematic reviews | An alternate summary of findings table was compared against the current format - the alternate format provides options to display the same data in a different way or to provide supplementary data to the current format | Self-reported understanding assessed with 7 multiple choice questions (5 response options). Self-reported accessibility of information assessed across 3 self-reported domains (how easy it is to find critical information, how easy it is to understand the information, and whether the information is presented in a way that is useful for decision-making). Satisfaction measured by asking about satisfaction with the different formatting elements. Preference assessed using a 7-point Likert scale for the 2 tables |
Dobbins 2009 [25] | RCT | Front line staff, managers, directors, coordinators, and others from public health departments in Canada (those directly responsible for making program decisions related to healthy body weight promotion in children) | 1st group (control) - access to health-evidence.ca and an email about access to this resource. 2nd group - tailored, targeted messages: 7 emails with titles of 7 high-quality SRs related to healthy body weight promotion in children and links to the full text, abstract, and summary, plus access to health-evidence.ca. 3rd group - same intervention as the 2nd group plus access to a full-time knowledge broker who ensured relevant research was provided to the decision makers in a useful way, helped them develop skills for evidence-informed decision-making, and translated the evidence | Self-reported global evidence-informed decision-making related to healthy body weight promotion (participants reported the extent to which research evidence was considered in a recent program planning decision within the previous 12 months); public health policies and programs, measured by the sum of actual strategies, policies, and/or interventions for healthy body weight promotion in children being implemented by the department |
 | RCT | Individuals who normally read policy briefs related to international development, e.g., those employed in academia, NGOs, and international aid organizations; some self-reported influence on policy decisions and were therefore considered policymakers | 3 versions of a policy brief summarizing the results of a SR - one group received a standard policy brief - the 2nd group received a policy brief with a director’s commentary - the 3rd group received the policy brief with an unnamed research fellow’s commentary | Beliefs about the effectiveness of, and strength of the evidence for, the interventions included in the briefs |
Opiyo 2013 [27] | RCT | Panel of healthcare professionals with roles in neonatal and pediatric policy and care in Kenya | 3 intervention packages - Pack A contained a systematic review alone - Pack B contained systematic reviews with summary of findings tables - Pack C contained an evidence summary in a graded entry format | Self-reported understanding of the summary content, measured by the proportion of correct responses to clinical questions relevant to the effects of the intervention. Value and accessibility (usefulness and usability) of the evidence assessed using a 3- or 5-point scale |
Vandvik 2012 [28] | RCT | All panelists for the antithrombotic therapy and prevention of thrombosis guidelines, American College of Chest Physicians | 2 formats of the evidence profile that differed by 4 features - placement of additional information - placement of overall quality of evidence - study event rates - absolute risk differences. Each group received 1 of 4 emails with similar text but different links allowing download of the evidence profile | User preferences for specific formatting options and the overall format of the table assessed using a 7-point Likert scale. Comprehension of key findings assessed with multiple choice questions. Accessibility of the information on quality of evidence and relative and absolute effects assessed across 3 domains (easy to find, easy to understand, and helpful in making a recommendation) using a 7-point scale. Time needed to comprehend information about quality assessment and key findings assessed by asking participants to record the time before and after answering the comprehension questions |
Wilson 2011 [31] | RCT | Decision makers (programs, services, advocacy) from community-based HIV/AIDS organizations in Canada affiliated with the Canadian AIDS Society and from relevant provincial HIV/AIDS networks | At baseline, all participants will receive the “self-serve” evidence service (includes a listing of relevant systematic reviews, links to PubMed records, and worksheets to help find and use research evidence). During the intervention, one group will receive the “full-serve” version of SHARE (Synthesized HIV/AIDS Research Evidence) which includes access to a database of HIV systematic reviews, emailed updates, access to user-friendly summaries, links to scientific abstracts, peer relevance assessments (indicating how useful the information is), as well as an interface for comments in the records, plus links to the full text, and access to worksheets to help find and use evidence. The control group will continue to receive the self-serve evidence service. During the final 2-month period, both groups will receive the full-serve version of SHARE | The primary outcome measure will be the mean number of logins/month/organization. The secondary outcome will be intention to use research evidence (measured with a survey administered to one key decision maker from each organization) |
Wilson 2015 [32] | CBA | Clinical Commissioning Groups: governing body and executive members, clinical leads, and any other individuals deemed to be involved in commissioning decision-making processes | 3 arms: (1) consulting plus responsive push of tailored evidence (access to an evidence briefing service provided by the Centre for Reviews and Dissemination (CRD), plus advice and support via phone, email, and face-to-face; monthly check-ins to discuss further evidence needs and issues around use of evidence; alerting the team to new SRs and other synthesized evidence relevant to priorities); (2) consulting plus an unsolicited push of non-tailored evidence (as intervention 1, but with evidence briefings lacking tailoring and contextual information); or (3) “standard” service (CRD will disseminate evidence briefings generated in intervention 1 and any other non-tailored briefings produced by CRD over the intervention period) | Primary outcome: change at 12 months from baseline in a CCG’s ability to acquire, assess, adapt, and apply research evidence to support decision-making. Secondary outcomes will measure individuals’ intentions to use research evidence in decision-making |
Description of included studies
Study | Type of evidence summary | Format of summary | Method of delivery | Components | Outcomes |
---|---|---|---|---|---|
Brownson 2011 [23] | Policy brief | Printed leaflet/booklet, with a PDF version for those who prefer online | Mailed, with a follow-up telephone call; emailed if preferred | Front cover varied according to story- versus data-driven format, color printed (included data or story); the 3rd and 4th pages are the same across all 4 briefs; data-driven briefs contained 2 statements with percentages related to mammography screening; story-driven briefs had 2 personal stories related to mammography; all briefs had data about uninsured women, women not up to date on mammograms, breast cancer mortality compared to other causes, benefits of mammograms, and recommendations | The briefs were considered understandable and credible (mean ratings ranged from 4.3 to 4.5 on a 5.0-point Likert scale). Likelihood of using the brief differed by study condition for staff members (p = 0.041) and legislators (p = 0.018). Staff members found the story-focused brief with state-level data the most useful; legislators found the data-focused brief with state-level data the most useful |
Carrasco-Labra 2016 [30] | Summary of findings table | Table | Emailed link to online survey | The new format of the summary of findings table moved the number of participants and studies to the outcomes column; quality of evidence was presented with the main reasons for downgrading; “footnotes” was changed to “explanations”; baseline risk and corresponding risk were expressed as percentages; a column presented the absolute risk reduction (risk difference) or mean difference; there was no comments column; a “what happens” column was added; and the GRADE evidence definitions were not described | Participants given the new summary of findings table format had a higher proportion of correct answers for almost all questions. The new format was more accessible: information about the effects was easier to understand (MD 0.4, SE 0.19), and results were displayed in a way that was more helpful for decision-making (MD 0.5, SE 0.18). Overall, participants preferred the new format (MD 2.8, SD 1.6) |
Dobbins 2009 [25] | Evidence summaries | Text | Targeted, tailored emails | Short summary including key findings and recommendations | The post-intervention change in global evidence-informed decision-making was 0.74 (95% CI 0.26 to 1.22) for the group receiving only access to health-evidence.ca; –0.42 (–1.10 to 0.26) for the group receiving tailored, targeted emails; and –0.09 (–0.78 to 0.60) for the knowledge broker group. The changes in health policies and programs (HPP) after the intervention were –0.28 (–1.20 to 0.65) for the group receiving only access to the health-evidence.ca website; 1.67 (0.37 to 2.97) for the group receiving tailored, targeted messages; and –0.19 (–1.50 to 1.12) for the group with access to a knowledge broker. Tailored, targeted messages were more effective than the knowledge broker intervention or access to www.health-evidence.ca in organizations with a culture that highly values research |
 | Policy brief | Text, colored leaflet | Email | Introduction to the problem, description of methodology, conclusions, and policy implications; 2 versions had expert commentary | Respondents with stronger beliefs about the agricultural interventions at baseline rated the policy brief more favourably. The policy brief was less effective in changing respondents’ ratings of the strength of the evidence and the effectiveness of the intervention |
Opiyo 2013 [27] | Summary of findings table, graded entry summary of evidence | Text, tables | Email | Summary of findings table. The graded entry format included a summary and interpretation of the main findings and conclusions, a contextually framed narrative report, and a summary of findings table | No differences between groups in the odds of correct responses to key clinical questions; both packs B and C improved understanding. Pack C, compared to pack A, was associated with a significantly higher mean “value and accessibility” score and with 1.5 higher odds of judgments that the quality of evidence was clear and accessible. More than half of participants preferred narrative report formats to the full version of the SR (53% versus 25%). A higher percentage of respondents (60%) found SRs more difficult to read than narrative reports, but some (17%) said that SRs were easy to read. About half of the participants (51%) found SRs easier to read, compared to those favouring summary of findings tables (26%) |
Vandvik 2012 [28] | Summary of findings table | Table | Email | Tables presented outcomes, number of participants, summary of findings, and quality assessment using GRADE | Participants preferred presentation of study event rates over no study event rates, absolute risk differences over absolute risks, and additional information in table cells over footnotes. Panelists presented with time frame information in the tables, rather than only in footnotes, were more likely to answer questions about the time frame correctly, and those presented with risk differences, rather than absolute risks, were more likely to interpret confidence intervals for absolute effects correctly. Information was considered easy to find, easy to comprehend, and helpful in making recommendations regardless of table format |
Study quality assessment
Evidence summaries to increase policymakers’ use of systematic review evidence | |||
Patient or population: policymakers and health system managers; Settings: ; Intervention: evidence summaries based on systematic reviews; Comparison: any comparison | | |
Outcomes | Impact | No. of participants (studies) | Quality of the evidence (GRADE) |
Use of systematic review evidence in decision-making | Little to no difference in effect on evidence-informed decision-making when compared to access to a knowledge broker or an online registry of research [25]. Little to no difference in effect on self-reported likelihood of using data-driven versus story-driven policy briefs (with state-level or local-level data) [23] | 399 (2) | ⊕⊕⊕⊝ Moderatea |
Understanding, knowledge and/or beliefs | One study found little to no effect on understanding of information provided in different summary of findings table formats [28], while the other found that those provided with a new version of the summary of findings table had consistently higher proportions of correct answers assessing understanding of key findings provided in the table [30]. Little to no difference in understanding of information for a graded entry format compared to a summary of findings table or systematic review alone [27] | 676 (4) | ⊕⊕⊕⊝ Moderatea |
Perceived credibility of the summaries | Little to no difference in perceived credibility for different versions of the policy brief (data-driven versus story-driven, local- versus state-level data) [23] | 291 (1) | ⊕⊕⊕⊝ Moderatea |
Perceived usefulness and usability of systematic review summaries | The graded entry format was rated higher than the systematic review alone, and there was little to no difference between the ratings for the summary of findings table and the systematic review alone [27] | 443 (3) | ⊕⊕⊕⊝ Moderatea |
Perceived understandability of the summaries | All formats of the policy brief were reported as easy to understand [23]. Graded entry formats were easier to understand than summary of findings tables or systematic reviews alone [27] | 356 (2) | ⊕⊕⊕⊝ Moderatea |
Perceived desirability of the summaries | | 378 (2) | ⊕⊕⊕⊕ High |