
Open Access 01.01.2018 | Method

Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 5: how to assess adequacy of data

Authors: Claire Glenton, Benedicte Carlsen, Simon Lewin, Heather Munthe-Kaas, Christopher J. Colvin, Özge Tunçalp, Meghan A. Bohren, Jane Noyes, Andrew Booth, Ruth Garside, Arash Rashidian, Signe Flottorp, Megan Wainwright

Published in: Implementation Science | Supplement 1/2018

Abstract

Background

The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach has been developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) working group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation.
CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations; (2) coherence; (3) adequacy of data; and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on CERQual’s adequacy of data component.

Methods

We developed the adequacy of data component by searching the literature for definitions, gathering feedback from relevant research communities and developing consensus through project group meetings. We tested the CERQual adequacy of data component within several qualitative evidence syntheses before agreeing on the current definition and principles for application.

Results

When applying CERQual, we define adequacy of data as an overall determination of the degree of richness and the quantity of data supporting a review finding. In this paper, we describe the adequacy component and its rationale and offer guidance on how to assess data adequacy in the context of a review finding as part of the CERQual approach. This guidance outlines the information required to assess data adequacy, the steps that need to be taken to assess data adequacy, and examples of adequacy assessments.

Conclusions

This paper provides guidance for review authors and others on undertaking an assessment of adequacy in the context of the CERQual approach. We approach assessments of data adequacy in terms of the richness and quantity of the data supporting each review finding, but do not offer fixed rules regarding what constitutes sufficiently rich data or an adequate quantity of data. Instead, we recommend that this assessment is made in relation to the nature of the finding. We expect the CERQual approach, and its individual components, to develop further as our experiences with the practical implementation of the approach increase.
Notes

Electronic supplementary material

The online version of this article (https://doi.org/10.1186/s13012-017-0692-7) contains supplementary material, which is available to authorized users.

Background

The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach has been developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) working group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation.
GRADE-CERQual (hereafter referred to as CERQual) includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations; (2) coherence; (3) adequacy of data; and (4) relevance. This paper focuses on one of these four CERQual components: data adequacy.
When carrying out a CERQual assessment, we define adequacy of data as an overall determination of the degree of richness as well as the quantity of data supporting a review finding [1]. The adequacy component in CERQual is analogous to the imprecision domain used in the GRADE approach for findings from systematic reviews of effectiveness [2].

Aim

The aims of this paper, part of a series (Fig. 1), are to describe what we mean by adequacy of data in the context of a qualitative evidence synthesis and to give guidance on how to operationalise this component in the context of a review finding as part of the CERQual approach. This paper should be read in conjunction with the papers describing the other three CERQual components [3–5] and the paper describing how to make an overall CERQual assessment of confidence and create a Summary of Qualitative Findings table [6]. Key definitions for the series are provided in Additional file 1.

How CERQual was developed

The initial stages of the process for developing CERQual, which started in 2010, are outlined elsewhere [1]. Since then, we have further refined the current definitions of each component and the principles for application of the overall approach using a number of methods. When developing CERQual’s adequacy component, we undertook informal searches of the literature, including Google and Google Scholar, for definitions and discussion papers related to the concept of adequacy and to related concepts such as data quantity, sample size and data saturation. We carried out similar searches for the other three components. We presented an early version of the CERQual approach in 2015 to a group of methodologists, researchers and end users with experience in qualitative research, GRADE or guideline development. We further refined the approach through training workshops, seminars and presentations during which we actively sought, collated and shared feedback; by facilitating discussions of individual CERQual components within relevant organisations; through applying the approach within diverse qualitative evidence syntheses [7–17]; and through supporting other teams in using CERQual [18, 19]. As far as possible, we used a consensus approach in these processes. We also gathered feedback from CERQual users through an online feedback form and through short individual discussions with members of the review teams. The methods used to develop CERQual are described in more detail in the first paper in this series [20].

Assessing data adequacy

Data adequacy in the context of qualitative research

Qualitative research methods differ from quantitative research methods in many ways, including the role that numbers play and the manner in which data adequacy is conceptualised. Typically, quantitative researchers gather a relatively limited amount of information from a large number of people and use this information to make estimates, for instance about what people do. When answering these types of questions, quantity plays an important role, and larger numbers, for instance large study populations or large numbers of events, generally serve as one marker of data adequacy. Qualitative researchers, on the other hand, generally study fewer people, but attempt to delve more deeply into these people’s perceptions, experiences and settings in order to understand how they experience the world and why they do what they do [21]. To answer these types of questions, rich and detailed data is often required, i.e. information that is detailed enough to allow the researcher or reader to interpret the meaning and context of what is being researched [22]. For qualitative research, data richness therefore often serves as an important marker of data adequacy [22].
This emphasis on data richness does not imply, however, that numbers do not matter in qualitative research. As Sandelowski has argued, sample sizes in qualitative research “may be too small to support claims of having achieved either informational redundancy or theoretical saturation, or too large to permit the deep, case-oriented analysis that is the raison d’être of qualitative inquiry” [23]. In other words, small numbers of study participants or observations can threaten our ability to make broad claims about a phenomenon, while large numbers of participants or observations can threaten our ability to carry out a thorough qualitative analysis that would allow us to properly explore and explain a phenomenon. When determining sample size in qualitative studies, a trade-off needs to be made between conditions that require more versus fewer participants in a sample, including the aim of the study, the analytic approach and the extent to which it is based on existing knowledge and theory [24].

Data adequacy in the context of a review finding

When assessing data adequacy in the context of a finding from a qualitative evidence synthesis, the same principles apply as for other types of qualitative research, although our focus moves from data generated from individual participants and observations to data generated from individual studies that contribute to a specific finding.
When carrying out a CERQual assessment, we define adequacy of data as an overall determination of the degree of richness as well as the quantity of data supporting a review finding.
When we talk about the richness of the data, we are referring to the extent to which the information that the individual study authors have provided is detailed enough to allow the review author to interpret the meaning and context of what is being researched [22]. When we refer to the quantity of data, we are thinking primarily of the number of studies and participants that this data comes from.
There are no fixed rules regarding what constitutes sufficiently rich data or an adequate quantity of data. Instead, any assessment of data adequacy is always relative and is judged according to “what you are expecting the data to do” [25]. In primary qualitative research, what we expect the data to do, and thereby the amount of data that we consider sufficient, will vary from study to study. While one in-depth interview could yield a rich and valid account of one person’s experiences and be sufficient to document, for instance, that these experiences are within the realms of possibility, three such interviews could be sufficient to show that people may experience the same phenomenon very differently [26]. Even higher numbers of in-depth interviews could show us how people’s experiences vary and help us identify the experiences that many people share. The same principles apply for qualitative evidence syntheses.
When assessing data adequacy, our aim is not to judge whether data adequacy has been achieved, but to judge whether there are grounds for concern regarding data adequacy that are serious enough to lower our confidence in the review finding. We are likely to have concerns about data adequacy when we have concerns about the richness or the quantity of the data in relation to the claims made in the review finding.
The concept of “data saturation” is related to, although not the same as, the concept of data adequacy (Table 1).
Table 1
The relationship between data adequacy and data saturation
A related concept to data adequacy is the concept of data saturation. In primary research, “data saturation” is often used to refer to the point in data collection and analysis when “no new themes, findings, concepts or problems were evident in the data” [29]. When used in this way, the concept of data saturation is clearly different from the concept of data adequacy, as the former focuses on identifying new themes while the latter focuses on the extent to which an individual theme or finding is adequately supported by the data. Within grounded theory, however, the concept of data saturation is more ambitious and “relates not merely to ‘no new ideas coming out of the data’ but to the notion of a conceptually dense theoretical account of a field of interest in which all categories are fully accounted for, the variations within them explained, and all relationships between the categories established, tested and validated for a range of settings” [25]. This second use of the concept is closer to the concept of data adequacy, as both focus on the extent to which the data has allowed us to explore the topic in sufficient depth. But there are also differences between these concepts. Researchers applying the concept of data saturation in the context of primary research use this concept as an ideal or goal when collecting and analysing data, and strive to collect new data until saturation has been met. When applying the concept of data adequacy in the context of a qualitative evidence synthesis, on the other hand, researchers assess data that has already been collected and focus on identifying concerns with this data. As the process of data saturation is potentially limitless, and determining the point at which “saturation” has happened is difficult, if not impossible [26], the concept of data adequacy may be a more pragmatic and feasible approach.
 

Guidance on how to assess data adequacy in the context of a review finding

Step 1: collect and consider the necessary information related to adequacy (Fig. 2)

To carry out an assessment of data adequacy, you need to collect the following information for each review finding, and present it, for instance, in a table or matrix:
  • An overview of the data upon which each review finding was based
  • An overview of the number of studies from which this data originated, and where possible, the number of participants or observations. Information about the number of participants or observations supporting each finding may be difficult to gain from the individual studies. While most studies describe the number of participants they included in their study overall or give some indication of the extent of their observations, they may be less clear about how well represented participants are in different themes and categories. You can contact study authors for additional information, but they may not be able to readily provide this level of detail. In these cases, this lack of information should be noted, and your assessment of data adequacy will have to be made based on the information available.
If you are using CERQual on findings from your own review, it should be straightforward to collect the information described above as you may already have constructed this type of table or matrix as part of the review process. However, if you are using CERQual to make an assessment of findings from other people’s reviews, this is likely to be a time-consuming process. This is particularly the case when it comes to gaining an overview of the data upon which each review finding is based as review authors do not commonly report all of the data supporting each review finding. Unless you have access to their data extraction sheets, you will need to collect this data yourself by going back to the original references associated with each review finding. For more information on using CERQual on findings from somebody else’s review, see the second paper in this series [6].
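For review teams that keep this information in a spreadsheet or script rather than a word-processed table, the per-finding matrix described above can be represented as one structured record per review finding. The sketch below, in Python, is a minimal illustration only and is not part of the CERQual approach; the class name, field names and example content are assumptions about what such a record might contain.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AdequacyData:
    """Illustrative record of the Step 1 information for one review finding
    (hypothetical field names, not part of the CERQual approach)."""
    finding: str                           # the review finding being assessed
    supporting_studies: List[str]          # studies contributing data to the finding
    participants: Optional[int] = None     # participants/observations, where reported per finding
    data_extracts: List[str] = field(default_factory=list)  # the data supporting the finding
    notes: str = ""                        # e.g. missing participant counts, contacts with study authors

    @property
    def number_of_studies(self) -> int:
        return len(self.supporting_studies)


# Example row in such a matrix (content invented for illustration)
row = AdequacyData(
    finding="Parents generally found the amount of vaccination information inadequate.",
    supporting_studies=["Study A 2010", "Study B 2014"],
    data_extracts=["'We were only given a single leaflet at the clinic...'"],
    notes="Participant numbers per theme not reported; study authors contacted.",
)
print(row.number_of_studies)  # -> 2
```

A structure of this kind simply collects the information needed for the assessment; the adequacy judgement itself remains a qualitative one, as described in Step 2.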

Step 2: assess the body of data that contributes to each review finding and decide whether you have concerns about data adequacy

Once you have collected the necessary information, you can start to assess whether you have any concerns about the richness of the data supporting each review finding and the quantity of this data. (In addition to the guidance we offer here, Table 2 also presents examples of how data adequacy can be assessed for a selection of review findings. These examples illustrate how considerations of both data richness and quantity feed into the overall determination of data adequacy).
Table 2
CERQual assessments of data adequacy in the context of a review finding—examples [7, 10, 12]
Example 1: minor concerns
A qualitative evidence synthesis explored factors affecting the implementation of lay or community health worker programmes for maternal and child health [12]. One of the review findings was relatively complex and explanatory in that it made claims about programme recipients’ attitudes towards the lay health workers and suggested factors that appear to influence these attitudes:
“Programme recipients were generally very positive to lay health workers. Reasons for this included the respect, kindness and concern shown by lay health workers, and their non-dogmatic approach. Recipients also appreciated the similarities they saw between themselves and the lay health workers, either because they came from the same community or because they shared similar social backgrounds.”
Twenty-five studies contributed to this finding. Ten of these studies described how recipients were generally positive to the lay health workers, but offered little or superficial information about the factors that appeared to influence these attitudes. However, nine of the studies gave more detailed and specific information about these factors. Based on an overall assessment of the richness of the data and the quantity of the data, we concluded that we had only minor concerns about data adequacy.
Example 2: serious concerns
Another finding from the same qualitative evidence synthesis made the following claim:
“Recipients who lived near town and therefore had short distances to doctors preferred doctors to lay health workers”
This finding was also relatively complex and explanatory as it suggested an association between where people live and their preferences regarding different groups of healthcare workers. However, the data upon which this finding was based offered very little information about this phenomenon, and it was not possible to properly explore or understand why doctors were preferred, and what role the distance to doctors played in people’s preferences. The finding was also only based on two studies. Based on an overall assessment of the richness of the data and the quantity of the data, we concluded that we had serious concerns about data adequacy.
Example 3: serious concerns
A second qualitative evidence synthesis explored the mistreatment of women during childbirth in health facilities [10]. One of the review findings described a relatively unexplored phenomenon as well as making a claim that was unexpected:
“Studies from Benin and Sierra Leone suggest that either the mother or baby may be detained in the health facility, unable to leave until they pay their hospital bills.”
Two studies contributed to this finding and the data that this finding was based on were superficial. While the finding was relatively narrow in scope, the small number of studies was of concern as the finding was unexpected and we were unsure of the extent to which studies undertaken in other settings or groups would have reported similar issues. The lack of rich data was also of concern as we were unable to properly understand this unexplored phenomenon. For instance, it was unclear from the studies whether women and babies were commonly detained, how long women and babies were detained for, and how they experienced this phenomenon. We therefore concluded that we had serious concerns about data adequacy.
Example 4: No or very minor concerns
A third qualitative evidence synthesis explored parents’ views and experiences of communication about child vaccination [7], and included the following review finding:
“Parents generally found the amount of vaccination information they received to be inadequate.”
Seventeen studies contributed to this finding. The data that this finding was based on were often relatively superficial. However, as the finding was a relatively simple, primarily descriptive finding, we concluded that we had no or very minor concerns about data adequacy.
Example 5: moderate concerns
The same qualitative evidence synthesis [7] included the following finding:
“Parents want vaccination information resources to be available at a wider range of health services and community and online settings, for instance through schools, pharmacies, clinics and libraries.”
Only four studies contributed to this finding, and all had relatively thin data, which did give us some concern. However, we judged this to be a relatively simple and descriptive finding. We therefore concluded that we had moderate concerns about data adequacy.

Assessing data richness

You are likely to have concerns about the richness of the data if it does not provide you with sufficient detail to gain an understanding of the phenomenon described in the review finding. This is a judgement call. However, as a rule of thumb, for review findings that are simple and primarily descriptive, relatively superficial data may be sufficient. But when a review finding is complex or explanatory, e.g. when it suggests associations between different factors, you are less likely to have confidence in that finding if it is based on data that is too superficial to allow a sufficient exploration of the phenomenon (see Fig. 3; see also Table 2 for examples). For a description of descriptive and explanatory review findings, see the paper in this series on coherence [3].

Assessing data quantity

Alongside your assessment of data richness, you also need to consider the number of studies, participants or observations upon which the data are based. As described above, it is often difficult to gain information about the number of participants or observations that contributed to each specific finding, and you may have to focus on the number of studies.
While there is no fixed rule about what constitutes a sufficient number, you are likely to have less confidence in a review finding that is supported by data from only one or very few studies, participants or observations. This is because, when only a few or only small studies exist or are sampled, we are less confident that studies undertaken in other settings or groups would have reported similar findings. As is the case for data richness, assessments should always be made in relation to the claims the review finding is making.
Some review findings may make claims about a limited aspect of a phenomenon, a very specific group of people or a particular type of setting, and may need fewer studies. However, when a review finding makes claims about a broad phenomenon or a large variety of people, we are less likely to have confidence in that finding if it is based on a small number of studies (Fig. 3).
Other factors may increase your concerns about the richness or quantity of the data. Qualitative researchers not only look for common attitudes and experiences but are also interested in outliers and exceptions. However, for a review finding that makes claims about a relatively unexplored topic, or a topic where people lack a widely shared language, detailed descriptions of context, intentions and meaning may be required [27]. Similarly, a review finding that makes claims that are unexpected or that challenge common knowledge may require more data than review findings that represent a widely distributed experience or domain of knowledge [28], and you may have less confidence in these findings if they are based on a small number of studies.

Step 3: make a judgement about the seriousness of your concerns and justify this judgement

Once you have assessed data adequacy for each finding, you should categorise any concerns that you have identified as either:
  • No or very minor concerns
  • Minor concerns
  • Moderate concerns
  • Serious concerns
You should begin with the assumption that there are no concerns regarding adequacy. In practice, minor concerns will not lower your confidence in the review finding, while serious concerns will lower your confidence. Moderate concerns may lead you to consider lowering your confidence in your final assessment of all four CERQual components.
Where you have concerns about adequacy, describe these concerns in the CERQual Evidence Profile in sufficient detail to allow users of the review findings to understand the reasons for the assessments made. The Evidence Profile presents each review finding along with the assessments for each CERQual component, the overall CERQual assessment for that finding and an explanation of this overall assessment. For more information, see the second paper in this series [6].
Your assessment of data adequacy will be integrated into your overall assessment of confidence in each review finding. How to make this overall assessment of confidence is described in paper 2 [6].
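If the Evidence Profile is maintained in a structured, machine-readable form, the Step 3 categories and their justifications can be recorded alongside each finding. The sketch below is a minimal illustration only and is not part of the CERQual approach; the enum values mirror the four categories listed above, while the class and field names and the example content are assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class AdequacyConcern(Enum):
    """The four categories used in Step 3 to express concerns about data adequacy."""
    NO_OR_VERY_MINOR = "No or very minor concerns"
    MINOR = "Minor concerns"
    MODERATE = "Moderate concerns"
    SERIOUS = "Serious concerns"


@dataclass
class EvidenceProfileRow:
    """Illustrative Evidence Profile entry, restricted here to the adequacy
    component (hypothetical structure)."""
    finding: str
    adequacy: AdequacyConcern
    adequacy_explanation: str  # justification so users can understand the assessment


# Example assessment (content invented for illustration)
row = EvidenceProfileRow(
    finding="Recipients who lived near town preferred doctors to lay health workers.",
    adequacy=AdequacyConcern.SERIOUS,
    adequacy_explanation=(
        "Only two studies contributed, with thin data that did not allow us to "
        "explore why doctors were preferred or what role distance played."
    ),
)
print(f"{row.adequacy.value}: {row.adequacy_explanation}")
```

Recording the explanation alongside the category keeps the justification available when the adequacy assessment is later integrated into the overall CERQual assessment of confidence.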

Implications when concerns regarding the adequacy of data are identified

Concerns about data adequacy may not only have implications for your confidence in a review finding, but can also point to ways of improving future research. First of all, these concerns suggest that more primary research is needed in relation to the phenomenon discussed in the review finding. The review team should also consider whether the review needs to be updated once that research becomes available.
Secondly, concerns about the adequacy of the data may indicate that certain aspects of the review question were too narrow, and that you should consider widening the scope of future updates of the review. (Any changes that are made to the scope of the review are also likely to have an impact on your assessment of the other CERQual components).

Conclusions

Concerns about data adequacy may influence your confidence in a review finding and are therefore part of the CERQual approach. We approach assessments of data adequacy in terms of the richness and the quantity of the data supporting each review finding, but do not offer fixed rules regarding what constitutes sufficiently rich data or an adequate quantity of data. Instead, we recommend that you make this assessment in relation to the nature of the finding. This paper aims to describe our thinking so far and help review authors and others assess data adequacy in findings from qualitative evidence syntheses. We expect the CERQual approach, and its individual components, to develop further as our experiences with the practical implementation of the approach increase.

Open peer review

Peer review reports for this article are available in Additional file 2.

Acknowledgements

Our thanks for their feedback to those who participated in the GRADE-CERQual Project Group meetings in January 2014 or June 2015 or gave comments to the paper: Elie Akl, Heather Ames, Zhenggang Bai, Rigmor Berg, Jackie Chandler, Karen Daniels, Hans de Beer, Kenny Finlayson, Signe Flottorp, Bela Ganatra, Manasee Mishra, Susan Munabi-Babigumira, Andy Oxman, Tomas Pantoja, Hector Pardo-Hernandez, Vicky Pileggi, Kent Ranson, Rebecca Rees, Holger Schünemann, Anna Selva, Elham Shakibazadeh, Birte Snilstveit, James Thomas, Hilary Thompson, Judith Thornton, Joseph D. Tucker and Josh Vogel. Thanks also to Sarah Rosenbaum for developing the figures used in this series of papers and to the members of the GRADE Working Group for their input. The guidance in this paper has been developed in collaboration and agreement with the GRADE Working Group (http://www.gradeworkinggroup.org/).

Funding

This work, including the publication charge for this article, was supported by funding from the Alliance for Health Policy and Systems Research, WHO (www.who.int/alliance-hpsr/en). Additional funding was provided by the Department of Reproductive Health and Research, WHO (www.who.int/reproductivehealth/about_us/en/); Norad (Norwegian Agency for Development Cooperation: www.norad.no); the Research Council of Norway (www.forskningsradet.no); and the Cochrane Methods Innovation Fund. SL is supported by funding from the South African Medical Research Council (www.mrc.ac.za). The funders had no role in the study design, data collection and analysis, preparation of the manuscript or the decision to publish.

Availability of data and materials

Additional materials are available on the GRADE-CERQual website (www.cerqual.org).
To join the CERQual Project Group and our mailing list, please visit our website: http://www.cerqual.org/contact. Developments in CERQual are also made available via our Twitter feed: @CERQualNet.

About this supplement

This article has been published as part of Implementation Science Volume 13 Supplement 1, 2018: Applying GRADE-CERQual to Qualitative Evidence Synthesis Findings. The full contents of the supplement are available online at https://implementationscience.biomedcentral.com/articles/supplements/volume-13-supplement-1.

Ethics approval and consent to participate

Not applicable. This study did not undertake any formal data collection involving humans or animals.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
References
1. Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895.
2. Guyatt GH, Oxman AD, Kunz R, Brozek J, Alonso-Coello P, Rind D, Devereaux PJ, Montori VM, Freyschuss B, Vist G, et al. GRADE guidelines 6. Rating the quality of evidence—imprecision. J Clin Epidemiol. 2011;64(12):1283–93.
3. Colvin CJ, Garside R, Wainwright M, Lewin S, Bohren M, Glenton C, Munthe-Kaas HM, Carlsen B, Tuncalp Ö, Noyes J, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 4: how to assess coherence. Implement Sci. 2018;13(Suppl 1). https://doi.org/10.1186/s13012-017-0691-8.
4. Munthe-Kaas HM, Bohren M, Carlsen B, Glenton C, Lewin S, Colvin CJ, Tuncalp Ö, Noyes J, Booth A, Garside R, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 3: how to assess methodological limitations. Implement Sci. 2018;13(Suppl 1). https://doi.org/10.1186/s13012-017-0690-9.
5. Noyes J, Booth A, Lewin S, Carlsen B, Glenton C, Munthe-Kaas HM, Colvin CJ, Garside R, Bohren M, Rashidian A, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 6: how to assess relevance of the data. Implement Sci. 2018;13(Suppl 1). https://doi.org/10.1186/s13012-017-0693-6.
6. Lewin S, Bohren M, Rashidian A, Glenton C, Munthe-Kaas HM, Carlsen B, Colvin CJ, Tuncalp Ö, Noyes J, Booth A, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 2: how to make an overall CERQual assessment of confidence and create a summary of qualitative findings table. Implement Sci. 2018;13(Suppl 1). https://doi.org/10.1186/s13012-017-0689-2.
7. Ames HMR, Glenton C, Lewin S. Parents’ and informal caregivers’ views and experiences of communication about routine childhood vaccination: a synthesis of qualitative evidence. Cochrane Database Syst Rev. 2017;2:CD011787.
8. Aslam RW, Hendry M, Carter B, Noyes J, Rycroft Malone J, Booth A, Pasterfield D, Charles JM, Craine N, Tudor Edwards R, et al. Interventions for preventing unintended repeat pregnancies among adolescents (protocol). Cochrane Database Syst Rev. 2015;(1):CD011477.
9. Bohren MA, Hunter EC, Munthe-Kaas HM, Souza JP, Vogel JP, Gulmezoglu AM. Facilitators and barriers to facility-based delivery in low- and middle-income countries: a qualitative evidence synthesis. Reprod Health. 2014;11(1):71.
10. Bohren MA, Vogel JP, Hunter EC, Lutsiv O, Makh SK, Souza JP, Aguiar C, Saraiva Coneglian F, Diniz AL, Tuncalp O, et al. The mistreatment of women during childbirth in health facilities globally: a mixed-methods systematic review. PLoS Med. 2015;12(6):e1001847.
11. Colvin CJ, de Heer J, Winterton L, Mellenkamp M, Glenton C, Noyes J, Lewin S, Rashidian A. A systematic review of qualitative evidence on barriers and facilitators to the implementation of task-shifting in midwifery services. Midwifery. 2013;29(10):1211–21.
12. Glenton C, Colvin CJ, Carlsen B, Swartz A, Lewin S, Noyes J, Rashidian A. Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database Syst Rev. 2013;10:CD010414.
13. Munabi-Babigumira S, Glenton C, Lewin S, Fretheim A, Nabudere H. Factors that influence the provision of intrapartum and postnatal care by skilled birth attendants in low- and middle-income countries: a qualitative evidence synthesis. Cochrane Database Syst Rev. 2017;(11):CD011558.
14. Munthe-Kaas HM, Hammerstrøm KT, et al. Effekt av og erfaringer med kontinuitetsfremmende tiltak i barnevernsinstitusjoner. Oslo: Norwegian Knowledge Centre for the Health Services; 2013.
15. O’Brien TD, Noyes J, Spencer LH, Kubis HP, Hastings RP, Edwards RT, Bray N, Whitaker R. ‘Keep fit’ exercise interventions to improve health, fitness and well-being of children and young people who use wheelchairs: mixed-method systematic review protocol. J Adv Nurs. 2014;70(12):2942–51.
16. Rashidian A, Shakibazadeh E, Karimi-Shahanjarini A, Glenton C, Noyes J, Lewin S, Colvin C, Laurant M. Barriers and facilitators to the implementation of doctor-nurse substitution strategies in primary care: qualitative evidence synthesis (protocol). Cochrane Database Syst Rev. 2013;2:CD010412.
17. Whitaker R, Hendry M, Booth A, Carter B, Charles J, Craine N, Edwards RT, Lyons M, Noyes J, Pasterfield D, et al. Intervention now to eliminate repeat unintended pregnancy in teenagers (INTERUPT): a systematic review of intervention effectiveness and cost-effectiveness, qualitative and realist synthesis of implementation factors and user engagement. BMJ Open. 2014;4(4):e004733.
18. Downe S, Finlayson K, Tuncalp Ӧ, Metin Gulmezoglu A. What matters to women: a systematic scoping review to identify the processes and outcomes of antenatal care provision that are important to healthy pregnant women. BJOG. 2016;123(4):529–39.
19. Odendaal WA, Goudge J, Griffiths F, Tomlinson M, Leon N, Daniels K. Healthcare workers’ perceptions and experience on using mHealth technologies to deliver primary healthcare services: qualitative evidence synthesis (protocol). Cochrane Database Syst Rev. 2015;(11):CD011942.
20. Lewin S, Booth A, Glenton C, Munthe-Kaas HM, Rashidian A, Wainwright M, Bohren MA, Tuncalp Ö, Colvin CJ, Garside R, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci. 2018;13(Suppl 1). https://doi.org/10.1186/s13012-017-0688-3.
21. Baker S, Edwards R. How many qualitative interviews is enough? Expert voices and early career reflections on sampling and cases in qualitative research. Southampton: National Centre for Research Methods; 2012. p. 1–42.
22. Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res. 1998;8(3):341–51.
23.
24. Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. 2016;26(13):1753–60.
25. Green J, Thorogood N. Qualitative methods for health research. London: Sage; 2005.
26.
27. Charmaz K. In: Baker S, Edwards R, editors. How many qualitative interviews is enough? Expert voices and early career reflections on sampling and cases in qualitative research. Southampton: National Centre for Research Methods; 2012. p. 21–2.
28. Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.
29. Francis J, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, Grimshaw JM. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.