Background
Colorectal cancer (CRC) is the second most common cause of cancer death in the United States (US) [1, 2]. The US Preventive Services Task Force recommends CRC screening (CRCS) for average-risk adults aged 45-75 [3]. Although CRCS rates increased in the years before the COVID-19 pandemic, they still lag behind national goals, and the pandemic caused additional delays or halts in screening [4]. For example, recent estimates suggest that 65.2% of adults were screened, whereas the US Department of Health and Human Services (DHHS) Healthy People 2030 target is 74.4% [5] and the National Colorectal Cancer Roundtable aims to achieve 80% CRCS rates in every community [6]. Moreover, CRCS rates differ across racial and ethnic groups, and disparities in screening uptake persist [7]. For example, CRCS uptake is highest among Whites and lowest among Hispanics [8].
Federally qualified health centers (FQHCs) provide affordable healthcare for many Americans, many of whom live at or below the federal poverty level and come from underserved communities with lower CRCS rates [9]. Despite serving many patients, FQHCs report CRCS rates (40.1% in 2020) well below the national average (65.2%) [10, 11]. CRCS is also a Uniform Data System clinical quality measure for health centers. To help increase CRCS rates, FQHCs use evidence-based interventions (EBIs), such as provider assessment and feedback, provider reminders, client reminders, and reducing structural barriers [12, 13]. EBIs provide guidance on strategies to implement and promote use of CRCS [14]. Additionally, the Guide to Community Preventive Services (the Community Guide) [15] disseminates recommended EBIs. Despite the availability of these EBIs, implementation remains a challenge; Hannon et al. and Adams et al. found that FQHCs often discontinue an EBI because of capacity issues [16, 17]. Thus, there is a gap in the motivation and capacity to effectively implement and sustain EBIs to improve CRCS. For example, when electronic health records cannot support integration of provider reminder systems or provider assessment and feedback reports, uptake, implementation success, and the sustainability of the EBI are compromised. Additionally, provider-related EBIs require strategic partnerships that take time to build, showing that readiness can be an ongoing and shifting process [17]. Moreover, CRCS is often a lower priority for providers, especially among patients with multiple chronic conditions or complex medical histories [18]. Several initiatives exist to increase implementation of EBIs to promote CRCS in FQHCs, including the Centers for Disease Control and Prevention’s (CDC) Colorectal Cancer Control Program [19], the Cancer Prevention and Control Research Network (CPCRN) [16], the American Cancer Society’s (ACS) Community Health Advocates Implementing Nationwide Grants for Empowerment and Equity (CHANGE) grant program [20], and the Evidence-Based Cancer Control Programs (EBCCP) [21].
In the health care setting, understanding and attending to organizational-level barriers and organizational readiness has been associated with implementation success [22‐24]. Readiness is a central construct in several implementation science frameworks, including the Interactive Systems Framework for Dissemination and Implementation (ISF) [25], the Consolidated Framework for Implementation Research (CFIR) [26], Getting To Outcomes [27], and Context and Capabilities for Integrating Care [28]. Organizational readiness plays a role during all phases of program implementation [22] and reflects the organization’s commitment, motivation, and capacity for change over time [24]. This idea of readiness emerged from the ISF [22, 25]. Informed by the ISF [22, 25] and past research identifying the importance of organizational capacity [29] and motivation [23, 30], Scaccia et al. [22] developed a heuristic for organizational readiness known as R = MC². The R = MC² heuristic proposes that readiness comprises three distinct components: the organization’s motivation to implement an innovation, its general organizational capacities, and its innovation-specific capacities.
Organizational readiness is critical to successful implementation, yet a valid and reliable measure aligned with the R = MC² heuristic is needed to increase implementation success [22, 23, 31‐34]. A readiness survey based on the R = MC² framework was originally developed to assess and monitor readiness for implementing a health improvement process among community coalitions and has since been used in other settings [35, 36]. For example, one study applied the readiness survey in a mixed methods approach among primary care and specialty clinics, pharmacies within health systems, and community pharmacies; engaging in the readiness work was associated with many benefits, including increased awareness of readiness challenges, alignment of priorities, and ensuring that the intervention was a good fit [37]. Another study adapted the readiness survey to assess organizational readiness for integrated care and developed the Readiness for Integrated Care Questionnaire (RICQ), which was then piloted with 11 health care practices serving vulnerable, underprivileged populations [36]. The readiness survey has further been applied to operationalize readiness building in a variety of settings: administering the survey is the first of three stages (assessment; feedback and prioritization; strategizing) used to develop and test practical strategies for supporting implementation in real-world settings [38]. Despite this prior use, the readiness survey had not been rigorously evaluated for its psychometric properties or used in FQHC settings to assess readiness for implementation of cancer control interventions. This study represents part of a rigorous process of adapting, validating, and testing the readiness survey, which is ultimately intended for use in multiple settings and for a variety of implementation efforts.
To be a well-established measure, the readiness survey must demonstrate adequate levels of reliability and validity [39]. The measure development process can include an initial item review by respondents to obtain feedback for improving measures; however, there are few examples in the research literature of members of the intended response community reviewing items for interpretability and clarity. Cognitive interviewing is a widely used method for improving understanding of question validity and reducing response error [40]. Cognitive interviewing (sometimes called learner verification) is a process by which participants verbalize their thought processes while responding to written text, such as a survey. Cognitive interviews may be used to examine the clarity of words and phrases, the cognitive processes used to arrive at an answer, problems with the measure’s instructions, and the optimal order and context in which information is presented to the interviewee [41, 42].
The work described in this article was part of a larger study to further develop, refine, and test the previously developed readiness survey [24]. The larger study consists of multiple phases that include both qualitative and quantitative analyses [24]. Results presented in this paper represent one of the qualitative phases of this larger measure development process [43]. The overall goal of the larger study is to adapt, further develop, and evaluate the validity and reliability of the existing readiness survey so that it can be used across settings and topic areas to assess readiness, inform implementation strategy development, and support other efforts to improve implementation of evidence-based interventions. The purpose of this paper is to describe the qualitative process used to improve the existing measure of readiness.
Discussion
Readiness assessments can be used to support and improve implementation of EBIs for cancer prevention and control and thus improve CRC outcomes. This paper described a process for collecting user-focused data to improve a comprehensive readiness measure based on the R = MC² heuristic, assessing how items were understood, their relevance to the healthcare setting and context, and general interpretations of the survey’s structure. A better measure of organizational readiness is an essential step toward informing strategies to improve implementation. This paper also provides an example of a rapid process for engaging the intended response community in improving measurement tools. Despite the challenges of conducting this study during the COVID-19 pandemic, we were able to gather opinions from a diverse range of voices (including job types), which strengthens the readiness survey. It is critical to ensure that tools are tailored to and representative of the intended audience.
Although the use of qualitative methods in implementation science is well established, few published studies describe the use of cognitive interviewing for developing and refining measures that assess contextual factors influencing implementation. This research provides an opportunity to better understand the complexity of the implementation context and to incorporate a diverse range of perspectives to improve our measure of readiness [50, 51]. Qualitative approaches explore the complexity of human behavior (feelings, perceptions, experiences, and thoughts) and generate a deeper understanding of participants’ experiences in particular settings. Incorporating qualitative data into this study helps tailor the readiness survey to the setting it is designed for [52, 53]. Furthermore, using a comprehensive Excel document to summarize changes to all subsets of the readiness survey was an effective strategy: it organized a vast amount of information into an easily accessible format that multiple team members could use when making decisions about refining the survey.
Within measure development, there are common issues to avoid when writing items [54‐57]. Our interview participants identified several of these in our measure (e.g., jargon, vague terms, and words interpreted as meaning the same thing). Such issues can be difficult to identify when relying solely on measure developers or “expert” reviewers. Thus, our study adds an important example of the advantages of a user-testing stage in the measure development process, and it demonstrates how we processed the information so that others can follow this approach.
A potential limitation of our study is that interviews were conducted using online conferencing platforms (e.g., Zoom, WebEx) instead of in person. The ideal format for cognitive interviews is in person, so the interviewer can observe body language [58]. However, we were unable to conduct the interviews in person because of the COVID-19 pandemic, and the virtual format helped us reach more participants at a critical time when FQHC clinics in both South Carolina and Texas were balancing many responsibilities. A second limitation was that we were able to show each interview participant only a subset of items; we divided the readiness survey into subsets so that the virtual interviews would last no longer than one hour. A third limitation was that three participants identified as non-native English speakers, which may have influenced how they interpreted and responded to the items. A fourth limitation was that each item on the readiness survey was reviewed by only two to three participants: because we wanted feedback on a large number of items, we divided the item sets among participants to keep the interviews manageable, and recruitment was a challenge because COVID-19 was overwhelming health centers at the time of the interviews.
There is a need for a comprehensive measure of readiness. Overall, the goal of this study was to improve the readiness survey based on the R = MC² framework (a measurement tool for readiness); readiness is a critical precondition for successful implementation. This paper describes the use of cognitive interviews as part of a larger study [43] to validate the readiness survey. In the next (developmental) phase of our study, we will distribute the readiness survey to a large sample of FQHC clinics across the US for continued testing and development. The cognitive interview data will then be integrated with the quantitative data collected from FQHC clinics that completed the readiness survey to develop a final version of the survey. From there, the readiness survey will be distributed to a larger, national set of FQHC clinics for survey validation (validation phase). This mixed methods approach allows for comprehensive development and validation of a measurement tool.
Conclusion
Key recommendations included removing items interpreted as asking about the same concept and items that were difficult to understand. Additionally, participants recommended keeping terms consistent throughout the survey and changing pronouns (e.g., people, we) to be more specific (e.g., leadership). Moreover, participants recommended specifying ambiguous terms (e.g., define what “better” means).
By improving the readiness survey, the goal is to develop a theoretically informed, pragmatic, reliable, and valid measure of organizational readiness that can be used across settings and topic areas, by researchers and practitioners alike, to increase and enhance implementation of cancer control interventions. The finalized readiness survey will be used to support and improve implementation of EBIs for cancer prevention and thus reduce the cancer burden and cancer-related health disparities.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.