To understand the extent to which PH practitioners are ‘evidence-literate’, it is helpful to know how research participants understand the term evidence and what evidence means to them. Therefore, we begin this section with a discussion of participant understandings and definitions of evidence.
How is the term evidence understood or defined?
“I don’t know if I have a specific definition for ‘evidence’, but I think there’s a broad range of different types of evidence that have different strengths and weaknesses.”
In discussing what they believe counts as evidence, participants identified a wide range of evidence types and sources including: research, systematic reviews, literature reviews, surveys, surveillance data, population health data, coroner’s data, program evaluation, expert opinion, experience, epidemiological data, community-based data, and anecdotal evidence from colleagues and community members.
Overall, participants believed that evidence was important to inform PH practice and most reported using many types of evidence in diverse ways. Largely, they held very broad understandings of the term evidence. Although participants recognized that scientific evidence from research is important to consider, many also valued community-based knowledge very highly. Most saw the need to bring both types of evidence together recognizing that the scientific evidence needed to be contextualized for and relevant to the community.
“And I think we have to [define evidence broadly]. Because you can’t take what works, statistically, with this group and transform it to this group of citizens living in a low income housing unit, for example. Right? Like, you have to have some kind of needs assessment evidence from the community and then try to find some sort of hybrid as to what the best practice says” (Study Participant).
Some participants, particularly medical health officers, tended to value epidemiological data and research-derived evidence more highly. On the other hand, those working in food security preferred to draw on community-based experiential evidence over the scientific evidence, seeing it as more valid.
“But evidence for me is community-based evidence that I see every day. So, the food security work that I do is so grassroots that the evidence just comes from the truth of what I’m being told by the people in my community” (Study Participant).
Although many participants view evidence as important in guiding and informing their practice, they also believe that evidence is not the only thing to take into consideration. As one person described, it is not just about evidence, but “it’s the process and sometimes it can come back and bite us,” suggesting that lived experience and community development processes are important considerations in public health decision-making. Although evidence, in whatever form, is important to consider as a resource, it is not necessarily the main driver of the work.
Step 1 – Defining the question
“When we start seeing a need come out, often times it’s, you know, we’re approached by clients or a group of clients that have come together and said, “You know, we’re seeing a problem with x and such and such. Can you please help us out with it?””
The first step in the EIPH framework is defining the question. This step answers the questions: “Who is my target group? What is the issue we are dealing with? And what specifically are we trying to change?” [18]. Participants did not talk specifically about target groups, in part because this was assumed or evident in the work they were doing. They did, however, discuss the issue they were dealing with in their comments about the reasons they needed evidence. In the quotation above, one participant noted that the issue was identified by the community in its request for help in addressing it, thus creating a community-driven need for evidence. Other participants not only wanted evidence about what works to produce desired outcomes, but also wanted local epidemiological data to define the scope and extent of a PH problem that could be used to: 1) achieve specific aims or goals; 2) inform PH processes; 3) help set priorities; 4) justify a new program or policy; 5) assess changes over time as a result of an intervention; 6) monitor or track population health status or health trends; or 7) justify expenditures.
Each participant spoke about the specific health issues on which they were working, such as food safety, food security, various injuries or the causes of injuries such as falls or traffic crashes. However, they talked less about needing evidence on a specific health issue and more about needing evidence to inform processes or practices in the work they were doing. Many participants wanted evidence to demonstrate that an issue was indeed a problem that needed to be addressed more than they wanted evidence about the effectiveness of a program they might be considering. Similarly, others were interested in evidence to justify programs or policies already implemented rather than evidence of effective programs, as reflected in the following quotation:
“So there isn’t a lot of traction in going down the road with this, so anything I can get that has some weight to it, including some research backing, whether it is specific to brain injury or not, but something that says this is an area of practice that matters to the health regions gives me justification for staying involved” (Study Participant).
Several participants also expressed a need for data or evidence to document the work they were doing, what that meant, and whether it was effective. This could include locally relevant process data or evaluation evidence on the effectiveness of their work. Demonstrating that their work was having an impact was important because, with all the organizational changes and budget cuts going on in the health authorities, participants wanted to demonstrate the value of their work in hopes that additional resources could be obtained. As one participant stated: “We only have two dietitians for the whole island so it would be great to be able to show that what we’re doing is effective and we need more of it.”
Each participant identified specific changes they wanted to document through evidence such as: improving population health, reducing the burden of injury, improving indigenous health, developing safety standards, and shifting the health authority mindset from treatment to prevention. For these things, they believe they need locally generated epidemiological and evaluation data as well as research evidence of effectiveness.
Step 2 – Searching for evidence
“It’s so much easier to Google these things now: has anywhere else in the world done this?”
This second step in the EIPH framework asks the question “Where should I look to find the best available research evidence to address the issue?” [18]. The question itself implies that research evidence is the most important information for which practitioners should be searching. As discussed above, participants talked about searching for different kinds of data to guide their work, not just research evidence.
With respect to helpful sources for locating evidence, participants identified provincial and national sources, professional groups, disease-based foundations, Health Authorities, or local groups and networks as important. They were resourceful and creative in finding relevant and available sources of data. Some relied less on academic journals and more on grey literature, in part because grey literature is more accessible and understandable to them. Again, we note that participants equate data with evidence, even though it may not come from research.
Participants identified three sources of data for obtaining the best available evidence: formal research, big data, and practice. In formal research, participants said it was important to have access to a university library to track down references and not all of them had that access. They might search for systematic reviews from the Cochrane Database, or other research sites. Randomized controlled trials were important to some, but seen as not always feasible in PH. Expert organizations (e.g., WHO) were important sources because they are trusted groups that provide good evidence summaries. When it comes to searching for research, however, practitioners often relied on others to do this work:
“Personally, I usually rely on others. I look to my colleagues in Population Health and Wellness. They are usually the ones who have pulled together the lit searches. They have some program experts, so I go to them assuming that I am going to get the latest and greatest information” (Study Participant).
Most participants relied on electronic sources such as webcasts, list-serves, and e-mail lists to get updates on research. Distribution lists delivering monthly updates make it easy to scan for a match to something they might be working on. Connecting with peers and colleagues to ask for information is a common practice; finding out what others are doing is easier now with Google.
For some participants, searching for evidence meant looking to big data sets like the Canadian Community Health Survey or to medical service plan and hospital data. But, not everyone has the skills or confidence to access these data sources; having others available who can do this is important. At the same time that access to data was important, one participant cautioned about hiding behind the need for data before moving to action:
“I think we often hide behind getting the data. We always tell ourselves, we need more data, we need more data. It needs to be more applicable. It needs to be more localized. Well we can move forward still. You don’t have to be stymied by waiting for data. I think sometimes we’re always looking for evidence” (Study Participant).
This participant recognizes that there is often a tension between the need for data and the need for action. If local data are not available, waiting for them may delay action unnecessarily; hence big data might be better than no data. For a local perspective, however, practice can be a valuable source of information.
For many participants, searching for evidence largely occurred within practice. This involved looking in-house to clinical records, to the strategic information department, vital statistics, or various medical health officers’ reports that identify local priorities and information. They also commented on evidence being embedded in policies, guidelines and regulations found within the context of their work. As discussed previously, many participants displayed a strong preference for obtaining evidence from their communities. Having hands-on experience and getting community input was important evidence for them. They found evidence through windshield surveys or needs assessments. Thus, the community voice is an important part of searching for evidence, and seen by many as the ‘truth’ or the most valid form of evidence.
In some cases, locating evidence was informal through sharing at conferences, or talking with local experts, colleagues, and networks, or through partnerships with Masters student projects. In a few instances, evidence-sharing involved formal relationships with researchers. Despite the variety of sources accessed by participants, some identified particular challenges when searching for evidence. For example, access to data might be restricted by organizational agreements or privacy regulations. Practitioners often see too much formalization related to data access.
Step 3 – Appraising evidence
“Well the qualifier then is, what is the quality of that evidence?”
The third step in the EIPH framework is critical appraisal. Here, the quality of a study’s methods is assessed to ascertain whether its findings are “trustworthy, meaningful and relevant” for uptake [18]. The question asked in this step is: “Were the methods used in this study good enough that I can be confident in the findings?” Participants clearly recognized the importance of critical appraisal skills for their work:
“But I certainly think that we need to be cognizant of the quality of evidence that we’ve got and what are the shortcomings and strengths of the type of evidence that we have. And where evidence is weak, i.e., it’s not coming from a gold standard type of methodology” (Study Participant).
Although some articulated a preference for a traditional hierarchy of research designs, this was tempered by the realization that such studies were not always feasible or ethical to conduct in PH and that other types of research could be useful.
“So I guess while, you know, random controlled studies are seen as the best evidence, the ultimate, I’m not sure that they’re the only sources of evidence. Certainly, we do have to look at those other sources as well” (Study Participant).
Some interviewees spoke about “relenting” to the idea of accepting “promising practices” as evidence in PH when higher quality evidence was not available. At the same time, they recognized the need to be critical about the evidence: “So hopefully being a little bit more critical about what we take in and look at and use. Not just taking it and going ‘well, this sounds good.’ Plop. Let’s use it.” On the other hand, as discussed above in the defining the question and searching for evidence sections, several participants preferred evidence that came from the community, and their confidence in the evidence relates to how well the evidence is connected to the local context.
Reports of local projects or programs may not provide strong evidence of success but may be “good enough” to adapt in the local context and then evaluate, thus contributing to the evidence base. But what constitutes “good enough”? Practitioners have little guidance on making these decisions, and the EIPH appraisal step focuses only on appraising research evidence. There may be a risk in taking up an unstudied intervention, but it may be all that is available. Coupled with evaluation, however, participants believe that it can provide useful information.
Although not related to the specific appraisal question in the EIPH framework, our participants also spelled out the characteristics of evidence that are important for PH practitioners. They appraise the usefulness of evidence on the extent to which it reflects these characteristics. In general, they require evidence (or data) that is timely, relevant to their context and purpose, current and regularly updated, synthesized and translated into manageable bite-sized pieces, trustworthy, and of different types at different levels. Examples are presented in Table 4. Overall, practitioners recognize appraisal as an important step in using evidence, but the type of evidence most often used in PH practice does not easily lend itself to methods for appraising research evidence.
Table 4
Characteristics of evidence for PH
Characteristic | Illustrative quotation
Timely | “We want data to be timely. We want to know what’s going on now, not data that’s three years old and that’s always a hurdle for people to get over and get through and work through.”
Relevant | “Canadian Dietitian’s Association has lots of diet evidence and it’s rated, but it’s mostly clinical right now. It’s starting to look at public health nutrition evidence and community nutrition practice [which is more relevant].”
Regularly updated | “…to pull together the data to you know, update it on an ongoing basis to put it in front of the decision-makers.”
Synthesized | “So being the age that I am and being very comfortable with Internet research, I often go to the Cochrane database first of all to have a look to see if there’s any evidence out there that people have agreed on.”
At different levels | “Well the level of evidence – and we can go back to the old Canadian primary prevention guidelines. What were those called? The levels of evidence were a, b, c, d, and e.”
Manageable bite-sized pieces | “Unfortunately, it can’t be an evidence paper that’s ten pages long. It has to be something that at the operational level you can scan it and get the evidence bites out of it and then incorporate it into your plans.”
Trustworthy | “If the CDC publishes something or the Ontario Tobacco Research Unit – we feel that it meets a certain standard that we can expect…. and because it’s from a stakeholder we trust, we read that evidence with interest.”
Step 4 – Synthesizing and interpreting evidence
“And you know, one needs to wonder… to what extent then after all those layers of translation, does the program that’s put on the ground or the quality improvement agenda resemble the type of evidence in the literature, and to what extent would that even be suited to the context.”
In the fourth step of the EIPH framework, users are asked to interpret the evidence they have gathered in synthesized form to produce ‘actionable messages’. They are advised to produce recommendations from the “highest quality and most synthesized research evidence available” [18]. The question answered in this step of the EIPH is “What does the research evidence tell me about the issue?” [18]. The synthesis step in the EIPH does not focus on the synthesis process itself but rather on interpreting the evidence derived from already synthesized research. There is no discussion of how to synthesize the other types of evidence in the EIPH Model (i.e., community health issues and local context, community and political preferences and actions, PH resources, and PH expertise).
As intended in the EIPH framework, some participants report drawing on findings from systematic reviews and RCTs and interpreting this evidence to make decisions about practice:
“So if we’re trying to recommend, for example, the use of child restraints for motor vehicle safety, I mean we wouldn’t put out that recommendation as is. We would justify it and back it up with a lot of studies that have, you know, shown the use of this, that have been evaluated. We look at systematic reviews, we look at if at all there have been any recognized control trials about data showing the actual use of it” (Study Participant).
In addition to systematic reviews, some participants draw heavily on synthesis work reflected in policy and position papers produced by trusted agencies and organizations. Other participants discussed using clinical practice guidelines as exemplifying high quality evidence and embodying the synthesis process. This is very much in keeping with the evidence hierarchy discussed on the EIPH website.
“So these are the ‘treating tobacco use and dependence’ clinical practice guidelines, the 2008 update. So we use these as kind of like our bible for tobacco control. And that is probably what I would look at as the best evidence right now” (Study Participant).
What is very telling, however, is that many participants refer to a synthesis and interpretive process that involves pulling together evidence from multiple sources such as community needs, local context, and practice experience, in addition to research. One participant stated:
“I think a lot of where we find our need is from the community partnerships that we have built with vulnerable populations and the different agencies that work with them. And then we kind of take that information and combine it with evidence that we have, and try to move forward with some kind of an initiative. So there is a real grassroots component to our evidence-finding, as well as taking academic and clinical knowledge and all of that, to make it work.”
Here, it is not just high quality research evidence that is being synthesized to make decisions about action but other types of knowledge are integrated with the research evidence. Consistent with the stated purpose of the EIPH that different types of evidence need to be considered in decision making, many participants are putting various types of information together, interpreting that data, and drawing implications for practice. The EIPH synthesis step, however, focuses primarily on using synthesized research evidence.
The lack of skills and capacity in the organization to do the synthesis work was a common theme in the data. In some cases, however, practitioners could rely on someone else, either a trusted organization or an in-house staff member who had the skills or for whom it was part of their job, as it was for the participant who made this comment: “It’s part of my job to keep up with the evidence that is there and then to incorporate that into all of the work that I do.” There is very little in the research literature, however, about the process of synthesizing other forms of evidence with research evidence. Participants expressed the need for support in doing this work.
Step 5 – Adapting evidence to the local context
“It’s always a matter of adapting. It’s never straightforward – something that we can take from somewhere else and plunk it down here.”
The fifth step of the EIPH framework is adapting the evidence to the local context. It addresses the question “Can I use this research with my client, community, or population?” [18]. Adapting the evidence is an important step when using it in practice, as stated by this participant: “There is lots of evidence but it doesn’t always really fit with what you’re doing”.
Several participants who talked about adapting the evidence were well grounded in their communities and expressed a strong sense of the importance of working with community members, not only in adapting evidence to apply to the community, but also in recognizing that initiatives coming from within the community should be made available as evidence for others. As one participant noted:
“I think we always adapt the guidance or evidence to our own community. And, it’s kind of always a partnership with other community groups or organizations that we’re working with and how does implementation make sense for our own community? Also, a lot of great initiatives have come from our own community that have gone or may be able to go and move elsewhere as well.”
The idea that “knowledge is in the communities” is important to consider when there is a strong focus on evidence-informed practice because participants believe that the local context influences program implementation and success. That is, local knowledge and community engagement can help to modify evidence-informed programs to make them fit the context. At the same time, as the participant quoted above stated “It’s never straightforward”.
Step 6 – Implementing evidence
“I think, for the most part, if there is evidence, I think that would inform most of the planning, implementation, and evaluation of my work. Otherwise, you’re just kind of flying by the seat of your pants.”
Implementing, or applying, evidence is the sixth step of the EIPH framework. It addresses the question “How will I use the research evidence in my practice?” [18]. At this stage, practitioners act on the evidence to make a practice change. Many participants reported relying on evidence to broadly inform their strategic operations, front-line planning, and the delivery of PH programs. They also used evidence to make funding and resource decisions, and to inform the adoption of indicators/benchmarks for evaluating their work.
“As we were doing program planning and implementation, we were basing it on sort of the best knowledge that we could get at that point in time. So, we initiated a number of new programs or new directives and we did try to collect the evidence before we did that to help us decide. So, for example, one was looking at postpartum depression. We then created a program plan for implementation” (Study Participant).
Some participants also talked about using evidence as part of communications: “As our Senior Medical Health Officer said in a conversation with me, like we’ve got to use the data to tell compelling stories.” Some expressed concern, however, that evidence might be cherry picked to reaffirm existing practices rather than to inform necessary policy or program changes:
“Because they would say that they look at the evidence and they probably do, but I think it’s used more in a symbolic use rather than an instrumental use. …. But symbolic use is more like, I’ve already made up my decision. I’m going to go and find some research to support my decision and I think that’s the strongest use of research in the Health Authority when it comes to decision-making” (Study Participant).
Thus, many participants report using evidence to inform program planning, implementation, funding and evaluation decisions and they say they try to use the best available evidence. At the same time, they also indicate that evidence may be used inappropriately in some circumstances to justify what is already being done rather than to make necessary changes.
Step 7 – Evaluating evidence use
“So, we would use that form of evidence or research to check back on is this working kind of thing – benchmarks.”
The seventh and final step in the EIPH framework is to evaluate the use of the evidence to inform PH programs. The questions addressed in this step are: “Did we do what we planned to do?” and “Did we achieve what we expected?” [18]. The first is an implementation question; the second is an outcome question. Both are important and can be addressed through evaluation, which can contribute to the evidence base. Participants consistently agreed on the value of evaluation, specifying:
“I’ve thought of indicators for success, and that’s where evaluation comes in, of the programs that we are doing. And for the education piece, we did a formal evaluation of that through the research centre here, and we had time to do that”.
Additionally, participants recognized the challenges inherent in the process. Evaluation takes time, preparation, and knowledge. As one person noted:
“I think that sometimes I feel that I don’t necessarily have the time to do, sort of like a logic model: ‘this is what I’m going to do, these are the indicators, and that …’. You are just going so quickly, and trying to keep up with everything. We have minimal staff time, where you have a lot of things going on.”
Participants acknowledge that they may need help doing an evaluation, perhaps from a research centre in the organization. But time for evaluation is always limited. It seems that consistent evaluation, while recognized as valuable, is not factored into the regular workload and is not a high priority in the organization. A lack of funding may contribute to a tension between a recognized need for evaluation evidence to guide practice and the competition for scarce resources.
Another issue that influences whether evaluation is done is the organizational capacity for evaluation. Although some participants are comfortable with the evidence-informed process and see evaluation as a natural part of this, others are not as comfortable. Some have taken courses in evaluation and can provide some of that capacity, but overall, evaluation capacity is limited, and is not well supported in many organizations.
Some participants reported doing evaluation in a more informal and less rigorous way by asking simple questions like: “Have you learned anything from this program?” For many, a more informal process is all that is feasible for them with such limited resources available for evaluation.
“I think it’s hard to think of doing that in a formal process because we are so stretched thin that I don’t have time to come back and report that or make up a document saying this garden was successful because we had five people there digging today. You know, so it’s hard” (Study Participant).
Overall, evaluating the use of evidence or the success of evidence-informed programs is not top of mind for participants, but they did identify the value of evaluating programs, recognizing the need for assistance and organizational capacity. They appear to rely more on informal evaluations to provide feedback for improvements.