Local knowledge
Individuals obtained knowledge from a range of both formal and informal sources, and these sources evolved over the course of the evaluation. Pre-evaluation questionnaires identified resources found within the Memory Clinic Training Manual as the most frequent source of formalized knowledge. Following the evaluation, however, the weekly evaluation e-newsletter was identified as the source most frequently accessed for information. Virtual practice networks and online materials also provided important resources for participants.
Team members were a critical source of knowledge. During post-evaluation interviews, all but one of the members identified the team as the first place they would turn to for information. In addition to the immediate team, individuals from the Memory Clinic training team were also identified as key sources of information. In these situations, communication was primarily between similar disciplines; for example, the nurse at the Memory Clinic would contact the nurse at the training site. Overall, when knowledge was local and research was considered within context, it was seen as relevant and directly applicable. “I want the local, and the reliable [information], and a study from Toronto, from someone with who knows what credentials, isn’t any help to my clients that are here right now” (FpP7:8:46).
The intentional knowledge translation strategies used during the course of the evaluation [10], coupled with evaluation processes and emerging results, provided the team with local practice-based knowledge.
“The evaluation informed my practice for sure, because not just the evidence-based approach and articles that [the evaluator] was sending, but also we have program objectives and knowing what our focus was informed me as well” (PostP1:1:4).
Orientation to practice-based inquiry
While the KT-informed evaluation sought to sensitize individuals to research, the Edmonton Research Orientation Scale (EROS) [26] did not demonstrate any shift in orientation towards research. EROS subscale scores could not be calculated due to missing data on a number of items, most notably within the Research Involvement and Evidence Based Practice subscales. As a result, the average rating per item was calculated (see Table 2). Item averages remained essentially the same across all four subscales, with two subscales slightly lower at follow-up and one slightly higher, suggesting the evaluation had minimal impact on individuals’ orientation towards research. Knowledge related to five aspects of research increased slightly from pre- to post-evaluation. There was no change in time spent reading about or participating in research or research-related activities.
Table 2
Edmonton Research Orientation Scale (EROS): average rating per item, pre- and post-evaluation
Subscale/knowledge item | Pre-evaluation mean | SD | Post-evaluation mean | SD |
Valuing Research | 3.7 | 0.7 | 3.7 | 0.6 |
Research Involvement | 2.4 | 0.9 | 2.2 | 0.6 |
Being on the Leading Edge | 3.8 | 0.8 | 3.9 | 0.5 |
Evidence Based Practice | 3.7 | 0.6 | 3.5 | 0.7 |
Total | 3.4 | 0.9 | 3.3 | 0.9 |
Knowledge of: | | | | |
Understanding Research Design | 2.6 | 1.5 | 2.8 | 1.2 |
Statistics | 2.6 | 1.1 | 2.8 | 1.2 |
Research articles in journals | 3.4 | 1.1 | 3.7 | 0.9 |
Grant application procedures | 1.8 | 0.8 | 1.8 | 0.9 |
Ethical review procedures | 1.8 | 1.2 | 2.0 | 1.1 |
While general research orientation, as measured by the EROS [26], remained unchanged, interview data highlighted the role the evaluation played in making research more accessible.
“I think [the evaluation] humanized the idea of research instead of it being all the research out there that I am not part of, so this brought it into my realm of general practice and day to day practice” (postP4:4:1).
The evaluation served to orient clinicians to practice-based inquiry, bridging the research-practice divide. “It seems so practical, it just seems so natural and I always saw research as more academic” (postP4:18:48). Through the evaluator’s presence and engagement in the evaluation, clinicians not only gained knowledge about the process of conducting an evaluation but, more specifically, about how knowledge created through evaluation translated to practice. “Having [the evaluator] so involved helped us learn more about what an evaluation is, what it looks like, how it works into the day-to-day stuff we are learning, and how it translates” (postP1:42:92).
The KT-informed evaluation sought to model sustainable practice-based inquiry. While the evaluation did not appear to influence individuals’ orientation or attitudes towards research broadly, it supported an orientation to local practice-based inquiry and knowledge.
Shaping clinical practice
Changes to clinical practice were documented over the course of the evaluation and were related to both the evaluation processes and results. Not only did individuals gain knowledge during the evaluation, they were also receptive to making changes to practice as a result of this knowledge. “We have to be open to change what we find does need to be changed…you have to be willing to change” (postP3:6:30). Over the course of the 8-month evaluation, a number of refinements were made to assessment and intervention practices and to Memory Clinic processes. Refer to Table 3 for a description of the changes that were made and how the evaluation process was linked to these changes.
Table 3
Influence on clinical activities and processes
Clinic assessments | Evaluation activities implemented that supported program enhancements |
1. Addition of a gait assessment to the assessment protocol. | Memory Clinic network conference; weekly evaluation update (e-newsletter); Memory Clinic process meeting |
2. Addition of vital statistics to the assessment protocol. | Memory Clinic network conference; Memory Clinic process meeting |
3. Addition of “Since we last saw you”, an assessment of community supports, to the assessment protocol. | Evaluation process meeting; evaluation results (chart audits, patient and caregiver feedback surveys) |
Clinic intervention/follow-up activities | Evaluation activities implemented that supported program enhancements |
1. Enhancement of educational materials (driving, enhanced mail-out package, educational binder). | Evaluation results (patient and caregiver feedback surveys); weekly evaluation update (e-newsletter) |
2. Patient action plans. | Evaluation process meeting; evaluation results (patient and caregiver feedback surveys) |
3. Patient/caregiver workshop: brain gym. | Evaluation process meeting; evaluation results (patient and caregiver feedback surveys) |
4. Patient/caregiver workshop: dementia and diabetes. | Evaluation process meeting; evaluation results (patient and caregiver feedback surveys) |
Memory clinic processes | Evaluation activities implemented that supported program enhancements |
1. Patient services coordinator: new title and timing of introduction. | Evaluation process meeting; evaluation results (patient and caregiver feedback surveys) |
2. Discontinuation of evaluation process meetings. | Evaluation process meeting |
4. Assessment summary forms (under consideration at end of evaluation). | Evaluation process meeting; evaluation results (chart review) |
5. Timing of patient/family education. | Evaluation process meeting; evaluation results (patient and caregiver feedback surveys) |
6. Patient chart scanned into EMR. | Evaluation results (physician feedback survey) |
Three elements of the evaluation were seen to influence clinical practice: knowledge gained from engagement in the evaluation process, empirical evidence provided during the evaluation, and emerging evaluation results. Participant engagement in the evaluation created a culture of learning and laid the foundation for knowledge translation. “When you see [the evaluation] and you’re involved in it, and doing it, it’s more hands on, it’s more practical, it’s apt to be more useful” (postP3:25). Similarly, evaluator engagement in the program supported knowledge translation.
“Having [evaluator] so involved has helped us learn about what an evaluation is and what it looks like, how it works into the day to day [information] we are learning and how it translates…[evaluator] being involved really helped us getting it and understanding it” (postP1:42:92).
Fundamentally, the knowledge translation focus of the evaluation sought to support patient care: “this evaluation…it is being done to produce better quality patient care and I think we all know that now” (postP1:41:90). Weekly e-newsletters offered a source of empirical evidence upon which practitioners grounded their assessment practices.
Just knowing what is happening…the updates and some of the research articles…that guides me and that started the gait [assessment] process, so it helped us if we got stuck in our ways and gave us new ideas (postP2:2:4).
Interventions were also supported by the intentional knowledge translation activities of the evaluation.
What we developed here was a [patient education] binder…some tips about eating and exercise and all of that was pulled from the evidence based practice stuff I pulled from Dr. [X], or things [the evaluator] sent us or things that the team provided that they found to be helpful (postP1:35:66).
The Memory Clinic team was particularly receptive to emerging data derived from patients and caregivers, which in turn had a strong influence on Memory Clinic processes and clinical practices. The patient focus was seen at the clinical level, and many of the clinicians identified patient interactions as the element of clinical practice most influenced by the emerging evaluation data. “I have learned to ask more open ended questions and dig deeper and get better detailed answers” (postP5:6:24). Another clinician reported, “it has changed the way I do the testing and assessments, building that relationship” (postP6:9:24).
The Memory Clinic was part of a larger network of clinics, and receiving ongoing feedback from the emerging evaluation gave individuals both the structure to refine their practice and confidence in their own clinical practice.
We kept refining the process based on the feedback, based on [the evaluation], refined it, refined it, refined it, and the whole collection of information from the patients, and how we recognize that, and how we record that, and access it later. We are more confident (postP4:21:56).
Feedback also supported changes to program delivery. “So, once we got that feedback…that changed how we were thinking about educating people and the timing of the education” (postP1:36:68). Changes were also made to administrative processes based on feedback: “We kept changing our forms and making them better” (postP2:10:28).
Participants reported an increased use of memory-related assessments and interventions over the course of the evaluation. Individuals (n = 5) reported using an average of 3 assessments (range 1 to 5) on the pre-evaluation Memory Disorders Knowledge Questionnaire, compared with an average of 8 assessments (range 4 to 17) after the evaluation. The same trend was observed for interventions. Individuals (n = 5) reported using an average of 2 interventions (range 0 to 3) when working with individuals with memory disorders before the evaluation, and an average of 5 interventions (range 3 to 8) on the post-evaluation questionnaire. Referral to community supports was not identified as an intervention strategy on the pre-evaluation questionnaire, whereas all but one of the respondents on the post-evaluation questionnaire reported accessing community resources for patients and their families/caregivers.