The first phase of the project, which began in July 2014, is focused on the adaptation of the HFTAT. The second will study the effectiveness of the HFTAT as an implementation strategy among four housing organizations. We estimate it will take 16 months to completely modify the HFTAT and an additional 20 months to test it. Indiana University's Institutional Review Board has approved all research-related procedures described below.
Adaptation of the HFTAT
The adapted HFTAT will encompass two main types of activities. (1) Technical assistance will be provided at the organizational level through initial consultation activities and through monthly check-ins with administrative staff. (2) Training activities will happen at the staff level and will consist of didactic and interactive learning activities, assessment activities with formative and summative feedback, and engagement in an active online community of practice. The adapted HFTAT will be delivered over a 12-month period; the technical assistance portion will last the entire year, while training activities will last a maximum of 6 months.
Adaptation of technical assistance activities
The technical assistance component will utilize the following: (1) the HFM Fidelity Index (developed previously by members of the project team) to facilitate initial discussions with organizations, tailor an implementation plan, and track progress; (2) an expanded implementation package including the previously mentioned HFM Fidelity Index, templates for HFM policies and protocols, informational materials for consumers and stakeholders, and an implementation manual; and (3) video conferencing and telecommunications technologies to facilitate monthly technical assistance meetings with implementation leaders, thus making the HFTAT more versatile as a training option. Fidelity instruments and implementation packages have both been used successfully to facilitate implementation in previous work [32].
Adaptation of the training
The goal of training activities is to support implementation as meaningful learning and commitment to change. Meaningful learning is actively constructed and intentional [33]. The challenge lies in taking quantities of presentational material and related face-to-face activities and developing interactive e-learning activities that provide context, challenge, activity, and feedback while following appropriate principles of multimedia learning to support higher levels of cognition [34]. The diversity in the HFTAT curriculum offers opportunities to provide a mix of self-paced and asynchronously facilitated learning activities to sustain learner interest and motivation.
The adapted training will be delivered in four modules: (1) overall introduction to the HFM, (2) running an HFM program (for administration and implementation leaders only), (3) housing case management, and (4) strategies (largely clinical) for working with consumers. Learners will be instructed to complete the training largely at their own pace, though there will be specified dates by which they are expected to complete individual modules. Building on the affordances of available technology and utilizing Anderson's model of e-learning as a guide, we are developing a combination of didactic materials and interactive learning activities (knowledge- and assessment-centered) that recognize both the individual and structural opportunities and challenges to implementation (learner-centered) and integrate a supportive, online, nation-wide community of practitioners (community-centered) to support participants in making and keeping their commitment to change.
Learner engagement and the provision of an active learning experience will be facilitated primarily through two strategies. (1) We are integrating case-based narratives that will allow learners to explore the immediate utility of HFM concepts, tools, and practices [35]. Narratives will not be presented as a whole, but will be cut into smaller segments and threaded throughout the training where they best serve to reinforce specific concepts. (2) We will also provide opportunities for learner engagement in an active community of practice by providing a virtual space open to all HFTAT participants, as well as individuals not participating in the training but working in the field of housing (not just those working in HFM programs). Similar approaches have been demonstrated to have a positive impact on the implementation and sustainability of EBPs [36, 37]. The community of practice will provide a virtual space for social and collaborative learning, making information presented in the HFTAT more meaningful by embedding it within the larger HFM conversation [28]. It will also serve as a resource for implementation leaders to gain technical assistance beyond the end of the HFTAT, thus increasing the potential sustainability of the implementation strategy.
We are also utilizing the following additional strategies to facilitate a meaningful and engaging learning experience: cognitively effective design that breaks longer topics into smaller, learner-controlled segments including a mix of audio, images, text, and video [38]; branched learning scenarios that allow learners to influence content based on the options available to them; opportunities for learners to put skills and knowledge gained into practice through authentic, performance-focused challenges, activities, and assessments, which will receive individualized feedback from training staff; and opportunities for reflection on prior work activities within a treatment first model and the assumptions on which they were based to support conceptual change [30].
Alpha testing of training modules
We will conduct an alpha test of each training module and make necessary refinements before testing the full HFTAT as an organizational implementation strategy. We are currently recruiting front-line staff working in housing programs to participate in the alpha test using a snowball sampling approach. We are recruiting participants from programs that self-designate as Housing First and treatment first so our data represent experiences of both HFM experts and neophytes. We plan to recruit a total of 10 participants with a range of HFM work experiences (from no experience to substantial experience). We will recruit five participants from the City of Chicago and five from Central Indiana (the two areas where the full implementation strategy will be tested; see below). To best understand the effectiveness of the training under "real world" conditions, we will ask participants to complete the modules in a setting comfortable to them using their own equipment.
Participants will keep a detailed log as they independently work through each module and engage with the community of practice, an approach often used to understand the user experience of new technologies [39]. We will instruct them to use a form provided with specific spaces to record: (1) questions they have on the content, presentation, and assessment; (2) any technical issues they experience; and (3) general thoughts and affective responses to the material and activities. The open-ended question regarding general thoughts and responses will allow us to capture concerns and ideas unforeseen in our instrumentation that emerge from the user experience. We will also conduct one focus group with users in each city (two focus groups total), which will allow participants to respond to all comments and feedback received, thus eliciting a variety of views on the material [40]. Interview guides for focus groups will be structured similarly to the logs, querying content, presentation, assessment, and experiences of technology. We will develop additional queries and probes from early analyses of log data (see "Analytic strategy" section). Participants will receive $100 for each module they complete and $30 for the focus group ($430 total per participant).
Testing of the adapted HFTAT
We will utilize a convergent parallel, mixed methods design to test the adapted HFTAT among a sample of four housing programs [43]. This design involves the concurrent but separate collection of both qualitative and quantitative data. The use of mixed methods is common practice in implementation research due to the complexity of implementation, the multiple levels of an organization involved, and the importance of understanding how process affects implementation outcomes.
Organizations will be purposively selected to be sufficiently distinct from one another so that findings can be attributed to the implementation strategy rather than to shared structural- or organizational-level factors. We are currently selecting two organizations from Chicago and two from Central Indiana, two areas that differ markedly in their receptiveness to the HFM. Chicago is very receptive to the HFM, as the city has had a Plan to End Homelessness based on Housing First principles in effect since 2003. Indianapolis faces several barriers to HFM implementation, most importantly a reliance on Medicaid funds requiring service engagement that is incompatible with the principle of consumer choice emphasized by the HFM. We are also selecting programs with different levels of familiarity with the HFM (i.e., HFM neophytes and programs seeking to improve their HFM practice). Due to the small size of some housing programs, we are only including those with 10 or more employees who have direct client interaction as part of their job duties (e.g., case managers, program assistants, admissions staff). We are also requesting that members of the administrative team participate in the technical assistance portion of the HFTAT and associated data collection activities.
We will collect data reflecting staff characteristics such as demographics, job title, type of degree, primary discipline, length of time providing housing services, and length of time in current position. Agency characteristics we will collect include location, clients served, number of staff, length of time in existence, type of housing offered (single- or multiple-site), and primary source of funding.
The rest of our measurement selection is guided by the conceptual framework depicted in Figure 1. As such, measurement will focus on three main areas: causal factors, training, and implementation.
We will utilize the following three measures to assess causal factors hypothesized to affect implementation at the multiple levels within which the intervention is embedded. (1) We are currently developing an instrument to measure the structural-level factors affecting implementation, as we have been unable to find any preexisting measures suitable for this task. The development of this instrument will be described in a separate paper. (2) Organizational-level and provider/staff-level factors will be measured using the context assessment portion of the organizational readiness to change assessment (ORCA). The ORCA is designed to assess organizational-level variables believed to affect implementation and has demonstrated inter-rater reliability and convergent/discriminant validity (C. Helfrich, personal communication, August 30, 2013) [44]. The context assessment portion of the instrument comprises 23 five-point Likert-type items. (3) We will measure patient/consumer-level factors using items we have developed for this purpose, which we will insert at the end of the ORCA. Four questions are preceded by a stem: "In the past year, how frequently have you observed clients in your organization: (a) express belief that current practice patterns can be improved; (b) encourage and support change in practice patterns to improve their care; (c) demonstrate willingness to participate in new programs or services; (d) cooperate with staff and management when there are changes in services, practices, or procedures that affect them?" Respondents will rate these items using the same five-point scale as the ORCA questions.
We will also collect data to assess outcomes directly related to the training and technical assistance provided through the HFTAT. The learning management system hosting the training will track (1) frequency of visits to the training and (2) time spent in e-learning at the staff level, allowing us to understand learners' access patterns and the time they spend engaged in learning activities. (3) We will measure cost at the staff level by multiplying the number of hours providers engaged in training by staff hourly pay and fringe rates. (4) A summative test delivered to staff at the end of the HFTAT will measure HFM knowledge (exact questions for this test will not be determined until the HFTAT is fully adapted). (5) We will assess satisfaction with training using 12 items from the Training Satisfaction Rating Scale [45]. We have selected the 12 items demonstrated to load highest on three training dimensions: objectives and content, method and training context, and usefulness and overall rating. Each item is rated on a five-point Likert-type scale (1 = totally disagree through 5 = totally agree). The questions are general enough to assess a wide array of trainings. (6) We will assess overall satisfaction with the HFTAT using data collected through semi-structured phone interviews with implementation leaders. These interviews will cover questions such as the following: How helpful did you find the initial implementation planning? How helpful were the monthly technical assistance meetings? How do you suggest the technical assistance portion of the HFTAT could be improved? And how helpful was the training at preparing your employees to work in an HFM program?
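The staff-level cost calculation in item (3) amounts to simple arithmetic, sketched below; the function name and the hours, pay, and fringe values are illustrative, not project figures:

```python
def training_cost(hours_in_training, hourly_pay, fringe_rate):
    """Staff-level training cost: hours engaged in e-learning multiplied
    by hourly pay, inflated by the fringe benefit rate."""
    return hours_in_training * hourly_pay * (1 + fringe_rate)

# Illustrative: 20 hours of training at $25/hour with a 30% fringe rate
cost = training_cost(20, 25.0, 0.30)  # 650.0
```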
We will also assess four implementation outcomes. (1) We will measure fidelity using the HFM Fidelity Index. The index comprises 29 elements, each scored for the degree to which it has been implemented along a scale containing five descriptive anchors (1 = weakest level of implementation through 5 = strongest level of implementation), and has demonstrated construct and discriminant validity [21]. A series of interview questions, administered through a structured phone interview, is used to collect the information necessary for identifying the correct anchor. (2) We will utilize the stages of implementation completion (SIC) instrument to measure implementation process and organizational change [46]. The SIC is an assessment tool comprising 31 items that measures and monitors completion of key activities related to implementation and the length of time to complete them. While still in development, there is evidence supporting the SIC's reliability and ability to predict implementation success [47]. (We are currently working with the developers of the SIC to adapt it for use with the HFM.) Heartland staff will update the SIC with information gained in monthly technical assistance meetings. (3) The Evidence-Based Practice Attitude Scale measures acceptability of the intervention [48]. This 15-item scale asks respondents to rate their agreement with each item along a four-point Likert-type scale in order to understand their attitudes toward the adoption of a new intervention. (4) Finally, we will conduct individual interviews to assess a number of other implementation outcomes, including feasibility (i.e., usefulness of an EBP in a particular setting), appropriateness (i.e., perceived fit with the organization), adoption (i.e., intention to employ an EBP), and penetration (i.e., the degree to which staff have implemented HFM practices in their daily work). The interviews will cover questions such as the following: How is/has the move to the HFM affecting/affected your work? How compatible is the HFM with your organization? How interested are you in learning and applying what you will learn in the HFTAT training? And how are you integrating what you learned in the HFTAT into your work?
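For illustration only, per-element anchor ratings of the kind the fidelity index collects could be collapsed into a summary score as sketched below; averaging across elements is an assumed aggregation rule, not the published HFM Fidelity Index scoring procedure:

```python
def summarize_index(element_scores, n_elements=29, lo=1, hi=5):
    """Collapse per-element anchor ratings (1 = weakest through
    5 = strongest implementation) into one summary score.
    Averaging is an illustrative choice, not the index's actual rule."""
    if len(element_scores) != n_elements:
        raise ValueError(f"expected {n_elements} element scores")
    if any(not lo <= s <= hi for s in element_scores):
        raise ValueError(f"scores must fall between {lo} and {hi}")
    return sum(element_scores) / n_elements
```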
Table 1 displays the time point(s) at which we plan to collect data related to each measure. Measures of causal factors will be collected at baseline. Data related to training outcomes will be collected from staff and implementation leaders after the training is completed, though HFM knowledge will also be measured at 12 months. Overall satisfaction with the HFTAT will be measured at 12 months. Regarding implementation outcomes, all measures with the exception of the HFM Fidelity Index and the SIC will be collected at baseline, immediately following training, and at 12 months. We will measure fidelity at baseline and 12 months. Due to the nature of the instrument, we will collect SIC data on a monthly basis.
Table 1
Summary of data collection procedures for HFTAT testing
Measure(s) | Mode | Respondents | Measure type | Time point(s)
Structural-level; org- and staff-level; consumer-level | Electronic | Staff | Causal factor | Baseline
Visit frequency; completion time; cost; training satisfaction | Electronic | Staff | Training outcome | After training
HFM knowledge | Electronic | Staff | Training outcome | After training; 12 months
Overall satisfaction with HFTAT | Phone interview | Implementation leaders | Training outcome | 12 months
Fidelity | Phone interview | Implementation leaders | Implementation outcome | Baseline; 12 months
Implementation process (SIC) | Collected ongoing through technical assistance activities | Implementation leaders | Implementation outcome | N/A
Acceptability | Electronic | Staff | Implementation outcome | Baseline; after training; 12 months
Feasibility; appropriateness; adoption; penetration | Individual interview | Staff | Implementation outcome | Baseline; after training; 12 months
Because staff will most likely be required by their administration to complete the training as part of their organization's commitment to HFM implementation, it is important to separate training and research activities. Participation in data collection related to frequency of visits to the training, e-learning activity completion time, and HFM knowledge will be required as part of participation in training activities. Participation in all other data collection activities will be voluntary. At each data collection point, staff participating in the collection of electronic data will be entered into a drawing within their organization to win one of two $50 gift certificates to a retailer or restaurant of their choosing. Names will be entered once for each instrument completed, so completing multiple instruments will increase the chances of obtaining a gift certificate. We will provide staff participants in focus groups with a $10 Starbucks gift card for their time (focus groups will occur during work hours, so participants will also be compensated by their agency). We will not invite administrators, managers, and implementation leaders to participate in focus groups, to ensure staff feel comfortable sharing information.
Analytic strategy, quantitative
The primary quantitative outcome of interest at the organizational level is fidelity. We will compare fidelity scores at baseline and 12 months to gauge improvement and will calculate the mean and standard deviation of the improvement across organizations. Implementation process and organizational change, measured by the SIC at the organizational level, are collected through the monthly technical assistance activities. For each organization, we will estimate the average monthly improvement in SIC scores using a linear regression model and summarize the improvement across organizations using the mean and standard deviation.
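The organizational-level computations described above can be sketched as follows; the scores are illustrative values, not study data, and the function names are our own:

```python
from statistics import mean, stdev

def fidelity_improvement(baseline, month12):
    """Per-organization change in HFM Fidelity Index score
    between baseline and the 12-month follow-up."""
    return [after - before for before, after in zip(baseline, month12)]

def sic_slope(monthly_scores):
    """Average monthly improvement in SIC completion, estimated as the
    ordinary least-squares slope of score on month (simple linear
    regression with a single predictor)."""
    months = range(len(monthly_scores))
    x_bar = mean(months)
    y_bar = mean(monthly_scores)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(months, monthly_scores))
    den = sum((x - x_bar) ** 2 for x in months)
    return num / den

# Illustrative fidelity scores for four organizations (not study data)
deltas = fidelity_improvement([2.1, 2.8, 3.0, 2.5], [3.4, 3.6, 3.9, 3.2])
summary = (mean(deltas), stdev(deltas))
```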
Acceptability of the intervention is measured at the staff level. The change in acceptability after training and at 12 months compared to the baseline will be calculated for each staff member and summarized using mean and standard deviation for each organization and across organizations.
At each time point, proportions will be reported for categorical variables, and means and standard deviations will be reported for continuous variables. Improvement on these outcomes will then be reported by comparing the after-training and 12-month follow-up measures to the baseline measures.
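The staff-level change-score summaries (per organization and across organizations) can be sketched as below; the records and organization names are illustrative, not study data:

```python
from collections import defaultdict
from statistics import mean, stdev

def change_by_org(records):
    """Summarize per-staff change scores (follow-up minus baseline)
    within each organization. records: (org_id, baseline, followup)."""
    groups = defaultdict(list)
    for org, before, after in records:
        groups[org].append(after - before)
    return {org: (mean(d), stdev(d) if len(d) > 1 else 0.0)
            for org, d in groups.items()}

# Illustrative acceptability scores at baseline and follow-up
example = change_by_org([
    ("Org A", 2.0, 3.0), ("Org A", 2.0, 4.0),
    ("Org B", 3.0, 3.0), ("Org B", 2.0, 3.0),
])
```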