Implementing new knowledge into existing practices has been a challenging task for health care services for many years [1]. To improve implementation success, we need more guidance on how implementation is accomplished and sustained [2,3]. Most implementation efforts (i.e., a clinical unit's activities to implement a new practice in everyday practice) make use of data on implementation outcomes, such as fidelity scores, to monitor the progression and achievement of implementation [4]. Fidelity can be defined as the degree to which an intervention was implemented as described in the original protocol [2]. Fidelity scores thus provide information on the degree to which the new practice is implemented as designed [4]. However, these data cannot tell us why implementation progression is incomplete or how we should address such deficits to improve implementation outcomes. The literature suggests several implementation factors, i.e., interrelated, coexisting moderators of implementation outcomes, to explain implementation success, such as collegial support and readiness [2,5]. In mental health services, active leadership to redesign the workflow, measurement, and feedback have been found to be important implementation factors [6]. However, the evidence is limited, and there is currently a lack of reliable techniques for effectively addressing deficits in these factors once they are identified [5]. Evidence on implementation factors that are both associated with implementation outcomes and shown to be affected by systematic implementation support is needed to foresee challenges and to intervene effectively based on these factors.
Few of the existing, theoretically founded implementation factors have demonstrated reliable associations with implementation outcomes [2,7-9]. There may be several methodological reasons for this apparent lack of association. First, many of the constructs are investigated individually rather than as a set of interrelated factors. The interdependency between factors implies that implementation success is determined by a set of potentially shifting factors [7]. Implementation should therefore be investigated as a complex intervention, i.e., as a set of interacting factors [10,11]. Second, many investigations are based on the assumption that sufficient preparedness prior to implementation implies success. Examples are organizational readiness for change [8,12] and clinicians' attitudes towards evidence-based practice [13]. However, stages-of-change theories by Prochaska, Rogers and others describe iterative stages of orientation, insight and acceptance for making and maintaining a change [14-16]. Care providers constantly assess pros and cons and may reconsider their involvement in the change. A more process-based approach to readiness, rather than defining it as a pre-state, is recommended [17]. Investigations of implementation factors solely before entering the active implementation phases do not consider the continuous exposure to contextual factors under which care providers decide whether to continue implementation [18]. Third, many factors are investigated by external observation. Examples are the Model for Understanding Success in Quality (MUSIQ) [19] and the Stages of Implementation Completion (SIC) [20]. External observation is not in line with theories that define alteration of behaviour as a social construct, i.e., an experience of those making the change. Examples are Argyris' double-loop learning and Rogers' diffusion of innovations; both emphasize the change agents' sense of support, gains and obstacles, rather than objective observation, when explaining change in behaviour [14,21]. Finally, there are methodological limitations to existing research on implementation factors. Most studies employ retrospective, uncontrolled designs or qualitative investigations [2,22]. To reveal the relevance of implementation factors for fidelity and other implementation outcomes, and whether systematic implementation support can impact these factors, we need a randomized controlled design with longitudinal data collection during implementation efforts. Randomized controlled designs have been employed to investigate the impact of implementation strategies on patient outcomes, such as the implementation of crisis resolution teams in mental health care [23]. However, we lack systematic investigations of the associations among implementation strategies, facilitating and hindering contextual factors, and implementation outcomes [24].
We aimed to assess the associations between fidelity and care providers' perceptions of a set of implementation factors, and to investigate whether these perceptions can be affected by systematic implementation support. We wanted to investigate these associations and impacts at various time points during an implementation effort.