Methods
Research design
We conducted two surveys of PHC organizations (2005 and 2010) in the two most populous regions of the province of Québec, Montréal and Montérégie [24]. The 2010 mail questionnaire was largely based on the 2005 one, with particular attention paid to the comparability of questions so that valid comparisons could be made between the two time periods. A section was added to the 2010 questionnaire seeking retrospective information on various aspects of the reform and its impact on the organizations surveyed. The other sections of the questionnaire covered the four organizational domains presented earlier: vision, structure, resources and practices (Additional file 1).
The questionnaire was first developed for the 2005 study. It was tested for face validity to assess the relevance of its questions, and for content validity to determine how exhaustively the questions covered the concepts. Because of the nature of the indicators derived from the questionnaires, a formative approach to index construction was preferred to the classic reflective approach to scale development and validation. In the reflective approach, the items composing a scale are effect indicators: they all reflect the latent construct and are necessarily correlated. In the formative approach, items are causal indicators that contribute to forming the composite index and may be uncorrelated [25-27]. Consequently, factor analysis and Cronbach's alpha coefficients are not appropriate statistical tools for validating a composite index and were not used in our study. An additional reason for not using these tools is that responses to the questions take the form of reports rather than ratings.
A total of 472 PHC organizations participated in the 2005 survey, for a 72% response rate (67% in Montréal and 81% in Montérégie). In 2010, 376 PHC organizations participated, for a 62% response rate (60% in Montréal and 66% in Montérégie). The various types of PHC organizations were well represented (FMG-NCs, FMGs, NCs, LCSCs and other clinics). As pointed out earlier, FMG-NC, FMG and NC practices included main and affiliated settings. A key informant identified in each clinic, usually the doctor responsible for the professional and/or administrative matters of the practice, responded to the mail questionnaire. To draw inferences about all PHC organizations and to assess the impact of the healthcare reform on the whole system, we needed to include all the organizations. We therefore applied an imputation technique to non-responding organizations based on the probability of their responses, given the region, type and size of the group of organizations to which they belonged [28]. Details of these procedures and calculations are available from the authors upon request.
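As an illustration only (the authors' exact calculation is available from them on request), a within-stratum imputation of this general kind, with hypothetical identifiers and strata, could be sketched as:

```python
import random

def impute_nonrespondents(respondents, nonrespondents, stratum_of, rng=None):
    """Hot-deck imputation within strata defined by region, type and size.

    respondents: dict mapping organization id -> its questionnaire responses.
    nonrespondents: iterable of organization ids without responses.
    stratum_of: dict mapping every organization id -> its stratum key.
    Each non-respondent receives the responses of a respondent drawn
    at random from its own stratum.
    """
    rng = rng or random.Random(0)
    donors_by_stratum = {}
    for org, answers in respondents.items():
        donors_by_stratum.setdefault(stratum_of[org], []).append(answers)
    imputed = {}
    for org in nonrespondents:
        imputed[org] = rng.choice(donors_by_stratum[stratum_of[org]])
    return imputed
```

Drawing donors within strata, rather than from the respondent pool at large, preserves the region, type and size composition of the full population of organizations.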
This study was performed according to the principles of the Helsinki Declaration. The Research Ethics Committee of the Agence de la santé et des services sociaux de Montréal approved the study. Participants signed an informed consent form and were told that they could withdraw from the study at any time.
The ICIT score
As pointed out earlier, organizational change was assessed by comparing the index of conformity to an ideal type (ICIT) in 2003 and 2010. The index includes 26 attributes, corresponding to selected responses to the organizational questionnaire and distributed across the four domains described previously (Additional file 2). The choice of the 26 attributes was guided by recent literature and proposals promoting the implementation of the patient-centered medical home concept in PHC [29-32]. An organization that possessed an attribute received a score of 1 or 2, depending on whether the variable was on a two- or three-point scale, and a score of 0 if it did not possess the attribute. The maximum total score was 52. The score was then expressed on a 100-point scale to facilitate interpretation.
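To make the scoring rule concrete, here is a minimal sketch (the function name and list layout are our own; the rule itself, 0/1/2 points per attribute over 26 attributes rescaled out of 52, is as described above):

```python
def icit_score(attribute_points):
    """ICIT score on a 100-point scale.

    attribute_points: 26 integers, each 0 (attribute absent) or
    1 or 2, depending on whether the underlying variable is on a
    two- or three-point scale; the raw maximum is 26 * 2 = 52.
    """
    assert len(attribute_points) == 26
    return 100.0 * sum(attribute_points) / 52.0

# An organization with every attribute at its maximum scores 100;
# one with every attribute at mid-level scores 50.
```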
Measure of organizational change
Since the ICIT scores observed in 2005 underestimated the changes that had already occurred in FMGs accredited prior to 2005, we adjusted the scores of these FMGs. Using a regression model, we estimated changes in ICIT scores as a function of the time elapsed since accreditation for FMGs and NCs accredited after 2005. The rate of change increased steadily during the first 30 months and then leveled off (data not shown). We then applied these figures to the 48 FMGs accredited prior to 2005, which represent 8.9% of perennial PHC organizations, based on the number of months elapsed between their accreditation and the 2005 survey. The assumption is that the rate of change for FMGs accredited prior to 2005 was similar to that of FMGs accredited afterwards. This is a sound assumption, given that the eligibility criteria for FMGs have remained relatively stable since the onset of the reform. For the other PHC organizations, we assumed that they had not changed much in the period prior to 2005, more specifically in the two years preceding the 2005 survey. This is also a sound assumption, since we observed minimal change in PHC organizations other than FMGs and NCs during the post-reform period, extending from March 2003, the date of the first FMG accreditation, to April 2010, the time of the second survey.
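As a hedged sketch of this back-adjustment (the monthly rate and all names are hypothetical; the 30-month plateau follows the regression result described above, rendered here as a simple piecewise-linear rule):

```python
def estimated_pre_reform_score(score_2005, months_accredited,
                               monthly_gain, plateau=30):
    """Back-adjust a 2005 ICIT score for an FMG accredited before 2005.

    Assumes, as in the text, that scores rose steadily for the first
    30 months after accreditation and then leveled off. monthly_gain
    is a hypothetical rate estimated elsewhere by regression.
    """
    effective_months = min(months_accredited, plateau)
    return score_2005 - monthly_gain * effective_months
```

For an FMG accredited 40 months before the 2005 survey, only the first 30 months contribute to the estimated gain, so its pre-reform score is the 2005 score minus 30 times the monthly rate.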
Overall organizational change at the system level was measured as the difference between the average ICIT score of all organizations existing in 2010 and that of all organizations existing in 2005, the latter attributed their estimated pre-reform scores. This difference is the result of three effects. The first is the effect of change in perennial organizations. The second is the addition effect attributable to new organizations created during the post-reform period, obtained by calculating the difference in ICIT scores with and without these organizations. The third is the attrition effect, due to clinics that closed, merged or ceased to operate in their current form during the post-reform period. This effect was measured by calculating what the ICIT score would have been in 2010 had these clinics still been in operation and subtracting it from the current ICIT score, assuming that the clinics would not otherwise have changed during that period.
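The three-way partition can be sketched as follows (an unweighted illustration with hypothetical score lists; the study itself weights scores by clinic size):

```python
from statistics import mean

def system_effects(perennial_pre, perennial_post, new_post, closed_pre):
    """Partition system-level ICIT change into three effects.

    perennial_pre / perennial_post: pre- and post-reform scores of
    organizations present at both time points.
    new_post: 2010 scores of organizations created after the reform.
    closed_pre: pre-reform scores of organizations that closed,
    assumed unchanged had they stayed open.
    """
    post_all = perennial_post + new_post
    # Effect of change within perennial organizations.
    perennial_effect = mean(perennial_post) - mean(perennial_pre)
    # Addition effect: 2010 mean with vs. without the new organizations.
    addition_effect = mean(post_all) - mean(perennial_post)
    # Attrition effect: current 2010 mean minus what it would have been
    # had the closed clinics still been operating, scores unchanged.
    attrition_effect = mean(post_all) - mean(post_all + closed_pre)
    return perennial_effect, addition_effect, attrition_effect
```

With low-scoring closed clinics, the attrition term comes out positive (removing them raises the 2010 mean), while new clinics that open below the prevailing mean make the addition term negative, matching the pattern reported in the Discussion.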
Since the size of an organization can influence the total supply of services it provides, and consequently its effect on the system, we weighted the ICIT score by clinic size, as measured by the number of full-time equivalent physicians, in both the descriptive analyses and the multiple regression analyses. Based on expert opinion, we defined a full-time equivalent physician as one providing 26 hours or more of clinical activity weekly in that clinic; this number does not include time devoted to clinical activities in other settings.
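A size-weighted mean of this kind can be sketched as follows (function and variable names are our own):

```python
def weighted_mean_icit(scores, fte_counts):
    """Mean ICIT score weighted by clinic size.

    fte_counts holds each clinic's number of full-time-equivalent
    physicians (one FTE = at least 26 weekly clinical hours in that
    clinic, per the definition in the text).
    """
    total_fte = sum(fte_counts)
    return sum(s * w for s, w in zip(scores, fte_counts)) / total_fte
```

A large clinic thus pulls the system-level score toward its own value: a 1-FTE clinic scoring 50 and a 3-FTE clinic scoring 100 give a weighted mean of 87.5 rather than the unweighted 75.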
Explanatory variables
There are two levels of explanatory variables: organizational and contextual (local network). The main organizational variable is receptivity. As pointed out earlier, the receptivity of organizations is conceptualized as variability in the rate, sequencing and pace of changes adopted in response to coercive, normative and mimetic environmental pressures. Accordingly, the process of receptivity unfolds over time, from the intention to become a new model of PHC service delivery, through the critical event of accreditation, to the period that follows accreditation. This variable included three categories: organizations that had acquired FMG status, NC status, or both (FMG-NC); organizations that were not FMG or NC in 2010 but expressed the desire to become one or both if they could; and organizations that did not want to become either FMG or NC. The variable was meant to be categorical: FMGs and NCs are considered receptive to change, since they were "early adopters" and certainly motivated to espouse this innovative approach; the second category comprises those not yet FMG or NC that would like to become one or both; and the third consists of those not interested in changing their current status. These three categories were coded as dummy variables in the analysis (Additional file 3).
The second set of indicators at the organizational level comprised the perceived coercive, normative and mimetic influences exerted on PHC organizations. Obviously, this information was available only for organizations in operation in 2010 (perennial ones). A score for each indicator was constructed by combining responses to the question "How would you assess the effect of the following factors on the evolution of your clinic?". The effects could be positive, null or negative and were given a score of 2, 1 or 0, respectively. Details of the items and the construction of the indices are presented in Additional file 3. Coercive, normative and mimetic influences were also assessed at the local network level, by calculating the percentage of organizations that assessed the influence of these factors on the evolution of their organization as positive. Correlations between the organizational and local network variables were sufficiently low (.37, .32 and .35) to warrant their joint use while preventing endogeneity. Other organizational and local network indicators were used as control variables (Additional file 4).
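The two levels of influence indicators described above can be sketched as follows (names and response labels are hypothetical stand-ins for the questionnaire items):

```python
def influence_index(perceived_effects):
    """Organization-level index: each item's perceived effect on the
    clinic's evolution is scored 2 (positive), 1 (null) or 0
    (negative), then summed across items."""
    points = {'positive': 2, 'null': 1, 'negative': 0}
    return sum(points[e] for e in perceived_effects)

def network_positive_pct(org_ratings):
    """Local-network level: percentage of organizations in the network
    assessing the influence as positive."""
    positive = sum(r == 'positive' for r in org_ratings)
    return 100.0 * positive / len(org_ratings)
```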
Statistical analysis
The analysis was performed in two stages. First, descriptive statistics on organizational change are presented. Second, explanatory and control variables were entered into two-level models of analysis according to an analytical framework presented in the following section, using HLM (version 6.0) software. Considering that organizations are nested within HSSC territories, generalized linear mixed models (GLMM) with random intercepts were used for these analyses: multilevel linear regression models for continuous dependent variables and multilevel multinomial regression models for categorical dependent variables [33]. These analyses were carried out on perennial organizations only. The two points in time were handled in the analysis by using the 2003 scores as covariates in the regression models, the outcome variable being the difference between pre- and post-reform scores. Statistical significance is expressed in the tables by p-values.
Discussion
Our study results show that much of the change in the ICIT score from 2003 to 2010 was due to the "perennial" effect of PHC organizations that were present in both the 2005 and 2010 surveys. Given that the number of organizations in this category is substantial, it is not surprising that they contributed the most to overall organizational change. Paradoxically, the "addition" effect of organizations that opened after 2005 contributed negatively to the change in ICIT score, while those that closed during that period (the attrition effect) contributed positively. In the former case, these organizations opened during a period of emergence of FMGs and NCs, when an important increase in ICIT scores was observed. It may have been difficult for an organization that opened during that period to rise to this level so soon after its inception; we can therefore hypothesize that these organizations would have needed more time to fully develop and meet accepted standards. This could also suggest that the development of new clinics is not an area covered by the current PHC reform, which concentrates solely on established clinics that agree to conform to the accreditation process in order to receive governmental support. In the second case, most organizations that closed were small or solo clinics and consequently had low ICIT scores at the time of the 2005 survey. This again is not surprising, given their rudimentary form of organization and the fact that many of their owners might have been considering closure, making organizational change less relevant. Hence, aside from perennial organizations, whose effect was more in line with the emerging FMG and NC models, the improvement in ICIT scores was partly due to a "natural selection" effect of clinics that closed, mitigated by the clinics that opened after the 2005 survey.
This result suggests that change in ICIT score could be associated both with the evolutionary trend of organizations and with the reform policies designed to support the implementation of FMGs and NCs. Further analyses showed that this effect was not direct, but mainly mediated by the receptivity of PHC organizations. Analysis of the relationship between the "influences" and "receptivity" variables revealed that the most important factors explaining this relationship are "mimetic", related to the influence of other exemplar PHC organizations. This mimetic influence played a major role mainly for FMG-NCs, FMGs and NCs, and to a lesser, though still significant, degree for PHC clinics wishing to become FMGs or NCs. The perceived influence of HSSCs did not play an important positive role.
The normative influence of professional associations was not significantly associated with receptivity. This could mirror the reluctance, and sometimes outright opposition, of professional associations to supporting the FMG movement, notably by promoting the alternative NC model in the Montréal region [1].
To sum up, changes were noted between 2003 and 2010, in relation to reform policies initiated at the provincial level in the early 2000s. The most important changes were observed in the "structure" and "resources" domains of the ICIT score, and much less in the "practices" domain. This is easy to understand, since the creation of FMGs and NCs brought new resources and required modifications of structural arrangements; "practices" always lag behind during reforms and are slower to develop. Vision remained stable.
Much of the change is due to receptivity factors which, in addition to their direct effect, mediate the slight influence of HSSCs on variations in ICIT score. This is not to say that the reform policies had no influence. Obviously, the establishment of FMGs was initiated by the Ministry of Health and Social Services, and NCs were strongly supported by Montréal's Regional Agency. However, at the local network level, the supportive role of HSSCs in the implementation of FMGs and NCs was not perceived as major. More importantly, the mimetic influence of other clinics provided an incentive and support for PHC clinics that became, or expressed the desire to become, FMGs or NCs. This result reflects a "bottom-up" strategy of change.
Viewed together, these results suggest that a decreed "top-down" reform was instrumental, and an obvious prerequisite, for initiating change in the healthcare system, but that it still needed to be implemented in a context of receptivity to change and in the presence of "local champions" advocating for the new models and demonstrating their feasibility and desirability.
Limitations and strengths
Our study has some limitations. One is the way we measured receptivity. Except for the fourth and fifth categories of clinics, which expressed a desire or a lack of interest in becoming an FMG or NC, the first three categories were only proxies for receptivity, since we did not have a direct measure of the receptivity or readiness of these organizations before they became FMGs or NCs. It can reasonably be argued that clinics that became FMGs or NCs were more receptive than those in the last two categories. Although there is an obvious link between receptivity as thus constructed and organizational change, there is a conceptual distinction between the two based on the notion of organizational change. As explained earlier, the process of organizational change initiated by the accreditation procedure did not stop there but continued over time, bringing about further changes, particularly in terms of structures and resources.
Secondly, although the design enables comparisons of the same organization at two points in time, corresponding approximately to a before-and-after scheme, it does not allow for comparisons with a control group in which the reform was not implemented. In this sense, it corresponds to a "natural experiment" rather than a true experimental design. However, in this case, variations in the influence exerted among HSSCs compensate to some extent for the lack of a comparison group.
A third limitation is the response rate and the probabilistic imputation method we used for non-respondents. This method was applied within strata constructed on the basis of region, size and type of organization. It is more rigorous than simply selecting subjects at random among respondents to replace non-respondents, particularly when response rates are acceptable, as they are in our study. As Haziza puts it, it is preferable to use imputation rather than perform the analyses on incomplete data that may not be representative of the population [28]. In other words, the bias is greater with non-response than with imputation, which is particularly important when inferences must be made about a reform's effect on the system as a whole. To test the extent of this possible bias, we performed a sensitivity analysis by introducing the variable "imputation" into the full multilevel analysis model (Table 3). This variable failed to reach statistical significance. We also conducted the statistical analysis on respondent organizations only, and the results were very similar to those presented, with a slight loss of statistical power.
A fourth limitation is that the key informants who responded to the questionnaire may have differed between 2005 and 2010, introducing a possible respondent bias. In fact, the proportion of organizations with different respondents was relatively low (27% of responding organizations). In addition, we compared the 26 organizational variables between organizations that had two different respondents and organizations with the same respondent in 2005 and 2010, matching them on the same stratification variables used for imputation, namely region, size and type of organization. There were no significant differences between different-respondent and same-respondent organizations on most of the variables. We should recall that the questions used to construct the ICIT refer to factual information and, as such, are less likely to be affected by perception biases. As for imputation, we performed a sensitivity analysis by introducing the variable "respondent" (same = 0, different = 1) into the full multilevel model (Table 3). This variable was not statistically significant.
Like many studies on organizational change, this article focuses on the content of change and on the contextual factors explaining it, but not on its outcomes and consequences. In the larger study of which this article is a part, population surveys were conducted concomitantly with the organizational surveys in the two regions in 2005 and 2010, with nominal linkage of respondents to their regular source of care. This enabled us to link population and organizational data, making it possible to assess the outcomes of change on the population in terms of service utilization, unmet needs and experience of care (accessibility, continuity, comprehensiveness and responsiveness). This important area of research will be examined in further publications.
Our study also has notable strengths. The main one is the use of a quantitative measure of ICIT change that enabled us to assess the likely effect of the reform both globally and by domain. Likewise, the identification of perennial organizations, as well as of those that emerged or disappeared between 2005 and 2010, made it possible to partition the global effect into perennial, addition and attrition effects. Finally, we analyzed organizational change longitudinally, comparing two time points over a seven-year period (2003-2010).
Acknowledgements
We acknowledge the contribution of the following collaborators and researchers associated with the project: P. Tousignant, D. A. Roy, J.L. Denis, P. Lamarche, M.D. Beaulieu, D. Feldman, J. Haggerty, D. Roberge, J. Côté, M. Fournier, M. Hamel, F. Goulet, M. Drouin, J. Rodrigue, L. Côté and Odette Lemoine. The study benefited from the financial contributions of the Canadian Institutes of Health Research (# PHE 101905), the Fonds de recherche du Québec - Santé (FRQS) (# 22032), the Agence de la santé et des services sociaux de Montréal, and the Agence de la santé et des services sociaux de la Montérégie. Isabelle Rioux and Sylvie Gauthier provided editorial assistance.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
RP participated in all stages of the study and drafted the manuscript. RB and AC coordinated the organizational part of the study and helped to draft the manuscript. JFL and SP participated in the design and conduct of the study and in drafting the manuscript. MF advised on and supervised the statistical analysis performed by AP. All authors read and approved the final manuscript.