Background
Clinical networks are burgeoning internationally and have been established in the United States, United Kingdom and other parts of Europe, Australia and Canada [1–10]. These networks aim to engage clinicians in the implementation of quality improvement initiatives [2, 3, 5, 8, 11], and there are data suggesting that networks can be effective in improving the quality of patient care [2, 5, 7, 12]. While there are many different models of clinical network, from fully integrated service delivery systems such as Kaiser Permanente or the Veterans Health Administration in the US to informal communities of practice, all share the aim of increasing the uptake of evidence-based practice and improving quality of care and patient outcomes. In the current context, we define clinical networks as voluntary clinician groupings that aim to improve clinical care and service delivery using a collegial approach to identify and implement a range of strategies across institutional and professional boundaries [13].
The effectiveness of clinical networks is often not formally evaluated. Published studies typically focus on one clinical area and provide anecdotal, experiential commentary using a mixed methods approach (e.g. document review, interviews, observation) [14–17]. The psychometric properties of measures have rarely been explored or tested, resulting in a lack of standard or validated methodology.
A recent systematic review of measurement instruments developed for use in implementation science (specifically, to measure self-reported research utilisation) found that a large majority of instruments demonstrated weak psychometric properties [18]. Basic psychometric properties of reliability (e.g. internal reliability) and validity (e.g. construct validity) should generally be evaluated if a measure is to be used for research [19].
Given the rapid development of, and investment in, clinical networks internationally [20, 21], there is a need for valid instruments to assess the intrinsic and extrinsic features related to their performance. The aim of this paper is to outline the development, validation and descriptive results of an Internet survey designed to assess the effectiveness of clinical networks, in order to guide their future strategic and operational management and leadership in the wider context in which they operate. The survey was used in an Australian study involving 19 clinical networks of the Agency for Clinical Innovation [13]. It was developed by building on the limited existing measures relating to clinical networks, the wider organisational literature, and the findings of a qualitative pre-study [22]. This paper addresses the following:
1. Development of the survey instrument
2. Psychometric assessment of the survey instrument (construct validity and scale reliability)
3. Descriptive survey results from a sample of network members
Results
Response rates and sample characteristics
Three thousand two hundred and thirty-four members of 19 clinical networks with a valid email address were invited to participate in the survey. The survey response rate was 18 % (n = 592), which is lower than the average response rate of 33 % reported for online surveys [47]. A summary of the demographic characteristics of respondents is presented in Table 2.
Table 2. Characteristics of study sample (n = 592)

| Characteristic | Category | n (%) |
| --- | --- | --- |
| Gender | Male | 116 (32.0) |
|  | Female | 247 (68.0) |
|  | Missing | 229 |
| Professional discipline | Medical Officer | 91 (23.3) |
|  | Nurse | 150 (38.4) |
|  | Consumer | 13 (3.3) |
|  | Allied Health | 85 (21.7) |
|  | Executive manager - non-health professional | 6 (1.5) |
|  | Researcher/academic | 19 (4.8) |
|  | Other | 27 (6.9) |
|  | Missing | 201 |
| Years involved in network | 1 | 42 (8.2) |
|  | 2 | 60 (11.7) |
|  | 3 | 74 (14.5) |
|  | 4 | 55 (10.8) |
|  | 5+ | 280 (54.8) |
|  | Missing | 81 |
| Role in network | Chair | 12 (3.6) |
|  | Executive Committee Member | 24 (7.3) |
|  | Executive & Steering Committee Member | 17 (5.2) |
|  | Expert Advisor | 5 (1.5) |
|  | Working group member | 109 (33.1) |
|  | Participant | 162 (49.2) |
|  | Missing | 75 |
Construct validity
In general, the factor structure was consistent with the hypothesised domains. For the perceived engagement domain, two of the seven questions did not load well (factor loading <0.4) and were excluded from calculation of the factor score in further analyses. The range of loadings for each domain, along with the means (and standard deviations), is shown in Table 3. Approximately two thirds (67 %) of the total variance was explained by the final factor solution.
Table 3. Outcomes of factor analysis for the seven hypothesised domains

| Domain | No. of items | n | Mean | SD | Factor loading range |
| --- | --- | --- | --- | --- | --- |
| Perceived engagement | 5 | 445 | 3.3 | 0.68 | 0.651 – 0.827 |
| Perceived leadership of network manager | 7 | 314 | 3.9 | 0.78 | 0.392 – 0.922 |
| Perceived leadership of network co-chairs | 8 | 261 | 3.7 | 0.68 | 0.611 – 0.873 |
| Perceived leadership of Agency Executive | 2 | 317 | 3.8 | 0.80 | 0.958 – 0.958 |
| Perceived strategic and operational management of a network | 6 | 342 | 3.8 | 0.70 | 0.660 – 0.868 |
| Perceived external support | 7 | 228 | 3.3 | 0.60 | 0.503 – 0.802 |
| Network perceived as valuable | 5 | 340 | 3.8 | 0.78 | 0.684 – 0.902 |
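The item-screening step described above (retaining items with loadings of 0.4 or more on their hypothesised factor) can be illustrated with a minimal sketch. This is not the authors' analysis code: principal-component loadings from the item correlation matrix are used as a simple stand-in for the study's factor analysis, and the data are simulated.

```python
import numpy as np

def principal_loadings(data, n_factors=1):
    """Approximate factor loadings from the eigendecomposition of the
    item correlation matrix (principal-component loadings)."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_factors]      # largest first
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    explained = eigvals[order].sum() / corr.shape[0]   # proportion of variance
    return loadings, explained

# Simulated example: 7 items, of which 5 are driven by one latent trait
# and 2 are mostly noise (analogous to the two dropped engagement items)
rng = np.random.default_rng(0)
latent = rng.normal(size=1000)
strong = latent[:, None] + 0.6 * rng.normal(size=(1000, 5))
weak = 2.0 * rng.normal(size=(1000, 2))
items = np.hstack([strong, weak])

loadings, explained = principal_loadings(items)
keep = np.abs(loadings[:, 0]) >= 0.4   # items below the 0.4 cutoff are excluded
```

With simulated data of this shape, the five trait-driven items pass the 0.4 cutoff and the two noise items fail it, mirroring the exclusion of poorly loading items from the factor score.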
Internal reliability estimations
Table 4 lists the Cronbach alpha coefficients for each of the seven domains within the instrument. Cronbach’s alpha ranged from 0.75 to 0.92, indicating that all seven survey domains exceeded the acceptable standard (>0.70), with five achieving high internal consistency [48].
Table 4. Survey internal reliability estimations

| Domain | No. of items | Mean inter-item correlation | Cronbach’s α | Interpretation |
| --- | --- | --- | --- | --- |
| Perceived engagement | 5 | 0.51 | 0.75 | Acceptable |
| Perceived leadership of network manager | 7 | 0.55 | 0.91 | Excellent |
| Perceived leadership of network co-chairs | 8 | 0.50 | 0.89 | Good |
| Perceived leadership of Agency Executive | 2 | 0.59 | 0.92 | Excellent |
| Perceived strategic and operational management of a network | 6 | 0.43 | 0.87 | Good |
| Perceived external support | 7 | 0.30 | 0.79 | Acceptable |
| Network perceived as valuable | 5 | 0.54 | 0.87 | Good |
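The alpha coefficients reported above follow the standard formula, α = k/(k−1) × (1 − Σ item variances / variance of the total score). A minimal sketch on simulated data (not the study dataset):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 5-item scale whose items share a common trait
rng = np.random.default_rng(1)
trait = rng.normal(size=500)
scale = trait[:, None] + 0.8 * rng.normal(size=(500, 5))
alpha = cronbach_alpha(scale)   # exceeds the 0.70 acceptability threshold
```

As a sanity check, a scale of identical items yields α = 1, and adding item-specific noise pulls the coefficient down toward zero.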
Descriptive results for the survey sample
Table 5 provides full details of mean summary scores and ranges across the measured domains. Descriptive results for the survey sample are detailed in Additional file 2.
Table 5. Aggregate mean summary scores across domains

| Domain | Mean summary score | SE | Range | Maximum possible score |
| --- | --- | --- | --- | --- |
| Perceived engagement | 17.7 | 0.26 | 16.24 – 20.64 | 27 |
| Perceived leadership of network manager | 27.6 | 0.47 | 23.46 – 31.92 | 35 |
| Perceived leadership of network co-chairs | 29.6 | 0.44 | 25.33 – 32.52 | 40 |
| Perceived leadership of Agency Executive | 7.5 | 0.12 | 6.33 – 8.25 | 10 |
| Perceived strategic and operational management of the network | 22.9 | 0.23 | 21.22 – 24.85 | 30 |
| Perceived external support | 23.0 | 0.30 | 20.60 – 25.97 | 35 |
| Network perceived as valuable | 18.9 | 0.27 | 17.32 – 21.70 | 25 |
One third (33 %) of survey respondents reported spending less than one hour per week on network activities; one quarter (25 %) spent between one and five hours per week; 20 % between five and 10 hours per week; 11 % between 10 and 20 hours per week; and 11 % more than 20 hours per week. The mean summary score for perceived engagement across networks was 17.7 out of a possible 27 (65.5 %). There was strong reported commitment to the network (73.5 %) and belief in the work that the network undertakes (86.7 %). However, there was less agreement that respondents’ views and ideas had contributed to network activities (55 %) or that they had been able to help drive the network agenda (30 %).
Perceived leadership of the network manager had the highest mean summary score across the seven measured domains at 27.6 out of a maximum of 35 (78.9 %), suggesting that, on the whole, network managers were considered to have an evidence-based vision (71 %), to be able to engage fellow professionals in service and quality improvement (73.5 %), and to bring others together to facilitate service and quality improvement (75.9 %). Network managers were perceived to have built strong positive relationships with clinicians (71.4 %) but were perceived by fewer respondents to have done so as effectively with consumers (49.1 %) or hospital management (38.9 %). Ratings of the leadership of the network co-chairs (29.6 out of 40; 74 %) were similar to those for network managers. Co-chairs were considered to be champions for change (63.8 %) and to have built strong, positive relationships with other clinicians (61.6 %), but less so with consumers (39.7 %) and hospital management (40.4 %). There was variability in perceptions of co-chairs’ abilities to mobilise fellow professionals around service and quality improvement (47.8 %), collaborate with external parties to support network operations (42.1 %), or work cooperatively with senior health department leadership to make appropriate changes (51.7 %). The summary score for leadership of the Agency Executive was 7.5 out of 10 (75 %). Just over half of respondents agreed that there was strong leadership and clear strategic direction (53.8 %) and that the Executive worked cooperatively with leaders in the wider health system to make appropriate changes (55.3 %). More than 40 % of respondents, however, selected a “neutral” or “don’t know” response for the two items within this domain.
Perceived strategic and operational management of a network had a mean summary score of 22.9 out of a possible 30 (76.4 %). The majority of respondents were satisfied with the level of multidisciplinary representation (81.8 %) and of information sharing across the network (75.1 %), and, to a lesser extent, with communication with people outside the network (55.8 %).
Perceived external support had the lowest summary score (23 out of 35; 65.7 %). Just over half agreed that network agendas were aligned with state government strategic plans (52.3 %). Fewer network members felt that hospital management (28.6 %), clinicians working in hospitals (50.3 %) and local area health service managers (15.9 %) were willing to implement network recommended changes despite more than a third reporting that area health service managers (34.4 %) and state government health decision makers (35.5 %) were aware of these recommendations.
Overall, the networks were perceived as valuable (18.9 out of 25; 75.6 %) and were considered by members to have improved quality of care (72.8 %) and, to a slightly lesser extent, patient outcomes (63.2 %). More than 70 % of respondents would recommend joining the network to a colleague.
Discussion
To the best of our knowledge, prior to the development of this network survey there were no psychometrically validated surveys designed to measure the organisational, program and external support features of clinical networks. This paper describes the development and assessment of the construct validity and internal reliability of a survey instrument, and provides descriptive results from a formative assessment of nearly 600 members of 19 diverse clinical networks across the seven measured domains. The survey was developed as an instrument to measure factors associated with successful clinical networks in an Australian study [13]. It provides researchers and managers of clinical networks with a psychometrically valid and reliable tool that can be used to assess key features of successful clinical networks and to identify areas for further development within networks to increase their effectiveness and impact.
Confirmatory factor analysis supported the seven hypothesised domains, namely: engagement of clinicians; leadership of the network manager; leadership of network co-chairs; leadership of the Agency executive; strategic and operational management of the network; external support; and value of the clinical network. The survey has high internal consistency reliability as evidenced by Cronbach’s α values of 0.75 and greater.
For this sample of nearly 600 members of 19 clinical networks of the NSW Agency for Clinical Innovation, there was strong reported commitment to, and belief in, the work that the networks undertake. Network managers were generally perceived to be effective leaders who facilitated evidence-based quality improvement initiatives and built strong working relationships with clinicians. Network co-chairs were considered to be champions for change and to have built strong, positive relationships with other clinicians. Across both manager and co-chair leadership, however, there was variability in perceived effectiveness at forming good relationships with consumers and hospital management. Further, there were perceived inconsistencies in co-chairs’ abilities to collaborate with external parties to support network operations or to work cooperatively with senior health department leadership to make appropriate changes. Just over half of respondents agreed that there was strong leadership and clear strategic direction from the Agency Executive. However, more than 40 % of respondents selected a “neutral” or “don’t know” response for the two items within this domain, perhaps reflecting a lack of awareness of the Agency’s higher-level operational leadership among members with limited exposure to this level of management or with looser affiliations to the networks.
The majority of network members were satisfied with the level of multidisciplinary representation and information sharing across the network but only a little more than half agreed that communication with people outside the network was effectively coordinated. This indicates that there may be scope for improvement in external communication to raise awareness of network initiatives and impacts. There was a perceived lack of external support for the networks, with few network members agreeing that hospital management or local area health service managers were willing to implement network recommended changes. This may be a reflection of network managers’ and co-chairs’ lesser abilities to build positive relationships and work cooperatively with these groups and could explain variation in effectiveness or success across networks. Overall, the networks were perceived as valuable and were considered by members to have improved quality of care and patient outcomes.
These results suggest that the strength of this type of managed clinical network lies in the strategic leadership of the network manager and their ability to form constructive working relationships with clinicians working in the health system. Managers of networks seeking to improve effectiveness should build stronger relationships with hospital management and local area health service managers to leverage support for network quality improvement initiatives. Given the importance of local community cohesion, support and participation as critical factors in the success of networks [39], enhanced relationships with consumers and improved communication with those outside the network would additionally seem important areas of focus.
It should be noted that the response rate for this Internet-based survey was lower than the reported average for online surveys [47]. However, respondents were split almost equally between participants who were recipients of network activities with a loose connection to the network (49 %) and more actively engaged members with governance or steering roles or involvement in working groups (51 %). The latter group is better placed to report accurately on the external support, organisational and program factors measured by the survey; their greater knowledge of network functioning adds credibility to their perceptions. Fifty-five per cent of respondents had been involved with their networks for five or more years, suggesting a degree of commitment to the network and providing a proxy measure of network sustainability. While it is acknowledged that the low response rate may have affected the external generalisability of the instrument’s construct validity, sensitivity analyses based on inverse probability weighting to adjust for any response bias, conducted as part of the main study for which this survey was developed [13], found correlation and regression results to be similar to those of the main (non-weighted) analyses.
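The inverse-probability-weighting adjustment referred to above can be sketched in a few lines. This is a hypothetical illustration, not the study’s analysis code: the strata, response indicators and scores are invented, and response probabilities are estimated as simple within-stratum response rates.

```python
import numpy as np

def ipw_weights(responded, strata):
    """Weight each respondent by the inverse of the estimated response
    probability in their stratum, so respondents stand in for
    non-respondents with similar characteristics."""
    responded = np.asarray(responded, dtype=bool)
    strata = np.asarray(strata)
    weights = np.zeros(responded.shape, dtype=float)
    for s in np.unique(strata):
        in_stratum = strata == s
        p = responded[in_stratum].mean()        # stratum response rate
        weights[in_stratum & responded] = 1.0 / p
    return weights[responded]

# Hypothetical: 6 invitees in two strata with response rates 0.5 and 0.25
responded = np.array([1, 0, 1, 0, 0, 0], dtype=bool)
strata = np.array([0, 0, 1, 1, 1, 1])
w = ipw_weights(responded, strata)              # weights: [2.0, 4.0]

scores = np.array([18.0, 21.0])                 # respondents' summary scores
weighted_mean = np.average(scores, weights=w)   # 20.0
```

The weights sum to the number of invitees, so each respondent represents themself plus the non-respondents in their stratum; a weighted analysis run this way can then be compared with the unweighted one, as in the sensitivity analyses cited above.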
A further potential limitation of this study is the reliance on self-reported perceptions of network members. Given the large and diverse study sample of more than 3000 members of 19 networks operating across multiple clinical areas and disciplines in a large geographical area, a self-reported survey was deemed the most pragmatic, timely and cost-effective method of data collection. Subjective self-reported measures were validated through document review and a sub-study [49], and a qualitative study [50] was conducted to assist with interpretation of the results.
The survey has potential for broader application beyond the context of NSW, Australia, as an instrument for assessing and improving the operations of clinical networks. When other research groups use this survey in their studies, they can validate the utility and applicability of the tool and its domains in their own contexts. Over time, benchmarking and normative data could be obtained across multiple jurisdictions with clinical networks.
Given that the international literature formed the basis of the instrument, the domains measured are likely to be common across the various models of clinical networks internationally, which share the aims of increasing uptake of evidence-based practice and improving quality of care. A recent systematic review [51] that included both quantitative and qualitative studies of the effectiveness of clinical networks operating in other regions of Australia, Canada, the UK and other parts of Europe, and the US concluded that appropriate organisational structure, effective leadership, multidisciplinary engagement, adequate resourcing, collaborative relationships, and external support from the patient community and other stakeholders were key features of successful clinical networks. This supports the domain structure of our instrument and suggests its likely generalisability beyond the current context. It should also be noted that none of the studies included in the review used a validated measure of network effectiveness, relying instead on qualitative exploration or experiential commentary; this highlights the value of a validated instrument in enabling more standardised, and hence comparable, future assessment of networks.
Further, given the commonality of determinants of successful networks and core competencies for network success across different policy fields [39], there is scope for this survey to be adapted for use outside clinical networks. For example, it could be used in the assessment of other types of public networks beyond health that deliver and manage public services, such as education, job and training networks, community care, or family and children’s services. The included domains, relating to perceived engagement of key stakeholders, leadership, strategic and operational management, external support, and value of the network, would all be equally applicable across these settings.
The results for this survey sample of nearly 600 network members can provide a point of comparison for others who wish to use the instrument.
Acknowledgements
The authors wish to acknowledge the contribution of the clinical network managers and co-chairs and the Agency executive for participating in this research. The authors are grateful for the contributions of the Clinical Network Research Group that provided ongoing critique and intellectual contributions to various aspects of the instrument design through their involvement in the broader study. The Clinical Network Research group is comprised of: Chief investigators – Mary Haines (Sax Institute), Sally Redman (Sax Institute), Peter Castaldi (University of Sydney), Catherine D’Este (Australian National University), Jonathan Craig (University of Sydney), Elizabeth Elliott (University of Sydney), Anthony Scott (University of Melbourne); Associate investigators - Elizabeth Yano (University of California Los Angeles/Veterans Health Administration), Carol Pollock (Royal North Shore Hospital), Kate Needham (Agency for Clinical Innovation), Sandy Middleton (Australian Catholic University), Christine Paul (University of Newcastle); Honorary investigators – William (Hunter) Watt (Agency for Clinical Innovation), Nigel Lyons (Agency for Clinical Innovation); and Study contributors – Bernadette (Bea) Brown (Sax Institute), Amanda Dominello (Sax Institute), Deanna Kalucy (Sax Institute), Emily Klineberg (NSW Research Alliance for Children’s Health), Elizabeth McInnes (Australian Catholic University), Jo-An Atkinson (Sax Institute). Finally, the authors wish to thank Daniel Barker (University of Newcastle), Mario D’Souza (University of Newcastle) and Christopher Oldmeadow (University of Newcastle) for their input into data management and data analysis.