Background
Methods
Search strategy
Study selection
Selection criteria
Data extraction
| Features | HSOPSC (1) | SAQ (2) | PSCHO (3) | SOS (4) | Can-PSC (5) |
|---|---|---|---|---|---|
| Authors | Sorra & Dyer | Sexton | Singer et al. | Vogus & Sutcliffe | Ginsburg et al. |
| Publication year | 2010 | 2006 | 2007 | 2007 | 2013 |
| Country | USA | USA | USA | USA | Canada |
| Number of items | 42 | 60 (30 core items) | 38 | 9 | 19 |
| Type of Likert scale | 5-point | 5-point | 5-point | 7-point | 5-point |
| Level of analysis | Individual, unit, hospital | Individual, unit | Individual, unit, hospital | Unit | Unit, hospital |
| Results reporting | Positive percentage scores | Positive percentage scores | Percentage problematic scoring | Not reported | Not reported |
| Setting & staff | Hospital setting; healthcare staff | Hospital setting; healthcare staff | Hospital setting; healthcare staff including non-clinical staff | Hospital setting; nursing units | Hospital setting; healthcare staff |
| Features | HSOPSC (1) | SAQ (2) | PSCHO (3) | SOS (4) | Can-PSC (5) |
|---|---|---|---|---|---|
| Number of dimensions | 12 | 6 | 9 | 1 | 6 |
| Scope of dimensions | Communication openness; feedback and communication about error; frequency of event reporting; handoffs and transitions; management support for patient safety; non-punitive response to error; organisational learning – continuous improvement; overall perceptions of patient safety; staffing; supervisor/manager expectations & actions promoting safety; teamwork across units; teamwork within units | Teamwork; safety climate; job satisfaction; stress recognition; perception of management; working conditions | Senior managers' engagement; organisational resources for safety; overall emphasis on safety; unit safety norms; unit recognition and support for safety efforts; fear of shame; provision of safe care; learning; fear of blame | Self-reported "behaviours enabling safety culture" through collective mindfulness | Organisational leadership support for safety; incident follow-up; supervisory leadership for safety; unit learning culture; enabling open communication I: judgment-free environment; enabling open communication II: job repercussions of error |
| Theoretical basis | Literature review in the areas of safety management; organizational & safety climate and culture; medical error & error reporting; patient safety; and existing safety climate and culture instruments | Based on Vincent's framework for analyzing risk & safety and Donabedian's conceptual model for assessing quality; derived from an aviation safety culture questionnaire | High-reliability organizations; derived from a naval aviation safety culture questionnaire | High-reliability organizations | Based on the work of Zohar and of Hofmann & Mark on safety climate and the error literature; adapted from work by Singer and colleagues |
| Key features | Tested on a large sample of hospitals; ability to benchmark data; self-report outcome measures | Tested on a large sample of hospitals; cross-industry comparisons; ability to benchmark data; favourable scores were associated with shorter lengths of stay and fewer medication errors in other studies | Measures safety climate among all hospital personnel and across multiple hospitals of different types; cross-industry comparisons | SOS scores are negatively associated with reported medication errors and patient falls | Validated for use across a range of care settings |
| Limitations | The Supervisor/Manager Expectations & Actions Promoting Patient Safety composite had CFI = 0.88 at the unit & hospital levels; item A7 in the Staffing composite had a low within-unit & within-hospital factor loading (0.36); Staffing had Cronbach's alpha = 0.62 | The SRMR model fit statistic at the clinical-area level was larger than desirable, indicating that further scale refinement is needed; modest response rate | Three individual dimensions demonstrate low internal consistency; selection bias | Validated using a sample composed exclusively of registered nurses | Questions about generalizability; further research and cross-validation with international samples will be required; more appropriate for improvement and research; data were not suitable for multilevel CFA |
| Psychometric properties | HSOPSC (1) | SAQ (2) | PSCHO (3) | SOS (4) | Can-PSC (5) |
|---|---|---|---|---|---|
| Content validity | Yes | Yes | Yes | Yes | Yes |
| Construct validity: factor structure | Convergent validity; CFA: 12 factors | Convergent validity; CFA | Convergent validity; MTA | Convergent validity; CFA: single factor | Convergent validity; CFA |
| Model fit indices^a: CFI & RMSEA | CFI > 0.90 for 5 of 6 factors^b | CFI 0.90; RMSEA 0.03 | — | CFI > 0.90; RMSEA 0.06 | CFI > 0.90; RMSEA 0.033 |
| Model fit indices^a: SRMR | SRMR < 0.08 | SRMR 0.17 between & 0.04 within clinical areas | — | SRMR 0.033 | — |
| Model fit indices^a: χ² | χ² p < 0.05; good model fit | χ² p < 0.0001; satisfactory model fit | — | χ² p < 0.0001; good model fit | χ² p < 0.0001; good model fit |
| EFA | Yes: 14 factors | Yes: 6 factors | Yes: 7 factors | No | Yes: 6 factors |
| Discriminant validity | Yes | Yes | Yes | Yes | Yes |
| Criterion validity | No | No | No | Yes | No |
| Reliability (internal consistency) | Cronbach's alpha ≥ 0.70 except Staffing | Raykov's ρ coefficient = 0.90 | Cronbach's alpha 0.50–0.89 | Cronbach's alpha ≥ 0.88 | Cronbach's alpha 0.70–0.80 |
| Item analysis | Yes | Yes | Yes | Yes | No |
| Test–retest reliability | No | No | No | No | No |
| ANOVA | No | No | No | Yes | No |
Quality appraisal
Quality Appraisal Criteria | HSOPSC Sorra and Dyer (2010) [23] | SAQ Sexton et al. (2006) [22] | PSCHO Singer et al. (2007) [24] | SOS Vogus & Sutcliffe (2007) [25] | Can-PSC Ginsburg et al. (2013) [15] |
---|---|---|---|---|---|
Aim(s) or research question(s) clearly stated? | ✔ | ✔ | ✔ | ✔ | ✔ |
Study methodology and design evident and appropriate? | ✔ | ✔ | ✔ | ✔ | ✔ |
Data collection described and appropriate? | ✔ | ✔ | ✔ | ✔ | ✖ |
Study population described and appropriate? | ✔ | ✖ | ✖ | ✔ | ✖ |
Data analysis method(s) described and appropriate? | ✔ | ✔ | ✔ | ✔ | ✔ |
Response rate acceptable (60% or above)? | ✖ | ✔ | ✖ | ✖ | ✔ |
Results reported in sufficient detail? | ✔ | ✔ | ✔ | ✔ | ✔ |
Total score | 12/14 | 12/14 | 10/14 | 12/14 | 10/14 |
Overall quality (0–5 poor; 6–10 fair; 11–14 good) | Good | Good | Fair | Good | Fair |

✔ = Yes; ✖ = No.
| Safety climate dimension | SOS items | HSOPSC items | SAQ items | PSCHO items | Can-PSC items | Total items per dimension | Items per dimension (%) |
|---|---|---|---|---|---|---|---|
Top management support & institutional commitment to safety | 0 | 7 | 6 | 9 | 7 | 29 | 20.6 |
Teamwork | 5 | 8 | 7 | 0 | 0 | 20 | 14.2 |
Safety systems: “Policies&Procedures, Safety Planning, Hand offs & Transitions, Staffing, Equipment” | 2 | 7 | 3 | 6 | 0 | 18 | 12.8 |
Safety perceptions & Attitudes of staff, Risk perceptions | 0 | 3 | 3 | 9 | 1 | 16 | 11.3 |
Reporting Incidents & “non-punitive” response to error | 0 | 3 | 1 | 6 | 5 | 15 | 10.6 |
Communication openness | 1 | 4 | 4 | 1 | 0 | 10 | 7.1 |
Organizational learning and continuous improvement | 1 | 3 | 0 | 1 | 4 | 9 | 6.4 |
Beliefs about the causes of errors & adverse events | 0 | 0 | 4 | 3 | 0 | 7 | 5.0 |
Training & continuous education | 0 | 0 | 3 | 2 | 0 | 5 | 3.5 |
Staff satisfaction | 0 | 0 | 5 | 0 | 0 | 5 | 3.5 |
Feedback & Communication about adverse events | 0 | 2 | 0 | 0 | 2 | 4 | 2.8 |
Work Pressure | 0 | 2 | 0 | 1 | 0 | 3 | 2.1 |
Other | 0 | 0 | 0 | 0 | 0 | 0 | 0.0 |
Total | 9 | 39 | 36 | 38 | 19 | 141 | 100% |
Results
General characteristics of reviewed studies
Methodological quality and psychometric assessment of reviewed studies
Methodological quality of reviewed studies
Psychometric properties of reviewed instruments
Content Validity

Haynes et al. (1995, [77] p.238) defined content validity as “the degree to which elements of an assessment instrument are relevant to and representative of the targeted construct for a particular assessment purpose”. It is used to ascertain whether the content of a measure is appropriate and pertinent to the study purpose, and is usually assessed by seven or more experts, supplemented by other sources including reviews of the empirical literature and relevant theory [78].
Criterion Validity

Criterion validity provides evidence about how well scores on a measure correlate with other measures of the same construct, or of very similar underlying constructs that theoretically should be related [79]. As Flin et al. (2006) [20] indicated, criterion validity can be established by correlating safety climate scores with outcome measures. Outcome measures of safety in health care may include patient injuries, worker injuries, or other organizational outcomes [20].
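As a minimal illustration of this correlational approach, the sketch below computes a Pearson correlation between hypothetical unit-level safety climate scores and hypothetical medication error rates; all numbers are invented for demonstration and do not come from the reviewed studies:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical unit-level safety climate scores (higher = better climate)
climate_scores = [4.2, 3.8, 3.1, 2.5, 2.0]
# Hypothetical medication error rates per 1,000 patient days
error_rates = [1.1, 1.5, 2.2, 2.9, 3.4]

# A strong negative r would support criterion validity: better climate
# scores coincide with fewer reported errors.
print(round(pearson_r(climate_scores, error_rates), 3))
```

In practice such correlations would be computed on far larger samples and interpreted alongside the instrument's other validity evidence.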
Construct Validity

Construct validity can be defined as the degree to which items on an instrument relate to the relevant theoretical construct [80]. A variety of methods exist to assess the construct validity of an instrument, including factor analysis. Factor analysis is a statistical method that “explores the extent to which individual items in a questionnaire can be grouped together according to the correlations between the responses to them”, thus reducing the dimensionality of the data (Hutchinson et al., 2006, [81] p.348). Convergent validity represents the degree to which different measures of the same construct correlate with each other and is tested using confirmatory factor analysis (CFA). Conversely, discriminant validity represents the extent to which measures of different constructs correlate with one another [78]. The two main techniques of factor analysis are exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). EFA is used to uncover the underlying factor structure of a questionnaire, while CFA is used to test a proposed factor structure [81]. A CFA measurement model shows convergent validity if items load significantly (.40 or greater) onto their assigned factor and model fit indices suggest adequate fit [25]. Cutoff values close to .90 for CFI, close to .08 for SRMR, and close to .06 for RMSEA are indicative of good model fit [38].
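These conventional cutoffs can be sketched as a simple screening rule; the function below is purely illustrative (the name and the mechanical pass/fail logic are ours, not part of any cited instrument), since in practice fit indices are weighed together with judgement rather than applied as hard thresholds:

```python
def adequate_fit(cfi, srmr, rmsea):
    """Screen CFA fit indices against the conventional cutoffs cited in
    the text: CFI close to .90 or above, SRMR at or below .08, and
    RMSEA at or below .06 suggest adequate model fit."""
    return cfi >= 0.90 and srmr <= 0.08 and rmsea <= 0.06

# Example with the SAQ's reported CFI (0.90) and RMSEA (0.03), and an
# assumed within-clinical-area SRMR of 0.04:
print(adequate_fit(cfi=0.90, srmr=0.04, rmsea=0.03))  # True
```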
Reliability

Reliability reflects the degree to which test scores are replicable [76, 82]; it indicates whether respondents answer consistently across the items within each composite, and is also referred to as consistency. It can be assessed using Cronbach’s alpha, the most commonly used internal consistency reliability coefficient. Cronbach’s alpha ranges from 0 to 1.00, with the minimum criterion for acceptable reliability being an alpha of at least .70 [83, 84].
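The coefficient is straightforward to compute from a respondents-by-items score matrix; the sketch below (with invented Likert responses, not data from the reviewed studies) follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a composite.

    `scores` is a list of rows, one per respondent, each row holding
    that respondent's answers to the k items of the composite.
    """
    k = len(scores[0])  # number of items

    def variance(values):  # sample variance (ddof = 1)
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_vars = sum(variance([row[i] for row in scores]) for i in range(k))
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented 5-point Likert responses: four respondents, three items
responses = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(responses), 2))  # 0.94, above the .70 criterion
```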
Content validity
Criterion validity
Construct validity
Factor structure and internal reliability
Intercorrelations
Discussion
Psychometric properties
Safety climate dimensions
| Safety culture dimensions | Colla & Bracken et al. [8] (9 tools) | Flin & Burns et al. [20] (12 tools) | Singla & Kitch et al. [21] (13 tools) | Fleming & Wentzell [52] (4 studies) | Halligan & Zecevic [49] (130 studies) | Current systematic review |
|---|---|---|---|---|---|---|
Top management support | √ | √ | √ | √ | √ | √ |
Teamwork | √ | √ | √ | √ | √ | |
Safety systems | √ | √ | √ | √ | ||
Feedback & Communication | √ | √ | √ | |||
Reporting Incidents | √ | √ | √ | √ | √ | |
Communication openness | √ | √ | √ | √ | ||
Organizational learning | √ | √ | √ | |||
Beliefs about the causes of errors & adverse events | √ | |||||
Work Pressure | √ | √ | ||||
Risk perception | √ | √ | ||||
Beliefs about the importance of safety | √ | |||||
Safety Attitudes of staff | √ | √ |