Background
Quality of care from the patient’s perspective is increasingly in the spotlight, but what exactly does it mean? From the mid-1980s onward, there has been a general shift in healthcare towards viewing patients as consumers of care [1]. With that shift came the notion that consumer satisfaction can serve as a measure of the quality of public health services [2, 3]. Over the past decades, the same underlying concept has been studied under different names and with slightly different content: patient satisfaction, patient empowerment, patient-centeredness, and patient experience. Patients’ satisfaction scores on certain aspects of healthcare proved hard to interpret, as the term satisfaction was not well defined and its simplicity did not acknowledge the multidimensional nature of satisfaction [4]. A shift was made from measuring the patient’s opinion to measuring facts in order to assess the quality of care. With that came a tendency to see the patient as a whole, autonomous person (patient-centeredness) who needs to be empowered to act as a full partner in the treatment process (patient empowerment) [5]. The more general term ‘patient experience’ arose around the same time and incorporated the former two terms. In this study, the latter term is used, as it does the most justice to the multidimensionality and complexity of quality of care from a patient’s perspective. Over time, there have been many initiatives to measure patient experiences. An important survey for measuring quality of care from the patient’s perspective is the Consumer Assessment of Healthcare Providers and Systems (CAHPS), a programme of the U.S. Agency for Healthcare Research and Quality [6]. This survey captures patients’ experiences in four dimensions: receiving necessary care, receiving care quickly, how well doctors communicate, and customer service.
In the Netherlands, a national programme to measure the quality of physical therapy care started in 2007. The programme was developed on a consensus basis among patients, physical therapists, health insurance companies, and the Health Care Inspectorate [7]. Apart from dimensions covering the quality of a practice’s performance and of its organisation, a tool was developed to assess the quality of care from a patient’s perspective. A modified RAND appropriateness Delphi procedure was used, in which the experts were sent not the evidence for the dimensions from a literature review, but the framework extracted from that literature [7, 8]. Agreement was reached in three rounds on ten quality dimensions from the patient’s perspective: accessibility, accommodation, information and communication, physical therapist’s approach, continuity, self-management support, intervention outcome, global perceived effect (GPE), length of intervention period, and patient-centeredness (see Table 1). A patient questionnaire covering 41 items was developed to measure these dimensions [7] (see Additional file 1).
Table 1
Proposed dimensions for patient experience: dimension, description and items measured
1 | Accessibility | The average degree (in %) of accessibility | access by phone; access by transport; free choice of therapist; free choice of appointment time; waiting time until first appointment; waiting time in practice (less than 15 minutes); appropriate treatment time; appropriate expertise (n = 8) |
2 | Accommodation | The average degree (in %) of accommodation requirements | hygiene; comfort (waiting and exercise room); enough chairs in waiting room; privacy; accessibility (n = 6) |
3 | Information and communication | The average degree (in %) of perceived information and communication | open attitude to questions; clear explanations; tried to understand my problem; informed about course of disease; informed about intervention period; clear intervention; explained daily exercises; advised on daily life; fit between the actual and expected intervention period; results of treatment discussed (n = 10) |
4 | Physical therapist’s approach | The average degree (in %) of perceived physical therapist’s approach | empathy; politeness; attentive listening; taken seriously; feeling at ease; taking into account specific needs (n = 6) |
5 | Continuity | The average degree (in %) of continuity | treatment by more than one therapist; adequate preparation; consistency of information; progress discussed with general practitioner (n = 4) |
6 | Self-management support | The average degree (in %) of perceived self-management support | working together to reach intervention goals; advice to prevent new complaints; monitoring the accuracy of the exercises at home; monitoring the adherence to the advice given (n = 4) |
7 | Intervention outcome | The average degree (in %) in which the intervention outcome is reached | increased performance in daily activities; fit between actual and expected intervention outcome (n = 2) |
8 | Global perceived effect (GPE) | The average degree (in %) in which the outcome in terms of the GPE is reached | Global Perceived Effect scale (GPE) (n = 1) |
9 | Length of intervention period | The average degree (in %) in which the length of the intervention period is as expected | fit between actual and expected intervention period (n = 1) |
10 | Patient-centeredness | The average degree (in %) of patient-centeredness | free choice of therapist (see 1); appropriate expertise (see 1); privacy (see 2); GPE score (see 8); fit between actual and expected outcome (see 7); discussed different treatment methods (n = 6) |
In these patient surveys, high item scores combined with low variance [3] raised questions about the usability of patient experiences for measuring differences in quality, and about using the patient’s perspective as an instrument to improve the quality of care. In other words, is the knowledge gained worth the burden placed on the patients? In the meta-analysis by Hush, Cameron, and Mackay [3], for example, the average satisfaction score of patients in physical therapy was 4.44 on a five-point scale, with a 95% confidence interval of 4.41–4.46. With such high scores and low variance, it becomes very difficult to distinguish high-performing practices from practices with lower quality of care. As a consequence, these measurements are not appropriate for the pay-for-performance strategies of insurance companies or as consumer information to guide choices between health care providers.
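The discrimination problem can be illustrated numerically. The sketch below is hypothetical: the means, spread, and sample sizes are invented for illustration and are not taken from the studies cited above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical practices whose true mean satisfaction differs by 0.1
# on a five-point scale; individual ratings are noisy and capped at 5,
# mimicking the ceiling effect seen in the surveys discussed above.
high = np.clip(rng.normal(4.5, 0.6, size=5000), 1.0, 5.0)
low = np.clip(rng.normal(4.4, 0.6, size=5000), 1.0, 5.0)

# The observed gap is even smaller than the true 0.1, because the ceiling
# compresses the top of both distributions.
observed_gap = high.mean() - low.mean()

# Fraction of the 'low' practice's ratings that exceed the 'high'
# practice's mean: with this much overlap, individual ratings carry
# almost no information about which practice performs better.
overlap = float(np.mean(low > high.mean()))
print(observed_gap, overlap)
```

With these invented numbers, the observed gap shrinks below the true difference and roughly half of the lower-scoring practice’s ratings still exceed the higher-scoring practice’s mean, which is why such scores struggle to support comparisons between providers.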
Low variance has been associated with the length of the questionnaire [9]: respondents become bored and fatigued with long surveys and less willing to put effort into answering questions. More uniform answers are given in longer surveys, which reduces the variance in the data. Related to this is the lack of consensus regarding the definition of the separate dimensions, and thus the number of items needed. The literature describes four to ten dimensions that should capture patient experiences with health care [10–13]. Reducing the number of dimensions, and thereby the number of items, lessens the burden placed on respondents and should be part of the development process to ensure the collection of high-quality data.
This study tests whether the consensus-based dimensions that measure patient experiences of physical therapy in primary care can be statistically identified, and whether item reduction is possible. The dimensions of quality measurements are often evaluated by examining their internal consistency; a factor analysis at item level to clarify the number of dimensions is much less common. Testing the internal consistency of the dimensions separately will not show whether the distinction between dimensions was justified to begin with, whereas factor analysis at item level will show whether the same dimensions can be extracted from the data. The aim of this study is therefore to perform an exploratory factor analysis at item level to detect the number of dimensions in patients’ experiences with physical therapy.
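The logic of using an exploratory factor analysis to count dimensions can be sketched as follows. This is a minimal, hypothetical illustration using simulated data and the Kaiser eigenvalue-greater-than-one criterion; it is not the study’s actual data or extraction method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate item scores driven by three latent factors: each of 24 items
# loads mainly on one factor (loading 0.7) plus independent noise.
n_patients, n_factors, items_per_factor = 1000, 3, 8
factors = rng.normal(size=(n_patients, n_factors))
loadings = np.zeros((n_factors * items_per_factor, n_factors))
for f in range(n_factors):
    loadings[f * items_per_factor:(f + 1) * items_per_factor, f] = 0.7

noise = 0.5 * rng.normal(size=(n_patients, n_factors * items_per_factor))
items = factors @ loadings.T + noise

# Count dimensions via the Kaiser criterion: each eigenvalue of the item
# correlation matrix larger than 1 suggests a retained factor.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_dimensions = int(np.sum(eigenvalues > 1.0))
print(n_dimensions)  # the three simulated factors are recovered
```

In practice, the eigenvalue criterion is usually combined with inspection of a scree plot and of the rotated loadings before settling on a final number of dimensions.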
Discussion
The aim of this study was to test how many dimensions could be distilled from patient experiences with physical therapy in primary care. Factor analysis showed that the ten proposed dimensions of patient experience can be reduced to three, and that as a result the number of items can be reduced by 15, which is more than a third.
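A common follow-up to such a factor analysis is to drop items that do not load clearly on any retained factor. The sketch below is purely illustrative: the loadings matrix and the 0.40 cut-off are hypothetical conventions, not the loadings or criterion used in this study.

```python
import numpy as np

# Hypothetical rotated loadings for 10 items (rows) on 3 factors (columns).
loadings = np.array([
    [0.72, 0.10, 0.05],
    [0.65, 0.08, 0.12],
    [0.15, 0.70, 0.02],
    [0.08, 0.62, 0.20],
    [0.05, 0.11, 0.68],
    [0.30, 0.25, 0.28],  # no clear loading on any factor
    [0.55, 0.05, 0.10],
    [0.12, 0.58, 0.09],
    [0.20, 0.22, 0.61],
    [0.18, 0.35, 0.21],  # no clear loading on any factor
])

# Retain an item only if its strongest loading reaches a minimum cut-off;
# 0.40 is a frequently used rule of thumb in exploratory factor analysis.
cutoff = 0.40
retained = [i for i, row in enumerate(loadings) if np.abs(row).max() >= cutoff]
dropped = [i for i in range(len(loadings)) if i not in retained]
print(retained, dropped)
```

Items without a clear loading (rows 5 and 9 here) are candidates for removal, which is how a factor solution with fewer dimensions also shortens the questionnaire.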
The reduction from ten, sometimes overlapping, dimensions to three clear, easy-to-interpret dimensions creates clarity for healthcare professionals, who can now see at a glance in which areas they can improve their services, as well as for patients, for whom the information on the quality of care becomes easier to comprehend. Finally, the item reduction makes the survey more feasible, placing less of a burden on the patients. Further research is needed to assess the quality of the shorter version of the questionnaire.
The dimensions found are comparable to the results of other studies in the field. Concurrent with the field test in this study, the CAHPS was introduced to the Netherlands, and so-called Consumer Quality Indices (CQI) were developed for several conditions and care settings [16]. The CQI uses three dimensions to measure patient experiences with health care providers (‘conduct of health care providers’, ‘access to care’, ‘receiving the care needed’). The dimension ‘conduct of health care providers’ is comparable to the dimension ‘personal interaction’ in this study, although the CQI uses only five items [13], of which four are exactly the same as those found in this study. The key area ‘relationship with the professional’ distinguished by May in his review of patient satisfaction in the management of back pain [10] and the dimension ‘clinical behaviour’ (of general practitioners) found by Wensing [17] are also comparable to ‘personal interaction’. Wensing [17] uses 16 items (including two on outcome), of which half are comparable to our items and the other half are occupation-specific to general practitioners. Further, the dimension ‘interpersonal care’ (of general practitioners) found by Bower, Mead and Roland [18] covers eight items, of which five are practically the same as in the current study. May’s review [10] further distinguished a key area ‘environmental issues’, which can be compared to the dimension ‘practice organisation’ in the current study, covering access and facilities components. Wensing [17] found ‘organisation of care’ (seven items), Bower, Mead and Roland [18] found the dimension ‘access’ (five items), and De Boer et al. [13] found ‘access to care’ (eight items). Again, about half of the items in these studies are similar to the items for ‘practice organisation’ in the current study. May’s review [10] was the only study to distinguish a separate key area on ‘clinical outcome’. Wensing [17] incorporated outcome in the dimension ‘clinical behaviour’, whereas the others did not mention outcome at all.
The concepts of personal interaction and organisational aspects are largely agreed upon in the literature, with some differences in content as well as in the number of items needed to form the scale. The aim should always be to minimise the strain on patients while maintaining the quality of the information; further research on item reduction in the quality dimensions of patient experience is needed to achieve this goal. The main difference within the literature concerns the dimension ‘outcome’, which was treated as a separate dimension in the present study. As May [4] points out, a positive outcome is not always correlated with a satisfied patient and should therefore be measured separately. Further, patients who seek the best treatment for their conditions might value information on the outcome scores of health care providers.
One of the major limitations concerns data collection. Selection bias might have played a role in this study, as the physical therapists themselves recruited the patients for participation. It was clear that the information from this survey could have financial consequences for physical therapy practices in the future, since health insurance companies are shifting from paying for quantity to paying for performance. It is therefore conceivable that physical therapists selected, for example, patients with less complex problems, patients who were treated successfully, or patients with whom they had good communicative relations. There are roughly three other ways to collect data from patients. The first option is permanent, continuous collection. However, the high scores on the dimensions of patient experiences do not justify such a time-consuming effort, for patients or for physical therapists. A second option is to randomly select patients for invitation, for example from the databases of health insurance companies or directly from the Electronic Medical Records (EMRs). A third way is to compare the experiences of patients with measurements of the quality of the physical therapy process for the same patients. Measuring the quality of physical therapy care from a patient’s perspective was part of a broader attempt to monitor the quality of physical therapy care as a whole. Besides patient experiences, the quality of the clinical reasoning process, with respect to screening and diagnostics, the intervention process, and the outcome, was also measured [7]. This survey was based on the existing guidelines concerning the necessary steps in the clinical reasoning process and was completed by the physical therapists. If these data could be extracted directly and randomly from the EMRs, and if the selected patients could also be invited to participate in the patient experience survey, the results could be compared. Assessing the same process from different perspectives can be very valuable: understanding differences in perception between therapist and patient can help professionals better understand the needs of the patients they are treating, and thus improve the (perceived) quality of care. However, as this has, to the authors’ knowledge, never been described in the literature, more research is necessary to establish its added value for measuring and ultimately improving the quality of care.
Secondly, most quality dimensions are developed through a consensus-based process. Consensus is a very important first step in creating a basis for quality research and the development of quality measurements. Involving all stakeholders can create the support base necessary to ensure the participation of all parties involved. A good starting point is to prioritise subjects with a broad scope and to discuss what is important for patients. In this way, ten dimensions of patient experience were proposed to be tested in the field. Statistical testing should be part of the development process. Often, however, as was the case in our study, quality programmes are introduced nationally while information on their measurement properties is still being collected; the pressure from stakeholders to supply data is high. Still, this study has shown that factor analysis is a valuable next step in the development process, as it can redefine and sharpen the proposed dimensions of quality of care from a patient’s perspective. In trying to satisfy patients and meet their needs, the consensus procedure led to an overestimation of the number of dimensions patients distinguish, as the analysis showed, even though patient organisations were involved in the development process. Sharpening the definitions of the dimensions of the patient’s perspective will help to better measure the quality of care, and it becomes clearer where the possibilities for improving the quality of care lie. Finally, patients do not benefit from too many, vaguely formulated dimensions; with three clear dimensions they can easily compare practices on the dimensions they value most.
Lastly, only a small number of the patients who participated in the data sampling had finished their treatment (n = 350), although this was a requirement in the instruction to the physiotherapists. This means that most of the patients were still being treated, a situation that could also lead to bias, as these patients were still dependent on their physical therapists. It also means that the items measuring outcome were calculated on a small proportion of patients instead of the patient sample as a whole. This limitation could be a result of the relatively short period of data collection.
A compelling question, given the high scores and low variance, is whether patients should be bothered with surveys on the quality of care at all, as the CQI, for instance, also produced very high scores and low variance [13]. Further studies need to examine whether the reduced length of the questionnaire increases variance and thus increases the quality of the data. However, there are other ways to monitor quality of care, or to identify the bad apples. The quality of ‘personal interaction’ can also be monitored through a mandatory, openly accessible complaint registration. However, studies of such complaint systems in hospital care conclude that many adverse events go unreported by patients and health professionals [19]. A combination with other forms of quality measurement is therefore necessary, such as a shorter survey on patient experiences every three years or so, to ensure sufficient information on the quality of care while minimising the strain on patients. Practices can be audited at any time by the Inspectorate, should the complaint registration or low performance scores on the patient experience survey give rise to concerns about the quality of care. It is also questionable whether patients should be asked to evaluate the dimension ‘practice organisation’ at all. To assess the most basic organisational requirements, certifications can serve as quality measurements just as well as asking patients, if not better. Since many physical therapy practices already have a certification, why ask the patients as well? One problem is that certifications cost considerable money and time. Besides this, they are not mandatory, so practices can choose not to participate.
Based on the above, we recommend a shorter survey, administered every three years, among patients randomly selected from the EMRs, so that patient experiences can be triangulated with other quality data. Besides this, a visible and mandatory complaint desk (physical or digital) should be implemented to monitor the quality of care at all times. If need be, the Inspectorate can audit low-performing practices based on the number of complaints or low performance on the surveys.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
MS participated in the design of the study, performed the statistical analysis, and drafted the manuscript. HC helped to draft the manuscript. MWGN and JB conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.