

Protocol for a process evaluation of a cluster randomised controlled trial to improve management of multimorbidity in general practice: the 3D study
  1. Cindy Mann1,
  2. Alison Shaw1,
  3. Bruce Guthrie2,
  4. Lesley Wye1,
  5. Mei-See Man1,
  6. Sandra Hollinghurst1,
  7. Sara Brookes3,
  8. Peter Bower4,
  9. Stewart Mercer5,
  10. Chris Salisbury1
  1. 1Centre for Academic Primary Care, University of Bristol, Bristol, UK
  2. 2Division of Population Health Sciences, University of Dundee, Dundee, UK
  3. 3School of Social and Community Medicine, University of Bristol, Bristol, UK
  4. 4Centre for Primary Care, University of Manchester, Manchester, UK
  5. 5Department of General Practice and Primary Care, University of Glasgow, Glasgow, UK
  1. Correspondence to Cindy Mann; Cindy.mann@bristol.ac.uk

Abstract

Introduction As an increasing number of people are living with more than 1 long-term condition, identifying effective interventions for the management of multimorbidity in primary care has become a matter of urgency. Interventions are challenging to evaluate due to intervention complexity and the need for adaptability to different contexts. A process evaluation can provide extra information necessary for interpreting trial results and making decisions about whether the intervention is likely to be successful in a wider context. The 3D (dimensions of health, drugs and depression) study will recruit 32 UK general practices to a cluster randomised controlled trial to evaluate effectiveness of a patient-centred intervention. Practices will be randomised to intervention or usual care.

Methods and analysis The aim of the process evaluation is to understand how and why the intervention was effective or ineffective and the effect of context. As part of the intervention, quantitative data will be collected to provide implementation feedback to all intervention practices and will contribute to evaluation of implementation fidelity, alongside case study data. Data will be collected at the beginning and end of the trial to characterise each practice and how it provides care to patients with multimorbidity. Mixed methods will be used to collect qualitative data from 4 case study practices, purposively sampled from among intervention practices. Qualitative data will be analysed using techniques of constant comparison to develop codes integrated within a flexible framework of themes. Quantitative and qualitative data will be integrated to describe case study sites and develop possible explanations for implementation variation. Analysis will take place prior to knowing trial outcomes.

Ethics and dissemination Study approved by South West (Frenchay) National Health Service (NHS) Research Ethics Committee (14/SW/0011). Findings will be disseminated via a final report, peer-reviewed publications and practical guidance to healthcare professionals, commissioners and policymakers.

Trial registration number ISRCTN06180958.

  • Process evaluation
  • Multimorbidity
  • Patient-centred care
  • Protocol
  • Family Practice

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • A context-sensitive, preplanned process evaluation of a highly complex health service intervention addressing an important healthcare challenge.

  • The main limitation is the small number of case studies and the absence of a usual care case study. However, we have planned a comprehensive survey of practice characteristics and usual care in all practices at the beginning and end of the trial to evaluate the influence of context.

  • A strength of this evaluation is that it has been subject to careful prioritisation based on a detailed description of the intervention and of the trial processes (figures 1 and 2).

  • Key decisions about focus have been made prospectively, but by committing to only four case studies initially, we have retained flexibility to respond to emerging issues and/or to investigate one usual care practice.

  • Depth of evaluation in the case studies has been balanced with breadth by collection of quantitative implementation data from all intervention practices to contribute to assessment of fidelity of adoption, delivery and maintenance.

Introduction

Multimorbidity presents considerable challenges for the organisation and delivery of healthcare because of the increased complexity of managing several coexisting conditions and the increased demand on time and resources.1,2 It is now a common situation in primary care1 and the need to find efficient and effective approaches to this challenge is becoming more urgent as multimorbidity becomes more prevalent.3 However, there is a current lack of evidence about which interventions may be effective.4 Successful interventions are likely to be complex and pragmatic in order to be transferable to different patient situations and different healthcare contexts and will need to be tested in large-scale multisite trials. Evaluation is therefore challenging. This paper presents the protocol for the process evaluation of a trial of one such intervention.

The 3D study

The trial that is the subject of this process evaluation is a multisite cluster randomised controlled trial of an intervention to improve the management of people with multimorbidity in general practice—the 3D (dimensions of health, drugs and depression) study. The 3D study protocol is published in full,5 so this paper will only briefly describe the 3D intervention to give context to the process evaluation design.

The 3D intervention is driven by the concept of patient-centred care, developed from previously published definitions6,7 and stated in the trial protocol as:

  • A focus on the patient's individual disease and illness experience: exploring the main reasons for their visit, their concerns and need for information.

  • A biopsychosocial perspective: seeking an integrated understanding of the whole person, including their emotional needs and life issues.

  • Finding common ground on what the problem is and mutually agreeing management plans.

  • Enhancing the continuing relationship between the patient and doctor (the therapeutic alliance).

The intervention aims to address known problems for people with multimorbidity. These include high illness burden, high treatment burden, poorly coordinated care, low quality of life, increased risk of depression and polypharmacy.8–10 The intervention consists of strategies to address these problems (figure 1) implemented at general practice level and individual clinician level, which include improved continuity, coordination and integration of care combined with a more holistic approach to achieve patient-centred delivery of care. Each patient will be allocated a specific general practitioner (GP), and nurse if appropriate, who will review all of the patient's long-term conditions (LTCs) in a single, longer, holistic, two-part review (3D review), repeated six monthly. The review is intended to balance the patient's agenda and quality of life issues with disease control and will replace separate disease-focused reviews. Each review addresses the 3Ds of dimensions of health, drugs and depression.

Figure 1

3D logic model including theoretical mechanisms of action. 3D, dimensions of health, drugs and depression; GP, general practitioner; LTC, long-term condition; QOL, quality of life.

General practices (GP practices) in two areas of England and in one part of Scotland will be recruited. The research team will train those randomised to deliver the intervention, and will discuss with them how they will fit the intervention to their own context in a way that maintains fidelity of function.11 The clinical staff (GPs and nurses) will be trained to implement certain strategies that fall within a patient-centred approach when reviewing those patients and to use a new consultation template that automatically adapts to the individual's conditions and supports the intervention. Reception staff in the GP practices will receive training to arrange the reviews and other appointments of the identified patients with their usual nurse or GP in extended time slots.

An external pilot took place prior to the main trial and influenced the final trial design. It was subject to a formative process evaluation, the aim of which was to support optimisation of the intervention by identifying processes that could be improved and factors that might threaten implementation of the intervention. It also provided the opportunity to test the process evaluation design for the main trial. Findings from this stage that resulted in changes to the intervention design are reported in the linked paper by Man et al.5

Process evaluation

A process evaluation accompanying a randomised controlled trial aids evaluation of the effectiveness of an intervention by investigating how it was implemented, the mechanisms by which it achieved its effect and how the intervention interacted with the context in which it was implemented.12–15 Researchers conducting process evaluations typically investigate the extent to which the intervention is reaching the participants for whom it is intended (reach), whether all the intervention components are being delivered as intended by the research team (delivery and fidelity), participants’ experiences of receiving an intervention (response), the extent to which an intervention is maintained over time (maintenance), and contextual factors that may influence maintenance and the precise form of the intervention delivered.12,16–18 Process evaluations can help researchers to distinguish between intervention failure (where the intervention concept is flawed) and implementation failure (where the intervention is poorly delivered).13,19 Theory may be used to focus the evaluation and provide additional insight into causal mechanisms affecting outcomes.14,20,21

Arguably, process evaluations are particularly necessary in multisite, pragmatic trials, where there is likely to be variation in the way the ‘same’ intervention is implemented.13 In cluster trials, outcomes are often measured at the individual level but the intervention is applied by the research team to the cluster, which implements it and applies it at the individual level. There may therefore be processes operating at both cluster and individual level, which are candidates for evaluation.22 Evaluation of processes within control clusters may also be important, for example, to understand the ‘usual care’ comparator.22

Process evaluations may employ mixed methods, both quantitative and qualitative, and apply these at different stages of the trial depending on the purpose of the evaluation.16–18,21 They may be used to optimise the design of the intervention and/or to provide insight into outcomes following its implementation.13,14,22,23

Until recently, there has been a lack of guidance for the design of process evaluations of complex interventions in healthcare trials, and protocols are not commonly published. However, the recently published Medical Research Council (MRC) guidance on the process evaluation of complex interventions provides a comprehensive set of recommendations.12 Among these are recommendations that process evaluation researchers should prospectively state the aim of the evaluation and the research objectives, identify the processes they plan to study, the methods they will use and how they plan to integrate process and outcome data when analysing the results. This paper aims to fulfil those recommendations.

Process evaluation methods

Process evaluation design considerations

Variation in implementation of the 3D intervention is likely, due to multiple intervention components and diverse contexts and participants, both clinicians and patients. Practices’ differing characteristics influence their care arrangements for patients with LTCs and will affect the roles and expectations of clinical and administrative staff as well as patients' baseline experience and expectations of care. These differences, together with diverse local healthcare environments, policies and priorities may affect the ease with which the intervention integrates with existing practice and the extent of the change from usual practice. In addition, during the period of the trial, policy initiatives around health and social care integration and avoiding emergency hospital admission are likely to increase and other new incentivised services addressing the needs of those with LTCs may be introduced. These may differ across areas and will at least partly target the trial patient population, potentially affecting both the intervention and usual care. All of these factors may impact outcomes and constitute the context to be evaluated.

Contextual differences mean that assessment of fidelity must allow for local adaptation. Controlling the form of the intervention too tightly may undermine, rather than enhance, its effectiveness11 and local adaptation may actually result in more successful implementation of the principles of the intervention. The intervention description has therefore been framed in terms of strategies and principles, with additional detail where this is considered essential to implementation (figure 1). This will form the basis for evaluation—fidelity of function, rather than fidelity of form.11 Modifications made to the intervention to facilitate implementation within the local context can be assessed as to whether they are consistent with the definition of the intervention and fulfil fidelity criteria.24 A thorough record of varying contexts and corresponding adaptations will facilitate replication in other settings.24

The design of the main trial process evaluation was informed by the external pilot and is primarily based on a recently published framework for process evaluation of cluster randomised controlled trials.22 The framework, which describes the various trial stages as potential targets for evaluation, has been used to guide selection of the most relevant targets in this case. The intervention diagram (figure 1) serves also as a logic model for identifying assumed mechanisms of action, as recommended by the recently published MRC guidance on process evaluation of complex interventions.12 The mechanisms of action provide the focus of the process evaluation within the selected stages. Many of these can be grouped under the concept of patient-centred care but, in common with many complex interventions, there is no single intervention theory that explains them all.12 Normalisation process theory (NPT)25 is relevant to the implementation and maintenance of the intervention and has helped to inform the process evaluation design, for example, in considering the effect of practice context and existing organisation of care on how the intervention may integrate with usual practice.

To make best use of limited evaluation resources, it is necessary to focus on factors that may have the greatest impact on implementation and/or on areas that are considered critical to the success of the intervention and which may be vulnerable to poor implementation.12,22 Since the external pilot confirmed that there are significant differences in practice size and organisation and that these drive variation in how the intervention is adopted, delivered and maintained, context is a key focus. Variation in adoption, delivery and maintenance as well as patients' responses to the intervention, affecting both reach and fidelity, are likely to be important factors in outcome differences. Therefore, these trial stages, in addition to context, will form the scope of the evaluation. The concept of patient-centred care will inform the evaluation since this is the core concept underlying the intervention. The dimensions of the evaluation for the main trial are illustrated in figure 2 and are as follows:

  1. Context of each practice, including the practice's structure and organisation of care and its local context;

  2. Practices’ adoption of the intervention, including necessary organisational changes;

  3. Practice health professionals’ delivery of the intervention to patients;

  4. Response of patients to the intervention;

  5. Maintenance and unanticipated effects, including impact of expected incentives and other measures intended to support implementation and maintenance.

Figure 2

Conceptual diagram showing focus of 3D process evaluation. 3D, dimensions of health, drugs and depression; GP, general practitioner.

Recruitment of practices and patients will be evaluated as part of the main trial. Although research team activity in training the practices is arguably an important part of the intervention, this was evaluated and optimised during the external pilot and it is not included in the scope of the main trial evaluation.

Aims and objectives

The overall aim of the process evaluation is to better understand how and why the intervention was effective or ineffective, and to identify contextually relevant strategies for successful implementation as well as practical difficulties in adoption, delivery and maintenance to inform wider implementation.

Objectives are:

  1. To establish practice and local health area context in all intervention and usual care practices at the beginning and end of the trial period to:

    • Identify differences in usual care and how this might have affected adoption, delivery and maintenance;

    • Identify changes in the care of patients with multimorbidity occurring in intervention and usual care practices during the trial period which might affect outcomes.

  2. To explore how and why organisational aspects of the 3D intervention were implemented (or not).

  3. To explore how health professionals in case study practices delivered the intervention to patients, whether all components were included, how and why it varied, and to what extent they changed their practice to make it more patient centred.

  4. To explore how patients responded to the 3D intervention and to what extent they experienced care as patient centred.

  5. To explore how and why practices maintained (or did not maintain) reach and delivery of the intervention.

Figure 2 illustrates the design of the process evaluation, the trial stages that have been prioritised and how the objectives relate to the stages.

Overall design

The overall design is a mixed-methods study using quantitative and some qualitative data from all practices, and observation, interview and focus group data from four purposively selected case study practices.

Data from all practices

Mixed quantitative and qualitative data will be collected from all trial practices, including those in the usual care arm, using a purpose-designed practice profile form. Practices will be profiled, including size and organisational characteristics and how they manage LTC reviews. This will facilitate assessment of practice context and effects of contextual variation. In intervention practices only, collection of quantitative data via the EMIS web electronic patient record system will contribute to the evaluation of fidelity by showing whether or not each component of the intervention was delivered in each intervention practice. Some of these data will be used as part of the intervention to provide monthly feedback to practices to support maintenance, as well as in the process evaluation. The data will provide information about continuity of care and appointments as well as clinician delivery of 3D review components and the reach of the intervention.

Case studies

Sampling and recruitment

A minimum of four intervention practices will be purposively sampled as case studies.26 The aim will be to achieve a sample that encompasses variety in intervention implementation and maintenance, using information available shortly after practices are randomised. A judgement will be made on the basis of information available from the researchers training the practices about how the practices plan to implement the intervention and any difficulties they have identified, supplemented by information about differences in size of practice, and assessed similarity of usual care at baseline to the 3D intervention. This is based on the assumptions that: (1) practices that are already organised in a way that is more consistent with 3D may adopt it more readily and (2) larger practices may have lower continuity of care and a lower proportion of their clinicians taking part in 3D, which may affect implementation, maintenance and outcomes. Data that would allow us to sample directly for variation in adoption, delivery and maintenance will not be available prospectively. However, we anticipate that the above criteria will result in sufficient heterogeneity to provide examples of relatively poor and relatively good adoption, delivery and maintenance, and will allow us to identify barriers and facilitators to implementation and to generate hypotheses about factors that may be associated with differing outcomes. Adoption includes the steps described in figure 1 under ‘Practices’ organisation of care’, such as allocating a usual GP, and delivery includes the components listed in figure 1 under ‘Clinicians’ conduct of reviews’, for example, provision of a health plan.

The practices will be sampled and enrolled into the process evaluation as case studies before or soon after they have received their training and begun implementation, because not all the information on which sampling is based will be available before then. Data will be collected from that point and continue throughout the delivery of the intervention, which will allow us to observe the whole process of their participation in the trial from set-up through to completion. Although only four case study practices will be followed through the entire process, some flexibility over the number of practices investigated and the number of interviews conducted will be retained to allow for ‘responsive’ qualitative data collection in additional practices to investigate emerging issues.12 For example, a practice with organisational challenges (such as staff shortages or a nursing skill mix that may affect set-up), one that fails to adopt the intervention as expected, or one that fails to deliver an element of the intervention (see figure 1) may be sampled later on. This will depend on findings that emerge from ongoing concurrent analysis. If resources allow, a usual care practice may also be recruited as a case study to provide a comparison to care provided in intervention practices and to check whether findings from case study practices reflect the intervention, rather than simply evolving usual care.

Health professionals participating in 3D in the case study practices will be purposively sampled to achieve maximum diversity of role and experience, for example, inclusion of healthcare assistants as well as research nurses and practice nurses in the nurse sample. The lead GP, practice manager or other practice staff may be interviewed depending on implementation or delivery issues uncovered in the course of data collection. Patients of case study practices who have received 3D reviews will be sampled for variation taking into account age, gender, combination of health conditions and usual GP/nurse.

When practices and clinicians are approached to participate in the main trial, they will be informed that a process evaluation will be conducted and that participation in the process evaluation is optional. Additional written information will be provided to individual staff invited to participate in interviews and/or observation and recording of consultations and their informed consent secured. Patients consenting to participate in the trial will be informed that they may be invited to take part in an interview, focus group or observation of consultations for the process evaluation. If they are sampled for inclusion, they will receive detailed information and their consent will be obtained, prior to their involvement in the process evaluation.

Case study data collection

The qualitative data collected from the case study practices will provide more in-depth understanding of the context of each practice, how the practices implement, experience and maintain the intervention, and how their patients experience receiving it, for example, whether a patient-centred approach was achieved. The case studies will also shed light on whether, how and why variation occurs in delivery of each of the intervention components and how context and implementation interact. A variety of qualitative methods will be used in the case studies, including focus groups with patients, interviews with staff and patients, observation and/or recording of 3D review consultations and field notes. In the course of repeated visits to the practice, informal observation together with field notes will contribute to understanding the practice context. Interviews with reception staff and observation of reception functions will clarify how 3D appointments are being arranged.

To supplement case study data on the effect of local context on implementation, broader contextual data will be obtained from interviews with healthcare commissioners in each trial site. This will enable assessment of differences between health areas and of co-occurring changes in the wider healthcare environment, and indicate how the intervention might be received in different healthcare environments.

Qualitative data will be collected using an encrypted digital audio-recording device. Field notes will be made at each practice visit. Audio data will be transcribed verbatim and anonymised prior to analysis.

Methods for each of the five objectives are described in tables 1–5.

Table 1

Methods for objective 1

Table 2

Methods for objective 2

Table 3

Methods for objective 3

Table 4

Methods for objective 4

Table 5

Methods for objective 5

Analysis

Data about usual care from all practices

Quantitative and qualitative data about usual care in all trial practices will be used to assess similarity of care at baseline to the intervention and to assess whether this changes over the course of the trial. A baseline description of the practice and of its usual care for patients with multimorbidity will allow comparison of care across practices and across areas. By repeating this at the end of the trial, it will also allow us to assess whether usual care has changed over the course of the trial, and feed into interpretation of results by facilitating comparison of care provided by intervention and usual care practices. If there is a clear trend towards initiatives that are very similar to 3D, any improvements in care resulting from 3D may be matched by those that occur as a result of ‘secular trends’.29 It is also expected that the various components of the intervention may not be uniformly implemented across practices, in part due to differences in context, for example, nurse roles, baseline arrangements for LTC care and whether the practice has its own pharmacist, and that this may be reflected in differences in outcomes. Therefore, variation in outcomes by practice will be compared with variation in practice characteristics to generate hypotheses about any associations observed.

Quantitative data from all intervention practices

The quantitative data collected from all intervention practices will be subject to descriptive statistical analysis to provide information about the differential implementation rates of the intervention components. This will be related to trial outcomes and will facilitate comparison of case study practices with all intervention practices regarding implementation fidelity.

Case study data

The qualitative data will be analysed in parallel with ongoing data collection, so that exploration of emerging issues can be incorporated into future data collection and further sampling of practices or individuals can take place subject to resource availability. NVivo V.10 software (QSR International) will be used to facilitate development of a coding matrix following framework principles,30 with built-in flexibility to allow identification of anticipated and emergent themes. A flexible framework analysis is an appropriate method to use, in addition to detailed case study descriptions, since we are interested in specific problems to do with multimorbidity, specific intervention components and implementation issues that may run across practices, while also being concerned to capture new, unexpected issues that arise during the course of the process evaluation. The field notes from training observation and practice visits will help to generate rich descriptions of the individual case study sites, while the consultation observations will give detailed insight into the conduct of reviews. Evaluation of the patient centredness of the interactions will be based on an analysis of the interactions, informed by a conversation analytic approach.31 The transcripts of these will be coded alongside transcripts from the interviews and focus groups to provide different perspectives on the implementation of the intervention and contribute to fidelity assessment along with the quantitative data from all intervention practices. Detailed understanding of the context of the case study sites, for example, availability of nurses with appropriate skills, will contribute to understanding how different contexts may influence intervention implementation and to what extent findings can be transferred between sites. Techniques of constant comparison32 will be used to generate initial codes that will be built into higher level categories and themes, both within and across the case studies. Themes relating to the key components of the intervention and how they were implemented, maintained and received will help to interpret trial results and generate hypotheses about factors influencing effectiveness. In particular, we will examine the data for insights into whether patient-centred care has been delivered, experienced and maintained, as this is the concept underpinning the intervention and a main focus of the evaluation. Emerging themes and the relationship of the data to the conceptual literature underpinning the intervention and relevant theories, such as NPT,25 will be discussed and refined at team meetings throughout the research. Analysis will be strengthened by the involvement of more than one person, including a member of the patient and public involvement group, to check trustworthiness and credibility of interpretation of the qualitative data. This group will also provide the patient perspective on indicators of patient-centred behaviour to inform analysis of interactions in the consultation recordings.

Integration of the qualitative and quantitative case study data regarding implementation of different components of the intervention will allow detailed evaluation of the quality of intervention implementation in each case study site. For example, quantitative data relating to the number and type of patient problems identified for the agenda and included in the health plan can be integrated with observational data about how clinicians addressed agenda setting and health planning. The additional integration of interview data about patients’ and clinicians’ experiences of reviews will provide in-depth insight into those processes.

Integrating results of analysis

The process evaluation data will be analysed before knowing the trial results and the main trial will be analysed independently of the process evaluation findings. Once both analyses are complete, combined analysis of qualitative and quantitative data across the cases may help to develop explanatory hypotheses about why the intervention appeared to be implemented more successfully in one site than another and why some components were implemented whereas others were not. This may lead us to identify factors which are plausibly and/or consistently related to successful or unsuccessful delivery of the components of the intervention and changes to patients’ experience of care.13,26 Analysis of the differential implementation of various components may also help to elucidate causal mechanisms and suggest which components of the intervention were more effective and how and why. Following statistical analysis of trial outcomes, the qualitative data may be re-examined in the light of the trial results to help explain them. If appropriate, the trial statistician may carry out additional analysis to test hypotheses generated from integration of the process evaluation data with the trial outcomes.

The final stage of analysis will be to draw together the findings from the broader quantitative analysis and the in-depth case studies to create an understanding of why the intervention did (or did not) work in all or some contexts, and identify implications for longer term implementation if appropriate.

Ethics and dissemination

The design of this process evaluation is covered within the ethics application and overall protocol of the 3D study (protocol v6.0 05-01-16). A favourable ethical opinion of the 3D study was given by the South West (Frenchay) NHS Research Ethics Committee (ref 14/SW/0011) on 20 March 2014.

Findings will be disseminated via peer-reviewed journals, conferences and seminars with health services commissioners and other interested stakeholders. Social media and newsletters will be used to disseminate headline outcomes. Reports will be provided to bodies with influence in primary care provision, such as the Royal College of General Practitioners and NHS England, as well as to the funding body and participants.

Conclusion

This paper reports the design and methods for the planned mixed-methods process evaluation of the 3D trial. The process evaluation protocol and results will contribute to the developing understanding and overall body of work on the value of process evaluation in complex health service delivery and clinical trials. The process evaluation protocol conforms to recommendations intended to facilitate standardisation of process evaluation design and reporting12 ,16 ,22 in order that synthesis of results of similar trials may become possible in future.

References

Footnotes

  • Twitter Follow @CAPCBristol

  • Contributors All authors except LW contributed to the conception and design of the 3D study. CM, AS, BG, LW and CS developed the detail of the process evaluation protocol. CM drafted the manuscript and all authors reviewed it critically for intellectual content and approved the final version submitted for publication.

  • Funding This project was funded by the National Institute for Health Research Health Services and Delivery Research Programme (project number 12/130/15).

  • Disclaimer The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the HS&DR Programme, NIHR, NHS or the Department of Health.

  • Competing interests None declared.

  • Ethics approval South West (Frenchay) NHS Research Ethics Committee (ethics approval number 14/SW/0011).

  • Provenance and peer review Not commissioned; externally peer reviewed.