Background
Improved quality of care is a policy objective of health care systems around the world. Research findings are sometimes implemented by helping professionals to acquire skills or knowledge, sometimes by making systems changes within health care organisations, and sometimes by legislation which restricts or controls practice. Over the past decade, health care systems have invested heavily in the development of clinical practice guidelines and associated quality improvement interventions [1]. However, these efforts have had variable success [2].
Implementation research is the scientific study of methods to promote the systematic uptake of clinical research findings into routine clinical practice, and hence to reduce inappropriate care [3]. It includes the study of influences on the behaviour of health care professionals and health care organisations. The emphasis is generally on how treatments can be delivered effectively, rather than on measuring the difference an idealised treatment makes. Experimental studies that use cluster randomised designs are generally more appropriate for evaluating interventions in implementation research than individual patient randomised controlled trials [4]. Cluster randomised trials randomise an intact social unit (cluster) to an intervention and collect data from individuals within that social unit. In implementation research, a cluster may be defined as an individual health care professional, a family practice, a hospital department, or a hospital, and data are commonly collected on patients cared for in the cluster. Cluster randomised trials are commonly undertaken to minimise the risk of contamination that could occur in a patient randomised trial if the care of control patients were influenced by the experience of the health care professional providing care to experimental patients [4]. Various codes of medical ethics, such as the Nuremberg Code [5] (see text below) and the Declaration of Helsinki [6], inform medical research. We have previously examined their applicability to cluster randomised trials in general [7], but their application to cluster randomised trials in implementation research is not obvious.
Key ethical considerations: The Nuremberg code
1. Voluntary consent of the human subject is absolutely essential. Ascertaining the quality of the consent rests upon each individual: a responsibility which may not be delegated.
2. The experiment should ... yield fruitful results for the good of society.
3. Anticipated results must be justified by background knowledge.
4. Avoid all unnecessary physical and mental suffering and injury.
5. Not conducted if a priori reason to believe death or disability will occur.
6. Degree of risk taken to be balanced by the humanitarian importance.
7. Proper preparations should be made to protect the experimental subject.
8. Only conducted by scientifically qualified persons.
9. Subject should be at liberty to end the experiment.
10. Early stopping of experiment if risk of injury, disability, death.
The primary ethical requirement of consent (central to statistical and biomedical codes of conduct [8-10]) raises particular issues for cluster randomised designs [7,11]. Examples of three cluster randomised trials in implementation research are described in the text below.
The NEXUS Trial [12]
This study evaluated the effectiveness of audit and feedback and educational reminder messages in implementing the UK Royal College of Radiologists' guidelines for lumbar spine and knee x-rays in UK general practices. The study was undertaken in six radiology departments and the 247 general practices that they served. The study design was a before-and-after pragmatic cluster randomised controlled trial using a 2 × 2 factorial design. A randomly chosen subset of general practice patient records (paper and computerised) was examined to assess concordance with criteria derived from the guidelines. The effect of educational reminder messages (expressed as x-ray requests per 1,000 patients) was an absolute change of -1.53 (95% CI: -2.5, -0.57) lumbar spine requests and of -1.61 (95% CI: -2.6, -0.62) knee x-ray requests, relative reductions of approximately 20%. Similarly, the effect of audit and feedback was an absolute change of -0.07 (95% CI: -1.3, 0.9) lumbar spine x-ray requests and of -0.04 (95% CI: -0.95, 1.03) knee x-ray requests, relative reductions of about 1%. None of the differences in concordance between groups were statistically significant.
The COGENT (Computerised Guidelines Evaluation in the North of England) Trial [13]
This was a before-and-after cluster randomised controlled trial, which used a two by two incomplete block design to evaluate the use of computerised decision support (CDSS) to implement clinical guidelines for the primary care management of two conditions: asthma in adults and angina. Practices eligible to participate in the study were those with one of two computing systems, and with at least 50% of the general practitioners reporting use of their practice computer system to view clinical data and for acute prescribing. Process of care data were collected in two ways: by electronic retrieval from the computerised medical record and by abstraction from paper medical records. (At the time of the study the majority of general practices had both electronic and paper records on the same patient.) Patient-based outcomes were assessed by postal surveys using a range of generic and condition-specific measures administered at three points in time: approximately a year before the intervention, just before the intervention, and approximately a year after the intervention. There were no significant effects of CDSS on consultation rates, process of care measures (including prescribing), or any quality of life domain for either condition. Levels of use of the CDSS were low.
The DREAM Trial [14]
This was an evaluation of the effectiveness and efficiency of an area-wide 'extended' computerised diabetes register, which incorporated a full structured recall and management system, actively involved patients, and included clinical management prompts to primary care clinicians based on locally adapted evidence-based guidelines. The trial, in 58 general practices in three Primary Care Trusts in the northeast of England, was a pragmatic cluster randomised controlled trial with the general practice as the unit of randomisation. The computerised structured recall and management system improved care for people with diabetes. Patients in intervention practices were more likely to have at least one diabetes appointment recorded (OR 2.00, 95% CI 1.02, 3.91), to have a recording of a foot check (OR 1.87, 95% CI 1.09, 3.21), to have a recording of receiving dietary advice (OR 2.77, 95% CI 1.22, 6.29), and to have a recording of blood pressure (BP) (OR 2.14, 95% CI 1.06, 4.36). There was no difference in mean HbA1c or BP levels, but the mean cholesterol level in patients from intervention practices was significantly lower (-0.15 mmol/l, 95% CI -0.25, -0.06). There were no differences in patient-reported outcomes, or in patient-reported use of drugs or uptake of health services. NHS investigation and treatment costs, and costs to patients, were not significantly increased by the intervention; there were administrative costs, and the intervention may have had an impact on costs within general practice.
The conduct of cluster randomised trials in implementation research raises a series of questions: What is consent, and who should give it? What does the freedom to withdraw from an experiment mean in implementation research? Indeed, should implementation research be considered 'biomedical research' for ethical purposes? Although the use of medical records in research is considered by the Council for International Organizations of Medical Sciences [15], these guidelines claim that public health and other forms of health care research designed to contribute directly to the health of individuals or communities can be distinguished from biomedical research. The Declaration of Helsinki [6] on biomedical research states that a 'research protocol should always contain a statement of the ethical considerations involved ...'. However, the ten ethical considerations of the Nuremberg Code, listed above, are not easily translated into the context of implementation research. While ethical justification for randomised controlled trials relies heavily on the current state of clinical knowledge and individual consent, for implementation research aspects of distributive justice, economics, and political philosophy inform the debate, and the ethical theories of virtue, duty, and utility are important. In this paper, we discuss the ethical challenges relating to consent in cluster trials in implementation research.
Summary
Greater access to well-grounded information benefits society, so implementation research, which endeavours to translate improvements in clinical research into improvements in health care, is ethically commendable. Implementation research should be guided more by the principles of social science research, with the clinical treatment of patients being governed by ordinary professional practice. Seeking individual informed consent is not merely expensive: it may also be futile, as those choosing to respond will almost certainly be unrepresentative, and hence the study results will be biased. In reality, societies may face a political decision between individual informed consent and implementation research.
While ethical justification for clinical trials relies heavily on individual consent, for implementation research aspects of distributive justice, economics, and political philosophy underlie the debate. These ethical issues have been thoroughly debated in the social sciences. Biomedical codes focus on doctor-patient relations, whereas obligations to a variety of interest groups, ownership of information, rights of access and exploitation of data, and responsibilities for professional development are addressed in social science codes. We suggest that social sciences codes could usefully inform the consideration of implementation research by members of research ethics committees. We recommend that training on the particular features of implementation research be offered to those on research ethics committees.
Competing interests
JLH: none declared. MPE and JMG have both submitted implementation trial protocols to ethics committees and had difficulty explaining to them the differences between implementation trials and individual patient clinical trials.
Authors' contributions
JLH, MPE and JMG together developed the idea for this paper. JLH led the writing. All authors commented on sequential drafts and approved the final version.