The BeWEL trial met its recruitment target. However, despite close monitoring and considerable resources and effort being targeted at recruitment, the trial still required a six-month no-cost extension from the funder to meet this target. Although the funder provided no additional funding, extending the trial by six months was possible because of flexibility shown by contracted staff and because core-funded and other departmental staff contributed more time. By three months into the trial, it was clear that recruitment was not going as planned, and a series of interventions to increase recruitment was implemented, including interventions with evidence of benefit from systematic reviews (for example, telephone reminders to non-respondents [4,5]). Whilst these probably helped, as demonstrated in Figure 3, which shows a steady increase in the proportion of eligible individuals consenting and being randomized, there was no magic-bullet recruitment intervention that led to a step change in recruitment rates. Some interventions required substantial effort, such as repeated telephone attempts to contact participants and visiting remote rural locations. It is also clear that, as others have reported [16], recruitment in the first couple of months is indicative of later recruitment unless action is taken. To have assumed that slow recruitment was merely trial growing pains rather than a problem to be dealt with immediately would have been a mistake.
Estimating the number of potentially eligible participants who will agree to take part
The estimate of 200 patients diagnosed each year at the three original sites turned out to be accurate, but the proportion who were eligible was 75%, not the estimated 81%. This relatively small difference was then compounded by an unexpected group of eligible participants (42, or 9%) who changed their minds about participation after initially saying yes.
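Rough arithmetic shows how these modest shortfalls compound. The figures below are taken from this section (diagnosis numbers, eligibility, consent and withdrawal rates); treating the rates as independent multipliers of annual yield is a simplifying assumption for illustration only.

```python
# Sketch of how modest estimation errors compound into a large shortfall
# in annual recruitment yield. Rates are from the text; multiplying them
# as if independent is a simplification.
diagnosed = 200 * 3  # ~200 patients diagnosed per year at each of three sites

# Planned yield: 81% eligible, 70% consenting (the original estimates).
planned = diagnosed * 0.81 * 0.70

# Observed yield: 75% eligible, 49% consenting, and ~9% of those who
# initially said yes later changing their minds.
actual = diagnosed * 0.75 * 0.49 * (1 - 0.09)

print(round(planned), round(actual))  # -> 340 201
```

Under these assumptions the realized yield is roughly 60% of the planned yield, even though no single rate was dramatically wrong.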
A greater problem, though, was the consent rate, which was 49%, not the estimated 70%. Formative qualitative research was conducted to help check the acceptability of the intervention concept and to refine methods before recruitment started [17]. This gave insight into how the initial approach to patients could be improved; for example, that colorectal cancer health professionals should act as advocates for the study and repeat its endorsement by the lead clinician, suggestions that were incorporated into the recruitment strategy for BeWEL. One important finding from the formative research was that patients had little understanding of the potential link between adenoma risk and their own health behaviour, and consequently struggled to see the relevance of an invitation to participate in a lifestyle-change study. This was reinforced by the tendency of health professionals during and after adenoma treatment to adopt a reassuring tone that downplayed risk. The ‘all-clear’ messages that patients picked up from written and verbal communication after their adenoma operation implied a ‘clean bill of health’ and indicated that there was nothing about their lifestyle requiring modification. Although efforts were made in the trial recruitment process to address this problem by making clearer the potential links between adenoma risk and lifestyle behaviour, it is possible that the link was still not sufficiently salient or believable for some patients, and that this may have contributed to reluctance to participate in the study.
The estimated consent rate of 70% was based on the BHBH study [11], a trial similar to BeWEL. The BHBH study reported two consent rates: an overall rate of 51% and an ‘initial’ rate of 68%; the latter was the rate seen before a second Dundee-based trial started recruiting from the same patient pool. The 68% rate seemed to be a reasonable choice for BeWEL, given the clear link between the fall in the BHBH study consent rate and the start of recruitment by the second trial. As we found later, the overall BHBH consent rate would have been a better bet. Additionally, although the BHBH study was similar to BeWEL, there were two key differences. The first was that the BeWEL intervention placed more demands on patients, requiring a 12-month commitment from participants rather than three months. The second was that weight management was included in BeWEL but not in the earlier trial, which again might have deterred some from participating. Although most of the BeWEL participants who were interviewed at the end of the programme found the 12-month duration and inclusion of weight management acceptable and not too onerous or intrusive, these were of course patients who had agreed to participate; we do not know how many were put off coming forward in the first place by the perceived demands of participation in the study.
There are a couple of key lessons in this experience. Firstly, how should investigators estimate consent rates? One simple approach would be for investigators to estimate no more than 50% unless they have experience of higher consent from several studies in the same population being recruited in the same setting. This appears rather arbitrary, but a study of recruitment in 207 breast cancer trials calculated the number needed to recruit one additional participant for the 69 trials that provided sufficient information to do the calculation, and the result was remarkably consistent, with a median of two individuals being approached for every person recruited [18]. Gross et al. [19] found a median of 1.8 (range, 1 to 68) for their study of 172 trials, whereas Toerien et al. [20], in their study of 133 trials, found that investigators assessed a median of 230% of their target number. A consent rate estimate of 50% is perhaps a reasonable rule of thumb in the absence of compelling evidence upon which to base it. More compelling evidence would comprise data from two or more studies that have recruited the same population, in the same setting, using the same sort of staff, for the same sort of intervention, and all within recent history. Even with these data, investigators would need to make evidence-informed, judgement-based decisions about the similarity between earlier recruitment contexts and their own. It would be possible to put confidence intervals around consent rates from other studies, including pooled estimates, but it is context, not statistical uncertainty, that is likely to be the main driver of variability in consent rates. It is far from clear that taking the lower bound of the confidence interval would provide more reassurance than if investigators (ourselves included) were simply more conservative when estimating consent rates, and many other parameters besides. The best approach to consent rate planning remains in-context pilot work prior to the full-scale trial.
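The arithmetic behind the 50% rule of thumb is easy to make explicit. A brief sketch follows; the recruitment target of 300 is a hypothetical figure chosen for illustration, not the BeWEL target.

```python
import math

def approaches_needed(target, consent_rate):
    """Eligible people who must be approached to reach a recruitment
    target, given an assumed consent rate."""
    return math.ceil(target / consent_rate)

# Hypothetical target of 300 recruits, for illustration only.
print(approaches_needed(300, 0.70))  # optimistic estimate       -> 429
print(approaches_needed(300, 0.50))  # conservative rule of thumb -> 600
print(approaches_needed(300, 0.49))  # rate observed in BeWEL     -> 613
```

The gap between 429 and 613 approaches is the scale of shortfall that an optimistic consent estimate conceals; planning at 50% builds most of that margin in from the start.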
The second lesson was that once it became apparent that fewer patients were consenting to take part, one additional strategy would have been to conduct a second stage of qualitative research. This need not have been a major exercise, as one or two focus groups or a small number of individual interviews would have sufficed. This second stage could have focused on exploring the views of those patients who expressed an initial interest but then did not follow through, to see whether any of the reasons for reluctance were amenable to action. This kind of research exercise has been used to explore how patients interpret and respond to informed consent materials provided in clinical trials [21]. ‘Consumer research’ of this sort could play a valuable role throughout the development and implementation of an intervention in remedying problems as they occur. Indeed, given the commonplace nature of trial process problems, investigators would do well to build in the possibility of adding rapid, response-mode qualitative work to their initial ethical and other approval submissions.
There were 13 months between the trial management team’s first discussion with a Glasgow site and the first Glasgow recruit. At the end of the trial, the two sites together had been able to recruit only five participants. Conjecture as to what might have been is, perhaps, of limited utility, but had the two extra sites both recruited at a similar rate to the other sites (Scenario 2 in Figure 2 and Table 4), then the trial would have met its target only one month behind the original recruitment schedule. In the three original sites, considerable negotiation had been undertaken well before the funding bid had been submitted, and again after the funding award was announced, highlighting the time required to match sites to study requirements. Thus, by the time the study started, the preparatory work at each site had been undertaken. This was not the case in the two Glasgow sites, where many months were taken up as a consequence of NHS R&D departmental work practices being different from those at other sites, a situation made worse by the loss of the trial manager, who took up another post during the latter stage of funding negotiations. Moreover, the anticipated time for recruitment from these two sites was not fully factored into the revised recruitment plan.
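A scenario projection of this kind amounts to a simple month-by-month accumulation of per-site recruitment rates. The sketch below illustrates the shape of the calculation; the target, rates and site counts are illustrative assumptions, not the BeWEL figures.

```python
def months_to_target(target, site_rates):
    """Months needed to hit a recruitment target, given each site's
    assumed recruits per month."""
    total, months = 0.0, 0
    while total < target:
        total += sum(site_rates)
        months += 1
    return months

base = [6, 6, 6]  # three original sites at an assumed 6 recruits/month each

print(months_to_target(300, base))               # original sites alone      -> 17
print(months_to_target(300, base + [6, 6]))      # extra sites at same rate  -> 10
print(months_to_target(300, base + [0.2, 0.2]))  # extra sites barely recruit -> 17
```

Under these assumptions, extra sites that recruit at the going rate shorten the timeline substantially, while sites that barely recruit (as the Glasgow sites did) leave it essentially unchanged.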
This experience points towards a number of suggestions linked to site selection:
- Identify more sites than are expected to be needed based on pre-trial assumptions, as an insurance policy against those assumptions proving incorrect. The number of extra sites will depend on how confident investigators are about their pre-trial assumptions. If all, or some, of the approvals for these sites can be obtained up-front (that is, before they are actually needed), so much the better.
- Formally assess all sites for suitability for the trial. The trial team should use a checklist of key features of a site that they believe are essential for successful participation. The team could develop its own checklist, or modify an existing one (for examples, see Warden et al. [22]).
- Sites that do not meet the requirements listed on the checklist should be reviewed to determine whether measures could be put in place by the trial team to support the site in meeting the checklist criteria. If not, the site should not be considered for the trial.
Clearly, no site is ideal for all trials but there is growing agreement that sites should be selected based on a formal assessment of their past performance and the likelihood of success in the trial being planned [22‐24]. As Shah pointed out in a recent roundtable discussion of heart failure trials [23]:
‘There is a peculiar paradox that exists in trial execution - we perform clinical trials to generate evidence to improve patient outcomes; however, we conduct clinical trials like anecdotal medicine: (1) we do what we think works; (2) we rely on experience and judgement and (3) limited data to support best practices.’
What features of a site might predict its future performance in a given trial is worthy of more research but some features that may be relevant [22‐24] include:
- Previous experience with multicentre trials;
- Familiarity with operating a trial protocol and the closeness of the trial protocol to the clinical procedures currently in place at the site;
- Familiarity with the local approvals process;
- Previous recruitment performance;
- Case mix and access to eligible participants;
- Availability of resources such as research nurses, study coordinators, research pharmacists and administrators;
- Lack of competing demands that would hinder the site’s ability to fully engage with the trial.
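As a minimal sketch, features like those above could be operationalized as a simple assessment checklist. The criterion wording, equal weighting and pass threshold below are illustrative assumptions, not a validated instrument.

```python
# Minimal checklist-based site assessment using features like those
# listed above as criteria. Equal weights and the pass threshold are
# illustrative assumptions.
CRITERIA = [
    "multicentre trial experience",
    "protocol close to current clinical procedures",
    "familiar with local approvals process",
    "good previous recruitment performance",
    "case mix gives access to eligible participants",
    "research staff and resources available",
    "no major competing demands",
]

def assess_site(name, answers, threshold=6):
    """answers maps each criterion to True/False; returns a verdict."""
    score = sum(bool(answers.get(c)) for c in CRITERIA)
    verdict = "select" if score >= threshold else "review support needs first"
    return name, score, verdict

answers = {c: True for c in CRITERIA}
answers["no major competing demands"] = False
print(assess_site("Site A", answers))  # -> ('Site A', 6, 'select')
```

A real tool would need validated criteria and weights; the point of the sketch is only that a checklist makes the selection decision explicit and auditable rather than anecdotal.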
Future research could systematically review the trial management literature to evaluate formal site-selection methods and to develop prediction rules and metrics that can be used for site selection.