Responses
As Table 1 shows, we emailed a criterion sample of 60 researchers and achieved a response of 50/60 between 1 and 21 June 2010. In addition to our criterion sample, we received responses from nine other 'snowball' respondents. Of the 10 members of the criterion sample who did not respond, four were away on sabbatical or other leave; we have no information about the other six non-respondents. Of the criterion sample, 10/50 communicated their views about the statements and explanations by email or telephone to CRM. Only one member of this group provided a detailed critique of the statements and explanations; the remainder made general comments about their focus and orientation. The majority of the data we received came from the 40 criterion sample respondents and the nine snowball respondents who replied using the SurveyMonkey tool. We have combined responses from these two groups for qualitative analysis. Table 2 describes the professional structure and geographical distribution of the combined study group.
Table 1
Purposive Sample of Respondents: Statement development phase

| | Non-respondents | Respondents |
| Out of office auto-reply: sabbatical or other leave | 5 | 0 |
| Did not respond | 5 | |
| Responded using on-line pro forma | | 40 |
| Responded by email or telephone | | 10 |
| Total criterion sample | 10/60 | 50/60 |
| Additional snowball respondents | N/A | 9 |
| Total all respondents | | 59 |
Table 2
Professional structure of combined criterion and snowball samples: Statement development phase

| | | | | Total |
| Postgraduate Student | 3 | 1 | 0 | 4 |
| Assistant Prof/Lecturer/Research Fellow | 8 | 5 | 4 | 17 |
| Associate Prof/Senior Lecturer | 4 | 5 | 3 | 12 |
| Full Professor | 13 | 2 | 2 | 17 |
| Non-academic Practitioner* | 6 | 3 | 0 | 9 |
| Total | 34 | 16 | 9 | 59 |
Respondents using the SurveyMonkey pro forma asserted that they were familiar with NPT; only 12 reported a low level of familiarity with the theory. We asked participants to read the statements and their explanations and to work through them in relation to an implementation practice or research problem. These respondents applied NPT to a wide variety of problems. Not all respondents provided sufficient information to identify these, but we could identify problems related to Primary Care (n = 14), Hospital Medicine (n = 7), Nursing and Midwifery (n = 6), Health Informatics (n = 5), Social Care (n = 4), and Public Health (n = 3). Ten respondents identified themselves as already using NPT as a basis for ongoing studies, and six were, or had been, involved in designing studies in which NPT was integral but which were not yet operational. In at least five of these cases, this work was accomplished in groups. A further 23 respondents said that they had reviewed the statements and their explanations through the medium of thought experiments about potential or actual implementation projects. A small number of respondents told us about the time committed to this task, which ranged from 20 minutes to three hours.
The web-enabled tool had been released for testing in a way that maximized commentary from real users. We embedded Google Analytics code in the website, which enabled us to obtain some limited data about its usage and users. During the pilot period (26 July to 26 August 2010) the website attracted 327 visits (139 new visitors and 188 return visitors); details are given in Table 3. Time on site ranged from 0 to 21 minutes (mean 4.15 minutes), and page views per visit ranged from one to 21 (mean 5.11). From the 139 new visitors we received 15 detailed comments on their experience of the site, entered in free-text boxes that users could fill in as they worked through the site.
Table 3

| Source of visit | New visitors | Return visitors |
| Direct from URL | 96 | 157 |
| Twitter.com | 1 | 2 |
| Academia.edu | 2 | 16 |
| Google search | 13 | 7 |
| Wikipedia | 15 | 1 |
| Yahoo search | 1 | - |
| Harvard Business Review Blog RSS | 10 | 2 |
| Mayo Clinic Intranet | 1 | 4 |
The on-line survey
All but three participants were supportive of the approach we had taken and of the statements presented to them. Many made enthusiastic comments, remarking that the statements improved the workability of NPT in practice; this was especially so amongst those without a background in the social sciences. We had invited respondents to be critical, however, and most had important and useful comments to make. These took two forms. First, many respondents offered specific criticisms of the statements and their explanations. These are grouped and described in Additional File 1 (see second column, 'Users' Critique'). They related to three main kinds of problem: ambiguity, where statements and explanations were unclearly worded; overlap, where some statements and their explanations appeared to cover the same ground as others; and dissonance, where some statements and their explanations appeared to express different concepts. As we have noted, most respondents were very positive about the statements and explanations. A medical researcher told us that:
It provided food for thought about the issues involved in trying to bring together a team of both researchers and practitioners to design and implement an intervention. In particular it helped me to understand that the reasons why we are having so much difficulty is that the research team themselves do not have a shared view and understanding of what the intervention is we are trying to implement and this is contributing to our problems in engaging the primary care partners in the project.
A nurse researcher told us that:
The questions serve as an inventory; anticipatory guidance before embarking on a change in practice or as a reflective/evaluation tool. In my example, the intervention was introduced to the inter-disciplinary [team] as a 'pilot'. I was asked to assist with evaluating the 'pilot'. If these 16-questions would have been available I could envision utilizing them as a guide for evaluation focus groups/interviews with end-users.
In these contexts, respondents seemed to be using the statements and their explanations in exactly the way we had intended: as sensitizing tools and heuristic devices to support thinking through an implementation task. Importantly, though, we did not intend these statements to be used as the basis for specific research instruments or as verbatim items in an interview schedule.
Beyond this, respondents offered interesting and useful general critiques that often made wider methodological points. One health services researcher wrote that:
[I] can see why it is seductive. I imagine some of it might work for trial interventions where you have a clear comparator - e.g. differentiate the intervention from usual practice (our 'intervention' is the work now and we do not really have a comparator as such). It looks helpfully simple (so will appeal to many because of this) - not too long - easy to read - etc but then using it, it unravels and seems less useful (I feel a bit the way I did the first time I used the SF36 in a face to face interview - I ended up wanting to qualify every answer)
This reflects the central problem of the process of translation and simplification: it reduces the tool's capacity to acknowledge complexity. A further problem is that a small number of respondents read the statements as something analogous to a structured research instrument, rather than as a set of statements intended to sensitize users to process problems in implementation. The use of theoretical vocabulary within the explanations and beyond added further complexity. Another respondent wrote that:
I felt some of the language was still too technical. I would not use your technical descriptions "differentiation" etc - just ... complicate the understanding of the concept by using words which could be interpreted as having a different meaning to the one expressed in the question. Specific examples: 3 "make sense of the work" - would understand better as "make sense of what they had to do" (and work in 7) 8 "define the actions and procedures" - perhaps "define what needs to be done" 9 "enact the intervention" - perhaps "carry out the intervention" 10 see above 14 and 15 - I prefer "think it is worthwhile" or "agree about the worth of the effects" - it is the phrase "worth of the effects" which feels a little foreign.
While for others it was:
A little tricky to work with at times. The terms don't always appear to coincide with the descriptions provided. Sometimes it was helpful to simply ignore the term, and concentrate on the description. Furthermore, the bolded "headline" doesn't always convey what is indicated in the explanation below it.
Several respondents remarked on the difficulty of integrating the statements and their explanations at a more general level.
I am not sure if having 2 bits of text i.e. question and description for each question might confuse some people (as I have had this mentioned to me at a conference when I did something similar) although personally I do feel it helps the users understanding and quite like it.
Once again, these problems stem from the process of reduction and editing that led to the construction of the statements and their explanations. A small number of respondents sought to suggest solutions to such problems. For example:
It might be best to have a two part question with an amplification of the question in the second part. For example, "participants can/could discover the effects of the intervention", for example "from formal or informal evaluation". Also the "questions" are not phrased as questions but as statements - would be better as questions.
The qualitative analysis that we present here is simple and descriptive. Data were in the form of free-text entries in an on-line pro forma. Respondents invested a good deal of effort in working through the statements and their explanations. As we have seen, they identified problems of meaning (focusing on the content of statements and their explanations) and of structure (focusing on the relationship between individual statements and their explanations).
Responses to the web-enabled tool
We received a small number of electronic and in-person responses to the web-enabled tool. Most of these were congratulatory. One respondent, a sociologist, felt that the web-enabled tool over-simplified NPT and would therefore be difficult to interpret. Two respondents pointed to continuing ambiguity or overlap between statements 2 and 14, 3 and 15, 5 and 11, and 6 and 7. To solve this problem we amended these items again. Other users sought more advice about how to solve implementation problems, and a reduction in 'jargon'. For one user, however, the result was clarity and workability:
Love it, at least I can understand it now. All I need to remember is SPAM (sense-making, participation, action, monitoring). This will be a great tool to map progress.
Despite the undesirable mnemonic 'SPAM', this was the result that we were aiming for.