Summary of findings
In this ethnographic case study, participants experienced knowledge exchange in their daily practice as a variety of challenges, tensions and negotiations. Their new roles in the multisectoral field of transport and health varied in formality and in priorities, which had to be negotiated within and between sectors. Differences in language and in expectations of the type of evidence created similar challenges. The most pronounced difference was between what was described as a preference for precedent among transport specialists and a preference for systematic evidence synthesis among public health specialists. However, evidence was experienced as contextual, complex, sometimes conflicting and often incomplete. Complex research designs produced partial evidence, but participants suggested that decision-makers were used to making decisions despite such uncertainties. That said, the strength of evidence could be rated in different ways, sometimes shaped by the ability to “read” evidence correctly, and even seemingly clear-cut evidence could be politically contentious. In addition, the potential health co-benefits of transport schemes were seen to have traction in arguments made to decision-makers, but were not necessarily easily investigated by researchers or translated into convincing cost-benefit analyses. Knowledge exchange also happened within a context of multiple directions of influence, and of expectations thereof. Participants suggested that researchers needed the skills to make their evidence understandable, and that some might not be interested in, or well suited to, undertaking this task themselves; knowledge brokers might be better placed to do such translational work. Finally, policymakers and practitioners considered the benefits, but also the potential disbenefits, of influencing research agendas more directly.
Results in context
The findings echo much of the knowledge exchange literature, such as the necessity of translating knowledge into different sectoral and disciplinary languages and having the time and skills to do so [8, 15, 26, 27], appreciating different types of knowledge, or a combination thereof, as useful, convincing or strong [28–30], and the ways in which evidence can be used strategically to serve particular outcomes [13, 30]. While it might not be a particularly new insight that different sectors inhabit different epistemological worlds and appraise knowledge differently, the experience of practitioners and policymakers whose remit spans several sectors has rarely been captured. Two further issues are selected here as particularly pertinent to this specific context: the role of uncertain evidence in knowledge exchange, and the political nature of evidence.
This case study of evidence derived from a natural experimental evaluation represents the challenges of much public health research. Interventions of this kind outside the health sector can create a natural opportunity to produce new learning about how best to generate beneficial outcomes across the health and transport sectors. However, evaluating the impact of complex interplays of social, environmental and political-economic contexts on population health behaviour produces complex evidence that can rarely be distilled into simple, definite answers [17]. Rather, it forms a jigsaw of partial evidence derived from a range of methods that might gradually be built up across multiple studies over time to form an emerging, more generalisable picture [28]. Researchers who produce this evidence can accommodate such uncertainty in their daily practice, relying on tools such as confidence intervals and sensitivity analyses to communicate levels of uncertainty and suggesting further research and refinement of scientific concepts and methods. Practitioners and policymakers, however, face the dilemma of being tasked with putting such partial evidence, often from sectors with which they are less familiar, into comprehensive action, eradicating uncertainty from business cases and policy documents in the process.
Lipsky’s seminal work on “street-level bureaucracy” was among the first to seek to understand individuals’ efforts to implement organisational decision-making with its inherent uncertainties [31]. He found that front-line public service workers had to negotiate multiple, interacting factors such as regulations, policy directives and the expectations of elected officials, which were not necessarily aligned. This resulted in discretion and inconsistency in putting policy into practice. More recent work on evidence-based policymaking has explored ethnographically how civil servants need to craft “persuasive policy stories” from a large but inconclusive body of evidence [32]. Similarly, a study on health policy has traced how varying evidence (in this case on screening for a range of cancers) and its context resulted in different appraisals and uses of evidence, and different policy recommendations, by different expert groups [33].
Part of the uncertainty is that evidence is not always understandable, and may be produced through methods that are novel, specialised or complex. Studies from the field of environmental public policy, for example, have described how practitioners and policymakers have to grapple with new research technologies such as modelling, and how those skilled in reading this evidence are better placed to influence policy outcomes [34]. In our study, researchers, practitioners and policymakers described needing to understand a range of evidence produced from multiple methods and disciplines, and to negotiate what constituted relevant, reliable or relatable evidence across sectors. This varied, complex and challenging evidence formed part of their everyday practice – echoing research suggesting that evidence jigsaws can be helpful in making a policy case [28] – but also hindered their ability to influence decision-makers with clear advice. It has been suggested that perhaps what policymakers need is not more evidence, but more systematic “methods for identifying, interpreting, and applying evidence in different decision-making contexts” [33].
Furthermore, complex population health research does not produce evidence to inform individual clinical decisions, but aims to effect population-level responses in policy and practice, and is therefore more exposed to political and public scrutiny [35]. This is particularly pronounced in the “more overtly politicised local government space” into which public health has recently moved in England [30]. Importantly, it is not just actors and spaces that are political; evidence is itself far from neutral and can be positioned in varied ways in policy and practice, thereby adding to the uncertainties surrounding it. Tensions between the assumed objectivity of research and the subjectivity of politics arise if decision-makers want the evidence to confirm that the chosen outcome of a decision is correct, or conversely if the evidence points to politically or socially contentious actions. Our participants, for example, explained that infrastructural schemes were highly politicised and that, in the current climate of financial austerity in the public sector, business cases were increasingly reduced to an economic bottom line. It has been suggested that evidence in policy should be considered as “socially embedded in authority relations” [36], and that politics is not a third pillar alongside research and policy but runs through all domains [30].
While much of the knowledge exchange literature has focused on the practice or policy side, it is equally important to note that evidence production – research – is also not free of ideology and politics [11]. Some of our participants felt strongly that research should be guided by policy agendas, and that research and practice should be developed simultaneously and iteratively to yield more policy-relevant evidence, better informed practice and timely evaluation. Others were concerned that research ought to retain its public image of neutrality, and thus its authority or power to influence policy agendas. This sentiment, of course, also pointed to the political nature of research.
Implications for practice and future research
More research about the uncertain and political nature of evidence is clearly still needed as these dynamics play out in different public health contexts in research, policy and practice. While this case study focused on active travel and transport as a target of population health intervention, it indicates that through the lens of one particular context larger barriers to knowledge exchange in multisectoral, evidence-based public health can be identified [35, 37]. Most public health remits span sectors, fields and disciplines, whether by interest or by institutional design [38]. We need to learn more about the realpolitik and contextual factors of multisectoral collaboration [39], which is meant to underpin current public health policymaking, for example the ways in which partnerships between health, agriculture, and national and global industry might or might not work to ensure the accessibility and affordability of healthy foods [40]. Such analysis of institutional knowledge processes could help to explain how multisectoral working can be successful, for example how to achieve integration of priorities and partners in multisectoral collaboration [38]. Moreover, multisectoral partnerships can be just as polarising and politicised as evidence, and this can affect how evidence is used, and abused, strategically. In-depth policy analyses could, in fact, focus on a range of public health strategies, such as the taxation of sugar-sweetened beverages and other unhealthy foods [41], seeking to understand both the broader perspective of “actor networks” that hold influence and authority over decision-making and the micro- or “street-level” negotiation involved. This could inform strategies for engaging effectively with industry opposition and with political and public scepticism.
Strengths and limitations
We believe the strength of the research design was its case study approach: using a “real life” example of knowledge exchange attached to a particular research project enabled us to understand the contextual factors that shaped, supported and hindered the use of evidence by our stakeholders [32, 42]. Exploring a particular, pertinent context helped to focus our debate with them and to encourage reflection. The case also helped to highlight particular topics, such as polarising evidence and evidence from complex research designs. Moreover, using an ethnographic method that combined participant observation with interviews strengthened our analysis through the triangulation of themes and discussion. We were able to record formal and informal exchanges between stakeholders and between “them” and “us”, and in follow-up interviews we could allow space for reflection on the evidence presented and on opportunities to make use of it in their daily practice. Reflecting with our stakeholders during the event and subsequent interviews, and with some of them during the analysis, we were able to develop “policy points” with them to summarise insights from this study (see Additional file 2).
Our research design also had limitations. We did not include much of a researcher perspective in this case study, and mostly interviewed stakeholders in policy and practice roles. This was mainly because most of the researchers who attended the forum were attached to the busway study and were asked to write field notes for this case study. However, some of our participants from policy and practice had experience as researchers themselves and included reflections from a research perspective. National government representation was also limited at the forum, and therefore in this study. This may have reflected the limited availability of civil servants, whether shortly before a general election in particular or for knowledge exchange events more generally. This study therefore included a self-selected pool of participants with a particular interest in the multisectoral subject matter, and perhaps in evidence-based practice and policy and in knowledge exchange more generally. This self-selection may also have limited the responses in the interviews, as those agreeing to be interviewed may have been particularly interested in the topic. While qualitative purposive samples aim for knowledgeable, “information-rich” participants [22], discussions and interviews with less informed practitioners and policymakers in the field may have yielded different or additional insights. Finally, elected representatives and members of the general public were not represented at the forum or in this case study, which was highlighted in critical feedback from participants. It was also suggested that for future events the organisers could encourage invitees to “bring a friend”, or indeed to “bring an enemy”, to broaden the range of perspectives reflected in discussion and the interviews.