Emily Hastings 1, Sue Morris 1, Kirsty Blackstock 1, Jannette MacDonald 1, Bob Ferrier 1
1 James Hutton Institute
Centre of Expertise for Waters
This paper provides insights into evaluating knowledge exchange, drawing on lessons learnt from an evaluation of Scotland's Centre of Expertise for Waters (CREW). The centre is a knowledge exchange initiative funded by the Scottish Government. It aims to deliver accessible research and expert opinion to support the Scottish Government and its delivery partners in the development and implementation of water policy in Scotland. CREW's objectives are to increase networks linking scientists and policy makers, and to increase capacity to produce robust evidence through the co-construction of research by researchers and research users.
An evidence-based approach to policy requires stronger connections between policy makers and scientists in developing research, communicating emerging findings, and reporting evidence. This process of knowledge exchange, whereby researchers work with end-users to produce evidence, is vital to realising research impact. Yet there are few worked examples of what makes for 'good' knowledge exchange that improves research impact, nor are there agreed methodologies for assessing these practices. Our formative evaluation provides an opportunity to reflect on our approach to assessing knowledge exchange for research impact, and to highlight key lessons learnt to date.
Here we outline our understandings of research impact and knowledge exchange, our approach to their evaluation, and our mixed-method research. We give a brief account of our emerging findings; however, the focus is on knowledge exchange between scientists and policy makers, and on the key lessons learned about evaluating knowledge exchange for research impact.
The evaluation aims to understand existing science-policy-practice interfaces; measure and analyse how CREW's structure, members and activities contribute towards these interfaces; and evaluate performance and suggest ways to improve links between research, policy and implementation.
The evaluation is based on the views of all those involved in CREW (researchers and policy makers) in order to make recommendations for future improvements. We have adopted a 'theory of change' to explain how CREW's objectives link together and how they should be evaluated. The theory demonstrates that building networks is a fundamental first stage, needed to support the second objective of building capacity. These are envisaged to result in conceptual changes and to have a positive impact on Scotland's economic, environmental and social outcomes. Our review of impact evaluation suggests that research will not have an impact on 'final' outcomes without certain intermediate outcomes being in place: building networks and capacity should lead first to changes in the way we all work (cultures) and in how problems are conceived.
This paper draws primarily on analysis of interviews conducted with a sample of researchers and customers (Scottish Government, SEPA, Scottish Water) involved in CREW. Interviews were conducted by phone from June to August 2013. Qualitative interview data were analysed using a framework approach to identify common themes.
Results and Discussion
The results show that CREW is working well to better connect water research and policy. Interview responses showed that most participants believed the work they delivered or requested responded to the needs of science, policy or practice. Generally, meeting these needs was seen to involve production of a useful evidence base, a focus on agreed research objectives, and direct responses to the research questions posed by policy and practice. The evaluation has allowed us both to make recommendations to CREW for performance improvements and to draw broader lessons about evaluating knowledge exchange initiatives. These lessons concern (i) assessing capacity building; (ii) knowledge exchange as co-construction of research evidence; (iii) taking a self-evaluation approach; (iv) perceptions of evaluation; and (v) identifying impact.
One of CREW's aims is to build capacity. Our evaluation has highlighted how difficult intangible aspects such as this are to visualise, agree upon (by both managers and evaluators) and evaluate. It is therefore fundamental that all parties agree at the outset what capacity building means to them and how it can be measured, often through the use of proxies.
The interviewees generally agree that co-construction should lead to better research questions and policy solutions. Interviewees particularly value on-going dialogue and workshops as mechanisms for effective knowledge exchange, and for revisiting research objectives where required. It is therefore important to factor time and resources into an initiative such as CREW so that this can happen in practice.
Self-evaluation and perceptions of evaluation are two closely linked lessons concerning how others perceive the process. The benefits of a self-evaluation approach become apparent early on: it places evaluators in a position to make necessary changes as the evaluation and the initiative progress, rather than only after the fact, as in traditional post hoc evaluation.
Identifying and assessing research impact is particularly challenging. Impact is more than dissemination; it means making a difference. Impact can be about building capability or capacity to act; building knowledge or changing ways of behaving (conceptual impact); or making a concrete change (instrumental impact). Impact is more likely when research is co-constructed with research users and is designed with a specific context and use in mind. This requires on-going knowledge exchange, and dedicated resources and skills to ensure that it occurs.
We argue that both process and outcome variables are needed to measure success, given that creating networks and building capacity are often the basis of something more: in the case of CREW, as demonstrated in the theory of change, they lead to mutual understanding and to changes in conceptual thinking and cultures. Moving from knowledge transfer to knowledge exchange and, ultimately, to co-production requires understanding of these intermediate impacts.
Evaluating a knowledge exchange initiative designed from the outset to be dynamic is challenging, particularly in establishing categories for analysis when policy and practice needs, as well as relationships, commonly evolve. Our results illustrate the need for both qualitative and quantitative data, and for assessment over time rather than snapshots. More critically, attention to how these interfaces are enacted, by whom, and as framed by whom, can question how we perform our roles as scientists, policy makers and/or practitioners.