Nowadays, when applying for funding, researchers are asked to give an account of their plans for implementing their research and of its anticipated impact in practice.
Most of us are familiar with the two ‘gaps in translation’ (Cooksey D. A review of UK health research funding. London: HM Treasury, 2006). The first translation gap refers to the translation of ideas from research into the development of evidence, and the second gap is the uptake or implementation of that evidence into routine clinical practice.
In the last twenty years or so, the field of implementation science has grown and given rise to many models and theories, and swathes of research; it is a very crowded space indeed. So when I was thinking about this blog, I didn’t really know where to start, which I expect mirrors the experience of those who are thinking about ‘doing’ implementation in the real world. What I didn’t want to do was re-hash the models and their pros and cons, as that would not be especially helpful or, dare I say it, interesting. Anyway, as luck would have it, I was working in another capacity and happened to meet virtually with Kristian Hudson, an implementation specialist at the Improvement Academy, part of the Yorkshire & Humber NIHR Applied Research Collaboration. We had a really good conversation, and I want to share some of it here, as well as signposting people who are doing, or thinking about doing, implementation to the wonderful series of resources he has put together, called ‘Implementation Secrets’ (parts 1, 2 and 3). These include podcasts with key thinkers, tales from the field and, of course, those ‘secrets’.
So, turning back to our conversation, here are a few reflections on ‘real world’ implementation. Implementation research is often all about the ‘problem’. As ‘implementation science’ has evolved and crystallised into a discipline, much research has become distanced or abstracted from practice, treating implementation as a ‘problem’ and the object of analytical enquiry. As a result, much of the knowledge produced is not especially helpful to those working to implement things in the real world. Making something work locally needs specialist local knowledge. We need to support people as they use their local knowledge to work out how to implement an intervention in their setting. Thus, implementation in practice is improved by an equitable and facilitative relationship between implementation specialists, practitioners, and the public. In my view, ‘implementation science’ might benefit from being renamed ‘implementation social science’.
Next, I want to share some thoughts on fidelity and function. ‘Fidelity’ refers to the notion that complex interventions must be delivered in a prescribed manner wherever they are used. ‘Function’ refers to the intended benefit derived from the intervention. Over the years, I have often been perplexed by the emphasis placed on ‘fidelity of form’ in both the literature and practice. Sometimes, I have felt that fidelity can get in the way of function. For example, I was involved in the evaluation of a questionnaire with very strict instructions about the mode of delivery, and because of those instructions some people were unintentionally excluded. Furthermore, being unable to adapt and refine interventions according to local needs and conditions can mean that they either can’t be made to work in some settings, or that interest is lost because they are too ‘clunky’. It was pleasing, then, to hear that the Medical Research Council and the Health Foundation have recently addressed this issue, and have both made statements emphasising the importance of fidelity of function rather than fidelity of form. To put it another way, our focus should be on improvement; to give the last word to Kristian, ‘implementation is improvement’.
If you would like to find out more about implementation and how you might build it into your research application, contact your local RDS for support.