My Implementation Scientist: Quick Start Guide

Well done on purchasing your all-new implementation scientist! Please note that coffee stains are part of the original design and there are no refunds.

How To Use
* Implementation science is the study of methods to promote the uptake and application of research findings. Your new implementation scientist can therefore provide support to achieve implementation outcomes, including adoption, diffusion, acceptability, and sustainability.
* We strongly advise that you deploy your model as early as possible in the research process. This will ensure that implementation factors are addressed throughout, including during design and evaluation. Users have reported problems similar to those encountered when health statisticians were first introduced, with models only being activated at the end of the process “to sort that mess out”.

FAQ
Why does my model keep saying the word context?
Implementation scientists love context. Context can be understood as the Who, Where, and When of how something is implemented. Neglect of contextual factors may cause models to overheat.

My model keeps trying to deliver Local Adaptations which are ruining my intervention fidelity. How do I get them to stop?
Local adaptation is not incompatible with replication and fidelity. Studies increasingly recognise the need to tailor some delivery elements whilst maintaining a standardised core of key ingredients.

What accessories does my model come with?
All models come pre-programmed with a selection of implementation theories and frameworks. The Theory-Builder specialisation can be purchased separately. Please note that some users have reported these units becoming stuck in “add/refine construct” mode without ever progressing to “test/apply theory”.

I asked my model how we can get the end users to engage with the research and they just said “Talk to them.” Is this an error?
No.

Things To Avoid
* We must reiterate that your implementation scientist is best used early and often. As well as laying the necessary groundwork for successful implementation in the future, lack of use can lead to them becoming dusty, which may aggravate the allergies on some models.
* Models are not restricted to the use of either qualitative or quantitative data alone, and may gravitate toward mixed-methods approaches.
* Models cannot magically create “pull” environments for “push” studies to deliver knowledge into. Problems with lack of fit can be avoided through early use and consideration of end user needs. Please consult the operating manual for further details.

 


Quick reflections on #KMb2018 – coproduction and cultivation (and Bananarama)

I attended the Knowledge Mobilisation Forum this week, and wanted to get down some quick reflections on the pressing questions and key themes from the conference. This is very much a set of reflections for KMb community members, though I will try to write a more accessible version when I have time!

1. There is a tension between recognising that we need to embed KMb and foster a culture of KMb (so that KMb doesn’t just disappear if the person responsible moves jobs), and recognising that KMb often happens through specific individuals who have, for example, facilitation skills, time to develop relationships, and willingness to work across boundaries. What are the implications of this? How do individuals help establish a culture that will outlast them, and how do organisations take responsibility for sustaining or reinvigorating KMb if individuals move on?

2. Debates persist around what knowledge is or isn’t, although it is perhaps now less a debate than an attempt to work out what we *do* with the fact that there are many kinds of knowledge, with unequal power or legitimacy in different settings, expressed in different ways, some explicit and some intangible. Certainly the knowledge pipeline, and any automatic privileging of research knowledge, are both considered debunked or unhelpful. As someone who worked for years in trials and systematic reviews, though, I wonder whether this is specific to the KMb community – are those who produce research knowledge thinking like this? Or is this seen as an ‘implementation problem’ that KMb types are supposed to provide workarounds for (fix that leaky pipeline!)?

3. Boundary objects are more complicated than they appear. Defining them, making them, measuring their impact. No, you can’t just say “so basically what I’m looking at is X as a boundary object” for anything in KMb. I am fully guilty of having done this and will stop it now.

4. Coproduction, involving knowledge users, was essential and consistently discussed across pretty much all the workshops and talks, and across all models and methods. How KMb is conducted was impressively diverse, and there isn’t a magic trick that guarantees KMb will happen, or a specific method that has *the* KMb stamp of approval – focus groups, stepped wedge trials, statistical models, and Lego were all in play. But whatever the method, the way of working was considered more effective if it was collaborative: working with the knowledge users and coproducing knowledge itself, to avoid making assumptions, to recognise those different types of knowledge, and to understand what problems knowledge is actually needed to solve. This relates to my Bananarama Principle of KMb – it ain’t what you do, it’s the way that you do it, and that’s what gets results.


Early KMb Pioneers

5. Sorry Health, but you ain’t special. Education, social work, even law, all face similar challenges and pressures, and we need to learn across different knowledge settings.

6. I’m not the only “lapsed statistician” in KMb who has moved from doing a lot of quantitative work into more qualitative work, often inspired or necessitated by a focus on coproduction. Interestingly, I suspect there’s a fair body of quantitative expertise in KMb – do we too often assume everyone is inherently on the ‘soft measurement’ side? There seemed to be growing interest in how quantitative and participatory approaches may combine, which is something that interests me enormously in terms of Learning Health Systems, and how big data and community/patient involvement can work together to improve care.


Not your average patient: Thoughts on expert PPI

For some time now, researchers have recognised that research should be informed by the “end users” of their work. Collaborative working with those whom we expect to be users of our findings, and coproduction to improve the research itself, have been heralded as the key to making research more valuable, impactful, sensitive and relevant.
 
Unfortunately, there are serious problems with this approach. Often only certain individuals are involved, typically those with a special passion for or experience of the topic of study. They can clearly not be said to be representative of their group, given their unusual interest in research, which immediately marks them out as unusual. There is, to date, no formally agreed way of assessing what impact they have on research team working, beyond fairly subjective appraisals of “broader perspectives” and “challenging the status quo”. Increasingly, a system has arisen where these individuals are considered semi-professional researchers in their own right, encouraged to take on full research roles, with little consideration for whether they can legitimately straddle both academia and the ‘real world’ of health and truly continue to offer unique, non-research insights. They expect to be paid for their contributions and to be given equivalent membership on study teams, and even demand authorship.
 
I’m speaking of course about clinicians.
I hope it’s clear that the above is intended as a parody of the views I hear about patients involved in research. I’ve often said that when patients use their experience to give feedback on research it’s ‘bias’, but when clinicians do it’s ‘insight’. We rightly place particular value on clinician-researchers, as they bring a special combined set of skills and perspectives. This is conversely devalued in patients, where acquiring those skills is seen as ‘going native’ or rendering them ‘unrepresentative’. We recognise that hybrid clinician-researchers can be great champions of research to colleagues who may not be that interested in it, and we appreciate the role they have in helping us connect with their wider communities, as they’re able to understand both sides. Patients who could play similar roles are too often dismissed as “expert” or “professional” PPI, with academia, a field not known to dislike expertise, suddenly taking issue with people having a lot of experience in something.
We make demands of patients that we never would of our colleague clinicians. We don’t expect them to ‘represent’ their entire discipline or to put aside their particular experience and passions – quite the opposite. We certainly don’t expect them to work for free, and would never consider their acceptance of a salary evidence that they’re “in it for the wrong reasons”. We see that their ability and willingness to take on this joint role does make them far from the average – it makes them exceptional. I can’t tell you how often I have heard complaints that people who get involved in PPI aren’t “the average patient”… it’s true, but to put it another way, would they want to work with just an “average” research team?
*
P.S. One rebuttal to this need for expert or experienced PPI may come from researchers who need a ‘research naive’ perspective on their work. There’s no problem with this, but I would point out that achieving it takes effort and thought: genuinely trying to reach beyond academic borders and engage with people who don’t even know what research is, in a meaningful way (it’s a challenge we’re grappling with, and one that I’ll blog about in future!). Often I see researchers make no attempt to do this, and instead simply consult an established group or public contributor who is more accessible to them because they’re already part of some research. The same researchers will then dismiss the feedback they get (and I think a “professional patient” can probably still give a better estimate of a naive view than a researcher can) because the patient “isn’t representative”. This makes me furious. If you need ‘representative’ input, it’s on you to find it – you should never request feedback from a patient which you then undermine and ignore.
In a nutshell: both ‘research naive’ and ‘expert’ patient involvement can be valuable and necessary for different studies (or sometimes both are needed, with expert involvement to help us think about how to reach beyond our normal borders). Our systems and structures of involvement need space and support for both, and the responsibility for providing that lies with us, the researchers, not the patients.

Stories as the original empathy machines.

Yesterday, I talked about design as being “empathy in action”. I ended the post by questioning how Learning Health Systems can make sure empathy with their users – patients and professionals – is part of the drive for action and improvement.

There’s been a fair bit of chat lately about the potential of Virtual Reality tools to act as ‘empathy machines’, which enable wearers to experience first-hand the perspective of others. Making sure Learning Health Systems come wrapped up with fancy tech like VR might appeal to some people (probably people who think adding the word “wearable” to something immediately makes it usable). But we have a more old-fashioned way of doing this: we use stories.

The power of stories to evoke empathy, and consequently to encourage change, isn’t new. And the power is already being embraced in health care. Experience Based Co-Design makes patient stories central to the process for identifying where improvement is needed and encouraging action to make those improvements happen. Sites like Care Opinion recognise that stories about care can be hugely effective in communicating what the experience of care is really like, and how it should be made better – see, for example, this excellent post “Stories: the original data for improvement”. The post includes the quote “The plural of story isn’t data. The plural of story is culture.” I think this is especially worth reflecting on, given that Learning Health Systems talk about achieving a culture of data-driven improvement as well as a structure to enable it to happen.

If you’re still thinking that VR might be better, it’s worth reflecting on this: critics have disputed whether virtual reality can make you more empathic, but more crucially for us, it may not encourage you to take action. Putting yourself in someone else’s shoes takes you out of your own, and distances you from your own responsibility or culpability. Recognising that something is not your experience, but is an experience that you can impact and you can try to change, is a key part of using stories for improvement.

Assuming a role to try to provoke empathy can also be misleading – people who put on a blindfold (or a VR headset) to ‘become’ blind end up focusing on the difficulty of the experience for themselves, and assuming that all blind people struggle in the same way, as opposed to learning from blind people how they have adapted and managed, and what they think the real problems are.

I think it’s essential that neither researchers nor health professionals think they can ‘take’ the perspective of a patient. We should listen to that perspective as told to us by patients themselves. We should recognise ourselves as outsiders to those stories, but as people with responsibility to act on them. Empathy in action, once again.

 

 


Empathy in action: the value of codesign.

I’m exploring how principles and methods from Design can help us to involve patients and health professionals in implementation research. Design focuses on two things that I think are, or should be, crucial to knowledge mobilisation: empathy and action.

Design is about empathy: user experience is key (that’s what the UX in UX design stands for), with a focus on how the user feels and what they see and do. I first came across this when reading about Experience Based Co-Design, which talks about the neglect of ‘aesthetics’ in much health service design. How patients feel, what they see, hear, even smell, when they interact with a service, are all crucial to how they engage or disengage. Designers talk about “touchpoints” of interaction, when emotions are particularly high or low, and consequently engagement with a service is either supported or threatened.

Design is about action: Service design and UX design are about producing things that people do things with. Designers even talk about “a bias to action” in development, with a focus on testing things in practice rather than just theorising. You need to see something in action to understand it, and to improve it.

When it works well, and when our end users are fully involved, design achieves both: empathy in action. It produces services or outputs that empathise with the user, offering them the best experience, sensitive to their circumstances, and helping them as engaged actors to do what they need to do.

Codesign can help different users to empathise with each other too. We conducted a study which brought together patients and health professionals to codesign new interventions around medication safety. By talking about the actions taken, or not, and by centring the actual experience of both patients and professionals in the process, we found that both groups could better see the perspective of the other.

Action is key to an effective Learning Health System – using data to drive action to improve care. How can we ensure empathy has a place too? What would the Feeling Health System look (and feel!) like?


Theory Vs The Thing Explainer

I’m rather boringly enthusiastic about theory. I genuinely think theory is useful, and practical, and valuable, and not just a way of showing off your intellectual academicness using impenetrable language and questionable philosophical references.

Quite often though, when academics talk about theory, they only do so using impenetrable language and with lashings of questionable philosophical references (“name check classical philosopher of unclear relevance” is a popular square on the conference bingo card).

The Simple Writer is a fab little tool by Randall Munroe of xkcd fame. It encourages you to describe things using only the 1000 most common English words. He wrote an entire book using this principle, explaining a variety of scientific things using the simplest language possible.
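
If you’re wondering how a checker like that works under the hood, here’s a minimal sketch in Python. To be clear, this is my own illustration, not Munroe’s actual code (his tool runs in the browser), and it assumes the 1000 allowed words are saved in a local file called common_words.txt.

```python
import re

# A rough Simple Writer-style checker: flag any words not on the allowed list.
# Assumes "common_words.txt" holds the ~1000 permitted words, one per line.

def load_allowed(path="common_words.txt"):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_uncommon(text, allowed):
    """Return the words in text that are not on the allowed list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w.strip("'") for w in words if w.strip("'")} - allowed)

if __name__ == "__main__":
    allowed = load_allowed()
    sample = "Normalisation Process Theory explains how new ways of working become normal."
    print(flag_uncommon(sample, allowed))
    # Likely output: ['normalisation', 'process', 'theory'] plus any other
    # words that miss the list - exactly why those three were a non-starter.
```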

In the spirit of “honestly I think theories make a lot of sense and underneath the academic-babble they’re very practical”, I decided to try and describe three theories I’ve used in my research using the Simple Writer. Results below – in each example, the first image is some actual text about the theory from papers, and the second picture is my attempt at a simple version.

I started with Normalisation Process Theory. First of all, none of those three words were allowed, which was a great start. On the whole though I think this one worked the best. I actually think this should replace the Wikipedia page for NPT which is…extravagant, let’s say, in its use of academic bluster. Agentic contributions operationalised within existing socially patterned matrices anyone?

 

[Images: the original academic description of Normalisation Process Theory, followed by my Simple Writer version]

Next was PARIHS (Promoting Action on Research Implementation in Health Services). Context, evidence and facilitation were all no-nos. You really see why he called it The Thing Explainer with this one.

[Images: the original academic description of PARIHS, followed by my Simple Writer version]

Lastly, the RE-AIM framework. Reach was allowed, but not the other four. I left in “health” in the Simple version because I think that should be allowed.

[Images: the original academic description of RE-AIM, followed by my Simple Writer version]

What do you reckon? Do the Simple Writer versions communicate what’s useful about the theory? Do they make it seem more or less valuable? Do they capture the key elements, or are they lost in translation?

One reaction I might expect is that the Simple Writer versions expose these complicated theories as “just common sense”. I’m absolutely fine with that. For me, people say something is common sense when they agree with it and think it’s obviously a good idea. They may well not have ever thought it before and it may be far from common, but common sense gets employed as a shorthand for “why would you not do it?”. I’d be delighted if people felt that way about theories.


Pencils in Space, and other fairytales about technology.

You must have heard the story. It gets rolled out at conferences and in seminars as an example of how simple, straightforward solutions are the best way to solve complex problems.

During the space race, so it goes, those clever folk at Nasa had a problem: pens don’t work in space. Astronauts need pens. Maybe they’re big doodlers. For whatever reason, this was a Problem. So, many years, many bright minds, and millions of dollars later, they came up with a better, more space-appropriate pen.

The Russians used a pencil.

Or, at least, that’s how it goes.

Sadly, at least if you like simple-solution stories or berating Americans, the story isn’t true. Pencils don’t really solve the problem, and in fact can cause a whole load more. Bits of lead can flake off and interfere with delicate shuttle equipment. Pencils are also flammable, and fire, it seems, is a big no-no when you’re floating around the planet with only the vast unfathomable deep of space to escape to.

(Nasa also didn’t waste taxpayer money to invent the new pen. It was a private investor who stumped up the cash.)

Two things strike me about this story. The most obvious is the lesson that deceptively simple solutions are often just that – deceptions. They ignore the complicated context in which problems happen, and underestimate the risk. Being in space turns out to be pretty complex, and the risks rather high.

The second thing that strikes me is the popularity of the story itself. I’ve heard it twice already this year and we’re only in January. For me, this speaks to our desire to find simple solutions that bypass the cost, in time, money and expertise, of finding technology that works. It’s easy to laugh at the NHS, with its fax machines and its pagers, but it’s much harder to confront the challenging reality of helping health professionals get things done in complicated and risk-laden environments. But it’s vital that we do, or fax machines and pagers are going to stay around even longer. We need to think about where these things are supposed to work, and the consequences (and risks) of using them.

Beyond that challenge of technology implementation (understanding how things work, or fail to work, where they need to work, and for whom), there is a challenge of knowledge mobilisation. How do we make sure that our stories of what works are true, and fair? How do we debunk the myth that simple solutions can bypass complex problems? How do we fight the fairy tales?


In space, no-one can hear you knock both pens and pencils off the table.

 

 


Three cheers for Implementation Science #3: You Make Me Want To Be A Better Researcher

If you didn’t get the As Good As It Gets reference then you’re too young to be reading this post. Or I’m too old to be attempting movie references in titles. Let’s call it even.

Our first cheer was for the role of Implementation Science in Doing Things with research. We need to cross that pesky 2nd Translational Gap and ensure the potential value of research knowledge is realised in practice. The second cheer was for the role of Science in Implementation. We need theory and robust evaluation to bridge that translational gap.

Our last cheer takes a slightly different view of that bridge. Rather than being a clean one-way system, with our august research knowledge scientifically skipping its way over to the grateful naïfs on the other side, Implementation Science recognises we need this traffic on the bridge to travel both ways.

This is because while there may be a gap in our translation efforts, there isn’t necessarily an easy research-shaped gap on the other side, just waiting to be filled in with our sciencey polyfilla. You can’t work in implementation research for very long without being confronted by an uncomfortable truth: a lot of research just doesn’t fit.


Crossing boundaries is hard, and sometimes you just don’t fit.

This might be because it doesn’t answer the right questions. Or the answer it provides isn’t feasible in the real world. Or it makes assumptions about how things work that turn out to be wrong, and crumbles under the pressure of how things really are.

In my opinion, science can help us a fair way with this. Better evaluations, better synthesis of existing evidence, better use of theory to anticipate problems and work out how to overcome them. But then, I am a scientist (ever heard the saying “never ask a barber if you need a haircut”?). And crucially, science – and we scientists – can’t do it alone.


When you just ask for a bit off the sides, and they give you an entire randomised controlled trial (it’s a science joke…)

Co-production, codesign, collaborative working – whatever you want to call it – is the best tool in the box for tackling this problem. It means actually reaching across the gap in the first place to ask the users of our research – the patients, health professionals, policy makers – what they need, what would work, and what would fall flat on its face. Working together, not just to translate the knowledge into practice, but to cocreate that knowledge itself. This can be messy, and challenging, but I have no doubt that it makes for better research.

Implementation is about Actually Doing with research. It turns out that, often, Better Doing = Doing Better.


Knowledge users and knowledge producers: better together.

 


Three cheers for Implementation Science: #2 Science Vs The Gap

Our first cheer was about the role of implementation science in getting the best value out of research itself. Implementation science is about helping us cross the “2nd translational gap” and get knowledge from research to the people and places that need it.

The second cheer is about the value of implementation science for realising that this is a hell of a tough nut to crack. Crossing the gap doesn’t just mean tinkering around the edges or adding a hasty ‘dissemination plan’ at the end of a project. The evidence base on implementation warns us that getting research into practice is a complex problem, and we need to tackle that problem with appropriate tools. This means robust evaluations and theory-informed approaches. Implementation science recognises that the research underpinning translation efforts needs to be just as sound as the research we’re trying to translate.

Let’s put this another way: Would you buy a sign from this guy?

[Image: bad sign maker, via BuzzFeed]

I’m guessing no. And yet, as researchers, our plans and methods for getting health research into practice, and for getting the users (such as patients and health professionals) to buy in to our ideas, are so often poorly put together. Why should people trust that the research itself is rigorous, valid, and uses the best evidence, when our own efforts to communicate research so often aren’t?

Implementation is about Actually Doing things with research. Implementation science means recognising we need to Do Translation Better (and more scientifically). This brings us to our third and final cheer: When translation helps us make the research itself better.


Three cheers for implementation science. #1 Research That Actually Does

[This is the first of three blogs – do check out #2 Science Vs The Gap and #3 You Make Me Want To Be A Better Researcher]

Let’s clear up something first – you might, just might, be thinking what on Earth is “implementation science”?

“Implementation” is a word you’re probably familiar with. It means The Actual Doing of something. The application, the execution, the delivery, the enactment, etc etc.

It turns out we need a science of that because scientists aren’t always very good at it.

Our first cheer explains more: 1. Implementation means we capitalise on the value of good research.


This is a problem known as “the second translational gap” in health research. The first gap is between breakthroughs in the lab, and translating these clever science things into treatments (medications, equipment, diagnostic tests) that can help patients. The second gap is between the research that evaluates the effectiveness of those treatments (such as randomised controlled trials and systematic reviews), and translating those equally clever science things into treatments that do help patients.

Too often, the outputs of such research just sit on library shelves or in guidance documents. The knowledge that can help us choose the best treatment for patients, or understand the best way to deliver care, is lost in the gap.

Another term for this is ‘research waste’. The time, the effort and, yes, the money that go into research are also lost if we fail to translate those resources into something that people can actually use. One of the older uses of “implement”, deriving from its Latin meaning of “filling up”, was “fulfilment” in the sense of fulfilling a debt. I like this. As scientists, we owe a debt: to our funders, to our participants, to anyone relying on us to Actually Do and help. Too often the debt remains unpaid.


Not this kind of filling but look I needed to get a cat in here somewhere.

If you’re an implementation scientist, you think research and the knowledge it produces only become truly valuable once they lead to The Actual Doing of something. Predictably, The Actual Doing is no easy task. How do we tackle that? As the great hair advert once said, now for the science part.

I strongly believe that we need to do much, much better at getting knowledge from research out to the people and places that can use it. I think part of how we do this is to recognise that the translation problem needs to be treated with the same respect as we treat the clinical problems themselves. We need to properly measure and evaluate our translation efforts, test them like we would with a treatment. We need to report what we find clearly and consistently, and we need to use models and theories that help us predict and explain what happens. More on this in the second cheer…

 

 
