Talk, Tailor, Test & Track: Simple Rules For Complexity

Complexity theory is pretty sexy in implementation research right now. Like, Ian Malcolm in the first Jurassic Park talking about chaos and strange attractors kind of sexy. I must confess however to originally having some scepticism. It’s all very well talking about how “complex” the system is, but what does that mean in practice? What do we do with that knowledge?

Fortunately, this does seem to be moving on. More recent papers actually talk about the implications of this for how we work, what it looks like in reality to “embrace complexity”, and how this is different to business as usual (check out, for example, SHIFT-Evidence by the CLAHRC North West London team, or the recent BMJ paper on complexity and improvement by Jeffrey Braithwaite).


You’re welcome.

Nevertheless, a lot of it is still described in somewhat impenetrable academic-speak. Fractals, non-linear dynamics, emergent properties etc etc. To be fair, these are publications from academic journals so the academic speak is appropriate there. In the spirit of making theory more accessible though, I wanted to break it down, and also to think about what *doing* a complexity approach means in health research.

So, what does complexity theory really mean? I’ve boiled it down to four actions: Talk, Tailor, Test, and Track.


Lots of the complexity papers describe health as ‘a living system’. It has People in it, and there’s nowt as queer as folk. They tend to have their own opinions. They have burdensome histories, they have urgent presents, they have desired futures. They sometimes don’t do what we want. They almost never do what we expect. How do we tackle that? We talk. And we listen.

People react to things, especially New Things that we’ve dropped on them. We need to understand that they are ‘active mechanisms’ (in academic speak) in what happens – how they react will be crucial to whether our shiny New Thing works or fails.


Your shiny New Thing might not actually be what people think fits them best.

Their environments are complex. There are ‘hidden variables’ (academic speak), meaning stuff going on that you might not know about if you only take a top-level view. The way things are actually done might not look much like the tidy flow chart that the chief exec has on their wall. If you only plan to intervene based on the flow chart, then you’re probably going to have a shock.

There are more academic ways of trying to understand this such as ethnographic observation and soft system modelling. But, in the interests of simplicity, probably the most obvious way to get to grips with this is to have a conversation. We talk.

So, now we (hopefully) understand our “living system” a bit better. We have a sense of what really happens, and what people really think. But now what? How do we use that to try to improve our projects?

We Tailor.

Tailoring means acknowledging that “one size does not fit all”. It’s about matching our New Thing to what the people there want or need, or fitting our New Thing better to how things are actually done on the ground. This might mean adapting how something is done locally, so it fits with their workflows and resources. It might mean framing the project itself so it matches with their priorities, or helps them meet a new national directive. At its core, this is about understanding that the ‘research to practice gap’ is less about research jumping over a gulf than it is about research not fitting into spaces that exist in practice.

This is about recognising that local diversity is not noise to be analysed out but part of if and how something works, and if we continue to neglect it, we will continue to have stuff that doesn’t work. A colleague of mine described this perfectly as “no more drag and drop interventions”.

It’s important to acknowledge that there is a tension here between tailoring and the traditional research aims of replication, standardisation, and generalisability. In research we typically want things to be The Same everywhere, both for purity in our testing approaches and because this then makes it easier to roll out (if it’s found in those tests to be effective). If it’s not the same, then how do you efficiently scale up something that is different everywhere? Personally I think there’s a fascinating space here to explore how we can identify and work with the core ingredients of a New Thing (which would be The Same, and delivered and tested everywhere) but also map out where there is potential for adaptation. This would put the pressure on us to define exactly what works in our New Thing (what are the core features that are essential to making something happen) but also how it works – what are the fundamental mechanisms that are at work, even if they are put into action in different ways? How are these activated in specific contexts, and who does this?


We need to identify and report what should look the same, and what’s different…

So, we talked (and listened) and we tailored. Now we just get stuck in and deliver, right? Not quite…

We Test. Yes, we now run something and check the effect. But, if you’re a complexity type, you think it’s better to start small, and to test repeatedly. Think feedback loops and iterative learning (this is where learning health systems fit in). This stage acknowledges that any amount of talking can’t beat a live run, where you learn a huge amount about what works and what doesn’t. Things Will Go Wrong. This is fine, if we’ve made space in our projects to learn from it. I’ve known of trials where it became obvious very early on that the intervention wasn’t being delivered, because it turned out people couldn’t do it properly, or the context changed (see below). But the trial still trundled on. This is arguably an enormous (and unethical) waste of time and money. This is why local testing, on the ground, with the users, is so essential. Fail faster, as the adage goes.

Ok, ok, we did all that. Now we just do it, right? Almost…

Now, we Track.

The key thing now is that things change over time (emergent properties in academic speak). There’s a paper by Carl May and colleagues that explains this really nicely: context is not just a where, but a when. Changes can be surprises – staff can leave, new leadership comes in, competing priorities or directives emerge, funding gets cut. Or they can be deliberate – local adaptations (such as the ‘tinkering’ that iPARIHS talks about), for example through the iterations you’ve undertaken in the “test” stage. They can be both deliberate and surprising for the research team, as when users ‘game the system’ or hack what we gave them to do something else. These are our ‘non-linear dynamics’, which is a fancy academic way of saying “blimey, we never thought that would happen”.

Fundamentally, we need to treat time as a variable, and change as a mechanism of action rather than a deviation from course. We need to understand implementation as a process rather than a thing. Longitudinal data collection is our friend here.


Things look different over time…

So there we go. Much of the above is of course simplified, and there’s a lot more that complexity theory adds, or challenges, about health services research and implementation, and there are certainly challenges for complexity approaches too (the best ways of achieving these things in practice, and how to adequately report and evaluate them). Nevertheless, I think these are the actionable and hopefully sensible take-aways for implementing New Things.

What do you reckon? Are they surprising, or obvious? Do they help? Let me know in the comments or on Twitter.

PS. We have a paper that explores the implications of these things for an “intervention” approach to health research. How do we include the Talking? How do we acknowledge change? You get messy.


My Implementation Scientist: Quick Start Guide

Well done on purchasing your all-new implementation scientist! Please note that coffee stains are part of the original design and there are no refunds.

How To Use
* Implementation science is the study of methods to promote the uptake and application of research findings. Your new implementation scientist can therefore provide support to achieve implementation outcomes, including: adoption, diffusion, acceptability, sustainability.
* We strongly advise that you deploy your model as early as possible during the research process. This will ensure that implementation factors are addressed throughout, including during design and evaluation. Users have reported problems similar to those encountered with health statisticians when they were first introduced, with the models only being activated at the end of the process “to sort that mess out”.

Why does my model keep saying the word context?
Implementation scientists love context. Context can be understood as the Who, Where, and When of how something is implemented. Neglect of contextual factors may cause models to overheat.

My model keeps trying to deliver Local Adaptations which are ruining my intervention fidelity. How do I get them to stop?
Local adaptation is not incompatible with replication and fidelity. Many studies increasingly recognise the need for tailoring of some delivery elements, whilst maintaining a standardised core of key ingredients.

What accessories does my model come with?
All models come pre-programmed with a selection of implementation theories and frameworks. The Theory-Builder specialisation can be purchased separately. Please note that some users have reported these units becoming stuck in “add/refine construct” mode without ever progressing to “test/apply theory”.

I asked my model how we can get the end users to engage with the research and they just said “Talk to them.” Is this an error?

Things To Avoid
* We must reiterate that your implementation scientist is best used early and often. As well as laying the necessary groundwork for successful implementation in the future, lack of use can lead to them becoming dusty, which may aggravate the allergies on some models.
* Models are not restricted to the use of either qualitative or quantitative data alone, and may gravitate toward mixed-methods approaches.
* Models cannot magically create “pull” environments for “push” studies to deliver knowledge into. Problems with lack of fit can be avoided through early use and consideration of end user needs. Please consult the operating manual for further details.



Quick reflections on #KMb2018 – coproduction and cultivation (and Bananarama)

I attended the Knowledge Mobilisation Forum this week, and wanted to get down some quick reflections on the pressing questions and key themes from the conference. This is very much a set of reflections for KMb community members, though I will try to write a more accessible version when I have time!

1. There is a tension between recognising that we need to embed KMb and foster a culture of KMb (so that KMb doesn’t just disappear if the person responsible moves jobs), and recognition that KMb often does happen through specific individuals who have, e.g., facilitation skills, time to develop relationships, and willingness to work across boundaries. What are the implications of this? How do individuals help establish a culture that will outlast them, or how do organisations take responsibility to sustain or reinvigorate KMb if individuals move on?

2. Debates persist around what knowledge is or isn’t, although perhaps this is now less a debate than trying to work out what we *do* with the fact that there are many kinds of knowledge, with unequal power or legitimacy in different settings, expressed in different ways, some explicit and some intangible. Certainly the knowledge pipeline and any automatic privileging of research knowledge are both considered debunked or unhelpful. As someone who worked for years in trials and systematic reviews, I wonder though if this is specific to the KMb community, and are those who produce research knowledge thinking like this? Or is this seen as an ‘implementation problem’ that KMb types are supposed to provide workarounds for (fix that leaky pipeline!)?

3. Boundary objects are more complicated than they appear. Defining them, making them, measuring their impact. No, you can’t just say “so basically what I’m looking at is X as a boundary object” for anything in KMb. I am fully guilty of having done this and will stop it now.

4. Coproduction, involving knowledge users, was essential and consistently discussed across pretty much all the workshops, talks, and across all models and methods. How KMb is conducted was impressively diverse, and there isn’t a magic trick that guarantees KMb will happen or a specific method that has *the* KMb stamp of approval – focus groups, stepped wedge trials, statistical models, Lego, were all in play. But whatever the method, the way of working was considered more effective if it was collaborative, working with the knowledge users and coproducing knowledge itself, to avoid making assumptions, to recognise those different types of knowledge, and to understand what problems knowledge is actually needed to solve. This relates to my Bananarama Principle of KMb – it ain’t what you do, it’s the way that you do it, and that’s what gets results.


Early KMb Pioneers

5. Sorry Health, but you ain’t special. Education, social work, even law, all face similar challenges and pressures, and we need to learn across different knowledge settings.

6. I’m not the only “lapsed statistician” in KMb who has moved from doing a lot of quantitative work into more qualitative work, often inspired or necessitated by a focus on coproduction. Interestingly, I suspect there’s a fair body of quantitative expertise in KMb – do we too often assume everyone is inherently on the soft measurement side? There seemed to be a growing interest in how quantitative and participatory approaches may combine, which is something that interests me enormously in terms of Learning Health Systems and how big data and community/patient involvement can work together to improve care.


Not your average patient: Thoughts on expert PPI

For some time now, researchers have recognised that research should be informed by the “end users” of their work. Collaborative working with those who we expect to be users of our findings, and coproduction to improve the research itself, has been heralded as the key to making research more valuable, impactful, sensitive and relevant.
Unfortunately, there are serious problems with this approach. Often there are only certain individuals involved, and typically those with a special passion for or experience of the topic of study. They can clearly not be said to be representative of their group, given their unusual interest in research, which immediately marks them out as unusual. There is, to date, no formally agreed way of assessing what impact they have on research team working, beyond fairly subjective appraisals of “broader perspectives” and “challenging the status quo”. Increasingly a system has arisen where these individuals are considered semi-professional researchers in their own right, encouraged to take on full research roles, with little consideration for whether they can legitimately straddle both academia and the ‘real world’ of health and truly continue to offer unique, non-research insights. They expect to be paid for their contributions, given equivalent membership on study teams and even demand authorship.
I’m speaking of course about clinicians.
I hope it’s clear that the above is intended as a parody of the views I hear about patients involved in research. I’ve often said that when patients use their experience to give feedback on research it’s ‘bias’, but when clinicians do it’s ‘insight’. We rightly place particular value on clinician-researchers, as they bring a special combined set of skills and perspectives. This is conversely devalued in patients, where acquiring those skills is seen as ‘going native’ or rendering them ‘unrepresentative’. We recognise that hybrid clinician-researchers can be great champions of research to colleagues who may not be that interested in it, and we appreciate the role they have in helping us connect with their wider communities, as they’re able to understand both sides. Patients who could play similar roles are too often dismissed as “expert” or “professional” PPI, with academia, a field not known to dislike expertise, suddenly taking issue with people having a lot of experience in something.
We make demands of patients that we never would of our colleague clinicians. We don’t expect them to ‘represent’ their entire discipline or to put aside their particular experience and passions – quite the opposite. We certainly don’t expect them to work for free, and would never consider their acceptance of a salary evidence that they’re “in it for the wrong reasons.” We see that their ability and willingness to take on this joint role does make them far from the average – it makes them exceptional. I can’t tell you how often I have heard complaints that people who get involved in PPI aren’t “the average patient”… it’s true, but to put it another way, would they want to work with just an “average” research team?
P.S. One rebuttal to this need for expert or experienced PPI may come from researchers who need a ‘research naive’ perspective on their work. There’s no problem with this, but I would point out that achieving this takes effort and thought, to genuinely reach beyond academic borders and engage, in a meaningful way, with people who don’t even know what research is (it’s a challenge we’re grappling with and that I’ll blog about in future!). Often I see researchers make no attempt to do this, and instead simply consult an established group or public contributor who is more accessible to them because they’re already part of some research. The same researchers will then dismiss the feedback they get (and I think a “professional patient” can probably still give a better estimate of a naive view than a researcher can) because the patient “isn’t representative”. This makes me furious. If you need ‘representative’ input, it’s on you to find it – you should never request feedback from a patient which you then undermine and ignore.
In a nutshell: both ‘research naive’ and ‘expert’ patient involvement can be valuable and necessary for different studies (or sometimes both are needed, with expert involvement to help us think about how to reach beyond our normal borders). Our systems and structures of involvement need space and support for both, and the responsibility for providing that is with us, the researchers, not the patients.

Stories as the original empathy machines.

Yesterday, I talked about design as being “empathy in action”. I ended the post by questioning how Learning Health Systems can make sure empathy with their users – patients and professionals – is part of the drive for action and improvement.

There’s been a fair bit of chat lately about the potential of Virtual Reality tools to act as ‘empathy machines’, which enable wearers to experience first-hand the perspective of others. Making sure Learning Health Systems come wrapped up with fancy tech like VR might appeal to some people (probably people who think adding the word “wearable” to something immediately makes it usable). But we have a more old-fashioned way of doing this: we use stories.

The power of stories to evoke empathy, and consequently to encourage change, isn’t new. And the power is already being embraced in health care. Experience Based Co-Design makes patient stories central to the process for identifying where improvement is needed and encouraging action to make those improvements happen. Sites like Care Opinion recognise that stories about care can be hugely effective in communicating what the experience of care is really like, and how it should be made better – see, for example, this excellent post “Stories: the original data for improvement”. The post includes the quote “The plural of story isn’t data. The plural of story is culture.” I think this is especially worth reflecting on, given that Learning Health Systems talk about achieving a culture of data-driven improvement as well as a structure to enable it to happen.

If you’re still thinking that VR might be better, it’s worth reflecting on this: critics have disputed whether virtual reality can make you more empathic, but more crucially for us, it may not encourage you to take action. Putting yourself in someone else’s shoes takes you out of your own, and distances you from your own responsibility or culpability. Recognising that something is not your experience, but is an experience that you can impact and you can try to change, is a key part of using stories for improvement.

Assuming a role to try to provoke empathy can also be misleading – people who put on a blindfold (or a VR headset) to ‘become’ blind end up focusing on the difficulty of the experience for themselves, and assuming that all blind people struggle the same way, as opposed to learning from blind people how they have adapted and managed, and what they think the real problems are.

I think it’s essential that neither researchers nor health professionals think they can ‘take’ the perspective of a patient. We should listen to that perspective as told to us by patients themselves. We should recognise ourselves as outsiders to those stories, but as people with responsibility to act on them. Empathy in action, once again.




Empathy in action: the value of codesign.

I’m exploring how principles and methods from Design can help us to involve patients and health professionals in implementation research. Design focuses on two things that I think are, or should be, crucial to knowledge mobilisation: empathy and action.

Design is about empathy: user experience is key (that’s what the UX in UX design stands for), with a focus on how the user feels and what they see and do. I first came across this when reading about Experience Based Co-Design, which talks about the neglect of ‘aesthetics’ in much health service design. How patients feel, what they see, hear, even smell, when they interact with a service, are all crucial to how they engage or disengage. Designers talk about “touchpoints” of interaction, when emotions are particularly high or low, and consequently engagement with a service is either supported or threatened.

Design is about action: Service design and UX design are about producing things that people do things with. Designers even talk about “a bias to action” in development, with a focus on testing things in practice rather than just theorising. You need to see something in action to understand it, and to improve it.

When it works well, and when our end users are fully involved, design achieves both: empathy in action. Producing services or outputs that empathise with the user, to offer them the best experience, sensitive to their circumstances, and help them as engaged actors to do what they need to do.

Codesign can help different users to empathise with each other too. We conducted a study which brought together patients and health professionals to codesign new interventions around medication safety. By talking about the actions taken, or not, and by centring the actual experience of both patients and professionals in the process, we found that both groups could better see the perspective of the other.

Action is key to an effective Learning Health System – using data to drive action to improve care. How can we ensure empathy has a place too? What would the Feeling Health System look (and feel!) like?


Theory Vs The Thing Explainer

I’m rather boringly enthusiastic about theory. I genuinely think theory is useful, and practical, and valuable, and not just a way of showing off your intellectual academicness using impenetrable language and questionable philosophical references.

Quite often though, when academics talk about theory, they only do so using impenetrable language and with lashings of questionable philosophical references (“name check classical philosopher of unclear relevance” is a popular square on the conference bingo card).

The Simple Writer is a fab little tool by Randall Munroe of xkcd fame. It encourages you to describe things using only the 1000 most common English words. He wrote an entire book using this principle, explaining a variety of scientific things using the simplest language possible.

In the spirit of “honestly I think theories make a lot of sense and underneath the academic-babble they’re very practical”, I decided to try and describe three theories I’ve used in my research using the Simple Writer. Results below – in each example, the first image is some actual text about the theory from papers, and the second picture is my attempt at a simple version.
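(If you fancy trying the same principle yourself, here’s a minimal sketch in Python of how a Simple Writer-style checker might work. The word list below is a tiny toy stand-in I’ve made up for illustration – the real tool checks against roughly the 1,000 most common English words.)

```python
import re

# Toy stand-in for the ~1,000 most common English words used by the real tool.
ALLOWED = {
    "the", "a", "an", "of", "and", "to", "in", "is", "it", "that",
    "people", "do", "things", "work", "make", "new", "thing", "how",
}

def flag_complex_words(text: str) -> list[str]:
    """Return the words in `text` that fall outside the allowed vocabulary."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w not in ALLOWED]

print(flag_complex_words("How do people make the new thing work in practice?"))
# flags "practice", the only word not in our toy list
```

Swapping in a genuine top-1,000 word list is the only change needed to make this behave like the real thing, though the online tool also handles plurals and other word forms more gracefully.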

I started with Normalisation Process Theory. First of all, none of those three words were allowed, which was a great start. On the whole though I think this one worked the best. I actually think this should replace the Wikipedia page for NPT which is…extravagant, let’s say, in its use of academic bluster. Agentic contributions operationalised within existing socially patterned matrices anyone?




Next was PARIHS (Promoting Action on Research Implementation in Health Services). Context, evidence and facilitation were all no-nos. You really see why he called it The Thing Explainer with this one.


Lastly, the RE-AIM framework. Reach was allowed, but not the other four. I left in “health” in the Simple version because I think that should be allowed.


What do you reckon? Do the Simple Writer versions communicate what’s useful about the theory? Do they make it seem more or less valuable? Do they capture the key elements, or are they lost in translation?

One reaction I might expect is that the Simple Writer versions expose these complicated theories as “just common sense”. I’m absolutely fine with that. For me, people say something is common sense when they agree with it and think it’s obviously a good idea. They may well not have ever thought it before and it may be far from common, but common sense gets employed as a shorthand for “why would you not do it?”. I’d be delighted if people felt that way about theories.


Pencils in Space, and other fairytales about technology.

You must have heard the story. It gets rolled out at conferences and in seminars as an example of how simple, straightforward solutions are the best way to solve complex problems.

During the space race, so it goes, those clever folk at Nasa had a problem: pens don’t work in space. Astronauts need pens. Maybe they’re big doodlers. For whatever reason, this was a Problem. So, many years, many bright minds, and millions of dollars later, they finally come up with a better, more space-appropriate pen.

The Russians? They just used a pencil.

Or, at least, that’s how it goes…

Sadly, at least if you like simple-solution-stories or berating Americans, the story isn’t true. Pencils don’t really solve the problem, and in fact can cause a whole load more. Bits of lead can flake off and interfere with delicate shuttle equipment. Pencils are also flammable, and fire it seems is a big no-no when you’re floating around the planet with only the vast unfathomable deep of space to escape to.

(Nasa also didn’t waste taxpayer money to invent the new pen. It was a private investor who stumped up the cash.)

Two things strike me about this story. The most obvious is the lesson that deceptively simple solutions are often just that – deceptions. They ignore the complicated context in which problems happen, and underestimate the risk. Being in space turns out to be pretty complex, and the risks rather high.

The second thing that strikes me is the popularity of the story itself. I’ve heard it twice already this year and we’re only in January. For me, this speaks to our desire to find simple solutions that bypass the cost, in time, money and expertise, of finding technology that works. It’s easy to laugh at the NHS, with its fax machines and its pagers, but it’s much harder to confront the challenging reality of helping health professionals get things done in complicated and risk-laden environments. But it’s vital that we do, or fax machines and pagers are going to stay around even longer. We need to think about where these things are supposed to work, and the consequences (and risks) of using them.

Beyond that challenge of technology implementation (understanding how things work, or fail to work, where they need to work, and for whom), there is a challenge of knowledge mobilisation. How do we make sure that our stories of what works are true, and fair? How do we debunk the myth that simple solutions can bypass complex problems? How do we fight the fairy tales?

cat in space

In space, no-one can hear you knock both pens and pencils off the table.




Three cheers for Implementation Science #3: You Make Me Want To Be A Better Researcher

If you didn’t get the As Good As It Gets reference then you’re too young to be reading this post. Or I’m too old to be attempting movie references in titles. Let’s call it even.

Our first cheer was for the role of Implementation Science in Doing Things with research. We need to cross that pesky 2nd Translational Gap and ensure the potential value of research knowledge is realised in practice. The second cheer was for the role of Science in Implementation. We need theory and robust evaluation to bridge that translational gap.

Our last cheer takes a slightly different view of that bridge. Rather than being a clean one-way system with our august research knowledge scientifically skipping its way over to the grateful naïfs on the other side, Implementation Science recognises we need this traffic on the bridge to travel both ways.

This is because while there may be a gap in our translation efforts, there isn’t necessarily an easy research-shaped gap on the other side, just waiting to be filled in with our sciencey polyfilla. You can’t work in implementation research for very long without being confronted by an uncomfortable truth: a lot of research just doesn’t fit.

cat flap

Crossing boundaries is hard, and sometimes you just don’t fit.

This might be because it doesn’t answer the right questions. Or the answer it provides isn’t feasible in the real world. Or it makes assumptions about how things work that turn out to be wrong, and crumbles under the pressure of how things really are.

In my opinion, science can help us a fair way with this. Better evaluations, better synthesis of existing evidence, better use of theory to anticipate problems and work out how to overcome them. But then, I am a scientist (ever heard the saying “never ask a barber if you need a haircut”?). And crucially it, and we, can’t do it alone.

cat haircut

When you just ask for a bit off the sides, and they give you an entire randomised controlled trial (it’s a science joke…)

Co-production, codesign, collaborative working – whatever you want to call it – is the best tool in the box for tackling this problem. Actually reaching across the gap in the first place to ask the users of our research, the patients, health professionals, policy makers, what they need, what would work, what would fall flat on its face. Working together, not just to translate the knowledge into practice, but to cocreate that knowledge itself. This can be messy, and challenging, but I have no doubt that it makes for better research.

Implementation is about Actually Doing with research. It turns out that, often, Better Doing = Doing Better.

better together

Knowledge users and knowledge producers: better together.



Three cheers for Implementation Science: #2 Science Vs The Gap

Our first cheer was about the value of implementation science for getting the best value out of research itself. Implementation science is about helping us cross the “2nd translational gap” and getting knowledge from research to the people and places who need it.

The second cheer is about the value of implementation science for realising that this is a hell of a tough nut to crack. Crossing the gap doesn’t just mean tinkering around the edges or adding a hasty ‘dissemination plan’ at the end of a project. The evidence base on implementation warns us that getting research into practice is a complex problem, and we need to tackle that problem with appropriate tools. This means robust evaluations and theory-informed approaches. Implementation science recognises that the research underpinning translation efforts needs to be just as sound as the research we’re trying to translate.

Let’s put this another way: Would you buy a sign from this guy?

bad sign maker (via buzzfeed)

I’m guessing no. And yet, as researchers, so often our plans and methods for getting health research into practice, and getting the users (such as patients and health professionals) to buy in to our ideas, are poorly put together. Why should people trust that the research itself is rigorous, valid, and uses the best evidence, when our own efforts to communicate research so often aren’t?

Implementation is about Actually Doing things with research. Implementation science means recognising we need to Do Translation Better (and more scientifically). This brings us to our third and final cheer: When translation helps us make the research itself better.
