Why data-driven care must be patient-driven care – final report

I haven’t blogged much at all lately (blogs seem to have lost out to Twitter threads as academics’ preferred way of sharing things online), but I wanted to make sure this final report was here on the site for anyone who finds it. This is the summary of the work I did with public contributors to understand how and why patients can, should and must be involved in the use of health data to improve health services. Huge thanks to the amazing Chris Redford for design and illustration, and for putting up with my endless “actually can I change that line again sorry” emails.

The academic papers reporting the work are open access:

The paper reporting our conclusions and how we reached them is here

The paper reporting our reflections about the process and what we learned about working together is here

Some notes on the report: we really wanted to provide an alternative to a paper written in academic language – something that would hopefully be more accessible to people outside academia, particularly patients but also health professionals.

We made some decisions that I think have made the report a more useful summary than the academic papers. We boiled it down to key messages, broke it into easier-to-navigate sections, and used illustration and colour to support our points. We also tried to keep it brief (compared to the full papers!).
We tried to think about who would be reading and the questions they would ask. And we tried to leave people with useful questions to take away, and examples to look to for further inspiration.

Nevertheless, making outputs that are useful for different audiences is a really big task, and although I’m really pleased with the report, I think there is definitely room for improvement.

If we had had more time and resource, I think we would have focused on specific stakeholder groups and produced something for each, rather than trying to squeeze multiple audiences into one. I also think ‘accessible’ needs to be taken seriously at all levels, and this report isn’t accessible to people with visual impairments, nor available in other languages. All of that takes time and money though, and even in a dedicated KM project we ran out of both. I think we were right to spend more time debating what the findings were, rather than focusing on dissemination. But I want to be open that I know this is still a job half done, and I aim to do more and better in future.


The Elephant of Digital Health

We seem to be constantly on the verge of a revolution in digital health in the NHS, at least if grand proclamations about ‘harnessing’ data, ‘disrupting’ traditional services etc. are to be believed. We see news items about the latest all-singing, super-advanced piece of kit that will perform surgery in seconds, or diagnose any problem from a fingerprint, or a ‘wearable’ that will make patients exercise more, sleep better and eat only acai berries.

 

Alongside this is the consistent grumbling about the slow and outdated way the NHS works on a day-to-day basis, using pagers and fax machines and even paper to collect information, which then can’t be shared across the boundaries of primary and secondary care, or even between units within the same service.

 

One thing I’ve noticed, whenever the latest shiny bit of kit is rolled out, is that the justification for the expense of that ‘innovation’ is often wrapped within a complaint about the everyday processes of the NHS. “It’s crazy that we have a health care system so dependent on pagers! Here is my gene-classifying radar drone that will benefit 0.01% of the NHS population to help!”… I call this the Elephant of Digital Health.

 

In the parable of the blind men and the elephant, each man who tries to determine what the elephant is gets it wrong, because they only have part of the whole – one says it’s like a ‘rope’ (the one who grabs the tail), another a ‘tree’ (the legs) and another a ‘snake’ (the trunk). Too often those selling a new gadget or system seem to talk as if they recognise the whole animal, but actually they’re just focused on one area. They’ve made something that fits into one part of the broad beast of ‘digital health’ but neglects the wider state of things. I’ve been at digital health conferences where someone selling their fancy gadget will bemoan the lack of integrated records and the slow pace of communication, and talk as if they want to address it, but their VR 3D headset doesn’t seem to have anything to do with those problems. They’re claiming to see the whole thing, while in reality staying deliberately blind to all but one element of digital health.


Are we even sure it’s an elephant at all??

 I think it’s wrong to ignore that these are all part of one thing, or we’re just like the blind men. It’s especially wrong to design something that will address only one part, and assume it will solve the problems elsewhere. But equally I think it helps to recognise that digital health is a complex beast, and we could and should be more specific when we talk about digital health.

 

We can even use the elephant. The legs, I think, are the infrastructure, or lack thereof: the absence of working digital systems on site, and the struggle to use digital means to interact both with patients (e.g. using Skype or even just the telephone for appointments, or enabling patients to book or change appointments online) and with other professionals (electronic health records that don’t travel between services). This is the stuff that frustrates people the most, that keeps us stuck, that stops the NHS moving forward.

 

The trunk, I think, would be big data and health analytics – it can guide us ahead and drive learning. It can also, if we’re not careful, seem like a snake. The tail can be the gadgets – the 3D models and the lasers and the apps. I’ve rather unsubtly put this fancy whizzy stuff at the, er, ‘bottom’ end.

 

Crucially, neither can go anywhere without the legs. We’re investing in high tech gadgets while we don’t have stable wifi in hospitals. We’re imagining huge data sets with sophisticated analytics when patient data is collected and stored on dusty sheets of paper. Despite infrastructure seeming more mundane than an AI Life Saving Robot, I think this is where the revolution could actually happen, and where the focus in digital health really needs to be.

 

Most importantly of course, we mustn’t forget about who’s riding the elephant – the patients we’re trying to support. But I think I’ve tortured this metaphor enough for one day…

 


Talk, Tailor, Test & Track: Simple Rules For Complexity

Complexity theory is pretty sexy in implementation research right now. Like, Ian Malcolm in the first Jurassic Park talking about chaos and strange attractors kind of sexy. I must confess, however, to some initial scepticism. It’s all very well talking about how “complex” the system is, but what does that mean in practice? What do we do with that knowledge?

Fortunately, this does seem to be moving on. More recent papers actually talk about the implications for how we work, what it looks like in reality to “embrace complexity”, and how this is different from business as usual (check out, for example, SHIFT-Evidence by the CLAHRC North West London team, or the recent BMJ paper on complexity and improvement by Jeffrey Braithwaite).

[Image: Ian Malcolm, shirt open]

You’re welcome.

Nevertheless, a lot of it is still described in somewhat impenetrable academic-speak. Fractals, non-linear dynamics, emergent properties etc etc. To be fair, these are publications from academic journals so the academic speak is appropriate there. In the spirit of making theory more accessible though, I wanted to break it down, and also to think about what *doing* a complexity approach means in health research.

So, what does complexity theory really mean? I’ve boiled it down to four actions: Talk, Tailor, Test, and Track.

Talk

Lots of the complexity papers describe health as ‘a living system’. It has People in it, and there’s nowt as queer as folk. They tend to have their own opinions. They have burdensome histories, they have urgent presents, they have desired futures. They sometimes don’t do what we want. They almost never do what we expect. How do we tackle that? We talk. And we listen.

People react to things, especially New Things that we’ve dropped on them. We need to understand that they are ‘active mechanisms’ (in academic speak) in what happens – how they react will be crucial to whether our shiny New Thing works or fails.


Your shiny New Thing might not actually be what people think fits best.

Their environments are complex. There are ‘hidden variables’ (academic speak), meaning stuff going on that you might not know about if you only take a top-level view. The way things are actually done might not look much like the tidy flow chart that the chief exec has on their wall. If you only plan to intervene based on the flow chart, then you’re probably going to have a shock.

There are more academic ways of trying to understand this, such as ethnographic observation and soft systems modelling. But, in the interests of simplicity, probably the most obvious way to get to grips with this is to have a conversation. We talk.

So, now we (hopefully) understand our “living system” a bit better. We have a sense of what really happens, and what people really think. But now what? How do we use that to try to improve our projects?

We Tailor.

Tailoring means acknowledging that “one size does not fit all”. It’s about matching our New Thing to what the people there want or need, or fitting our New Thing better to how things are actually done on the ground. This might mean adapting how something is done locally, so it fits with their workflows and resources. It might mean framing the project itself so it matches their priorities, or helps them meet a new national directive. At its core, this is about understanding that the ‘research to practice gap’ is less about research jumping over a gulf than it is about research not fitting into spaces that exist in practice.

This is about recognising that local diversity is not noise to be analysed out but part of if and how something works, and if we continue to neglect it, we will continue to have stuff that doesn’t work. A colleague of mine described this perfectly as “no more drag and drop interventions”.

It’s important to acknowledge that there is a tension here between tailoring and the traditional research aims of replication, standardisation, and generalisability. In research we typically want things to be The Same everywhere, both for purity in our testing approaches and because this then makes it easier to roll out (if it’s found in those tests to be effective). If it’s not the same, then how do you efficiently scale up something that is different everywhere? Personally I think there’s a fascinating space here to explore how we can identify and work with the core ingredients of a New Thing (which would be The Same, and delivered and tested everywhere) but also map out where there is potential for adaptation. This would put the pressure on us to define exactly what works in our New Thing (what are the core features that are essential to making something happen) but also how it works – what are the fundamental mechanisms at work, even if they are put into action in different ways? How are these activated in specific contexts, and who does this?


We need to identify and report what should look the same, and what’s different…

So, we talked (and listened) and we tailored.  Now we just get stuck in and deliver, right? Not quite…

We Test. Yes, we now run something and check the effect. But, if you’re a complexity type, you think it’s better to start small, and it’s better to do this repeatedly. Think feedback loops and iterative learning (this is where learning health systems fit in). This stage acknowledges that any amount of talking can’t beat a live run, where you learn a huge amount about what works and what doesn’t. Things Will Go Wrong. This is fine, if we’ve made space in our projects to learn from it. I’ve known of trials where it became obvious very early on that the intervention wasn’t being delivered, because it turned out people couldn’t do it properly, or the context changed (see below). But the trial still trundles on. This is arguably an enormous (and unethical) waste of time and money. This is why local testing, on the ground, with the users, is so essential. Fail faster, as the adage goes.

Ok, ok, we did all that. Now we just do it, right? Almost…

Now, we Track.

The key thing now is that things change over time (‘emergent properties’ in academic speak). There’s a paper by Carl May and colleagues that explains this really nicely: context is not just a where, but a when. Changes can be surprises – staff leave, new leadership comes in, competing priorities or directives emerge, funding gets cut. Or they can be deliberate – local adaptations (such as the ‘tinkering’ that iPARIHS talks about), for example through the iterations you’ve undertaken in the “test” stage. They can be both deliberate and surprising for the research team, as when users ‘game the system’ or hack what we gave them to do something else entirely. These are our ‘non-linear dynamics’, which is a fancy academic way of saying “blimey, we never thought that would happen”.

Fundamentally, we need to treat time as a variable, and change as a mechanism of action rather than a deviation from course. We need to understand implementation as a process rather than a thing. Longitudinal data collection is our friend here.

[Image: before-and-after photos of cats growing up]

Things look different over time…

So there we go. Much of the above is of course simplified, and there’s a lot more that complexity theory adds to, or challenges, about health services research and implementation. There are certainly challenges for complexity approaches too (the best ways of achieving these things in practice, and how to adequately report and evaluate them). Nevertheless, I think these are the actionable and hopefully sensible take-aways for implementing New Things.

What do you reckon? Are they surprising, or obvious? Do they help? Let me know in the comments or on Twitter.

PS. We have a paper that explores the implications of these things for an “intervention” approach to health research. How do we include the Talking? How do we acknowledge change? You get messy.


My Implementation Scientist: Quick Start Guide

Well done on purchasing your all-new implementation scientist! Please note that coffee stains are part of the original design and there are no refunds.

How To Use
* Implementation science is the study of methods to promote the uptake and application of research findings. Your new implementation scientist can therefore provide support to achieve implementation outcomes, including: adoption, diffusion, acceptability, sustainability.
* We strongly advise that you deploy your model as early as possible during the research process. This will ensure that implementation factors are addressed throughout, including during design and evaluation. Users have reported problems similar to those encountered with health statisticians when they were first introduced, with the models only being activated at the end of the process “to sort that mess out”.

FAQ
Why does my model keep saying the word context?
Implementation scientists love context. Context can be understood as the Who, Where, and When of how something is implemented. Neglect of contextual factors may cause models to overheat.

My model keeps trying to deliver Local Adaptations which are ruining my intervention fidelity. How do I get them to stop?
Local adaptation is not incompatible with replication and fidelity. Many studies increasingly recognise the need for tailoring of some delivery elements, whilst maintaining a standardised core of key ingredients.

What accessories does my model come with?
All models come pre-programmed with a selection of implementation theories and frameworks. The Theory-Builder specialisation can be purchased separately. Please note that some users have reported these units becoming stuck in “add/refine construct” mode without ever progressing to “test/apply theory”.

I asked my model how we can get the end users to engage with the research and they just said “Talk to them.” Is this an error?
No.

Things To Avoid
* We must reiterate that your implementation scientist is best used early and often. As well as laying the necessary groundwork for successful implementation in the future, lack of use can lead to them becoming dusty, which may aggravate the allergies on some models.
* Models are not restricted to the use of either qualitative or quantitative data alone, and may gravitate toward mixed-methods approaches.
* Models cannot magically create “pull” environments for “push” studies to deliver knowledge into. Problems with lack of fit can be avoided through early use and consideration of end user needs. Please consult the operating manual for further details.

 


Quick reflections on #KMb2018 – coproduction and cultivation (and Bananarama)

I attended the Knowledge Mobilisation Forum this week, and wanted to get down some quick reflections on the pressing questions and key themes from the conference. This is very much a set of reflections for KMb community members, though I will try to write a more accessible version when I have time!

1. There is a tension between recognising that we need to embed KMb and foster a culture of KMb (so that KMb doesn’t just disappear if the person responsible moves jobs), and recognising that KMb often does happen through specific individuals who have, e.g., facilitation skills, time to develop relationships, and willingness to work across boundaries. What are the implications of this? How do individuals help establish a culture that will outlast them, or how do organisations take responsibility to sustain or reinvigorate KMb if individuals move on?

2. Debates persist around what knowledge is or isn’t, although it’s perhaps now less a debate than trying to work out what we *do* with the fact that there are many kinds of knowledge, with unequal power or legitimacy in different settings, expressed in different ways, some explicit and some intangible. Certainly the knowledge pipeline and any automatic privileging of research knowledge are both considered debunked or unhelpful. As someone who worked for years in trials and systematic reviews, though, I wonder if this is specific to the KMb community – are those who produce research knowledge thinking like this? Or is this seen as an ‘implementation problem’ that KMb types are supposed to provide workarounds for (fix that leaky pipeline!)?

3. Boundary objects are more complicated than they appear. Defining them, making them, measuring their impact. No, you can’t just say “so basically what I’m looking at is X as a boundary object” for anything in KMb. I am fully guilty of having done this and will stop it now.

4. Coproduction, involving knowledge users, was essential and consistently discussed across pretty much all the workshops and talks, and across all models and methods. How KMb is conducted was impressively diverse, and there isn’t a magic trick that guarantees KMb will happen, or a specific method that has *the* KMb stamp of approval – focus groups, stepped wedge trials, statistical models and Lego were all in play. But whatever the method, the way of working was considered more effective if it was collaborative: working with the knowledge users and coproducing knowledge itself, to avoid making assumptions, to recognise those different types of knowledge, and to understand what problems knowledge is actually needed to solve. This relates to my Bananarama Principle of KMb – it ain’t what you do, it’s the way that you do it, and that’s what gets results.

[Image: Bananarama]

Early KMb Pioneers

5. Sorry Health, but you ain’t special. Education, social work, even law all face similar challenges and pressures, and we need to learn across different knowledge settings.

6. I’m not the only “lapsed statistician” in KMb who has moved from doing a lot of quantitative work into more qualitative work, often inspired or necessitated by a focus on coproduction. Interestingly, I suspect there’s a fair body of quantitative expertise in KMb – do we too often assume everyone is inherently on the ‘soft measurement’ side? There seemed to be growing interest in how quantitative and participatory approaches might combine, which is something that interests me enormously in terms of Learning Health Systems and how big data and community/patient involvement can work together to improve care.


Not your average patient: Thoughts on expert PPI

For some time now, researchers have recognised that research should be informed by the “end users” of their work. Collaborative working with those who we expect to be users of our findings, and coproduction to improve the research itself, has been heralded as the key to making research more valuable, impactful, sensitive and relevant.

Unfortunately, there are serious problems with this approach. Often only certain individuals are involved, typically those with a special passion for or experience of the topic of study. They can clearly not be said to be representative of their group, given their unusual interest in research, which immediately marks them out as unusual. There is, to date, no formally agreed way of assessing what impact they have on research team working, beyond fairly subjective appraisals of “broader perspectives” and “challenging the status quo”. Increasingly a system has arisen where these individuals are considered semi-professional researchers in their own right, encouraged to take on full research roles, with little consideration for whether they can legitimately straddle both academia and the ‘real world’ of health and truly continue to offer unique, non-research insights. They expect to be paid for their contributions, to be given equivalent membership on study teams, and even demand authorship.

I’m speaking of course about clinicians.

I hope it’s clear that the above is intended as a parody of the views I hear about patients involved in research. I’ve often said that when patients use their experience to give feedback on research it’s ‘bias’, but when clinicians do it, it’s ‘insight’. We rightly place particular value on clinician-researchers, as they bring a special combined set of skills and perspectives. The same combination is devalued in patients, where acquiring those skills is seen as ‘going native’ or rendering them ‘unrepresentative’. We recognise that hybrid clinician-researchers can be great champions of research to colleagues who may not be that interested in it, and we appreciate the role they play in helping us connect with their wider communities, as they’re able to understand both sides. Patients who could play similar roles are too often dismissed as “expert” or “professional” PPI, with academia, a field not known to dislike expertise, suddenly taking issue with people having a lot of experience in something.

We make demands of patients that we never would of our colleague clinicians. We don’t expect them to ‘represent’ their entire discipline or to put aside their particular experience and passions – quite the opposite. We certainly don’t expect them to work for free, and would never consider their acceptance of a salary evidence that they’re “in it for the wrong reasons.” We see that their ability and willingness to take on this joint role makes them far from average – it makes them exceptional. I can’t tell you how often I have heard complaints that people who get involved in PPI aren’t “the average patient”… it’s true, but to put it another way: would they want to work with just an “average” research team?

P.S. One rebuttal to this need for expert or experienced PPI may come from researchers who need a ‘research naive’ perspective on their work. There’s no problem with this, but I would point out that achieving it takes effort and thought, to genuinely reach beyond academic borders and engage, in a meaningful way, with people who don’t even know what research is (it’s a challenge we’re grappling with and that I’ll blog about in future!). Often I see researchers make no attempt to do this, and instead simply consult an established group or public contributor who is more accessible to them because they’re already part of some research. The same researchers will then dismiss the feedback they get (and I think a “professional patient” can probably still give a better estimate of a naive view than a researcher can) because the patient “isn’t representative”. This makes me furious. If you need ‘representative’ input, it’s on you to find it – you should never request feedback from a patient which you then undermine and ignore.

In a nutshell: both ‘research naive’ and ‘expert’ patient involvement can be valuable and necessary for different studies (or sometimes both are needed, with expert involvement helping us think about how to reach beyond our normal borders). Our systems and structures of involvement need space and support for both, and the responsibility for providing that is with us, the researchers, not the patients.


Stories as the original empathy machines.

Yesterday, I talked about design as being “empathy in action”. I ended the post by questioning how Learning Health Systems can make sure empathy with their users – patients and professionals – is part of the drive for action and improvement.

There’s been a fair bit of chat lately about the potential of Virtual Reality tools to act as ’empathy machines’ that enable wearers to experience first-hand the perspective of others. Making sure Learning Health Systems come wrapped up with fancy tech like VR might appeal to some people (probably people who think adding the word “wearable” to something immediately makes it usable). But we have a more old-fashioned way of doing this: we use stories.

The power of stories to evoke empathy, and consequently to encourage change, isn’t new. And the power is already being embraced in health care. Experience Based Co-Design makes patient stories central to the process for identifying where improvement is needed and encouraging action to make those improvements happen. Sites like Care Opinion recognise that stories about care can be hugely effective in communicating what the experience of care is really like, and how it should be made better – see, for example, this excellent post “Stories: the original data for improvement”. The post includes the quote “The plural of story isn’t data. The plural of story is culture.” I think this is especially worth reflecting on, given that Learning Health Systems talk about achieving a culture of data-driven improvement as well as a structure to enable it to happen.

If you’re still thinking that VR might be better, it’s worth reflecting on this: critics have disputed whether virtual reality can make you more empathic, but more crucially for us, it may not encourage you to take action. Putting yourself in someone else’s shoes takes you out of your own, and distances you from your own responsibility or culpability. Recognising that something is not your experience, but is an experience that you can impact and try to change, is a key part of using stories for improvement.

Assuming a role to try to provoke empathy can also be misleading – people who put on a blindfold (or a VR headset) to ‘become’ blind end up focusing on the difficulty of the experience for themselves, and assuming that all blind people struggle the same way, as opposed to learning from blind people how they have adapted and managed, and what they think the real problems are.

I think it’s essential that neither researchers nor health professionals think they can ‘take’ the perspective of a patient. We should listen to that perspective as told to us by patients themselves. We should recognise ourselves as outsiders to those stories, but as people with responsibility to act on them. Empathy in action, once again.

 

 


Empathy in action: the value of codesign.

I’m exploring how principles and methods from Design can help us to involve patients and health professionals in implementation research. Design focuses on two things that I think are, or should be, crucial to knowledge mobilisation: empathy and action.

Design is about empathy: user experience is key (that’s what the UX in UX design stands for), with a focus on how the user feels and what they see and do. I first came across this when reading about Experience Based Co-Design, which talks about the neglect of ‘aesthetics’ in much health service design. How patients feel, what they see, hear, even smell, when they interact with a service, are all crucial to how they engage or disengage. Designers talk about “touchpoints” of interaction, when emotions are particularly high or low, and consequently engagement with a service is either supported or threatened.

Design is about action: Service design and UX design are about producing things that people do things with. Designers even talk about “a bias to action” in development, with a focus on testing things in practice rather than just theorising. You need to see something in action to understand it, and to improve it.

When it works well, and when our end users are fully involved, design achieves both: empathy in action. Producing services or outputs that empathise with the user, offering them the best experience, sensitive to their circumstances, and helping them, as engaged actors, to do what they need to do.

Codesign can help different users to empathise with each other too. We conducted a study which brought together patients and health professionals to codesign new interventions around medication safety. By talking about the actions taken, or not taken, and by centring the actual experience of both patients and professionals in the process, we found that both groups could better see the perspective of the other.

Action is key to an effective Learning Health System – using data to drive action to improve care. How can we ensure empathy has a place too? What would the Feeling Health System look (and feel!) like?


Theory Vs The Thing Explainer

I’m rather boringly enthusiastic about theory. I genuinely think theory is useful, and practical, and valuable, and not just a way of showing off your intellectual academicness using impenetrable language and questionable philosophical references.

Quite often though, when academics talk about theory, they only do so using impenetrable language and with lashings of questionable philosophical references (“name check classical philosopher of unclear relevance” is a popular square on the conference bingo card).

The Simple Writer is a fab little tool by Randall Munroe of xkcd fame. It encourages you to describe things using only the 1,000 most common English words. He wrote an entire book, Thing Explainer, on this principle, explaining a variety of scientific things using the simplest language possible.
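(If you’re curious how a checker like this works under the hood, the mechanic is simple enough to sketch in a few lines. Below is my own hypothetical illustration in Python – not Munroe’s actual code – assuming you’ve saved the permitted words to a plain-text file, one per line.)

```python
import re

# Hypothetical sketch of a Simple-Writer-style checker (not Munroe's implementation).
# Assumes "ten_hundred_words.txt" holds the permitted words, one per line.
with open("ten_hundred_words.txt") as f:
    allowed = {line.strip().lower() for line in f if line.strip()}

def flag_words(text):
    """Return the words in `text` that fall outside the permitted list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in allowed})

print(flag_words("Agentic contributions operationalised within socially patterned matrices"))
# Flags nearly every word - which is rather the point.
```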

In the spirit of “honestly I think theories make a lot of sense and underneath the academic-babble they’re very practical”, I decided to try and describe three theories I’ve used in my research using the Simple Writer. Results below – in each example, the first image is some actual text about the theory from papers, and the second picture is my attempt at a simple version.

I started with Normalisation Process Theory. First of all, none of those three words were allowed, which was a great start. On the whole though I think this one worked the best. I actually think this should replace the Wikipedia page for NPT which is…extravagant, let’s say, in its use of academic bluster. Agentic contributions operationalised within existing socially patterned matrices anyone?

 

[Image: Normalisation Process Theory as described in the academic papers]

[Image: my Simple Writer version of NPT]

Next was PARIHS (Promoting Action on Research Implementation in Health Services). Context, evidence and facilitation were all no-nos. You really see why he called it The Thing Explainer with this one.

[Images: PARIHS as described in the academic papers, and my Simple Writer version]

Lastly, the RE-AIM framework. Reach was allowed, but not the other four. I left in “health” in the Simple version because I think that should be allowed.

[Images: RE-AIM as described in the academic papers, and my Simple Writer version]

What do you reckon? Do the Simple Writer versions communicate what’s useful about the theory? Do they make it seem more or less valuable? Do they capture the key elements, or are they lost in translation?

One reaction I might expect is that the Simple Writer versions expose these complicated theories as “just common sense”. I’m absolutely fine with that. For me, people say something is common sense when they agree with it and think it’s obviously a good idea. They may well not have ever thought it before and it may be far from common, but common sense gets employed as a shorthand for “why would you not do it?”. I’d be delighted if people felt that way about theories.


Pencils in Space, and other fairytales about technology.

You must have heard the story. It gets rolled out at conferences and in seminars as an example of how simple, straightforward solutions are the best way to solve complex problems.

During the space race, so it goes, those clever folk at Nasa had a problem: pens don’t work in space. Astronauts need pens. Maybe they’re big doodlers. For whatever reason, this was a Problem. So, many years, many bright minds, and millions of dollars later, they finally come up with a better, more space-appropriate pen.

The Russians? They just used a pencil.

Or, at least, that’s how it goes…

Sadly, at least if you like simple-solution stories or berating Americans, the story isn’t true. Pencils don’t really solve the problem, and in fact can cause a whole load more. Bits of lead can flake off and interfere with delicate shuttle equipment. Pencils are also flammable, and fire, it seems, is a big no-no when you’re floating around the planet with only the vast unfathomable deep of space to escape to.

(Nasa also didn’t waste taxpayer money inventing the new pen. It was a private pen company that stumped up the cash.)

Two things strike me about this story. The most obvious is the lesson that deceptively simple solutions are often just that – deceptions. They ignore the complicated context in which problems happen, and underestimate the risk. Being in space turns out to be pretty complex, and the risks rather high.

The second thing that strikes me is the popularity of the story itself. I’ve heard it twice already this year, and we’re only in January. For me, this speaks to our desire to find simple solutions that bypass the cost, in time, money and expertise, of finding technology that works. It’s easy to laugh at the NHS, with its fax machines and its pagers, but it’s much harder to confront the challenging reality of helping health professionals get things done in complicated and risk-laden environments. But it’s vital that we do, or fax machines and pagers are going to stay around even longer. We need to think about where these things are supposed to work, and the consequences (and risks) of using them.

Beyond that challenge of technology implementation (understanding how things work, or fail to work, where they need to work, and for whom), there is a challenge of knowledge mobilisation. How do we make sure that our stories of what works are true, and fair? How do we debunk the myth that simple solutions can bypass complex problems? How do we fight the fairy tales?

[Image: a cat in space]

In space, no-one can hear you knock both pens and pencils off the table.

 

 
