A group of dedicated geeks, clinicians, designers, and programmers recently met up for a weekend of hacking the NHS. Sounds scary? Don't worry! A hack day is a chance for people to mix together various bits of NHS data and technology to see if they can produce interesting prototypes.
All of us used publicly available data sets and APIs - so no private or patient data - and a variety of open source tools and libraries. Between the teams we created over a dozen demos. All of them were sketches of ideas to show what is possible with open data.
What we built
Our team wanted to explore whether NHS data could be better understood using voice technology. Some people like looking at spreadsheets and some people prefer graphs and charts - but do users want NHS data through a voice assistant like Amazon's Alexa?
We did a bit of user research - mostly asking on Twitter what people would like to hear about - then started hacking.
Our demo focused on delivering 4 bits of information (see the intent sketch after this list):
- waiting times at the local A&E
- car parking information at a hospital
- what a hospital's quality rating is
- how much prescriptions cost
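In voice-skill terms, each of those questions becomes an intent that sample utterances trigger. Here's a minimal sketch of the mapping - the intent names and utterances are invented for illustration, not taken from our actual skill:

```python
# Hypothetical intent names mapped to example utterances.
# A real Alexa skill declares these in its interaction model.
INTENTS = {
    "AandEWaitingIntent": "what's the wait at A and E",
    "CarParkingIntent": "how much is parking at the hospital",
    "QualityRatingIntent": "what is the hospital's rating",
    "PrescriptionCostIntent": "how much will my prescription cost",
}
```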
Each of these had their own unique challenges.
What we discovered
Waiting time data for A&E is complicated. If you turn up with a heart attack, you'll be seen quicker than if you have a saucepan stuck on your head.
So we pivoted to ask “how many people are currently waiting in A&E?” - that's a pretty useful piece of information.
But this data isn't published anywhere! So a friend of ours built an API to expose the data for Welsh hospitals.
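Once the data exists, the fetch-and-speak pattern is simple. A sketch, assuming a made-up JSON endpoint and field name - the real API for Welsh hospitals will differ:

```python
import requests  # third-party: pip install requests

# Made-up endpoint and response shape - purely illustrative.
API_URL = "https://example.com/api/ae-waiting"

def people_waiting(hospital_id: str) -> str:
    """Turn a live waiting count into a short, speakable sentence."""
    data = requests.get(API_URL, params={"hospital": hospital_id}, timeout=5).json()
    count = data["waiting"]  # assumed field name
    return f"There are currently {count} people waiting in A&E."
```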
Car parking information was easy to find - there is an API that lets you grab the data presented on the NHS website. We couldn't find a source for how many spaces were currently available - which is what people really wanted - but we were able to get the cost of parking.
Most of those details are designed to be read on screen, so they often have HTML code embedded in them - which isn't great for voice assistants. Additionally, long sentences are really annoying when spoken by a robot. People want quick answers. Readers can easily skim text, but listeners can't skim voice!
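Here's the kind of clean-up we needed, as a sketch using only Python's standard library: strip the tags, keep the text, collapse the whitespace.

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collects text content and drops the tags, so a field written
    for the screen can be handed to a voice assistant."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_html(field: str) -> str:
    parser = TextOnly()
    parser.feed(field)
    return " ".join("".join(parser.parts).split())  # collapse whitespace

# A made-up example of a screen-oriented field:
print(strip_html("<p>Parking costs <b>&pound;2.50</b> per hour.</p>"))
# -> Parking costs £2.50 per hour.
```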
Quality ratings were easy to get thanks to the NHS API - but we stumbled onto an interesting problem. When a person says “What's the rating of Princess of Wales hospital?” - do they mean the one in Bridgend or the one in Ely?
Each hospital has a unique ID code, but it's hard to work out which code a spoken name refers to.
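One plausible approach - not what the NHS API does, just a sketch - is fuzzy matching against the list of official names, then asking a follow-up question when more than one candidate survives. The ID codes below are invented:

```python
import difflib

# Made-up name-to-ID lookup. Note the two hospitals that share
# the same spoken name.
HOSPITALS = {
    "Princess of Wales Hospital, Bridgend": "ID-0001",
    "Princess of Wales Hospital, Ely": "ID-0002",
    "University Hospital of Wales, Cardiff": "ID-0003",
}

def candidates(spoken: str) -> list[str]:
    """Return every official name close to what the user said."""
    lowered = {name.lower(): name for name in HOSPITALS}
    close = difflib.get_close_matches(spoken.lower(), list(lowered), n=3, cutoff=0.6)
    return [lowered[c] for c in close]

matches = candidates("princess of wales hospital")
if len(matches) > 1:
    # More than one plausible hospital: ask, don't guess.
    print("Did you mean " + ", or ".join(matches) + "?")
```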
Finally, prescription charges. There are lots of different exceptions and exemptions which means a person may or may not pay for their prescription.
What sounds like a simple question can turn into a complicated decision tree. This means having quite a long conversation with your voice assistant.
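A sketch of how that conversation branches - the questions below are a heavily abridged, illustrative subset of the real exemption rules:

```python
def prescription_answer() -> str:
    """Walk a (truncated, illustrative) exemption decision tree."""
    if ask("Do you live in Scotland, Wales or Northern Ireland?"):
        return "Prescriptions are free for you."
    if ask("Are you under 16, or 60 or over?"):
        return "Prescriptions are free for you."
    if ask("Do you have a medical exemption certificate?"):
        return "Prescriptions are free for you."
    return "You'll pay the standard charge per item."

def ask(question: str) -> bool:
    # In the demo this was a round trip through the voice assistant;
    # here we just read from the keyboard.
    return input(question + " (y/n) ").strip().lower().startswith("y")
```

Even this cut-down tree is three questions deep before it reaches an answer.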
What we learned
Building an interactive voice skill is pretty easy! There are lots of open source tutorials to follow.
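For a sense of scale, a minimal intent handler using the Alexa Skills Kit SDK for Python looks something like this - the intent name and spoken reply are invented:

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class AandEWaitingHandler(AbstractRequestHandler):
    """Responds to our hypothetical A&E waiting intent."""
    def can_handle(self, handler_input):
        return is_intent_name("AandEWaitingIntent")(handler_input)

    def handle(self, handler_input):
        speech = "There are currently twelve people waiting in A&E."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(AandEWaitingHandler())
lambda_handler = sb.lambda_handler()  # deployed as an AWS Lambda entry point
```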
However, creating a useful and accurate skill is quite hard. Voice is a new interaction paradigm with its own idiosyncratic behaviours.
Finally, getting hold of the data from the NHS can be a frustrating experience. There are lots of different APIs with different access requirements - and many services simply don't provide data.
2 comments
Comment by Chris posted on
Has anyone created a summary of all outputs from all the NHS Hackdays that have taken place since 2012 (when, I believe, the first one occurred)? Something similar to the spreadsheet Govcamp creates would be useful, so we can see what ideas have evolved over time (and also check for wheel reinvention): https://docs.google.com/spreadsheets/d/1S6nemSPxSLrURGigaQZFKViWBoAhalpE2f0RtZ92Fpk/edit
Comment by Terence Eden posted on
Hi Chris,
There is a summary of each event at https://nhshackday.com/events/
They list all the things created and, where possible, link to source code etc.
If you want to do something with the data, it's on GitHub at https://github.com/nhshackday/nhshackday.github.io/tree/master/content/_projects
Terence