At the beginning of the year, many of us in the Charity sector were becoming increasingly aware of the growing interest in the use of Artificial Intelligence (AI). And probably like many of you, I approached it with both interest and trepidation!
On the one hand, as a charity (or indeed any values-driven organisation), there are ethical and safeguarding issues to consider. On the other, the opportunities appear significant.
AI is described as ‘the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages’. Whilst not necessarily a new technology, more and more organisations are identifying an opportunity to use AI in the delivery of products and services, especially the concept of a ‘chat-bot’.
A Chat-Bot is a computer program designed to simulate conversation with human users, typically over the Internet. Usually, it will route the user to an end goal - a product, service or piece of information.
In their most complex form, some examples include:
- Babylon: the company behind the NHS GP at Hand app, says its follow-up Chat-Bot software achieves medical exam scores on a par with human doctors*
- Woebot: a Facebook messenger based Chat-Bot developed with Stanford University, that checks in with the user daily and provides guided lessons on CBT, Mindfulness and other wellbeing support**
Show me the money
Many Chat-Bot developments require significant investment, often in partnership with an academic institution. However, new software entering the commercial market is enabling those with even the smallest budget to develop less complex solutions.
These present the user with a series of responses to choose from, and then reply based on that choice. Essentially, the user is guided through a pre-mapped set of scenarios, each with a pre-prepared response.
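To make the routed approach concrete, here is a minimal sketch in Python. The conversation map, node names and answers are all hypothetical examples (loosely modelled on an HR helper), not a real product's configuration: each node either offers a fixed set of options leading to further nodes, or holds a pre-prepared answer.

```python
# A hypothetical routed-conversation map. Branching nodes list options;
# leaf nodes hold a pre-prepared answer.
CONVERSATION = {
    "start": {
        "prompt": "Hi! What can I help you with?",
        "options": {"Password reset": "password", "HR policies": "policy"},
    },
    "password": {"answer": "Go to the login page and click 'Forgotten password'."},
    "policy": {
        "prompt": "Which policy are you interested in?",
        "options": {"Annual leave": "leave", "Expenses": "expenses"},
    },
    "leave": {"answer": "The annual leave policy is on the intranet under HR > Policies."},
    "expenses": {"answer": "Expense claims are covered in the finance handbook."},
}

def route(node_key, choices):
    """Walk the conversation map, consuming one user choice per branching node."""
    node = CONVERSATION[node_key]
    for choice in choices:
        if "answer" in node:
            break  # already reached a pre-prepared answer
        node = CONVERSATION[node["options"][choice]]
    return node.get("answer")

# Example: a user asking about annual leave
print(route("start", ["HR policies", "Annual leave"]))
```

Because every path through the map is authored in advance, the Bot can never say anything it wasn't explicitly given - which is exactly what makes this approach cheap and safe, and also what limits it.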
At the other end of the cost spectrum, the Chat-Bot will ask a question and enable the user to respond in free text. It will then access data in an attempt to recognise and interpret this response, before either asking further questions to clarify, or compiling a unique answer.
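The free-text approach can be sketched in miniature too. The example below substitutes simple keyword matching for the natural-language processing a real product would use, and the intents and answers are invented for illustration: the Bot scores each known topic against the user's words, answers when it recognises one, and asks for clarification when it doesn't.

```python
# Hypothetical intents: each has trigger keywords and a pre-written answer.
INTENTS = {
    "password": {
        "keywords": {"password", "login", "reset"},
        "answer": "You can reset your password from the login page.",
    },
    "leave": {
        "keywords": {"holiday", "leave", "vacation"},
        "answer": "Annual leave requests go through your line manager.",
    },
}

def reply(message):
    """Match free text against known intents; ask to clarify if none match."""
    words = set(message.lower().split())
    # Score each intent by how many of its keywords appear in the message.
    scores = {name: len(words & intent["keywords"]) for name, intent in INTENTS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return "Sorry, I didn't catch that - could you rephrase?"
    return INTENTS[best]["answer"]

print(reply("how do I reset my password"))
```

Even this toy version shows where the cost goes: the hard part is not choosing an answer but reliably interpreting what the user meant, which is why free-text Bots need far more data and development effort.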
It’s not difficult to think of potential applications for this technology, whichever end of the spectrum you look at. However, both come with the need to consider legal and ethical implications early in their development, such as:
- Article 22 of the GDPR gives individuals the right not to be subject to a decision based solely on automated processing***
- Would use of AI free up staff to focus on more highly skilled tasks, and/or render parts of a workforce obsolete?
- If a Chat-Bot is used to support human wellbeing, how do we implement appropriate safeguards?
For us, it was important to start with something simple!
Our first foray
We decided to embark on a small internal trial. A chance conversation led us to work with our own HR function to develop a simple Chat-Bot to answer those every-day questions; ‘How do I change my password?’, ‘What is the policy on x, y or z?’ etc.
Sited on our intranet page, ‘Al-Bot’ was used around 50 times to respond to staff queries using a very simple routed conversation; where the user chooses from a series of options rather than entering free text for the Bot to interpret.
Beyond technical performance and design, this small test told us a lot about how users interacted with the Chat-Bot and how best to structure the conversation to get the best result. 80% of those who used Al-Bot said it would make them less likely to contact HR directly with minor day-to-day queries.
Taking all we’d learnt, both internally and externally, the next step has been to apply it to developing a new externally facing product.