“AI: what will it mean?,” the Prime Minister recently asked the United Nations. “Helpful robots washing and caring for an ageing population? Or pink-eyed terminators sent back from the future to cull the human race?”
Well, likely more the former than the latter, thankfully. But at the time of writing, both appear on the fanciful side. Ask any tech journalist about the most common PR pitches in their inbox, and businesses trying to hook onto artificial intelligence (AI) come second only to implausible blockchain fantasies.
Yet behind the scenes, AI is making its mark. It’s all around us, making small but meaningful differences, and those differences will only grow in the not-so-distant future. So how will that impact the third sector?
The limited present
In most instances, AI is seen and not heard – which is to say you’ll feel the benefit without seeing the working. It’s in cars, video games, ecommerce software and international finance, typically in the form of algorithms quietly making decisions on companies’ behalf based on learned behaviour of what works best. At its most obvious, it’s there in your phone and speakers: if you’ve ever asked Alexa or Google Assistant a question, you’re benefiting from artificial intelligence – albeit one that feels like a dunce compared to the current organic variant.
That means that, at the moment, examples of AI’s use in charity tend to be either invisible – analysing newsletter open rates for prime sending times, for example – or visible but limited in scope.
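At its simplest, that “invisible” open-rate analysis reduces to counting opens per send hour and picking the winner. Here’s a minimal sketch in Python – the data and the function name are hypothetical, and a real email platform would do far more (per-subscriber modelling, time zones, and so on):

```python
from collections import defaultdict

def best_send_hour(events):
    """Pick the hour of day with the highest newsletter open rate.

    `events` is a list of (hour_sent, was_opened) pairs -- hypothetical
    data standing in for what an email platform's reports would give you.
    """
    sent = defaultdict(int)
    opened = defaultdict(int)
    for hour, was_opened in events:
        sent[hour] += 1
        if was_opened:
            opened[hour] += 1
    # Rank hours by open rate; earlier hour wins ties for determinism.
    return max(sent, key=lambda h: (opened[h] / sent[h], -h))

events = [(9, True), (9, True), (9, False),
          (14, True), (14, False), (14, False)]
print(best_send_hour(events))  # 9 -- a 2/3 open rate beats 1/3
```

The point isn’t the arithmetic but the pattern: the charity never “sees” the AI, only the uptick in opens.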
Chatbots are often cited as an example of basic AI, but more often than not they’re closer to something pre-programmed – they don’t think for themselves, but have set paths through dialogue trees. WaterAid’s “Talk to Sellu” campaign allowed potential donors to ‘talk’ to a bot purporting to be someone who would benefit directly from the charity’s support, for example. Mencap has something similar on its website where you can talk to Aeren and find out about life with learning disabilities.
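To see why these bots are pre-programmed rather than intelligent, it helps to look at the shape of a dialogue tree: each node holds the bot’s line and a fixed set of replies the visitor can pick from. The sketch below is purely illustrative – the structure, wording and function are invented for this example, not taken from WaterAid’s or Mencap’s actual implementations:

```python
# A minimal dialogue tree: nothing is learned; the "conversation"
# simply walks preset paths from node to node.
TREE = {
    "start": {
        "say": "Hello! What would you like to know?",
        "options": {"daily life": "life", "how to help": "help"},
    },
    "life": {"say": "A typical day starts with a long walk for water.",
             "options": {}},
    "help": {"say": "Donations fund wells close to the village.",
             "options": {}},
}

def respond(node_key, choice=None):
    """Return (next_node, bot_line) for a visitor's choice."""
    node = TREE[node_key]
    if choice is None:
        return node_key, node["say"]
    # Unrecognised input just keeps the visitor at the current node.
    next_key = node["options"].get(choice, node_key)
    return next_key, TREE[next_key]["say"]

key, line = respond("start")
key, line = respond(key, "how to help")
print(line)  # "Donations fund wells close to the village."
```

Every possible exchange is written out in advance – which is exactly why such bots feel scripted the moment you stray off the expected path.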
They’re cute ideas, but hardly the stuff of sci-fi made reality. Ida was slightly more advanced, and would offer advice on how to help the cause of your choice based on location, passions and how you wanted to help – but at the time of writing doesn’t seem to be replying to messages.
Microsoft has another interesting case study on its blog, where it teamed up with The Children’s Society to offer real-time translation to refugees and migrants via smartphone, rather than needing to wait for an overworked translator to become available.
Finally, while it’s not exactly charity use, Amazon’s recent addition of political donations via Alexa may give you an idea of where things could go. It may seem more fiddly than donating via a website for most people, but for those living with visual impairment, Alexa and Google Assistant can be a life-changer, as Amazon’s recent advertising campaign with the RNIB shows.
Prepare for rapid change… and the challenges that come with it
That’s today, but it’s important to reflect on how fast these things move. Futurist Ray Kurzweil calls this technological advancement the “Law of Accelerating Returns” and estimates that humanity’s advancements between 2000 and 2014 were equal to those achieved across the entire 20th century. And it’ll get faster, with his original estimate betting that we’d manage the same again by 2021. His current bet is that AI will be smarter than humans by 2045, so enjoy flesh-and-blood dominance while it lasts.
Even today, AI is better at some things. Back in 2017 I interviewed the CEO of Saffron Technology, who gave me a real-world example of this: spotting the difference between restrictive cardiomyopathy and constrictive pericarditis. Studying two echocardiograms, doctors would typically make the right diagnosis between 50 and 75% of the time using seven attributes. An AI trained on 10,000 was able to raise the rate of correct diagnoses to 96% in just two months. In other words, there are patterns in data that artificial intelligence can spot that humans miss. And that applies to things well beyond medicine.
With that in mind, we need to be wary of the risks as well as the opportunities. It’s no coincidence that some nonprofits exist purely to deal with AI’s challenges.
First there’s the apocalyptic vision espoused by the likes of Elon Musk and Stephen Hawking. The fear here is that once AI is more intelligent than humans, there’s no way back. As Wait But Why puts it: “a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans.” Not hugely flattering, but the comparison is a fair one: a super-intelligent AI would be able to see and do things that humans simply can’t begin to comprehend. And that feels risky if it decides humanity’s interests shouldn’t be the priority.
All very scary or exciting depending on your point of view, but there are other challenges before we get to that point, and none more significant than human biases. Because artificial intelligence has to learn via human examples, there’s a real danger of imposing our own mistakes and prejudices onto AI. There are plenty of examples of this: the COMPAS algorithm, used to estimate the likelihood of criminal reoffending, was found to be more pessimistic about black defendants than white ones, for example, and even search engines aren’t immune from racial prejudice.
Then there are more mundane problems like privacy. To learn effectively, AI needs access to a whole lot of data, and said data could potentially be lost or used against you. Laws can be put in place to prevent this, but that can then slow progress, making it a difficult catch-22.
But for all of these challenges, there are real opportunities here too even in its current seen-but-not-heard form. As the Charities Aid Foundation paper Machine-Made Goods: Charities, Philanthropy and Artificial Intelligence says: “If AI has anything like the impact that many are predicting, it will have profound implications for civil society and the work of charities.”
It’s a wide-reaching paper, covering the challenges and impact. But perhaps the most interesting idea is how donations could be suggested based on both peer interactions and urgency of the cause (based on machine learning of data from social and environmental needs). “This would enable identification of where the most pressing needs were at any given time, as well as the most effective ways of addressing those needs through philanthropy, and thus allow a rational matching of supply and demand,” the paper explains. It even goes as far as suggesting that donations could be automated as we become increasingly comfortable with letting AI handle our key decisions – something it accepts might sound “absurd” to most people in the here and now.
In short, there are a lot of conversations to be had here, and in some ways it all still feels very speculative. But given the Law of Accelerating Returns I mentioned earlier, perhaps there is no such time as “too early” when discussing how AI will impact the causes most important to us.