‘The time has arrived for all of us – governments, industry and civil society – to consider how AI will affect our future.’ — UN Secretary General, António Guterres
Last week, I had the privilege of visiting Restless Development to give a presentation and have a bit of a discussion about artificial intelligence (AI). This seemed a slightly unusual proposition when we were first organising it: links between the AI and development communities appeared few and far between. But it was becoming increasingly apparent that AI could, on the one hand, help solve a range of global problems, and, on the other, risk further entrenching some of them.
In light of the talk – and, conveniently, the AI for Good Global Summit that brought together a range of international institutions to discuss similar things a few weeks earlier – I thought I’d share a quick overview of my thoughts and some introductory links.
How could AI help?
I summarised the possible applications of AI to development problems as covering four main areas. AI could help with:
- finding efficiencies in existing systems, processes and supply chains (e.g. transport systems, or energy supplies);
- monitoring situations on the ground (e.g. public opinions on issues, or migration patterns);
- predicting events or crises before they happen, to aid prevention or preparation (e.g. political turmoil, or natural disasters); and
- responding more quickly and more effectively to crises when they happen (e.g. mining social media for breaking news and information before it reaches traditional news media).
The UN’s “Global Pulse” initiative is really pushing things forward in this area, capitalising on the vast quantities of data increasingly available on almost every aspect of human life. It’s worth browsing their site for examples and ongoing projects, and taking a look at their Guide to Data Innovation for Development if you’re considering starting a project along similar lines (they’re also open to collaboration).
There is almost endless scope for growth here – particularly from a development perspective. A 2014 study found that the amount of data in the “digital universe” roughly doubles every two years. About 22% of it was deemed useful or informative in 2013, but less than 5% of that (~1% of the overall total) was actually analysed. It predicted that closer to 35% of a much larger total would be useful by 2020, and that the majority of data would be from “emerging” markets from 2017 onwards. At the moment, we’re barely scratching the surface of what could be possible.
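The figures above are easy to sanity-check. A quick back-of-the-envelope sketch (taking the 2014 study's numbers at face value; the exact percentages are theirs, not mine):

```python
# Figures quoted from the 2014 "digital universe" study (2013 baseline).
useful_fraction = 0.22       # share of all data deemed useful/informative
analysed_of_useful = 0.05    # share of that useful data actually analysed

# Fraction of the overall total that was analysed: roughly 1%.
analysed_overall = useful_fraction * analysed_of_useful
print(f"analysed: {analysed_overall:.1%} of all data")

# "Doubles every two years": relative size of the digital universe
# after n years of growth from the 2013 baseline.
years = 7  # 2013 -> 2020
growth = 2 ** (years / 2)
print(f"~{growth:.0f}x more data by 2020")
```

So even on the study's own numbers, only around one part in a hundred of an exponentially growing pile was being looked at, which is what makes the "barely scratching the surface" point below.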
On a related note, there’s a lot of potential for drones to help with collecting data (satellite imagery, primarily) and responding to emergencies. This report is a decent overview of how they can be used in humanitarian action, if you’re interested.
What should we be worried about?
While AI could certainly be of assistance, there are a number of areas in which AI could also hinder progress towards development goals.
There’s a lot of chat about the possible impact of AI on employment, and just as much disagreement over how many jobs will actually be lost. What is less discussed in the UK and US, though, is that the extent to which jobs might be taken by machines is likely to differ between countries. A World Bank report from last year, for instance, suggested that the percentage of jobs in developing countries that are susceptible to automation could be as high as 85% (that was for Ethiopia, with the OECD average at 57%; see p.23 of the report).
This sounds pretty terrifying, but the report does also acknowledge that this should be offset in the near term by the slower uptake of technology in much of the developing world. There are other worries, though. If, for instance, manufacturing that has typically been offshored by companies in the West can be automated increasingly cheaply, it might soon be brought back to the developed world – blocking a typical “path to prosperity” for developing countries. This is particularly likely given that companies can then avoid the ethical issues surrounding working conditions and offer faster turnaround of orders to the domestic market.
On this reading, making sure that livelihoods in the Global South aren’t impacted too significantly by rapidly advancing AI could become a key development concern.
Inequalities, old and new
That said, we need to go beyond thinking about how to stop individuals, communities and countries from being left behind economically (see this on the scope for widening inequalities between nations in an AI-centric world) – although this more traditional form of inequality is still a fundamental problem. We now live in a world where a small clique of companies has not only accumulated huge amounts of money, but where those companies hold astonishing volumes of data on billions of individuals, and exert an increasing influence on their thoughts and day-to-day lives. Working out how to make sure these money-making organisations, which lack any democratic accountability, don’t abuse this power will be a key concern in the coming years and decades. I could write a lot about this, but frankly I wouldn’t do any better than Maciej Ceglowski does here (should really be compulsory reading for pretty much everyone).
It is important to note that leaving the task of curbing the power of big tech firms to the market will not be enough. AI is a large part of the problem. Perhaps more than any other technology, it favours those that are already ahead. The companies with the most data (and the most computing power) have an astonishing advantage when it comes to innovating in the AI space. Most start-ups, however inventive, simply don’t have the capacity to do what the likes of Google, Facebook, Apple, Amazon and Microsoft do – and they increasingly run their services on platforms provided by the big names anyway (AWS being the typical example). Even if smaller companies do manage to offer a competitive service, they tend to get swallowed up by one of the big names pretty quickly. Again, there’s lots more to say here; this panel is a good starting point on the prevailing “façade of competition”.
The big tech firms seem to have good intentions (Google’s motto is “Don’t be evil” – what could possibly go wrong?), but good intentions don’t always lead to desirable outcomes (see Dave Eggers’ The Circle). And we need to acknowledge that if even their existing capabilities fall into the wrong hands, the consequences could be horrendous. We only have to look to China – with advanced facial recognition set to combine with a planned national ‘social credit’ system to perfect the surveillance state formula – for an inkling of what could be yet to come.