AI and international development

‘The time has arrived for all of us – governments, industry and civil society – to consider how AI will affect our future.’    —  UN Secretary-General António Guterres

Last week, I had the privilege of visiting Restless Development to give a presentation and have a bit of a discussion about artificial intelligence (AI). This seemed a slightly unusual proposition when we were first organising it: links between the AI and development communities appeared few and far between. But it was becoming increasingly apparent that AI could be used to help solve a range of global problems, on the one hand, and that it risked further entrenching some of those problems on the other.

In light of the talk – and, conveniently, the AI for Good Global Summit that brought together a range of international institutions to discuss similar things a few weeks earlier – I thought I’d share a quick overview of my thoughts and some introductory links.

How could AI help?

I summarised the possible applications of AI to development problems as covering four main areas. AI could help with:

  • finding efficiencies in existing systems, processes and supply chains (e.g. transport systems, or energy supplies);
  • monitoring situations on the ground (e.g. public opinions on issues, or migration patterns);
  • predicting events or crises before they happen, to aid prevention or preparation (e.g. political turmoil, or natural disasters); and
  • responding more quickly and more effectively to crises when they happen (e.g. mining social media for breaking news and information before it reaches traditional news media – see the toy sketch after this list).
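
To make the last of these a little more concrete, here’s a deliberately crude sketch of the simplest possible version: scoring social media posts against a list of crisis-related keywords and flagging any that cross a threshold. The keywords, weights and threshold are entirely made up, and a real system would lean on trained language models, geolocation and human verification rather than anything this basic.

```python
# Toy sketch only: flag potentially crisis-related social media posts using
# hypothetical keyword weights. Illustrative of the idea, not of any real system.

CRISIS_KEYWORDS = {
    "flood": 2.0, "flooding": 2.0, "earthquake": 3.0, "landslide": 2.5,
    "evacuate": 1.5, "trapped": 1.5, "collapsed": 1.5, "help": 0.5,
}

def crisis_score(post: str) -> float:
    """Sum the keyword weights for words appearing in the post."""
    words = (w.strip(".,!?") for w in post.lower().split())
    return sum(CRISIS_KEYWORDS.get(w, 0.0) for w in words)

def flag_posts(posts, threshold=3.0):
    """Return the posts whose score meets the (arbitrary) threshold."""
    return [p for p in posts if crisis_score(p) >= threshold]

if __name__ == "__main__":
    sample = [
        "Lovely sunny day at the market",
        "Bridge collapsed after the flooding, people trapped, please send help",
    ]
    for post in flag_posts(sample):
        print("FLAGGED:", post)
```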

The UN’s “Global Pulse” initiative is really pushing things forward in this area, capitalising on the vast quantities of data increasingly available on almost every aspect of human life. It’s worth browsing their site for examples and ongoing projects, and taking a look at their Guide to Data Innovation for Development if you’re considering starting a project along similar lines (they’re also open to collaboration).

There is almost endless scope for growth here – particularly from a development perspective. A 2014 study found that the amount of data in the “digital universe” roughly doubles every two years. About 22% of it was deemed useful or informative in 2013, but less than 5% of that (~1% of the overall total) was actually analysed. It predicted that closer to 35% of a much larger total would be useful by 2020, and that the majority of data would be from “emerging” markets from 2017 onwards. At the moment, we’re barely scratching the surface of what could be possible.
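
For what it’s worth, the “~1%” figure is just the product of the two percentages quoted above – here’s the quick sanity check, using the numbers as stated rather than anything taken directly from the report:

```python
# Quick arithmetic behind the "~1% of the overall total" figure,
# using the percentages quoted above.
useful_share_2013 = 0.22    # share of the digital universe deemed useful in 2013
analysed_of_useful = 0.05   # upper bound on the fraction of that useful data analysed

analysed_of_total = useful_share_2013 * analysed_of_useful
print(f"Analysed share of the whole digital universe: {analysed_of_total:.1%}")
# -> 1.1%, i.e. roughly 1% of the total
```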

On a related note, there’s a lot of potential for drones to help with collecting data (satellite imagery, primarily) and responding to emergencies. This report is a decent overview of how they can be used in humanitarian action, if you’re interested.

What should we be worried about?

While AI could certainly be of assistance, there are a number of ways in which it could also hinder progress towards development goals.

Employment

There’s a lot of chat about the possible impact of AI on employment, and just as much disagreement over how many jobs will actually be lost. What is less discussed in the UK and US, though, is that the extent to which jobs might be taken by machines is likely to differ between countries. A World Bank report from last year, for instance, suggested that the percentage of jobs in developing countries that are susceptible to automation could be as high as 85% (that was for Ethiopia, with the OECD average at 57%; see p.23 of the report).

This sounds pretty terrifying, but the report does also acknowledge that this should be offset in the near term by the slower uptake of technology in much of the developing world. There are other worries, though. If, for instance, manufacturing that has typically been offshored by companies in the West can be automated increasingly cheaply, it might soon be brought back to the developed world – blocking a typical “path to prosperity” for developing countries. This is particularly likely given that companies could then avoid the ethical issues surrounding working conditions, and offer faster turnaround of orders to the domestic market.

On this reading, making sure that livelihoods in the Global South aren’t impacted too significantly by rapidly advancing AI could become a key development concern.

Inequalities, old and new

That said, we need to go beyond thinking about how to stop individuals, communities and countries (see this on the scope for widening inequalities between nations in an AI-centric world) from being left behind economically – although this more traditional form of inequality is still a fundamental problem. We now live in a world where a small clique of companies has not only accumulated huge amounts of money, but also holds astonishing volumes of data on billions of individuals and exerts an increasing influence on their thoughts and day-to-day lives. Working out how to stop these money-making organisations, which lack any democratic accountability, from abusing this power will be a key concern in the coming years and decades. I could write a lot about this, but frankly I wouldn’t do any better than Maciej Ceglowski does here (it should really be compulsory reading for pretty much everyone).

It is important to note that the market alone will not curb the power of the big tech firms. AI is a large part of the problem. Perhaps more than any other technology, it favours those that are already ahead. The companies with the most data (and the most computing power) have an astonishing advantage when it comes to innovating in the AI space. Most start-ups, however inventive, simply don’t have the capacity to do what the likes of Google, Facebook, Apple, Amazon and Microsoft do – and they increasingly run their services on platforms provided by the big names anyway (AWS being the typical example). Even if smaller companies do manage to offer a competitive service, they tend to get swallowed up by one of the big names pretty quickly. Again, there’s lots more to say here; this panel is a good starting point on the prevailing “façade of competition”.

The big tech firms seem to have good intentions (Google’s motto is “Don’t be evil” – what could possibly go wrong?), but good intentions don’t always lead to desirable outcomes (see Dave Eggers’ The Circle). And we need to acknowledge that if even their existing capabilities fall into the wrong hands, the consequences could be horrendous. We only have to look to China – with advanced facial recognition set to combine with a planned national ‘social credit’ system to perfect the surveillance state formula – for an inkling of what could be yet to come.

On Healthware

There’s a hypothetical scenario I’ve been pondering for a while. I’ve actually been trying to write a short story about it, framing it from different perspectives. But that’s taking too long, and reality is fast catching up.

In the scenario, the British government has decided that the only way of making universal free healthcare affordable is by compelling citizens to have data on their bodily health and lifestyle tracked, with behavioural changes recommended to individuals by artificially intelligent “healthware” to keep them from falling ill. The healthware learns how best to persuade people to act differently, fitting itself to individuals’ personalities to ensure maximum compliance. If people are consistently non-compliant, they have their access to free healthcare revoked.

Naturally, hospital visits are still required for genetic and particularly complex conditions, and in the wake of accidents or unexpected emergencies. But there are no more queues in GP surgeries or A&E. The number of people on medication drops to levels not seen for decades. The physical and mental health of the population soars, with higher productivity, longer life expectancy, and wellbeing to match (or better) the Scandinavians.

On the one hand, this sounds wonderful. On the other, it would herald the arrival of the sort of big state that socialist governments of the past could hardly dream of (their dreams looked more like this). The level of social control that would become possible – with our every behaviour monitored and, ultimately, made to fit a “healthy” norm – is intensely disquieting. Even more perturbing is the fact that, at least to me, this doesn’t seem particularly far-fetched.

In reality, healthware this sophisticated would come from a big tech firm before any government had even properly thought about it. I’ve posted a piece on Medium that comes at the possibility more from this angle. But I also wanted to take a few minutes to expand on why I think such a scenario is feasible, and offer a list of related things to read.

Technical feasibility

I’ve actually written before about the difficulties of applying a data-driven approach to a biological system as endlessly curious as the human body. That, though, was in the context of elite performance, and keeping someone within the bounds of reasonably good health ought to be more straightforward than turning them into an Olympian.

Naturally, it could take years for a system to be successfully trained with the sort of capacity outlined here. This is particularly the case given that the learning process would likely require real-time participants, and accordingly move only as fast as the rate at which people live and fall ill. Historical health records, along with some expert knowledge, could be used to speed up the process, but both may prove to be sub-optimal and useful only as a starting point. The concurrent analysis of the data of many, many individuals, and the pooling of the resulting knowledge (as has happened for the training of autonomous vehicles) will likely prove crucial – the more participants the better.

Eventually, a system should be sufficiently accurate for commercial roll-out. And over time it would just get better: optimising to take into account the individual quirks of your body, and benefitting from the more general findings from everyone else’s systems (perhaps attributing greater weight to data from family members and those physiologically similar to you). It could also keep abreast of the latest medical research findings (as IBM’s Watson does) in a way that would be impossible for a human, incorporating these into its predictions and recommendations to boost performance even further.
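
To give a flavour of what “pooling the resulting knowledge” might involve, here’s a toy sketch of combining locally learned model parameters across participants, with people who are physiologically similar to you weighted more heavily. Everything in it – the parameter vectors, the similarity weights, the averaging rule – is hypothetical and purely illustrative; anything real would need a proper federated learning set-up and serious privacy protections.

```python
# Toy sketch of pooling locally learned "healthware" models across individuals,
# weighting contributions by (hypothetical) physiological similarity.
import numpy as np

def pool_models(local_params, similarity):
    """Weighted average of each participant's locally learned parameter vector.

    local_params: participant id -> parameter vector (np.ndarray)
    similarity:   participant id -> similarity weight relative to the target user
    """
    total = sum(similarity.values())
    pooled = sum(similarity[pid] * params for pid, params in local_params.items())
    return pooled / total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three hypothetical participants, each with a locally learned 4-parameter model.
    local_params = {f"user{i}": rng.normal(size=4) for i in range(3)}
    # Family members / physiologically similar people get more weight.
    similarity = {"user0": 1.0, "user1": 0.4, "user2": 2.5}
    print(pool_models(local_params, similarity))
```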

The bigger problem will be that there is currently far too much missing data on almost everyone to accurately predict health outcomes. Making wearable technology as ubiquitous as phones, and developing more ways of collecting health and lifestyle data automatically so you don’t need to rely on useless humans to input it manually, will be key (Apple is attempting to do both).

Motivation

From the perspective of business, developing healthware at this level of sophistication could make some of the most powerful companies in the world even more money. If Apple could even get close to it, the Apple Watch would become a must-have – which seems ample motivation for pushing on with it as smartphone sales stagnate. Health insurers would happily make use of all that data to aid their own predictive models, and big pharma’s displeasure at a possible decline in medication levels could be offset by having healthware recommend, and automatically deliver, their drugs.

Government would also likely be supportive given the scope for relieving strain on health services. It might be that government – or health providers more generally – come to endorse, or even require, the use of this sort of technology (hence the scenario painted above). Besides, the British government seems supportive of pretty much any new way to better track people and invade their privacy, so there should be no problem on that front.

Another force that might drive the development of this sort of technology is the longevity hype that’s apparently consuming Silicon Valley. You could almost imagine this sort of healthware being pursued as a vanity project by one of any number of tech-entrepreneurs-turned-billionaires looking for a data-driven approach to living forever, regardless of whether or not it would end up being profitable.

The patient-consumer

What about, well, normal people? As a starting point, a 2016 survey of American healthcare “consumers” found that a quarter own wearable tech, 88% have used some sort of “digital health tool”, and 77% are willing to share their health data with their doctor to improve care – with 60% happy to give that data to Google.

The number of people buying wearables will continue to grow (likely driven more by marketing campaigns and the waning allure of near-identical mobile phones than anything else), as will the adoption of digital health tools as they become ever more useful. The figures on willingness to share health data may not sound especially high, but they’re ample for an initial phase of developing sophisticated predictive healthware – and if any system proved to be effective, they’d likely go up.

Americans are, admittedly, much further down this road than the rest of the world. But given that so many of our recent technological trends (e.g. personal computers, smartphones) have come from the US, and have been driven by American companies, it wouldn’t be an enormous surprise if the rest of the “developed” world soon caught up.

OK, enough. A few things to read / listen to that haven’t been linked to in either this or the Medium piece:

2017 Internet Trends Report – Kleiner Perkins (Mary Meeker)   >   See slides 288-319 for a range of pointers as to where healthcare might be going. (The rest is interesting, too.)

Self-regulation in Sensor Society – Natasha Schüll   >   Cool talk, available as a podcast from Data & Society, on the softer, fuzzier form of tracking represented by wearable tech (“little mother”, as opposed to the “big brother” of CCTV etc.) and its implications for individual autonomy and selfhood.

Some decent long-ish reads from a range of publications: this from the FT (paywalled), which is from a while ago and probably the first thing I remember reading on the subject, focusing mainly on Babylon; this from the Atlantic, which is even older (2013!) and concentrates on IBM’s Watson (which is still going strong in the healthcare game); and this from Newsweek International last Friday, which has more of an American bent but covers a load of interesting startups I haven’t really discussed here.

Networks of Control – Wolfie Christl and Sarah Spiekermann   >   A longer, broader work focusing on the collection and use of personal data by businesses working in a range of areas. Considers whether this corporate surveillance can enable businesses to control consumer behaviour – which is relevant here.

Intervention Symposium: “Algorithmic Governance” – org. Jeremy Crampton and Andrea Miller   >   A bit academic, but some interesting thoughts here and in the collected essays giving some background to the notion of algorithmic control and its implications.

As usual, I’m always keen for cool new stuff to read, so hmu if anything jumps to mind!

Automation etc.

I mentioned in my 2016 reading round-up that I hadn’t yet had a crack at Martin Ford’s The Rise of the Robots. I finally put that right the other week, and the book prompted many, many thoughts. Which is always a good thing.

Quite a few of those thoughts were objections. I generally agree with Ford that the technological progress we’re seeing today far surpasses anything that has come before, and that the mass automation of labour is a feasible possibility as a result. But I don’t see it as inevitable. For one thing, a backlash against automation could see it rolled back, rather than accelerated, before too long. I wrote a short piece explaining the thinking behind that, which you can read here.

That piece was, necessarily, a massive over-simplification of the lie of the land. You can’t really cover social change on this scale in 750 words. My purpose with it was more to suggest an alternative way of thinking about how things might pan out, rather than predicting what will happen.

One of the most interesting dynamics that I failed to cover was how all of this fits into the global / international economy. Some of the best bits of Ford’s book, I thought, were actually about offshoring, and how it could seriously shake up the world of work (if it isn’t doing so already) before automation does. We think about manufacturing going abroad, but one of the things that technology really has changed is the ability to do non-manual work remotely, arguably making the pool of potential applicants for, say, a software engineering role, global. Why limit yourself to UK graduates when you could take your pick of the best minds in Asia, or Africa, or anywhere else?

The question of how different governments shape their policies in light of and in competition with those of other governments will also be fascinating. If you’re elected on a platform of rolling back automation (as I suggest might soon happen in the piece), and you force companies to hire human workers over computers while other countries are actively promoting automation, those companies will either move elsewhere or risk becoming uncompetitive in a global market. If they move elsewhere, you end up with the same unemployment problem you would have had anyway. If they stay, domestic consumers will probably look abroad for products and services provided more cheaply and efficiently – so you’ll have maintained that vital consumer purchasing power only to reap no rewards. Unless you close yourself off from the world of international trade, or implement very stringent tariffs and what have you – but then you risk your country becoming irrelevant on the world stage (the Trump presidency should make a very interesting case study…). At which point, universal basic income (UBI) might seem like it was the better idea after all. Although then what would you have done about all the mental health problems and social issues arising in a population of bored, unhappy, confused and unfulfilled humans?

In short, it’s complicated. My piece, and these ramblings, don’t even scratch the surface.

The other complicating factor will be the environment. Ford mentions climate change at the start and end of the book as something that could further exacerbate the problems of mass automation. What he doesn’t do, however, is consider the ways in which climate change might actually impose a natural limit on automation. Where is the energy to run all these robots going to come from? But this is something I really want to write about separately. So I’ll leave it there for now.

Any thoughts / comments / objections, fire away. If you’re interested (and if you’ve got this far?), I thought I’d include a very brief reading list with some interesting stuff that might be worth referring to.

 

The Rise of the Robots – Martin Ford   >   Obviously. It’s actually a pretty easy and entertaining read, albeit slightly repetitive at times.

The Future of Employment: How susceptible are jobs to computerisation? – Carl B Frey & Michael Osborne   >   This now-ubiquitous report from 2013, which estimated that 47% of US jobs are susceptible to ‘computerisation’, underpins Ford’s argument. The Oxford Martin School have a load of other interesting publications on technology and unemployment which are also worth checking out.

World Development Report 2016: Digital Dividends – The World Bank   >   In some ways, this could be seen as a follow-up to the above, but it takes a broader approach to the impact of technology and a much more international perspective. Very interesting on the topic of employment, though, particularly with regard to the susceptibility of jobs in the developing world to automation (see p.23 for a quick graphical overview).

New Robot Strategy – The Headquarters for Japan’s Economic Revitalization   >   A detailed plan of action for the integration of robots into multiple levels of Japanese society. Very interesting.

The Second Machine Age – Erik Brynjolfsson & Andrew McAfee   >   I wasn’t overwhelmed when I read this last year, but have been dipping back into it over the past week and actually think it’s very thoughtful in terms of its policy / long-term recommendations. Worth a look.

Please let me know if you’ve read anything good!