Once upon a time, in a galaxy far, far away, I wrote a novel called Pandora’s Brain.
I’ve just finished the sequel, Pandora’s Oracle, and review copies are available – free, gratis, and for nothing. Let me know if you would like a copy – PDF, or a MOBI file for your Kindle.
Pandora’s Brain was published in 2014. I wrote the first draft of a sequel the following year, but I wasn’t happy with it, so it sat alone inside a computer, un-read and un-loved. I went on to write a series of non-fiction books about AI. I decided I was probably a writer of non-fiction, and not of fiction.
About a year ago I was introduced to a writing coach. He was enormously helpful. He showed me how to structure a novel, and then worked with me painstakingly, chapter by chapter, line by line, as I completely re-wrote the book.
I think Pandora’s Oracle is now pretty damn good. I hope you will think so too.
Pandora’s Oracle is a sequel to Pandora’s Brain, but it works as a stand-alone novel. You don’t have to read Pandora’s Brain first. (Although if you haven’t, I hope you will.)
If you would like a free review copy of Pandora’s Oracle, just send your name and email address to cccalum at gmail dot com, and specify whether you would like a PDF copy or a MOBI file for your Kindle.
We’re all wondering how to survive the virus: how to stay alive, and also solvent.
Assuming we manage that, what will be its lasting impacts?
1. Appreciation of exponentials
The rising death tolls in many countries have been shocking to watch. Many people are getting their first up-close-and-personal view of the astonishing power of exponential growth. We have seen it for decades in the dramatic growth of computing power described by Moore’s Law, but like the mythical boiling frog in the saucepan (it really is a myth: frogs are not that daft), we acclimatise to improvements on that timescale, and take them for granted.
Exponential growth accelerates, and it is going to transform our lives in amazing ways over the coming years. The more we all understand the rate of change that is coming, the better our chance of responding intelligently to the challenges it will pose.
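To make the power of exponentials concrete, here is a minimal sketch in Python, using purely illustrative numbers (not real epidemiological data): a quantity that doubles every three days quickly dwarfs one that grows by the same fixed amount each day.

```python
# Illustrative comparison of exponential vs. linear growth.
# Hypothetical numbers only: 100 initial cases, doubling every 3 days
# (exponential) versus 100 new cases per day (linear).
START = 100
DOUBLING_DAYS = 3

for day in (0, 9, 18, 30):
    exponential = START * 2 ** (day / DOUBLING_DAYS)
    linear = START + 100 * day
    print(f"day {day:2d}: exponential {exponential:9,.0f} | linear {linear:6,d}")
```

By day 30 the doubling process has produced over a hundred thousand cases while the linear one is barely past three thousand. Note that the linear process is actually ahead at day 9 – the crossover happens quietly, which is exactly why exponentials catch people out.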
2. A closer look at UBI
Proponents of Universal Basic Income argue that it is an idea whose time has come. And indeed, governments everywhere are unveiling radical packages of economic assistance to keep their citizens fed and housed, and to keep businesses afloat so that they can resume trading once the crisis is over.
But the flaws of UBI mean it is unlikely to be implemented anywhere during this crisis. People have different needs and liabilities, so an identical payment for everyone is inappropriate. And there is no point wasting funds on Mark Zuckerberg or Rupert Murdoch. We may well learn lessons from this nightmare which will serve us well if and when technological unemployment heaves into view – probably two or three decades from now. UBI is a genuine attempt to answer the question of how to keep everyone alive in that situation, but it is looking less and less like a silver bullet.
Instead, governments are exploring temporary measures to shore up and enhance their welfare systems, and to keep companies from laying off their staff. If we are smart, we will learn what works, and what does not. During our long journey towards technological unemployment, we will experience increased employment Churn, and the most successful measures deployed now will come in handy then.
3. More digital nomads
Entire populations are confined to their homes, but many organisations continue to function. People are working from home and millions of meetings are being held online. Apart from Netflix, the teleconferencing platform Zoom may be enjoying the biggest silver lining in this dark cloud. Many people are finding working and meeting remotely both efficient and congenial, saving both time and money. Not all meetings are best conducted online: conferences and festivals are being postponed rather than cancelled. But many of the people who find there is rarely any reason to be in the office may join the ranks of the digital nomads, using Airbnb and home exchanges to change location every few weeks, and see the world as they work.
4. Telemedicine and AI triage
Obtaining an appointment, travelling to the surgery, and hanging around in the waiting room is a pain. Especially if you are already sick, and would prefer to be tucked up in bed. Asking the doctor for a diagnosis by teleconference is a much better solution in many cases, and it exposes doctors and their staff to fewer contagious patients. The virus will make this desirable change a necessity. People will invest in devices and apps to take measurements to inform the remote consultations.
By making them easier, telemedicine will increase the demand for consultations, and this in turn will boost the acceptability and use of AI triage.
5. Acceptance of automation
As well as saving lives and money, self-driving cars and automated delivery services will help prevent or slow down the spread of communicable diseases. Likewise automated checkouts in shops. There is a certain amount of resistance to these developments at the moment, and the present crisis could well diminish that.
6. Acceptance of government
In 1986, Ronald Reagan said “The nine most terrifying words in the English language are: I’m from the government and I’m here to help.” This attitude still prevails on the political Right, and cynicism about the public sector is common across the spectrum. But it turns out there are problems which only government is equipped to handle, and the virus is certainly one of them. The ideological resistance to government intervention seems to have delayed lockdown measures in the Anglo-Saxon countries, which is likely to increase their relative casualty rates.
It is obviously important that government intervention is intelligent and efficient. At the time of writing it looks as though Asian governments have shown themselves more capable, regardless of whether they are dictatorships or democracies. Relentless testing and fast isolation there is keeping their casualty rates low, although the relative youth of their populations is also a factor.
7. Acceptance of experts, and diversity
“We’ve had enough of experts”, quipped Michael Gove in June 2016. We are rediscovering their value. Cynics have commented that last year only one in a million people were epidemiologists, whereas today at least half of us are – at least on Twitter. Nevertheless, we are all hungry to hear what the genuine experts have to say.
A more subtle point is that experts do not agree – and nor should they. “We are following the science” suggests that there is one settled view on complicated matters. There is not. Religious devotion to one conventional wisdom can be dangerous in science, as elsewhere. Sorting out the wheat from the chaff in a debate is not easy: it takes time, effort, and healthy scepticism. A basic understanding of statistics is a pre-requisite for informed citizenship.
8. Physical distancing, not social distancing
While most of us are to a greater or lesser degree self-isolating, anecdotal evidence suggests that we are talking to each other – to friends and family – more than ever. This is a good thing, and let’s hope the habit persists.
9. Less polarisation
There can be no doubt that we are all in this together. The virus does not discriminate between Brexiteer and Remainer (except by their age), or between the supporters and opponents of Trump. Could the rabid polarisation of political debate introduced by the Tea Party movement in reaction to Obama’s election be soothed by a common enemy? Sadly, judging by the discussions raging on Twitter, it looks unlikely.
10. From Chernobyl to Suez
It is pretty clear that the virus originated in a live animal market in Wuhan, not in a US military lab. In the early days, various levels of Chinese government covered it up and lied about it. If the rest of the world had reacted quickly and responsibly, this epidemic would have been China’s Chernobyl – a disaster aggravated by official incompetence and malpractice.
But the rest of the world did not do that. The widespread perception that Trump downplayed the significance of the virus for weeks, and proceeded to lie about his behaviour, has deprived the world of its natural leadership through the crisis. Furthermore, Western governments have proved less able than those in Asia to take the necessary steps to mitigate the impacts of the virus.
The Suez crisis in 1956 exposed Britain’s precipitate decline to the world, and to itself. The Economist magazine has suggested that the virus could become America’s Suez. China is assiduously burnishing its philanthropic credentials, sharing data and distributing masks and medicines. America still has the world’s largest and most innovative economy, and by far its most powerful armed forces. But depending on what happens next, its management of this crisis could devastate its global influence.
Maybe this is the real beginning of the Asian century.
This article first appeared in Forbes
Technological unemployment and economists
The term “technological unemployment” was popularised in the 1930s by the celebrated economist John Maynard Keynes. Fifty years later, another renowned economist called Wassily Leontief warned that jobs for humans might follow the same path that jobs for horses did in the early 20th century. So the idea has a respectable economic heritage, but economists are still arguing about whether it will actually happen.
The latest contribution comes from Daniel Susskind, a member of an unreasonably talented family of lawyers, economists and academics. His economic credentials are strong: previously an adviser at Number 10, he is now a fellow at Balliol College, Oxford. Susskind is on the side of those who think that technological unemployment is very likely to happen in a matter of decades. He does not attempt a watertight proof, but he helps to clarify how economists should think about the issue.
Unlike many commentators, Susskind goes beyond diagnosis and into prognosis and prescription. This is commendable, and although I disagree with where he ends up, the journey is important because it is vital that more thinking is done on this subject. The book is well-written, and easy to digest.
Susskind is not a technological determinist: he speculates that if horses had the vote, their fate might have been different. He describes himself instead as a technological realist, and thinks that our room for manoeuvre is constrained. We can’t escape the fact that we will build more and more capable machines. Our challenge is not to try to stop this, but to work out how to flourish anyway.
The substitution force and the complementary force
The book provides a clear and helpful discussion of the two main economic forces that determine whether there is technological unemployment: the substitution force and the complementary force.
The substitution force is straightforward: machines replace horses and humans in jobs if they are cheaper, better, and/or faster. In 1915 there were 21 million horses labouring away in America, and the US horse population today is two million.
Despite numerous rounds of automation (mostly mechanisation so far) humans have not been ejected from the workplace, and many developed economies are close to full employment. This is because of the complementary force, which has three effects: the productivity effect, the bigger pie effect, and the changing pie effect.
The productivity effect is when automation eliminates some jobs, but makes other workers more productive. Computer-aided design (CAD) has reduced employment for draughtsmen, but it has also enabled architects to work much faster, and design more complex, efficient and elegant buildings.
The bigger pie effect is clearly visible in the economic history of the USA: its GDP was 15,000 times higher in 2000 than it was in 1700. This means more wealth, more demand, and more jobs.
The changing pie effect is seen in the shift in economies and employment from farms to factories, and then to offices.
ALM and ATM
In 2003, the economists David Autor, Frank Levy, and Richard Murnane gave their names to the ALM hypothesis, which provides more detail about how the substitution effect works. It points out that jobs are not monolithic, but composed of tasks. Automation replaces tasks which are routine. It is not always obvious which tasks are routine, and they are certainly not restricted to low-paid, blue-collar jobs. The Luddites who smashed looms in the early 19th century were not unskilled labourers, but artisans. Their jobs were being de-skilled by machines.
Today, lots of office jobs involve routine tasks, and Bank of England Governor Mark Carney has warned of an impending “massacre of the Dilberts”. Many economists think that since the 1980s, automation has hollowed out middle-range jobs, leaving untouched both the high-skilled cognitive jobs and the manual jobs. The manual jobs are hard to automate because of Moravec’s paradox: reasoning, which is hard for humans, requires very little computation, while sensorimotor skills, which come easily to human adults, require enormous computational resources.
David Autor has argued for years that the complementary effect means that technological unemployment is not something to worry about for the foreseeable future. In 2016 he gave an engaging TED talk in which he argued that when ATMs automated the jobs of bank tellers, the number of human tellers actually rose, because the ATMs enabled banks to open more branches, and the tellers carried out more value-added tasks.
It is a pity that Susskind repeats this story, because it is almost certainly untrue. The real reason the number of branches increased was a piece of financial deregulation, the Riegle-Neal Act of 1994, which allowed more banks to operate across state borders. This explains why the number of branches only increased in the USA, and not, for example, in Europe.
The Big Bang in AI, in 2012
In 2012 there was a Big Bang in artificial intelligence. Access to more data and more powerful machines enabled AI researchers to deploy machine learning, a well-established type of statistical technique. Since then, it has been much harder to sustain the argument that the complementary effect will persist for the foreseeable future. Machine learning, and in particular a sub-set called deep learning, enables machines to carry out tasks which are non-routine, and indeed sometimes require creativity. Those who continue to deny that machines can be creative have simply not been paying attention. There is clear evidence of creativity in the famous move 37 in game two of AlphaGo’s defeat of Lee Sedol, the best human Go player. Arbiters in chess championships between humans now watch out for unusual creativity, which is a sign that the player has cheated by using a computer.
Oddly, Susskind chooses not to use the terminology of machine learning, but calls it “pragmatic” AI instead. The form of AI which prevailed before is generally called symbolic AI, or good old-fashioned AI (GOFAI), but Susskind calls it “purist”.
This transition within AI has caught out many eminent thinkers. They seem to confuse consciousness (which machines do not appear to possess) and intelligence (which they certainly do display). The philosopher John Searle complained that by developing Deep Blue, IBM was giving up on AI. Douglas Hofstadter thought that IBM’s Watson, which won Jeopardy, was vacuous. Intelligence is most concisely defined as goal-oriented learning behaviour, and it is not a specifically human thing. We humans are just the most advanced exemplar we have today. We are unlikely to be the most advanced exemplar this planet will ever host.
For these thinkers, intelligence is whatever machines can’t do today. They assume that machines will have to reach artificial general intelligence (AGI, or strong AI) to be capable of the more complex things that we do, like driving, diagnosing cancers, holding conversations, etc. Whenever any of these things falls to the machines, they move on to the next. Susskind is surely correct to say that technological unemployment does not require AGI.
The capabilities we use to earn a living can be classified as manual, cognitive, and affective. (Affective capabilities relate to human moods, feelings and attitudes.) Machines have been taking over the jobs requiring manual capabilities for decades. They dominate heavy-lifting tasks, and they are increasingly good at fiddly manual jobs too. We can already see cognitive capabilities starting to go the same way, and we cannot be confident that jobs requiring affective capabilities will always be reserved for humans: machines can already tell if you are happy, surprised, or depressed. Or gay. Some AI systems can tell these things by your facial expressions, and others by how you walk, or dance, or type.
Automation will proceed at different paces in different places, not least because the cost of the alternative to automation will vary. Countries that age faster will automate faster, and regulations and cultures will also play a role in setting the timeline. But overall, the process is ineluctable. Take any piece of technology – a computer, a mobile phone, a robot: the current version is the least advanced that it is ever going to be. As Susskind says, “nothing is certain in life except death, taxes, and the relentless process of task encroachment.”
Frictional and structural technological unemployment
Susskind expects task encroachment to cause two kinds of unemployment: frictional, and then structural.
Frictional technological unemployment means there are still jobs, but not all of us are equipped to do them. Susskind suggests this may already be happening: in the USA, unemployment is very low, at 3.7%, but it is no secret that unemployment numbers don’t tell the whole story. The participation rate – the proportion of working-age people who are either in work or looking for it – is depressed, with one in six men of working age having dropped out of the workforce, double the level in 1940. There is a mis-match of skills, identity, and place: men are unwilling to take the so-called “pink collar” jobs which have expanded, like nursing, teaching, housekeeping, and hairdressing. Or they may not be offered them.
If more and more workers chase a dwindling number of jobs, the result will be lower wages, and the growth of “the precariat”. And if automation replaces human jobs at an accelerating rate, we will see what I call the Churn, as people have to re-skill and re-train more and more often, changing their jobs, their companies, and even their careers.
Structural technological unemployment, by contrast, means the complementary force has become ineffective. A human is replaced in one job, and even though the productivity effect, the bigger pie effect or the changing pie effect means that another job is created, that new job is done by a machine, not by the displaced human. Susskind is no more able to prove beyond reasonable doubt that this will happen than any other commentator has been, but he provides plenty of compelling examples which show why we should take the idea seriously.
Skeptics about the idea of technological unemployment think it is the “lump of labour fallacy”, the misconception that there is a fixed amount of work—a lump of labour—to be done within an economy which can be distributed to create more or fewer jobs. David Schloss, a British economist, pointed out back in 1892 that instead of being static, work expands. The trouble is, there is no guarantee that the additional work will always be done by humans instead of machines.
If technological unemployment is coming, when will it arrive? Susskind is refreshingly honest: he does not know. There may be sudden surges and abrupt tipping points, or there may be a constant, gradual erosion of the complementary force. Unusually, for a book on this subject, Susskind does not explicitly refer to the exponential growth in the power of our machines. But he does say that he thinks the timing is decades rather than centuries, because given eight decades of current progress, a machine will be a trillion times more powerful than its equivalent today.
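Susskind’s back-of-the-envelope claim is easy to verify. Assuming, purely for illustration, that machine power doubles every two years (the doubling period is my assumption, not a figure from the book), eight decades gives forty doublings:

```python
# Forty doublings over eight decades, assuming power doubles every 2 years.
# The 2-year doubling period is an illustrative assumption.
years = 80
doubling_period_years = 2
doublings = years // doubling_period_years   # 40
growth = 2 ** doublings
print(f"Growth factor over {years} years: {growth:,}")  # roughly 1.1 trillion
```

Two to the power of forty is about 1.1 trillion, which is where the “trillion times more powerful” figure comes from.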
The Big State
In the book’s concluding chapters, Susskind asks how we will all find meaning in our lives without jobs, and how we will all earn enough to live on. Many of the authors who have addressed technological unemployment before have ground to a halt at this point, particularly American ones. Susskind’s proposal gives a clue as to why they struggle so much. For both questions, he concludes that the answer lies partly in developing a Big State, which will redistribute income and wealth, and nudge us all into behaviours that will give us lives of fulfilment rather than boredom and despair. What he has in mind is something much more radical and much more intrusive than the currently popular appeals for more industrial strategy and a more generous welfare state. This will be anathema to many American readers, and it certainly raises the spectre of an authoritarian state, or at least a heavily patronising one.
Although Susskind thinks we will need the Big State to help us find meaning and purpose in a world without work, he provides a series of historical examples which demonstrate that you don’t need a job to enjoy a life with meaning. One of them is aristocrats. He quotes Bertrand Russell as saying that far from being bored and useless, the leisure class “contributed nearly the whole of what we call civilisation”.
Furthermore, working nine to five (or five to nine, for the merchant bankers and startup founders amongst you) is not a state of nature. Foragers and hunter-gatherers worked fewer hours each day than we do. The ancient Greeks’ approach to work and leisure was the opposite of ours: the ancient Greek word for work is ascholia, and it means the absence of leisure. Aristotle declared that “citizens must not lead the life of artisans or tradesmen, for such a life is ignoble and inimical to excellence.”
This idea is also enshrined in the foundation story of our most widespread religion. When Adam and Eve ate the forbidden apple, God sentenced them to work, and they lost the lives of leisure they had led up till then. If Marx were alive today, he might say that work is the new opiate of the masses – the pastime we use to blind ourselves to the possibility of a better life.
Susskind also thinks the Big State will be needed to re-distribute income and wealth. He envisages taxes rising sharply, and various kinds of intrusive behaviour change programmes, including obliging accountants to work against the financial interests of their clients.
Like many economists, he is skeptical of universal basic income. He understands the argument that making such payments universal would ensure they reach everybody, and should neutralise their stigma, but he cannot reconcile himself to the waste involved in paying UBI to, for instance, Rupert Murdoch and Mark Zuckerberg.
But I think he neglects the biggest problem with UBI, which is the little word in the middle: “basic”. At best, even if it is somehow affordable, UBI succeeds only in keeping everybody alive but poor. We have to do much better than this. We have to make everybody rich – or at least comfortable.
I suspect there is only one way we can transfer enough income and/or wealth from the rich to the rest of us to make everybody comfortable in a world without jobs. That is to develop the economy of abundance. If prices remain essentially as high as they are today, UBI is doomed to failure, and so are its variants, like Universal Basic Services, or Conditional Basic Income. The transfer would weigh heavily on the employed and the rich, and would be resisted. But if we can drive down to almost zero the price of everything we need for a great standard of living, then the transfer should be achievable, and a world without jobs could be a truly wonderful place. I think we can develop an economy of abundance – in fact it may arise naturally. However, the transition will be bumpy, and we need to have our eyes open.
This article first appeared in Forbes
The 2010s were an ironic decade. Most metrics show that human welfare improved at an extraordinary rate, but many of us seem to be fearful or resentful, or both. The world is far richer in 2020 than it was in 2010, and global inequality is declining. There is still plenty of poverty, egregious inequality, and injustice, and there are still brutal wars and civil unrest. But overall, life expectancy is sharply up, and child mortality and deaths during childbirth are sharply down. Despite global warming, the number of deaths and injuries from climate-related disasters has fallen significantly, and many rich countries have passed the point of “peak stuff”: they are using fewer resources, polluting less, and the world has actually increased its forest cover.
And yet, the most potent political force in many countries is populism. Some populists are sincere people motivated by genuine conviction, but many more are obvious opportunists. Their claims are consistent: the world used to be a better place; the people’s birthright has been stolen by outsiders, enabled by an established elite, and only the populist can rectify the situation. Oh, and anybody who opposes them is an enemy of the people, and should be vilified, and barred from the media.
Populism is rampant on both sides of the political divide. Today’s right-wing populism is often explained as a reaction against economic disadvantage – the resentment of people who feel left behind by globalism and technological change. There is something in this, but in truth it is much more a cultural phenomenon: a reaction against the decades-long triumphal march of social liberalism, which has overturned what people believed to be the natural order of things. The worst insult a right-wing populist can level is “politically correct”.
Populism of the left claims that modern capitalism is a conspiracy by an elite which is dedicated to (or at least indifferent to) the immiseration of the majority. Contrary to what the data shows, it claims that inequality is at an historical extreme, and getting worse.
Much of the improvement in the quality of human lives which populists don’t want you to know about was produced by the exponential improvements in technology, so it was perhaps inevitable that the ironic 2010s would see a backlash against technology – the techlash. Social media is accused of enslaving everyone to the dopamine rush of a Facebook like or a Twitter reply, and these accusations are often expressed most forcefully by the most avid users of the technologies they rail against. The tech giants are hoovering up our personal data for nefarious purposes, and recklessly deploying algorithms that are opaque, riddled with bias, and diluting the agency and humanity of a population that is increasingly dumbed down – incapable of paying attention to anything for more than ten seconds, unless it is a video game or a blockbuster movie.
Techlash encompasses artificial intelligence too, which is either feared or ridiculed – or both. Either it is about to take over all human jobs and then destroy the species in a robot apocalypse, or it is an over-hyped fad: a mere conjuring trick using statistics and human slave labour.
In fact, the 2010s were AI’s decade of wonders. In 2011, IBM’s Watson beat the best human players of the US TV quiz show “Jeopardy” – an amazing achievement, and the gracious human loser gave us the memorable phrase “I for one welcome our new computer overlords.” The next year saw the Big Bang in AI, when Geoff Hinton and others figured out how to get machine learning to work in AI – and in particular deep learning, which is (to over-simplify) a rehabilitation of neural networks. What made this possible was the huge increases in the available compute power and data, and what it made possible was superhuman facial recognition, and seriously impressive search, mapping, and translation services. (The often lauded recommendation services are still a bit crap, though.)
Two things which will have huge impact during the 2020s showed signs of their promise during the 2010s. Self-driving cars went from being rubbish, to being deployed in a pilot service carrying members of the public in self-driving taxis with nobody in the front seats. Smartphones went from rare in 2010 to globally ubiquitous in 2019. The digital assistants in these phones and other devices (Siri, Cortana, Alexa and co) are basic today, but Google Duplex offers a glimpse of how powerful they will become, and some of this promise will be realised in the 2020s.
In the next few days you will probably read many predictions about what AI will and will not be able to do by 2030. Here are a few contributions.
There will be another major breakthrough in AI, similar in impact to 2012’s Big Bang.
Researchers will work out how to combine symbolic AI (good old-fashioned AI) with machine learning.
Machines will start to display signs of common sense.
We will still be a long way off artificial general intelligence, or AGI – a machine with all the cognitive abilities of an adult human.
The business world will move beyond pilots to large-scale implementation, and start catching up with the tech giants.
Europe will try harder, and might even start to crack the current US-China AI duopoly.
By 2030, self-driving cars will be a common sight in most cities, but in taxis rather than privately-owned cars.
Many taxi drivers, van drivers and lorry drivers will be looking for new careers.
You will have conversations with your phone, and send your digital assistant off into the net to do errands for you.
5G will make the internet of things a reality, so predictive maintenance will mean that things will break down and collapse less often, and there will be less waste.
Virtual and augmented reality will work quite well, and it will be interesting to see whether lots of people spend much of their lives in simulated worlds.
AI simulations will enable better decisions to be made in business, science, and government.
We may finally be able to turn sick care into health care. There’s a decent chance we will cure many types of cancer, and the idea of ending ageing may well be in the mainstream.
And yes, we will have flying cars.
Some of this may seem fanciful, and predicting the future is, of course, impossible. But here’s the thing which most people still miss. When you read the forecasts elsewhere in the coming days, ask yourself whether they appear to be taking exponential growth into account.
Moore’s Law is the observation that computers get twice as powerful every 18 months or so. People often say it is dead or dying, but really it is evolving – which is what it has done since the phenomenon was first observed in 1965. Moore’s Law gives us exponential growth, and exponential growth is astonishingly powerful. If you had one unit of computing power in 2010, you will have around 100 units in 2020. How many will you have in 2030? Believe it or not, around 10,000 units.
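The compounding behind figures like these is simple to check – here is a minimal sketch in Python, where the only assumption is the 18-month doubling period:

```python
# Moore's Law compounding: assume computing power doubles every 18 months.
# The 18-month doubling period is the standard rough figure, not a precise law.
def growth_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(f"2010 -> 2020: {growth_factor(10):,.0f}x")  # about 100x
print(f"2010 -> 2030: {growth_factor(20):,.0f}x")  # about 10,000x
```

Stretch the same arithmetic out and the numbers become absurd quickly: that is the nature of compounding, and it is why linear intuitions about the 2020s will be wrong.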
Change has never been this fast. And it will never be this slow again. Hang onto your hat: the 2020s are going to be astonishing.
A YouTube series, presented by Robert Downey Jr
Robert Downey Jr is best known as Tony Stark, the character behind Iron Man in the Avengers movies. It is said that Downey Jr modelled his portrayal of Stark on Elon Musk, the creator of Tesla and SpaceX, and one of the most outspoken commentators about artificial intelligence. Musk famously said that by developing advanced AI we are “summoning the demon”, and that we must work hard and fast to ensure it remains safe. In fact he thinks we must develop the technology to link our minds intimately with AI systems, so that instead of being replaced by them we can be enhanced by them.
So it is apt that Downey Jr is introducing “The Age of AI”, YouTube’s expensive new eight-part series on AI. The first two episodes are available now, and the remaining six will be released over the coming weeks – unless you are impatient, and sign up for the premium service. Inevitably, the series has high production values: Robert Downey Jr is not going to lend his name to content below Hollywood standards. Indeed, he introduces each episode from a hangar where the original Iron Man movies were shot, a dozen years ago. The camera moves around a lot, and each shot is short, with lots of close-ups of faces, hands, musical instruments – lots of eye candy for viewers with short attention spans.
How do you find a way into a subject as large, complex, and important as artificial intelligence? The storytellers behind “The Age of AI” chose to start by focusing on how far AI can enhance us, and whether it could end up replicating, and even replacing us. The first episode introduces us to Baby X, a lifelike avatar of a baby girl developed by digital effects artist Mark Sagar, who helped create King Kong for Peter Jackson, and the Na’vi characters in Avatar for James Cameron. Graphics by Hollywood, behavioural traits courtesy of machine learning. The experts go on to develop an avatar for Will.I.Am, founder of the Black Eyed Peas, who is impressed by the creation, and then suggests that it should remain a little robotic, so as not to confuse his mother.
The second story in episode one shows us prosthetic hands for two musicians – a drummer and a guitarist. Existing prosthetic hands are rather blunt instruments, and often quickly abandoned by their intended users. Adding analysis by machine learning of the nerve signals the brain can still send down a phantom limb seems to enable a much more lifelike prosthesis. The message of the episode is that machine learning and AI can make us more human, not less, but we will have to think carefully about where we want to draw the line.
A geek might ask for more detailed explanations of how AI works. Terms are explained as the series unfolds, but very briefly. Machine learning, for instance, is a technique to find patterns in data. And, er… that’s it. But viewers unfamiliar with AI will learn a lot. The second episode addresses how AI is advancing medical science, and also disseminating it – making it more widely available in the developing world, for instance. It rams home the point that the availability of masses of data is what enables machines to diagnose illnesses faster and more cheaply than human doctors can. In India, which has a chronic shortage of doctors for its enormous population, machines can quickly and accurately diagnose retinal damage caused by diabetes, and push patients through to surgery in time to prevent blindness. There was no discussion in this episode of the controversy surrounding the sharing of patients’ intimate data which is necessary to enable this – perhaps that will come in a later episode.
Sometimes the show feels like an infomercial, either for AI as a whole, or simply for Google, which provided many of the filmed examples. This must have been much easier to arrange, given that YouTube is owned by Google, but it is surprising they didn’t wander down the road to speak to Facebook or Apple, for instance, or hop on a plane to see Amazon, IBM, or even Baidu or Tencent. The programme follows teams from Google as they help ex-NFL star Tim Shaw regain his natural voice after losing muscle control to the tragic disease ALS, also known as Lou Gehrig’s disease. The achievement is impressive, and the emotion provoked in his family is profound and moving. But the failure to mention any of the other tech giants, or the controversy swirling around the industry, will leave some viewers feeling manipulated.
AI is our most powerful technology, and in the next few decades it will change everything about the nature of being human. Understanding what it is, how it works, and something about its promise and its peril will increasingly be basic literacy for citizens. This is a well-made, well-informed show that will get many more people up to speed, and that is greatly to be welcomed.
This article first appeared in Forbes
The New Optimists
Andrew McAfee wants to cheer you up. If you read his latest book with an open mind, he might well succeed. McAfee, an MIT economist, is joining the New Optimists (Bill Gates, Steven Pinker, Hans Rosling and others) in trying to persuade us that the world is not going to the dogs. The central claim of “More From Less” is that capitalism and technological progress are allowing us “to tread more lightly on the earth instead of stripping it bare.” Unfortunately, he admits, this good news is hard for many people to believe because catastrophism has such a strong hold on our imaginations.
For hundreds of years before 1700, England’s population oscillated between two and six million. When peace coincided with good harvests, the number would rise, only to slump again when our inability to feed the growing population brought famine again. Robert Malthus made the reasonable assumption that this pattern would continue, and issued a dire warning about the consequence of Britain’s fast-growing population in the early industrial revolution. He was wrong. Capitalism and technology changed the game entirely, enabling us to feed far larger populations than ever before. Malthus’ name became a synonym for dramatically inaccurate predictions.
Paul Ehrlich is Malthus’ intellectual heir. Since the 1960s he has been forecasting doom and disaster from the exhaustion of all the natural resources we depend upon. The first New Optimist, Julian Simon, offered Ehrlich a bet: choose any resource and any time-frame above a year. If the price of the resource rose, Simon would pay Ehrlich; if it fell, the reverse. Ehrlich chose five – copper, chromium, nickel, tin, and tungsten – and the prices of all five fell. Ehrlich is surprisingly unrepentant: after all these years of abysmal forecasting failure, he is still telling students at Stanford that disaster is just around the corner.
Ehrlich is not alone. Any number of environmentalists and lobby groups will tell you that we are polluting, deforesting, and generally destroying the planet, exhausting its natural resources, and driving most other species extinct. All this is making us sick, and crucially, the damage is accelerating.
Using fewer natural resources
Implausible as it will seem to many, the data shows the opposite. As we get richer, we are using resources more efficiently, using less energy, causing less pollution and cleaning up the pollution of the past. We are even re-foresting the earth and protecting other species. McAfee produces compelling data and numerous examples, but sadly, many people will refuse to believe him: good news is no news, and if it bleeds, it leads. We all love a good horror story.
The evidence about resource consumption in America comes from the US Geological Survey, a federal agency formed in 1879. It tracks seventy-two resources, from aluminium to zinc, and only six of them are not yet post-peak. Even energy usage is decreasing, down two percent in 2017 from its 2008 peak, despite a 15 percent growth in GDP between those two years.
America is getting more and more efficient. Milk and aluminium are two of McAfee’s examples. Between 1950 and 2015, US milk production rose from 117 billion pounds to 209 billion, while the herd shrank from 22 million cows to 9 million. This is a productivity improvement of 330 percent. When aluminium cans were introduced in 1959 they weighed 85 grams. This fell to 21 grams by 1972, and by 2011 it was down to 13 grams.
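The milk numbers are easy to check. A quick back-of-the-envelope calculation (mine, not McAfee’s) using the figures quoted above:

```python
# Output per cow, computed from the figures quoted above
per_cow_1950 = 117e9 / 22e6  # pounds of milk per cow in 1950
per_cow_2015 = 209e9 / 9e6   # pounds of milk per cow in 2015

improvement = per_cow_2015 / per_cow_1950 - 1
print(f"{improvement:.0%}")
```

This comes out at a little under 340 percent per cow, consistent with the roughly 330 percent improvement quoted above.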
The information revolution has powered much of this improvement, as illustrated by the story of railcars. In the late 1960s, US railway companies owned thousands of these 30-ton beasts, and only about five percent of them moved on any given day. This was not because the other 95 percent needed to rest: it was because their owners didn’t know where they were. They knew that if they could increase the percentage of cars moving each day from 5 percent to 10 percent, they would need only half as many of them. Today, of course, every railcar reports its precise location to its owner several times a second – thanks to the information revolution.
It’s not just the US. In the UK, the Office for National Statistics publishes the annual Material Flow Accounts, and a 2011 paper entitled ‘Peak Stuff’ concluded that the UK reached maximum use of material resources in the early 2000s. Data from the EU’s statistical agency Eurostat show that Germany, France, and Italy have generally seen flat or declining total consumption of metals, chemicals, and fertilizer in recent years.
And no, before you ask, this reduction in natural resource usage is not just the result of our economies switching from goods to services. While goods have been declining compared to services as a percentage of total GDP, the output and consumption of products has carried on increasing in absolute terms. We are experiencing a great decoupling: we are de-materialising industrial production. (This is actually quite an old idea: it was called ephemeralisation by Buckminster Fuller back in 1927. You may remember him as the inventor of geodesic domes, which are very efficient structures.)
As well as using fewer natural resources, the developed world is generating less pollution. In the US, the Clean Air Act was substantially amended and strengthened in 1970, 1977, and 1990. The Clean Water Act was passed in 1972, the Safe Drinking Water Act in 1974, and the Toxic Substances Control Act in 1976. Other developed countries have their equivalents.
The results are impressive. McAfee quotes another member of the New Optimists, Matt Ridley: “A car today emits less pollution travelling at full speed than a parked car did from leaks in 1970.”
McAfee also denies that we are driving thousands of species extinct: “documented extinctions are relatively rare (with about 530 recorded within the past five hundred years) and appear to have slowed down in recent decades”. That is not to say that our impact on other species is altogether benign: “the biggest threat to animal species isn’t absolute extinction, but instead huge declines in population size due to over-hunting and habitat loss.” But even here the trend is encouraging. “Parks and other protected areas made up only 4 percent of global land area in 1985, but by 2015, this figure had almost quadrupled, to 15.4 percent. At the end of 2017, 5.3 percent of the earth’s oceans were similarly protected.”
It turns out we are using less land for farming, and land that we no longer farm reverts to forest. “Throughout the developed world this process is now dominating any and all tree felling that is taking place, and overall reforestation has become the norm.” This is not the case in the developing world, but “even with continued deforestation in developing countries and other challenges, a critical milestone has been reached: across the planet as a whole we have, as an international research team concluded in 2015, experienced a ‘recent reversal in loss of global terrestrial biomass.’ For the first time since the start of the Industrial Era, our planet is getting greener, not browner.”
As the world continues to grow richer, McAfee argues, we can expect this good news to spread. “In 1999, 1.76 billion people were living in extreme poverty. Just sixteen years later, this number had declined by 60 percent, to 705 million. Hundreds of millions fewer people are living in poverty now than in 1820, when the world’s total population was seven times smaller than it is today.” Happily, “the story of global poverty reduction isn’t a purely Chinese one. … Every region around the world has seen large poverty reductions in recent years.”
The Four Horsemen of the Optimist
If you can suspend your disbelief for a bit longer, you’ll be wondering what is causing these happy developments. McAfee identifies four drivers, which he calls the four horsemen of the optimist: Technology, Capitalism, Public awareness, and Responsive government.
Technology gives us new ways to solve old problems, and capitalism provides the incentive for people to invent these new ways and to implement them once they have been invented. As Abraham Lincoln put it, we add “the fuel of interest [capitalism] to the fire of genius [technology] in the discovery and production of new and useful things.”
Sadly, capitalism is a hard sell in many quarters these days, so McAfee also provides a telling example of how its great rival, socialism, often yields disastrous outcomes. The USSR was a signatory to the 1946 international convention regulating whaling, but between 1948 and 1973 it killed 180,000 more whales than it reported. Unlike the Japanese, the Russians have no great appetite for eating whale flesh, and most of the animals’ bodies were thrown back into the sea. And why? Because the five-year plan demanded seafood tonnage, and it had no mechanism to incentivise the production (or in this case, hunting) of things that people actually wanted.
Technology and capitalism are not enough, of course. Some humans, capitalist or otherwise, will pillage and poison unless they are prevented from doing so. Public awareness and responsive government are needed to address the fact that markets often ignore what economists call negative externalities, and they often fail to support people who are unlucky and/or unsuccessful.
Nevertheless, McAfee insists that the spread of capitalism has improved the lot of humanity beyond recognition. Its partial adoption by India in “1991… deserves its spot in the annals of economic history alongside December 1978, when China’s Communist Party approved the opening up of its economy, or even May 1846, when Britain voted to repeal the Corn Laws.” “Between 1978 and 1991, more than 2.1 billion people—about 40 percent of the world’s 1990 population—began living within substantially more capitalist economic systems.”
McAfee is confident that in the long run, the four horsemen will continue to ride. “Smartphone use and access to the Internet are increasing quickly across the planet. This means that people no longer need to be near a decent library or school to gain knowledge and improve their abilities.” And countries are unlike companies in that size does not necessarily beget bureaucratic sloth: our most valuable resource is human ingenuity, and “an economy with a larger total stock of human capital will experience faster growth.”
Climate change and its solutions
To establish that he is no climate change denier, McAfee cites the mantra, “it’s warming; it’s us; it’s bad; and we can fix it.” But once again, he argues that the trend in the developed world is much better than most people think. In the US, “greenhouse gas emissions have gone down even more quickly than has total energy use. This is largely because we have in recent years been using less coal and more natural gas to generate electricity.”
How can we entrench and spread this positive trend? McAfee proposes two solutions: first, put a price on carbon emissions – via a tax or a cap with tradeable permits. Second, rehabilitate nuclear energy. “Nuclear power doesn’t deserve its bad reputation. As is the case with vaccines, glyphosate, and GMOs, public awareness around nuclear power is broadly out of step with reality.”
Inequality and Populism
Despite all this good news, the world is undeniably grumpy. People in many countries have elected populist governments, and in some places, especially rural America, “deaths of despair” like suicide and the misuse of drugs and alcohol are rising. McAfee thinks that growing inequality plays a significant role in this, but the data from his favourite source, the excellent website “Our World in Data”, suggests otherwise. Inequality is certainly not growing on a global level, as developing countries have been growing much faster than developed ones. And while the Gini coefficient, the usual yardstick of inequality, has become slightly worse in the US, the same is not true elsewhere in the developed world, where the coefficient has remained fairly steady at just under 40 since the early 1990s. (100 is perfectly unequal and 0 is perfectly equal.)
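For readers who have not met the measure, the Gini coefficient can be computed directly from a list of incomes. A minimal illustration (not the estimator statistical agencies actually use, which works on grouped survey data):

```python
def gini(incomes):
    """Gini coefficient on a 0-100 scale: 0 is perfect equality;
    the score approaches 100 as one person takes everything."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over all ordered pairs of incomes
    abs_diffs = sum(abs(a - b) for a in incomes for b in incomes)
    return 100 * abs_diffs / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # 0.0 – perfectly equal
print(gini([0, 0, 0, 40]))     # 75.0 – one person has everything
```

By this yardstick the developed world’s score of just under 40 is moderate: the world’s most unequal countries score in the 50s and 60s.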
The real villain of the piece is not inequality, but the perception of unfairness, which is something people actually care much more about. As McAfee himself notes, “people prefer fair inequality over unfair equality.” Populists have risen to power on the back of resentment. McAfee quotes a book on America’s Tea Party: “Blacks, women, immigrants, refugees – all have cut ahead of you in line. But it’s people like you who have made this country great. The line cutters irritate you. They are violating rules of fairness.”
Pluralists and authoritarians
The roots of the perceived unfairness lie in the remarkable success of social liberalism in recent decades. Rightly or wrongly, many people feel this has gone too far: it is “political correctness gone mad”. The culture wars are being fought by pluralists and authoritarians. As McAfee puts it, “most countries are becoming significantly more pluralistic—they’re seeing more ethnic diversity and immigration, gender equality, support for gay marriage and other non-traditional lifestyles, and related changes that enhance diversity. A fascinating stream of recent research finds that a large percentage of people in all countries studied have an innate intolerance for this greater diversity. [They] want a strong central authority to enforce obedience and conformity.”
This battle between pluralists and authoritarians is raging all over the world, and it has eclipsed traditional loyalties of class, and the ideologies of the left and the right. How can this battle be won, or at least resolved? McAfee is clearly a pluralist, but he discounts the possibility of persuading authoritarians by rational argument. “It’s particularly important not to try to win arguments with them. … A better way is to start by finding common ground.”
This seems an unpromising approach. As he admits, “more and more people are choosing to have fewer ties to people with dissimilar values and beliefs, opting instead to spend more time among the like-minded. The journalist Bill Bishop calls this phenomenon ‘the big sort.’” Perhaps a better way to respond to the fear and anger which authoritarians breathe is simply to make pluralism the more attractive option, using fun and humour. This should not be hard, since pluralism is inherently more optimistic, although it often trips itself up by taking itself too seriously, and engaging in self-righteous circular firing squads.
Automation and abundance
McAfee is probably best known for his 2014 book “The Second Machine Age”. In that book, he and his co-author, fellow MIT academic Erik Brynjolfsson, argued that many jobs will be automated by artificial intelligence, and that although many new jobs will be created, societies must get better at re-skilling and re-training people to move from the old to the new.
I agree that for the next two or three decades there will be a Big Churn in the job market, but I have been trying for some time to persuade Brynjolfsson and McAfee to cast their minds further forward, and take seriously the idea that after two or three more decades of exponential improvement, our machines will be cheaper, better, and faster at pretty much everything that most of us can do for money. In which case, technological unemployment will become a reality.
McAfee makes little reference to the theme of automation in “More From Less”, which is ironic, because it helps to answer this big question: if machines do take all the jobs, how do we pay for the humans? The answer may well be to reduce the cost of all the goods and services we need to almost zero.
This is called the economy of abundance, and “More From Less” is invaluable in showing some of the ways it could materialise.
“More From Less” is a well-written and convincing book. If it makes a few of us more optimistic, it will also be remembered as an important one.
Roger Bootle is not afraid to think and say unconventional things. He is that rare phenomenon: a professional economist who thinks that Brexit is a Good Idea. Indeed, he belongs to a group called Economists for Brexit, now renamed as Economists for Free Trade, which argues for a no-deal Brexit.
Whatever you think of that, the economics consultancy that Bootle founded, Capital Economics, has been very successful financially, and in 2012 it was awarded the £250,000 Wolfson Economics Prize, the second most valuable economics prize in the world after the Nobel, for a proposal that member states wanting to leave the euro should default on a large part of their debts. A book on technological unemployment from such a high-profile economist is to be warmly welcomed. What’s more, it is a well-researched, enjoyable, and thoughtful book.
The thoughtfulness does have its limits. The book reads as though Bootle was determined to dismiss the possibility of technological unemployment from the outset, and he makes little effort to hide his disdain for those who take the idea seriously. People like Max Tegmark and me, who are guilty of this crime, are labelled “AI visionaries”, and it is clear that this is not a compliment. We “geeks” are “bubbling enthusiasts” but also pessimists, “emanating gloom”. Others who are responsible for “fetid speculation about the implications of AI” are Stephen Hawking, Martin Rees, Stuart Russell, Elon Musk and Bill Gates. Quite the rogues’ gallery.
Overall, Bootle’s writing style is clear and relaxed, and the book is mostly calm and measured. Occasionally he does give free rein to his inner curmudgeon: “As to the Internet of Things, rarely can something have been so overhyped. … In the future, doorknobs and curtains will also be able to speak to us when they need some attention, rather like those disembodied voices or noises in cars that tell us when we haven’t fastened our seatbelts. Heaven forfend!”
Less than a quarter of the way through the book, Bootle delivers what he thinks is the killer blow to the idea that technological unemployment is possible. “Unless and until robots can produce and reproduce themselves costlessly … human beings will always have some comparative advantage.” He admits that this might not help, as the income they could earn “might be appallingly low such that it hardly seemed worth working and the state has to intervene in a major way.” But he thinks humans have something better than comparative advantage: “In fact, such an outcome lies a long way off and, I suspect, will never transpire. For there are many areas where humans possess an absolute advantage over robots and AI, including manual dexterity, emotional intelligence, creativity, flexibility, and most importantly, humanity. These qualities ensure that in the AI economy there will be a plethora of jobs for humans.” And apparently that’s it.
I disagree. AlphaGo’s famous move 37 in its second game against Lee Sedol in 2016 is one of many proofs that machines can be creative, even if their version of creativity does not involve a shred of consciousness. And anyone who has been watching the progress of robots developed by Boston Dynamics and others in the last few years will be under no illusion that humans will remain supreme forever in manual dexterity and flexibility.
The truth is that no-one knows for sure whether technological unemployment will happen, or when. None of us has a crystal ball. But if you think seriously about the impact of the exponential growth in the power of computers, and if you think ahead just a few decades, you realise that it is dangerously complacent to dismiss the possibility of technological unemployment out of hand.
Bootle does consider the phenomenon of exponential growth – he borrows my illustration of a football stadium filling up with water – but he dismisses it because it always collapses into an S curve, and he argues that because observations of exponential growth are sometimes described as a law, they lead to assertions that “rest on flimsy, if not nonexistent foundations.” This is a blatant Aunt Sally: everyone knows that exponential growth always collapses into an S curve eventually – the question is how long before that happens. (You are composed of around 37 trillion cells, which were created by fission, or division – an exponential process. It required 46 steps of fission to create all of your cells. Moore’s Law, by comparison, has had 36 steps in the 54 years of its existence.) And I’m not aware of anybody writing about Moore’s Law who doesn’t realise that it is an observation, not a physical law.
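Both doubling counts in that parenthesis can be verified in a couple of lines (taking the commonly cited figure of roughly 37 trillion cells, and an 18-month doubling period for Moore’s Law):

```python
import math

# Rounds of cell division needed to grow from one cell to ~37 trillion
print(math.ceil(math.log2(37e12)))  # 46 doublings

# Moore's Law doublings between 1965 and 2019, at one every 18 months
print((2019 - 1965) / 1.5)          # 36.0 doublings in 54 years
```
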
Partly the problem seems to lie in a failure of Bootle’s imagination – or perhaps his unwillingness to exercise it. He studied PPE at Oxford, and one of his favourite questions from back then is “Was the Black Death a good thing?” He says he “cannot imagine any form of AI being capable of assessing adequately the range of possible answers to this question.” I bet he could if he really tried.
Quite a few of Bootle’s assertions are out-of-date, or simply mistaken. He pours scorn on the idea of the paperless office, but the use of paper in offices peaked in 2007. He reports that chess computers are enhanced by collaborating with humans, but this has not been true for several years now. He thinks Kevin Kelly is a singularitarian, when he is actually a prominent opponent of the idea. A quick look at Wikipedia would have saved him from making the erroneous claim that Stanislav Petrov (the man who saved the world by bravely declaring a report about incoming American nuclear weapons to be a false alarm) was sacked. More seriously, his account of the progress with self-driving cars is highly contentious, and probably considerably off the mark. He regards autonomous cars as a bubble which is about to burst and destroy much of the automotive industry which has been foolish enough to invest so heavily in it.
From my point of view, it is a great shame that Bootle seems to have begun his enquiry so prejudiced against the idea that technological unemployment is a realistic possibility some decades ahead. In general, he is a congenial guide to the issues, and it would have been fascinating to have had his economic expertise applied to the idea, for instance, that the economy of abundance is a better solution to the problem than universal basic income, and that fully automated luxury capitalism is a better aspiration than fully automated luxury communism. As it stands, most of his book is only of academic interest if you do take the idea of technological unemployment seriously.
This article first appeared in Forbes magazine in October 2019.