“Calum’s Rule”

Forecasts should specify the timeframe

Time dispute

Disagreements which suggest profound differences of philosophy sometimes turn out to be merely a matter of timing: the parties don’t actually disagree about whether a thing will happen; they just disagree over how long it will take. For instance, timing is at the root of apparently fundamental differences of opinion about the technological singularity.

Elon Musk is renowned for his warnings about superintelligence:

“With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.” “We are the biological boot-loader for digital super-intelligence.”

Comments like this have attracted fierce criticism:

“I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.” (Andrew Ng)

“We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do. Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat. This makes a lot of questions people are asking themselves premature.” (Yann LeCun)

“Superintelligence is beyond the foreseeable horizon.” (Oren Etzioni)

If you look closely, these people don’t disagree with Musk that superintelligence is possible – even likely – or that its arrival could be an existential threat for humans. What they disagree about is the likely timing, and the difference isn’t as great as you might think. Ng thinks “There could be a race of killer robots in the far future,” but he doesn’t specify when. LeCun seems to think it could happen this century: “if there were any risk of [an “AI apocalypse”], it wouldn’t be for another few decades in the future.” And Etzioni’s comment was based on a survey in which most respondents set the minimum timeframe at a mere 25 years. As Stephen Hawking famously wrote, “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not.”

Although it is less obvious, I suspect a similar misunderstanding is at play in discussions about the other singularity – the economic one: the possibility of technological unemployment and what comes next. Martin Ford is one of the people warning us that we may face a jobless future:

“A lot of people assume automation is only going to affect blue-collar people, and that so long as you go to university you will be immune to that … But that’s not true, there will be a much broader impact.”

The opposing camp includes most of the people running the tech giants:

“People keep saying, what happens to jobs in the era of automation? I think there will be more jobs, not fewer.” “… your future is you with a computer, not you replaced by a computer …” “[I am] a job elimination denier.” – Eric Schmidt

“There are many things AI will never be able to do… When there is a lot of artificial intelligence, real intelligence will be scarce, real empathy will be scarce, real common sense will be scarce. So, we can have new jobs that are actually predicated on those attributes.” – Satya Nadella

For perfectly good reasons, these people mainly think in time horizons of up to five years, maybe ten at a stretch. And in that time period they are surely right to say that technological unemployment is unlikely. For machines to throw us out of a job, they have to be able to do it cheaper, better, and/or faster. Automation has been doing that for centuries: elevator operator and secretary are very niche occupations these days. When a job is automated, the employer’s process becomes more efficient. This creates wealth, and wealth creates demand, and thus new jobs. This will continue to happen – unless and until the day arrives when the machines can do almost all the work that we do for money.

Time horizon

If and when that day arrives, any new jobs which are created as old jobs are destroyed will be taken by machines, not humans. And our most important task as a species at that point will be to figure out a happy ending to that particular story.

Will that day arrive, and if so, when? People often say that Moore’s Law is dead or dying, but it isn’t true. It has been evolving ever since Gordon Moore noticed, back in 1965, that his company was putting twice as many transistors on each chip every year. (In 1975 he adjusted the time to two years, and shortly afterwards it was adjusted again, to eighteen months.) The cramming of transistors has slowed recently, but we are seeing an explosion of new types of chips, and Chris Bishop, the head of Microsoft Research in the UK, argues that we are seeing the start of a Moore’s Law for software: “I think we’re seeing … a similar, singular moment in the history of software … The rate limiting step now is … the data, and what’s really interesting is the amount of data in the world is – guess what – it’s growing exponentially! And that’s set to continue for a long, long time to come.”

So there is plenty more Moore, and plenty more exponential growth. Assuming computing power continues to double roughly every eighteen months, the machines we have in 10 years’ time will be 128 times more powerful than the ones we have today. In 20 years’ time they will be 8,000 times more powerful, and in 30 years’ time, a million times more powerful. If you take the prospect of exponential growth seriously, and you look far enough ahead, it becomes hard to deny the possibility that machines will do pretty much all the things we do for money cheaper, better and faster than us.
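To make the compounding concrete, here is a minimal sketch of that arithmetic in Python. The 18-month doubling period is the assumption stated above, and rounding to a whole number of doublings reproduces the 128 / 8,000 / million progression; the snippet is purely illustrative, not a forecasting model.

    # Compounding behind the figures above, assuming computing power
    # doubles roughly every 18 months (1.5 years) -- an assumption,
    # not a law of nature.
    DOUBLING_PERIOD_YEARS = 1.5

    for years in (10, 20, 30):
        doublings = round(years / DOUBLING_PERIOD_YEARS)
        print(f"{years} years: ~{doublings} doublings, or {2 ** doublings:,}x more powerful")

    # 10 years: ~7 doublings, or 128x more powerful
    # 20 years: ~13 doublings, or 8,192x more powerful
    # 30 years: ~20 doublings, or 1,048,576x more powerful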

New rule
So I would like to propose a new rule, and with no false modesty I’m calling it Calum’s Rule:

“Forecasts should specify the timeframe.”

If we all follow this injunction, I suspect we will disagree much less, and we can start to address the issue more constructively.


Has AI ethics got a bad name?


Amid all the talk of robots and artificial intelligence stealing our jobs, there is one industry that is benefiting mightily from the dramatic improvements in AI: the AI ethics industry. Members of the AI ethics community are very active on Twitter and in the blogosphere, and they congregate in real life at conferences in places like Dubai and Puerto Rico. Their task is important: they want to make the world a better place, and there is a pretty good chance that they will succeed, at least in part. But have they chosen the wrong name for their field?

Artificial intelligence is a technology, and a very powerful one, like nuclear fission. It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire. Like nuclear fission, electricity, and fire, AI can have positive impacts and negative impacts, and given how powerful it is and will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative ones.


This is what concerns people in the AI ethics community. They want to minimise the amount of bias in the data which informs the decisions that AI systems help us to make – and ideally, to eliminate the bias altogether. They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible, so that in advance or in retrospect, we can check for sources of bias, and other forms of harm.

But if AI is a technology like fire or electricity, why is the field called “AI ethics”? We don’t have “fire ethics” or “electricity ethics”, so why should we have AI ethics? There may be a terminological confusion here, and it could have negative consequences.

One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems. The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car, or a rock. It will probably be many years before we create an AI which can reasonably be described as a moral agent.


It is ironic that people who regard themselves as AI ethicists are falling into this trap, because many of them get very heated when robots are anthropomorphised, as when the humanoid Sophia was given citizenship by Saudi Arabia.

There is a more serious potential downside to the nomenclature. People are going to disagree about the best way to obtain the benefits of AI and minimise or eliminate its harms. That is the way it should be: science, and indeed most types of human endeavour, advance by the robust exchange of views. People and groups will have different ideas about what promotes benefit and minimises harm. These ideas should be challenged and tested against each other. But if you think your field is about ethics rather than about what is most effective, there is a danger that you start to see anyone who disagrees with you as not just mistaken, but actually morally bad. You are in danger of feeling righteous, and unwilling or unable to listen to people who take a different view. You are likely to seek the company of like-minded people, and to fear and despise the people who disagree with you. This is again ironic, as AI ethicists are generally (and rightly) keen on diversity.

The issues explored in the field of AI ethics are important, but it would help to clarify them if some of the heat were taken out of the discussion. It might help if, instead of talking about AI ethics, we talked about beneficial AI and AI safety. When an engineer designs a bridge, she does not finish the design and then consider how to stop it falling down. The ability to remain standing in all foreseeable circumstances is part of the design criteria, not a separate discipline called “bridge ethics”. Likewise, if an AI system has deleterious effects, it is simply a badly designed AI system.

Interestingly, this change has already happened in the field of AGI research, the study of whether and how to create artificial general intelligence, and how to avoid the potential downsides of that development, if and when it does happen. Here, researchers talk about AI safety. Why not make the same move in the field of shorter-term AI challenges?

This article first appeared in Forbes magazine on 7th March 2019

The greatest generations


Every generation thinks the challenges it faces are more important than what has gone before. American journalist Tom Brokaw bestowed the name “the greatest generation” on the people who grew up in the Great Depression and went on to fight in the Second World War. As a late “baby boomer” myself, I certainly take my hat off to that generation.

The Boomers were named for demography: they were a bulge in the population (“the pig in the python”) caused by soldiers returning from the war. They saw themselves as special, and maybe they were. They invented sex in the 1960s, apparently, along with rock and roll, the counter-culture, the civil rights movement and the second wave of feminism.

Generation X was the first to take a letter as its title, although that happened late in their history, with the publication in 1991 of Canadian author Douglas Coupland’s novel, “Generation X: Tales for an Accelerated Culture”. Cynical Boomers said Generation X got its name because its members were cyphers: their role in the world was less clear, their contribution to it was doubtful. Early on they were accused of being lazy and disaffected: they were the MTV generation, and their musical styles were grunge and hip hop. But these are accusations that most parents hurl at their successors. Later on, Generation X showed high levels of entrepreneurship, and appeared to be happier than average, with a good work-life balance. Their profile may be lower because there are fewer of them: they were the first generation whose parents had access to the contraceptive pill.

Generation Y

Generation X was followed, naturally enough, by Generation Y, also known as the Millennials, since they were born between 1981 and 2000. (There are no generally agreed dates for the generations; I like 1941-60 for Boomers, 1961-80 for Generation X, and 1981-2000 for Millennials.) Following Generation Y, and still being born, is Generation Z. Whatever their predecessors may think, it is these two generations which will face the biggest challenges yet presented to humanity.

Speaking at the United Nations in 1963, John F Kennedy said something which would not be out of place today: “Never before has man had such capacity to control his own environment, to end thirst and hunger, to conquer poverty and disease, to banish illiteracy and massive human misery. We have the power to make this the best generation of mankind in the history of the world – or make it the last.”

The members of Generations Y and Z have been born at the best time ever to be a human, in terms of life expectancy, health, wealth, and access to education, information, and entertainment. They have also been born at the most interesting time, and the most important. Whether they like it or not, they have the task of navigating us through the economic singularity of mass unemployment, and then the technological singularity of superintelligence.

The economic singularity will arrive when Generation Y is running the show, which makes their name apposite, since one of the challenges the economic singularity will raise is to ensure that everyone finds meaning in a life without jobs. Generation Y will have to come up with a great new answer to the question of “why” we are here.

Generation Z

Generation Z is, if anything, even better named, although again, entirely by accident. One way or another, they are likely to be the last generation of humans to reach old age in a form their ancestors would recognise. These timings are of course uncertain and contentious, but Generation Z is likely to be the dominant force in politics and business when the first superintelligence appears, and humanity becomes the second-smartest species on the planet. The consequences for humans will be staggering. If things go well, and the superintelligence really likes us, then at a minimum, humans will quickly be augmented to dispense with many of the limitations and frailties which have afflicted us since life on earth began: ageing, vulnerability, and probably even death. These augmentations will render us barely recognisable, and hard to continue to classify as human. If things go very well, perhaps we will merge with the machines we have created, and travel the universe together to wonder at its marvels, immune to the ravages of vacuum and radiation. If things go less well, Generation Z could be the last generation of humans for less cheerful reasons.

Generations Y and Z are destined to be our greatest generations. If either of them fails in its task, humanity’s future could be bleak. But if they succeed, it could be almost unimaginably good. They must succeed.

Stories from 2045


Sparky, a NAO robot who lives at Queen Mary University, helped launch a book this week. In the very swish surroundings of the Reform Club on London’s Pall Mall, she read out a story written by an AI. It’s not a very good story, to be honest, but it’s impressive that an AI can write stories at all.

The other stories, written by humans, are very good indeed.  You’ll find them at an Amazon site near you. They speculate on what life might be like during and after the economic singularity.  Two-thirds of them are positive, which is what we need – Hollywood has given us more than enough dystopias already.

In the coming decades, artificial intelligence (AI) and related technologies will have enormous impacts on the job market. At the moment, no-one can predict exactly what will happen or when. The outcomes could be anywhere from very good to very bad, and which ones we get will depend significantly on the actions taken (and not taken) by governments and others in the coming few years.

The Economic Singularity Club think tank (ESC) was set up to discuss these issues, and try to influence their outcome positively. This book is our first tangible project.

One possible impact of the AI revolution is that many people will be unemployable within a relatively short space of time – maybe in two or three decades. If this does happen, and if we are smart and perhaps a bit lucky, the outcome could be wonderful, and we should certainly try to make it so.

The book is intended to encourage political leaders, policy makers, and everyone else to bring a much more serious level of attention and investment to the possibility of technological unemployment. The idea is to make the prospect of technological unemployment seem more real and less academic to people who have not previously given the idea much thought. It should also stimulate readers to devise their own solutions, and to decide what actions they can take to help ensure that we get a good outcome and not one of the bad ones.

The book has an augmented reality cover. Download an app, point your phone at the book, and a gaggle of TV screens emerges and hovers in front of you. Select one and click it to watch a short video.

All proceeds from this book will go to a charitable foundation set up by the ESC to promote its objectives. We hope you find it enjoyable and stimulating. 

Finally, if you feel inspired to write your own story from 2045, you can submit it to the book’s website, https://storiesfrom2045.com. If it’s shorter than 1,000 words, and not illegal or hateful, we’ll publish it there. If we get enough great stories, we’ll publish a sequel book.


Road rage against the machines? Self-driving cars in 2018 and 2019


Self-driving cars – or Autos, as I hope we’ll call them – passed several important milestones in 2018, and they will pass several more in 2019. The big one came at the end of the year, on 5th December: Google’s Autos spin-out Waymo launched the world’s first commercial self-driving taxi service, open to citizens in Phoenix, Arizona, who are not employees of the company, and not bound by confidentiality agreements.

This service, branded Waymo One, was an extension of the company’s Early Rider programme, which was launched back in April. In that programme, selected members of the public who were willing to sign non-disclosure agreements (NDAs) got free rides in cars where sometimes no-one sat up front: no driver, no supervising engineer. There is much debate about how often the cars in both these programmes run with the front seats empty. Google and Waymo won’t say, but the answer seems to be sometimes, but not often. Some people argue this means that self-driving cars won’t be ready for prime time for years to come. Others see it as commendable caution.

Waymo is the clear front-runner in this business. In October it announced that its test cars had driven 10 million miles, and they have not been the unambiguous cause of a single accident. In simulations, they drive that many miles every single day.

General Motors, America’s biggest car maker by volume, is determined not to lag far behind, and has said for some time that it will launch a fleet of self-driving taxis during 2019. In October it announced a $2.75bn joint venture with Honda in Cruise, its self-driving car unit, which added to the earlier $2.25bn investment by Softbank to bring the valuation of Cruise to $14bn – almost half the parent company’s equity value.

The rest of America’s car industry is also in hot pursuit, especially its newest and most valuable participant, Tesla Motors, which is pursuing the contrarian strategy of offering more and more driver assistance rather than jumping straight to full automation.


Autos are still expensive, not least because production volumes of their LIDAR sensors are still low. So for some years to come, these vehicles will probably only be sold to commercial fleets, especially taxis and trucks. Unless, of course, Tesla’s Elon Musk is proved right, and Autos can operate solely with cameras, and don’t need LIDAR. So far he’s in a small minority, but his contrarian views have been vindicated before. Even if Musk is wrong, city dwellers in particular may well stop buying cars and start using Auto taxis. In which case, how long would the switch take? A famous pair of photographs of the same New York street, taken on the same day of the year in 1900 and 1913, shows that it took just 13 years to effect a complete swap in that city from horse-drawn carriages to automobiles. The switch took longer in rural areas of the US, and much longer again in less developed countries.


In short, anyone who thinks that self-driving vehicles will not be in widespread use by the mid-2020s is probably in for a shock.

The US is in the vanguard of the Autos revolution, but other countries are keen to catch up. Both the UK government and London’s leading private hire company (Addison Lee) have stated their intention to have Autos operating in London by 2021. Driving in London is a whole different proposition to driving in Phoenix, so this two-year delay does not denote a lack of ambition.

But as usual in AI, it is China which is most likely to catch the US if there is a race to deploy self-driving technology. Baidu, often described as China’s Google, is the leader so far, with more than 100 partners involved in its Apollo project, including car manufacturers like Ford and Hyundai, and technology providers. The Chinese government is keeping close tabs on these developments, not least in obliging foreign companies to source their maps from Chinese companies.


Are we ready for the arrival of Autos? Can our infrastructures cope? The belief that Autos require modifications to our road infrastructure is a misapprehension. Waymo’s cars don’t need smart lane dividers, special traffic light telematics, or dedicated local area networks. They drive on ordinary roads, just like you and me. No doubt Autos will lead to our cities and towns becoming smarter and more intelligible, but they don’t require it to get started.

What about resistance? Will there be road rage against the machines? The most tragic thing to happen in the self-driving car industry this year was also perhaps the most revealing. In April, an Uber Auto ran over and killed a woman walking a bicycle across a busy road. There is still disagreement about what caused the accident, and Uber stopped its self-driving test programme immediately. But the most interesting thing is that no other company followed suit – and there are over 40 companies trialling self-driving cars in the US alone. Despite this, and despite blanket press coverage, there was no popular protest against Autos. It seems that people have already “discounted” the arrival of Autos: it’s a done deal.

Even if the arrival of Autos is a done deal for society as a whole, there may well be pockets of resistance. On a low level, this will come from petrolheads who find themselves banned from more and more roads because they are much more dangerous drivers than machines. Eventually they will only be allowed to drive on designated racetracks, after signing detailed indemnifications. We should welcome this, not resist it: right now, we kill 1.2 million people around the world each year by running them over, and we maim another 50 million. We are sending humans to do a machine’s job, and there is a holocaust taking place on our roads. We should hurry to embrace Autos. And anyone tempted to vandalise Autos will quickly find that they are bristling with cameras: if people start spray-painting their LIDARs to disable them, they will find themselves on the wrong end of a criminal prosecution.


But there is another form of resistance which may not be so easy to assuage. In June, I gave a talk about AI to a room full of senior US police officers – just outside Phoenix, Arizona, appropriately enough. When I argued that a million Americans who currently earn a reasonable living driving trucks are going to be out of a job fairly soon because the economics of truck driving is going to flip, there was an audible gulp in the hall. They didn’t need me to point out that many of these people have guns.

One of the most significant impacts of Autos may well be to play the role of the canary in the coal mine: they could alert people to the likelihood that technological unemployment is coming – not now, and not in five years, but in a generation. If it is coming, we had better have a plan for how to cope. Otherwise there could be a panic which makes the current wave of populism look mild. At the moment we have no plan, and we’re not even thinking about developing a plan because so many influential people are saying that it cannot happen. They might be right to say that it will not happen. But to say that it cannot happen is dangerous complacency.

So what of 2019? Assuming success in Phoenix, Google is likely to roll out its pilot to other US cities – we could maybe see a dozen of them start during 2019. GM will be anxious not to be seen as lagging, and no doubt Tesla will make startling announcements followed by almost-as-startling achievements. I’ll be surprised if there aren’t some significant pilots in China by the end of 2019 as well. And who knows, maybe all this will spur Europe into getting more serious about AI in general. Here’s hoping.

This article was first published by Forbes magazine

Reviewing last year’s AI-related forecasts


As usual, I made some forecasts this time last year about how AI would change, and how it would change us. It’s time to look back and see how those forecasts for 2018 panned out. The result: a 50% success rate, by my reckoning. Better than the previous year, but lots of room for improvement. Here are the forecasts, with my verdicts in italics.

1. Non-tech companies will work hard to deploy AI – and to be seen to be doing so. One consequence will be the growth of “insights-as-a-service”, where external consultants are hired to apply machine learning to corporate data. Some of these consultants will be employees of Google, Microsoft and Amazon, looking to make their open source tools the default option (e.g. Google’s TensorFlow, Microsoft’s CNTK, Amazon’s MXNet).

Yes. The conversation among senior business people at the events I speak at has moved from “What is this AI thing?” to “Are we moving fast enough?”

2. The first big science breakthrough that could not have been made without AI will be announced. (I stole this from DeepMind’s Demis Hassabis. Well, I want to get at least one prediction right!)

Yes. In May, an AI system called Eve helped researchers at Manchester University discover that triclosan, an ingredient commonly found in toothpaste, could be a powerful anti-malarial drug. The research was published in the journal Scientific Reports.

3. There will be media reports of people being amazed to discover that a customer service assistant they have been exchanging messages with is a chatbot.

Yes: Google Duplex.

4. Voice recognition won’t be quite good enough for most of us to use it to dictate emails and reports – but it will become evident that the day is not far off.

Yes. Alexa is pretty good, but not yet a reliable stenographer. (Other brands of AI assistant are available.)

5. Some companies will appoint Chief Artificial Intelligence Officers (CAIOs).

Not sure. I don’t know of any, but I bet some exist.

6. Capsule networks will become a buzzword. These are a refinement of deep learning, and are being hailed as a breakthrough by Geoff Hinton, the man who created the AI Big Bang in 2012.

Not as far as I know.

7. Breakthroughs will be announced in systems that transfer learning from one domain to another, avoiding the issue of “catastrophic forgetting”, and also in “explainable AI” – systems which are not opaque black boxes whose decision-making cannot be reverse engineered. These will not be solved problems, but encouraging progress will be demonstrated.

I think I’ve seen reports of progress, but nothing that could fairly be described as a major breakthrough.

8. There will be a little less Reverse Luddite Fallacism, and a little more willingness to contemplate the possibility that we are heading inexorably to a post-jobs world – and that we have to figure out how to make that a very good thing. (I say this more in hope than in anticipation.)

No, dammit.

Book review: “21 Lessons for the 21st Century”, by Yuval Harari


The title of Yuval Harari’s latest best-seller is a misnomer: it asks many questions, but offers very few answers, and hardly any lessons. It is the least notable of his three major books, since most of its best ideas were introduced in the other two. But it is still worth reading. Harari delights in grandiloquent sweeping generalisations which irritate academics enormously, and part of the fun is precisely that you can so easily picture his colleagues seething with indignation that he is trampling on their turf. More important, some of his generalisations are acutely insightful.

The insight at the heart of “Sapiens”, his first book, was that humans dominate the planet not because we are logical, but because 70,000 or so years ago we developed the ability to agree to believe stories that we know are untrue. These stories are about religion, and political and economic organisation. The big insight in his second book, “Homo Deus” is that artificial intelligence and other technologies are about to transform our lives far more – and far more quickly – than almost anyone realises. Both these key ideas are reprised in “21 Lessons”, but they are big ideas which bear repeating.

Happily, he has toned down his idiosyncratic campaigns about religion and vegetarianism. In the previous books he encountered religion everywhere: capitalism and communism have passionate adherents, but they are not religions. The first third of “Homo Deus” is religious in a different way: it is a lengthy sermon about vegetarianism.


“21 Lessons” is divided into five parts, of which the first is the most coherent and the best. It concerns the coming technological changes, which Harari first explored in “Homo Deus”. “Most people in Birmingham, Istanbul, St Petersburg and Mumbai are only dimly aware, if at all, of the rise of artificial intelligence and its potential impact on their lives. It is undoubtable, however, that the technological revolutions will gather momentum in the next few decades, and will confront humankind with the hardest trials we have ever encountered.”

He is refreshingly blunt about the possibility of technological unemployment: “It is dangerous just to assume that enough new jobs will appear to compensate for any losses. The fact that this has happened during previous waves of automation is absolutely no guarantee that it will happen again under the very different conditions of the twenty-first century. The potential social and political disruptions are so alarming that even if the probability of systemic mass unemployment is low, we should take it very seriously.”

Very well said, but this part of the book would be much more powerful if he had offered a fully worked-through argument for this claim, which in the last couple of years has been sneeringly dismissed by a procession of tech giant CEOs, economists, and politicians. Perhaps next year, the World Economic Forum could organise a debate on this question between Harari and a leading sceptic, such as David Autor.

It is also a shame that he offers no prescriptions, beyond categorising them: “Potential solutions fall into three main categories: what to do in order to prevent jobs from being lost; what to do in order to create enough new jobs; and what to do if, despite our best efforts, job losses significantly outstrip job creation.” Fair enough, but this should be the start of the discussion, not the end. Still, at least he doesn’t fall back on the usual panacea of universal basic income, and his warning about what happens if we fail to develop a plan is clear: “as the masses lose their economic importance … the state might lose at least some of the incentive to invest in their health, education and welfare. It’s very dangerous to be redundant.”

Harari is also more clear-sighted than most about the risk of algocracy – the situation which arises when we delegate decisions to machines because they make better ones than we do. “Once we begin to count on AI to decide what to study, where to work, and who to marry, human life will cease to be a drama of decision-making. … Imagine Anna Karenina taking out her smartphone and asking the Facebook algorithm whether she should stay married to Karenin or elope with the dashing Count Vronsky.” Warning about technological unemployment, he coined the brutal phrase “the gods and the useless”. Warning about algocracy, he suggests that humans could become mere “data cows”.


The remaining four parts of the book contain much less that is original and striking. Harari is a liberal and an unapologetic globalist, pointing out reasonably enough that global problems like technological disruption require global solutions. He describes the EU as a “miracle machine”, which Brexit is throwing a spanner into. He does not see nationalism as a problem in itself, although he observes that for most of our history we have not had nations, and they are unnatural things and hard to build. In fact he thinks they can be very positive, but “the problem starts when benign patriotism morphs into chauvinistic ultra-nationalism.”

Although he sees nationalism as a possible problem, he also thinks it has already lost the game: “we are all members of a single rowdy global civilisation … People still have different religions and national identities. But when it comes to the practical stuff – how to build a state, an economy, a hospital, or a bomb – almost all of us belong to the same civilisation.” He supports this claim by pointing out that the Olympic Games, currently “organised by stable countries, each with boringly similar flags and national anthems,” could not have happened in mediaeval times, when there were no such things as nation states. And he argues that this is a very good thing: “For all the national pride people feel when their delegation wins a gold medal and their flag is raised, there is far greater reason to feel pride that humankind is capable of organising such an event.”


He is even more dismissive of religion – especially monotheism – despite his obsession with it. “From an ethical perspective, monotheism was arguably one of the worst ideas in human history … What monotheism undoubtedly did was to make many people far more intolerant than before … the late Roman Empire was as diverse as Ashoka’s India, but when Christianity took over, the emperors adopted a very different approach to religion.” Religion, he says, has no answers to any of life’s important questions, which is why there is no great following for a Christian version of agriculture, or a Muslim version of economics. “We don’t need to invoke God’s name in order to live a moral life. Secularism can provide us with all the values we need.”

He seems to be applying for membership of the “new atheists” club, in which Richard Dawkins and Steven Pinker deliberately goad the religious by diagnosing religion as a disease which can be cured. Harari suggests that “when a thousand people believe some made-up story for one month, that’s fake news. When a billion people believe it for a thousand years, that’s a religion.”


Oddly, given his perceptive take on the future of AI, Harari is weak on science fiction, displaying a fundamental misunderstanding of both The Matrix and Ex Machina. He is stronger on terrorism, pointing out that it is much less of a threat than it seems, contrary to the deliberate misrepresentations by populists: “Since 11 September 2001, every year terrorists have killed about fifty people in the European Union, about ten people in the USA, about seven people in China, and up to 25,000 people globally (mostly in Iraq, Afghanistan, Pakistan, Nigeria and Syria). In contrast, each year traffic accidents kill about 80,000 Europeans, 40,000 Americans, 270,000 Chinese, and 1.25 million people altogether.” Terrorists “challenge the state to prove it can protect all its citizens all the time, which of course it can’t.” They are trying to make the state over-react, and populists are their eager accomplices.

The book seems to be building to a climax when it addresses the meaning of life. Here and elsewhere, Harari has said that humans create meaning – or at least the basis of power – by telling ourselves stories. So is he going to give us a story which will help us navigate the challenges of the 21st century?

Sadly not. The closest we get is a half-baked version of Buddhism.

“The Buddha taught that the three basic realities of the universe are that everything is constantly changing, nothing has any enduring essence, and nothing is completely satisfying. Suffering emerges because people fail to appreciate this … The big question facing humans isn’t ‘what is the meaning of life?’ but rather, ‘how do we get out of suffering?’ … If you really know the truth about yourself and about the world, nothing can make you miserable. But that is of course much easier said than done.” Indeed.


Harari has worked out his own salvation: “Having accepted that life has no meaning, I find meaning in explaining this truth to others.” Given his six-figure speaking fees, this makes perfect sense.

Harari also finds solace in meditation, which he practises for two hours every day, and for a month or two every year. “21 Lessons” is a collection of essays written for newspapers and in response to questions. This shows in its disjointed, discursive, and inconclusive nature. If Harari had spent less time meditating, maybe he would have found more time to answer the questions he raises. It’s still definitely worth reading, though.

This article first appeared in Forbes Magazine