Superintelligence: a balanced approach

A couple of recent events made me think it would be good to post a brief but (hopefully) balanced summary of the discussion about superintelligence.

Can we create superintelligence, and if so, when?


Our brains are existence proof that ordinary matter organised the right way can generate general intelligence – an intelligence which can apply itself to any domain. They were created by evolution, which is slow, messy and inefficient. It is also un-directed, although non-random. We are now employing the powerful, fast and purposeful method of science to organise different types of ordinary matter to achieve the same result.

Today’s artificial intelligence (AI) systems are narrow AIs: they can excel in one domain (like arithmetic calculations, playing chess, etc) but they cannot solve problems in new domains. If and when we create an AI which has all the cognitive ability of an adult human, we will have created an artificial general intelligence (AGI).

Although the great majority of AI research is not specifically targeted at creating an AGI, some of it is. For instance, creating an AGI is an avowed aim of DeepMind, which is probably the most impressive team of AI researchers on the planet. Furthermore, many other AI researchers will contribute more or less inadvertently to the development of the first AGI.

We do not know for sure that we will develop AGI, but the arguments that it is impossible are not widely accepted. Much stronger are the arguments that the project will not succeed for centuries, or even millennia. There are plenty of experts on both sides of that debate. However, it is at least plausible that AGI will arrive within the lifetime of people alive today. (Let’s agree to leave aside the separate question of whether it will be conscious.)

We do not know for sure that the first AGI will become a superintelligence, or how long that process would take. There are good reasons to believe that it will happen, and that the time from AGI to superintelligence will be much shorter than the time from here to AGI. Again there is no shortage of proponents on both sides of that debate.

I am neither a neuroscientist nor a computer scientist, and I have no privileged knowledge. But having listened to the arguments and thought about it a great deal for several decades, my best guess is that the first AGI will arrive in the second half of this century, in the lifetime of people already born, and that it will become a superintelligence within weeks or months rather than years.

Will we like it?

This is the point where we descend into the rabbit hole. If and when the first superintelligence arrives on Earth, humanity’s future becomes either wondrous or dreadful. If the superintelligence is well-disposed towards us it may be able to solve all our physical, mental, social and political problems. (Perhaps they would be promptly replaced by new problems, but the situation should still be an enormous improvement on today.) It will advance our technology unimaginably, and who knows, it might even resolve some of the basic philosophical questions such as “what is truth?” and “what is meaning?”

Within a few years of the arrival of a “friendly” superintelligence, humans would probably change almost beyond recognition, either uploading their minds into computers and merging with the superintelligence, or enhancing their physical bodies in ways which would make Marvel superheroes jealous.

On the other hand, if the superintelligence is indifferent or hostile towards us, our prospects could be extremely bleak. Extinction would not be the worst possible outcome.

None of the arguments advanced by those who think the arrival of superintelligence will be inevitably good or inevitably bad are convincing. Other things being equal, the probability of negative outcomes is greater than the probability of positive outcomes: humans require very specific environmental conditions, like the presence of exactly the right atmospheric gases, light, gravity, radiation, etc. But that does not mean we would necessarily get a negative outcome: we might get lucky, or a bias towards positive outcomes on this particular issue might be hard-wired into the universe for some reason.

What it does mean is that we should at least review our options and consider taking some kind of action to influence the outcome.

No stopping


There are good reasons to believe that we cannot stop the progress of artificial intelligence towards AGI and then superintelligence: “relinquishment” will not work. We cannot discriminate in advance between research that we should stop and research that we should permit, and issuing a blanket ban on any research which might conceivably lead to AGI would cause immense harm – if it could be enforced.

And it almost certainly could not be enforced. The competitive advantage to any company, government or military organisation of owning a superior AI is too great. Bear in mind too that while the cost of computing power required by cutting-edge AI is huge now, it is shrinking every year. If Moore’s Law continues for as long as Intel thinks it will, today’s state-of-the-art AI will soon come within the reach of fairly modest laboratories. Even if there were an astonishing display of global collective self-restraint by all the world’s governments, armies and corporations, when the technology falls within the reach of affluent hobbyists (and a few years later onto the desktops of school children) surely all bets are off.
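The force of that cost curve is easy to see with a toy projection. This is only a sketch: the $10m starting figure and the two-year halving period are illustrative assumptions, not data from any source, and real hardware prices will not follow so tidy a curve.

```python
# Toy projection of the cost of a fixed amount of computing power,
# under the illustrative assumption that it halves every two years.
# Both the starting cost and the halving period are hypothetical.

def cost_after(years, initial_cost=10_000_000, halving_period=2):
    """Projected cost of a fixed compute budget after `years` years."""
    return initial_cost * 0.5 ** (years / halving_period)

for years in (0, 6, 12, 18):
    print(f"after {years:2d} years: ${cost_after(years):,.0f}")
```

On those assumptions, what costs $10m today costs about $1.25m after six years and under $20,000 after eighteen — from national-lab budgets to hobbyist budgets in a generation, which is the point being made above.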

There is a danger that, confronted with the existential threat, individual people and possibly whole cultures may refuse to confront the problem head-on, surrendering instead to despair, or taking refuge in ill-considered rapture. We are unlikely to see this happen on a large scale for some time yet, as the arrival of the first superintelligence is probably a few decades away. But it is something to watch out for, as these reactions are likely to engender highly irrational behaviour. Influential memes and ideologies may spread and take root which call for extreme action – or inaction.

At least one AI researcher has already received death threats.

Rather clever mammals


We are an ingenious species, although our range of comparisons is narrow: we know we are the smartest species on this planet, but we don’t know how smart we are in a wider galactic or universal setting because we haven’t met any of the other intelligent inhabitants yet – if there are any.

The Friendly AI problem is not the first difficult challenge humanity has faced. We have solved many problems which seemed intractable when first encountered, and many of the achievements of our technology that 21st century people take for granted would seem miraculous to people born a few centuries earlier.

We have already survived (so far) one previous existential threat. Ever since the nuclear arsenals of the US and the Soviet Union reached critical mass in the early 1960s we have been living with the possibility that all-out nuclear war might eliminate our species – along with most others.

Most people are aware that the world came close to annihilation during the Cuban missile crisis in 1962; fewer know that we have also come close to a similar fate another four times since then, in 1979, 1980, 1983 and 1995. In 1962 and 1983 we were saved by individual Soviet military officers who decided not to follow prescribed procedure. Today, while the world hangs on every utterance of Justin Bieber and the Kardashian family, relatively few of us even know the names of Vasili Arkhipov and Stanislav Petrov, two men who quite literally saved the world.

Perhaps this survival illustrates our ingenuity. There was an ingenious logic in the repellent but effective doctrine of mutually assured destruction (MAD). More likely we have simply been lucky.

We have time to rise to the challenge of superintelligence – probably a few decades. However, it would be unwise to rely on that period of grace: a sudden breakthrough in machine learning or cognitive neuroscience could telescope the timing dramatically, and it is worth bearing in mind the powerful effect of exponential growth in the computing resource which underpins AI research and a lot of research in other fields too.

It’s time to talk


What we need now is a serious, reasoned debate about superintelligence – a debate which avoids the twin perils of complacency and despair.

We do not know for certain that building an AGI is possible, or that it is possible within a few decades rather than within centuries or millennia. We also do not know for certain that AGI will lead to superintelligence, and we do not know how a superintelligence will be disposed towards us. There is a curious argument doing the rounds which claims that only people actively engaged in artificial intelligence research are entitled to have an opinion about these questions. Some go so far as to suggest that people like Elon Musk are not qualified to comment. This is nonsense: it is certainly worth listening carefully to what the technical experts think, but AI is too important a subject for the rest of us to shrug our shoulders and abdicate all involvement.

We have seen that there are good arguments to take seriously the idea that AGI is possible within the lifetimes of people alive today, and that it could represent an existential threat. It would be complacent folly to ignore this problem, or to think that we can simply switch the machine off if it looks like becoming a threat. It would also be Panglossian to believe that a superintelligence will necessarily be beneficial because its greater intelligence will make it more civilised.

Equally, we must avoid falling into despair, felled by the evident difficulty of the Friendly AI challenge. It is a hard problem, but it is one that we can and must solve. We will solve it by applying our best minds to it, backed up by adequate resources. The establishment of existential risk organisations like the Future of Humanity Institute in Oxford is an excellent development.

To assign adequate resources to the project and attract the best minds we will need a widespread understanding of its importance, and that will only come if many more people start talking and thinking about superintelligence. After all, if we take the FAI challenge seriously and it turns out that AGI is not possible for centuries, what would we have lost? The investment we need at the moment is not huge. You might think that we should be spending any such money on tackling global poverty or climate change instead. These are of course worthy causes, but their solutions require vastly larger sums, and they are not existential threats.

Surviving AI


If artificial intelligence begets superintelligence it will present humanity with an extraordinary challenge – and we must succeed. The prize for success is a wondrous future, and the penalty for failure (which could be the result of a single false step) may be catastrophe.

Optimism, like pessimism, is a bias, and to be avoided. But summoning the determination to rise to a challenge and succeed is a virtue.


Book review: “Enlightenment Now” by Steven Pinker

A valuable and important book

“Enlightenment Now” is the latest blockbuster from Steven Pinker, the author of “The Blank Slate” and “The Better Angels of Our Nature”. It has a surprising and disappointing blind spot in its treatment of AI risk, which is why it is reviewed here, but overall it is a valuable and important book: it launches a highly effective attack on populism, which is possibly the most important and certainly the most dangerous political movement today. The resistance to populism needs bolstering, and Pinker is here to help.

Populism

Populists claim to defend the common man against an elite – usually a metropolitan elite. They claim that the past was better than the present because the birthright of the masses has been stolen. The populists claim that they can right this wrong, and rescue the people from their fate. (The irony that most populists are members of the same metropolitan elite is strangely lost on their supporters. The hypocrisy of Boris Johnson, Jacob Rees-Mogg, Rupert Murdoch and the rest complaining about metropolitan elites is breath-taking.)

The claims of populists are mostly false, and they usually know it, so their advocacy is often as dishonest as it is strident, which undermines public debate. What is worse, their policies don’t work, and often cause great harm.

Past outbreaks of populism have had a range of outcomes. The term originated in the US, where a Populist Party was electorally successful in the late nineteenth and early twentieth centuries, but then fizzled out without leaving much of a trace. Other outbreaks have had far more lasting consequences: communist populists have murdered millions, and the Nazis plunged the whole world into fire and terror.

Today, populism has produced dangerously illiberal governments in Central Europe, and it is dragging Britain out of the EU with the nostalgic rallying cry of “take back control”. The hard left faction currently in charge of Britain’s Labour party wants to take the country back to the 1970s, and Bernie Sanders enchants his followers with visions of a better world which has been stolen by plutocrats.

The populist-in-chief

The most obvious and blatant populist today is, of course, President Trump. A pathological liar, and a brazen adulterer who brags about committing sexual assault, he is openly nepotistic, racist, and xenophobic. He is chaotic, thuggish, wilfully ignorant (although not stupid), and a self-deluding egotist with very thin skin and a finger on the nuclear button. He is likely to be proven a traitor before his term expires, and he is certainly an autocratically inclined threat to democracy.

Given all this, the opposition to Trump’s version of populism has been surprisingly muted. The day after President Trump’s inauguration, the Women’s March turned into one of the largest nationwide demonstrations in American history. But since then, Democratic Party leaders have struggled to make their voices heard above the brouhaha raised by Trump’s potty tweets and his wildly disingenuous press announcements, so they tried cutting deals with him instead of insisting that his behaviour was abnormal and unacceptable. The Republicans are holding their noses and drowning their scruples for the sake of a tax cut, at the risk of devastating their party if and when the Trump bubble bursts. The most potent resistance has come from comedians like Bill Maher, Stephen Colbert and Samantha Bee.

Liberalism needs to recover its voice. It needs to fight back against populism both intellectually and emotionally. Enlightenment Now is a powerful contribution at the intellectual level.

Progress

Part two of the book (chapters 4 to 20) accounts for two-thirds of the text. It is a comprehensive demolition of the core populist claim that the past was better than today, and that there has been no progress. It draws heavily (and avowedly) on the work of Max Roser, who runs the Our World In Data website, and is the protégé of the late Hans Rosling, whose lively and engaging TED talks are a must-watch for anyone wishing to understand what is really going on in our world.

Whatever metric you choose, human life has become substantially and progressively better in the last two hundred years. You can see it in life expectancy, diets, incomes, environmental measures, levels of violence, democracy, literacy, happiness, and even equality. I’m not going to go into a defence of any of these claims here: read the book!

Pinker makes clear that he does not think the world today is perfect – far from it. We have not achieved utopia, and probably never will. Similarly, he is not saying that progress is inevitable, or that setbacks have not occurred. But he believes there are powerful forces driving us in the direction of incremental improvement.

Criticisms

Enlightenment Now is already a best-seller, and the subject of numerous reviews. It has attracted its fair share of scorn, especially from academics. Some of that is for his support for muscular atheism, and some for his alleged over-simplification of the Enlightenment. This latter criticism might be a fair cop, but the book is not intended to be an academic historical analysis, so he may not be overly troubled by that.

Indeed, Pinker seems almost to invite academic criticism: “I believe that the media and intelligentsia were complicit in populists’ depiction of modern Western nations as so unjust and dysfunctional that nothing short of a radical lurch could improve them.” He is an equal-opportunity offender, as scathing about left-inclined populist sympathisers as those on the right: “The left, too, has missed the boat in its contempt for the market and its romance with Marxism. Industrial capitalism launched the Great Escape from universal poverty in the 19th century and is rescuing the rest of humankind in a Great Convergence in the 21st.”

A lot of people are irritated by what they see as Pinker’s glib over-optimism, and here he seems more vulnerable: he derides warnings of apocalyptic dangers as a “lazy way of achieving moral gravitas”, and while he has a point, it sometimes leads him into complacency. “Since nuclear weapons needn’t have been invented, and they are useless in winning wars or keeping the peace, that means they can be un-invented – not in the sense that the knowledge of how to make them will vanish, but in the sense that they can be dismantled and no new ones built.”

Pinker’s blind spot regarding AI

And so to the reason for reviewing Enlightenment Now on this blog. Pinker’s desire to downplay the negative forces acting on our world leads him to be scathing about the idea that artificial intelligence poses any significant risks to humanity. But his arguments are poor, and while he reels off some AI risk jargon fluently enough, and name-checks some of the major players, it is clear that he does not fully understand what he is talking about. Comments like “Artificial General Intelligence (AGI) with God-like omniscience and omnipotence” suggest that he does not know the difference between AGI and superintelligence, which led Elon Musk to tweet wryly that if even Pinker did not understand AI, then humanity really is in trouble.

Pinker claims that “among the smart people who aren’t losing sleep are most experts in artificial intelligence and most experts in human intelligence”. This is grossly misleading: while many AI researchers don’t see superintelligence as a near-term risk, very few deny that it is a serious possibility within a century or two, and one which we should prepare for. Pinker appears to have been overly influenced by the outliers who do deny it, as he cites some of them, including Rodney Brooks. But, presumably in error rather than mischief, he also lists Professor Stuart Russell as one of the eminent AI researchers who discount the existential risk from superintelligence, whereas Russell was actually one of the first to raise the alarm.

Pinker makes the bizarre claim that “Driving a car is an easier engineering problem than unloading a dishwasher” and goes on to observe that “As far as I know, there are no projects to build an AGI”. In fact there are several, including Doug Lenat’s long-running Cyc initiative, Ben Goertzel’s OpenCog Foundation, and most notably, DeepMind’s splendid ambition to “solve intelligence, and use that to solve everything else.”

If you want to dive further into these arguments, the standard recommendation is of course Nick Bostrom’s seminal “Superintelligence”, but I’m told that “Surviving AI”, by a certain Calum Chace, explores the issues pretty well too.

Resistance Now

Happily, this blind spot, though regrettable, does not spoil “Enlightenment Now”’s important and valuable contribution to the resistance to the tide of populism. Highly recommended.

The productivity paradox

In a July 2015 interview with Edge, an online magazine, Pulitzer Prize-winning veteran New York Times journalist John Markoff articulated a widespread idea when he lamented the deceleration of technological progress. In fact he claimed that it has come to a halt. He reported that Moore’s Law stopped reducing the price of computer components in 2013, and pointed to the disappointing performance of the robots entered into the DARPA Robotics Challenge in June 2015 (which we reviewed in chapter 2.3).

He claimed that there has been no profound technological innovation since the invention of the smartphone in 2007, and complained that basic science research has essentially died, with no modern equivalent of Xerox’s Palo Alto Research Center (PARC), which was responsible for many of the fundamental features of computers which we take for granted today, like graphical user interfaces (GUIs) and indeed the PC.

Markoff grew up in Silicon Valley and began writing about the internet in the 1970s. He fears that the spirit of innovation and enterprise has gone out of the place, and bemoans the absence of technologists or entrepreneurs today with the stature of past greats like Doug Engelbart (inventor of the computer mouse and much more), Bill Gates and Steve Jobs. He argues that today’s entrepreneurs are mere copycats, trying to peddle the next “Uber for X”.

He admits that the pace of technological development might pick up again, perhaps thanks to research into meta-materials, whose structure absorbs, bends or enhances electromagnetic waves in exotic ways. He is dismissive of artificial intelligence because it has not yet produced a conscious mind, but he thinks that augmented reality might turn out to be a new platform for innovation, just as the smartphone did a decade ago. But in conclusion he believes that “2045… is going to look more like it looks today than you think.”

It is tempting to think that Markoff was to some extent playing to the gallery, wallowing self-indulgently in sexagenarian nostalgia about the passing of old glories. His critique blithely ignores the arrival of deep learning, social media and much else, and dismisses the basic research that goes on at the tech giants and at universities around the world.


Nevertheless, Markoff does articulate a fairly widespread point of view. Many people believe that the industrial revolution had a far greater impact on everyday life than anything produced by the information revolution. Before the arrival of railroads and then cars, most people never travelled outside their town or village, much less to a foreign country. Before the arrival of electricity and central heating, human activity was governed by the sun: even if you were privileged enough to be able to read, it was expensive and tedious to do so by candlelight, and everything slowed down during the cold of the winter months.

But it is facile to ignore the revolutions brought about by the information age. Television and the internet have shown us how people live all around the world, and thanks to Google and Wikipedia, etc., we now have something close to omniscience. We have machines which rival us in their ability to read, recognise images, and process natural language. And the thing to remember is that the information revolution is very young. What is coming will make the industrial revolution, profound as it was, seem pale by comparison.

Part of the difficulty here is that there is a serious problem with economists’ measurement of productivity. The Nobel laureate economist Robert Solow famously remarked in 1987 that “you can see the computer age everywhere but in the productivity statistics.” Economists complain that productivity has stagnated in recent decades. Another eminent economist, Robert Gordon, argues in his 2016 book “The Rise and Fall of American Growth” that productivity growth was high between 1920 and 1970 and nothing much has happened since then.


Anyone who was alive in the 1970s knows this is nonsense. Back then, cars broke down all the time and were also unsafe. Television was still often black and white, it was broadcast on a tiny number of channels, and it was shut down completely for many hours a day. Foreign travel was rare and very expensive. And we didn’t have the omniscience of the internet. Much of the dramatic improvement on this pretty appalling state of affairs is simply not captured in the productivity or GDP statistics.

Measuring these things has always been a problem. A divorce lawyer deliberately aggravating the animosity between her clients because it will boost her fees is contributing to GDP because she gets paid, but she is only detracting from the sum of human happiness. The Encyclopedia Britannica contributed to GDP but Wikipedia does not. The computer you use today probably costs around the same as the one you used a decade ago, and thus contributes the same to GDP, even though today’s version is a marvel compared to the older one. It seems that the improvement in human life is becoming increasingly divorced from the things that economists can measure. It may well be that automation will deepen and accelerate this phenomenon.

The particulars of the future are always unknown, and all predictions are perilous. But the idea that the world will be largely unchanged three decades hence is little short of preposterous.


Reflections on Dubai’s World Government Summit


I lived in Dubai for three years in the early 1980s. The United Arab Emirates (UAE) was a very young country then, but its ambition was clear. The tallest building was the Dubai Trade Centre, at 30 stories. In fact, it was the tallest building in the whole of the Middle East at the time, and many people thought it was a folly: why build a skyscraper in the desert? It was a fair question: the road leading south from the Trade Centre towards Abu Dhabi was flanked on both sides by desert.

Now the Trade Centre is a smallish shrub in a forest of skyscrapers, most of them much taller, that has sprung up to accompany the traveller all the way to Jumeirah, 20 kilometres to the south.

Dubai is still run by the man who was effectively in charge when I worked there, and despite the emirate’s immense achievements, the ambition has not dimmed. The World Government Summit held there this week was a bold statement to the world: Dubai is looking to the future; it is visionary and optimistic.

In particular, Dubai is looking forward to the opportunities presented by our most powerful technology: artificial intelligence. It has appointed the world’s first Minister for AI, and in the Summit’s opening address, the Minister for Cabinet Affairs spoke knowledgeably about many of the opportunities and challenges that AI presents.

No country is perfect, and Dubai is not without its problems, both of reality and of image. This isn’t an article about those, but one thing that struck me was that the ratio of men to women was far more equal among the local delegates than among the foreign contingent.

Dubai’s optimism is sailing into headwinds. The wealthy Russians who used to spend freely in the emirate’s luxury malls have had their wings clipped by Western sanctions against Putin’s aggressive kleptocracy. The wealthy Saudis are similarly chastened by the revolution taking place on the other side of the Liwa desert, and the rift with Qatar does not help. Local businesses are chafing under the new restrictions and fees that all this is causing.

No-one knows whether these will prove temporary setbacks, or whether they will cause lasting damage. But for now at least, Dubai is looking confidently towards the future.

Landing back in the UK, the comparison is stark.

The metropolitan elite of Murdoch, Dacre, Farage, Banks, Johnson, Rees-Mogg, Redwood, Hannan, Jenkin, and the rest have a vision of the UK as a buccaneering, free-trading nation, cutting deals with the rising powers of China and India as well as the existing superpower. Their claim that these are among the fastest-growing economies in the world is true, but it is far from obvious that Britain will be able to cut favourable deals with these thrusting new economies if it weakens itself by flouncing out of the powerful trading bloc in which it enjoyed such a privileged position, and churlishly insulting its former colleagues in the process. (Thanks for that, Mr Farage.)

In any case, the metropolitan Brexiteer elite didn’t win the 2016 referendum with this vision. They knew that a population which feels wounded by globalism would not vote for the laissez-faire, low-tax, small-state Britain which would enrich the elite. Instead they appealed to our most backward-looking, resentful instincts, to our fear of the immigrant, our fear of the foreigner; they promised we could “take back control”, and head back in time to a halcyon bygone era.


Swallowed whole by Brexit, the UK government is largely ignoring the AI-fuelled future which is bearing down on us. When they do mention it, they talk blithely about the UK being a leader in the field, not least because we are home to DeepMind (ignoring its US ownership). Most of our politicians* seem genuinely unaware that AI is currently a two-horse race between the US and China, with everyone else jostling for a distant third place.

In particular, our political leaders are mostly either ignorant of the possibility of technological unemployment, or dismissive of it, eagerly lapping up the bromides of tech giant CEOs, who mumble soothingly that automation will not cause unemployment in the future because it didn’t in the past.

Britain sparked the industrial revolution, but its leadership appears largely clueless about the information revolution. Instead it is tiny Dubai, part of a country founded as recently as 1971, which faces forward to the information revolution, and seems eager to grasp the opportunities afforded by AI, and tackle its challenges.

* The All-Party Group on AI strives valiantly to cast light into the darkness, but attendance by parliamentarians is thin. [Disclosure: I’m one of its many advisers.]

Don’t panic!


Franklin D Roosevelt was inaugurated as US President in March 1933, in the depth of the Great Depression. His famous comment that “The only thing we have to fear is fear itself” was reassuring to his troubled countrymen, and has resonated down the years. If and when it turns out that machines will make it impossible for many people to earn a living, fear will not be our only problem. But it may well turn out to be our first very serious problem.

Fully autonomous, self-driving vehicles will start to be sold during the coming decade – perhaps within five years. Because of the substantial cost saving to the operators of commercial fleets, the humans driving taxis, lorries, vans and buses will be laid off quickly during the decade which follows. Within fifteen or twenty years from now, it is likely there will be very few professional drivers left.

Well before this process is complete, though, people will understand that it is happening, and that it is inevitable. Most of us will have a friend, acquaintance or family member who used to be a professional driver. And the technology that destroyed their job will be very evident. One of the interesting and important things about self-driving cars is that they are not invisible, like Google Translate or Facebook’s facial recognition systems. They are tangible, physical things which cannot be ignored.

Most people are not thinking about the possibility of technological unemployment today. They see reports about it in the media, and they hear some people saying it is coming and others saying it cannot happen. They shrug – perhaps shudder – and get on with their lives. This response will no longer be possible when robots are driving around freely, and human drivers are losing their jobs. This cannot fail to strike people as remarkable. Learning to drive is a difficult process, a rite of passage which humans are only allowed to undertake on public roads when they are virtually adult. The fact that robots can suddenly do it better than humans is not something you can ignore.

No doubt some will try to dismiss the phenomenon by explaining that driving wasn’t evidence of intelligence after all: like chess, it is mere computation. Tesler’s Theorem – the tongue-in-cheek definition of AI as whatever machines haven’t done yet – will persist. But most people will not be fooled. Self-driving vehicles will probably be the canary in the coal mine, making it impossible to ignore the impact of cognitive automation. People will realise that machines have indeed become highly competent, and that their own jobs may also be vulnerable.

If we have a Franklin D. Roosevelt in charge at the time – perhaps one in every country – this may not be a problem. If there is a plausible plan for how to navigate the economic singularity, and a safe pair of hands to implement the plan, then we may be OK.

Unfortunately we do not currently have a plan. There is no consensus about what kind of economy could cope with a majority of the population being permanently unemployed, nor how to get from here to there. Neither are all the top jobs in safe pairs of hands.

In the absence of a solid plan explained by a reassuringly competent leadership, the reaction of large numbers of people realising that their livelihoods are in jeopardy is not hard to predict: there will be panic.

When will this panic occur? Probably within a few years of self-driving vehicles starting to displace human drivers – in other words, in a decade or so.

The election of Trump and the Brexit referendum result were political earthquakes. Politics has not been so “interesting” since at least the fall of the Berlin Wall and the end of the Cold War at the end of the 1980s. But compared with what could happen if a majority of the population believes it is about to become unemployable, those events were relatively minor. The possible impacts of a panic about impending widespread joblessness would be enormous, and they are worth expending considerable effort to avoid.

Don't panic with Marvin

Reviewing last year’s AI-related forecasts

This time last year I made some forecasts about how AI would change, and how it would change us. It’s time to look back and see how those forecasts for 2017 panned out.

A bit rubbish, to be honest – five out of 12 by my reckoning. Must do better.

  1. Machines will equal or surpass human performance in more cognitive and motor skills. For instance, speech recognition in noisy environments, and aspects of NLP – Natural Language Processing. Google subsidiary DeepMind will be involved in several of the breakthroughs.

A machine called Libratus beat some of the best human players of poker, but speech recognition in noisy environments is not yet at human standards. DeepMind made several breakthroughs. I’ll award myself a half-point.

  2. Unsupervised neural networks will be the source of some of the most impressive results.

Yup.

  3. In silico models of the brains of some very small animals will be demonstrated. Some prominent AI researchers will predict the arrival of strong AI – Artificial General Intelligence, or AGI – in just a few decades.

Not as far as I’m aware. No points.

  4. Speech will become an increasingly common way for humans to interact with computers. Amazon’s early lead with Alexa will be fiercely challenged by Google, Microsoft, Facebook and Apple.

Yup: Alexa is very popular, and the competition is indeed heating up.

  5. Some impressive case studies of AI systems saving significant costs and raising revenues will cause CEOs to “get” AI, and start demanding that their businesses use it. Companies will start to appoint CAIOs – Chief AI Officers.

There have been fewer case studies than I expected, but they do exist, and it is a rare CEO who is not building capability in AI. CAIOs are not yet common, but Dubai has a minister for AI, and the UK’s All-Party Parliamentary Group on AI has called for the UK to have one. (Disclosure: I’m an adviser to the APPG AI, and I support that call.) Half a point.

  6. Self-driving vehicles (Autos) will continue to demonstrate that they are ready for prime time. They will operate successfully in a wide range of weather conditions. Countries will start to jockey for the privilege of being the first jurisdiction to permit fully autonomous vehicles throughout their territory. There will be some accidents, and controversy over their causes.

Investment in Autos is galloping ahead, and they are clocking up huge numbers of safe miles and generating vast amounts of data. States and countries are competing to declare themselves Auto-friendly.

  7. Some multi-national organisations will replace their translators with AIs.

Not as far as I’m aware. No points.

  8. Some economists will cling to the Reverse Luddite Fallacy, continuing to deny that cognitive automation will displace humans from employment. Others will demand that governments implement drastic changes in the education system so that people can be re-trained when they lose their jobs. But more and more people will come to accept that many if not most people are going to be unemployed and unemployable within a generation or so.

The Reverse Luddite Fallacy is proving tenacious. If anything, there seems to be a backlash against acceptance that widespread mass unemployment is a possibility that must be addressed. No points.

  9. As a result, the debate about Universal Basic Income – UBI – will become more realistic, as people realise that subsistence incomes will not suffice. Think tanks will be established to study the problem and suggest solutions.

Nope. No points.

  10. Machine learning will greatly reduce the incidence of fake news.

Sadly not yet. No points.

  11. There will be further security scares about the Internet of Things, and some proposed consumer applications will be scaled back. But careful attention to security issues will enable successful IoT implementations in high-value infrastructural contexts like railways and large chemical processing plants. The term “fourth industrial revolution” will continue to be applied – unhelpfully – to the IoT.

There was less news about the IoT this year than I expected. It was all blockchain instead, thanks to the Bitcoin bubble. But there was plenty of fourth industrial revolution nonsense.

  12. 2016 was supposed to be the year when VR finally came of age. It wasn’t, partly because the killer app is games, and hardcore gamers like to spend hours on a session, and the best VR gear is too heavy for that. Going out on a limb, that problem won’t be solved in 2017.

Yup.

Putting your money where your mouth is

Long Bet image

Robert Atkinson and I have made the 749th Long Bet shown above (and online here). Robert is president and founder of the Information Technology and Innovation Foundation, a Washington-based think tank.

Robert’s claim

With the rise of AI and robotics many now claim that these technologies will improve exponentially and in so doing destroy tens of millions of jobs, leading to mass unemployment and the need for Universal Basic Income. I argue that these technologies are no different than past technology waves and to the extent they boost productivity that will create offsetting spending and investment, leading to offsetting job creation, with no appreciable increase in joblessness.

My response

AI and robotics are different to past technology waves. Past rounds of automation have mostly been mechanisation; now we will see cognitive automation. Machines can already drive cars better than humans, and their story is just beginning: they will increasingly do many of the tasks we do in our jobs cheaper, better and faster than we can. Unlike us, they are improving at an exponential rate – doubling in capability roughly every eighteen months – so that in ten years they will be around 128 times more powerful, in 20 years around 8,000 times, and in 30 years (if the exponential growth holds that long) around a million times. We are unlikely to see the full impact of technological unemployment by 2035, but it should be appreciable. Our job now, of course, is to make sure that an economy which is post-jobs for many or most people is a great economy, and that everyone thrives. The way to do that may well be the Star Trek economy.
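Those growth figures are just compounded doublings. A minimal sketch of the arithmetic, assuming capability doubles every eighteen months (the cadence the numbers in the bet loosely imply):

```python
def growth_factor(years, doubling_period=1.5):
    """Multiplicative capability gain after `years`, given a doubling period in years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 30):
    # Prints roughly 102x, 10,321x and 1,048,576x respectively
    print(f"After {years} years: ~{growth_factor(years):,.0f}x")
```

Rounding to whole numbers of doublings gives the round figures quoted in the bet: 2\*\*7 = 128, 2\*\*13 = 8,192 and 2\*\*20 is just over a million.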

I would like to be able to credit the person who created the excellent image below. If you know who it is (or if it is you!) please do let me know.

Horses and tech unemp