Review of “The Age of AI”

A YouTube series, presented by Robert Downey Jr

Iron Man
Robert Downey Jr is best known as Tony Stark, the man inside the Iron Man suit in the Avengers movies. It is said that Downey Jr modelled his portrayal of Stark on Elon Musk, the driving force behind Tesla and SpaceX, and one of the most outspoken commentators on artificial intelligence. Musk famously said that by developing advanced AI we are “summoning the demon”, and that we must work hard and fast to ensure it remains safe. In fact he thinks we must develop the technology to link our minds intimately with AI systems, so that instead of being replaced by them we can be enhanced by them.

So it is apt that Downey Jr is introducing “The Age of AI”, YouTube’s expensive new eight-part series on AI. The first two episodes are available now, and the remaining six will be released over the coming weeks – unless you are impatient, and sign up for the premium service. Inevitably, the series has high production values: Robert Downey Jr is not going to lend his name to content below Hollywood standards. Indeed, he introduces each episode from a hangar where the original Iron Man movies were shot, a dozen years ago. The camera moves around a lot, and each shot is short, with lots of close-ups of faces, hands, musical instruments – lots of eye candy for viewers with short attention spans.

Baby X
How do you find a way into a subject as large, complex, and important as artificial intelligence? The storytellers behind “The Age of AI” chose to start by focusing on how far AI can enhance us, and whether it could end up replicating, and even replacing, us. The first episode introduces us to Baby X, a lifelike avatar of a baby girl developed by digital effects artist Mark Sagar, who helped create King Kong for Peter Jackson, and the Na’vi characters in Avatar for James Cameron. Graphics by Hollywood, behavioural traits courtesy of machine learning. The experts go on to develop an avatar for will.i.am, founder of the Black Eyed Peas, who is impressed by the creation, and then suggests that it should remain a little robotic, so as not to confuse his mother.

The second story in episode one shows us prosthetic hands for two musicians – a drummer and a guitarist. Existing prosthetic hands are rather blunt instruments, and often quickly abandoned by their intended users. Adding analysis by machine learning of the nerve signals the brain can still send down a phantom limb seems to enable a much more lifelike prosthesis. The message of the episode is that machine learning and AI can make us more human, not less, but we will have to think carefully about where we want to draw the line.

A geek might ask for more detailed explanations of how AI works. Terms are explained as the series unfolds, but very briefly. Machine learning, for instance, is a technique to find patterns in data. And, er… that’s it. But viewers unfamiliar with AI will learn a lot. The second episode addresses how AI is advancing medical science, and also disseminating it – making it more widely available in the developing world, for instance. It rams home the point that the availability of masses of data is what enables machines to diagnose illnesses faster and more cheaply than human doctors can. In India, which has a chronic shortage of doctors for its enormous population, machines can quickly and accurately diagnose retinal damage caused by diabetes, and push patients through to surgery in time to prevent blindness. There was no discussion in this episode of the controversy surrounding the sharing of patients’ intimate data which is necessary to enable this – perhaps that will come in a later episode.

Sometimes the show feels like an infomercial, either for AI as a whole, or simply for Google, which provided many of the filmed examples. This must have been much easier to arrange, given that YouTube is owned by Google, but it is surprising they didn’t wander down the road to speak to Facebook or Apple, for instance, or hop on a plane to see Amazon, IBM, or even Baidu or Tencent. The programme follows teams from Google as they help ex-NFL star Tim Shaw regain his natural voice after losing muscle control to ALS, the tragic disease also known as Lou Gehrig’s disease. The achievement is impressive, and the emotion provoked in his family is profound and moving. But the failure to mention any of the other tech giants, or the controversy swirling around the industry, will leave some viewers feeling manipulated.

AI is our most powerful technology, and in the next few decades it will change everything about the nature of being human. Understanding what it is, how it works, and something about its promise and its peril will increasingly be basic literacy for citizens. This is a well-made, well-informed show that will get many more people up to speed, and that is greatly to be welcomed.

This article first appeared in Forbes

Review of “More from Less”, by Andrew McAfee


The New Optimists

Andrew McAfee wants to cheer you up. If you read his latest book with an open mind, he might well succeed. McAfee, an MIT economist, is joining the New Optimists (Bill Gates, Steven Pinker, Hans Rosling and others) in trying to persuade us that the world is not going to the dogs. The central claim of “More From Less” is that capitalism and technological progress are allowing us “to tread more lightly on the earth instead of stripping it bare.” Unfortunately, he admits, this good news is hard for many people to believe because catastrophism has such a strong hold on our imaginations.


For hundreds of years before 1700, England’s population oscillated between two and six million. When peace coincided with good harvests, the number would rise, only to slump again when our inability to feed the growing population brought famine. Thomas Malthus made the reasonable assumption that this pattern would continue, and issued a dire warning about the consequences of Britain’s fast-growing population in the early industrial revolution. He was wrong. Capitalism and technology changed the game entirely, enabling us to feed far larger populations than ever before. Malthus’ name became a byword for dramatically inaccurate predictions.

Paul Ehrlich is Malthus’ intellectual heir. Since the 1960s he has been forecasting doom and disaster from the exhaustion of all the natural resources we depend upon. The first New Optimist, Julian Simon, offered Ehrlich a bet: choose any resource and any time-frame above a year. If the price of the resource rose, Simon would pay Ehrlich; if it fell, the reverse. Ehrlich chose five metals – copper, chromium, nickel, tin, and tungsten – and the prices of all five fell. Ehrlich is surprisingly unrepentant: after all these years of abysmal forecasting failure, he is still telling students at Stanford that disaster is just around the corner.

Ehrlich is not alone. Any number of environmentalists and lobby groups will tell you that we are polluting, deforesting, and generally destroying the planet, exhausting its natural resources, and driving most other species extinct. All this is making us sick, and crucially, the damage is accelerating.

Using fewer natural resources


Implausible as it will seem to many, the data shows the opposite. As we get richer, we are using resources more efficiently, using less energy, causing less pollution and cleaning up the pollution of the past. We are even re-foresting the earth and protecting other species. McAfee produces compelling data and numerous examples, but sadly, many people will refuse to believe him: good news is no news, and if it bleeds, it leads. We all love a good horror story.

The evidence about resource consumption in America comes from the US Geological Survey, a federal agency formed in 1879. It tracks seventy-two resources, from aluminium to zinc, and only six of them are not yet post-peak. Even energy usage is decreasing, down two percent in 2017 from its 2008 peak, despite a 15 percent growth in GDP between those two years.

America is getting more and more efficient. Milk and aluminium are two of McAfee’s examples. Between 1950 and 2015, US milk production rose from 117 billion pounds to 209 billion, while the herd shrank from 22 million cows to 9 million. This is a productivity improvement of 330 percent. When aluminium cans were introduced in 1959 they weighed 85 grams. This fell to 21 grams by 1972, and by 2011 it was down to 13 grams.
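McAfee’s milk numbers are easy to sanity-check. A back-of-the-envelope sketch in Python (the figures are from the book; the calculation, and the variable names, are my own):

```python
# Sanity check of McAfee's US milk figures: output per cow, 1950 vs 2015.
milk_1950, cows_1950 = 117e9, 22e6   # pounds of milk produced, size of the herd
milk_2015, cows_2015 = 209e9, 9e6

per_cow_1950 = milk_1950 / cows_1950   # roughly 5,300 lb per cow
per_cow_2015 = milk_2015 / cows_2015   # roughly 23,200 lb per cow

improvement = per_cow_2015 / per_cow_1950 - 1
print(f"Output per cow rose by about {improvement:.0%}")
```

The result is an increase of roughly 337 percent, close to the 330 percent McAfee cites.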


The information revolution has powered much of this improvement, as illustrated by the story of railcars. In the late 1960s, US railway companies owned thousands of these 30-ton beasts, and only about five percent of them moved on any given day. This was not because the other 95 percent needed to rest: it was because their owners didn’t know where they were. They knew that if they could increase the percentage of cars moving each day from 5 percent to 10 percent, they would need only half as many of them. Today, of course, every railcar reports its precise location to its owner several times a second – thanks to the information revolution.
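The railcar arithmetic is worth spelling out: for a fixed number of cars that must be moving on any given day, the fleet you need scales inversely with utilisation. A small illustrative sketch (the daily requirement is a hypothetical number, not from the book):

```python
# Fleet size needed for a fixed daily movement requirement, at two utilisation rates.
cars_moving_needed = 1_000   # hypothetical: cars that must be in motion each day

for utilisation in (0.05, 0.10):
    fleet = cars_moving_needed / utilisation
    print(f"{utilisation:.0%} utilisation -> fleet of {fleet:,.0f} cars")
```

Doubling utilisation from 5 percent to 10 percent halves the required fleet, exactly as the railway companies calculated.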

It’s not just the US. In the UK, the Office for National Statistics publishes the annual Material Flow Accounts, and a 2011 paper entitled ‘Peak Stuff’ concluded that the UK reached maximum use of material resources in the early 2000s. Data from the EU’s statistical agency Eurostat show that Germany, France, and Italy have generally seen flat or declining total consumption of metals, chemicals, and fertilizer in recent years.

And no, before you ask, this reduction in natural resource usage is not just the result of our economies switching from goods to services. While goods have been declining compared to services as a percentage of total GDP, the output and consumption of products has carried on increasing in absolute terms. We are experiencing a great decoupling: we are de-materialising industrial production. (This is actually quite an old idea: it was called ephemeralisation by Buckminster Fuller back in 1927. You may remember him as the inventor of geodesic domes, which are very efficient structures.)

Reducing harms


As well as using fewer natural resources, the developed world is generating less pollution. In the US, the Clean Air Act was substantially amended and strengthened in 1970, 1977, and 1990. The Clean Water Act was passed in 1972, the Safe Drinking Water Act in 1974, and the Toxic Substances Control Act in 1976. Other developed countries have their equivalents.

The results are impressive. McAfee quotes another member of the New Optimists, Matt Ridley: “A car today emits less pollution travelling at full speed than a parked car did from leaks in 1970.”


McAfee also denies that we are driving thousands of species extinct: “documented extinctions are relatively rare (with about 530 recorded within the past five hundred years) and appear to have slowed down in recent decades”. That is not to say that our impact on other species is altogether benign: “the biggest threat to animal species isn’t absolute extinction, but instead huge declines in population size due to over-hunting and habitat loss.” But even here the trend is encouraging. “Parks and other protected areas made up only 4 percent of global land area in 1985, but by 2015, this figure had almost quadrupled, to 15.4 percent. At the end of 2017, 5.3 percent of the earth’s oceans were similarly protected.”

It turns out we are using less land for farming, and land that we no longer farm reverts to forest. “Throughout the developed world this process is now dominating any and all tree felling that is taking place, and overall reforestation has become the norm.” This is not the case in the developing world, but “even with continued deforestation in developing countries and other challenges, a critical milestone has been reached: across the planet as a whole we have, as an international research team concluded in 2015, experienced a ‘recent reversal in loss of global terrestrial biomass.’ For the first time since the start of the Industrial Era, our planet is getting greener, not browner.”

As the world continues to grow richer, McAfee argues, we can expect this good news to spread. “In 1999, 1.76 billion people were living in extreme poverty. Just sixteen years later, this number had declined by 60 percent, to 705 million. Hundreds of millions fewer people are living in poverty now than in 1820, when the world’s total population was seven times smaller than it is today.” Happily, “the story of global poverty reduction isn’t a purely Chinese one. … Every region around the world has seen large poverty reductions in recent years.”

The Four Horsemen of the Optimist


If you can suspend your disbelief for a bit longer, you will be wondering what is causing these happy developments. McAfee identifies four drivers, which he calls the four horsemen of the optimist: technology, capitalism, public awareness, and responsive government.

Technology gives us new ways to solve old problems, and capitalism provides the incentive for people to invent these new ways and to implement them once they have been invented. As Abraham Lincoln put it, we add “the fuel of interest [capitalism] to the fire of genius [technology] in the discovery and production of new and useful things.”


Sadly, capitalism is a hard sell in many quarters these days, so McAfee also provides a poignant example of how its great rival, socialism, often yields disastrous outcomes. The USSR was a party to the 1946 international convention regulating whaling, but between 1948 and 1973 it killed 180,000 more whales than it reported. Unlike the Japanese, the Russians have no great appetite for whale meat, and most of the animals’ bodies were thrown back into the sea. And why? Because the five-year plan demanded seafood tonnage, and the Soviet system had no mechanism to incentivise the production (or in this case, hunting) of things that people actually wanted.

Technology and capitalism are not enough, of course. Some humans, capitalist or otherwise, will pillage and poison unless they are prevented from doing so. Public awareness and responsive government are needed to address the fact that markets often ignore what economists call negative externalities, and often fail to support people who are unlucky and / or unsuccessful.

Nevertheless, McAfee insists that the spread of capitalism has improved the lot of humanity beyond recognition. Its partial adoption by India in “1991… deserves its spot in the annals of economic history alongside December 1978, when China’s Communist Party approved the opening up of its economy, or even May 1846, when Britain voted to repeal the Corn Laws.” “Between 1978 and 1991, more than 2.1 billion people—about 40 percent of the world’s 1990 population—began living within substantially more capitalist economic systems.”

McAfee is confident that in the long run, the four horsemen will continue to ride. “Smartphone use and access to the Internet are increasing quickly across the planet. This means that people no longer need to be near a decent library or school to gain knowledge and improve their abilities.” And countries are unlike companies in that size does not necessarily beget bureaucratic sloth: our most valuable resource is human ingenuity, and “an economy with a larger total stock of human capital will experience faster growth.”

Climate change and its solutions


To establish that he is no climate change denier, McAfee cites the mantra, “it’s warming; it’s us; it’s bad; and we can fix it.” But once again, he argues that the trend in the developed world is much better than most people think. In the US, “greenhouse gas emissions have gone down even more quickly than has total energy use. This is largely because we have in recent years been using less coal and more natural gas to generate electricity.”

How can we entrench and spread this positive trend? McAfee proposes two solutions: first, cap and tax carbon emissions, and allow companies to trade permits. Second, rehabilitate nuclear energy. “Nuclear power doesn’t deserve its bad reputation. As is the case with vaccines, glyphosate, and GMOs, public awareness around nuclear power is broadly out of step with reality.”

Inequality and Populism


Despite all this good news, the world is undeniably grumpy. People in many countries have elected populist governments, and in some places, especially rural America, “deaths of despair” such as suicide and the misuse of drugs and alcohol are rising. McAfee thinks that growing inequality plays a significant role in this, but the data from his favourite source, the excellent website “Our World in Data”, suggests otherwise. Inequality is certainly not growing on a global level, as developing countries have been growing much faster than developed ones. And while the Gini coefficient, the usual yardstick of inequality, has become slightly worse in the US, the same is not true elsewhere in the developed world, where the coefficient has remained fairly steady at just under 40 since the early 1990s. (100 is perfectly unequal and 0 is perfectly equal.)
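For readers who have not met it, the Gini coefficient is essentially the average absolute gap between every pair of incomes, scaled by twice the mean. A minimal sketch (my own illustration, not from the book), using the 0-to-100 scale quoted above:

```python
def gini(incomes):
    """Gini coefficient on a 0-100 scale: 0 is perfectly equal, 100 perfectly unequal."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over all ordered pairs, divided by twice n^2 times the mean.
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return 100 * total_diff / (2 * n * n * mean)

print(gini([30, 30, 30, 30]))    # 0.0 – everyone earns the same
print(gini([0, 0, 0, 100]))      # 75.0 – one person earns everything
```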

The real villain of the piece is not inequality, but the perception of unfairness, which is something people actually care much more about. As McAfee himself notes, “people prefer fair inequality over unfair equality.” Populists have risen to power on the back of resentment. McAfee quotes a book on America’s Tea Party: “Blacks, women, immigrants, refugees – all have cut ahead of you in line. But it’s people like you who have made this country great. The line cutters irritate you. They are violating rules of fairness.”

Pluralists and authoritarians


The roots of the perceived inequality lie in the remarkable success of social liberalism in recent decades. Rightly or wrongly, many people feel this has gone too far: it is “political correctness gone mad”. The culture wars are being fought by pluralists and authoritarians. As McAfee puts it, “most countries are becoming significantly more pluralistic—they’re seeing more ethnic diversity and immigration, gender equality, support for gay marriage and other non-traditional lifestyles, and related changes that enhance diversity. A fascinating stream of recent research finds that a large percentage of people in all countries studied have an innate intolerance for this greater diversity. [They] want a strong central authority to enforce obedience and conformity.”

This battle between pluralists and authoritarians is raging all over the world, and it has eclipsed traditional loyalties of class, and the ideologies of the left and the right. How can this battle be won, or at least resolved? McAfee is clearly a pluralist, but he discounts the possibility of persuading authoritarians by rational argument. “It’s particularly important not to try to win arguments with them. … A better way is to start by finding common ground.”

This seems an unpromising approach. As he admits, “more and more people are choosing to have fewer ties to people with dissimilar values and beliefs, opting instead to spend more time among the like-minded. The journalist Bill Bishop calls this phenomenon ‘the big sort.’” Perhaps a better way to respond to the fear and anger which authoritarians breed is simply to make pluralism the more attractive option, using fun and humour. This should not be hard, since pluralism is inherently more optimistic, although it often trips itself up by taking itself too seriously, and engaging in self-righteous circular firing squads.

Automation and abundance


McAfee is probably best known for his 2014 book “The Second Machine Age”. In that book, he and his co-author, fellow MIT academic Erik Brynjolfsson, argued that many jobs will be automated by artificial intelligence, and that although many new jobs will be created, societies must get better at re-skilling and re-training people to move from the old to the new.

I agree that for the next two or three decades there will be a Big Churn in the job market, but I have been trying for some time to persuade Brynjolfsson and McAfee to cast their minds further forward, and take seriously the idea that after two or three more decades of exponential improvement, our machines will be cheaper, better, and faster at pretty much everything that most of us can do for money. In which case, technological unemployment will become a reality.

McAfee makes little reference to the theme of automation in “More From Less”, which is ironic, because the book helps to answer this big question: if machines do take all the jobs, how do we pay for the humans? The answer may well be to reduce the cost of all the goods and services we need to almost zero.

This is called the economy of abundance, and “More From Less” is invaluable in showing some of the ways it could materialise.



“More From Less” is a well-written and convincing book. If it makes a few of us more optimistic, it will also be remembered as an important one.

A Brexiteer Among the Bots – review of “The AI Economy” by Roger Bootle


Roger Bootle is not afraid to think and say unconventional things. He is that rare phenomenon: a professional economist who thinks that Brexit is a Good Idea. Indeed, he belongs to a group called Economists for Brexit, now renamed as Economists for Free Trade, which argues for a no-deal Brexit.

Whatever you think of that, the economics consultancy that Bootle founded, Capital Economics, has been very successful financially, and in 2012 it was awarded the £250,000 Wolfson Economics Prize, the second most valuable economics prize in the world after the Nobel, for a proposal setting out how member states that wanted to leave the eurozone should manage their exit, including defaulting on a large part of their debts. A book on technological unemployment from such a high-profile economist is to be warmly welcomed. What’s more, it is a well-researched, enjoyable, and thoughtful book.

The AI Economy
The thoughtfulness does have its limits. The book reads as though Bootle was determined to dismiss the possibility of technological unemployment from the outset, and he makes little effort to hide his disdain for those who take the idea seriously. People like Max Tegmark and me, who are guilty of this crime, are labelled “AI visionaries”, and it is clear that this is not a compliment. We “geeks” are “bubbling enthusiasts” but also pessimists, “emanating gloom”. Others who are responsible for “fetid speculation about the implications of AI” are Stephen Hawking, Martin Rees, Stuart Russell, Elon Musk and Bill Gates. Quite the rogues’ gallery.

Overall, Bootle’s writing style is clear and relaxed, and the book is mostly calm and measured. Occasionally he does give free rein to his inner curmudgeon: “As to the Internet of Things, rarely can something have been so overhyped. … In the future, doorknobs and curtains will also be able to speak to us when they need some attention, rather like those disembodied voices or noises in cars that tell us when we haven’t fastened our seatbelts. Heaven forfend!”

Less than a quarter of the way through the book, Bootle delivers what he thinks is the killer blow to the idea that technological unemployment is possible. “Unless and until robots can produce and reproduce themselves costlessly … human beings will always have some comparative advantage.” He admits that this might not help, as the income they could earn “might be appallingly low such that it hardly seemed worth working and the state has to intervene in a major way.” But he thinks humans have something better than comparative advantage: “In fact, such an outcome lies a long way off and, I suspect, will never transpire. For there are many areas where humans possess an absolute advantage over robots and AI, including manual dexterity, emotional intelligence, creativity, flexibility, and most importantly, humanity. These qualities ensure that in the AI economy there will be a plethora of jobs for humans.” And apparently that’s it.

Move 37
I disagree. AlphaGo’s famous move 37 in its second game against Lee Sedol in 2016 is one of many proofs that machines can be creative, even if their version of creativity does not involve a shred of consciousness. And anyone who has been watching the progress of robots developed by Boston Dynamics and others in the last few years will be under no illusion that humans will remain supreme forever in manual dexterity and flexibility.

The truth is that no-one knows for sure whether technological unemployment will happen, or when. None of us has a crystal ball. But if you think seriously about the impact of the exponential growth in the power of computers, and if you think ahead just a few decades, you realise that it is dangerously complacent to dismiss the possibility of technological unemployment out of hand.

Bootle does consider the phenomenon of exponential growth – he borrows my illustration of a football stadium filling up with water – but he dismisses it because it always collapses into an S curve, and he argues that because observations of exponential growth are sometimes described as a law, they lead to assertions that “rest on flimsy, if not nonexistent foundations.” This is a blatant Aunt Sally: everyone knows that exponential growth always collapses into an S curve eventually – the question is how long before that happens. (You are composed of around 37 trillion cells, which were created by fission, or division – an exponential process. It required 46 steps of fission to create all of your cells. Moore’s Law, by comparison, has had 36 steps in the 54 years of its existence.) And I’m not aware of anybody writing about Moore’s Law who doesn’t realise that it is an observation, not a physical law.
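Both comparisons in that parenthesis are easy to verify. A quick sketch, using the commonly cited estimate of about 37 trillion cells and treating cell division as idealised pure doubling (ignoring cell death):

```python
import math

# Idealised doubling: fission steps needed to get from one cell to ~37 trillion.
cells = 37e12
steps_to_all_cells = math.ceil(math.log2(cells))
print(steps_to_all_cells)        # 46 doublings

# Moore's Law: one doubling roughly every 18 months, over 54 years.
doublings = 54 / 1.5
print(doublings)                 # 36 doublings
```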

Partly the problem seems to lie in a failure of Bootle’s imagination – or perhaps his unwillingness to exercise it. He studied PPE at Oxford, and one of his favourite questions from back then is “Was the Black Death a good thing?” He says he “cannot imagine any form of AI being capable of assessing adequately the range of possible answers to this question.” I bet he could if he really tried.

Quite a few of Bootle’s assertions are out-of-date, or simply mistaken. He pours scorn on the idea of the paperless office, but the use of paper in offices peaked in 2007. He reports that chess computers are enhanced by collaborating with humans, but this has not been true for several years now. He thinks Kevin Kelly is a singularitarian, when he is actually a prominent opponent of the idea. A quick look at Wikipedia would have saved him from making the erroneous claim that Stanislav Petrov (the man who saved the world by bravely declaring a report about incoming American nuclear weapons to be a false alarm) was sacked. More seriously, his account of the progress with self-driving cars is highly contentious, and probably considerably off the mark. He regards autonomous cars as a bubble which is about to burst and destroy much of the automotive industry which has been foolish enough to invest so heavily in it.

From my point of view, it is a great shame that Bootle seems to have begun his enquiry so prejudiced against the idea that technological unemployment is a realistic possibility some decades ahead. In general, he is a congenial guide to the issues, and it would have been fascinating to have had his economic expertise applied to the idea, for instance, that the economy of abundance is a better solution to the problem than universal basic income, and that fully automated luxury capitalism is a better aspiration than fully automated luxury communism. As it stands, most of his book is only of academic interest if you do take the idea of technological unemployment seriously.


This article first appeared in Forbes magazine in October 2019.

Surveillance capitalism and anti-capitalism

In the last few years, the computer scientists and entrepreneurs who fuel Silicon Valley have gone through a bewildering series of transformations. Once upon a time they were ostracised nerds. Then they were the lovable geeks of the Big Bang Theory TV show, and for a short while they were superheroes. (In case you’re wondering, geeks wonder what sex in zero gravity is like; nerds wonder what sex is like.) Then it all went wrong, and now they are the tech bros; the anti-heroes in the dystopian saga of society’s descent into algorithmic rule by Big Brother, soon to be followed by extermination by Terminators.

Techlash is in full swing, and Shoshana Zuboff is its latest high priestess. She is professor emerita at Harvard Business School, and author of “The Age of Surveillance Capitalism”, a 600-page book on how the tech giants, especially Google and Facebook, have developed a “rogue mutation of capitalism” which threatens our personal autonomy and our democracy.

Zuboff is beyond scathing about Google and Facebook: even favourable reviewers agree she is extreme. She likens tech giant executives to the Spanish conquistadores, with the rest of us as the indigenous populations of South America, and rivers of blood as the consequence. (She doesn’t specify which countries have lost 90% of their populations as a result of their citizens using Facebook.) She describes Sheryl Sandberg, Facebook’s COO, as the “Typhoid Mary” of this socio-economic plague.

Apparently, the goal of the tech giants is not just to understand our behaviour so they can enable other organisations to sell things to us. It is to control us and turn us into robots, “to automate us”. She quotes a data scientist: “We are learning how to write the music, and then we let the music make [our victims] dance.”

Zuboff wants governments to “interrupt and outlaw surveillance capitalism’s data supplies and revenue flows … outlawing the secret theft of private experience.” After all, “We already outlaw markets that traffic in slavery or human organs.” The old phrase (which pre-dates the Web) “if you’re not paying for it, you’re the product” isn’t extreme enough for Zuboff: she compares the social media platforms to elephant poachers who kill us in order to steal our ivory tusks. “You are not the product … You are the abandoned carcass.”


Zuboff claims that Google’s founders are fully aware of the harms their company causes, and that originally, they swore off using our personal data so perniciously. They were effectively bullied into exploiting the opportunity – and into becoming billionaires – by the demands of the stock market.

She also claims that surveillance capitalism would not have evolved if there had not been a corresponding rise in state surveillance. She claims that in 2000, the FTC was poised to regulate the tech giants, but the war on terror prompted by the 9/11 attacks drained away any support for privacy campaigns in US government circles.

If we give Zuboff the benefit of the doubt, and push the hyperbole to one side, is her thesis reasonable? Do the tech giants steal our data and sell it to new breeds of capitalists who use it to control us? If we take it literally, much of it is simply mistaken. In general, Google and Facebook do not steal our data. You have to accept their terms and conditions in order for them to access and use it, although of course, none of us read those conditions, and most of us have no detailed knowledge of what they contain. The tech giants could and should do a much better job of explaining that.

It is also untrue that Google and Facebook have spawned new types of capitalists: for decades, firms have spent significant sums of money to obtain data about their customers. In the bad old days when junk mail clogged up hallways, companies desperately wanted to avoid wasting money sending mailers about lawnmowers to people living in high-rise apartments. Direct marketing was a large and growing industry well before the invention of the Web.

Nevertheless, there is clearly a genuine need for debate about whether Google, Facebook, and other tech giants are harming us with the ways they use our data. Certainly it can be disconcerting when you search for information about a product category, and then notice that ads for companies selling that product are following you around the internet for several hours or even days. Many people find this exploitative, dishonest, creepy, and intrusive.

There are plenty of instances where the tech giants, and indeed many other organisations, have obtained personal data improperly, mis-used it, and / or failed as its custodians. The FTC has just imposed its largest-ever fine on Facebook for allowing its customers’ data to be mis-used by Cambridge Analytica, although some people felt that $5 billion was too trivial a sum.

But does that mean that the business model is illegitimate? An important test of that is whether consumers want it. It is patronising and simply wrong to say that the population as a whole does not know what is going on. Most users do know that companies sell access to our data to companies that want to show us adverts, and in return we get free stuff. “Take my data and give me free shit”, as one consumer put it. We might be foolish to accept this trade-off (it might even be “false consciousness”, as the Marxists like to say) but governments would ban it at their peril – and the ones subject to elections don’t.

Zuboff claims that “research over the past decade suggests that when users are informed of surveillance capitalism’s backstage operations, they want protection, and they want alternatives,” but most of the evidence points to the contrary. Erik Brynjolfsson, a professor of economics at MIT, ran a survey in 2018 to assess how much Americans would have to be paid to avoid using the products provided for free by the tech giants. Facebook and other social media were valued at $322 a year, and search was valued at an eye-opening $17,500. Globally, Facebook makes $80 per person for using our data, so on the face of it, the deal is not too shabby. (Americans are more profitable, at $105 per head, and Europeans rather less so, at $35.)
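As a back-of-the-envelope check on that claim, here is the gap between what the survey says users value these services at and what Facebook earns per user. The figures are the ones quoted above; the variable names are mine, and this is an illustration, not an economic model:

```python
# Annual value users place on free services (Brynjolfsson's 2018 survey)
value_of_social_media = 322      # $/year
value_of_search = 17_500         # $/year

# Facebook's annual revenue per user, as quoted in the text
revenue_per_user = {"global": 80, "US": 105, "Europe": 35}  # $/year

# Implied consumer surplus on social media alone
for region, revenue in revenue_per_user.items():
    surplus = value_of_social_media - revenue
    print(f"{region}: user keeps ~${surplus}/year")
# global: ~$242/year, US: ~$217/year, Europe: ~$287/year
```

On these numbers, users capture several times more value than Facebook extracts, which is the sense in which the deal is “not too shabby”.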

Duck Duck Go
Those who find the trade-off unacceptable are in no way obliged to engage in it. Duck Duck Go is by all accounts a pretty good substitute for Google Search, and sells itself on not using your data. I have never used Facebook, not because of privacy concerns, but because I reckon I would spend too much time looking at cat videos.

The term “surveillance capitalism” was invented by Zuboff in a 2014 essay. It’s a great phrase, but it is deliberately misleading. The Cambridge Dictionary defines surveillance as “the careful watching of a person or place, especially by the police or army, because of a crime that has happened or is expected.” That is not what Google is doing. It is trying to figure out what makes me and a thousand other people like me choose to buy a particular type of car and when, and then sell that information to a firm that sells cars. The data about me is useless unless combined in that way, and it is data that I could not possibly sell on my own.

An alternative to the phrase surveillance capitalism would be personalised capitalism. It would be more accurate, but of course it wouldn’t be as scary, or generate as many headlines.

The place we should look for dangerous surveillance is not the capitalists, but the state. China’s developing Social Credit system shows clearly where the real threat lies. Capitalists just want to sell us fizzy black water and cars. Governments provide security and a welfare safety net, but in order to do this they lay claim to between a third and a half of our income, they send some of us to war, and they lock some of us up. It seems that many Chinese are intensely relaxed about Social Credit: they say it improves public behaviour, and they argue that there is nothing to worry about if you have done nothing wrong. This is a very poor argument. State surveillance leads to self-censorship, and if the levers of state power fall into malign hands – which from time to time they do – then a powerful surveillance network becomes a disaster for everyone.

A lot of the current wave of techlash is actually anti-capitalism. The real problem with the tech giants in the eyes of many of their critics is that they are too big, too powerful, and above all, they make too much profit. And profit is a Bad Thing. This may not be true of Zuboff, who declares herself a fan of good old-fashioned capitalism, but it is certainly true of Jeremy Corbyn and Bernie Sanders. Corbyn and Sanders are just as populist as the alt-right, and just as dangerous. They are wilfully ignorant of the huge benefits delivered by modern capitalism, and they seek to wreck it.

It is ironic that the tech giants are currently among the most hated targets of the left, since their founders and staff are so clearly left-leaning themselves. In attacking the tech giants for spreading fake news, the critics are surely missing the most egregious culprits. For instance, the blatant lies told about the EU by Murdoch’s News International, the Telegraph, and the Daily Mail are what gave us Brexit, and gave permission to racists and homophobes to re-emerge blinking into the daylight.

In any discussion of the future, timing is important. The data being hoovered up and exploited by the tech giants today is mostly about our shopping habits. We are on the verge of an era when we will generate tsunamis of data about our health. Apple Watches are showing the way, and before long most of us will wear devices which take readings of our pulse, our sweat, our eye fluids, our electrical impulses, analyse some of it on the device and stream more of it to the cloud. Even those of us who are relatively relaxed about Google’s privacy terms today should be thinking about who we want to be custodians of our minute-by-minute health data.

And perhaps further ahead, when AI, biotech, and other technologies are powerful and cheap enough to enable a disgruntled teenager to slaughter people in their thousands, what price privacy then? When a megadeath is priced in the mere hundreds of dollars, can we avoid the universal panopticon?

“Calum’s Rule”

Forecasts should specify the timeframe

Disagreements which suggest profound differences of philosophy sometimes turn out to be merely a matter of timing: the parties don’t actually disagree about whether a thing will happen or not, they just disagree over how long it will take. For instance, timing is at the root of apparently fundamental differences of opinion about the technological singularity.

Elon Musk is renowned for his warnings about superintelligence:

“With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.” “We are the biological boot-loader for digital super-intelligence.”

Comments like this have attracted fierce criticism:

“I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.” (Andrew Ng)

“We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do. Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat. This makes a lot of questions people are asking themselves premature.” (Yann LeCun)

“Superintelligence is beyond the foreseeable horizon.” (Oren Etzioni)

If you look closely, these people don’t disagree with Musk that superintelligence is possible – even likely – and that its arrival could be an existential threat for humans. What they disagree about is the likely timing, and the difference isn’t as great as you might think. Ng thinks “There could be a race of killer robots in the far future,” but he doesn’t specify when. LeCun seems to think it could happen this century: “if there were any risk of [an “AI apocalypse”], it wouldn’t be for another few decades in the future.” And Etzioni’s comment was based on a survey where most respondents set the minimum timeframe as a mere 25 years. As Stephen Hawking famously wrote, “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not.”

Although it is less obvious, I suspect a similar misunderstanding is at play in discussions about the other singularity – the economic one: the possibility of technological unemployment and what comes next. Martin Ford is one of the people warning us that we may face a jobless future:

“A lot of people assume automation is only going to affect blue-collar people, and that so long as you go to university you will be immune to that … But that’s not true, there will be a much broader impact.”

The opposing camp includes most of the people running the tech giants:

“People keep saying, what happens to jobs in the era of automation? I think there will be more jobs, not fewer.” “… your future is you with a computer, not you replaced by a computer …” “[I am] a job elimination denier.” – Eric Schmidt

“There are many things AI will never be able to do… When there is a lot of artificial intelligence, real intelligence will be scarce, real empathy will be scarce, real common sense will be scarce. So, we can have new jobs that are actually predicated on those attributes.” – Satya Nadella

For perfectly good reasons, these people mainly think in time horizons of up to five years, maybe ten at a stretch. And in that time period they are surely right to say that technological unemployment is unlikely. For machines to throw us out of a job, they have to be able to do it cheaper, better, and / or faster. Automation has been doing that for centuries: elevator operator and secretary are very niche occupations these days. When a job is automated, the employer’s process becomes more efficient. This creates wealth, and wealth creates demand, and thus new jobs. This will continue to happen – unless and until the day arrives when the machines can do almost all the work that we do for money.

If and when that day arrives, any new jobs which are created as old jobs are destroyed will be taken by machines, not humans. And our most important task as a species at that point will be to figure out a happy ending to that particular story.

Will that day arrive, and if so, when? People often say that Moore’s Law is dead or dying, but it isn’t true. It has been evolving ever since Gordon Moore noticed, back in 1965, that his company was putting twice as many transistors on each chip every year. (In 1975 he adjusted the time to two years, and shortly afterwards it was adjusted again, to eighteen months.) The cramming of transistors has slowed recently, but we are seeing an explosion of new types of chips, and Chris Bishop, the head of Microsoft Research in the UK, argues that we are seeing the start of a Moore’s Law for software: “I think we’re seeing … a similar, singular moment in the history of software … The rate limiting step now is … the data, and what’s really interesting is the amount of data in the world is – guess what – it’s growing exponentially! And that’s set to continue for a long, long time to come.”

So there is plenty more Moore, and plenty more exponential growth. The machines we have in 10 years’ time will be 128 times more powerful than the ones we have today. In 20 years’ time they will be 8,000 times more powerful, and in 30 years’ time, a million times more powerful. If you take the prospect of exponential growth seriously, and you look far enough ahead, it becomes hard to deny the possibility that machines will do pretty much all the things we do for money cheaper, better and faster than us.
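The arithmetic behind these figures can be sketched in a few lines, assuming one doubling of machine power every 18 months, rounded to whole doublings. The function name and the rounding choice are mine; this is an illustration of the compounding, not a forecast:

```python
def power_multiple(years, doubling_period=1.5):
    """Multiple of today's machine power after `years`, assuming one
    doubling every `doubling_period` years, rounded to whole doublings."""
    doublings = round(years / doubling_period)
    return 2 ** doublings

for years in (10, 20, 30):
    print(f"{years} years: {power_multiple(years):,}x")
# 10 years: 128x
# 20 years: 8,192x
# 30 years: 1,048,576x
```

The 20-year figure of 8,192 is the “8,000 times” quoted above, and 2^20 is the “million times” at 30 years.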

New rule
So I would like to propose a new rule, and with no superfluous humility I’m calling it Calum’s Rule:

“Forecasts should specify the timeframe.”

If we all follow this injunction, I suspect we will disagree much less, and we can start to address the issue more constructively.

Has AI ethics got a bad name?


Amid all the talk of robots and artificial intelligence stealing our jobs, there is one industry that is benefiting mightily from the dramatic improvements in AI: the AI ethics industry. Members of the AI ethics community are very active on Twitter and the blogosphere, and they congregate in real life at conferences in places like Dubai and Puerto Rico. Their task is important: they want to make the world a better place, and there is a pretty good chance that they will succeed, at least in part. But have they chosen the wrong name for their field?

Artificial intelligence is a technology, and a very powerful one, like nuclear fission. It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire. Like nuclear fission, electricity, and fire, AI can have positive impacts and negative impacts, and given how powerful it is and will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative outcomes.

This is what concerns people in the AI ethics community. They want to minimise the amount of bias in the data which informs the decisions that AI systems help us to make – and ideally, to eliminate the bias altogether. They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible, so that in advance or in retrospect, we can check for sources of bias, and other forms of harm.

But if AI is a technology like fire or electricity, why is the field called “AI ethics”? We don’t have “fire ethics” or “electricity ethics”, so why should we have AI ethics? There may be a terminological confusion here, and it could have negative consequences.

One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems. The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car, or a rock. It will probably be many years before we create an AI which can reasonably be described as a moral agent.

It is ironic that people who regard themselves as AI ethicists are falling into this trap, because many of them get very heated when robots are anthropomorphised, as when the humanoid Sophia was given citizenship by Saudi Arabia.

There is a more serious potential downside to the nomenclature. People are going to disagree about the best way to obtain the benefits of AI and minimise or eliminate its harms. That is the way it should be: science, and indeed most types of human endeavour, advance by the robust exchange of views. People and groups will have different ideas about what promotes benefit and minimises harm. These ideas should be challenged and tested against each other. But if you think your field is about ethics rather than about what is most effective, there is a danger that you start to see anyone who disagrees with you as not just mistaken, but actually morally bad. You are in danger of feeling righteous, and unwilling or unable to listen to people who take a different view. You are likely to seek the company of like-minded people, and to fear and despise the people who disagree with you. This is again ironic, as AI ethicists are generally (and rightly) keen on diversity.

The issues explored in the field of AI ethics are important, but it would help to clarify them if some of the heat were taken out of the discussion. It might help if, instead of talking about AI ethics, we talked about beneficial AI and AI safety. When an engineer designs a bridge she does not finish the design and then consider how to stop it falling down. The ability to remain standing in all foreseeable circumstances is part of the design criteria, not a separate discipline called “bridge ethics”. Likewise, if an AI system has deleterious effects it is simply a badly designed AI system.

Interestingly, this change has already happened in the field of AGI research, the study of whether and how to create artificial general intelligence, and how to avoid the potential downsides of that development, if and when it does happen. Here, researchers talk about AI safety. Why not make the same move in the field of shorter-term AI challenges?

This article first appeared in Forbes magazine on 7th March 2019

The greatest generations

Every generation thinks the challenges it faces are more important than what has gone before. American journalist Tom Brokaw bestowed the name “the greatest generation” on the people who grew up in the Great Depression and went on to fight in the Second World War. As a late “baby boomer” myself, I certainly take my hat off to that generation.

The Boomers were named for demography: they were a bulge in the population (“the pig in the python”) caused by soldiers returning from the war. They saw themselves as special, and maybe they were. They invented sex in the 1960s, apparently, along with rock and roll, the counter-culture, the civil rights movement and the second wave of feminism.

Generation X was the first to take a letter as its title, although that happened late in their history, with the publication in 1991 of Canadian author Douglas Coupland’s novel, “Generation X: tales for an accelerated culture”. Cynical Boomers said Generation X got its name because its members were cyphers: their role in the world was less clear, their contribution to it was doubtful. Early on they were accused of being lazy and disaffected: they were the MTV generation, and their musical styles were grunge and hip hop. But these are accusations that most parents hurl at their successors. Later on, Generation X showed high levels of entrepreneurship, and appeared to be happier than average, with a good work-life balance. Their profile may be lower because there are fewer of them: they were the first generation whose parents had access to the contraceptive pill.

Generation Y

Generation X was followed, naturally enough, by Generation Y, also known as the Millennials, since they were born between 1981 and 2000. (There are no generally agreed dates for the generations; I like 1941-60 for Boomers, 1961-80 for Generation X, and 1981-2000 for Millennials.) Following Generation Y, and still being born, is Generation Z. Whatever their predecessors may think, it is these two generations which will face the biggest challenges yet presented to humanity.

Speaking at the United Nations in 1963, John F Kennedy said something which would not be out of place today: “Never before has man had such capacity to control his own environment, to end thirst and hunger, to conquer poverty and disease, to banish illiteracy and massive human misery. We have the power to make this the best generation of mankind in the history of the world – or make it the last.”

The members of Generation Y and Z have been born at the best time ever to be a human, in terms of life expectancy, health, wealth, access to education, information, and entertainment. They have also been born at the most interesting time, and the most important. Whether they like it or not, they have the task of navigating us through the economic singularity of mass unemployment, and then the technological singularity of super-intelligence.

The economic singularity will arrive when Generation Y is running the show, which makes their name apposite, since one of the challenges the economic singularity will raise is to ensure that everyone finds meaning in a life without jobs. Generation Y will have to come up with a great new answer to the question of “why” we are here.

Generation Z

Generation Z is, if anything, even better named, although again, entirely by accident. One way or another, they are likely to be the last generation of humans to reach old age in a form their ancestors would recognise. These timings are of course uncertain and contentious, but Generation Z is likely to be the dominant force in politics and business when the first superintelligence appears, and humanity becomes the second-smartest species on the planet. The consequences for humans will be staggering. If things go well, and the superintelligence really likes us, then at a minimum, humans will quickly be augmented to dispense with many of the limitations and frailties which have afflicted us since life on earth began: ageing, vulnerability, and probably even death. These augmentations will render us barely recognisable, and hard to continue to classify as human. If things go very well, perhaps we will merge with the machines we have created, and travel the universe together to wonder at its marvels, immune to the ravages of vacuum and radiation. If things go less well, Generation Z could be the last generation of humans for less cheerful reasons.

Generations Y and Z are destined to be our greatest generations. If either of them fails in their respective tasks, humanity’s future could be bleak. But if they succeed, it could be almost incredibly good. They must succeed.

Stories from 2045


Sparky, a NAO robot who lives at Queen Mary University, helped launch a book this week.  In the very whooshy surroundings of the Reform Club on London’s Pall Mall, she read out a story written by an AI.  It’s not a very good story, to be honest, but it’s impressive that an AI can write stories at all.

The other stories, written by humans, are very good indeed.  You’ll find them at an Amazon site near you. They speculate on what life might be like during and after the economic singularity.  Two-thirds of them are positive, which is what we need – Hollywood has given us more than enough dystopias already.

In the coming decades, artificial intelligence (AI) and related technologies will have enormous impacts on the job market. At the moment, no-one can predict exactly what will happen or when. The outcomes could be anywhere from very good to very bad, and which ones we get will depend significantly on the actions taken (and not taken) by governments and others in the coming few years.

The Economic Singularity Club think tank (ESC) was set up to discuss these issues, and try to influence their outcome positively. This book is our first tangible project.

One possible impact of the AI revolution is that many people will be unemployable within a relatively short space of time – maybe in two or three decades. If this does happen, and if we are smart and perhaps a bit lucky, the outcome could be wonderful, and we should certainly try to make it so.

The book is intended to encourage political leaders, policy makers, and everyone else to bring a much more serious level of attention and investment to the possibility of technological unemployment. The idea is to make the prospect of technological unemployment seem more real and less academic to people who have not previously given the idea much thought. It should also stimulate readers to devise their own solutions, and to decide what actions they can take to help ensure that we get a good outcome and not one of the bad ones.

The book has an augmented reality cover.  Download an app, point your phone at the book, and a gaggle of TV screens emerge, and hover in front of you.  Select one, and click it to watch a short video.

All proceeds from this book will go to a charitable foundation set up by the ESC to promote its objectives. We hope you find it enjoyable and stimulating. 

Finally, if you feel inspired to write your own story from 2045, you can submit it to the book’s website. If it’s shorter than 1,000 words, and not illegal or hateful, we’ll publish it there. If we get enough great stories, we’ll publish a sequel book.

Road rage against the machines? Self-driving cars in 2018 and 2019

Self-driving cars – or Autos, as I hope we’ll call them – passed several important milestones in 2018, and they will pass several more in 2019. The big one came at the end of the year, on 5th December: Google’s Autos spin-out Waymo launched the world’s first commercial self-driving taxi service, open to citizens in Phoenix, Arizona, who are not employees of the company, and not bound by confidentiality agreements.

This service, branded Waymo One, was an extension of the company’s Early Rider programme, which was launched back in April. In that programme, selected members of the public who were willing to sign non-disclosure agreements (NDAs) got free rides in cars where sometimes no-one sat up front: no driver, no supervising engineer. There is much debate about how often the cars in both these programmes run with the front seats empty. Google and Waymo won’t say, but the answer seems to be sometimes, but not often. Some people argue this means that self-driving cars won’t be ready for prime time for years to come. Others see it as commendable caution.

Waymo is the clear front-runner in this business. In October it announced that its test cars had driven 10 million miles, and they have not been the unambiguous cause of a single accident. In simulations, they drive that many miles every single day.

General Motors, America’s biggest car maker by volume, is determined not to lag far behind, and has said for some time that it will launch a fleet of self-driving taxis during 2019. In October it announced a $2.75bn JV with Honda in Cruise, its self-driving car unit, which added to the earlier $2.25bn investment by Softbank to bring the valuation of Cruise to $14bn, which is almost half the parent company’s equity value.

The rest of America’s car industry is also in hot pursuit, especially its newest and most valuable participant, Tesla Motors, which is pursuing the contrarian strategy of offering more and more driver assistance rather than jumping straight to full automation.

Autos are still expensive, not least because production volumes of their LIDAR sensors are still low. So for some years to come, these vehicles will probably only be sold to commercial fleets, especially taxis and trucks. Unless, of course, Tesla’s Elon Musk is proved right, and Autos can operate solely with cameras, and don’t need LIDAR. So far he’s in a small minority, but his contrarian views have been vindicated before. Even if Musk is wrong, city dwellers in particular may well stop buying cars and start using Auto taxis. In which case, how long would the switch take? A famous pair of photographs taken on the same New York street on the same day in 1900 and 1913 shows that it took just 13 years to effect a complete swap in that city from horse-drawn carriages to automobiles. The switch took longer in rural areas of the US, and much longer again in less developed countries.

In short, anyone who thinks that self-driving vehicles will not be in widespread use by the mid-2020s is probably in for a shock.

The US is in the vanguard of the Autos revolution, but other countries are keen to catch up. Both the UK government and London’s leading private hire company (Addison Lee) have stated their intention to have Autos operating in London by 2021. Driving in London is a whole different proposition to driving in Phoenix, so this two-year delay does not denote a lack of ambition.

But as usual in AI, it is China which is most likely to catch the US if there is a race to deploy self-driving technology. Baidu, often described as China’s Google, is the leader so far, with more than 100 partners involved in its Apollo project, including car manufacturers like Ford and Hyundai, and technology providers. The Chinese government is keeping close tabs on these developments, not least in obliging foreign companies to source their maps from Chinese companies.

Are we ready for the arrival of Autos? Can our infrastructures cope? The belief that Autos require modifications to our road infrastructure is a misapprehension. Waymo’s cars don’t need smart lane dividers, special traffic light telematics, or dedicated local area networks. They drive on ordinary roads, just like you and me. No doubt Autos will lead to our cities and towns becoming smarter and more intelligible, but they don’t require it to get started.

What about resistance? Will there be road rage against the machines? The most tragic thing to happen in the self-driving car industry this year was also perhaps the most revealing. In April, an Uber Auto ran over and killed a woman walking a bicycle across a busy road. There is still disagreement about what caused the accident, and Uber stopped its self-driving test programme immediately. But the most interesting thing is that no other company followed suit – and there are over 40 companies trialling self-driving cars in the US alone. Despite this, and despite blanket press coverage, there was no popular protest against Autos. It seems that people have already “discounted” the arrival of Autos: it’s a done deal.

Even if the arrival of Autos is a done deal for society as a whole, there may well be pockets of resistance. On a low level, this will come from petrol heads who find themselves banned from more and more roads because they are much more dangerous drivers than machines. Eventually they will only be allowed to drive on designated racetracks, after signing detailed indemnifications. We should welcome this, not resist it: right now, we kill 1.2 million people around the world each year by running them over, and we maim another 50 million. We are sending humans to do a machine’s job, and there is a holocaust taking place on our roads. We should hurry to embrace Autos. And anyone tempted to vandalise Autos will quickly find that they are bristling with cameras: if people start spray-painting their LIDARs to disable them, they will find themselves on the wrong end of a criminal prosecution.

But there is another form of resistance which may not be so easy to assuage. In June, I gave a talk about AI to a room full of senior US police officers – just outside Phoenix, Arizona, appropriately enough. When I argued that a million Americans who currently earn a reasonable living driving trucks are going to be out of a job fairly soon because the economics of truck driving is going to flip, there was an audible gulp in the hall. They didn’t need me to point out that many of these people have guns.

One of the most significant impacts of Autos may well be to play the role of the canary in the coal mine: they could alert people to the likelihood that technological unemployment is coming – not now, and not in five years, but in a generation. If it is coming, we had better have a plan for how to cope. Otherwise there could be a panic which makes the current wave of populism look mild. At the moment we have no plan, and we’re not even thinking about developing a plan because so many influential people are saying that it cannot happen. They might be right to say that it will not happen. But to say that it cannot happen is dangerous complacency.

So what of 2019? Assuming success in Phoenix, Google is likely to roll out its pilot to other US cities – we could maybe see a dozen of them start during 2019. GM will be anxious not to be seen as lagging, and no doubt Tesla will make startling announcements followed by almost-as-startling achievements. I’ll be surprised if there aren’t some significant pilots in China by the end of 2019 as well. And who knows, maybe all this will spur Europe into getting more serious about AI in general. Here’s hoping.

This article was first published by Forbes magazine

Reviewing last year’s AI-related forecasts

Robodamus 3

As usual, I made some forecasts this time last year about how AI would change, and how it would change us. It's time to look back and see how those forecasts for 2018 panned out. The result: a 50% success rate, by my reckoning – better than the previous year, but with lots of room for improvement. Here are the forecasts, with my verdicts in italics.

1. Non-tech companies will work hard to deploy AI – and to be seen to be doing so. One consequence will be the growth of “insights-as-a-service”, where external consultants are hired to apply machine learning to corporate data. Some of these consultants will be employees of Google, Microsoft and Amazon, looking to make their open source tools the default option (e.g. Google’s TensorFlow, Microsoft’s CNTK, Amazon’s MXNet).

Yes. The conversation among senior business people at the events I speak at has moved from “What is this AI thing?” to “Are we moving fast enough?”

2. The first big science breakthrough that could not have been made without AI will be announced. (I stole this from DeepMind’s Demis Hassabis. Well, I want to get at least one prediction right!)

Yes. In May, an AI system called Eve helped researchers at Manchester University discover that triclosan, an ingredient commonly found in toothpaste, could be a powerful anti-malarial drug. The research was published in the journal Scientific Reports (here).

3. There will be media reports of people being amazed to discover that a customer service assistant they have been exchanging messages with is a chatbot.

Yes: Google Duplex.

4. Voice recognition won’t be quite good enough for most of us to use it to dictate emails and reports – but it will become evident that the day is not far off.

Yes. Alexa is pretty good, but not yet a reliable stenographer. (Other brands of AI assistant are available.)

5. Some companies will appoint Chief Artificial Intelligence Officers (CAIOs).

Not sure. I don’t know of any, but I bet some exist.

6. Capsule networks will become a buzzword. These are a refinement of deep learning, and are being hailed as a breakthrough by Geoff Hinton, the man who created the AI Big Bang in 2012.

Not as far as I know.

7. Breakthroughs will be announced in systems that transfer learning from one domain to another, avoiding the problem of "catastrophic forgetting", and also in "explainable AI" – systems which are not opaque black boxes whose decision-making cannot be reverse-engineered. These will not be solved problems, but encouraging progress will be demonstrated.

I think I’ve seen reports of progress, but nothing that could fairly be described as a major breakthrough.

8. There will be a little less Reverse Luddite Fallacism, and a little more willingness to contemplate the possibility that we are heading inexorably to a post-jobs world – and that we have to figure out how to make that a very good thing. (I say this more in hope than in anticipation.)

No, dammit.