Road rage against the machines? Self-driving cars in 2018 and 2019


Self-driving cars – or Autos, as I hope we’ll call them – passed several important milestones in 2018, and they will pass several more in 2019. The big one came at the end of the year, on 5th December: Google’s Autos spin-out Waymo launched the world’s first commercial self-driving taxi service, open to citizens in Phoenix, Arizona, who are not employees of the company, and not bound by confidentiality agreements.

This service, branded Waymo One, was an extension of the company’s Early Rider programme, which was launched back in April 2017. In that programme, selected members of the public who were willing to sign non-disclosure agreements (NDAs) got free rides in cars where sometimes no-one sat up front: no driver, no supervising engineer. There is much debate about how often the cars in both these programmes run with the front seats empty. Google and Waymo won’t say, but the answer seems to be sometimes, but not often. Some people argue this means that self-driving cars won’t be ready for prime time for years to come. Others see it as commendable caution.

Waymo is the clear front-runner in this business. In October it announced that its test cars had driven 10 million miles, and they have not been the unambiguous cause of a single accident. In simulations, they drive that many miles every single day.

General Motors, America’s biggest car maker by volume, is determined not to lag far behind, and has said for some time that it will launch a fleet of self-driving taxis during 2019. In October it announced a $2.75bn investment by Honda in Cruise, its self-driving car unit, which added to the earlier $2.25bn investment by SoftBank to bring the valuation of Cruise to $14bn, which is almost half the parent company’s equity value.

The rest of America’s car industry is also in hot pursuit, especially its newest and most valuable participant, Tesla Motors, which is pursuing the contrarian strategy of offering more and more driver assistance rather than jumping straight to full automation.


Autos are still expensive, not least because production volumes of their LIDAR sensors are still low. So for some years to come, these vehicles will probably only be sold to commercial fleets, especially taxis and trucks. Unless, of course, Tesla’s Elon Musk is proved right, and Autos can operate solely with cameras, and don’t need LIDAR. So far he’s in a small minority, but his contrarian views have been vindicated before. Even if Musk is wrong, city dwellers in particular may well stop buying cars and start using Auto taxis. In which case, how long would the switch take? A famous pair of photographs of the same New York street, taken in 1900 and 1913, shows that it took just 13 years to effect a complete swap in that city from horse-drawn carriages to automobiles. The switch took longer in rural areas of the US, and much longer again in less developed countries.


In short, anyone who thinks that self-driving vehicles will not be in widespread use by the mid-2020s is probably in for a shock.

The US is in the vanguard of the Autos revolution, but other countries are keen to catch up. Both the UK government and London’s leading private hire company (Addison Lee) have stated their intention to have Autos operating in London by 2021. Driving in London is a whole different proposition to driving in Phoenix, so this two-year delay does not denote a lack of ambition.

But as usual in AI, it is China which is most likely to catch the US if there is a race to deploy self-driving technology. Baidu, often described as China’s Google, is the leader so far, with more than 100 partners involved in its Apollo project, including car manufacturers like Ford and Hyundai, and technology providers. The Chinese government is keeping close tabs on these developments, not least in obliging foreign companies to source their maps from Chinese companies.


Are we ready for the arrival of Autos? Can our infrastructures cope? The belief that Autos require modifications to our road infrastructure is a misapprehension. Waymo’s cars don’t need smart lane dividers, special traffic light telematics, or dedicated local area networks. They drive on ordinary roads, just like you and me. No doubt Autos will lead to our cities and towns becoming smarter and more intelligible, but they don’t require it to get started.

What about resistance? Will there be road rage against the machines? The most tragic thing to happen in the self-driving car industry this year was also perhaps the most revealing. In March, an Uber Auto ran over and killed a woman walking a bicycle across a busy road. There is still disagreement about what caused the accident, and Uber stopped its self-driving test programme immediately. But the most interesting thing is that no other company followed suit – and there are over 40 companies trialling self-driving cars in the US alone. Despite this, and despite blanket press coverage, there was no popular protest against Autos. It seems that people have already “discounted” the arrival of Autos: it’s a done deal.

Even if the arrival of Autos is a done deal for society as a whole, there may well be pockets of resistance. On a low level, this will come from petrol heads who find themselves banned from more and more roads because they are much more dangerous drivers than machines. Eventually they will only be allowed to drive on designated racetracks, after signing detailed indemnifications. We should welcome this, not resist it: right now, we kill 1.2 million people around the world each year by running them over, and we maim another 50 million. We are sending humans to do a machine’s job, and there is a holocaust taking place on our roads. We should hurry to embrace Autos. And anyone tempted to vandalise Autos will quickly find that they are bristling with cameras: if people start spray-painting their LIDARS to disable them, they will find themselves on the wrong end of a criminal prosecution.


But there is another form of resistance which may not be so easy to assuage. In June, I gave a talk about AI to a room full of senior US police officers – just outside Phoenix, Arizona, appropriately enough. When I argued that a million Americans who currently earn a reasonable living driving trucks are going to be out of a job fairly soon because the economics of truck driving is going to flip, there was an audible gulp in the hall. They didn’t need me to point out that many of these people have guns.

One of the most significant impacts of Autos may well be to play the role of the canary in the coal mine: they could alert people to the likelihood that technological unemployment is coming – not now, and not in five years, but in a generation. If it is coming, we had better have a plan for how to cope. Otherwise there could be a panic which makes the current wave of populism look mild. At the moment we have no plan, and we’re not even thinking about developing a plan because so many influential people are saying that it cannot happen. They might be right to say that it will not happen. But to say that it cannot happen is dangerous complacency.

So what of 2019? Assuming success in Phoenix, Waymo is likely to roll out its pilot to other US cities – we could maybe see a dozen of them start during 2019. GM will be anxious not to be seen as lagging, and no doubt Tesla will make startling announcements followed by almost-as-startling achievements. I’ll be surprised if there aren’t some significant pilots in China by the end of 2019 as well. And who knows, maybe all this will spur Europe into getting more serious about AI in general. Here’s hoping.

This article was first published by Forbes magazine


Reviewing last year’s AI-related forecasts


As usual, I made some forecasts this time last year about how AI would change, and how it would change us. It’s time to look back and see how those forecasts for 2018 panned out. The result: a 50% success rate, by my reckoning. Better than the previous year, but lots of room for improvement. Here are the forecasts, with my verdicts in italics.

1. Non-tech companies will work hard to deploy AI – and to be seen to be doing so. One consequence will be the growth of “insights-as-a-service”, where external consultants are hired to apply machine learning to corporate data. Some of these consultants will be employees of Google, Microsoft and Amazon, looking to make their open source tools the default option (e.g. Google’s TensorFlow, Microsoft’s CNTK, Amazon’s MXNet).

Yes. The conversation among senior business people at the events I speak at has moved from “What is this AI thing?” to “Are we moving fast enough?”

2. The first big science breakthrough that could not have been made without AI will be announced. (I stole this from DeepMind’s Demis Hassabis. Well, I want to get at least one prediction right!)

Yes. In May, an AI system called Eve helped researchers at Manchester University discover that triclosan, an ingredient commonly found in toothpaste, could be a powerful anti-malarial drug. The research was published in the journal Scientific Reports (here).

3. There will be media reports of people being amazed to discover that a customer service assistant they have been exchanging messages with is a chatbot.

Yes. Google Duplex.

4. Voice recognition won’t be quite good enough for most of us to use it to dictate emails and reports – but it will become evident that the day is not far off.

Yes. Alexa is pretty good, but not yet a reliable stenographer. (Other brands of AI assistant are available.)

5. Some companies will appoint Chief Artificial Intelligence Officers (CAIOs).

Not sure. I don’t know of any, but I bet some exist.

6. Capsule networks will become a buzz word. These are a refinement of deep learning, and are being hailed as a breakthrough by Geoff Hinton, the man who created the AI Big Bang in 2012.

Not as far as I know.

7. Breakthroughs will be announced in systems that transfer learning from one domain to another, avoiding the issue of “catastrophic forgetting”, and also in “explainable AI” – systems which are not opaque black boxes whose decision-making cannot be reverse engineered. These will not be solved problems, but encouraging progress will be demonstrated.

I think I’ve seen reports of progress, but nothing that could fairly be described as a major breakthrough.

8. There will be a little less Reverse Luddite Fallacism, and a little more willingness to contemplate the possibility that we are heading inexorably to a post-jobs world – and that we have to figure out how to make that a very good thing. (I say this more in hope than in anticipation.)

No, dammit.

Book review: “21 Lessons for the 21st Century”, by Yuval Harari


The title of Yuval Harari’s latest best-seller is a misnomer: it asks many questions, but offers very few answers, and hardly any lessons. It is the least notable of his three major books, since most of its best ideas were introduced in the other two. But it is still worth reading. Harari delights in grandiloquent sweeping generalisations which irritate academics enormously, and part of the fun is precisely that you can so easily picture his colleagues seething with indignation that he is trampling on their turf. More important, some of his generalisations are acutely insightful.

The insight at the heart of “Sapiens”, his first book, was that humans dominate the planet not because we are logical, but because 70,000 or so years ago we developed the ability to agree to believe stories that we know are untrue. These stories are about religion, and political and economic organisation. The big insight in his second book, “Homo Deus” is that artificial intelligence and other technologies are about to transform our lives far more – and far more quickly – than almost anyone realises. Both these key ideas are reprised in “21 Lessons”, but they are big ideas which bear repeating.

Happily, he has toned down his idiosyncratic campaigns about religion and vegetarianism. In the previous books he encountered religion everywhere: capitalism and communism have passionate adherents, but they are not religions. The first third of “Homo Deus” is religious in a different way: it is a lengthy sermon about vegetarianism.


“21 Lessons” is divided into five parts, of which the first is the most coherent and the best. It concerns the coming technological changes, which Harari first explored in “Homo Deus”. “Most people in Birmingham, Istanbul, St Petersburg and Mumbai are only dimly aware, if at all, of the rise of artificial intelligence and its potential impact on their lives. It is undoubtable, however, that the technological revolutions will gather momentum in the next few decades, and will confront humankind with the hardest trials we have ever encountered.”

He is refreshingly blunt about the possibility of technological unemployment: “It is dangerous just to assume that enough new jobs will appear to compensate for any losses. The fact that this has happened during previous waves of automation is absolutely no guarantee that it will happen again under the very different conditions of the twenty-first century. The potential social and political disruptions are so alarming that even if the probability of systemic mass unemployment is low, we should take it very seriously.”

Very well said, but this part of the book would be much more powerful if he had offered a fully worked-through argument for this claim, which in the last couple of years has been sneeringly dismissed by a procession of tech giant CEOs, economists, and politicians. Perhaps next year, the World Economic Forum could organise a debate on this question between Harari and a leading sceptic, such as David Autor.

It is also a shame that he offers no prescriptions, beyond categorising them: “Potential solutions fall into three main categories: what to do in order to prevent jobs from being lost; what to do in order to create enough new jobs; and what to do if, despite our best efforts, job losses significantly outstrip job creation.” Fair enough, but this should be the start of the discussion, not the end. Still, at least he doesn’t fall back on the usual panacea of universal basic income, and his warning about what happens if we fail to develop a plan is clear: “as the masses lose their economic importance … the state might lose at least some of the incentive to invest in their health, education and welfare. It’s very dangerous to be redundant.”

Harari is also more clear-sighted than most about the risk of algocracy – the situation which arises when we delegate decisions to machines because they make better ones than we do. “Once we begin to count on AI to decide what to study, where to work, and who to marry, human life will cease to be a drama of decision-making. … Imagine Anna Karenina taking out her smartphone and asking the Facebook algorithm whether she should stay married to Karenin or elope with the dashing Count Vronsky.” Warning about technological unemployment, he coined the brutal phrase “the gods and the useless”. Warning about algocracy, he suggests that humans could become mere “data cows”.


The remaining four parts of the book contain much less that is original and striking. Harari is a liberal and an unapologetic globalist, pointing out reasonably enough that global problems like technological disruption require global solutions. He describes the EU as a “miracle machine”, which Brexit is throwing a spanner into. He does not see nationalism as a problem in itself, although he observes that for most of our history we have not had nations, and they are unnatural things and hard to build. In fact he thinks they can be very positive, but “the problem starts when benign patriotism morphs into chauvinistic ultra-nationalism.”

Although he sees nationalism as a possible problem, he also thinks it has already lost the game: “we are all members of a single rowdy global civilisation … People still have different religions and national identities. But when it comes to the practical stuff – how to build a state, an economy, a hospital, or a bomb –almost all of us belong to the same civilisation.” He supports this claim by pointing out that the Olympic Games, currently “organised by stable countries, each with boringly similar flags and national anthems,” could not have happened in mediaeval times, when there were no such things as nation states. And he argues that this is a very good thing: “For all the national pride people feel when their delegation wins a gold medal and their flag is raised, there is far greater reason to feel pride that humankind is capable of organising such an event.”


He is even more dismissive of religion – especially monotheism – despite his obsession with it. “From an ethical perspective, monotheism was arguably one of the worst ideas in human history … What monotheism undoubtedly did was to make many people far more intolerant than before … the late Roman Empire was as diverse as Ashoka’s India, but when Christianity took over, the emperors adopted a very different approach to religion.” Religion, he says, has no answers to any of life’s important questions, which is why there is no great following for a Christian version of agriculture, or a Muslim version of economics. “We don’t need to invoke God’s name in order to live a moral life. Secularism can provide us with all the values we need.”

He seems to be applying for membership of the “new atheists” club, in which Richard Dawkins and Steven Pinker deliberately goad the religious by diagnosing religion as a disease which can be cured. Harari suggests that “when a thousand people believe some made-up story for one month, that’s fake news. When a billion people believe it for a thousand years, that’s a religion.”


Oddly, given his perceptive take on the future of AI, Harari is weak on science fiction, displaying a fundamental misunderstanding of both The Matrix and Ex Machina. He is stronger on terrorism, pointing out that it is much less of a threat than it seems, contrary to the deliberate mis-representations by populists: “Since 11 September 2001, every year terrorists have killed about fifty people in the European Union, about ten people in the USA, about seven people in China, and up to 25,000 people globally (mostly in Iraq, Afghanistan, Pakistan, Nigeria and Syria).  In contrast, each year traffic accidents kill about 80,000 Europeans, 40,000 Americans, 270,000 Chinese, and 1.25 million people altogether.” Terrorists “challenge the state to prove it can protect all its citizens all the time, which of course it can’t.” They are trying to make the state over-react, and populists are their eager accomplices.

The book seems to be building to a climax when it addresses the meaning of life. Here and elsewhere, Harari has said that humans create meaning – or at least the basis of power – by telling ourselves stories. So is he going to give us a story which will help us navigate the challenges of the 21st century?

Sadly not. The closest we get is a half-baked version of Buddhism.

“The Buddha taught that the three basic realities of the universe are that everything is constantly changing, nothing has any enduring essence, and nothing is completely satisfying. Suffering emerges because people fail to appreciate this … The big question facing humans isn’t ‘what is the meaning of life?’ but rather, ‘how do we get out of suffering?’ … If you really know the truth about yourself and about the world, nothing can make you miserable. But that is of course much easier said than done.” Indeed.


Harari has worked out his own salvation: “Having accepted that life has no meaning, I find meaning in explaining this truth to others.” Given his six-figure speaking fees, this makes perfect sense.

Harari also finds solace in meditation, which he practises for two hours every day, and for a whole month or two every year. “21 Lessons” is a collection of essays written for newspapers and in response to questions. This shows in its disjointed, discursive, and inconclusive nature. If Harari had spent less time meditating, maybe he would have found more time to answer the questions he raises. It’s still definitely worth reading, though.

This article first appeared in Forbes Magazine

Shooting the Messenger

“It was Facebook wot dunnit.”

Select the unpleasantness of your choice, and Facebook is almost certainly being blamed for it by someone, and probably a lot of someones. Also in the dock are YouTube and Twitter, with Instagram and Snapchat lurking about, keeping their heads down and hoping that nobody notices them.

The charge sheet is long. Facebook and the other social media have shortened our attention spans, leaving us easy prey to slick salesmen with plausible one-liners. They have corralled us all into echo chambers, so that we only ever hear voices telling us what we already think. We are now all isolated from the wider community. They sneakily deploy algorithms of such breathtaking sophistication that they can delve inside our neurons and suck out the information that constitutes their marrow, and then use that information against us like master hypnotists. OK, maybe I made that last bit up. But they definitely plug into our neocortex and inject a sort of digital heroin, forcing us, like enslaved machines, to spend hours online, clicking away to generate ad money.

It is because of all this skull-duggery (pun intended) that political discussion has become so heated, and people aren’t listening to each other any more.

Well, it has to be someone’s fault, doesn’t it, and the tech giants who own the social media are uniquely well-placed to attract universal opprobrium. For people on the political left, they are large companies which make a lot of money. For many on the left, capitalism is a conspiracy against the masses, profit is a Bad Thing, and a large profit is a Very Bad Thing Indeed. For people on the political right, the tech giants are run by suspiciously hippy-ish types, who give away their money and talk about Universal Basic Income. Their employees are a bunch of snowflake lefties who cannot bear to work with the military, and who excoriate anyone who doesn’t share their hatred of the patriarchy, and who dares to question the wisdom of affirmative action hiring and training policies. And people from both political wings can hold hands while condemning the tech giants for not paying enough tax.

Chart: Google and Facebook’s share of global ad revenues (WARC, December 2017)

The mainstream media is also furious with the tech giants, and understandably so: Google and Facebook stole their lunch. Local newspapers grew fat on a diet of classified ads, but these were the first casualty of the web. National newspapers depended much less on classified ads and more on display ads and cover prices, but these have also dwindled, as advertisers have discovered the charms of paying only for eyeballs which have actually scanned their messages, and which can be micro-targeted to make those messages more relevant.

This is not something to make light of. If today’s febrile political atmosphere tells us anything, it is that we need professional journalists who have genuinely mastered their craft, and who care about getting the story right as well as getting it first. We are far from figuring out all the business models we need in order to pay for this, and one way or another, pay we must.

I’m not here to defend the tech giants. They are very rich, they hire great lobbyists, and they can look after themselves. (By way of disclosure, I have never used Facebook, as I don’t trust myself not to lose whole afternoons, chatting with friends and looking at cat videos. I think Twitter and Reddit are fabulous, and LinkedIn is handy, although inexplicably clumsy.) But mis-diagnosing major social problems simply allows those problems to fester while causing new ones, and it is mis-diagnosis on a grand scale to blame social media for today’s vicious style of political debate. It implies that there was a halcyon past when the electorate took great care to inform itself of the arguments from both sides, thought deeply about the philosophical underpinnings of each position, and arrived at a sophisticated understanding of the issues of the day. In reality, before social media came along, people lived in the political echo chamber constructed by their newspaper of choice. Guardian readers didn’t check out the talking points promoted by the Daily Telegraph or vice versa, and likewise the Sun and the Daily Mirror. And if you’re looking for examples of the blatant exploitation of human appetites to boost sales, remember it was only in 2015 that The Sun stopped publishing photos of topless young women on page three.

The real reason for the bitterness of today’s political conversation is not the arrival of social media. It is the considerable success of the liberal social agenda.

The left is always angry. That is as it should be: if the left isn’t angry, then it’s not doing its job. The job of the left is to make people discontented with the current state of affairs, and to agitate to improve it. The world can never be equal enough or just enough, and it is the left’s job to keep pushing it in the right direction. The right, traditionally, is more relaxed. By and large it thinks that the world is in pretty good condition, that the institutions are doing a reasonable job, and that throwing all the cards in the air and risking anarchy is a terrible idea. Generally, both have a very good point. Societies should change: they should look for ways to solve problems, but they should do so in ways that will actually improve the lives of citizens, not to accommodate ideological whims. They should recognise that ordinary citizens today live better lives than the kings of a couple of centuries ago, and that over-zealous radicalism has caused at least as much misery as any other social force. Apart from religion, of course.

But something changed at the end of the first decade of the twenty-first century. Left-wingers complain that global economic policy has long been dominated by right-wing neoclassical orthodoxy. Be that as it may, towards the end of the twentieth century and into the start of the twenty-first, social policy in the developed world was driven strongly in a liberal direction. Strongly and quickly. Governments in developed countries now mostly spend between a third and two-fifths of GDP, and much of that spend is on social and welfare support. The treatment of homosexuals, women, and minority races has been greatly improved. Citizens were relieved of interference in their intimate lives by church and state, while on the other hand, health and safety officers worked to make construction sites and consumer goods less likely to kill and maim people.

As the economies of developed countries grew, their people became wealthier. They had fewer children, and became less willing to do menial work for low wages. This spurred new waves of immigration, and innumerable studies have shown that economic migration is a boon for both the country receiving the immigrant and the one they came from.

All this means change, and change is uncomfortable. And when immigration is from countries with sharply different cultures, and perhaps different skin colours, it is more obvious, and more uncomfortable. If women, gays, and ethnic minorities are advancing, the previously privileged populations might not do worse in absolute terms, but their privilege is undermined, and offers less reassurance.

It is no coincidence that the creation of the Tea Party in the US, the moment when the right started to get cross, was in 2009, the year of Obama’s inauguration. The election of a black president was perhaps the apogee of liberal values. It also coincided with the credit crunch and the start of the prolonged recession which it caused – a source of further discontent all round.

The extremists, the alt right and the Sandernistas, have a spring in their step, and the squishy middle, which is more-or-less neoclassical in economics and liberal in social values, is looking weak. This is an important battle, and it will take some years for its fog to lift.

Meanwhile, what of social media? If we accept they are a messenger that shouldn’t be shot, we shouldn’t just sit back and relax. We can do much better than leaving the present undifferentiated mess of facts and lies, thoughtful opinion and rabid conspiracy theory to contend on equal terms for the attention of the unwary. Editorials in the mainstream media call for social media platforms to be treated like media and to be regulated as such. But media regulation, whether by government or by industry bodies, is ponderous and generally timid. Thanks to the magic of the web, we can do better.

Reddit gives an idea of how. Many of the posts on Reddit are based on links to newspapers, magazines and other outlets, to which readers add comments – sometimes dumb, often insightful, and occasionally hilarious. Reddit automatically rates each source, and readers vote each others’ comments up or down. The most highly up-voted posts appear at the top of the page. Wikipedia is another site that crowd-sources opinion very effectively to fact-check and verify.
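To make the mechanism concrete, here is a minimal sketch of the kind of confidence-adjusted up/down-vote ranking this approach relies on – the Wilson score lower bound, which Reddit has reportedly used for its “best” comment sort. The comment names and vote counts below are invented for illustration; this is not a description of any platform’s actual code.

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the 'true' upvote fraction.

    Comments with only a handful of votes are pulled towards zero, so a 2-0
    comment does not outrank a 95-5 one. z = 1.96 corresponds to 95% confidence.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)

# Invented example data: (upvotes, downvotes) per comment.
comments = {"insightful": (95, 5), "hilarious": (40, 10), "brand new": (2, 0)}
ranked = sorted(comments, key=lambda c: wilson_lower_bound(*comments[c]), reverse=True)
print(ranked)  # best-supported comments first
```

The same idea – weighting crowd judgements by how much evidence sits behind them – is one plausible building block for the semi-automated fact-checking discussed below.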

Using these ideas and new ones, and with the thousands of clever technologists and user experience designers they employ, the tech giants can semi-automate fact-checking, and over time, make social media better than any media or platform we have known so far. The process of getting there won’t be fast, and it will be messy: the likes of Trustpilot and TripAdvisor are afflicted with false reviews on an almost industrial scale. But if problems like this are soluble, and they probably are, then we can have media and platforms where readers can easily assess the veracity of any piece of content, guided by judgements which even out and transcend partisan opinion.

Looking ahead, the tech giants are going to have the mother of all PR problems when AI-powered automation starts causing job churn, but for the time being, if we can refrain from shooting the messenger, maybe we can take the mess out of the message.


Algocracy

Powerful new technologies can produce great benefits, but they can often produce great harm. Artificial intelligence is no exception. People have numerous concerns about AI, including privacy, transparency, security, bias, inequality, isolation, oligopoly, and killer robots. One which perhaps gets less attention than it deserves is algocracy.

Decisions about the allocation of resources are being made all the time in societies, on scales both large and small. Because markets are highly efficient systems for allocating resources in economies characterised by scarcity, capitalism has proved highly effective at raising the living standards of societies which have adopted it. Paraphrasing Churchill, it is the worst possible economic system except for all the others.

Historically, markets have consisted of people. There may be lots of people on both sides of the transaction (flea markets are one example, eBay is another). Or there may be few buyers and many sellers (farmers selling to supermarket chains) or vice versa (supermarket chains selling to consumers). But typically, both buyers and sellers were humans. That is changing. 

Machine-made decisions

Algorithms now take many decisions which were formerly the responsibility of humans. They initiate and execute many of the trades on stock and commodity exchanges. They manage resources within organisations providing utilities like electricity, gas and water. They govern important parts of the supply chains which put food on supermarket shelves. This phenomenon will only increase.

As our machines get smarter, we will naturally delegate decisions to them which would seem surprising today. Imagine you walk into a bar and see two attractive people at the counter. Your eye is drawn to the blond but your digital assistant (located now in your glasses rather than your phone) notices that and whispers to you, “hang on a minute: I’ve profiled them both, and the red-head is a much better match for you. You share a lot of interests. Anyway, the blond is married.”

In his 2006 book “Virtual Migration”, Indian-American academic A. Aneesh coined the term “algocracy”. The difficulty with it has been explored in detail by the philosopher John Danaher, who sets the problem up as follows. Legitimate governance requires transparent decision-making processes which allow for involvement by the people affected. Algorithms are often not transparent and their decision-making processes do not admit human participation. Therefore algorithmic decision-making should be resisted.

Danaher thinks that algocracy poses a threat to democratic legitimacy, but does not think that it can be, or should be, resisted. He thinks there will be important costs to embracing algocracy and we need to decide whether we are comfortable with those costs.

What not to delegate?

Of course many of the decisions being delegated to algorithms are ones we would not want returned to human hands – partly because the machines make the decisions so much better, and partly because the intellectual activity involved is deathly boring. It is not particularly ennobling to be responsible for the decision whether to switch a city’s street lights on at 6.20 or 6.30 pm, but the decision could have a significant impact. The additional energy cost may or may not be offset by the improvement in road safety, and determining that equation could involve collating and analysing millions of data points. Much better work for a machine than a human, surely.
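To make that equation concrete, here is a toy back-of-the-envelope sketch of the street-light trade-off. Every figure in it is invented purely for illustration; a real system would estimate these quantities from the millions of data points mentioned above.

```python
# Toy cost-benefit comparison for switching street lights on ten minutes earlier.
# All numbers are invented for illustration only.

LAMPS = 100_000              # street lights in the city
WATTS_PER_LAMP = 60          # power draw per luminaire
PRICE_PER_KWH = 0.15         # energy price, £ per kWh
EXTRA_MINUTES = 10           # switching on at 6.20 instead of 6.30 pm
DAYS_PER_YEAR = 365

extra_kwh = LAMPS * WATTS_PER_LAMP / 1000 * (EXTRA_MINUTES / 60) * DAYS_PER_YEAR
extra_energy_cost = extra_kwh * PRICE_PER_KWH

# Suppose earlier lighting is estimated to prevent a handful of dusk collisions.
COLLISIONS_AVOIDED_PER_YEAR = 12
COST_PER_COLLISION = 15_000  # average damage, injury and delay cost, £

safety_benefit = COLLISIONS_AVOIDED_PER_YEAR * COST_PER_COLLISION

print(f"Extra energy cost: £{extra_energy_cost:,.0f} per year")
print(f"Estimated safety benefit: £{safety_benefit:,.0f} per year")
print("Switch on earlier" if safety_benefit > extra_energy_cost else "Keep the later switch-on")
```

Trivial to state, tedious to estimate well – which is exactly why it is better work for a machine than a human.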

Other applications make us much less sanguine. Take law enforcement: a company called Intrado provides an AI scoring system to the police in Fresno, California. When an emergency call names a suspect, or a house, the police can “score” the danger level of the person or the location and tailor their response accordingly. Other forces use a “predictive policing” system called PredPol which forecasts the locations within a city where crime is most likely to be carried out in the coming few hours. Optimists would say this is an excellent way to deploy scarce resources. Pessimists would reply that Big Brother has arrived.

AI is already helping to administer justice after the event. In 2016 the San Francisco Superior Court began using an AI system called PSA to help decide whether alleged offenders should be released before trial. They got the tool free from the Laura and John Arnold Foundation, a Texas-based charity focused on criminal justice reform. Academics studying this area have found it very hard to obtain information about how these systems work: they are often opaque by their nature, and they are also often subject to commercial confidentiality.

There are many decisions which machines could make better than humans, but we might feel less comfortable having them do so. The allocation of new housing stock, the best date for an important election, the cost ceiling for a powerful new drug, for instance. Arguments about which decisions should be made by machines, and which should be reserved for humans, are going to become increasingly common and increasingly vehement. Regardless of whether they make better decisions than we do, not everyone is going to be content (to paraphrase Grace Jones) to be a slave to the algorithm.

Information is power. Machines may intrude on our freedom without actually making decisions. In September 2017 a research team from Stanford University was reported to have developed an AI system which could do considerably more than just recognise faces. It could tell whether their owners were straight or gay. The idea of a machine with “gaydar” is startling; it becomes shocking when you consider the uses it might be put to – in countries where homosexuals are persecuted and even prosecuted, for instance. The Stanford professor who led the research later said that the technology would probably soon be able to predict with reasonable accuracy a person’s IQ, their political inclination, or their predisposition towards criminality.

Things are getting weird.

 

Just Another Day in Utopia

a Guest Post by Stuart Armstrong

Stuart is a Fellow in Artificial Intelligence and Machine Learning at Oxford University’s Future of Humanity Institute. He works on how to map humanity’s partially defined values onto the potential goals of AI. He is probably best known for his collaboration with DeepMind on how to stop a developing AI resisting being switched off after showing signs of going rogue – see here.

He has also written some excellent short science fiction stories, such as the one this is excerpted from, which seems to me an admirable blend of Iain M. Banks and Roger Williams’ 1994 novella, “The Metamorphosis of Prime Intellect”. There are links to the full version, and to another of Stuart’s stories, at the end.


Ishtar went to sleep in the arms of her lover Ted, and awoke locked in a safe, in a cargo hold of a triplane spiralling towards a collision with the reconstructed temple of Solomon.

Again! Sometimes she wished that a whole week would go by without something like that happening. But then, she had chosen a high excitement existence (not maximal excitement, of course – that was for complete masochists), so she couldn’t complain. She closed her eyes for a moment and let the thrill and the adrenaline warp her limbs and mind, until she felt transformed, yet again, into a demi-goddess of adventure. Drugs couldn’t have that effect on her, she knew; only real danger and challenge could do that.

Right. First, the safe. She gave the inner door a firm thud, felt it ring like a bell, heard the echo return – and felt the tumblers move. So, a sound-controlled lock, then. A search through her shoes produced a small pebble which sparked as she dashed it against the metal. Trying to ignore the ominous vibration as the triplane motor shook itself to pieces, she constructed a mental image of the safe’s inside from the brief flashes of light. Symmetric gold and gilded extravagances festooned her small prison – French Baroque decorations, but not yet Rococo. So Louis XIV period. She gave the less visited parts of her mind a good dusting, trying to remember the tunes of Jean-Baptiste Lully, the period’s most influential composer. She hoped it wasn’t any of his ballets; she was much better with his operas. The decorations looked vaguely snake-like; so she guessed Lully’s ‘Persée’ opera, about the death of the Medusa.

The engine creaked to a worrying silence as she was half-way through humming the Gorgon theme from the opera. Rushing the rest of the composition, she felt the door shift, finally, to a ten-times speeded up version of Andromeda’s response to Perseus’s proposal. She kicked the door open, exploded from the safe, took in the view of the temple of Solomon rushing up towards her, seconds away, snatched a picture from the floor, grabbed an axe from the wall, hacked off one of the wings with three violent cuts, and jumped out of the plane after it.

Behind her, the plane disintegrated in mid-air as the temple lasers cut it to shreds and she fell through space, buffeted by the wind, not losing her grip on the mangled wing. She had maybe thirty seconds to tie herself to the wing, using the object’s own canvas as binding, and she rushed through that. The Machines wouldn’t allow the fall to kill her, of course, but it would hurt quite a bit (another of her choices – she’d allowed herself to feel moderate amounts of pain). It would put back her attempts to ever find Ted, and, most importantly of all, it would be crushingly embarrassing socially.

Once she was lashed to the plummeting piece of wood and canvas, and she was reasonably confident that the fall was slow enough, and her knots secure enough, she finally looked at the photograph she had grabbed during her explosive exit from the plane. It showed Ted, trussed up in chains but smiling and evidently enjoying the novel experience. Underneath was a finely engraved note: “If you ever want to see your lover again, bring me the missing Stradivarius by noon tomorrow. Nero the 2nd”. Each capital letter was beautifully decorated with heads on spikes.

So! It seemed that her magnificent enemy Nero had resorted to kidnapping in order to get his way. It wasn’t as if Nero could actually harm Ted – unlike Ishtar, her lover had never chosen to accept any level of pain above mild, brief discomfort. But if he was ‘killed’, Ted would feel honour-bound to never see her again, and she wasn’t prepared to accept that. On the other hand, if she gave Nero her last Stradivarius, he might destroy it for good. It was her own choice: she had requested that her adventures have real meaning, with real consequences. If she failed, and if Nero so chose, a piece of humanity’s cultural history could be destroyed forever, permanently stymieing her attempts to reconstruct Stradivarius’s violin-making techniques for the modern world. Culture or love, what to choose? Those were her final thoughts before she crashed into an oak tree shaped like a duck.

She returned to bleary consciousness fifteen minutes later. Her fainting was a sign that the Machines had judged her escape attempt to be only a partial success; she would have to try harder next time. In the meantime, however, she would have to deal with a shotgun pressed into her face and the gorgeous man at the other side of it shouting “Get off my property!”.


“Pause,” she said softly. The man nodded; she had temporarily paused her adventure, so that she wouldn’t have to deal with danger or pursuit for the next few minutes, and so that this guy wouldn’t have to get her away immediately to protect his property from collateral damage. Some Adventurers disdained the use of the pause, claiming it ruined the purity of their experience. But Ishtar liked it; it gave her the opportunity, as now, of getting to know the people she bumped into. And this person definitely seemed to be in the ‘worth getting to know’ category. He put down his shotgun without a word and picked up his paintbrush, applying a few more touches to the canvas in front of him.

After disengaging herself from both the mangled wing and the duck-shaped tree (she’d have a dramatic scar from that crash, if she chose to accept it), she worked her way round to what he was painting. It was a rather good neo-impressionistic portrait of her, unconscious in the tree, pieces of torn canvas around her, framed by broken branches and a convenient setting moon. Even with his main subject out of the frame, as it were, he still seemed intent on finishing his painting.

“Why did you splice your tree’s genes to make it look like a duck?” she asked, when the silence had gone on, in her estimation, for ten times as long as it should have. He had done a pretty good job with that oak, in fact; the feathers and the features were clear and distinct amongst the wood – or had been, until someone had crashed a triplane wing into the middle of it.

“I didn’t,” he said. “That’s normal oak; I just trim it and tie it.”

“But…” she looked at it again in astonishment; the amount of work involved to get that detail from natural wood was beyond belief. And oak wasn’t exactly a fast growing plant… “It must have taken you decades!”

“Two centuries,” he answered with dour satisfaction. “All natural, no help from the Machines.” He waved his hand up the side of the hill. “I’m making the perfect landscape. And then, I shall paint it.”

Her gaze followed his hand. The scenery was a tapestry of secret themes. Hedges, streams, tree-rows, pathways, ridges and twined lianas carved the landscape into hidden pockets of beauty. Each pocket was a private retreat, cut off from the others and from the rest of the world – and yet all were visible at once, the layout a cunning display of multiple intimacy. Here and there were formal gardens, with lines of flowers all at attention, row after row, shading across colour and size from huge orchids to tiny snowdrops. Some pockets were carefully dishevelled, mini deserts or prairies or jungles, perfect fragments of wild untamed nature that could only exist at the cost of supreme artifice. There were herb gardens, rock gardens, orchards, water parks and vineyards; modelled on ancient Persia, England, Japan, France, Korea, Spain, the Inca and Roman empires – and others she didn’t immediately recognise.

And then a few touches of fancy, such as the segment they were in, with the oaks shaped into animals. Further off, a dramatic slew of moss-coated sculptures, with water pouring out from every nook and cranny. Then a dynamic garden, with plants blasting each other with discharges of pollen, set-up in a simple eight-beat rhythm. And a massive Baobab, its limbs plated with a forest of tiny bonsai trees.

“What’s your safety level for all this?” she asked. If he’d chosen total safety, he wouldn’t have needed her off his property, as the Machines wouldn’t have allowed his creations to be damaged by her adventure. But surely he wouldn’t have left such artistic creation vulnerable to the fallout of Adventurers or random accidents…

 “Zero,” he said.

“What?” No-one chose zero safety; it just wasn’t done.

“As I said, no help from the Machines.” He looked at her somewhat shyly, as she stared in disbelief. “It’s been destroyed twice so far, but I’ll see it out to the end.”

No wonder he’d wanted her out… He only had himself to count on for protection, so he had to chase out any potential disturbances. She felt deeply moved by the whole grandiose, proud and quixotic project. Acting almost – almost – without thinking, she drew out a battered papyrus scroll: “Can you keep this for me?”

“What is it?” he asked, before frowning and tearing up his painting with a sigh. Only then did he look at the scroll, and at her.

“It’s my grandfather’s diary,” she said, “with my own annotations. It’s been of great use and significance to me.” Of course it had been – the Machines would have gone to great pains to integrate such a personal and significant item deeply into her adventures. “Could you keep it for my children?” When she finally found the right person to have them with, she added mentally. Ever since her split with Albert… No, that was definitely not what she needed to be thinking right now. Focus instead on this gorgeous painter, name still unknown, and his impossible dreams.

“What was he like?” he asked.

“My grandfather? Odd, and a bit traditional. He brought me up. And when we were all grown up, all his grandchildren, he decided we needed, like in ancient times, to lose our eldest generation.”

“He died?” The painter sounded sceptical; there were still a few people choosing to die, of course, but those events were immensely rare and widely publicised.

“No, he had his intelligence boosted. Recursively. Then he withdrew from human society, to have direct philosophical conversations with the Machines.”

He thought for a while, then took the scroll from her, deliberately brushing her fingers as he did so. “I’ll keep this. And I’m sure your children will find their ways to me.” An artefact, handed down and annotated through the generations, and entrusted to a quirky landscape artist who laboured obsessively with zero safety level? It was such a beautiful story hook, there was no way the Machines wouldn’t make use of it. As long as one of her children had the slightest adventurous streak, they’d end up here.

“This feels rather planned,” he said. “I expect it’s not exactly a coincidence you ended up here.”

“Of course not.” He was reclusive, brilliant, prickly; Ishtar realised a subtle seduction would be a waste of time. “Shall we make love?”, she asked directly.

“Of course.” He motioned her towards a bed of soft blue moss that grew in the midst of the orchids. “I have to warn you, I insist that the pleasure-enhancing drugs we use be entirely natural, and picked from my garden. Let me show you around first, and you can make your choice.” They wandered together through the garden, shedding their clothes and choosing their pleasures.

Later, after love, she murmured “unpause” before the moment could fade. “Get off my property!” he murmured, then kissed her for the last time. She dived away, running from the vineyard and onto the street, bullets exploding overhead and at her feet.

Three robot gangsters roared through the street in a 1920 vintage car, spraying bullets from their Tommy guns. The bullets ricocheted off the crystal pavement and gently moving wind-houses, causing the passers-by (all of whom had opted for slight excitement that week) to duck enthusiastically to the floor, with the bullets barely but carefully missing them. Diving round a conveniently placed market stall a few seconds before it exploded in a hail of hurtling lead, she remembered a call she needed to return. She murmured her friend’s name, and a virtual screen opened with the corresponding face.

“Sigsimund, bit busy to talk now, but can you meet me in the Temple of Tea in about five…” a laser beam from a circling drone sliced off the pavement she was standing on, while three robot samurai rose to bar her passage, katanas drawn (many humans were eager and enthusiastic to have a go at being evil masterminds, but few would settle for being minions). “…in about ten minutes? Lovely, see ya there!”


The full version of this story is here: https://www.lesswrong.com/posts/sMsvcdxbK2Xqx8EHr/just-another-day-in-utopia 

A longer one set in a similar universe is here: https://www.lesswrong.com/posts/Ybp6Wg6yy9DWRcBiR/the-adventure-a-new-utopia-story

Time for Europe to step up

It is widely known that investment in artificial intelligence (AI) is concentrated in two Pacific Rim countries, China and the USA. But the strength of this duopoly is not generally appreciated, so here are two recent data points which throw it into sharp relief.

First, we have learned that Amazon’s R&D spend has reached 50% of the total spent by the UK on R&D. This means all the spend on any kind of R&D by the UK government and all the UK’s companies and universities. This astonishing fact becomes even more significant when you consider that a great deal of Amazon’s R&D spend is on AI, whereas not much of the UK’s R&D spend is on AI.

The second data point is a piece of research revealed by Jeffrey Ding, a Sinologist at the Future of Humanity Institute in Oxford, at the CogX conference in June. He started with the Chinese government’s widely-reported plan to invest $150bn in AI by 2020, and wondered if this included investments by China’s municipal and regional authorities. He discovered that it didn’t, and he managed to obtain data for much of that spend as well. It brings China’s total planned government investment in AI by 2020 to a breath-taking $429bn.

Earlier at the same conference, Matt Hancock, who, as minister for digital in the UK’s Department for Digital, Culture, Media and Sport (DCMS), is the nearest thing the UK has to a minister for AI, claimed that with a projected investment of £1bn, the UK was “in the front rank of AI countries.” This frankly preposterous claim was echoed by the Mayor of London, on the publication of a report which trumpeted London as the centre of AI within Europe, which is not unlike claiming to be the best champagne producer in Maidstone, Kent.

It is misleading to describe the development of AI as a race, partly because there is no fixed point when the process stops and one team is declared the winner, but mostly because the enormous benefits that will flow from the development of better and better AI will accrue to people all over the world. Just like the development of the smartphone did. However, there are at least three powerful arguments why Europe really should make more of a contribution to the global project of developing AI.

The first argument is that AI will deliver enormous benefits to the world, and the faster we reap those benefits, the better. To cite just two of many examples, AI will improve healthcare so that people who would otherwise suffer or die will remain healthy, and self-driving cars will stop the appalling holocaust of 1.2 million deaths each year (and 50 million maimings) which human drivers cause. Europe has great wealth, great universities, and millions of smart and energetic people; it can and should be contributing more to realising these benefits.

The second argument is gloomier, but perhaps also more compelling. Europeans should not feel relaxed about the development of AI, humanity’s most powerful technology, being so heavily concentrated elsewhere.

Jeffrey Ding argues that there is a far more lively debate in China about the government infringing on individual privacy than we in the West usually think. If so, this is great news, but it is hard to believe that China’s current approach to the development of AI would be acceptable in Europe. Most people here would be uncomfortable with schools using face recognition and other AI techniques to check whether the children are paying attention in class, and the way AI is being used to control the Uygur population in the western Chinese province of Xinjiang would also raise serious objections.

Many Europeans are also feeling slightly nervous about the great AI power to their west. So far, the development of AI in the USA has been a project for the private sector, but the government is showing signs of waking up to its importance, particularly with regard to military applications. The USA is currently a vibrant democracy, and has long been an invaluable ally. But things can change. President Trump’s ruminations about NATO and his willingness to initiate a trade war against the EU mean that Europe cannot be certain that America will always share the benefits of its AI prowess.

The third reason is that AI might well be the source of much of the value in the economy in two or three decades’ time. Countries and regions which play only a minor role in developing the technology are likely to find themselves enfeebled.

To be clear, these are not arguments for autarky or self-reliant isolationism. We will all do better if the countries of the world collaborate to develop AI together, and share its benefits openly. That is the approach which Europe should champion. But sometimes, while planning for the best, it is wise to have a backup plan for the worst.

Jürgen Schmidhuber, one of the foundation figures of modern AI, argues that AI is currently dominated by the Pacific Rim countries for two main reasons: they both have huge single markets, and they both pursued muscular industrial strategies to promote the development of their technology industries. (Silicon Valley got started as a tech hub because of the sinking of the Titanic. To prevent a recurrence of that tragedy, the authorities decided that all ships must have powerful ship-to-shore radios, and it so happened that Silicon Valley was the home of a nascent radio industry. Later, the military research organisation DARPA funded cutting-edge tech research there, especially after America’s Sputnik moment – which was, of course, Sputnik.)

Schmidhuber urges that Europe should strengthen its single market (woops, Brexit – yet another reason for the People’s Vote), and that it should enact similarly forward-thinking industrial policies. He also observes that while the Pacific Rim countries clearly dominate the internet of humans, the internet of things (IoT) is still up for grabs. He argues that Europe is home to the leading companies in the development and manufacture of many of the component parts of the IoT, so the game is still wide open.

It is time for Europe to step up.
