Time for Europe to step up

It is widely known that investment in artificial intelligence (AI) is concentrated in two Pacific Rim countries, China and the USA. But the strength of this duopoly is not generally appreciated, so here are two recent data points which throw it into sharp relief.

First, we have learned that Amazon’s R&D spend has reached 50% of the UK’s total R&D spend – that is, everything spent on R&D of any kind by the UK government and all the UK’s companies and universities. This astonishing fact becomes even more significant when you consider that a great deal of Amazon’s R&D spend goes on AI, whereas relatively little of the UK’s does.

The second data point is a piece of research revealed by Jeffrey Ding, a Sinologist at the Future of Humanity Institute in Oxford, at the CogX conference in June. He started with the Chinese government’s widely-reported plan to invest $150bn in AI by 2020, and wondered if this included investments by China’s municipal and regional authorities. He discovered that it didn’t, and he managed to obtain data for much of that spend as well. It brings China’s total planned government investment in AI by 2020 to a breath-taking $429bn.

Earlier at the same conference, Matt Hancock, who, as minister for digital in the UK’s Department for Culture, Media and Sport (DCMS), is the nearest thing the UK has to a minister for AI, claimed that with a projected investment of £1bn, the UK was “in the front rank of AI countries.” This frankly preposterous claim was echoed by the Mayor of London on the publication of a report which trumpeted London as the centre of AI within Europe – rather like claiming to be the best champagne producer in Maidstone, Kent.

It is misleading to describe the development of AI as a race, partly because there is no fixed point when the process stops and one team is declared the winner, but mostly because the enormous benefits that will flow from the development of better and better AI will accrue to people all over the world. Just like the development of the smartphone did. However, there are at least three powerful arguments why Europe really should make more of a contribution to the global project of developing AI.

The first argument is that AI will deliver enormous benefits to the world, and the faster we reap those benefits, the better. To cite just two of many examples, AI will improve healthcare so that people who would otherwise suffer or die will remain healthy, and self-driving cars will stop the appalling holocaust of 1.2 million deaths each year (and 50 million maimings) which human drivers cause. Europe has great wealth, great universities, and millions of smart and energetic people; it can and should be contributing more to realising these benefits.

The second argument is gloomier, but perhaps also more compelling. Europeans should not feel relaxed about the development of AI, humanity’s most powerful technology, being so heavily concentrated elsewhere.

Jeffrey Ding argues that there is a far more lively debate in China about the government infringing on individual privacy than we in the West usually think. If so, this is great news, but it is hard to believe that China’s current approach to the development of AI would be acceptable in Europe. Most people here would be uncomfortable with schools using face recognition and other AI techniques to check whether the children are paying attention in class, and the way AI is being used to control the Uyghur population in the western Chinese province of Xinjiang would also raise serious objections.

Many Europeans are also feeling slightly nervous about the great AI power to their west. So far, the development of AI in the USA has been a project for the private sector, but the government is showing signs of waking up to its importance, particularly with regard to military applications. The USA is currently a vibrant democracy, and has long been an invaluable ally. But things can change. President Trump’s ruminations about NATO and his willingness to initiate a trade war against the EU mean that Europe cannot be certain that America will always share the benefits of its AI prowess.

The third reason is that AI might well be the source of much of the value in the economy in two or three decades’ time. Countries and regions which play only a minor role in developing the technology are likely to find themselves enfeebled.

To be clear, these are not arguments for autarky or self-reliant isolationism. We will all do better if the countries of the world collaborate to develop AI together, and share its benefits openly. That is the approach which Europe should champion. But sometimes, while planning for the best, it is wise to have a backup plan for the worst.

Jürgen Schmidhuber, one of the foundational figures of modern AI, argues that AI is currently dominated by the Pacific Rim countries for two main reasons: they both have huge single markets, and they both pursued muscular industrial strategies to promote the development of their technology industries. (Silicon Valley got started as a tech hub because of the sinking of the Titanic. To prevent a recurrence of that tragedy, the authorities decided that all ships must have powerful ship-to-shore radios, and it so happened that the area which became Silicon Valley was home to a nascent radio industry. Later, the military research organisation DARPA funded cutting-edge tech research there, especially after America’s Sputnik moment – which was, of course, Sputnik.)

Schmidhuber urges that Europe should strengthen its single market (whoops, Brexit – yet another reason for the People’s Vote), and that it should enact similarly forward-thinking industrial policies. He also observes that while the Pacific Rim countries clearly dominate the internet of humans, the internet of things (IoT) is still up for grabs. He argues that Europe is home to the leading companies in the development and manufacture of many of the component parts of the IoT, so the game is still wide open.

It is time for Europe to step up.



Sunny side up


Satisfying stories feature a hero or heroine facing jeopardy and triumphing over adversity. This explains why most science fiction is dystopian: that’s where the jeopardy is.

This gives us a problem. Science fiction provides the metaphors we use to think about and discuss the future, and unfortunately, for every Star Trek there are multiple Star Wars and Terminators. Fear of the future stops many of us from thinking about it seriously. Maybe we should offset the likes of Black Mirror with some White Mirror.

So here is a description of a world in which AI has turned everything upside down – for the good. It’s a scenario, not a forecast, but maybe if we’re smart we can get there.

2025: Panic averted


Vehicles without human drivers are becoming a common sight in cities and towns all over the world. Professional drivers are starting to be laid off, and it is clear to everyone that most of them will be redundant within a few years. At the same time, employment in call centres and the retail industry is hollowing out as increasingly sophisticated digital assistants are able to handle customer enquiries, and the move to online shopping accelerates. The picking function in warehouses has been cost-effectively automated, and we are starting to see factories which normally have no lights on because no humans are working there.

Farmers are experimenting with robots for both crops and animal husbandry. Small wheeled devices patrol rows of vegetables, interrogating plants which don’t appear to be healthy specimens, and eliminating individual weeds with herbicides, or targeted jets of very hot water. Cattle are entirely content to be milked by robots, so fewer members of the declining population of farm workers still have to get up before daybreak every day.

Construction firms are experimenting with pre-fabricated units, but most construction projects remain subject to great variability of on-site conditions. Robots which can handle this unpredictability are still too expensive to replace human construction workers.

The tedious jobs which traditionally provided training wheels for accountants and lawyers (“ticking and bashing” for auditors and “disclosure” or “discovery” for lawyers) are increasingly being handled by machines. Skeptics about technological unemployment point out that the amount of work carried out by professional firms has actually increased, as whole categories of previously uneconomic jobs have become possible, and that professionals are kept busy because the machines still need training on each new data set. But fewer trainees are being hired, and thoughtful practitioners are asking where tomorrow’s qualified lawyers and accountants will come from.

Many companies have laid off some workers, but most have reduced their headcount primarily by natural wastage: not replacing people who retired or moved on. As a result, there have been fewer headlines about massive redundancies than some people feared, but at the same time it has become much harder for people to find new jobs. The unemployment rate among new graduates is at historically high levels.

But instead of panicking, the populations of most countries are reassured by the consensus which evolved when governments and philanthropists started to sponsor serious work on the issue of technological unemployment in the late 2010s. Slowly at first, and then rapidly, it became conventional wisdom that the Star Trek economy is achievable.

Meanwhile, governments are investing heavily in retraining to help workers cope with rapid job churn. AI personal tutor systems show promise, but are still rudimentary.

2035: Transition


Large numbers of people are now unemployed, and welfare systems are swollen. Universal Basic Income was not the solution: there was no point paying generous welfare to the many people who remained in lucrative employment, and a basic income was insufficient for the rest. The new welfare has many incarnations, and interesting new experiments are springing up all the time. Concepts like PCI (Progressive Comfortable Income), and HELP (Human Elective Leisure Programme) are being discussed not only in the think tanks, but in kitchens, bars and restaurants all over the world.

Professional drivers are now extremely rare. Their disappearance was resisted for a while – road rage against the machines was ferocious in places. Autonomous vehicles were attacked, their cameras and LIDARs sprayed with paint. Some high-profile arrests and jail sentences quickly put a stop to the practice. Most jurisdictions now have roads which are off-limits to human drivers. Insurance premiums have plummeted, and fears about self-driving cars being routinely hacked have not been realised.

Deliveries of fast food and small parcels in major cities are now mostly carried out by autonomous drones, operating within their own designated level of airspace. Sometimes the last mile of a delivery is carried out by autonomous wheeled containers. For a while, teenagers delighted in “bot-tipping”, but with all the cameras and other sensory equipment protecting the bots, the risk of detection and punishment became too high.

In manufacturing, 3D printing has advanced less quickly than many expected, as it remained more expensive than mass production. But it is common in niche applications, like urgently required parts, components with complex designs, and situations where products are bespoke, as in parts of the construction industry. These niche uses have an impact on businesses and the economy far greater than their modest output levels would suggest.

On construction sites, human supervision is still the norm for laying foundations, but pre-fabricated (often 3D-printed) walls, roofs and whole building units are becoming common. Robot labour and humans in exoskeletons are increasingly used to assemble them. Drones populate the air above construction sites, tracking progress and enabling real-time adjustments to plans and activities.

The Internet of Things has materialised, with everyone receiving messages continuously from thousands of sensors and devices implanted in vehicles, roads, trees, buildings, etc. Fortunately, the messages are intermediated by personal digital assistants, which have acquired the generic name of “Friends”, but whose owners often endow them with pet names. New types of relationship and etiquette are evolving to govern how people interact with their own and other people’s Friends, and what personalities the Friends should present.

There is lively debate about the best ways to communicate with Friends and other computers. One promising technology is tattoos worn on the face and around the throat which have micro-sensors to detect and interpret the tiny movements when people sub-vocalise, i.e., speak without actually making a noise. The tattoos are usually invisible, but some people have visible ones which give them a cyborg appearance. Brain-Computer Interfaces (BCI) have made less progress than their early enthusiasts expected.

A growing amount of entertainment and personal interaction is mediated through virtual reality. Good immersive VR equipment is now found in most homes, and it is increasingly rare to see an adolescent in public outside school hours. All major movies made by Netflix, Hollywood and Bollywood are now produced in VR, along with all major video games. To general surprise, levels of literacy – and indeed book sales – have not fallen. Many people now have more time and energy for reading. In a number of genre categories, especially romance and crime, the most popular books are written by AIs.

Major sporting competitions have three strands: robots, augmented humans, and un-augmented humans. Audiences for the latter category are dwindling.

Dating sites have become surprisingly effective. They analyse videos of their users, and allocate them to “types” in order to match them better. They also require their members to provide clothing samples from which they extract data about their smells and their pheromones. The discovery that relationship outcomes can be predicted with surprising accuracy with these kinds of data has slashed divorce rates.

Opposition to the smartphone medical revolution has subsided in most countries, and most people obtain diagnoses and routine health check-ups from their “Friends” several times a week. Automated nurses are becoming increasingly popular, especially in elder care.

Several powerful genetic manipulation technologies are now proved beyond reasonable doubt to be effective, but backed by public unease, regulators continue to hold up their deployment. Cognitive enhancement pharmaceuticals are available in some countries under highly regulated circumstances, but are proving less effective than expected. There are persistent rumours that they are deliberately being engineered that way.

Ageing is increasingly seen as an enemy which can be defeated.

Education is finally undergoing its digital revolution. Customised learning plans based on continuous data analysis and personal AI tutors are becoming the norm. Teachers are becoming coaches and mentors rather than instructors.

2045: The Star Trek Economy


Artificial intelligence has made companies so efficient that the cost of most non-luxury goods and services is close to zero. Few people pay more than a token amount for entertainment or information services, which means that education and world-class healthcare are also much improved in quality and universally available. The cost of energy is dramatically reduced also, as solar power can now be harvested, stored and transmitted almost for free. Transportation involves almost no human labour, so with energy costs low, people can travel pretty much wherever they want, whenever they want. The impressive environments available in virtual reality do a great deal to offset the huge demand for travel that this might otherwise have created.

Food production is almost entirely automated, and the use of land for agriculture is astonishingly efficient. Vertical farms play an important role in high-density areas, and wastage is hugely reduced. The quality of the housing stock, appliances and furniture is being continuously upgraded. A good standard of accommodation is guaranteed to all citizens in most developed countries, although of course there are always complaints about the time it takes to arrive. Personalised or more luxurious versions are available at very reasonable prices to those still earning extra income. Almost no-one in developed countries now lives in cramped, damp, squalid or noisy conditions. Elsewhere in the world, conditions are catching up fast.

Other physical goods like clothes, jewellery and other personal accessories, equipment for hobbies and sports, and a bewildering array of electronic equipment are all available at astonishingly low cost. The Star Trek economy is almost mature. But access to goods and some services is still rationed by price. Nobody – in the developed world at least – wants for the necessities of civilised life, but nobody who is not employed can afford absolutely everything they might wish for. It is generally accepted that this is actually a good thing, as it means the market remains the mechanism for determining what goods are produced, and when.

Unemployment has passed 75% in most developed countries. Among those still working, nobody hates their job: people only do work that they enjoy. Everyone else receives an income from the state, and there is no stigma attached to being unemployed, or partially employed. In most countries the citizens’ income is funded by taxes levied on the minority of wealthy people who own most of the productive capital in the economy, and in particular on those who own the AI infrastructure. The income is sufficient to afford a very high standard of living, with access to almost all digital goods being free, and most physical goods being extremely inexpensive.

In many countries, some of the wealthy people have agreed to transfer the means of production and exchange into communally owned, decentralised networks using blockchain technology. Those who do this enjoy the sort of celebrity and popularity previously reserved for film and sports stars.

Some countries mandated these transfers early on by effectively nationalising the assets within their legislative reach, but found that their economies were stagnating, as many of their most innovative and energetic people emigrated. Worldwide, the idea is gaining ground that private ownership of key productive assets is distasteful. Most people do not see it as morally wrong, and don’t want it to be made illegal, but it is often likened to smoking in the presence of non-smokers. This applies particularly to the ownership of facilities which manufacture basic human needs, like food and clothing, and to the ownership of organisations which develop and deploy the most essential technology – the technology which adds most of the value in every industry sector: artificial intelligence.

The gap in income and wealth between rich and poor countries has closed dramatically. This did not happen because of a transfer of assets from the West to the rest, but thanks instead to the adoption of effective economic policies, the eradication of corruption, and the benign impact of technology in the poorer countries.

Another concern which has been allayed is that life without work would deprive the majority of people of a sense of meaning in their lives. Just as amateur artists were always happy to paint despite knowing that they could never equal the output of a Vermeer, so people now are happy to play sport, write books, give lectures and design buildings even though they know that an AI could do any of those things better than them.

Not everyone is at ease in this brave new world, however. Around 10% of the population in most countries suffers from a profound sense of frustration and loss, and either succumbs to drugs or indulges almost permanently in escapist VR entertainment. A wide range of experiments is under way around the world, finding ways to help these people join their friends and families in less destructive or limiting lifestyles. Huge numbers of people outside that 10% have occasional recourse to therapy services when they feel their lives becoming slightly aimless.

Governments and voters in a few countries resisted the economic singularity, seeing it as a de-humanising surrender to machine rule. Although they found economically viable alternatives at first, their citizens’ standard of living quickly fell far behind. Most of these governments have now collapsed like the communist regimes of Eastern Europe in the early 1990s, and there are persistent rumours that President-for-life Putin met a very grisly end in 2036. The other hold-outs look set to follow – hopefully without violence.

Significant funds are now allocated to radical age extension research, and there is talk of “longevity escape velocity” being within reach – the point at which science adds more than a year to your life expectancy each year. Most forms of disability are now offset by implants and exoskeletons, and cognitive enhancements through pharmaceuticals and brain-computer interface techniques are showing considerable promise.

The education sector has ballooned, and is vacational rather than vocational. Most education is provided by AIs.

Safeguards have now been found to enable direct democracy to be implemented in many areas. Professional politicians are now rare.

In London, DeepMind announces that it expects to unveil the first artificial general intelligence. With bated breath, the world awaits the arrival of Earth’s first superintelligence.


New book: Artificial Intelligence and the Two Singularities


My latest book has just been published by CRC Press, an imprint of the academic publishing house Taylor and Francis.  It updates and expands on my previous non-fiction AI books, especially the sections on the economic singularity.

Having been published in the past by Random House, and having also self-published, it is exciting to complete the trio of publishing options with an academic publisher, and it is an honour to be picked up by CRC.  (This does have an impact on pricing beyond my control.)

The book has attracted some gratifying reviews, including these:

“The arrival of super-intelligent AI and the economic replacement of humans by machines on a global scale are among the greatest challenges we face. This book is an excellent introduction to both. It is thoroughly researched and persuasive. Chace’s principal argument seems to be correct: we need to prepare now for the economic singularity or face a serious disruption of our civilization.”
Stuart Russell, Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California, Berkeley

“A brilliantly lucid guide to the current state of artificial intelligence and its possibilities and potential – including the likelihood that it will eliminate employment as we currently know it. What makes this a must-read is what comes next – this is no dystopian alarm but a clarion call to start thinking about how to cope with this eventuality – and an intriguing guide to some possible destinations. We should all be thinking about these potential problems right now – thankfully Calum has already provided an entertaining and thoughtful roadmap.”
Mark Mardell, Presenter, BBC Radio Four

AI: it’s not a race

Giant cogs are starting to turn

Western governments are finally waking up to the significance of artificial intelligence. It’s a slow process, with much further to go, and most politicians have yet to grasp AI’s implications, but the gradual setting into motion of giant cogs is to be welcomed.

In the November 2017 budget the UK’s Chancellor excitedly announced expenditure of £75m on AI. In March 2018, France’s President Macron put that to shame by announcing a spend of €1.5bn by 2020, primarily to halt the brain drain of French computer scientists. The following month Germany’s Chancellor Merkel declared that Germany would be at the forefront of AI development, although she refrained from quoting any numbers. Shortly afterwards the UK announced a substantial upward revision of its investment, to £1bn, most of it coming from the private sector.

These are large sums of money, but they are modest compared to what the Chinese government is spending. The announcement in January of a $2bn AI industrial park in a mountainous province in the west of the country is just one example of the government’s ambitions. Alibaba’s announcement of $15bn to be invested in AI over three years gives a better idea of the scale of China’s activity.

China’s declared aim is to overtake the US as the world’s leading centre of AI development by 2030, and it has made astonishing progress in the short time since its “Sputnik” moment in March 2016, when DeepMind’s AlphaGo beat one of the world’s best human Go players. It now attracts more VC investment into AI firms than the US, for instance.

The US presents a curious dichotomy. Its tech giants still dominate the AI landscape: Google, Facebook, Amazon, Microsoft and Apple are increasingly AI companies, and they are the largest companies in the world (followed immediately by Alibaba and Tencent). Astonishingly, Amazon spends on R&D half of what the whole of the UK spends – including all UK government departments, all UK companies, and all the UK’s universities and NGOs. But the US government seems blithely unaware of the growing importance of AI. When Facebook founder Mark Zuckerberg was grilled by the US Senate in April, the senators revealed how little understanding they had of what kind of company it is. And the US president is too busy reassuring unemployed coal miners that he will bring their jobs back to have noticed the likelihood of massive waves of job disruption from machines.

The “Three Wise Men” moment

Still, the big picture is that politicians are slowly waking up to the importance of AI. It is reminiscent of the “three wise men” moment in 2014, when Stephen Hawking, Elon Musk and Bill Gates all declared that strong AI is probably coming, and will either be the best or the worst thing ever to happen to humanity. The initial result was pictures of the Terminator all over the newspapers, but fairly soon a more balanced discussion emerged.

If the giant cogs of the government machines are indeed finally beginning to turn, that is to be welcomed. We need politicians and policy makers to be fully engaged if we are to grasp the amazing opportunities that AI offers – and to meet the serious challenges it poses. We need governments to help train AI researchers and engineers, to engage with the tech giants on issues such as privacy and bias, and to regulate them if necessary. Above all, we need governments to prepare for the possibility of technological unemployment.

These activities are not zero-sum between countries. Humanity will get the best out of AI only if it collaborates across borders. That is the natural inclination of scientists and business people, but not necessarily of voters and governments. And so the nascent discussion about AI and government is being framed as a race. Does that actually make any sense?

It’s not a race…

The most profoundly influential technology of the last decade was arguably the smartphone – itself powered increasingly by AI. The race to sell them was won by Apple, which earns almost all of the profit in the smartphone industry. But that does not mean that only Americans benefit from smartphones: they have become essential tools for fully half the world’s population – they are constant companions for people in all countries and at all income levels. The same will be true of new forms of AI: self-driving cars and digital assistants, for instance. As soon as they are ready for prime time they will spread around the planet like the rising sun.

Of course the governments and citizens of the countries which produce the most financially successful AI companies will benefit the most from their tax payments, and their workforces will benefit most from the ecosystem of jobs and skills they build. But building companies (and universities) is not a fixed-term process where a winner surges through the tape to claim the prize and everyone else loses. Assuming we meet and overcome the challenges that AI poses, when country A produces more and better AI, the people in countries B, C and D are beneficiaries too.

…but there is a duopoly

The idea of leadership in AI is less incoherent than the idea of an AI race. The problem here is the bizarre failure by politicians to recognise the enormous difference in scale between the AI activities of different countries. Leaders in the UK, France and Germany all declare their countries to be leaders in AI, whereas the truth is they are bit players compared to the duopoly of China and the US. The report published by the UK’s House of Lords in April was a refreshing exception: “vague statements about the UK ‘leading’ in AI are unrealistic and unhelpful, especially given the vast scale of investment in AI by both the US and China.”

The lords went on to propose a more nuanced role for the UK. Citing its strong tradition of computer science research (Alan Turing, among many others), successful startups (notably DeepMind), and globally respected institutions like the BBC and the British legal system, they suggested the UK could help “convene, guide and shape the international debates which need to happen.”

Even setting aside the question of how much harm Brexit is doing and will do to Britain’s standing in the world, is this noble vision realistic? Perhaps it is – but probably only so long as there are no serious and lasting differences of opinion between the world’s regions about how AI should evolve. If China cleaves to its view that the privacy of individuals is of little account compared to the safety of society at large – and the power of the communist party – then why should they listen when self-appointed leadership organisations in the UK and elsewhere tell them otherwise? Certainly, they will have to obey the rules of the countries they sell their products and services in, but consumers in those countries will demand the latest and the best technology, wherever it originates, and they may not look kindly on governments that get in the way.

Senior figures within the EU are now talking about investing €20bn into AI, but there is no realistic prospect yet of that happening. Absent such a massive change, it is hard to see why effective leadership in ethical frameworks will reside anywhere other than the same place as the leadership in R&D and commercial implementation. And that is in the AI duopoly of the US and China.

Aggravating algorithms


You’ll have noticed there is something of a backlash against the tech giants these days. In the wake of the scandal over Cambridge Analytica’s alleged unauthorised use of personal data about millions of Facebook users, Mark Zuckerberg was subjected to an intensive grilling by the US Senate. (In the event, the questions were about as lacerating as candy floss, because it turns out that most Senators have a pretty modest level of understanding of how social media works.) More seriously, perhaps, the share prices of the tech giants have tumbled in recent weeks, although Mr Zuckerberg’s day in Congress raised Facebook’s share price sufficiently to increase its CEO’s personal wealth by around $3bn. Not a bad day’s pay.

The tech firms are in the dog house for a number of reasons, but one of the most pressing is the widespread perception that they are using – and enabling others to use – algorithms which are poorly understood and which cause harm.


The word “algorithm” comes from the name of a ninth-century Persian mathematician, Al-Khwarizmi, and is surprisingly hard to explain. It means a recipe or set of rules for solving a problem, specified at a more abstract level than the precise, step-by-step instructions which a particular computer programme contains. A machine learning algorithm uses an initial data set to build an internal model which it uses to make predictions; it tests these predictions against additional data and uses the results to refine the model.
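To make that loop concrete, here is a minimal sketch in Python using the scikit-learn library. It is a toy: the data is randomly generated purely for illustration, and it is not a description of any particular company’s system.

```python
# A toy version of the train / test / refine loop described above.
# The data is randomly generated purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Initial data set: 1,000 examples with 5 features and a binary label.
X_initial = rng.normal(size=(1000, 5))
y_initial = (X_initial[:, 0] + X_initial[:, 1] > 0).astype(int)

# Build an internal model from the initial data.
model = LogisticRegression()
model.fit(X_initial, y_initial)

# Test its predictions against additional data...
X_new = rng.normal(size=(200, 5))
y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
print("accuracy on new data:", accuracy_score(y_new, model.predict(X_new)))

# ...and use the results to refine the model. Here that simply means
# refitting on the combined data; real systems use more sophisticated updates.
model.fit(np.vstack([X_initial, X_new]), np.concatenate([y_initial, y_new]))
```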

If the initial data set is not representative of the population (of people, for instance) which it will offer decisions about, then those decisions can be prejudiced and harmful. When asked to provide pictures of hands, algorithms trained on partial data sets have returned only pictures of white hands. In 2015, a Google algorithm reviewing photographs labelled pictures of black people as gorillas.

When this kind of error affects decisions about who should get a loan, or who should be sent to jail, the consequences can obviously be serious. It is not enough to say (although it is true) that the humans being replaced by these systems are often woefully prejudiced. We have to do better.
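One way to make “doing better” concrete is to audit a model’s error rates separately for each group of people it makes decisions about. The sketch below is hypothetical: the groups, labels and error rates are invented for illustration, and real audits use a much richer set of fairness metrics.

```python
# A hypothetical per-group error audit: checks whether a model's mistakes
# fall disproportionately on one group. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])  # unbalanced data set
true_label = rng.integers(0, 2, size=n)               # e.g. "repaid the loan"

# Pretend model: more error-prone for the under-represented group B.
error_prob = np.where(group == "A", 0.05, 0.20)
predicted = np.where(rng.random(n) < error_prob, 1 - true_label, true_label)

for g in ["A", "B"]:
    mask = group == g
    error_rate = np.mean(predicted[mask] != true_label[mask])
    print(f"group {g}: error rate {error_rate:.1%} over {mask.sum()} cases")
```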


Algorithms’ answers to our questions are only as good as the data we give them to work with. To find the best needles you need really big haystacks. And not just big: you need them to be diverse and representative too. Machine learning researchers are very well aware of the danger of GIGO – garbage in, garbage out – and all sorts of efforts and initiatives are under way in the tech giants and elsewhere to create bigger and better datasets.

Society has to walk a tightrope regarding the use of data. Machine learning and other AI techniques already provide great products and services: Google Search provides us with something like omniscience, and intelligent maps tell you how long your journey will be at different times of day, and divert you if an accident causes a blockage. In the future they will provide many more wonders, like the self-driving cars which will finally end the holocaust taking place continuously on our roads, killing 1.2 million people each year and maiming another 50 million or so.

The data to fuel these marvels is generated by the fast-growing number of sensors we place everywhere, and by the fact that more and more of our lives are digital, and we leave digital breadcrumb trails everywhere we go. It would be a tragedy if the occasionally hysterical backlash against the tech giants we are seeing today ended up throttling the availability of data which is needed to let ML systems weave their magic – and do so with less bias than we humans harbour.

Left to our own devices, most of us would probably carry on using Facebook and similar services with the same blithe disregard for our own privacy as we always have done. But the decisions about how our data is used are not ours alone to make. Quite rightly, governments and regulators will make their opinions known. The European Union’s GDPR (General Data Protection Regulation), which comes into force in May, is a powerful example. It requires everyone who stores or processes the personal data (liberally defined) of any EU citizens to do so only when necessary, and to provide the subjects of that data with access to it on demand.

This is all well and good, but as we call for “something to be done” to curb the power of the tech giants, let’s bear a few things in mind. First, regulation is a blunt weapon: regulators, like generals, often fight the war that has just finished, and as technology accelerates, this effect will become more pronounced. Second, regulation frequently benefits incumbents by raising barriers to entry, and by doing so curbs innovation. In general, regulation should be used sparingly, and to combat harms which are either proven, or virtually inevitable.


Third, the issue is who controls or has access to the data, not who owns it. “Ownership” of a non-rivalrous, weightless good like data is a nebulous idea. But access and control are important. More regulation generally means that the state gains more access and control over our data, and we should think carefully before we rush in that direction. China’s terrifying Social Credit system shows us where that road can lead: the state knows everything about you, and gives you a score according to how you behave on a range of metrics (including who your friends are and what they say on social media). That score determines whether you have access to a wide range of benefits and privileges – or whether you are punished.

As our data-fuelled future unfolds, there will be plenty more developments which oblige us to discuss all this. At the moment the data we generate mostly concerns places we go, people we talk to, and things we buy. In future it will get more and more personal as well as more and more detailed. Increasingly we’ll be generating – and looking to control the use of – data about our bodies and our minds. Then the debates about privacy, transparency and bias will get really interesting.

Superintelligence: a balanced approach

A couple of recent events made me think it would be good to post a brief but (hopefully) balanced summary of the discussion about superintelligence.

Can we create superintelligence, and if so, when?


Our brains are existence proof that ordinary matter organised the right way can generate general intelligence – an intelligence which can apply itself to any domain. They were created by evolution, which is slow, messy and inefficient. It is also un-directed, although non-random. We are now employing the powerful, fast and purposeful method of science to organise different types of ordinary matter to achieve the same result.

Today’s artificial intelligence (AI) systems are narrow AIs: they can excel in one domain (like arithmetic calculations, playing chess, etc) but they cannot solve problems in new domains. If and when we create an AI which has all the cognitive ability of an adult human, we will have created an artificial general intelligence (AGI).

Although the great majority of AI research is not specifically targeted at creating an AGI, some of it is. For instance, creating an AGI is an avowed aim of DeepMind, which is probably the most impressive team of AI researchers on the planet. Furthermore, many other AI researchers will contribute more or less inadvertently to the development of the first AGI.

We do not know for sure that we will develop AGI, but the arguments that it is impossible are not widely accepted. Much stronger are the arguments that the project will not be successful for centuries, or even thousands of years. There are plenty of experts on both sides of that debate. However, it is at least plausible that AGI will arrive within the lifetime of people alive today. (Let’s agree to leave aside the separate question of whether it will be conscious.)

We do not know for sure that the first AGI will become a superintelligence, or how long that process would take. There are good reasons to believe that it will happen, and that the time from AGI to superintelligence will be much shorter than the time from here to AGI. Again there is no shortage of proponents on both sides of that debate.

I am neither a neuroscientist nor a computer scientist, and I have no privileged knowledge. But having listened to the arguments and thought about it a great deal for several decades, my best guess is that the first AGI will arrive in the second half of this century, in the lifetime of people already born, and that it will become a superintelligence within weeks or months rather than years.

Will we like it?

This is the point where we descend into the rabbit hole. If and when the first superintelligence arrives on Earth, humanity’s future becomes either wondrous or dreadful. If the superintelligence is well-disposed towards us it may be able to solve all our physical, mental, social and political problems. (Perhaps they would be promptly replaced by new problems, but the situation should still be an enormous improvement on today.) It will advance our technology unimaginably, and who knows, it might even resolve some of the basic philosophical questions such as “what is truth?” and “what is meaning?”

Within a few years of the arrival of a “friendly” superintelligence, humans would probably change almost beyond recognition, either uploading their minds into computers and merging with the superintelligence, or enhancing their physical bodies in ways which would make Marvel superheroes jealous.

On the other hand, if the superintelligence is indifferent or hostile towards us, our prospects could be extremely bleak. Extinction would not be the worst possible outcome.

None of the arguments advanced by those who think the arrival of superintelligence will be inevitably good or inevitably bad are convincing. Other things being equal, the probability of negative outcomes is greater than the probability of positive outcomes: humans require very specific environmental conditions, like the presence of exactly the right atmospheric gases, light, gravity, radiation, etc. But that does not mean we would necessarily get a negative outcome: we might get lucky, or a bias towards positive outcomes on this particular issue might be hard-wired into the universe for some reason.

What it does mean is that we should at least review our options and consider taking some kind of action to influence the outcome.

No stopping


There are good reasons to believe that we cannot stop the progress of artificial intelligence towards AGI and then superintelligence: “relinquishment” will not work. We cannot discriminate in advance between research that we should stop and research that we should permit, and issuing a blanket ban on any research which might conceivably lead to AGI would cause immense harm – if it could be enforced.

And it almost certainly could not be enforced. The competitive advantage to any company, government or military organisation of owning a superior AI is too great. Bear in mind too that while the cost of computing power required by cutting-edge AI is huge now, it is shrinking every year. If Moore’s Law continues for as long as Intel thinks it will, today’s state-of-the-art AI will soon come within the reach of fairly modest laboratories. Even if there was an astonishing display of global collective self-restraint by all the world’s governments, armies and corporations, when the technology falls within reach of affluent hobbyists (and a few years later on the desktops of school children) surely all bets are off.

There is a danger that, confronted with the existential threat, individual people and possibly whole cultures may refuse to confront the problem head-on, surrendering instead to despair, or taking refuge in ill-considered rapture. We are unlikely to see this happen on a large scale for some time yet, as the arrival of the first superintelligence is probably a few decades away. But it is something to watch out for, as these reactions are likely to engender highly irrational behaviour. Influential memes and ideologies may spread and take root which call for extreme action – or inaction.

At least one AI researcher has already received death threats.

Rather clever mammals


We are an ingenious species, although our range of comparisons is narrow: we know we are the smartest species on this planet, but we don’t know how smart we are in a wider galactic or universal setting because we haven’t met any of the other intelligent inhabitants yet – if there are any.

The Friendly AI problem is not the first difficult challenge humanity has faced. We have solved many problems which seemed intractable when first encountered, and many of the achievements of our technology that 21st century people take for granted would seem miraculous to people born a few centuries earlier.

We have already survived (so far) one previous existential threat. Ever since the nuclear arsenals of the US and the Soviet Union reached critical mass in the early 1960s we have been living with the possibility that all-out nuclear war might eliminate our species – along with most others.

Most people are aware that the world came close to annihilation during the Cuban missile crisis in 1962; fewer know that we have also come close to a similar fate another four times since then, in 1979, 1980, 1983 and 1995. In 1962 and 1983 we were saved by individual Soviet military officers who decided not to follow prescribed procedure. Today, while the world hangs on every utterance of Justin Bieber and the Kardashian family, relatively few of us even know the names of Vasili Arkhipov and Stanislav Petrov, two men who quite literally saved the world.

Perhaps this survival illustrates our ingenuity. There was an ingenious logic in the repellent but effective doctrine of mutually assured destruction (MAD). More likely we have simply been lucky.

We have time to rise to the challenge of superintelligence – probably a few decades. However, it would be unwise to rely on that period of grace: a sudden breakthrough in machine learning or cognitive neuroscience could telescope the timing dramatically, and it is worth bearing in mind the powerful effect of exponential growth in the computing resource which underpins AI research and a lot of research in other fields too.

It’s time to talk


What we need now is a serious, reasoned debate about superintelligence – a debate which avoids the twin perils of complacency and despair.

We do not know for certain that building an AGI is possible, or that it is possible within a few decades rather than within centuries or millennia. We also do not know for certain that AGI will lead to superintelligence, and we do not know how a superintelligence will be disposed towards us. There is a curious argument doing the rounds which claims that only people actively engaged in artificial intelligence research are entitled to have an opinion about these questions. Some go so far as to suggest that people like Elon Musk are not qualified to comment. This is nonsense: it is certainly worth listening carefully to what the technical experts think, but AI is too important a subject for the rest of us to shrug our shoulders and abdicate all involvement.

We have seen that there are good arguments to take seriously the idea that AGI is possible within the lifetimes of people alive today, and that it could represent an existential threat. It would be complacent folly to ignore this problem, or to think that we can simply switch the machine off if it looks like becoming a threat. It would also be Panglossian to believe that a superintelligence will necessarily be beneficial because its greater intelligence will make it more civilised.

Equally, we must avoid falling into despair, felled by the evident difficulty of the Friendly AI challenge. It is a hard problem, but it is one that we can and must solve. We will solve it by applying our best minds to it, backed up by adequate resources. The establishment of existential risk organisations like the Future of Humanity Institute in Oxford is an excellent development.

To assign adequate resources to the project and attract the best minds we will need a widespread understanding of its importance, and that will only come if many more people start talking and thinking about superintelligence. After all, if we take the FAI challenge seriously and it turns out that AGI is not possible for centuries, what would we have lost? The investment we need at the moment is not huge. You might think that we should be spending any such money on tackling global poverty or climate change instead. These are of course worthy causes, but their solutions require vastly larger sums, and they are not existential threats.

Surviving AI


If artificial intelligence begets superintelligence it will present humanity with an extraordinary challenge – and we must succeed. The prize for success is a wondrous future, and the penalty for failure (which could be the result of a single false step) may be catastrophe.

Optimism, like pessimism, is a bias, and to be avoided. But summoning the determination to rise to a challenge and succeed is a virtue.

Book review: “Enlightenment Now” by Steven Pinker

A valuable and important book

“Enlightenment Now” is the latest blockbuster from Steven Pinker, the author of “The Blank Slate” and “The Better Angels of Our Nature”. It has a surprising and disappointing blind spot in its treatment of AI risk, which is why it is reviewed here, but overall, it is a valuable and important book: it launches a highly effective attack on populism, which is possibly the most important and certainly the most dangerous political movement today. The resistance to populism needs bolstering, and Pinker is here to help.

Populism

Populists claim to defend the common man against an elite – usually a metropolitan elite. They claim that the past was better than the present because the birthright of the masses has been stolen. The populists claim that they can right this wrong, and rescue the people from their fate. (The irony that most populists are members of the same metropolitan elite is strangely lost on their supporters. The hypocrisy of Boris Johnson, Jacob Rees-Mogg, Rupert Murdoch and the rest complaining about metropolitan elites is breath-taking.)

The claims of populists are mostly false, and they usually know it, so their advocacy is often as dishonest as it is strident, which undermines public debate. What is worse, their policies don’t work, and often cause great harm.

Past outbreaks of populism have had a range of outcomes. The term originated in the US, where a Populist Party was electorally successful in the late nineteenth and early twentieth centuries, but then fizzled out without leaving much of a trace. Other outbreaks have had far more lasting consequences: communist populists have murdered millions, and the Nazis plunged the whole world into fire and terror.

Today, populism has produced dangerously illiberal governments in Central Europe, and it is dragging Britain out of the EU with the nostalgic rallying cry of “take back control”. The hard left faction currently in charge of Britain’s Labour party wants to take the country back to the 1970s, and Bernie Sanders enchants his followers with visions of a better world which has been stolen by plutocrats.

The populist-in-chief

The most obvious and blatant populist today is, of course, President Trump. A pathological liar, and a brazen adulterer who brags about making sexual assaults, he is openly nepotistic, racist, and xenophobic. He is chaotic, thuggish, wilfully ignorant (although not stupid), and a self-deluding egotist with very thin skin and a finger on the nuclear button. He is likely to be proven a traitor before his term expires, and he is certainly an autocratically inclined threat to democracy.

Given all this, the opposition to Trump’s version of populism has been surprisingly muted. The day after President Trump’s inauguration, the Women’s March turned into one of the largest nationwide demonstrations in American history. But since then, Democratic Party leaders have struggled to make their voices heard above the brouhaha raised by Trump’s potty tweets and his wildly disingenuous press announcements, so they tried cutting deals with him instead of insisting that his behaviour was abnormal and unacceptable. The Republicans are holding their noses and drowning their scruples for the sake of a tax cut, at the risk of devastating their party if and when the Trump bubble bursts. The most potent resistance has come from comedians like Bill Maher, Stephen Colbert and Samantha Bee.

Liberalism needs to recover its voice. It needs to fight back against populism both intellectually and emotionally. Enlightenment Now is a powerful contribution at the intellectual level.

Progress

Part two of the book (chapters 4 to 20) accounts for two-thirds of the text. It is a comprehensive demolition of the core populist claim that the past was better than today, and that there has been no progress. It draws heavily (and avowedly) on the work of Max Roser, who runs the Our World In Data website, and is a protégé of the late Hans Rosling, whose lively and engaging TED talks are a must-watch for anyone wishing to understand what is really going on in our world.

Whatever metric you choose, human life has become substantially and progressively better in the last two hundred years. You can see it in life expectancy, diets, incomes, environmental measures, levels of violence, democracy, literacy, happiness, and even equality. I’m not going to go into a defence of any of these claims here: read the book!

Pinker makes clear that he does not think the world today is perfect – far from it. We have not achieved utopia, and probably never will. Similarly, he is not saying that progress is inevitable, or that setbacks have not occurred. But he believes there are powerful forces driving us in the direction of incremental improvement.

Criticisms

Enlightenment Now is already a best-seller, and the subject of numerous reviews. It has attracted its fair share of scorn, especially from academics. Some of that is for his support for muscular atheism, and some for his alleged over-simplification of the Enlightenment. This latter criticism might be a fair cop, but the book is not intended to be an academic historical analysis, so he may not be overly troubled by that.

Indeed, Pinker seems almost to invite academic criticism: “I believe that the media and intelligentsia were complicit in populists’ depiction of modern Western nations as so unjust and dysfunctional that nothing short of a radical lurch could improve them.” He is an equal-opportunity offender, as scathing about left-inclined populist sympathisers as those on the right: “The left, too, has missed the boat in its contempt for the market and its romance with Marxism. Industrial capitalism launched the Great Escape from universal poverty in the 19th century and is rescuing the rest of humankind in a Great Convergence in the 21st.”

A lot of people are irritated by what they see as Pinker’s glib over-optimism, and here he seems more vulnerable: he derides warnings of apocalyptic dangers as a “lazy way of achieving moral gravitas”, and while he has a point, it sometimes leads him into complacency. “Since nuclear weapons needn’t have been invented, and they are useless in winning wars or keeping the peace, that means they can be un-invented – not in the sense that the knowledge of how to make them will vanish, but in the sense that they can be dismantled and no new ones built.”

Pinker’s blind spot regarding AI

And so to the reason for reviewing Enlightenment Now on this blog. Pinker’s desire to downplay the negative forces acting on our world leads him to be scathing about the idea that artificial intelligence poses any significant risks to humanity. But his arguments are poor, and while he reels off some AI risk jargon fluently enough, and name-checks some of the major players, it is clear that he does not fully understand what he is talking about. Comments like “Artificial General Intelligence (AGI) with God-like omniscience and omnipotence” suggest that he does not know the difference between AGI and superintelligence, which led Elon Musk to tweet wryly that if even Pinker did not understand AI, then humanity really is in trouble.

Pinker claims that “among the smart people who aren’t losing sleep are most experts in artificial intelligence and most experts in human intelligence”. This is grossly misleading: while many AI researchers don’t see superintelligence as a near-term risk, very few deny that it is a serious possibility within a century or two, and one which we should prepare for. It appears that Pinker has been overly influenced by some of these outliers, as he cites some of them, including Rodney Brooks. But presumably in error rather than mischief, he also lists Professor Stuart Russell as one of the eminent AI researchers who discount the existential risk from superintelligence, whereas Russell was actually one of the first to raise the alarm.

Pinker makes the bizarre claim that “Driving a car is an easier engineering problem than unloading a dishwasher” and goes on to observe that “As far as I know, there are no projects to build an AGI”. In fact there are several, including Doug Lenat’s long-running Cyc initiative, Ben Goertzel’s OpenCog Foundation, and most notably, DeepMind’s splendid ambition to “solve intelligence, and use that to solve everything else.”

If you want to dive further into these arguments, the standard recommendation is of course Nick Bostrom’s seminal “Superintelligence”, but I’m told that “Surviving AI”, by a certain Calum Chace, explores the issues pretty well too.

Resistance Now

Happily, this blind spot, although regrettable, does not spoil “Enlightenment Now”’s important and valuable contribution to the resistance to the tide of populism. Highly recommended.