Book and movie reviews

Book review: “21 Lessons for the 21st Century” by Yuval Harari


The title of Yuval Harari’s latest best-seller is a misnomer: it asks many questions, but offers very few answers, and hardly any lessons. It is the least notable of his three major books, since most of its best ideas were introduced in the other two. But it is still worth reading. Harari delights in grandiloquent sweeping generalisations which irritate academics enormously, and part of the fun is precisely that you can so easily picture his colleagues seething with indignation that he is trampling on their turf. More important, some of his generalisations are acutely insightful.

The insight at the heart of “Sapiens”, his first book, was that humans dominate the planet not because we are logical, but because 70,000 or so years ago we developed the ability to agree to believe stories that we know are untrue. These stories are about religion, and political and economic organisation. The big insight in his second book, “Homo Deus”, is that artificial intelligence and other technologies are about to transform our lives far more – and far more quickly – than almost anyone realises. Both these key ideas are reprised in “21 Lessons”, but they are big ideas which bear repeating.

Happily, he has toned down his idiosyncratic campaigns about religion and vegetarianism. In the previous books he found religion everywhere: capitalism and communism may have passionate adherents, but that does not make them religions. The first third of “Homo Deus” is religious in a different way: it is a lengthy sermon about vegetarianism.

Sapiens and Homo Deus

“21 Lessons” is divided into five parts, of which the first is the most coherent and the best. It concerns the coming technological changes, which Harari first explored in “Homo Deus”. “Most people in Birmingham, Istanbul, St Petersburg and Mumbai are only dimly aware, if at all, of the rise of artificial intelligence and its potential impact on their lives. It is undoubtable, however, that the technological revolutions will gather momentum in the next few decades, and will confront humankind with the hardest trials we have ever encountered.”

He is refreshingly blunt about the possibility of technological unemployment: “It is dangerous just to assume that enough new jobs will appear to compensate for any losses. The fact that this has happened during previous waves of automation is absolutely no guarantee that it will happen again under the very different conditions of the twenty-first century. The potential social and political disruptions are so alarming that even if the probability of systemic mass unemployment is low, we should take it very seriously.”

Very well said, but this part of the book would be much more powerful if he had offered a fully worked-through argument for this claim, which in the last couple of years has been sneeringly dismissed by a procession of tech giant CEOs, economists, and politicians. Perhaps next year, the World Economic Forum could organise a debate on this question between Harari and a leading sceptic, such as David Autor.

It is also a shame that he offers no prescriptions, beyond categorising them: “Potential solutions fall into three main categories: what to do in order to prevent jobs from being lost; what to do in order to create enough new jobs; and what to do if, despite our best efforts, job losses significantly outstrip job creation.” Fair enough, but this should be the start of the discussion, not the end. Still, at least he doesn’t fall back on the usual panacea of universal basic income, and his warning about what happens if we fail to develop a plan is clear: “as the masses lose their economic importance … the state might lose at least some of the incentive to invest in their health, education and welfare. It’s very dangerous to be redundant.”

Harari is also more clear-sighted than most about the risk of algocracy – the situation which arises when we delegate decisions to machines because they make better ones than we do. “Once we begin to count on AI to decide what to study, where to work, and who to marry, human life will cease to be a drama of decision-making. … Imagine Anna Karenina taking out her smartphone and asking the Facebook algorithm whether she should stay married to Karenin or elope with the dashing Count Vronsky.” Warning about technological unemployment, he coined the brutal phrase “the gods and the useless”. Warning about algocracy, he suggests that humans could become mere “data cows”.

Data cows

The remaining four parts of the book contain much less that is original and striking. Harari is a liberal and an unapologetic globalist, pointing out reasonably enough that global problems like technological disruption require global solutions. He describes the EU as a “miracle machine”, which Brexit is throwing a spanner into. He does not see nationalism as a problem in itself, although he observes that for most of our history we have not had nations, and they are unnatural things and hard to build. In fact he thinks they can be very positive, but “the problem starts when benign patriotism morphs into chauvinistic ultra-nationalism.”

Although he sees nationalism as a possible problem, he also thinks it has already lost the game: “we are all members of a single rowdy global civilisation … People still have different religions and national identities. But when it comes to the practical stuff – how to build a state, an economy, a hospital, or a bomb – almost all of us belong to the same civilisation.” He supports this claim by pointing out that the Olympic Games, currently “organised by stable countries, each with boringly similar flags and national anthems,” could not have happened in mediaeval times, when there were no such things as nation states. And he argues that this is a very good thing: “For all the national pride people feel when their delegation wins a gold medal and their flag is raised, there is far greater reason to feel pride that humankind is capable of organising such an event.”

Globalisation

He is even more dismissive of religion – especially monotheism – despite his obsession with it. “From an ethical perspective, monotheism was arguably one of the worst ideas in human history … What monotheism undoubtedly did was to make many people far more intolerant than before … the late Roman Empire was as diverse as Ashoka’s India, but when Christianity took over, the emperors adopted a very different approach to religion.” Religion, he says, has no answers to any of life’s important questions, which is why there is no great following for a Christian version of agriculture, or a Muslim version of economics. “We don’t need to invoke God’s name in order to live a moral life. Secularism can provide us with all the values we need.”

He seems to be applying for membership of the “new atheists” club, in which Richard Dawkins and Steven Pinker deliberately goad the religious by diagnosing religion as a disease which can be cured. Harari suggests that “when a thousand people believe some made-up story for one month, that’s fake news. When a billion people believe it for a thousand years, that’s a religion.”

New atheists

Oddly, given his perceptive take on the future of AI, Harari is weak on science fiction, displaying a fundamental misunderstanding of both The Matrix and Ex Machina. He is stronger on terrorism, pointing out that it is much less of a threat than it seems, contrary to the deliberate misrepresentations by populists: “Since 11 September 2001, every year terrorists have killed about fifty people in the European Union, about ten people in the USA, about seven people in China, and up to 25,000 people globally (mostly in Iraq, Afghanistan, Pakistan, Nigeria and Syria). In contrast, each year traffic accidents kill about 80,000 Europeans, 40,000 Americans, 270,000 Chinese, and 1.25 million people altogether.” Terrorists “challenge the state to prove it can protect all its citizens all the time, which of course it can’t.” They are trying to make the state over-react, and populists are their eager accomplices.

The book seems to be building to a climax when it addresses the meaning of life. Here and elsewhere, Harari has said that humans create meaning – or at least the basis of power – by telling ourselves stories. So is he going to give us a story which will help us navigate the challenges of the 21st century?

Sadly not. The closest we get is a half-baked version of Buddhism.

“The Buddha taught that the three basic realities of the universe are that everything is constantly changing, nothing has any enduring essence, and nothing is completely satisfying. Suffering emerges because people fail to appreciate this … The big question facing humans isn’t ‘what is the meaning of life?’ but rather, ‘how do we get out of suffering?’ … If you really know the truth about yourself and about the world, nothing can make you miserable. But that is of course much easier said than done.” Indeed.

Meaning

Harari has worked out his own salvation: “Having accepted that life has no meaning, I find meaning in explaining this truth to others.” Given his six-figure speaking fees, this makes perfect sense.

Harari also finds solace in meditation, which he practises for two hours every day, and for a whole month or two every year. “21 Lessons” is a collection of essays written for newspapers and in response to questions. This shows in its disjointed, discursive, and inconclusive nature. If Harari had spent less time meditating, maybe he would have found more time to answer the questions he raises. It’s still definitely worth reading, though.

—————

Book review: “Enlightenment Now” by Steven Pinker

A valuable and important book


“Enlightenment Now” is the latest blockbuster from Steven Pinker, the author of “The Blank Slate” and “The Better Angels of Our Nature”. It has a surprising and disappointing blind spot in its treatment of AI risk, which is why it is reviewed here, but overall, it is a valuable and important book: it launches a highly effective attack on populism, which is possibly the most important and certainly the most dangerous political movement today. The resistance to populism needs bolstering, and Pinker is here to help.

Populism


Populists claim to defend the common man against an elite – usually a metropolitan elite. They claim that the past was better than the present because the birthright of the masses has been stolen. The populists claim that they can right this wrong, and rescue the people from their fate. (The irony that most populists are members of the same metropolitan elite is strangely lost on their supporters. The hypocrisy of Boris Johnson, Jacob Rees-Mogg, Rupert Murdoch and the rest complaining about metropolitan elites is breath-taking.)

The claims of populists are mostly false, and they usually know it, so their advocacy is often as dishonest as it is strident, which undermines public debate. What is worse, their policies don’t work, and often cause great harm.

Past outbreaks of populism have had a range of outcomes. The term originated in the US, where a Populist Party was electorally successful in the late nineteenth and early twentieth centuries, but then fizzled out without leaving much of a trace. Other outbreaks have had far more lasting consequences: communist populists have murdered millions, and the Nazis plunged the whole world into fire and terror.

Today, populism has produced dangerously illiberal governments in Central Europe, and it is dragging Britain out of the EU with the nostalgic rallying cry of “take back control”. The hard left faction currently in charge of Britain’s Labour party wants to take the country back to the 1970s, and Bernie Sanders enchants his followers with visions of a better world which has been stolen by plutocrats.

The populist-in-chief


The most obvious and blatant populist today is, of course, President Trump. A pathological liar, and a brazen adulterer who brags about committing sexual assault, he is openly nepotistic, racist, and xenophobic. He is chaotic, thuggish, wilfully ignorant (although not stupid), and a self-deluding egotist with very thin skin and a finger on the nuclear button. He is likely to be proven a traitor before his term expires, and he is certainly an autocratically inclined threat to democracy.

Given all this, the opposition to Trump’s version of populism has been surprisingly muted. The day after President Trump’s inauguration, the Women’s March turned into one of the largest nationwide demonstrations in American history. But since then, Democratic Party leaders have struggled to make their voices heard above the brouhaha raised by Trump’s potty tweets and his wildly disingenuous press announcements, so they have tried cutting deals with him instead of insisting that his behaviour is abnormal and unacceptable. The Republicans are holding their noses and drowning their scruples for the sake of a tax cut, at the risk of devastating their party if and when the Trump bubble bursts. The most potent resistance has come from comedians like Bill Maher, Stephen Colbert and Samantha Bee.

Liberalism needs to recover its voice. It needs to fight back against populism both intellectually and emotionally. Enlightenment Now is a powerful contribution at the intellectual level.

Progress


Part two of the book (chapters 4 to 20) accounts for two-thirds of the text. It is a comprehensive demolition of the core populist claim that the past was better than today, and that there has been no progress. It draws heavily (and avowedly) on the work of Max Roser, who is in turn the protégé of the late Hans Rosling, whose lively and engaging TED talks are a must-watch for anyone wishing to understand what is really going on in our world.

Whatever metric you choose, human life has become substantially and progressively better in the last two hundred years. You can see it in life expectancy, diets, incomes, environmental measures, levels of violence, democracy, literacy, happiness, and even equality. I’m not going to go into a defence of any of these claims here: read the book!

Pinker makes clear that he does not think the world today is perfect – far from it. We have not achieved utopia, and probably never will. Similarly, he is not saying that progress is inevitable, or that setbacks have not occurred. But he believes there are powerful forces driving us in the direction of incremental improvement.

Criticisms

Enlightenment Now is already a best-seller, and the subject of numerous reviews. It has attracted its fair share of scorn, especially from academics. Some of that is for his support for muscular atheism, and some for his alleged over-simplification of the Enlightenment. This latter criticism might be a fair cop, but the book is not intended to be an academic historical analysis, so he may not be overly troubled by that.

Indeed, Pinker seems almost to invite academic criticism: “I believe that the media and intelligentsia were complicit in populists’ depiction of modern Western nations as so unjust and dysfunctional that nothing short of a radical lurch could improve them.” He is an equal-opportunity offender, as scathing about left-inclined populist sympathisers as those on the right: “The left, too, has missed the boat in its contempt for the market and its romance with Marxism. Industrial capitalism launched the Great Escape from universal poverty in the 19th century and is rescuing the rest of humankind in a Great Convergence in the 21st.”

A lot of people are irritated by what they see as Pinker’s glib over-optimism, and here he seems more vulnerable: he derides warnings of apocalyptic dangers as a “lazy way of achieving moral gravitas”, and while he has a point, it sometimes leads him into complacency. “Since nuclear weapons needn’t have been invented, and they are useless in winning wars or keeping the peace, that means they can be un-invented – not in the sense that the knowledge of how to make them will vanish, but in the sense that they can be dismantled and no new ones built.”

Pinker’s blind spot regarding AI


And so to the reason for reviewing Enlightenment Now on this blog. Pinker’s desire to downplay the negative forces acting on our world leads him to be scathing about the idea that artificial intelligence poses any significant risks to humanity. But his arguments are poor, and while he reels off some AI risk jargon fluently enough, and name-checks some of the major players, it is clear that he does not fully understand what he is talking about. Comments like “Artificial General Intelligence (AGI) with God-like omniscience and omnipotence” suggest that he does not know the difference between AGI and superintelligence, which led Elon Musk to tweet wryly that if even Pinker did not understand AI, then humanity really is in trouble.

Pinker claims that “among the smart people who aren’t losing sleep are most experts in artificial intelligence and most experts in human intelligence”. This is grossly misleading: while many AI researchers don’t see superintelligence as a near-term risk, very few deny that it is a serious possibility within a century or two, and one which we should prepare for. Pinker appears to have been overly influenced by these outliers, some of whom he cites, including Rodney Brooks. But presumably in error rather than mischief, he also lists Professor Stuart Russell as one of the eminent AI researchers who discount the existential risk from superintelligence, whereas Russell was actually one of the first to raise the alarm.

Pinker makes the bizarre claim that “Driving a car is an easier engineering problem than unloading a dishwasher” and goes on to observe that “As far as I know, there are no projects to build an AGI”. In fact there are several, including Doug Lenat’s long-running Cyc initiative, Ben Goertzel’s OpenCog Foundation, and most notably, DeepMind’s splendid ambition to “solve intelligence, and use that to solve everything else.”

If you want to dive further into these arguments, the standard recommendation is of course Nick Bostrom’s seminal “Superintelligence”, but I’m told that “Surviving AI”, by a certain Calum Chace, explores the issues pretty well too.

Resistance Now

Happily, regrettable though it is, this blind spot does not spoil the important and valuable contribution “Enlightenment Now” makes to the resistance against the tide of populism. Highly recommended.

Book review: “Homo Deus” by Yuval Harari


TL;DR*:

A rather plodding first half may deter some fans of “Sapiens” (Harari’s previous book), but it is worth persevering for the extreme views about algocracy which he introduces in the final third.

Clear and direct


Yuval Harari’s book “Sapiens” was a richly deserved success. It is full of intriguing ideas which are often both original and convincing, and its prose style is clear and direct – a pleasure to read.** His latest book, “Homo Deus”, shares these characteristics, but personally, I found the first half dragged a little, and some of the arguments and assumptions left me unconvinced. I’m glad that I persevered, however: towards the end he produces a fascinating and important suggestion about the impact of AI on future humans.

Because Harari’s writing is so crisp, you can review it largely in his own words.

From famine, plague and war to immortality, happiness and divinity

Harari opens the book with the claim that for most of our history, homo sapiens has been preoccupied by the three great evils of famine, plague and war. These have now essentially been brought under control, and because “success breeds ambition… humanity’s next targets are likely to be immortality, happiness and divinity.” In the coming decades, Harari says, we will re-engineer humans with biology, cyborg technology and AI.

The effects will be profound: “Once technology enables us to re-engineer human minds, Homo Sapiens will disappear, human history will come to an end, and a completely new kind of process will begin, which people like you and me cannot comprehend. Many scholars try to predict how the world will look in the year 2100 or 2200. This is a waste of time.”

There is, he adds, “no need to panic, though. At least not immediately. Upgrading Sapiens will be a gradual historical process rather than a Hollywood apocalypse.”

Vegetarianism and religion


At this point Harari indulges in a lengthy argument that we should all become vegetarians, asking “is Homo sapiens a superior life form, or just the local bully?” and concluding with the unconvincing (to me) warning that if “you want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans [you should] start by investigating how humans treat their less intelligent animal cousins.” He doesn’t explain why super-intelligent beings would follow the same logic – or lack of logic – as us.

I also find myself uncomfortable with some of his linguistic choices, and two in particular. First, his claim that “now humankind is poised to replace natural selection with intelligent design,” seems to me to pollute an important idea by associating it with a thoroughly discredited term.

Secondly, he is (to my mind) overly keen to attach the label “religion” to pretty much any system for organising people, including humanism, liberalism, communism and so on. For instance, “it may not be wrong to call the belief in economic growth a religion, because it now purports to solve many if not most of our ethical dilemmas.” To many people, a religion with no god is an oxymoron. This habit of seeing most human activity as religious might be explained by the fact that Harari lives in Israel, a country where religious fervour infuses everyday life like smog infuses an industrialising city.

Science escapes being labelled as religion, but Harari has a curious way of thinking about it too: “Neither science nor religion cares that much about the truth, … Science is interested above all in power. It aims to acquire the power to cure diseases, fight wars and produce food.”

Humanism


A longish section of the book is given over to exploring humanism, which Harari sees as a religion that supplanted Christianity in the West. “Due to [its] emphasis on liberty, the orthodox branch of humanism is known as ‘liberal humanism’ or simply as ‘liberalism’. … During the nineteenth and twentieth centuries, as humanism gained increasing social credibility and political power, it sprouted two very different offshoots: socialist humanism, which encompassed a plethora of socialist and communist movements, and evolutionary humanism, whose most famous advocates were the Nazis.”

Having unburdened himself of all this vegetarianism and religious flavouring, Harari spends the second part of “Homo Deus” considering the future of our species, and on this terrain he recovers the nimble sure-footedness which made “Sapiens” such a great book.

Free will is an illusion


He starts by attacking our strong intuitive belief that we are all unitary, self-directing persons, possessing free will. “To the best of our scientific understanding, determinism and randomness have divided the entire cake between them, leaving not even a crumb for ‘freedom’. … The notion that you have a single self … is just another liberal myth, debunked by the latest scientific research.” This dismissal of personal identity (the “narrating self”) as a convenient fiction plays an important role in the final third of the book.

A curious characteristic of “Homo Deus” is that Harari assumes there is no need to persuade his readers of the enormous impact that new technologies will have in the coming decades. Futurists like Ray Kurzweil, Nick Bostrom, Martin Ford and others (including me) spend considerable effort getting people to comprehend and take into account the astonishing power of exponential growth. Harari assumes everyone is already on board, which is surprising in such a mainstream book. I hope he is right, but I doubt it.

Stop worrying and learn to love the autonomous kill-bot


Harari is also quite happy to swim against the consensus when exploring the impact of these technologies. A lot of ink is currently being spilled in an attempt to halt the progress of autonomous weapons. Harari considers it a waste: “Suppose two drones fight each other in the air. One drone cannot fire a shot without first receiving the go-ahead from a human operator in some bunker. The other drone is fully autonomous. Which do you think will prevail? … Even if you care more about justice than victory, you should probably opt to replace your soldiers and pilots with autonomous robots and drones. Human soldiers murder, rape and pillage, and even when they try to behave themselves, they all too often kill civilians by mistake.”

The economic singularity and superfluous people

As well as dismissing attempts to forestall AI-enabled weaponry, Harari has no truck with the Reverse Luddite Fallacy, the idea that because automation has not caused lasting unemployment in the past it will not do so in the future. “Robots and computers … may soon outperform humans in most tasks. … Some economists predict that sooner or later, un-enhanced humans will be completely useless. … The most important question in twenty-first-century economics may well be what to do with all the superfluous people.”

My income is OK, but what am I for?


Harari has interesting things to say about some of the dangers of technological unemployment. He is sanguine about the ability of the post-jobs world to provide adequate incomes to the “superfluous people”, but like many other writers, he asks where we will find meaning in a post-jobs world. “The technological bonanza will probably make it feasible to feed and support the useless masses even without any effort on their side. But what will keep them occupied and content? People must do something, or they will go crazy. What will they do all day? One solution might be offered by drugs and computer games. … Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences.”

Personally, I think he has got this the wrong way round. Introducing Universal Basic Income (or some similar scheme to provide a good standard of living to the unemployable) will probably prove to be a significant challenge. Persuading the super-rich (whether they be humans or algorithms) to provide the rest of us with a comfortable income will, I hope, be possible, but it may have to be done globally and within a very short time-frame. If we do manage this transition smoothly, I suspect the great majority of people will quickly find worthwhile and enjoyable things to do with their new-found leisure. Rather like many pensioners do today, and aristocrats have done for centuries.

The Gods and the Useless


I have more common ground with Harari when he argues that inequality of wealth and income may become so severe that it leads to “speciation” – the division of the species into completely separate groups, whose vital interests may start to diverge. “As algorithms push humans out of the job market, wealth might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social inequality.”

Pursuing this idea, he coins the rather brutal phrase “the Gods and the Useless”. He points out that in the past, the products of technological advances have disseminated rapidly through economies and societies, but he thinks this may change. In the vital area of medical science, for instance, we are moving from an era when the goal was to bring as many people as possible up to a common standard of “normal health”, to a world in which the cognitive and physical performance of certain individuals may be raised to new and extraordinary heights. This could have dangerous consequences: when tribes of humans with different levels of capability have collided it has rarely gone well for the less powerful group.

The technological singularity


The technological singularity pops up briefly, and again Harari sees no need to expend much effort persuading his mainstream audience that this startling idea is plausible. “Some experts and thinkers, such as Nick Bostrom, warn that humankind is unlikely to suffer this degradation, because once artificial intelligence surpasses human intelligence, it might simply exterminate humankind.”

Extreme algocracy

Harari leaves this idea hanging in the air, though, and finally we arrive at the main event, in which he predicts the dissolution of not only humanism, but of the whole notion of individual human beings. “The new technologies of the twenty-first century may thus reverse the humanist revolution, stripping humans of their authority, and empowering non-human algorithms instead.” “Once Google, Facebook and other algorithms become all-knowing oracles, they may well evolve into agents and finally into sovereigns.” As a consequence, “humans will no longer be autonomous entities directed by the stories their narrating self invents. Instead, they will be integral parts of a huge global network.” This seems to me to be an extreme version of an idea called algocracy, in which humans are governed by algorithms.

As an example of how this extreme algocracy could come about, “suppose my narrating self makes a New Year resolution to start a diet and go to the gym every day. A week later, when it is time to go to the gym, the experiencing self asks Cortana to turn on the TV and order pizza. What should Cortana do?” Harari thinks Cortana (or Siri, or whatever they are called then) will know us better than we do and will make wiser choices than we would in almost all circumstances. We will have no sensible option other than to hand over almost all decision-making to them.

Two new religions


Given his religious turn of mind, it is perhaps inevitable that Harari sees this extreme algocracy as leading to the birth of not one, but two new religions. “The most interesting place in the world from a religious perspective is not the Islamic State or the Bible Belt, but Silicon Valley.” Algocracy, he thinks, will generate two new “techno-religions … techno-humanism and data religion”, or “Dataism”.

“Techno-humanism agrees that Homo Sapiens as we know it has run its historical course and will no longer be relevant in the future, but concludes that we should therefore use technology in order to create Homo Deus – a much superior human model.” Harari thinks that techno-humanism is incoherent because if you can always improve yourself then you are no longer an independent agent: “Once people could design and redesign their will, we could no longer see it as the ultimate source of all meaning and authority. For no matter what our will says, we can always make it say something else.” “What”, he asks, “will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?”

Dataism trumps Techno-humanism

“Hence a bolder techno-religion seeks to sever the humanist umbilical cord altogether. … The most interesting emerging religion is Dataism, which venerates neither gods nor man – it worships data. … According to Dataism, King Lear and the flu virus are just two patterns of data flow that can be analysed using the same basic concepts and tools. … Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to cover the whole galaxy and even the whole universe. This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.”

“Dataism isn’t limited to idle prophecies. Like every religion, it has its practical commandments. First and foremost, a Dataist ought to maximise data flow by connecting to more and more media, and producing and consuming more and more information.” So people who record every aspect of their lives on Facebook and Twitter are not clogging up the airwaves with junk after all; they are simply budding Dataists.

I am unpersuaded by the idea that mere data, undifferentiated by anything beyond quantity and complexity, could become sovereign on this planet and throughout the universe. I think Harari has missed an interesting opportunity – if he replaced the notion of data with the notion of consciousness, I think he might be onto something important. It would not be the first time that a thinker proposed that mankind’s destiny (a religiously loaded word which he would perhaps approve) was to merge its individual minds into a single consciousness and spread it throughout the cosmos, but it might be the first time that a genuinely mainstream book did so.

In any case, Harari deserves great credit for staring directly at the massive transformations heading our way, and following his intuitions to their logical conclusions.


* TL;DR = Too Long, Didn’t Read

** Sapiens does have its critics, including the philosopher John Danaher, who thinks it over-rated.

—————

Book review: “The Age of Em” by Robin Hanson


I can’t remember ever before reading a book which I liked so much while disagreeing with so much of it. This is partly because the author is such an amiable fellow. He really wants you to like him, and despite being quite eminent in his field, he displays a disarming humility: his acknowledgements extend to a page and a half and include many equally eminent people, but in spite of that he says:

“I’ve never felt as intellectually isolated or at risk as when writing this book, and I hope my desert days end now, as readers like you join me in discussing The Age of Em.”

The writing style is direct, informal and engaging:

“If you can’t see the point in envisioning the lives of your descendants, you’d best quit now, as that’s mostly all I’ve got.”

And the book addresses an important subject – the future:

“If the future matters more than the past, because we can influence it, why do we have far more historians than futurists?”

Well said!

Heroic forecasting

The book is essentially a forecast of what life will be like for the first artificial general intelligences, which he calls ‘ems’ – short for emulations. That may sound like a heroic undertaking for a non-fiction writer, and indeed it is. And Robin does have a heroic faith in the power of forecasts. On page 34 he says, “make no mistake, it is possible to forecast the future”, and a few pages later, “foragers could have better anticipated the industrial era if they had first understood the intervening farmer era”.

Scepticism about progress in AI

Robin may be right that brain emulation will produce an artificial general intelligence (AGI) before continuing progress in machine learning or other types of AI research does. Reverse engineering a human brain by slicing it thinly, tracing the connectome, and reproducing the result in silico does seem a surer route to AGI than the alternative, which may require conceptual leaps that have not yet been made.

But Robin’s insistence that AI is making only modest advances, and will generate nothing much of interest before uploading arrives, seems dogmatic. Because of this claim, he is highly critical of the view that technological unemployment will be widespread in the next few decades. Fair enough, he might be right, but obviously I doubt it. He is also rather dismissive of major changes in society being caused by virtual reality, augmented reality, the internet of things, 3D printing, self-driving cars, and all the other astonishing technologies being developed and introduced as we speak.

It would take too long to list all the other things in the book I disagree with, but here are a few. He seems to think that when the first ems are created, they will very quickly be perfect replications of the target human minds. It seems to me more likely that we will create a series of approximations of the target person, and some of these early creations may raise very awkward ethical considerations. Greg Egan’s novel Zendegi is a good exploration of this.

Intelligence explosion

Robin is a long-time sceptic of the idea of an intelligence explosion – the idea that the first AGI will be recursively improved to create a superintelligence, and that this will happen within months if not weeks. Admittedly, The Age of Em only covers a period of two years at the subjective speed of humans, although that amounts to many thousands of years from the point of view of the ems. Even so, given that his world, as described, has an astonishing amount of computational capacity available, I found the absence of any intelligence explosion wholly implausible. The incentive to enhance the intelligence of an entity which works for you is irresistible, and once we have models of minds in silico, it will be much easier to do so.

Personal identity

The ems are all quite happy workaholics – largely because they are emulations of workaholics. They are also happy to be copied, and for the copies of themselves to be shut down again. A remark on page 49 is telling in this regard:

“the concepts of “identity” and “consciousness” … play little role in the physical, engineering, social, and human sciences that I will rely on in this book. So I will now say little more on those topics”.

It seems to me that if and when we create artificial people they will care a lot about their consciousness and their identity!

The humans in this world are all happy to be retired, and have the ems create everything they need. I think the scenario of radical abundance is definitely achievable, but I don’t think it’s a slam dunk, and I would imagine much more interaction – good and bad – between ems and humans than Robin seems to expect.

Religion and going multi-planetary

A couple of smaller but important comments. Robin thinks ems will be intellectually superior to most humans, not least because they will be modelled on the best of us. He therefore thinks they will be religious. Apart from the US, always an exceptional country, the direction of travel in that regard is firmly the other way.

And space travel. Robin argues that we will keep putting off trying to colonise the stars, because whenever you send a ship out there, it will always be overtaken by a later, cheaper one which benefits from better technology. This ignores one of the main reasons for doing it: to improve our chances of survival by making sure all our eggs aren’t in the one basket that is this pale blue dot. It doesn’t matter if your ship is overtaken; what matters is that the launches could one day stop altogether, because of a nuclear war, asteroid strike, gamma ray burst, global pathogen or some other disaster. And that might happen the day after you leave.

TL;DR  

A fascinating and engaging book, containing much to enjoyably disagree with.

 —————

Book review: “Ghost Fleet” by P.W. Singer and August Cole

There’s a war in progress. Large numbers of people have taken sides, and each side thinks the other is mad, bad, and irresponsible. The vitriol is intense, and it shows no sign of letting up.

I’m not talking about the war between China and the USA which is depicted in Ghost Fleet, but the war between the Sad Puppies and the Social Justice Warriors over the Hugo Awards, the most prestigious awards in the science fiction literary world. Bear with me – there’s a link.

If you’re remotely interested in science and speculative fiction, you’ll know that the Sad Puppies felt that the Hugos had been captured and colonised by Social Justice Warriors, a tribe which values political correctness over storytelling. They cited as an example the “Ancillary” series by Ann Leckie, which they claimed was lauded more because of its gender politics than because of any literary or storytelling merit. They drew up a “slate” of authors who they felt represented a fairer cross-section of the science fiction which “ordinary” SF fans like to read, and promoted this slate among the people who vote for the awards.

The reaction from their opponents has been furious. The Sad Puppies have been labelled reactionary, racist, sexist, and pretty much every other “-ist” you can think of.

But here comes something that perhaps – just perhaps – both sides in this vicious row could agree on. Ghost Fleet is a hugely successful new novel which combines kinetic storytelling with a careful balance of gender roles, even at the front line of global warfare.  Ghost Fleet was published in June, and its PR team deserve enormous praise for securing detailed coverage in prestigious media like The Economist and The Atlantic. Its authors are a defense policy analyst and a defense journalist, so perhaps they could bring valuable diplomatic skills to the task of mediating between the sides in the Hugos war.

Ghost Fleet is a techno-thriller, and its authors are explicitly seeking to claim the mantle of Tom Clancy. Although Michael Crichton, the other great exponent of the techno-thriller genre, diligently swerved the label of science fiction, this is certainly fiction powered by an interest in science, and what technology will do to us.

The book’s premise is that China, having overthrown the Communist Party, is gripped by the same sense of “manifest destiny” which drove American settlers across their continent. Fulfilling this destiny means hobbling the US navy and air force, which it achieves with a devastating attack on Pearl Harbor in Hawaii, and other Pacific locations. The success of this attack is predicated on China’s ability to knock out America’s surveillance capabilities, starting with its satellites and other space-based assets. It also turns out that for decades, China has been inserting treacherous algorithms inside many of the silicon chips that it has supplied to the US military, both directly to defense contractors, and indirectly in off-the-shelf commercial components.

This is such an important aspect of the book that it seems the authors are trying to send a message to the top of the defense establishment which they both inhabit: beware over-reliance on digital equipment. They buttress their message by peppering the novel with endnotes, very unusual in a novel, which provide references for all the semi-futuristic technologies they describe. I’ll take their word for it that the 300-odd endnotes are accurate.

With the US Pacific Fleet demolished from Japan to Pearl Harbor, and Hawaii and Guam occupied, America’s only way to fight back is to bring a motley assortment of ships and planes – the ghost fleet – out of mothballs and deploy them with lashings of good old-fashioned grit and self-sacrifice.

Despite the ghost fleet’s geriatric status, the book’s action remains futuristic. The combatants all survive on “stims” instead of coffee, and most of them – especially the younger ones – spend much of their time in augmented reality, wearing “viz glasses”. Drones and battle bots play important roles too. The most interesting technology deployed, though, is a brain-computer interface which is used to interrogate, torture and then execute a Russian ally of the Chinese who is suspected of treason. The extent to which human brains can be stimulated and simulated is a fascinating subject which I explore further in my novel, Pandora’s Brain, and my non-fiction book, Surviving AI.

The book outlines the geo-political circumstances of the conflict. Russia and Japan have thrown their lot in with China, and NATO has collapsed as the spineless Europeans decided there was no point coming to America’s rescue. The plucky Brits are the sole exception, but they play no active role. (The Scots have jumped ship, stealing the blue from the Union flag.) US politicians are mercifully absent from the narrative, apart from the Secretary of Defense.

The book’s title, “Ghost Fleet” is suggestive of piracy on the high seas, and there is an echo of that in the shape of a buccaneering Australian billionaire who obtains a letter of marque from the Americans in return for some space-based derring-do which helps to balance the scales.

So is Ghost Fleet a book that could enable the Sad Puppies and the Social Justice Warriors to come together in literary harmony?

It is certainly kinetic from the get-go, when we see an American astronaut condemned to a cold and lonely death outside the international space station. As the story unfolds we meet a large cast of characters, and we track a few of them in detail through the course of the war. Overall this is a very enjoyable romp, and most definitely a page-turner. It succeeds in providing a vivid insight into what warfare might be like in the coming decades. Maybe it will also serve what feels like its primary purpose: prompting the US defense establishment to think carefully about the wisdom of relying on sensitive technologies supplied by foreign powers.

————————————–

Movie review: Ex Machina

Ex Machina is the sort of movie that is enjoyable and intelligent, but earns few stars. It has great production values and a reasonably good plot, but it is a slight affair. It is like a short story that over-ate.

The setup is simple. Caleb is a clever and likeable young programmer at an ersatz Google called Blue Book. The film opens with him winning the prize of spending a week with Nathan, the revered billionaire founder of the company, at his mountain retreat. The house is fairly unimpressive but its setting is magnificent – a stunning Norwegian valley, in real life – and we are told it takes a helicopter two hours to cross Nathan’s land to get there.

Nathan welcomes Caleb by imploring him to ignore their employment relationship, but proceeds to spend the rest of the week reminding him of it, and playing mind games on his guest. The purpose of the visit is revealed to be a Turing Test: Caleb is to determine whether Nathan’s pet project fembot is a person. Fembot Ava is played by Alicia Vikander, whose fragile, yearning, beautiful and, yes, sexy performance steals the show – notwithstanding some determined scenery-chewing by Oscar Isaac, who plays Nathan as part-Frankenstein, part-Kurtz from Conrad’s Heart of Darkness.

These are the only three significant speaking parts, but director Alex Garland manages to avoid a claustrophobic theatrical feeling. This is his directorial debut, but he was the writer behind Danny Boyle’s Sunshine and 28 Days Later, and he started his career as a novelist, as the author of The Beach.

The film is perhaps best thought of as a sort of prequel to Blade Runner, playing with the same questions of what it means to be a person, and how to decide what level of consciousness and moral worth to confer on another entity.

Nathan is determinedly obnoxious, Caleb is decent and naïve, Ava is vulnerable and sweet. We are meant to want Caleb and Ava to fall in love and escape together, and most of the film is spent teasing us about how this menage will pan out. To its considerable credit, the ending is not telegraphed, even if it doesn’t add anything much new to the canon of science fiction stories.

Even given its modest ambitions, the film is not without its flaws. Not only has Nathan apparently beaten the entire rest of the world to create an artificial general intelligence, but he has done it single-handed, despite his prodigious financial resources. He has also managed to endow his creation with skin and body parts which are miracles of science, being indistinguishable from the real thing.

Ava has clunky, robotic movements at times, to remind us that she is a robot – a redundant decision, given that her torso is made of see-through plastic. Yet her facial muscles convey exquisitely refined emotional transitions, and she has deeper psychological insights than an experienced therapist.

There is a fair amount of prurience, including graphic commentary on what’s between Ava’s artificial legs, and a generous helping of stylised nudity and violence – both implied and witnessed.

Despite these caveats, the science and the philosophy are taken seriously, and the writer seems genuinely interested in the ideas. Ex Machina is a good, if slight, science fiction movie, and that is certainly enough to be thankful for.


———-

Book review: “A Calculated Life” by Anne Charnock


Anne Charnock’s training and experience as a journalist pays off in her debut novel. She has a spare, precise style which makes for comfortable reading. You feel from the start that you are in the hands of a pro.

She also pulls off the neat trick of writing at least two types of book at the same time. A Calculated Life is a coming-of-age story, but it is also a mild dystopia, set in Manchester, England, a city which is recovering from a near-apocalyptic collapse which is never explained. The protagonist, Jayna, is a genetically manipulated human who starts the book in ignorance of many important facts about the world she lives in, and ends it with a much deeper understanding. The reader’s understanding of her world expands in sync with Jayna’s because, although the book is written in the third person, the viewpoint is generally Jayna’s.

Jayna is a Simulant, a human who has been genetically endowed with formidable analytical powers. She spends her days trawling through massive data sets on apparently unconnected phenomena, and finding patterns of correlation and causality. These connections can be very lucrative for her employers, a research and consulting firm.

Jayna lives in a hostel with a group of her peers, who envy her because she works in the private sector, which affords more perks and more interesting work than their government jobs. Normal humans provide their food and other hotel services, and on the surface, the Simulants have comfortable, orderly lives and want for nothing. They appear heavily autistic, and are discouraged from seeking experiences beyond their work and their narrow social lives; it gradually becomes apparent that straying too far can have severe consequences, with transgressors being returned to the labs which made them to have their brains wiped.

As well as the Simulants, there are two classes of “normal” humans. The fortunate ones have implants which enhance their native intelligence, although they have much less intellectual horsepower than the Simulants. They live in suburbs, and their lives seem like those of today’s aspirational urban middle class. They work hard, have happy nuclear families, and host ebullient dinner parties in their spacious designer homes.

The less fortunate have not had implants, sometimes because they were not medically suitable, sometimes because of some personal or family transgression. They live in the “enclaves”, much further out from the centre of town than the suburbs. Their accommodation is cramped and noisy, and allocated by government fiat. Their lives are disordered and violence is common.

In some ways the book feels like a mild version of Brave New World, and Charnock is a good and subtle world-builder, although several aspects of the one she presents here are slightly discordant. Jayna and her friends are different from other people, and there are repeated hints of resentment against them. But they are indisputably human, and Jayna forms several relationships of real affection with members of the other groups. It therefore jars that the normal humans readily assign the Simulants the status of non-humans with no rights whatsoever. Of course this has happened many times in human history – holocausts have happened, usually in times of war or great unrest. Perhaps this is why Charnock set the book in a time when society is recovering from some kind of major disruption.

Some readers have found the ending (which I won’t reveal) too abrupt, but for me it was an apt conclusion to an intriguing tale whose brevity is one of its many charms.

———-

Book review: “Superintelligence” by Nick Bostrom

Nick Bostrom is one of the cleverest people in the world.  He is a professor of philosophy at Oxford University, and was recently voted 15th most influential thinker in the world by the readers of Prospect magazine.  He has laboured mightily and brought forth a very important book, Superintelligence: Paths, Dangers, Strategies.

I hope this book finds a huge audience.  It deserves to.  The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what “a good outcome” actually means.

It’s not an easy read.  Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:

“This has not been an easy book to write.  I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading.  This could prove a narrow demographic.”

This passage demonstrates that Bostrom can write very well indeed.  Unfortunately the search for precision often lures him into an overly academic style.  For this book at least, he might have thought twice about using words like modulo, percept and irenic without explanation – or at all.

Superintelligence covers a lot of territory, and here I can only point out a few of the high points.  Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050.  90% of the researchers think it will arrive by 2100.  Bostrom thinks these dates may prove too soon, but not by a huge margin.

He also thinks that an AGI will become a superintelligence very soon after its creation, and will quickly dominate other life forms (including us), and go on to exploit the full resources of the universe (“our cosmic endowment”) to achieve its goals.  What obsesses Bostrom is what those goals will be, and whether we can determine them.  If the goals are human-unfriendly, we are toast.

He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves.  Superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old Fashioned AI (machine learning, neural networks and so on).

The book’s middle chapter is titled “Is the default outcome doom?”  Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities), or determine its motivation set.  The second half of the book addresses these challenges in great depth.  His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn’t be much point having one if you never opened up the throttle.  His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in.  There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties ensuring that the instructions we write remain effective.  Forever.

Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation.  A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do.  It will discover facts about the laws of physics, and the parameters of intelligence and consciousness that we cannot even guess at.  Surely our instructions will quickly become redundant.  But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong.

In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right.  Towards the end of the book he issues a powerful rallying cry:

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult.  [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible.  … Nor is there a grown-up in sight.  [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.”

Amen to that.

———-

Movie review: Transcendence


It was keenly awaited by people interested in the possible near-term arrival of super-intelligence, but Transcendence has opened to half-empty cinemas and terrible reviews – at the time of writing it has a 20% “fresh” rating on Rotten Tomatoes.

The distributors kept changing the release date, which might indicate they realised the film wouldn’t open with a splash, but hoped it could grow to become a cult classic.  Sadly, I doubt it.  Sadly, because in some ways Transcendence is a fine film with great ambitions.  For me it is one of the best science fiction films of recent years.  But it is severely flawed.

Before I start, there are spoilers ahead.  And a declaration of interest: by sheer coincidence, Transcendence shares some key plot points with my novel, Pandora’s Brain.

The good stuff

The film looks great.  You can see the money on the screen.  You would expect no less when the director is a hugely talented and experienced cinematographer like Wally Pfister.

More importantly, the film asks fascinating and vital questions:  Will an artificial super-intelligence be created soon, and if so, will we like it?  To its credit, it makes a serious and honest attempt to explore some of the possible answers.  In a powerful opening scene, Johnny Depp declares that an artificial brain will soon be built whose “[cognitive] power will be greater than the collective intelligence of every person born in the history of the world.”  Asked by an appalled audience member, “So you want to make a god? Your god?”, he replies candidly, “Isn’t that what humans have always done?”

Many futurists seem to have misunderstood this, viewing the movie as yet another Hollywood essay against technology.  One reviewer says it is a quasi-cerebral film about a man who wants to rule the world: “control technology or it will control you”.  Like many reviews, this completely misses the point.

Transcendence is much more nuanced than that.  It isn’t simply good guys and bad guys slugging it out.  Some of the scientists believe that the super-intelligence they have created remains under the control of a good man, and they observe it accomplishing marvels with nanotechnology.  Others – equally well-intentioned – fear the super-intelligence will lose its empathy for humans, and will come to regard us as obsolete.  One of the scientists changes sides and asks whether the AI is still the man who was uploaded into the computer in the first place.  He gets a sensible answer from one of his new allies: the AI has grown so far beyond the human that it doesn’t matter any more.

OK, so the film plays technological hopscotch in places. The idea that a mind could be uploaded by capturing data with a handful of electrodes placed on the skull is daft.  But this is science fiction: you’re allowed a few bits of technological legerdemain to get the story rolling.

The bad stuff

The trouble is that the story never really does get rolling.  We never really care about the outcome.  There are several reasons for this.

One is Pfister’s curious decision to give the ending away by setting the whole film as a flashback from a technological wasteland where the internet has been destroyed.

Secondly, the film’s pace is oddly slow.  That needn’t be a problem in itself – witness “Her”, the year’s other intriguing film about super-intelligence.  But there is a lack of dramatic tension in the movie, a lack of urgency.  At times it almost seems like a terribly reasonable debate – on the one hand this, but on the other hand that.

But the biggest problem lies with the characters.  Johnny Depp seems pretty bored throughout, and despite Rebecca Hall’s best efforts, the relationship with his wife Evelyn is never compelling.  Several other excellent actors (Cillian Murphy and Kate Mara in particular) are given criminally one-dimensional characters.  Is this what happens when a cinematographer becomes a director?  The tragedy of the film is that you don’t feel awed by the super-intelligence’s wondrous achievements – nor do you feel the fear of its opponents.

The film also has some vices which really should have been avoided.  In a throwback to what should be a bygone age when women simply decorated films and screamed a lot, the leading woman, a brilliant scientist, is suddenly and unreasonably freaked out by the amount of knowledge the super-intelligence has about her.  She is then, feeble-mindedly, reconciled when it finds a way to take human form.  And surely to goodness Hollywood should by now have found less hackneyed ways to kill off powerful aliens and super-intelligences than infecting them with a virus.

The film closes with an ambiguous ending straight out of the Chris Nolan Inception playbook.  The image of the falling drips is so beautiful that you can forgive Pfister for using it twice.

At the other end of the film it was interesting to see Elon Musk in one of the opening scenes – very appropriate given the recent announcement of his sizeable investment in AI firm Vicarious.

TL;DR:

Transcendence is a brave and ambitious film which tries to get its audience to consider some important questions.  Failures in its execution mean it is unlikely to succeed.  Tragically, it could even end up being counter-productive.
