Book review: “Homo Deus” by Yuval Harari
A rather plodding first half may deter some fans of “Sapiens” (Harari’s previous book), but it is worth persevering for the extreme views about algocracy which he introduces in the final third.
Clear and direct
Yuval Harari’s book “Sapiens” was a richly deserved success. Full of intriguing ideas, often both original and convincing, it has a prose style that is clear and direct – a pleasure to read.** His latest book, “Homo Deus”, shares these characteristics, but personally I found the first half dragged a little, and some of the arguments and assumptions left me unconvinced. I’m glad I persevered, however: towards the end he produces a fascinating and important suggestion about the impact of AI on future humans.
Because Harari’s writing is so crisp, you can review it largely in his own words.
From famine, plague and war to immortality, happiness and divinity
Harari opens the book with the claim that for most of our history, homo sapiens has been preoccupied by the three great evils of famine, plague and war. These have now essentially been brought under control, and because “success breeds ambition… humanity’s next targets are likely to be immortality, happiness and divinity.” In the coming decades, Harari says, we will re-engineer humans with biology, cyborg technology and AI.
The effects will be profound: “Once technology enables us to re-engineer human minds, Homo Sapiens will disappear, human history will come to an end, and a completely new kind of process will begin, which people like you and me cannot comprehend. Many scholars try to predict how the world will look in the year 2100 or 2200. This is a waste of time.”
There is, he adds, “no need to panic, though. At least not immediately. Upgrading Sapiens will be a gradual historical process rather than a Hollywood apocalypse.”
Vegetarianism and religion
At this point Harari indulges in a lengthy argument that we should all become vegetarians, asking “is Homo sapiens a superior life form, or just the local bully?” and concluding with the unconvincing (to me) warning that if “you want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans [you should] start by investigating how humans treat their less intelligent animal cousins.” He doesn’t explain why super-intelligent beings would follow the same logic – or lack of logic – as us.
I also find myself uncomfortable with some of his linguistic choices, and two in particular. First, his claim that “now humankind is poised to replace natural selection with intelligent design,” seems to me to pollute an important idea by associating it with a thoroughly discredited term.
Secondly, he is (to my mind) overly keen to attach the label “religion” to pretty much any system for organising people, including humanism, liberalism, communism and so on. For instance, “it may not be wrong to call the belief in economic growth a religion, because it now purports to solve many if not most of our ethical dilemmas.” To many people, a religion with no god is an oxymoron. This habit of seeing most human activity as religious might be explained by the fact that Harari lives in Israel, a country where religious fervour infuses everyday life like smog infuses an industrialising city.
Science escapes being labelled as religion, but Harari has a curious way of thinking about it too: “Neither science nor religion cares that much about the truth, … Science is interested above all in power. It aims to acquire the power to cure diseases, fight wars and produce food.”
A longish section of the book is given over to exploring humanism, which Harari sees as a religion that supplanted Christianity in the West. “Due to [its] emphasis on liberty, the orthodox branch of humanism is known as ‘liberal humanism’ or simply as ‘liberalism’. … During the nineteenth and twentieth centuries, as humanism gained increasing social credibility and political power, it sprouted two very different offshoots: socialist humanism, which encompassed a plethora of socialist and communist movements, and evolutionary humanism, whose most famous advocates were the Nazis.”
Having unburdened himself of all this vegetarianism and religious flavouring, Harari spends the second part of “Homo Deus” considering the future of our species, and on this terrain he recovers the nimble sure-footedness which made “Sapiens” such a great book.
Free will is an illusion
He starts by attacking our strong intuitive belief that we are all unitary, self-directing persons, possessing free will. “To the best of our scientific understanding, determinism and randomness have divided the entire cake between them, leaving not even a crumb for ‘freedom’. … The notion that you have a single self … is just another liberal myth, debunked by the latest scientific research.” This dismissal of personal identity (the “narrating self”) as a convenient fiction plays an important role in the final third of the book.
A curious characteristic of “Homo Deus” is that Harari assumes there is no need to persuade his readers of the enormous impact that new technologies will have in the coming decades. Futurists like Ray Kurzweil, Nick Bostrom, Martin Ford and others (including me) spend considerable effort getting people to comprehend and take into account the astonishing power of exponential growth. Harari assumes everyone is already on board, which is surprising in such a mainstream book. I hope he is right, but I doubt it.
Stop worrying and learn to love the autonomous kill-bot
Harari is also quite happy to swim against the consensus when exploring the impact of these technologies. A lot of ink is currently being spilled in an attempt to halt the progress of autonomous weapons. Harari considers it a waste: “Suppose two drones fight each other in the air. One drone cannot fire a shot without first receiving the go-ahead from a human operator in some bunker. The other drone is fully autonomous. Which do you think will prevail? … Even if you care more about justice than victory, you should probably opt to replace your soldiers and pilots with autonomous robots and drones. Human soldiers murder, rape and pillage, and even when they try to behave themselves, they all too often kill civilians by mistake.”
The economic singularity and superfluous people
As well as dismissing attempts to forestall AI-enabled weaponry, Harari has no truck with the Reverse Luddite Fallacy, the idea that because automation has not caused lasting unemployment in the past it will not do so in the future. “Robots and computers … may soon outperform humans in most tasks. … Some economists predict that sooner or later, un-enhanced humans will be completely useless. … The most important question in twenty-first-century economics may well be what to do with all the superfluous people.”
My income is OK, but what am I for?
Harari has interesting things to say about some of the dangers of technological unemployment. He is sanguine about the ability of the post-jobs world to provide adequate incomes to the “superfluous people”, but like many other writers, he asks where we will find meaning in a post-jobs world. “The technological bonanza will probably make it feasible to feed and support the useless masses even without any effort on their side. But what will keep them occupied and content? People must do something, or they will go crazy. What will they do all day? One solution might be offered by drugs and computer games. … Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences.”
Personally, I think he has got this the wrong way round. Introducing Universal Basic Income (or some similar scheme to provide a good standard of living to the unemployable) will probably prove to be a significant challenge. Persuading the super-rich (whether they be humans or algorithms) to provide the rest of us with a comfortable income will, I hope, be possible, but it may have to be done globally and within a very short time-frame. If we do manage this transition smoothly, I suspect the great majority of people will quickly find worthwhile and enjoyable things to do with their new-found leisure. Rather like many pensioners do today, and aristocrats have done for centuries.
The Gods and the Useless
I have more common ground with Harari when he argues that inequality of wealth and income may become so severe that it leads to “speciation” – the division of the species into completely separate groups, whose vital interests may start to diverge. “As algorithms push humans out of the job market, wealth might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social inequality.”
Pursuing this idea, he coins the rather brutal phrase “the Gods and the Useless”. He points out that in the past, the products of technological advances have disseminated rapidly through economies and societies, but he thinks this may change. In the vital area of medical science, for instance, we are moving from an era when the goal was to bring as many people as possible up to a common standard of “normal health”, to a world in which the cognitive and physical performance of certain individuals may be raised to new and extraordinary heights. This could have dangerous consequences: when tribes of humans with different levels of capability have collided it has rarely gone well for the less powerful group.
The technological singularity
The technological singularity pops up briefly, and again Harari sees no need to expend much effort persuading his mainstream audience that this startling idea is plausible. “Some experts and thinkers, such as Nick Bostrom, warn that humankind is unlikely to suffer this degradation, because once artificial intelligence surpasses human intelligence, it might simply exterminate humankind.”
Harari leaves this idea hanging in the air, though, and finally we arrive at the main event, in which he predicts the dissolution of not only humanism, but of the whole notion of individual human beings. “The new technologies of the twenty-first century may thus reverse the humanist revolution, stripping humans of their authority, and empowering non-human algorithms instead.” “Once Google, Facebook and other algorithms become all-knowing oracles, they may well evolve into agents and finally into sovereigns.” As a consequence, “humans will no longer be autonomous entities directed by the stories their narrating self invents. Instead, they will be integral parts of a huge global network.” This seems to me to be an extreme version of an idea called algocracy, in which humans are governed by algorithms.
As an example of how this extreme algocracy could come about, “suppose my narrating self makes a New Year resolution to start a diet and go to the gym every day. A week later, when it is time to go to the gym, the experiencing self asks Cortana to turn on the TV and order pizza. What should Cortana do?” Harari thinks Cortana (or Siri, or whatever they are called then) will know us better than we do and will make wiser choices than we would in almost all circumstances. We will have no sensible option other than to hand over almost all decision-making to them.
Two new religions
Given his religious turn of mind, it is perhaps inevitable that Harari sees this extreme algocracy as leading to the birth of not one, but two new religions. “The most interesting place in the world from a religious perspective is not the Islamic State or the Bible Belt, but Silicon Valley.” Algocracy, he thinks, will generate two new “techno-religions … techno-humanism and data religion”, or “Dataism”.
“Techno-humanism agrees that Homo Sapiens as we know it has run its historical course and will no longer be relevant in the future, but concludes that we should therefore use technology in order to create Homo Deus – a much superior human model.” Harari thinks that techno-humanism is incoherent because if you can always improve yourself then you are no longer an independent agent: “Once people could design and redesign their will, we could no longer see it as the ultimate source of all meaning and authority. For no matter what our will says, we can always make it say something else.” “What”, he asks, “will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?”
Dataism trumps Techno-humanism
“Hence a bolder techno-religion seeks to sever the humanist umbilical cord altogether. … The most interesting emerging religion is Dataism, which venerates neither gods nor man – it worships data. … According to Dataism, King Lear and the flu virus are just two patterns of data flow that can be analysed using the same basic concepts and tools. … Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to cover the whole galaxy and even the whole universe. This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.”
“Dataism isn’t limited to idle prophecies. Like every religion, it has its practical commandments. First and foremost, a Dataist ought to maximise data flow by connecting to more and more media, and producing and consuming more and more information.” So people who record every aspect of their lives on Facebook and Twitter are not clogging up the airwaves with junk after all; they are simply budding Dataists.
I am unpersuaded by the idea that mere data, undifferentiated by anything beyond quantity and complexity, could become sovereign on this planet and throughout the universe. I think Harari has missed an interesting opportunity: if he replaced the notion of data with the notion of consciousness, he might be onto something important. It would not be the first time that a thinker proposed that mankind’s destiny (a religiously loaded word which he would perhaps approve) was to merge its individual minds into a single consciousness and spread it throughout the cosmos, but it might be the first time that a genuinely mainstream book did so.
In any case, Harari deserves great credit for staring directly at the massive transformations heading our way, and following his intuitions to their logical conclusions.
** Sapiens does have its critics, including the philosopher John Danaher, who thinks it over-rated.
Book review: “The Age of Em” by Robin Hanson
I can’t remember ever reading a book before which I liked so much, while disagreeing with so much in it. This is partly because the author is such an amiable fellow. He really wants you to like him, and despite being quite eminent in his field, he displays a disarming humility: his acknowledgements extend to a page and a half and include many equally eminent people, but in spite of that he says:
“I’ve never felt as intellectually isolated or at risk as when writing this book, and I hope my desert days end now, as readers like you join me in discussing The Age of Em.”
The writing style is direct, informal and engaging:
“If you can’t see the point in envisioning the lives of your descendants, you’d best quit now, as that’s mostly all I’ve got.”
And the book addresses an important subject: the future:
“If the future matters more than the past, because we can influence it, why do we have far more historians than futurists?”
The book is essentially a forecast of what life will be like for the first artificial general intelligences, which he calls ‘ems’ – short for emulations. That may sound like a heroic undertaking for a non-fiction writer, and indeed it is. And Robin does have a heroic faith in the power of forecasts. On page 34 he says, “make no mistake, it is possible to forecast the future” and a few pages later, “foragers could have better anticipated the industrial era if they had first understood the intervening farmer era”.
Scepticism about progress in AI
Robin may be right that brain emulation will produce an artificial general intelligence (AGI) before continuing progress in machine learning or other types of AI research does. Reverse engineering a human brain – slicing it thinly, tracing the connectome, and reproducing it in silico – does seem a more assured route to AGI than the alternative, which may require conceptual leaps that are as yet unknown.
But Robin’s insistence that AI is making only modest advances, and will generate nothing much of interest before uploading arrives, seems dogmatic. Because of this claim, he is highly critical of the view that technological unemployment will be widespread in the next few decades. Fair enough, he might be right, but obviously I doubt it. He is also rather dismissive of major changes in society being caused by virtual reality, augmented reality, the internet of things, 3D printing, self-driving cars, and all the other astonishing technologies being developed and introduced as we speak.
It would take too long to list all the other things in the book I disagree with, but here are a few. He seems to think that when the first ems are created, they will very quickly be perfect replications of the target human minds. It seems to me more likely that we will create a series of approximations of the target person, and some of these early creations may raise very awkward ethical considerations. Greg Egan’s novel Zendegi is a good exploration of this.
Robin is a long-time sceptic of the intelligence explosion – the idea that the first AGI will be recursively improved to create a superintelligence, and that this will happen within months if not weeks. Admittedly, The Age of Em only covers a period of two years at the subjective speed of humans, although it takes many thousands of years from the point of view of the ems. Given that his world, as described, has an astonishing amount of computational capacity available, I found this wholly implausible. The incentive to enhance the intelligence of an entity which works for you is irresistible, and once we have models of minds in silico, it will be much easier to do so.
The ems are all quite happy workaholics – largely because they are emulations of workaholics. They are also happy to be copied, and for the copies of themselves to be shut down again. A remark on page 49 is telling in this regard:
“the concepts of “identity” and “consciousness” … play little role in the physical, engineering, social, and human sciences that I will rely on in this book. So I will now say little more on those topics”
It seems to me that if and when we create artificial people they will care a lot about their consciousness and their identity!
The humans in this world are all happy to be retired, and have the ems create everything they need. I think the scenario of radical abundance is definitely achievable, but I don’t think it’s a slam dunk, and I would imagine much more interaction – good and bad – between ems and humans than Robin seems to expect.
Religion and going multi-planetary
A couple of smaller but important comments. Robin thinks ems will be intellectually superior to most humans, not least because they will be modelled on the best of us. He therefore thinks they will be religious. Apart from the US, always an exceptional country, the direction of travel in that regard is firmly in the other direction.
And space travel. Robin argues that we will keep putting off trying to colonise the stars because whenever you send a ship out there, it would always be overtaken by a later, cheaper one which benefits from better technology. This ignores one of the main reasons for doing it: to improve our chances of survival by making sure all our eggs aren’t in the one basket that is this pale blue dot. It doesn’t matter if you are overtaken, because one day the overtaking might well be stopped by a nuclear war, asteroid strike, gamma ray burst, global pathogen or some other disaster. And it might be stopped the day you leave.
A fascinating and engaging book, containing much to enjoyably disagree with.
Book review: “Ghost Fleet” by P.W. Singer and August Cole
There’s a war in progress. Large numbers of people have taken sides, and each side thinks the other is mad, bad, and irresponsible. The vitriol is intense, and it shows no sign of letting up.
I’m not talking about the war between China and the USA which is depicted in Ghost Fleet, but the war between the Sad Puppies and the Social Justice Warriors over the Hugo Awards, the most prestigious awards in the science fiction literary world. Bear with me – there’s a link.
If you’re remotely interested in science and speculative fiction, you’ll know that the Sad Puppies felt that the Hugos had been captured and colonised by Social Justice Warriors, a tribe which values political correctness over storytelling. They cited as an example the “Ancillary” series by Ann Leckie, which they claimed was lauded more because of its gender politics than because of any literary or storytelling merit. They drew up a “slate” of authors who they felt represented a fairer cross-section of the science fiction which “ordinary” SF fans like to read, and promoted this slate among the people who vote for the awards.
The reaction from their opponents has been furious. The Sad Puppies have been labelled reactionary, racist, sexist, and pretty much every other “-ist” you can think of.
But here comes something that perhaps – just perhaps – both sides in this vicious row could agree on. Ghost Fleet is a hugely successful new novel which combines kinetic storytelling with a careful balance of gender roles, even at the front line of global warfare. Ghost Fleet was published in June, and its PR team deserve enormous praise for securing detailed coverage in prestigious media like The Economist and The Atlantic. Its authors are a defense policy analyst and a defense journalist, so perhaps they could bring valuable diplomatic skills to the task of mediating between the sides in the Hugos war.
Ghost Fleet is a techno-thriller, and its authors are explicitly seeking to claim the mantle of Tom Clancy. Although Michael Crichton, the other great exponent of the techno-thriller genre, diligently swerved the label of science fiction, this is certainly fiction powered by an interest in science, and what technology will do to us.
The book’s premise is that China, having overthrown the Communist Party, is gripped by the same sense of “manifest destiny” which drove American settlers across their continent. Fulfilling this destiny means hobbling the US navy and air force, which it achieves with a devastating attack on Pearl Harbor in Hawaii, and other Pacific locations. The success of this attack is predicated on China’s ability to knock out America’s surveillance capabilities, starting with its satellites and other space-based assets. It also turns out that for decades, China has been inserting treacherous algorithms inside many of the silicon chips that it has supplied to the US military, both directly to defense contractors, and indirectly in off-the-shelf commercial components.
This is such an important aspect of the book that it seems the authors are trying to send a message to the top of the defense establishment which they both inhabit: beware over-reliance on digital equipment. They buttress their message by peppering the novel with endnotes, very unusual in a novel, which provide references for all the semi-futuristic technologies they describe. I’ll take their word for it that the 300-odd endnotes are accurate.
With the US Pacific Fleet demolished from Japan to Pearl Harbor, and Hawaii and Guam occupied, America’s only way to fight back is to bring a motley assortment of ships and planes – the ghost fleet – out of mothballs and deploy them with lashings of good old-fashioned grit and self-sacrifice.
Despite the ghost fleet’s geriatric status, the book’s action remains futuristic. The combatants all survive on “stims” instead of coffee, and most of them – especially the younger ones – spend much of their time in augmented reality, wearing “viz glasses”. Drones and battle bots play important roles too. The most interesting technology deployed, though, is a brain-computer interface which is used to interrogate, torture and then execute a Russian ally of the Chinese who is suspected of treason. The extent to which human brains can be stimulated and simulated is a fascinating subject which I explore further in my novel, Pandora’s Brain, and my non-fiction book, Surviving AI.
The book outlines the geo-political circumstances of the conflict. Russia and Japan have thrown their lot in with China, and NATO has collapsed as the spineless Europeans decided there was no point coming to America’s rescue. The plucky Brits are the sole exception, but they play no active role. (The Scots have jumped ship, stealing the blue from the Union flag.) US politicians are mercifully absent from the narrative, apart from the Secretary of Defense.
The book’s title, “Ghost Fleet” is suggestive of piracy on the high seas, and there is an echo of that in the shape of a buccaneering Australian billionaire who obtains a letter of marque from the Americans in return for some space-based derring-do which helps to balance the scales.
So is Ghost Fleet a book that could enable the Sad Puppies and the Social Justice Warriors to come together in literary harmony?
It is certainly kinetic from the get-go, when we see an American astronaut condemned to a cold and lonely death outside the international space station. As the story unfolds we meet a large cast of characters, and we track a few of them in detail through the course of the war. Overall this is a very enjoyable romp, and most definitely a page-turner. It succeeds in providing a vivid insight into what warfare might be like in the coming decades. Maybe it will also serve what feels like its primary purpose too: prompting the US defense establishment to think carefully about the wisdom of relying on sensitive technologies supplied by foreign powers.
Movie review: Ex Machina
Ex Machina is the sort of movie that is enjoyable and intelligent, but earns few stars. It has great production values and a reasonably good plot, but it is a slight affair. It is like a short story that over-ate.
The setup is simple. Caleb is a clever and likeable young programmer at an ersatz Google called Blue Book. The film opens with him winning the prize of spending a week with Nathan, the revered billionaire founder of the company, at his mountain retreat. The house is fairly unimpressive but its setting is magnificent – a stunning Norwegian valley, in real life – and we are told it takes a helicopter two hours to cross Nathan’s land to get there.
Nathan welcomes Caleb by imploring him to ignore their employment relationship, but proceeds to spend the rest of the week reminding him of it, and playing mind games on his guest. The purpose of the visit is revealed to be a Turing Test: Caleb is to determine whether Nathan’s pet project fembot is a person. Fembot Ava is played by Alicia Vikander, whose fragile, yearning, beautiful and yes, sexy performance steals the show – notwithstanding some determined scenery-chewing by Oscar Isaac, who plays Nathan as part-Frankenstein, part-Kurtz from Conrad’s Heart of Darkness.
These are the only three significant speaking parts, but director Alex Garland manages to avoid a claustrophobic theatrical feeling. This is his directorial debut, but he was the writer behind Danny Boyle’s Sunshine and 28 Days Later, and he started his career as a novelist, as the author of The Beach.
The film is perhaps best thought of as a sort of prequel to Blade Runner, playing with the same questions of what it means to be a person, and how to decide what level of consciousness and moral worth to confer on another entity.
Nathan is determinedly obnoxious, Caleb is decent and naïve, Ava is vulnerable and sweet. We are meant to want Caleb and Ava to fall in love and escape together, and most of the film is spent teasing us about how this menage will pan out. To its considerable credit, the ending is not telegraphed, even if it doesn’t add anything much new to the canon of science fiction stories.
Even given its modest ambitions, the film is not without its flaws. Not only has Nathan apparently beaten the entire rest of the world to create an artificial general intelligence, but he has done it single-handed, despite his prodigious financial resources. He has also managed to endow his creation with skin and body parts which are miracles of science, being indistinguishable from the real thing.
Ava has clunky, robotic movements at times, to remind us that she is a robot – a redundant decision, given that her torso is made of see-through plastic. Yet her facial muscles convey exquisitely refined emotional transitions, and she has deeper psychological insights than an experienced therapist.
There is a fair amount of prurience, including graphic commentary on what’s between Ava’s artificial legs, and a generous helping of stylised nudity and violence – both implied and witnessed.
Despite these caveats, the science and the philosophy is taken seriously, and the writer seems genuinely interested in the ideas. Ex Machina is a good, if slight, science fiction movie, and that is certainly enough to be thankful for.
Book review: “A Calculated Life” by Anne Charnock
Anne Charnock’s training and experience as a journalist pays off in her debut novel. She has a spare, precise style which makes for comfortable reading. You feel from the start that you are in the hands of a pro.
She also pulls off the neat trick of writing at least two types of book at the same time. A Calculated Life is a coming-of-age story, but it is also a mild dystopia, set in Manchester, England, a city which is recovering from a near-apocalyptic collapse which is never explained. The protagonist, Jayna, is a genetically manipulated human who starts the book in ignorance of many important facts about the world she lives in, and ends it with a much deeper understanding. The reader’s understanding of her world expands in sync with Jayna’s because, although it is written in the third person, the viewpoint is generally Jayna’s.
Jayna is a Simulant, a human who has been genetically endowed with formidable analytical powers. She spends her days trawling through massive data sets on apparently unconnected phenomena, and finding patterns of correlation and causality. These connections can be very lucrative for her employers, a research and consulting firm.
Jayna lives in a hostel with a group of her peers, who envy her because she works in the private sector, which affords more perks and more interesting work than their government jobs. Normal humans provide their food and other hostel services, and on the surface the Simulants have comfortable, orderly lives and want for nothing. They appear heavily autistic, and are discouraged from seeking experiences beyond their work and their narrow social lives; it gradually becomes apparent that straying too far can have severe consequences, with transgressors being returned to the labs which made them to have their brains wiped.
As well as the Simulants, there are two classes of “normal” humans. The fortunate ones have implants which enhance their native intelligence, although they have much less intellectual horsepower than the Simulants. They live in suburbs, and their lives seem like those of today’s aspirational urban middle class. They work hard, have happy nuclear families, and host ebullient dinner parties in their spacious designer homes.
The less fortunate have not had implants, sometimes because they were not medically suitable, sometimes because of some personal or family transgression. They live in the “enclaves”, much further out from the centre of town than the suburbs. Their accommodation is cramped and noisy, and allocated by government fiat. Their lives are disordered and violence is common.
In some ways the book feels like a mild version of Brave New World, and Charnock is a good and subtle world-builder, although several aspects of the one she presents here are slightly discordant. Jayna and her friends are different from other people, and there are repeated hints of resentment against them. But they are indisputably human, and Jayna forms several relationships of real affection with members of the other groups. It therefore jars that the normal humans readily assign the Simulants the status of non-humans with no rights whatsoever. Of course this has happened many times in human history – holocausts have happened, usually in times of war or great unrest. Perhaps this is why Charnock set the book in a time when society is recovering from some kind of major disruption.
Some readers have found the ending (which I won’t reveal) too abrupt, but for me it was an apt conclusion to an intriguing tale whose brevity is one of its many charms.
Nick Bostrom is one of the cleverest people in the world. He is a professor of philosophy at Oxford University, and was recently voted 15th most influential thinker in the world by the readers of Prospect magazine. He has laboured mightily and brought forth a very important book, Superintelligence: paths, dangers, strategies.
I hope this book finds a huge audience. It deserves to. The subject is vitally important for our species, and no-one has thought more deeply or more clearly than Bostrom about whether superintelligence is coming, what it will be like, and whether we can arrange for a good outcome – and indeed what “a good outcome” actually means.
It’s not an easy read. Bostrom has a nice line in wry self-deprecating humour, so I’ll let him explain:
“This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded. … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.”
This passage demonstrates that Bostrom can write very well indeed. Unfortunately the search for precision often lures him into an overly academic style. For this book at least, he might have thought twice about using words like modulo, percept and irenic without explanation – or at all.
Superintelligence covers a lot of territory, and here I can only point out a few of the high points. Bostrom has compiled a meta-survey of 160 leading AI researchers: 50% of them think that an artificial general intelligence (AGI) – an AI which is at least our equal across all our cognitive functions – will be created by 2050. 90% of the researchers think it will arrive by 2100. Bostrom thinks these dates may prove too soon, but not by a huge margin.
He also thinks that an AGI will become a superintelligence very soon after its creation, and will quickly dominate other life forms (including us), and go on to exploit the full resources of the universe (“our cosmic endowment”) to achieve its goals. What obsesses Bostrom is what those goals will be, and whether we can determine them. If the goals are human-unfriendly, we are toast.
He does not think that intelligence augmentation or brain-computer interfaces can save us by enabling us to reach superintelligence ourselves. Superintelligence is a two-horse race between whole brain emulation (copying a human brain into a computer) and what he calls Good Old Fashioned AI (machine learning, neural networks and so on).
The book’s middle chapter is titled “Is the default outcome doom?” Uncharacteristically, Bostrom is coy about answering his own question, but the implication is yes, unless we can control the AGI (constrain its capabilities), or determine its motivation set. The second half of the book addresses these challenges in great depth. His conclusion on the control issue is that we probably cannot constrain an AGI for long, and anyway there wouldn’t be much point having one if you never opened up the throttle. His conclusion on the motivation issue is that we may be able to determine the goals of an AGI, but that it requires a lot more work, despite the years of intensive labour that he and his colleagues have already put in. There are huge difficulties in specifying what goals we would like the AGI to have, and if we manage that bit then there are massive further difficulties ensuring that the instructions we write remain effective. Forever.
Now perhaps I am being dense, but I cannot understand why anyone would think that a superintelligence would abide forever by rules that we installed at its creation. A successful superintelligence will live for aeons, operating at thousands or millions of times the speed that we do. It will discover facts about the laws of physics, and the parameters of intelligence and consciousness that we cannot even guess at. Surely our instructions will quickly become redundant. But Bostrom is a good deal smarter than me, and I hope that he is right and I am wrong.
In any case, Bostrom’s main argument – that we should take the prospect of superintelligence very seriously – is surely right. Towards the end of the book he issues a powerful rallying cry:
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seems almost negligible. … Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.”
Movie review: Transcendence
It was keenly awaited by people interested in the possible near-term arrival of super-intelligence, but Transcendence has opened to half-empty cinemas and terrible reviews – at the time of writing it has a 20% “fresh” rating on Rotten Tomatoes.
The distributors kept changing the release date, which might indicate they realised the film wouldn’t open with a splash, but hoped it could grow to become a cult classic. Sadly, I doubt it. Sadly, because in some ways Transcendence is a fine film with great ambitions. For me it is one of the best science fiction films of recent years. But it is severely flawed.
Before I start, there are spoilers ahead. And a declaration of interest: by sheer coincidence, Transcendence shares some key plot points with my novel, Pandora’s Brain.
The Good stuff
The film looks great. You can see the money on the screen. You would expect no less when the director is a hugely talented and experienced cinematographer like Wally Pfister.
More importantly, the film asks fascinating and vital questions: Will an artificial super-intelligence be created soon, and if so, will we like it? To its credit, it makes a serious and honest attempt to explore some of the possible answers. In a powerful opening scene, Johnny Depp declares that an artificial brain will soon be built whose “[cognitive] power will be greater than the collective intelligence of every person born in the history of the world.” Asked by an appalled audience member, “So you want to make a god? Your god?”, he replies candidly, “Isn’t that what humans have always done?”
Many futurists seem to have misunderstood this, viewing the movie as yet another Hollywood essay against technology. One reviewer calls it a quasi-cerebral film about a man who wants to rule the world: “control technology or it will control you”. Like many reviews, this completely misses the point.
Transcendence is much more nuanced than that. It isn’t simply good guys and bad guys slugging it out. Some of the scientists believe that the super-intelligence they have created remains under the control of a good man, and they observe it accomplishing marvels with nanotechnology. Others – equally well-intentioned – fear the super-intelligence will lose its empathy for humans, and will come to regard us as obsolete. One of the scientists changes sides and asks whether the AI is still the man who was uploaded into the computer in the first place. He gets a sensible answer from one of his new allies: the AI has grown so far beyond the human that it doesn’t matter any more.
OK, so the film plays technological hopscotch in places. The idea that a mind could be uploaded by capturing data with a handful of electrodes placed on the skull is daft. But this is science fiction: you’re allowed a few bits of technological legerdemain to get the story rolling.
The bad stuff
The trouble is that the story never really does get rolling. We never come to care about the outcome. There are several reasons for this.
One is Pfister’s curious decision to give the ending away by setting the whole film as a flashback from a technological wasteland where the internet has been destroyed.
Secondly, the film’s pace is oddly slow. That needn’t be a problem in itself – witness “Her”, the year’s other intriguing film about super-intelligence. But there is a lack of dramatic tension in the movie, a lack of urgency. At times it almost seems like a terribly reasonable debate – on the one hand this, but on the other hand that.
But the biggest problem lies with the characters. Johnny Depp seems pretty bored throughout, and despite Rebecca Hall’s best efforts, the relationship with his wife Evelyn is never compelling. Several other excellent actors (Cillian Murphy and Kate Mara in particular) are given criminally one-dimensional roles. Is this what happens when a cinematographer becomes a director? The tragedy of the film is that you don’t feel awed by the super-intelligence’s wondrous achievements – nor do you feel the fear of its opponents.
The film also has some vices which really should have been avoided. In a throwback to what should be a bygone age when women simply decorated films and screamed a lot, the leading woman, a brilliant scientist, is suddenly and unreasonably freaked out by the amount of knowledge the super-intelligence has about her. She is then, feeble-mindedly, reconciled when it finds a way to take human form. And surely to goodness Hollywood should by now have found less hackneyed ways to kill off powerful aliens and super-intelligences than infecting them with a virus.
The film closes with an ambiguous ending straight out of the Chris Nolan Inception playbook. The image of the falling drops is so beautiful that you can forgive Pfister for using it twice.
At the other end of the film it was interesting to see Elon Musk in one of the opening scenes – very appropriate given the recent announcement of his sizeable investment in AI firm Vicarious.
Transcendence is a brave and ambitious film which tries to get its audience to consider some important questions. Failures in its execution mean it is unlikely to succeed. Tragically, it could even end up being counter-productive.