Talks and Interviews

A one-minute video


This excerpt from a video of an event hosted at London’s Hospital Club by the nice people at Creative Capital is a very quick introduction to what I talk and write about.

Discussing AI with George Osborne



One of the many worrying aspects of the Brexit referendum in the UK and the Trumpularity in the US is that most politicians are not yet talking about the challenges posed by the coming impact of powerful artificial intelligence.  This needs to change.

A conversation I had in November 2016 with George Osborne (until recently the UK’s Chancellor of the Exchequer) gave modest grounds for hope.

The video (16 minutes) contains excerpts from a recent panel discussion called “Ask Me Anything About the Future”.  Hosted by Bloomberg, it was organised by Force Over Mass, an early-stage investment fund manager.  It was very ably chaired by David Wood, who runs the London Futurists meetup group.

Beringea talk, November 2016


Beringea is one of the UK’s leading private equity companies, investing in high-growth businesses.  They invited me to address their annual investors’ meeting.

PwC / ICAEW talk, July 2016


This is a talk I gave in July 2016 at an event hosted jointly by PwC and the ICAEW, the Institute of Chartered Accountants in England and Wales.

ThinkNation talk, December 2015

ThinkNation is a brilliant initiative. The brainchild of Lizzie Hodgson, it is “where young people, artists and thought leaders tackle how technology is impacting everyday life and shaping our futures.” Given the profound importance of the changes sweeping through our lives in the coming years and decades, there aren’t many more important subjects to address.

So I was very pleased to deliver this talk (11 minutes) to an impressive group of 14-18 year-olds at a ThinkNation conference in December.  It was followed by this discussion (20 minutes).

London Futurists talk, October 2016


Run by David Wood, London Futurists is London’s premier meetup group for people interested in the future.  This talk was entitled, “How I learned to stop worrying and love the AI.”  The sound quality is fair, and I am largely in darkness – an ideal combination.

Discussion of the machine revolution at Fundacion BankInter, Madrid, November 2015


Fundacion BankInter is a leading global think tank based in Madrid.  In 2015 it investigated the idea that machine intelligence may lead to technological unemployment.  A workshop in June led to a report which was published in November, and the Fundacion asked me and Juan Francisco Blanes, a roboticist, to give talks at the launch.

With splendid irony, my computer crashed during the presentation, so fans of schadenfreude will particularly enjoy the section at 23 minutes 37 seconds.  Fortunately, the Fundacion staff came to the rescue with great efficiency and aplomb, and the talk re-starts at 28 minutes 27 seconds.

Debate at the Science Museum IMAX, November 2015

CIPA, the Chartered Institute of Patent Attorneys, organised a debate on the motion “This House believes that within 25 years, a patent will be filed and approved without human intervention.”  Together with Chrissie Lightfoot, author of The Naked Lawyer *, I spoke in favour of the motion.  Although the audience was mostly patent lawyers, the motion passed, 80 votes to 60.

The event was ably chaired by Tom Clarke, Channel 4’s Science Editor, and our valiant but trounced opponents were Nigel Hanley and Ilya Kazi.

A YouTube clip of edited highlights is here, and this is a review of the event by James Nurton, the UK’s go-to man for IP.


* Essential reading for lawyers!

Talk and discussion at the launch of “Surviving AI”, September 2015 at Google’s London Campus



Compered by Kenn Cukier of The Economist and chaired by David Wood of London Futurists.

Interview on Robot Overlordz, October 2015


Discussion with podcasters Mike and Matt.

Interview on Review the Future, September 2015


Discussion with podcaster Jon Perry.

Panel discussion at Playfair Capital event, June 2015

Discussion with Ben Medlock, co-founder of Swiftkey, moderated by Sally Davies of the FT.

A talk at the Future Trends Forum organised by the Fundacion Innovacion BankInter, Madrid, June 2015

A talk about the possibility of advancing AI causing technological unemployment



Talk at the launch of Fast Future’s book, “The Future of Business”, June 2015

A talk based on a chapter I contributed to the book.


Interview on Singularity 1 on 1, April 2015

Discussion with Nikola Danaylov, the creator of Singularity Weblog.


Promotional video, March 2015

Presentation to the London Futurists Conference, April 2014

I grew up reading science fiction.  This was handy when I got to university, because my subject was philosophy, and I discovered that science fiction is essentially philosophy in fancy dress.  But it wasn’t until the year 2000, two decades after I graduated, that I came across Ray Kurzweil, who made me consider the astounding idea that conscious, super-intelligent machines might be created in my lifetime. As a science fiction fan I’d long thought they would arrive one day, but I’d assumed it would be centuries away, long after my death.  I’ve been thinking about the implications of that idea ever since, and wondering how to get other people thinking about it too.

Another decade later I semi-retired, which gave me the time to try to do something about it, by writing a novel on the subject, called Pandora’s Brain.

These are the six questions I’ve been thinking about all that time.

1. Can we create a human-level artificial intelligence, an artificial general intelligence (AGI)?
2. If so, when?
3. Will AGI lead to super-intelligence?
4. If super-intelligence arrives, will we like it?
5. Can we upload our minds into computers?
6. Can we de-risk the arrival of super-intelligence?

Let’s start with some definitions.  I take the terms “human-level Artificial Intelligence”, “strong AI”, and “artificial general intelligence” to mean pretty much the same thing: an AI which has all the cognitive abilities that a human has.

Intelligence is essentially about solving various kinds of problems, whereas consciousness is about personal experience. It’s the sense you have of blue, of heat, and of your personal identity as a consistent thing over time.  Our intuition is that intelligence and consciousness are correlated, but we don’t know how far.

We can’t define consciousness, but we all know the test.  The Turing Test is a brilliant idea from a brilliant man.  I think the people who deny its validity are wrong: it is after all how we determine whether other people are conscious persons or not.

The potential development of greatest interest to me is Super-intelligence.  If and when an AGI is built and then becomes a super-intelligence everything will change, and the world will become either wonderful or disastrous.  The arrival of super-intelligence will be the most important development since the invention of agriculture.  It will quite literally determine the future of humanity.  It could happen slowly, or very quickly, in the form of an intelligence explosion.  And the incredible thing is that people in this room today may witness it.

I don’t use the word Singularity because it means many different things to different people, and also because it has acquired unfortunate pseudo-religious overtones.

1. Can we create a human-level artificial intelligence, an artificial general intelligence (AGI)?

Let’s not kid ourselves: it is very, very hard.  Imagine giving every inhabitant of New York City 1,000 strings.  Each string is attached to another inhabitant, and carries up to 200 signals every second, travelling at 300 metres a second.  Now multiply that city by ten thousand.  That is a reasonable model of a single human brain – the amazing 3 1/2 pounds of gloop inside your skull.
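The arithmetic behind this analogy is easy to check. Here is a minimal sketch using the talk’s own figures; note that the New York population used here, and the neuroscience comparison values in the comments, are my rough assumptions, not part of the talk:

```python
# Back-of-envelope arithmetic for the New York City brain analogy.
# Strings per person, signals per second, and the 10,000x city multiplier
# are the talk's figures; the NYC population of ~8.5 million is an assumption.

nyc_population = 8_500_000     # assumed inhabitants of one "city"
strings_per_person = 1_000     # connections per inhabitant (cf. synapses per neuron)
signals_per_second = 200       # peak signal rate per connection
cities = 10_000                # scale-up factor from the talk

neurons = nyc_population * cities
synapses = neurons * strings_per_person
peak_events_per_second = synapses * signals_per_second

print(f"neurons:  {neurons:.2e}")   # ~8.5e10 - close to rough estimates of ~8.6e10 in a human brain
print(f"synapses: {synapses:.2e}")  # ~8.5e13 - at the low end of common estimates (1e14 to 1e15)
print(f"peak signalling events per second: {peak_events_per_second:.2e}")  # ~1.7e16
```

So on the analogy’s own numbers, the “ten thousand New Yorks” land in the right ballpark for the neuron count of a human brain.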

And that’s just the hardware.  If consciousness were simply an emergent property of sufficiently complicated information processing then the internet should have woken up by now – as it does in Robert Sawyer’s Wake, Watch and Wonder trilogy.  In addition to the hardware we also need the software – the wiring diagram, and instructions for how to initiate the firing.

There are three ways to get at this.  The first is Reverse Engineering, in which an existing brain is analysed either destructively – by cutting it into very thin slices – or non-destructively, by increasingly advanced scanning technologies.  This is what the EU-funded Human Brain Project led by Henry Markram is doing, and they have just announced an important collaboration with its American counterpart, President Obama’s BRAIN Initiative.

The second approach is Incremental Development, where you assemble the best existing AI systems that do search, natural language processing, deep learning and so on, you feed them more and more data, and you use trial and error to improve them.  This is what Google is doing, it’s what IBM is doing with Watson, and it is happening in many other labs around the world.

An important feature of both these approaches is that you do not need to know exactly how a human brain produces a mind in order to make your own artificial one.  Some people, including Marvin Minsky and Gary Marcus, claim that you can’t manufacture a mind unless you pretty much know how one works beforehand.  I think this is an unproven assertion.

The third approach does involve understanding how the brain works.  You develop a comprehensive theory of mind and then build your brain.  This looks even harder.

Will any of these approaches work?  I can’t tell you for certain, and I don’t believe anyone else can either.  To coin a phrase, the science of brain building is most definitely not settled.

Here are some of the reasons people give for why it might fail.  Maybe brain builders will have to capture time-series data, or model the brain in more granular detail than the neuronal level.  But these requirements would simply delay AGI rather than prevent it forever.  The other reasons I hear seem implausible to me.

So I’ve yet to hear a good reason why we can’t build an AGI, and there certainly is a lot of money being spent trying.

2. If so, when?

The whole project, both hardware and software, is driven by Moore’s Law – the observation that the performance of computers doubles every 18 months.  I don’t think we should be too upset by the fact that Gordon Moore himself is sceptical about AGI being created any time soon.

The only thing we know about forecasts is that they are wrong; we just don’t know by how much, or in which direction.  History’s most famous exaggerated forecast was by Robert Malthus, and of course there have also been famous under-forecasts, such as Alexander Bell’s suggestion that there might one day be a telephone in every city, and Thomas Watson’s comment that the global market for computers might be as big as five.  As if to underline how unreliable this whole forecasting business is, neither of these men actually said what has been attributed to them.

People argue about how much longer Moore’s Law will continue.  Well, we know the brain processes information at the exaflop scale – that’s a 1 with 18 noughts after it – and we will soon have exaflop-scale computing.  The Square Kilometre Array telescope system, with dishes in Australia and South Africa, and processing at Jodrell Bank near Manchester, will operate at that scale.  According to one report it will process half as much data when it comes online as the entire internet does today.  That’s a nice illustration of the power of exponentials.
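As a rough sketch of what the 18-month doubling implies, consider how many doublings separate a petaflop machine from the exaflop scale. The petaflop starting point is my assumption for illustration, not a figure from the talk:

```python
# Illustrative arithmetic: doublings needed to go from a hypothetical
# petaflop machine (1e15 ops/sec) to the exaflop scale (1e18 ops/sec)
# the talk associates with the brain. The 18-month doubling period is
# the talk's Moore's Law figure.
import math

petaflop = 1e15           # assumed starting point
exaflop = 1e18            # target scale from the talk
doubling_months = 18      # Moore's Law, as stated in the talk

doublings = math.log2(exaflop / petaflop)
years = doublings * doubling_months / 12

print(f"doublings needed: {doublings:.1f}")                  # ~10
print(f"years at one doubling per 18 months: {years:.1f}")   # ~15
```

A thousandfold gap is only about ten doublings – roughly fifteen years at the historical rate – which is why exponential trends make distant-sounding milestones arrive surprisingly soon.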

Quantum computing has long been seen as a potential saviour for Moore’s Law.  There are some important caveats, but it’s looking increasingly likely that D-Wave is manufacturing genuine quantum computers.

Ray Kurzweil has been saying for a long time now that 2029 is the due date for AGI, with uploading coming along 16 years later.  A poll by Oxford’s Future of Humanity Institute found that half the people working in the field think it will arrive before 2050.  We should have observable progress by 2025, and a lot more people will have woken up to the likelihood of it happening soon.  That increase in awareness will itself be very important, as I’ll argue shortly.

3. Will AGI lead to super-intelligence?

Almost certainly, yes.  An AGI can be expanded, speeded up and improved in ways that a human brain cannot.  What we don’t know is whether it will be fast or slow.  The novelist and computer scientist Ramez Naam has recently argued that it will be slow, taking years rather than days or weeks.  But he doesn’t really know, and neither does anyone else.

4. If super-intelligence arrives, will we like it?

The impact will certainly be dramatic.  The arrival of super-intelligence will determine the future of humanity.  The range of possible outcomes is extreme, ranging from the sublime:

• We upload and become godlike
• They help us, solving all our material and psychological problems

…to the disappointing:

• They leave
• We become pets

… to the pretty grim:

• We become farm animals
• We become zoo animals

… to the really bad:

• We become slaves
• We wilt

… to the absolutely horrifying:

• Humane extermination
• Brutal extermination
• Eternal torture

It amazes me how many people claim to know what the outcome will be.  Some argue that a super-intelligence will be more civilised than us, and therefore benign.  But that is mere supposition.  Maybe a super-intelligence will think that we are a potential threat and decide to get its retaliation in first with a devastating pre-emptive strike.  Or maybe, in Eliezer Yudkowsky’s classic phrase, it will neither like us nor dislike us, but will simply have better uses for the atoms that we are made of.

Other people argue that a super-intelligence will necessarily be damaging to us unless we take steps to prevent that.  This too is mere supposition.

The truth is, we simply do not know what will happen, however brilliant the advocates of particular outcomes may be.

One thing we do know is that we cannot stop the march towards super-intelligence.  The advantage of owning one is simply too great for any business, any government, and above all, any army.  Relinquishment will not work.

It is therefore amazing how few people are thinking seriously about these risks.  One of the leading organisations engaged in the task, the Machine Intelligence Research Institute, is located exactly where you would expect to find it, in Northern California.  Surprisingly, the two other leading ones are in the UK – one at Oxford and the other at Cambridge.

Almost by definition, we can neither predict nor pre-determine the goals of a super-intelligence.  The whole point of Asimov’s stories was that his Three Laws wouldn’t work.  How could we hope to programme a system of ethical rules into an AGI when we’re no closer to agreement on ethics than were the ancient Greeks?

Personally I think our best bet may be an Oracle AI, or an AI in a box – meaning an AI which is unable to influence the outside world directly.  Some people think this is impossible, but there are some talented groups working on it.

5. Can we upload our minds into computers?

The most optimistic scenario is that we upload our minds into computers, merge with the super-intelligence, and zoom off to explore the universe together, like gods.  Well, let’s hope so!

But uploading isn’t just an optimistic scenario.  It may be the only way that we humans can enjoy the full benefits of super-intelligence.  Unless we merge with it, we are simply creating our successors and then probably fading into oblivion – or worse.

Uploading certainly won’t be easy.  Reproducing an existing mind without destroying it will require advanced nanotechnology for starters.

But is uploading even possible – philosophically, never mind technologically?  Some people argue that uploading doesn’t preserve you as a person – it merely copies you.  In which case it would be better to call it sideloading.  Maybe beaming Captain Kirk up is actually killing him here and giving life to a copy of him over there.

This debate actually goes back to the ancient Greeks, who called it the Ship of Theseus problem.  For my money the debate is more-or-less settled by the thought experiment where your carbon brain is gradually replaced, neuron by neuron, with silicon.  The resulting entity would certainly insist it was you, and it is impossible to say at what point during the process it would have stopped being you.

But this leaves us with some paradoxes.  If a mind is successfully uploaded without destroying the original brain you end up with two people, both claiming to be the same person.  Which one gets the kids, the house, and the record collection?  There are serious questions here, but I think there are satisfactory answers.

To sum up so far, we don’t know whether we can create an artificial general intelligence and then a super-intelligence.  The science is not settled.  But they do seem plausible developments, which may arrive relatively soon.  What we can say is that if it is possible, super-intelligence is very, very risky.

6. Can we de-risk the arrival of super-intelligence?

Let’s review what could go wrong.  If it turns out that AGI is impossible, or doesn’t lead to super-intelligence, well there will be a lot of disappointed geeks – including me, despite my concerns about super-intelligence – but the human race will carry on.  The other two scenarios are the ones we really need to avoid.  We don’t want the super-intelligence to harm us either deliberately or accidentally.  And if the only way for us humans to fully enjoy the benefits of super-intelligence is to merge with it, it would also be tragic if uploading turns out to be impossible.  Or if it takes so long to become affordable that a whole generation dies while waiting.  How awful to be one of the last mortal humans, when you know that indefinite life lies just beyond your reach!

So I believe that we need to launch two new Apollo Projects.  We already have one in effect that is trying to create an AGI and then a super-intelligence.  The first additional one is obvious: we need to work on making the arrival of the super-intelligence beneficial for humans.  This is known as Friendly AI, or FAI.

The second Apollo Project is to abolish death, or rather to make it optional.  The gap between the creation of the first AGI and the development of uploading is likely to be years.  During those years, millions of humans will die – 150,000 every single day, just like today.  To bridge that gap we must achieve longevity escape velocity and develop brain preservation techniques (cryonics or brain plastination or both).  We should be doing this anyway, but we should especially be doing it considering that uploading may be – in historical terms – just around the corner.
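The scale of that gap is simple arithmetic. A sketch using the talk’s 150,000-deaths-per-day figure; the length of the gap between AGI and affordable uploading is purely hypothetical here:

```python
# The 150,000 deaths per day is the talk's figure; the 15-year gap
# between AGI and affordable uploading is an assumption for illustration.

deaths_per_day = 150_000
hypothetical_gap_years = 15   # assumed, for illustration only

deaths_per_year = deaths_per_day * 365
deaths_during_gap = deaths_per_year * hypothetical_gap_years

print(f"deaths per year: {deaths_per_year:,}")                 # ~54.8 million
print(f"deaths during a {hypothetical_gap_years}-year gap: {deaths_during_gap:,}")
```

Even a modest gap of this kind would mean hundreds of millions of deaths, which is the force of the “last mortal generation” point above.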

These are expensive projects.  Some of the money will come from very rich individuals and businesses.  Google’s Calico and Craig Venter’s Human Longevity Institute are exciting ventures.  But most of it must come from governments.  Governments will only provide it if people demand it.  People will only demand it if they understand it.  They will only understand it if they talk about it.

So we need to get people talking about super-intelligence.  Not as some kind of rapture for nerds, but as a future which is distinctly possible, but highly uncertain, and full of risk.  A future that we need as many smart people as possible to be thinking about and working on.  Super-intelligence may well be the future of humanity – and it is a future we need to prepare for.

17 thoughts on “Talks and Interviews”

  1. Nice overview. One topic that I am currently trying to get my mind around is how we move beyond the current mode of predicting the future based on individual opinions, towards model-based predictions. I cannot find any decent model beyond purely econometric models, and maybe the model used for Limits to Growth by Meadows.
    Discussions and predictions currently focus very much on what is technically feasible. But that is not sufficient for a technology to break through. The development must get funded, there must be some economies of scale, and the technological use-cases must be cheaper than non-technological options.
    So a crucial question needs to be: why will this technological breakthrough happen? Fascinating stuff that requires a lot of work and thinking.

    • Thanks Folkert. I don’t know of any reliable way of forecasting technological developments, let alone economic and social ones. If you find one, you will probably become exceedingly wealthy!
      The arrival of AI is significantly different from most technology developments, however, since it does not need to be adopted by consumers, nor integrated into any existing economic or social infrastructure. Vast resources are now being applied to it, so the likelihood is that if it is technically possible, it will happen. And probably sooner than most people think.

    • AI breakthroughs will be very difficult to quantify until something very revolutionary happens. For example, we’ve come out the other side of the last AI winter over the last decade, and a lot of people haven’t really caught on. Nowadays, we use AI everywhere. It’s just the AI isn’t so obvious – but it’s still very pervasive.

      The AI breakthrough will come when one company produces a general AI which is immediately capable not only of bettering itself, but of applying its own brand of lightning-fast iterative improvements to existing products.

      As Calum says, AI doesn’t need to be adopted in people’s homes to make a huge change to life everywhere. The question really is, how capable will it be when it first gets here, how long will it be before it improves enough to be truly useful, and how long after that until the company or companies that have such a powerful AI unfetter it to better their competitors or each other, and subsequently what will happen afterwards.

      Sure, it’s really pie in the sky stuff at the moment, but the predictions for computational growth put the capability for human-level intellect emerging well before the end of the century, and at that point, having it happen through sheer, bloody-minded brute force seems… likely.

  2. I believe that AGI development is the new arms race, and as such there will be no restraint from the various countries involved in the attempt to build a mind that is capable of beating other minds at international ‘chess’. International openness and collaboration will be required between the US, Europe, China and other countries to ensure that what we create doesn’t dominate all of us in the attempt to get the upper hand over one another.
    Whatever we as a species develop in this regard, as parents we need to stay at least as smart as our children until we have instilled human values, or we won’t like the solutions our children come up with.

    • Thanks Murray. I heard Ben Goertzel talking about this a while ago, and he thinks the Chinese are way behind on AGI at present, but that they may end up taking radically different approaches which might lead to unexpected breakthroughs. The EU’s main play seems to be Henry Markram’s project, which, bizarrely, is in non-EU Switzerland!

      As regards the parent analogy, if and when someone does create an AGI, the consequent intelligence explosion will probably mean that we have no chance of staying as smart as our children – not even in the same ballpark.

  3. Thanks calumchace.
    What I meant with the parent analogy is that we will have to upload, making ourselves smarter than any non-human AGI we parent if we are to improve the probability for a positive outcome for mankind from it. Of course this may make the AGI irrelevant for our purpose.

    • Ah, I see. And I agree completely. The best outcome for humans is surely to merge with our AI “children”. The trouble is, it will probably be possible to create an AGI quite a few years before it is possible to upload a living human. The interregnum could be a very dangerous phase. Hence the suggestion of an oracle AI, tasked with developing brain preservation and uploading technologies.

  4. In the not so far future, we might be able to create super-intelligent strong AI, but I see it as an unnecessary extrapolation to assume that this AI will have anything to do with… let’s call it – DNA-based intelligence. 🙂
    It might have a sentience and processing power exceeding ours in orders of magnitude, but it might be incapable of everything a human can do.
    And I am not talking about the truly marvellous feats a human “meat processor” can accomplish, which lie beyond quantitative intelligence. My point is that we cannot, yet, explain the basic driving force behind life itself. While we might soon answer the “How Life?” question, we might be very far from answering the “Why Life?” one. And the latter might be as technical as it is spiritual.
    If we hope to achieve Strong AI by biomimicry – an emulation of the human mind on silicon – how far must we go into this emulation? Is the brain simply a classical system – is modelling the neural network sufficient? If so, what I/Os will this modelled network have? Our neural networks are built in a quite sophisticated nine-month process, essentially repeating 3.5 billion years of evolution, to form the basis of what we then develop and train for years to become an adult human being.

    • Hi Boris, thanks for your comment. If I understand you correctly, you are making two points.

      One is to ask whether whole brain emulation (WBE) can work. It’s a good question – perhaps the biggest question of our times. It might be that WBE requires scanning and modelling down to such fine-grained levels (sub-atomic particles? Quantum effects within microtubules, as Hameroff and Penrose suggest?) that it is impossible with any technology that will be available for hundreds of years. It might also be the case that you have to track the behaviour of every active component of the brain (neuron, molecule, microtubule…) for several hours to capture the necessary information.

      I guess we’ll only know the answer to this when someone scans a brain to the neuronal level. Most researchers in the field seem to think that the neuron level should suffice.

      Your other point, I think, is that we don’t know what an AGI would be capable of, or would want – if anything. I agree. And that makes the enterprise a very risky one, as well as very exciting.

      • Have you read David Zindell’s trilogy “Requiem for Homo Sapiens”? It explores in much detail questions related to consciousness, artificial intelligence and spirituality. It’s a very good read. 🙂

        • No, I hadn’t come across it. I generally prefer hard sci-fi to fantasy, but one reviewer on Wiki says it contains “the most striking writing, vivid spectacles, memorable characters and insightful presentations of philosophy and religion seen in SF for many a year”, so I’ll put it on my list. Thanks!

  5. Well, you overlooked the possibility that Nanotechnology or Biotechnology will also advance to a point where we can make our physical bodies perpetually young or immortal, which would kind of make uploading our minds into an advanced computer system unnecessary. Also, if you did upload your mind like that, a duplication of an original is NOT the original; you could still die, and it would still only be a replicated version of your mind, not your totality of a person.

  6. You’re right: I didn’t address that specific possibility. But I’m not sure it’s a viable future long-term. Perpetuating our existing bodies and minds would leave us enormously and increasingly outclassed as our AGI successors expanded their capabilities in a way that we could not. I doubt we would thrive under those circumstances. I still think our best option is to merge with them.

    I did address the question of whether uploading is preservation or just duplication, using the incremental upload example. Much more could be said about that, of course, but I think that an upload of me which had all my memories and ways of thinking, and also had a solid causal connection to my past would have a good claim to be me. Even if it wasn’t the only me.

  7. A very interesting article… but let’s see: the biology is one thing – mapping the brain, knowing all the functions of the brain’s regions, the cells, the neurons, the dendrites, the axons, and so on. We can detect the areas where memories are stored, and the areas for smell, sight, taste, skills, language, balance, memorisation, etc. But then what? Will machines be able to emulate the brain’s behaviour when it comes to the intangible parts: thoughts, emotions, memories, idiosyncrasies, the unique and distinct personality of each human being, to name a few? And what of an even more complex human attribute, faith… will they be able to emulate it once we know how the biological part of the brain works?

    How could the machine be fed so that it possesses and makes use of all this?

    When we manage to give a machine the capacity to think for itself, without needing to program it, and it becomes capable of making the creative leap, we will be on the threshold of that feared future which your article explains so well.

    Could it be that AI has already given all it had to give, and that what is missing, in order to achieve that longed-for “machine”, is to look at it from another angle?
    Warm regards to you all.

  8. Well… it’s another way of showing that AI goes wrong when it tries to translate literally… let me try to explain it better…
