Talks, Debates and Interviews



A 30-second video


A 30-second animation introducing what I talk and write about.


A one-minute video


This excerpt from a video of an event hosted at London’s Hospital Club by the nice people at Creative Capital is a very quick introduction to what I talk and write about.

And here’s another short video, this time made to accompany a PwC event in Paris:

Calum headshot from PwC Paris, June 2017


A recent talk


A recent talk and discussion in Barcelona, at an event sponsored by Telefonica and LaVanguardia.



Whither humanity?  Discussion with Gerd Leonhard


Gerd and I disagree on a lot.  He fears that we may lose our humanity as our machines become more powerful, while I argue that if we are smart, our technology can improve our lives immeasurably, even if we change radically in the process.  The two things we agree on are that these are vital issues, and that we will only resolve them through honest but considerate discussion.

As Gerd explains here, you can engage us to discuss these issues at your next event.


Discussing AI with George Osborne


One of the many worrying aspects of the Brexit referendum in the UK and the Trumpularity in the US is that most politicians are not yet talking about the challenges posed by the coming impact of powerful artificial intelligence.  This needs to change.

A conversation I had in November 2016 with George Osborne (until recently the UK’s Chancellor of the Exchequer) gave modest grounds for hope.

The video (16 minutes) contains excerpts from a recent panel discussion called “Ask Me Anything About the Future”.  Hosted by Bloomberg, it was organised by Force Over Mass, an early-stage investment fund manager.  It was very ably chaired by David Wood, who runs the London Futurists meetup group.


Previous talks and interviews


Annual investors meeting


Beringea is one of the UK’s leading private equity companies, investing in high-growth businesses.  They invited me to address their annual investors’ meeting.


A meeting of chartered accountants


This is a talk I gave in July 2016 at an event hosted jointly by PwC and the ICAEW, the Institute of Chartered Accountants of England and Wales.




ThinkNation is a brilliant initiative. The brainchild of Lizzie Hodgson, it is “where young people, artists and thought leaders tackle how technology is impacting everyday life and shaping our futures.” Given the profound importance of the changes sweeping through our lives in the coming years and decades, there aren’t many more important subjects to address.

So I was very pleased to deliver this talk (11 minutes) to an impressive group of 14-18 year-olds at a ThinkNation conference in December 2015.  It was followed by this discussion (20 minutes).


Fundacion BankInter


Fundacion BankInter is a leading global think tank based in Madrid.  In 2015 it investigated the idea that machine intelligence may lead to technological unemployment.  A workshop in June led to a report which was published in November, and the Fundacion asked me and Juan Francisco Blanes, a roboticist, to give talks at the launch.

With splendid irony, my computer crashed during the presentation, so fans of schadenfreude will particularly enjoy the section at 23 minutes 37 seconds.  Fortunately, the Fundacion staff came to the rescue with great efficiency and aplomb, and the talk re-starts at 28 minutes 27 seconds.


AI at the IMAX


CIPA, the Chartered Institute of Patent Attorneys, organised a debate on the motion “This House believes that within 25 years, a patent will be filed and approved without human intervention.”  Together with Chrissie Lightfoot, author of The Naked Lawyer *, I spoke in favour of the motion.  Although the audience was mostly patent attorneys, the motion passed, 80 votes to 60.

The event was ably chaired by Tom Clarke, Channel 4’s Science Editor, and our valiant but trounced opponents were Nigel Hanley and Ilya Kazi.

A YouTube clip of edited highlights is here, and this is a review of the event by James Nurton, the UK’s go-to man for IP.


* Essential reading for lawyers!


The launch of “Surviving AI”


Calum and Kenn

Compered by Kenn Cukier of The Economist and chaired by David Wood of London Futurists, September 2015


Robot Overlordz podcast


Discussion in October 2015 with podcasters Mike and Matt, hosts of The Robot Overlordz


Review the Future podcast


Discussion in September 2015 with podcaster Jon Perry, host of Review the Future



Panel discussion in June 2015 with Ben Medlock, co-founder of Swiftkey, moderated by Sally Davies of the FT.




Singularity 1 on 1

Discussion with Nikola Danaylov, the creator of Singularity Weblog, in April 2015.




17 thoughts on “Talks, Debates and Interviews”

  1. Nice overview. One topic that I am currently trying to get my mind around is: how do we move beyond the current mode of predicting the future based on individual opinions, to model-based predictions? I cannot find any decent model beyond purely econometric models, and perhaps the one used for Limits to Growth by Meadows.
    Discussions and predictions currently focus very much on what is technically feasible. But that is not sufficient for a technology to break through. The development must get funded, there must be some economies of scale, and the technological use-cases must be cheaper than non-technological options.
    So a crucial question needs to be: why will this technological breakthrough happen? Fascinating stuff that requires a lot of work and thinking.

    • Thanks Folkert. I don’t know of any reliable way of forecasting technological developments, let alone economic and social ones. If you find one, you will probably become exceedingly wealthy!
      The arrival of AI is significantly different from most technology developments, however, since it does not need to be adopted by consumers, nor integrated into any existing economic or social infrastructure. Vast resources are now being applied to it, so the likelihood is that if it is technically possible, it will happen. And probably sooner than most people think.

    • AI breakthroughs will be very difficult to quantify until something very revolutionary happens. For example, we’ve come out the other side of the last AI winter over the past decade, and a lot of people haven’t really caught on. Nowadays we use AI everywhere. It’s just that the AI isn’t so obvious – but it’s still very pervasive.

      The AI breakthrough will come when one company produces a general AI which is immediately capable not only of bettering itself, but of applying its own brand of lightning-fast iterative improvements to existing products.

      As Calum says, AI doesn’t need to be adopted in people’s homes to make a huge change to life everywhere. The real questions are: how capable will it be when it first arrives, how long will it take to improve enough to be truly useful, how long after that until the company or companies with such a powerful AI unfetter it to best their competitors or each other, and what will happen afterwards?

      Sure, it’s really pie in the sky stuff at the moment, but the predictions for computational growth put the capability for human-level intellect emerging well before the end of the century, and at that point, having it happen through sheer, bloody-minded brute force seems… likely.

  2. I believe that AGI development is the new arms race, and as such there will be no restraint from the various countries involved in the attempt to build a mind that is capable of beating other minds at international ‘chess’. International openness and collaboration will be required between the US, Europe, China and other countries to ensure that what we create doesn’t dominate all of us in the attempt to get the upper hand over one another.
    Whatever we as a species develop in this regard, as parents we need to stay at least as smart as our children until we have installed human values, or we won’t like the solutions our children come up with.

    • Thanks Murray. I heard Ben Goertzel talking about this a while ago, and he thinks the Chinese are way behind on AGI at present, but that they may end up taking radically different approaches which might lead to unexpected breakthroughs. The EU’s main play seems to be Henry Markram’s project, which, bizarrely, is in non-EU Switzerland!

      As regards the parent analogy, if and when someone does create an AGI, the consequent intelligence explosion will probably mean that we have no chance of staying as smart as our children – not even in the same ballpark.

  3. Thanks calumchace.
    What I meant by the parent analogy is that we will have to upload, making ourselves smarter than any non-human AGI we parent, if we are to improve the probability of a positive outcome for mankind. Of course this may make the AGI irrelevant for our purpose.

    • Ah, I see. And I agree completely. The best outcome for humans is surely to merge with our AI “children”. The trouble is, it will probably be possible to create an AGI quite a few years before it is possible to upload a living human. The interregnum could be a very dangerous phase. Hence the suggestion of an oracle AI, tasked with developing brain preservation and uploading technologies.

  4. In the not so distant future we might be able to create super-intelligent strong AI, but I see it as an unnecessary extrapolation to assume that this AI will have anything to do with… let’s call it DNA-based intelligence. 🙂
    It might have a sentience and processing power exceeding ours by orders of magnitude, yet be incapable of everything a human can do.
    And I am not talking about the truly marvellous feats a human “meat processor” can accomplish, which lie beyond quantitative intelligence. My point is that we cannot yet explain the basic driving force behind life itself. While we might soon answer the “How life?” question, we might be very far from answering the “Why life?” one. And the latter might be as technical as it is spiritual.
    If we hope to achieve strong AI by biomimicry – an emulation of the human mind in silicon – how far must we go with this emulation? Is the brain simply a classical system – is modelling the neural network sufficient? If so, what I/Os will this modelled network have? Our neural networks are built in a quite sophisticated nine-month process, essentially repeating 3.5 billion years of evolution, to form the basis of what we then develop and train for years to become an adult human being.

    • Hi Boris, thanks for your comment. If I understand you correctly, you are making two points.

      One is to ask whether whole brain emulation (WBE) can work. It’s a good question – perhaps the biggest question of our times. It might be that WBE requires scanning and modelling down to such fine-grained levels (sub-atomic particles? Quantum effects within microtubules, as Hameroff and Penrose suggest?) that it is impossible with any technology that will be available for hundreds of years. It might also be the case that you have to track the behaviour of every active component of the brain (neuron, molecule, microtubule…) for several hours to capture the necessary information.

      I guess we’ll only know the answer to this when someone scans a brain to the neuronal level. Most researchers in the field seem to think that the neuron level should suffice.

      Your other point, I think, is that we don’t know what an AGI would be capable of, or would want – if anything. I agree. And that makes the enterprise a very risky one, as well as very exciting.

      • Have you read David Zindell’s trilogy “Requiem for Homo Sapiens”? It explores in much detail questions related to consciousness, artificial intelligence and spirituality. It’s a very good read. 🙂

        • No, I hadn’t come across it. I generally prefer hard sci-fi to fantasy, but one reviewer on Wiki says it contains “the most striking writing, vivid spectacles, memorable characters and insightful presentations of philosophy and religion seen in SF for many a year”, so I’ll put it on my list. Thanks!

  5. Well, you overlooked the possibility that Nanotechnology or Biotechnology will also advance to a point where we can make our physical bodies perpetually young or immortal, which would kind of make uploading our minds into an advanced computer system unnecessary. Also, if you did upload your mind like that, a duplication of an original is NOT the original; you could still die, and it would still only be a replicated version of your mind, not your totality of a person.

  6. You’re right: I didn’t address that specific possibility. But I’m not sure it’s a viable future long-term. Perpetuating our existing bodies and minds would leave us enormously and increasingly outclassed as our AGI successors expanded their capabilities in a way that we could not. I doubt we would thrive under those circumstances. I still think our best option is to merge with them.

    I did address the question of whether uploading is preservation or just duplication, using the incremental upload example. Much more could be said about that, of course, but I think that an upload of me which had all my memories and ways of thinking, and also had a solid causal connection to my past would have a good claim to be me. Even if it wasn’t the only me.

  7. A very interesting article… but let’s see: the biological side is one thing – mapping the brain, knowing all the functions of the brain’s regions, the cells, the neurons, the dendrites, the axons, and so on. They can detect the zones where memories are stored, and the areas for smell, sight, taste, skills, language, balance, memorisation, etc. But what then? Will machines be able to emulate the brain’s behaviour when it comes to the intangible part: thoughts, emotions, memories, idiosyncrasies, the unique and distinct personality of each human being, to mention just a few? And another, more complex human attribute: faith… will they be able to emulate it once we know how the biological part of the brain works?

    How could the machine be fed so that it possesses and makes use of all this?

    When we manage to give a machine the capacity to think for itself, without needing to program it, and it becomes capable of making the creative leap, we will be on the threshold of that feared future which your article explains so well.

    Could it be that AI has already given all it had to give, and that what remains, to achieve that longed-for “machine”, is to look at it from another angle?
    Warm regards to you.

  8. Well… it’s another way to prove that the AI is wrong to try to translate literally… I’ll try to explain it better…
