In the previous post, I mentioned that there are, broadly speaking, two main ways for us to build a human-level, conscious AI. One is to assemble the most advanced AI systems available and have them learn; the other is to build as precise a model of a human brain as we can manage, and see whether it appears to exhibit consciousness.
Henry Markram is attempting exactly the latter at the Human Brain Project in Lausanne, funded to the tune of €1bn by the EU and others. But in an interview on the Singularity 1 on 1 website, AI veteran Marvin Minsky declares that this project has a 98% chance of failure, because it is intent on mapping neuronal connections rather than developing a theory of mind that explains how brains make minds.
What’s more, Minsky worries that if this huge project fails despite its enormous resources, there will follow another “AI winter”, in which disappointment in AI research drains all resources away from the field for many years. (That said, Minsky claims the field is already short of resources for basic research as opposed to development projects.)
Who is right? Will modelling a human brain (down to the neuronal, molecular, or even atomic level) result in a working mind? Or must we first understand exactly what each neuron does to every other neuron, and why? The answer to this question may well determine whether we see the first conscious machine within the lifetimes of people alive today.