Researchers claim to have modelled one percent of a human brain, taking 40 minutes to replicate one second of brain activity. If this is true, it is startling, and it should be making much bigger headlines than it is.
You might wonder why, given that it was only 1% of a brain, and it took so long to model just one second. But that would be to ignore the power of exponential increase, as described by Moore’s Law. As a comment on Reddit pointed out, applying Moore’s Law generates this forecast:
- Jan 2015 – 20 minutes for 1 second of 1% neurons
- Jun 2016 – 13.3 minutes for 1 second of 1.5% neurons
- Jan 2018 – 8.86 minutes for 1 second of 3% neurons
- Jun 2019 – 5.9 minutes for 1 second of 4.5% neurons
- Jan 2021 – 3.93 minutes for 1 second of 6.75% neurons
- Jun 2022 – 2.62 minutes for 1 second of 10.125% neurons
- Jan 2024 – 1.74 minutes for 1 second of 15.19% neurons
- Jun 2025 – 1.15 minutes for 1 second of 22.78% neurons
- Jan 2027 – 0.77 minutes for 1 second of 34.17% neurons
- Jun 2028 – 0.51 minutes for 1 second of 51.26% neurons
- Jan 2030 – 0.34 minutes for 1 second of 76.88% neurons
- Jun 2031 – 0.23 minutes for 1 second of 115.3% neurons
- Jan 2033 – 0.115 minutes for 1 second of 115.3% neurons
- Jun 2034 – 0.057 minutes for 1 second of 115.3% neurons
- Jan 2036 – 0.029 minutes for 1 second of 115.3% neurons
- Jun 2037 – 0.0145 minutes for 1 second of 115.3% neurons
- Jan 2039 – 0.007 minutes for 1 second of 115.3% neurons
So in just 25 years there could be a computer able to think faster than you can.
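The 25-year figure can be roughly sanity-checked with a few lines of Python, using only the numbers quoted above and an idealised Moore’s Law (compute doubling every 18 months — the simplification the forecast relies on, so treat this as a back-of-the-envelope sketch rather than a prediction):

```python
import math

# Starting point from the experiment: 40 minutes of compute
# for 1 second of activity in 1% of the brain's neurons.
minutes_per_second = 40
fraction_of_brain = 0.01

# Speed-up needed for real-time, whole-brain simulation:
# 40 min = 2,400 s of compute per simulated second, for 1/100 of the brain.
speedup_needed = minutes_per_second * 60 / fraction_of_brain  # 240,000x

# Idealised Moore's Law: compute doubles every 18 months.
doublings = math.ceil(math.log2(speedup_needed))  # 18 doublings
years = doublings * 1.5                           # 27 years

print(f"{speedup_needed:,.0f}x speed-up = {doublings} doublings = {years} years")
```

A clean 18-month doubling lands at roughly 27 years; the Reddit table arrives a couple of years sooner because of the rounding in its steps, but the two agree on the order of magnitude.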
The researchers, from the Japanese research group RIKEN and the German Forschungszentrum Jülich centre, brought some big guns to bear on the problem. They used what was then the world’s fourth-fastest supercomputer, the K computer (shown in the photo), with 705,024 processor cores and 1.4 million GB (1.4 petabytes) of RAM. So it’s a bit bigger than your laptop.
Now there are caveats. First, this relies on a simplification of Moore’s Law. Frankly, that’s a quibble. Second, Moore’s Law might not continue that long. There are people today predicting that it will expire shortly, but such predictions have been made regularly for the last 25 years, and although we cannot cram many more transistors onto a flat silicon chip, there are plenty of alternative approaches which could take it forward.
Third, can the researchers be sure that they actually modelled what happens inside a brain? As far as I know, no-one has yet tracked all the activity that goes on within a whole 1% of a brain for a whole second.
Fourth, there may be second-order complications (and third-order, fourth-order, and so on) which arise when you try to model how the 1% of the brain’s neurons represented today interact with the other 99%.
These are important questions, but it seems to me that we should take seriously the claim made by Markus Diesmann, one of the lead researchers, that “simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exascale computers – hopefully available within the next decade.”
“Hopefully” is a very interesting word to use in that context.