I am not surprised that the person making this prediction is Ray Kurzweil. It seems like he may be the last person stumping for the singularity. As a step along the way, he claims that we'll have successfully reverse engineered the human brain and run it on a computer by 2030. Appropriately enough, this latest prediction was delivered at the Singularity Summit.
His back-of-the-envelope figuring is a bit suspect, if you ask me.
Here’s how that math works, Kurzweil explains: the design of the brain is in the genome. The human genome has three billion base pairs, or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying lossless compression, that information can be compressed into about 50 million bytes, according to Kurzweil.
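The arithmetic itself checks out, as far as it goes. Here's a quick sketch in Python, using the standard 2-bits-per-base-pair encoding; the 50-million-byte figure is Kurzweil's own estimate, not something I can verify:

```python
# Sanity-checking Kurzweil's genome arithmetic. The 2-bits-per-base-pair
# encoding (A/C/G/T) is standard; the compressed figure is his claim.
base_pairs = 3_000_000_000       # human genome, roughly
bits = base_pairs * 2            # 2 bits per base pair
raw_bytes = bits / 8             # 750 million bytes, his "about 800"
compressed_bytes = 50_000_000    # Kurzweil's post-compression estimate

print(f"raw: {raw_bytes / 1e6:.0f} MB, compressed: {compressed_bytes / 1e6:.0f} MB")
print(f"implied compression ratio: {raw_bytes / compressed_bytes:.0f}:1")
# raw: 750 MB, compressed: 50 MB -> a 15:1 ratio over the packed encoding
```

Note that's a 15:1 ratio on top of an already dense two-bit encoding, which is itself a strong claim about how much of the genome is redundant.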
This reasoning demands a computer with a capacity of 36.9 petaflops and 3.2 petabytes of memory. He arrives at his date by drawing a line from the beefiest computers of today out to this notional system. If raw computing power were the sole determinant, I'd be inclined to agree.
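His timeline is just that straight line made literal. Here's a toy version of the extrapolation, with the starting capacity and doubling time as my own loose assumptions (a roughly 2-petaflop top machine circa 2010, Moore's-law-style doubling every year and a half), not his exact inputs:

```python
import math

# Toy extrapolation: assume peak supercomputer performance doubles on a
# fixed schedule and ask when it crosses Kurzweil's target. The starting
# point and doubling time are assumptions, not his published inputs.
current_pflops = 2.0     # ~top machine circa 2010 (assumption)
target_pflops = 36.9     # Kurzweil's brain-simulation requirement
doubling_years = 1.5     # Moore's-law-ish doubling time (assumption)

doublings = math.log2(target_pflops / current_pflops)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings -> ~{years:.0f} years, around {2010 + years:.0f}")
# 4.2 doublings -> ~6 years, around 2016
```

On those assumptions the hardware arrives well before 2030, which is exactly why the hardware curve is the least interesting part of his argument.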
DNA is a blueprint for developing a brain, not for “running” one or even building a fully realized one. Brains are the product of biological development, bootstrapped through stages that I believe are necessary to the end product. More than that, I suspect that a fully functioning human brain can only develop if it is embedded in an environment throughout that maturation.
A more realistic use of Kurzweil’s DNA figures is to extrapolate further: what would it take to simulate a full human at least through pre-natal development? If that were even possible, then you could tackle the training and knowledge acquisition challenges, which are his sole, thin concession to skeptics.
I get far more excited by modest advances with bottom-up approaches, like the Avida article I spoke about on the last podcast and the swarm intelligence article I wrote up yesterday. I think that bottom-up systems stand a better chance of capturing the emergent complexity of a developing mind. Better yet, reducing such a distributed, complex system to its barest requirements is more likely to be fruitful than blindly compressing or stripping parts out of an ignorant simulation, even assuming one were possible. Many studies have explored the minimal complexity needed for emergent behaviors to evolve. Typically such systems can be winnowed down quantitatively with little impact on the qualitative end result.
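To make “minimal complexity, emergent behavior” concrete, here's a toy sketch in the spirit of those swarm systems (my own illustration, not Avida or any published model): agents that see only nearby neighbors and follow two local rules end up moving as a coordinated group with no central controller. Knock either rule out and the coordination degrades, which is the kind of quantitative winnowing I mean.

```python
import math
import random

# Toy bottom-up system: each agent sees only neighbors within RADIUS and
# follows two local rules; coordinated motion emerges without any
# central controller.
N, STEPS, RADIUS = 30, 200, 10.0
random.seed(1)

xs = [random.uniform(0, 50) for _ in range(N)]
ys = [random.uniform(0, 50) for _ in range(N)]
vxs = [random.uniform(-1, 1) for _ in range(N)]
vys = [random.uniform(-1, 1) for _ in range(N)]

for _ in range(STEPS):
    new_v = []
    for i in range(N):
        near = [j for j in range(N) if j != i
                and (xs[i] - xs[j])**2 + (ys[i] - ys[j])**2 < RADIUS**2]
        vx, vy = vxs[i], vys[i]
        if near:
            # Rule 1 (alignment): steer toward neighbors' average heading.
            vx += 0.05 * (sum(vxs[j] for j in near) / len(near) - vx)
            vy += 0.05 * (sum(vys[j] for j in near) / len(near) - vy)
            # Rule 2 (cohesion): drift toward neighbors' center of mass.
            vx += 0.01 * (sum(xs[j] for j in near) / len(near) - xs[i])
            vy += 0.01 * (sum(ys[j] for j in near) / len(near) - ys[i])
        new_v.append((vx, vy))
    for i, (vx, vy) in enumerate(new_v):
        vxs[i], vys[i] = vx, vy
        xs[i] += vx
        ys[i] += vy

# Vicsek-style order parameter: 1.0 means all headings perfectly aligned.
sx, sy = sum(vxs), sum(vys)
speed = sum(math.hypot(vx, vy) for vx, vy in zip(vxs, vys))
print(f"alignment: {math.hypot(sx, sy) / speed:.2f}")
```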
In case that doesn’t convince you, PZ Myers has an excellent critique of the same talk over at Pharyngula. He expands on those intermediate complexities, at least the ones we have some inkling are necessary, in the form of proteomics.