

Naive Prediction on Simulating a Human Brain

I am not surprised that the person making this prediction[1] is Ray Kurzweil. It seems like he may be the last person stumping for the singularity. As a step along the way, he is claiming that we'll have successfully reverse engineered the human brain and run it on a computer by 2030. Appropriately enough, this latest prediction was delivered at the Singularity Summit.

His back-of-the-envelope figuring is a bit suspect if you ask me.

Here’s how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

By this reasoning, the simulation demands a computer with a capacity of 36.9 petaflops and a memory capacity of 3.2 petabytes. He arrives at his date by drawing a line from the beefiest computers of today out towards this notional system. If raw computing power were the sole determinant, I'd be inclined to agree.
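
For what it's worth, the arithmetic itself checks out. Here is a quick sketch of my own (not Kurzweil's code; it assumes the usual 2 bits per base pair):

    base_pairs = 3_000_000_000       # the human genome, roughly
    bits = base_pairs * 2            # 2 bits distinguish the 4 bases
    raw_bytes = bits // 8            # 750 million, his "about 800 million bytes"
    compressed_bytes = 50_000_000    # his claimed size after lossless compression

    print(f"raw genome: {raw_bytes / 1e6:.0f} MB")
    print(f"compressed: {compressed_bytes / 1e6:.0f} MB")
    print(f"implied ratio: {raw_bytes / compressed_bytes:.0f}:1")

The numbers are internally consistent. My quarrel is with what they are taken to imply.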

DNA is a blueprint for developing a brain, not “running” one or even building a fully realized one. Brains are a product of biological development that bootstraps through stages that I believe are necessary to the end product. More than that, I suspect that for a fully functioning human brain to develop, it must be embedded in an environment throughout that maturation.

A more realistic use of Kurzweil's DNA figures would be to extrapolate what it would take to simulate a full human at least through pre-natal development. If that were even possible, then you could tackle the training and knowledge acquisition challenges, which are his sole, and thin, concession to skeptics.

I get far more excited by modest advances with bottom up approaches like the Avida article I spoke about on the last podcast and the swarm intelligence article I wrote up yesterday. I think that bottom up systems stand a better chance of capturing the emergent complexity of a developing mind. Better yet, reducing such a distributed, complex system to its barest requirements is more likely to be fruitful than blindly compressing or stripping out parts of a simulation we barely understand, even assuming that were possible. Many studies have explored the minimal amount of complexity needed for emergent behaviors to evolve. Typically such systems can be winnowed down quantitatively with little impact on the qualitative end result.
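
For a sense of just how little machinery emergence requires, here is a toy illustration of my own (not drawn from any of those studies): Rule 110, an elementary cellular automaton whose single update rule, looking only at a cell and its two immediate neighbors, is famously capable of rich and even computationally universal behavior.

    # Rule 110: the rule number doubles as an 8-entry lookup table keyed
    # by the 3-cell neighborhood, read as a binary number.
    RULE = 110

    def step(cells):
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4
                          + cells[i] * 2
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 63 + [1]  # start from a single live cell
    for _ in range(30):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

Winnowing in the other direction, from a rich simulation down to its essentials, has no comparably simple recipe, which is rather the point.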

In case that doesn’t convince you, PZ Myers has an excellent critique[2] of the same talk at Pharyngula. He expands on those necessary intermediate complexities, at least the ones we have some inkling are necessary, in the form of proteomics.


1. Reverse-Engineering of Human Brain Likely by 2030, Wired
2. Ray Kurzweil doesn’t understand the brain, Pharyngula (HT Glyn Moody)

Posted in Technology.



6 Responses


  1. Jeremy Wilson says

    “Here’s how that math works, Kurzweil explains: The design of the brain is in the genome.”
    “DNA is a blueprint for developing a brain, not “running” one or even building a fully realized one.”

    Is design not equatable to blueprint in your opinion? Is the reason you disagree with Kurzweil your different use of terminology? I would agree that Kurzweil is wrong if he had said that A.I. can be developed using the human genome, but he said brain, not A.I.

    • Thomas Gideon says

      No, it isn't just terminology. Kurzweil's observation about the role of DNA is overly simplistic: that it can be read like a technical specification and a functioning, thinking brain assembled from it atom by atom.

      Biological development is inherently complex and non-linear. DNA is a bootstrapping network where each stage of the biological systems it governs provides feedback. Read anything that discusses how fetal development progresses. You cannot just start with DNA and skip to the end of an infant organism. Each intermediate step is contingent on the effects in the womb and within the fetus itself wrought by preceding developmental steps. The environment of the womb and the prenatal organism are the media through which DNA and living matter communicate with each other. That communication is critical to how DNA works and life develops; as far as we know, it cannot be omitted to get from DNA to life.
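
      If a programming analogy helps, here is a deliberately crude toy model of my own, with no claim to biological fidelity: when every stage feeds back into the next, the final state is a function of the whole trajectory, not of the starting instructions alone.

          import random

          def develop(genome, stages=10):
              state = genome                           # stage zero: just the instructions
              for _ in range(stages):
                  environment = random.gauss(0, 0.1)   # womb conditions, not in the genome
                  state = state + state * environment  # each stage builds on the last
              return state

          # Two runs from an identical "genome" diverge, because the outcome
          # is contingent on every intermediate step, not just the input.
          print(develop(1.0), develop(1.0))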

      As I said, the blueprint is more for the process, not the end result. Big difference. And blueprint is a pretty poor choice of term as the process it governs is highly contingent and emergently complex. A blueprint is static; DNA is made up of both regulatory genes that activate and suppress depending on circumstance and genes which encode for phenotypic expression, that is, observable traits.

      I cannot recommend PZ Myers’ take on this highly enough. He has a strong background in the relevant field and makes a much more detailed but still coherent and accessible argument.

      • Jeremy Wilson says

        I think Kurzweil knows that chemical biology is inherently non-linear. And I think he knows how fetal development works. I agree that blueprints are static, but so is DNA without cellular mechanisms to start the process. In sperm and egg banks the DNA is practically frozen. The blueprint is there but the process is stalled and hasn't even begun until the sperm and egg come together.

        PZ Myers doesn't understand that Kurzweil is not talking about compressing "the process" but that the DNA information can be compressed then uncompressed. As I said before, this is only for biological brains.

        • Thomas Gideon says

          I am fairly certain that Myers gets that the compression would be applied to the information content of the DNA itself. It is irrelevant whether that information is compressed, as the argument is that Kurzweil is drastically underestimating the complexity involved in taking DNA as a starting point and somehow ending up with a functioning brain. Compressing the input doesn't compress the time complexity of the process; in fact it will only increase it, as you'll need to decompress the information to actually do anything with it. The process in question isn't a direct translation of DNA into atoms or bits in a computer model. The only method about which we have any clue for getting from DNA to an organ, let alone an organism, is to "execute" the DNA, which is the non-linear, emergently complex process I was talking about.
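
          To put the same point in programmer's terms, here is a sketch of my own using Python's standard zlib module: a compressed input buys you nothing until you pay to expand it, and the downstream work still scales with the expanded form.

              import zlib

              # A highly redundant "genome" compresses extremely well...
              genome = b"ACGT" * 1_000_000
              packed = zlib.compress(genome)
              print(len(genome), "->", len(packed))  # ~4 MB down to a few KB

              # ...but to do anything with it you must decompress it first, and
              # any downstream "development" still walks the full sequence.
              unpacked = zlib.decompress(packed)
              work = sum(1 for _ in unpacked)        # stand-in for the real process
              print("units of work:", work)          # scales with the 4 million, not the few KB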

          This is where Myers' discussion of proteomics is relevant, as that is what a DNA program does when executed. It produces proteins that then fold in non-trivially complex ways as they are synthesized. To get from proteins and how they fold to organs requires simulating layers on layers of complexity. We haven't even fully mapped out the full set of proteins and necessary folding, let alone gotten to the next layer up that depends on them. Simply put, there isn't a 1:1 mapping between DNA and the neurons in a living brain, and lacking that, I doubt there is any shortcut past simulating from protein expression all the way up through cellular processes and neuronal development and interaction. That is a ton more computation than just simulating the end result.
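
          That first layer is at least easy to illustrate. Here is a minimal sketch of transcription and translation, using a small fragment of the standard genetic code (the table is abbreviated purely for illustration):

              # A few entries from the standard genetic code: RNA codons to amino acids.
              # Real translation involves vastly more machinery; this is only the mapping.
              CODON_TABLE = {
                  "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
                  "UGG": "Trp", "UAA": "Stop", "UAG": "Stop",
              }

              def translate(dna):
                  rna = dna.replace("T", "U")  # transcription, hugely simplified
                  protein = []
                  for i in range(0, len(rna) - 2, 3):
                      amino = CODON_TABLE.get(rna[i:i + 3], "?")
                      if amino == "Stop":
                          break
                      protein.append(amino)
                  return protein

              print(translate("ATGTTTGGCTGGTAA"))  # ['Met', 'Phe', 'Gly', 'Trp']

          And the mapping is the trivial part; the folding that follows each translation is the computationally explosive step.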

          In algorithmic terms, Kurzweil wants to assume that the order of complexity for simulating a brain is constant or some low order (linear, log-linear or logarithmic) in the assumed input, DNA. I doubt that is anywhere near correct; I would say we don't even have a good first order approximation of the inherent computational complexity of producing a brain from DNA, for the reasons I stated above.

          Granted, once you have that result, reproducing a digital copy is trivial in theory, though the practicalities at that scale of information density may be prohibitive. Bear in mind that the best general purpose lossless compression averages about 2:1. If a living brain requires yottabytes of storage, cutting that in half doesn't make the problem of copying it any more tractable.
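
          In round numbers (my own illustration; the yottabyte figure is a stand-in, not a measurement):

              yottabyte = 10 ** 24                  # bytes
              best_lossless_ratio = 2               # about what general purpose tools manage
              print(f"{yottabyte / best_lossless_ratio:.1e} bytes")  # 5.0e+23, still hopeless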

          I would also argue that the brain is dependent on the body in which it is embedded. So I think there is a basis to reject Kurzweil's argument even earlier on: that you could somehow separate out the portion of DNA that ultimately results in the development of a brain and have that fragment even work in any meaningful fashion.

          I am not trying to say that we may never achieve machine based intelligence (either wholly synthetic or simulating existing minds); rather, I reject Kurzweil's naive predictions about how and when it will happen. I think Kurzweil's folly lately is an all too common fear of mortality leading him to willfully take on flawed assumptions so long as they might lead to his personal wish fulfillment of practical immortality.

          • Jeremy Wilson says

            “Simply put, there isn’t a 1:1 mapping between DNA and the neurons in a living brain”

            I think this is exactly what he says in his book.

            “and lacking that, I doubt there is any shortcut past simulating from protein expression all the way up through cellular processes and neuronal development and interaction. That is a ton more computation than just simulating the end result.”

            When I read "The Singularity Is Near" I never saw Kurzweil say that we need to simulate the brain atom by atom, protein by protein. I also would reject the how and when if it were based on simulating protein folding, but it's not in the book.

            I would recommend Chapter 4: Achieving the Software of Human Intelligence: How to Reverse Engineer the Human Brain.

          • Thomas Gideon says

            Citing his book doesn't prove anything, nor does it persuade me. You are restating his assumptions, which I have tried to argue are faulty.

            All the same, I will borrow the book from the library and read the suggested chapter so I can at least have a better understanding of his argument, in the absence of anything other than a mere citation on your part. I doubt very much he'll persuade me, especially if it is any variation on his past arguments. I have read him argue for being able to perform very detailed scans, often destructive, of an existing brain, and I am skeptical that the resulting information leads any more easily to a workable simulation of a human brain by 2030.






The Command Line by Thomas Gideon is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.