Reconstructing a brain

Every once in a while, I get some glib story from believers in the Singularity and transhumanism that all we have to do to upload a brain into a computer is make lots of really thin sections and reconstruct every single cell and every single connection, put that data into a machine with a sufficiently robust simulator that executes all the things that a living brain does, and presto! You've got a virtual simulation of the person! I've explained before how that overly trivializes and reduces the problem to an absurd degree, but guess what? Real scientists, not the ridiculous acolytes of Ray Kurzweil, have been working at this problem realistically. The results are interesting, but also reveal why this work has a long, long way to go.

In a paper from Jeff Lichtman's group with many authors, they revealed the results of taking many ultrathin sections of a tiny dot of tissue from mouse cortex, scanning them, and then making 3-D reconstructions. There was a time in my life when I was doing this sort of thing: long hours at the ultramicrotome, using glass knives to slice sequential sections from tissue embedded in an epoxy block, and then collecting them on delicate copper grids, a few at a time, to put on the electron microscope. One of the very cool things about this paper was reading about all the ways they automated this tedious process. It was impressive that they managed to get a complete record of 1,500 µm³ of the brain, with a complete map of all the cells and synapses.

But hey, I don't have to explain it: here's a lovely video that leads you through the whole thing.

I have to squelch the fantasies of any Kurzweilians who are orgasming over this story. It was a colossal amount of work to map a very tiny fraction of a mouse brain, and it's entirely a morphological study. This was a brain fixed with osmium tetroxide -- it's the same stuff I used to fix fish brains for EM 30 years ago. It's a nasty toxin, but very effective at locking down membranes and proteins, saturating them with a high electron density material that is easily visualized with high contrast in the EM. But obviously, the chemistry of the tissue is radically changed, and there's a lot of molecular-level information lost.
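To get a feel for just how tiny that fraction is, here's a back-of-envelope sketch in Python. The ~500 mm³ mouse brain volume is my own ballpark assumption, not a figure from the paper; only the 1,500 µm³ reconstructed volume comes from the study.

```python
# Back-of-envelope scale check. The mouse brain volume is my own
# rough assumption (~500 mm^3), not a number from the paper.
MOUSE_BRAIN_MM3 = 500
UM3_PER_MM3 = 1000 ** 3          # 1 mm = 1000 um, so 1 mm^3 = 10^9 um^3

reconstructed_um3 = 1500          # the volume mapped in the paper
mouse_brain_um3 = MOUSE_BRAIN_MM3 * UM3_PER_MM3

fraction = reconstructed_um3 / mouse_brain_um3
volumes_needed = mouse_brain_um3 / reconstructed_um3

print(f"fraction of one mouse brain mapped: {fraction:.1e}")
print(f"equivalent volumes to cover one mouse brain: {volumes_needed:.1e}")
```

Roughly three parts in a billion of a single mouse brain, meaning hundreds of millions of equivalent volumes to do the whole thing, before you even think about a human brain.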

So the shape of the cellular processes is preserved, and we can see important features like vesicles and post-synaptic densities and connexins (although they didn't find any gap junctions in this sample), but physiological and pharmacological information is only inferred.

But it's beautiful! And being able to do a quantitative analysis of connectivity in a piece of the brain is fascinating.

(A) Cortical neuronal somata reconstruction to aid in cortical layer boundaries (dotted lines) based on cell number and size. Large neurons are labeled red; intermediate ones are labeled yellow; and small ones are labeled blue. The site of the saturated segmentation is in layer V (pink arrow). These two layer VI pyramidal cell somata (red and green arrows) give rise to the apical dendrites that form the core of the saturated cylinders. (B) A single section of the manually saturated reconstruction of the high-resolution data. The borders of the cylinders encompassing the "red" and "green" apical dendrites are outlined in this section as red and green quadrilaterals. This section runs through the center of the "green" apical dendrite. (C) A single section of a fully automated saturated reconstruction of the high-resolution data. Higher magnification view (lower left inset) shows 2D merge and split errors. (D) The two pyramidal cells (red and green arrows) whose apical dendrites lie in the centers of the saturated reconstructions. Dendritic spines reconstructed in the high-resolution image stack only.

Another reason I like this paper: the authors have a realistic perspective on the magnitude of the problem.

Finally, given the many challenges we encountered and those that remain in doing saturated connectomics, we think it is fair to question whether the results justify the effort expended. What after all have we gained from all this high density reconstruction of such a small volume? In our view, aside from the realization that connectivity is not going to be easy to explain by looking at overlap of axons and dendrites (a central premise of the Human Brain Project), we think that this ‘‘omics’’ effort lays bare the magnitude of the problem confronting neuroscientists who seek to understand the brain. Although technologies, such as the ones described in this paper, seek to provide a more complete description of the complexity of a system, they do not necessarily make understanding the system any easier. Rather, this work challenges the notion that the only thing that stands in the way of fundamental mechanistic insights is lack of data. The numbers of different neurons interacting within each miniscule portion of the cortex is greater than the total number of different neurons in many behaving animals. Some may therefore read this work as a cautionary tale that the task is impossible. Our view is more sanguine; in the nascent field of connectomics there is no reason to stop doing it until the results are boring.

That's beautiful, man.


Kasthuri N, Hayworth KJ, Berger DR, Schalek RL, Conchello JA, Knowles-Barley S, Lee D, Vázquez-Reina A, Kaynig V, Jones TR, Roberts M, Morgan JL, Tapia JC, Seung HS, Roncal WG, Vogelstein JT, Burns R, Sussman DL, Priebe CE, Pfister H, Lichtman JW. (2015) Saturated Reconstruction of a Volume of Neocortex. Cell 162(3):648-61. doi: 10.1016/j.cell.2015.06.054.


I think you are way off, and here's why, in a nutshell. Kurzweilians do not pretend (at least the fair ones, unlike you) that they've solved the human brain yet, or anything like it. They would be appreciative that a tiny portion of a mouse's brain had been analyzed, but they are quite aware of the difficulties with solving the enormous challenges of the human brain. But, again unlike you, they are aware that in thirty years' time, with 1000x more horsepower to throw at this problem, and many advances over the years, it is highly possible that they are going to solve it for real. It's a new way to look at this problem, powered by Moore's Law and several others -- and I think you have overlooked that basic but true fact.
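For what it's worth, that "1000x in thirty years" figure is easy to sanity-check with a couple of lines of Python; the doubling periods below are hypothetical inputs for comparison, not historical claims about Moore's Law:

```python
import math

def growth_factor(years, doubling_period_years):
    """Raw compute growth over `years`, assuming a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

# A 1000x gain over 30 years implies a doubling roughly every 3 years:
implied_period = 30 / math.log2(1000)   # about 3.0 years

for period in (1.5, 2.0, 3.0):
    print(f"doubling every {period} yr -> "
          f"{growth_factor(30, period):,.0f}x in 30 years")
```

A doubling every ~3 years yields the 1000x figure; the classic 18-to-24-month cadence would give far more, if it held for 30 straight years, which is itself a large assumption.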

By Peter Marshall (not verified) on 06 Aug 2015 #permalink

PZ, major thanks for taking on the Singularitarians. The San Francisco Bay area and Silicon Valley are infested with them, notably in companies such as Google (where Kurzweil has been given a blank check as chief of engineering), Facebook, and the like.

I'd be interested in your feedback on my layperson critique of Singularitarianism:

1) Neurons are not binary switches: they also use analog computing via neurochemicals. Chemical interactions with cells might be simulated in algorithms, but simulation is not duplication, and the result will not be human-like consciousness.

2) Per material monism, if consciousness is identical with the functioning of the brain, then "uploading" at best would be similar to cloning. The clone lives, you die, and dead is still dead. There is no silicon immortality.

3) If you can reincarnate into a computer, you can also reincarnate into a cat. Reincarnation is a widespread religious belief, but that does not make it science.

This just occurred to me, wonder what you think of it:

We also have a lot of engineers here from India, where the Hindu belief in reincarnation is part of the common culture. I'm wondering if the background of cultural acceptance of reincarnation in general, might not be providing a basis for an emotional inclination to accept reincarnation via upload, on the part of these engineers? In that case, their acceptance of it would tend to make it more acceptable among their American and other nationality peers in these companies, who didn't grow up with the same cultural background.

Science doesn't claim to provide answers to moral questions, but scientists are free to weigh in on moral issues, and the following are worthy of consideration.

These items start from the perspective, "what if the Singularitarians are right, and we can build human-level and godlike AIs, and also 'upload' our souls er uh minds to them?" By starting there, the moral contradictions of Singularitarianism become apparent, and Singularitarians have not yet addressed these points:

1) Making a conscious AI has the same moral implications as making a baby. The result is a person, regardless of inhabiting a different type of body. Any such entity should have the full set of "human" and civil rights as any other person in a civilized society.

2) Using conscious AIs as tireless worker-bots is morally equivalent to a new form of slavery, with the same rationalization as the old-fashioned form: "they don't look like us."

3) Using conscious AIs as receptacles for "uploaded" minds is morally equivalent to growing cloned babies for use as sources of transplant parts.

4) Regardless of one's answer to (3), what becomes of a society in which a very few people have the wealth to afford immortality via "upload," while the rest of us die as we always have?

5) Related to (4), Singularitarianism does not place any moral limits on _how_ one goes about accumulating the wealth, power, prestige, or other means by which to secure a place on the Immortality Express.

6) Even if "upload" is merely equivalent to cloning, and does not produce immortality of the original mind: Do we really want to live in a world where the likes of Kurzweil, Larry & Sergey, Zuckerberg, et al., continue to exercise active influence long after they have died? This is not the same scenario as occurs with the works of others whose influences are considered "timeless." Plato, Jesus, Newton, da Vinci, and others did not choose to become "immortalized" through their works: their status in history was not self-appointed, but occurred as the result of the acclaim of others.

7) If these points are taken seriously, then the only reason to produce conscious human-level or superhuman-level AIs is to populate the world with a new species of geniuses, who would have the same rights as any of us. While that might be worthwhile for the same reason as having more geniuses in the biological human population, it also calls up exactly that question:

Why not improve global society in all the ways we already know will bring forth the geniuses who are alive now and who are being born every day? Improved prenatal, infant, and child nutrition, improved public health, improved public education, greater access to post-secondary education, and equality for girls and women worldwide, will have that effect. Building an army of silicon geniuses while ignoring existing human geniuses is at best a warped set of priorities.

----

BTW, PZ, please feel free to bluntly say if/where any of my points here and in my preceding comment are just plain wrong or "not even wrong." Strong critique is much appreciated.

I don't associate myself with the term "singularitarian", nor do I favor the faster timelines of predicted technological advancement, but I do take issue with claims that these ideas are not only unlikely "soon", but rather are philosophically and fundamentally flawed such that they could never occur, even with another ten thousand years of scientific advancement. That said, I disagree with essentially everything 'G' wrote above.

G wrote: Neurons are not binary switches: they also use analog computing via neurochemicals.

Essentially no one has claimed neurons are binary switches. Must we belabor ourselves refuting your strawmen, or can you not offer them to begin with please?

G wrote: simulation is not duplication

Depends on your philosophical stance. If you believe in a biological underpinning to consciousness, then you can make claims like the one above. But many people prefer a psychological or functional view of both consciousness and of personal identity. By such views, and relying on the methodology of system identification, the only thing that matters is some functional behavior of the neuron. By far the most popular such level of analysis is the statistical patterns of efferent action potentials in the context of afferents, taking into account environmental factors like chemical and electrical gradients. With that level of system identification stated, once we have captured and reproduced that neuron's stochastic characteristics, we can say we have truly replicated the neuron, not just "simulated" it -- even if we don't reimplement those patterns of behavior with proteins and lipids. Electronics can do the same job.
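To make the system-identification framing concrete, here is a toy sketch in Python. The logistic form, the weights, and the spike-generation scheme are all illustrative assumptions of mine, not a model of any real neuron; the point is only that the characterization is purely functional:

```python
import math
import random

def neuron_spike_prob(inputs, weights, bias=-2.0):
    """Toy functional characterization of a neuron: afferent input
    rates in, spike probability out. The logistic form and all
    parameters are illustrative assumptions, not real neural data."""
    drive = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-drive))

def simulate_spikes(inputs, weights, n_trials=10_000, seed=42):
    """Sample the stochastic output and estimate the firing rate."""
    rng = random.Random(seed)
    p = neuron_spike_prob(inputs, weights)
    return sum(rng.random() < p for _ in range(n_trials)) / n_trials

p = neuron_spike_prob([1.0, 0.5], [2.0, 1.0])    # spike probability, ~0.62
rate = simulate_spikes([1.0, 0.5], [2.0, 1.0])   # empirical rate, close to p
```

Any substrate that reproduces this input-to-spike-probability map (membranes, transistors, or software) is, on the functionalist view, the same system.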

G wrote: the result will not be human-like consciousness

This is an entirely unsubstantiated claim. You are welcome to believe it since people believe all sorts of things, but it is based on no evidence whatsoever. We don't have an adequate understanding of the nature of consciousness yet to state from what systems consciousness can or cannot arise.

G wrote: “uploading” at best would be similar to cloning

This is a common claim, more generally phrased as "uploads are just copies". Pretty much the only tenuous argument that would support this claim is the need for an unbreakable "stream of conscious continuity", which is precisely what is often claimed in such statements, but modern medicine has shown that the mind and the notion of personal identity can survive levels of neural shutdown never seen in antiquity, such as by modern anesthesia or in rare cases of "rapid frigid drowning". Similar notions underlie the ongoing pursuit of medically induced hibernation or suspended animation. Once we shed the need for continuity, we can then reasonably conceive of procedures in which the brain is essentially halted, and then restarted with an arbitrary delay (which could in theory span thousands of years). Once *that* position is adopted, and in concert with the claim above that the mind is a functional entity as opposed to materially dependent on a particular set of ribosomes or other idiosyncratic matter, we can then make the reasonable next step to replicating those functions in another system instead of the original brain before the system is reawakened, and we can do so with the conclusion of a preservation of identity, not a "clone" as you put it.

At this point, it is often refuted: "but what if the original is awakened too? Who is who in this case?" Such questions are philosophically fascinating but have no bearing on the preceding paragraph in terms of technical utility. The best philosophical resolution to uploading without destroying the original is "branching identity" as presented by both Michael Cerullo and myself (Keith Wiley) in independent publications. Google to your satisfaction.

G wrote: If you can reincarnate into a computer, you can also reincarnate into a cat

This is getting silly. The functional theory of mind summarized above claims you can instantiate a mind in any physical system that is sufficient to replicate the functional behavior of a human brain. It has never been claimed that a feline brain could do any such thing for a human brain, so you are once again raising strawmen, which I rather resent in mature debate. A computer could reasonably satisfy these requirements if it is built specifically toward replicating crucial human neural functions. This is not remotely the case for cat brains, not the least reason being that they don't even have enough neurons for the job. Don't be ridiculous.

Keith Wiley
Author of A Taxonomy and Metaphysics of Mind-Uploading

By Keith Wiley (not verified) on 07 Aug 2015 #permalink

Hi Keith!;-) Fasten your seatbelts....

1) The claim that you can produce consciousness in a classical computing platform presupposes that binary switches + software = a brain. You can no more do that than you can make a duck out of clockwork (and I trust you recognize where that analogy comes from, but if not, keyword search "automata").

2) No it doesn't depend on your philosophical stance, any more than what happens to your mind after you die. Reality is objective, and science deals in observables.

3) "But many people prefer a psychological or functional view of both consciousness and of personal identity." That's nice, and the adherents of more conventional religions prefer a spiritual or theological view of both consciousness and personal identity, but those views do not claim to be science.

The rest of that paragraph reads like a coder's version of New Age, but wishing doesn't make it so. See also my point about simulating ducks in clockwork.

4) "G wrote: 'the result will not be human-like consciousness.' This is an entirely unsubstantiated claim." Sorry but that's not how science works. The claim that the result _will be_ human-like consciousness, is an extraordinary claim that requires evidence. Burden of proof is on you, otherwise the null hypothesis stands. So far you haven't met it.

5) Uploading: If you clone yourself, the clone is not you. If the clone lives and you die, you're still dead. When you drop dead, one of two things will happen: either your mind will cease to exist, or you'll have some kind of afterlife. No amount of hand-waving will change the outcome, any more than arm-flapping will enable me to fly.

Yes, neural shutdown and anaesthesia, etc. Let's not forget near-death experiences (NDEs). If from those things we can conclude that minds can function while their brains are flatlined, then all we've done is re-opened the door to some kind of Hereafter. If that's true, then you can spend eternity in a silicon prosthesis, and I can opt for a kitty. But first you have to demonstrate that it's true, so as I said, burden of proof is on you.

BTW, total irony department: the guy who is most likely to figure out the mechanism whereby volatile anaesthetics shut down consciousness, is none other than Stuart Hameroff, who along with Roger Penrose, created the Orch-OR theory that raises the complexity level of neural computation from approx. 10^16 to 10^24, and thereby puts The Singularity off by another couple of hundred years. Not to worry, since Ray the K can well afford two centuries on ice at Alcor. Whether the mortal masses are willing to tolerate an immortal-appearing oligarchy remains to be seen.

"Branching identity" sounds like another New Age import. One of two things are true: either a) the original and the clone each have their own independent minds, one running on bio neurons and the other on silicon neuron-equivalents (in which case, when you die, you're still dead), or b) one mind spread over two bodies, with their respectively different types of brains (in which case as long as one of your bodies lives, you have continuity of existence & experience). (b) is effectively equivalent to telepathy with an effect size far greater than anything that's ever been demonstrated in the peer-reviewed literature, or it's a migratory soul.

(I don't use Google, I use Ixquick and DuckDuckGo. Moral consistency & all.)

6) Instantiating minds in functionally equivalent physical systems: I don't doubt that if you could produce a truly functionally equivalent system, it would in turn produce a mind (or capture a soul, as the case may be). And the moral implications of doing that would be equivalent to those of having a baby. Which gets us right back to the beginning of my moral objections, points (1) through (7) in my preceding comment. Which you still have not addressed, but I'll gladly stick around in case you do.