There's an interesting conversation in the New York Times: a neuroscientist, Kenneth D. Miller, argues that brain uploading ain't gonna happen. I agree with him, and only partly because of the argument from complexity he gives.
Much of the current hope of reconstructing a functioning brain rests on connectomics: the ambition to construct a complete wiring diagram, or “connectome,” of all the synaptic connections between neurons in the mammalian brain. Unfortunately connectomics, while an important part of basic research, falls far short of the goal of reconstructing a mind, in two ways. First, we are far from constructing a connectome. The current best achievement was determining the connections in a tiny piece of brain tissue containing 1,700 synapses; the human brain has more than a hundred billion times that number of synapses. While progress is swift, no one has any realistic estimate of how long it will take to arrive at brain-size connectomes. (My wild guess: centuries.)
Second, even if this goal were achieved, it would be only a first step toward the goal of describing the brain sufficiently to capture a mind, which would mean understanding the brain’s detailed electrical activity. If neuron A makes a synaptic connection onto neuron B, we would need to know the strength of the electrical signal in neuron B that would be caused by each electrical event from neuron A. The connectome might give an average strength for each connection, but the actual strength varies over time. Over short times (thousandths of a second to tens of seconds), the strength is changed, often sharply, by each signal that A sends. Over longer times (minutes to years), both the overall strength and the patterns of short-term changes can alter more permanently as part of learning. The details of these variations differ from synapse to synapse. To describe this complex transmission of information by a single fixed strength would be like describing air traffic using only the average number of flights between each pair of airports.
Underlying this complex behavior is a complex structure: Each synapse is an enormously complicated molecular machine, one of the most complicated known in biology, made up of over 1,000 different proteins with multiple copies of each. Why does a synapse need to be so complex? We don’t know all of the things that synapses do, but beyond dynamically changing their signal strengths, synapses may also need to control how changeable they are: Our best current theories of how we store new memories without overwriting old ones suggest that each synapse needs to continually reintegrate its past experience (the patterns of activity in neuron A and neuron B) to determine how fixed or changeable it will be in response to the next new experience. Take away this synapse-by-synapse malleability, current theory suggests, and either our memories would quickly disappear or we would have great difficulty forming new ones. Without being able to characterize how each synapse would respond in real time to new inputs and modify itself in response to them, we cannot reconstruct the dynamic, learning, changing entity that is the mind.
That's part of the problem: the brain is really, really complicated. That tiny scrap of brain tissue where they mapped out all the synapses? I wrote about that here; it was a tiny slice, 1,500 µm³, or a little dot about 12 µm on a side…1/80th of a millimeter. It contained all of those synapses, took a huge effort (an effort that destroyed the tissue), and it recorded only a snapshot of cellular and subcellular structure. There was no information about those thousands of proteins, or the concentration of ions, or any of the stuff we'd need to know to reconstruct activity at a single synapse -- all that was also destroyed by the chemical processing required to preserve the structure of the cell.
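The arithmetic behind those scale figures is easy to check. A back-of-the-envelope sketch; the whole-brain synapse count here is my assumption (a mid-range value from the commonly cited 10^14 to 10^15 estimates), not a figure from the article:

```python
# Back-of-the-envelope arithmetic for the scale figures above.
# Assumed: ~2e14 synapses in a human brain (mid-range of common estimates).
sample_synapses = 1_700
human_synapses = 2e14

scale_factor = human_synapses / sample_synapses
print(f"scale-up factor: {scale_factor:.1e}")  # ~1.2e11: "more than a hundred billion times"

# Side of a cube with the sample's 1,500 cubic-micrometer volume:
side_um = 1_500 ** (1 / 3)
print(f"cube side: {side_um:.1f} micrometers")  # ~11.4 µm, roughly 1/80 mm
```

So the mapped sample really is about eleven orders of magnitude short of a whole brain, even before worrying about the proteins and ion concentrations the preparation destroys.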
We aren't even close to being able to take apart a brain at the level necessary. Miller is exactly right. And as he points out, one additional problem is that the brain isn't static -- it's not going to hold still long enough for us to get a full snapshot.
But as I said, complexity is only part of the problem, and if you focus on just that issue, it opens you up to this kind of rebuttal from Gary Marcus.
Two hundred years ago, people thought that flying machines were more or less impossible; cellphones were inconceivable as real-world artifacts. Nobody knew what a genome was, and nobody could have imagined sequencing one for a thousand dollars.
Mr. Miller’s articulation of the complexity of the brain is reasonable, but his extrapolation proceeds without any regard whatsoever to the pace of technological progress. It is the ratio between the two — complexity and progress — that matters.
Brain uploads won’t be here tomorrow, but there is a very good chance that they will be here within a century or two at most. And there is no real argument to the contrary.
We've got a problem with lots and lots of parts, and it's too complicated for us to even count the parts. But technology marches on, and we can expect that someday we'll have widgets that can track and count far more parts than we can even imagine now. It doesn't matter how many parts you postulate, that is a merely quantitative problem, and we've been really good at solving quantitative problems. Why, any day now we'll figure out how to squeeze enough energy into a teeny-tiny box so that we can build jet-packs.
As for the genome argument, that much is correct: we're really good, and getting better, at sequencing a few billion nucleotides at a time. With a sufficiently simple definition of the constitution of the cell, you could claim that it's a solved problem: we can figure out the arrangement of the letters A, T, C, and G in a linear polymer just fine. Now, telling me how that gets translated into a cell…well, that's a little more difficult. That's a whole 'nother problem we aren't even close to solving in detail. It's also not going to be solved by enumerating the bits.
Another problem here, beyond complexity, is specificity. My brain and your brain are equally complex, have about the same number of parts, and are arranged in roughly equivalent ways, but they differ in all the specifics, and it's those specifics that matter. If you were to disintegrate my brain molecule by molecule so you could attempt to reconstruct it in a computer, it does me no good if you build your brain in the machine, or Jeffrey Dahmer's brain, or a profoundly malfunctioning artifact with randomized cognitive connections, or a blank blob with a potential to learn. All the transhumanists want personal immortality by placing their personal, unique awareness in a box less likely to rot than our standard organic hardware. So not only do you have to build something immensely complicated, it's got to be a nearly exact copy of something just as complicated.
And the bits in this copy are specified right down to the arrangement of individual molecules and even the concentration of ions in tiny compartments…all of which are changing constantly to generate the mind. You would have to freeze my brain in place instantaneously, capture the position and state of every molecule in it, and then build a copy with astonishing accuracy at the molecular level -- all while the copy is locked down and not reacting in any way with its components -- and then restart it all instantaneously as well. There are physical limits to how precisely individual molecules can be manipulated. This problem goes beyond building better tools to map and inventory a multitude of parts. It's bumping up against limitations of the physical universe.
I agree with Marcus that someday we might be able to build something as complicated as a brain -- we already do it, every time we make a baby. But making an exact copy of 1.5kg of insanely intricate meat, in a medium that isn't meat, somehow? Nah, that's not a realistic proposal.
Teleportation is actually a really good comparison: to "make a functional copy elsewhere" is basically what both of these things would do. And if we could do a brain, I don't see why we couldn't do arms and legs and spines and livers -- much simpler pieces.
We don't even know what state to look at for brains. Now that we see glial cells affect synapse activity, we can't restrict it to neurons and synapses -- much less figure out how to capture that state for the whole brain.
It is in principle possible to do brain uploading without freezing its state, as long as you have a device that can pretend to be a neuron and you can capture the state of a single neuron in sufficiently short time. Then you can simply replicate and replace neurons one at a time.
This still does not look like a near-term possibility, and in any case it enables teleportation, since once you've got all this tech you can certainly rip someone apart and reassemble something that at least looks a lot like them somewhere else.
We really don't know the minimum information we would need to extract from the physical structure of a brain to simulate a mind well enough. We have many cases of subjects with extensive brain damage who show almost no sign of that damage. Many neurons die every day in our brains, yet we remain ourselves.
I think that comparing the quantity of (useful) information contained in a mind to the physical information about the brain's structure needed to replicate it gives a better idea of the problem. How many bits of information define our self? Our memories?
I think that much more important than a perfect connectome of our brain will be an understanding of how our brain's structure changes when we learn something new. A good enough copy of our brain, coupled with a very detailed simulation of how it changes under external and internal stimuli, could replicate our mind.
Thankfully someone is finally taking on the Singularitarians and Transhumanists and suchlike quacks and wackos.
Whether or not it's ever possible to produce a functioning human brain other than by making a baby remains to be seen. But if it ever became possible, then any such brain, or its cybernetic equivalent, would be a person with inherent rights (regardless of not having the rest of a body, see also paralysis victims). Using that brain as a receptacle for someone else's memories etc. would have the same moral implications as growing babies for transplant parts.
The other "philosophical issue" reduces to this: You make a clone of yourself. You die and your clone lives. You're still dead. There is no escaping that outcome. If you can reincarnate into a clone or a computer, you can also reincarnate into a cat.
Re. Anthony @ 2: If you replace 100 neurons per second, it will take 27 years to replace all (approx.) 85 billion neurons in an adult brain. BTW, I set up a spreadsheet specifically to deal with this "gradualist" nonsense, so tell me how many neurons per second you think you can replace, and I'll tell you how many years the patient is going to be stuck in a hospital bed having their brain slowly vivisected.
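The spreadsheet arithmetic above is easy to reproduce; a minimal sketch using the same approximate 85-billion-neuron count:

```python
# Years needed to replace every neuron, one at a time, at a given rate.
NEURONS = 85e9                          # approximate neuron count, adult human brain
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds

def replacement_years(neurons_per_second: float) -> float:
    """How long a sequential replacement takes at the given rate."""
    return NEURONS / neurons_per_second / SECONDS_PER_YEAR

print(f"{replacement_years(100):.0f} years")   # ~27 years at 100 neurons/second
```

Plug in whatever rate you think is achievable; even at 10,000 neurons per second you're looking at the better part of a decade.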
Re. MarkK @ 1: "Now that we see glial cells affect synapse activity,..." The findings that led to that conclusion becoming part of the canon of neuroscience came from Stuart Hameroff, who is presently working on empirical tests of his and Roger Penrose's "orchestrated objective-reduction" theory of QM computation at the level of tubulin proteins in the cytoskeletons of neurons. Past success is no guarantee of future results, but the findings to date have been supportive, and there is also support from unrelated research in other areas.
Why that's important: If Hameroff is correct, then per his figures, the computational complexity of human brains is not 10^16 per the classical computing model of neurons, but 10^28 per his theory. Now you see why Singularitarians and Transhumanists hate Hameroff & Penrose: if they're right, their dreams of immortality are literally a dozen decimal places further from reality.
All of this would be no big deal, and Singularitarians & Transhumanists would be just another weirdo cult, but for one thing: They have attracted True Believers in high places, whose wealth and status have given this pseudoscientific twaddle more credibility than it could ever possibly deserve.
Specifically, Sergey & Larry at Google are Singularitarians, and they've hired the original Singularity prophet Ray Kurzweil as chief of engineering, with an unlimited budget. Mark Zuckerberg and Larry Ellison are also True Believers, and there is a quote from the latter to the effect of "I don't see any reason why I should have to die."
IMHO they should go back to seances and crystal balls. At least they won't be wasting shareholders' money, misleading the public, and creating a panoptic dystopia along the way.
I'm a transhumanist. No, not everyone is a tech bro nutcase who wants personal immortality for themselves. The logical extension of working toward our children not dying from cancer is working toward our great-grandchildren not dying, full stop. Humanist transhumanism is a major factor in the awesomeness of atheism: unlike theists, whose method of dealing with death is basically the worst case of Stockholm syndrome, we can actually acknowledge that body failure sucks and fight it.
The most fantastical method of achieving immortality I find remotely plausible is gradually replacing natural brain tissue of a living brain with something else. The most plausible method is robots. This means the first descendant of mine to become immortal will be adopted -- so what? They won't be the first adoptee among my descendants. Brain upload solves exactly nothing in the big picture.
That being said, a techbro-style immortal "clone" of myself is a perfectly satisfactory outcome. I'm not going to weep for my "original" body any more than I'm weeping for dead cells. Again, I think the cloning scenario is supremely implausible, that investing disproportionate amounts of money into researching it to the exclusion of other, more realistic research topics, is a crime, but I'm not going to cry sour grapes over the implausibility. I don't play the lottery either, but I'm not going to pretend it'd suck if I somehow won.
@G: any 'gradualist' design will of necessity involve large numbers of parallel workers; in order to make it possible at all, you need a number of parallel workers equal to the number of neurons on the surface of the transition zone, which will probably be in the millions (depending on details of how you run the scan), and since replacement needs to be completed on a time scale short enough to not cause breakdowns, the process has to be fairly fast. This is a formidable technological challenge, but I'm not yet ready to assert impossibility.
I assume that doing this would only be beneficial if the resulting brain would be the same conscious "me" when turned on. If that's the case, what if it were turned on when I still existed? Or, what if it was turned on in multiple replacement systems? Which one would be me? None of them right? As my friend says, they'll be more like children than "us."
zephyrean @ 5:
If your great-grandchildren are planning to not die, are they also planning to not have children themselves? Or do you have a plan for doubling the Earth's resource supply in one generation, especially given the locked-in impacts of climate change? Keep in mind that it's not the most-plentiful resource that governs (e.g., fusion solves for energy), but the least-plentiful, and energy does not get you transmutation of elements or the cessation of disease transmission in crowded living conditions.
If a clone is "satisfactory," you won't be able to weep over your existing body dying, because you'll be dead, and dead is dead. Either there's an afterlife or there isn't, but as I said before, if you can reincarnate into a clone or computer, you can also reincarnate into a cat. The fact that reincarnationist beliefs are popular in the tech subculture these days (whether from Hinduism or from Singularitarianism) doesn't determine whether they're true or false. (And BTW, I'm a technologist as well, 30 years' worth.)
Anthony @ 6:
OK, so parallel workers: then by all means put a number on it: what's the replacement rate in neurons per second? Core rule for science debates: no miracles or magic. So the lower limit for the size of connections between any two points (e.g. neurons and sensors) is a few atoms in diameter, last time I checked. (Any physicists here are eagerly invited to tell us the smallest size of conductor that can be achieved without incurring QM uncertainties, and add to that the insulator that encloses it.)
Achieving space travel at 0.1c or higher is also a "formidable technological challenge," which is why I use 0.01c - 0.03c as being nonetheless a viable path to interstellar migration.
In any case, even if you can clone the classical and quantum state of a brain in a few seconds, that doesn't give your mind an information superhighway to salvation. Your mind is still either solely and only the product of the brain you have now, or it's that plus some other state of existence conventionally called the soul. The rapid clone of a brain produces a result no different to the rapid clone of an entire body: you're still you, not that guy in the other hospital bed, and when you die, he gets to weep for you, but either you cease to exist or you're off to the wild blue yonder.
BTW, even if it was possible to transplant a mind into a god-box, what happens when there's a power failure? Nothingness is nothingness, regardless of whether biological or electrical.
This is what I don't get: why the fear of nothingness? All factors equal, it's better to exist than to not-exist, but if you stop existing you won't know it. Think about this when you go to sleep at night. Does the thought that you might not wake up keep you awake?
There are meditation exercises (coming from Buddhism which at root is non-theistic) that are aimed toward the goal of stopping all conscious mental activity: in other words, producing "nothingness." As a motivated teenager it took me less than two weeks to catch a brief glimpse of that state, so I'll say that a motivated adult should be able to do it in a month or less to be safe. Guess what? When it comes to "nothingness," there's nothing to fear.
The cessation of fear is a far more worthy goal than some kind of immortalist fantasy that resembles nothing so much as fundamentalist Christian eschatology.
@G: are you down to the arguments about irreducible complexity already? To avoid the metaphysical questions of "is it really you", I define "uploading" as "looks like the same person to a third party, or close enough that we would accept them as the same person". Given that we accept people using a variety of mind-altering substances as still the same person, that may be an overly generous definition. Whether brain uploading is actually possible depends to a fair degree on what aspects of the brain actually need to be copied -- quantum states, for example, aren't getting copied, but probably don't need to be copied either, given that you don't see people's brains crashing on exposure to ionizing radiation.
Replacement rate: what, you want me to provide hard statistics for a completely theoretical technology? There are possibilities ranging from hours to years, though the latter option only makes sense if the supporting hardware is small enough that you can enclose it within the skull.
As for life after death: even if brain uploading is possible, it's not happening on a time scale that will matter to me, and it's a basically philosophical question as to whether this qualifies. However, at a minimum it would qualify as a legacy; it's far more complete than, say, being immortalized in song.
Yes, the brain may be an overly complex piece of meat, but that does not necessarily mean that the processing and data in the brain are overly complex. After all, the blueprint of the brain is defined in a single cell, the first cell after conception. Inside this first cell, many other details have to be defined as well. Our full genetic info fits on a few DVDs. Our close cousins, such as the almost-brainless nematode, already share 50% of their genetic information with us. So only a little of the genetic info can be dedicated to the design of our brains. Simple processes can result in very complicated outcomes, but that does not mean such processes cannot be replicated relatively easily. As even a 10-dollar webcam is more sophisticated than our eyes, why should our brains not be replicable in principle? If we looked at every atom in the webcam, it would be impossible to replicate, but ignoring that, perfectly good webcams are being created by the billions.
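The "few DVDs" figure checks out, with room to spare. A sketch assuming the standard approximations of roughly 3.2 billion base pairs and a naive 2-bits-per-base encoding:

```python
# Information content of a haploid human genome, naively encoded.
BASE_PAIRS = 3.2e9        # approximate haploid genome length
BITS_PER_BASE = 2         # four letters (A, C, G, T) -> 2 bits each

genome_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"{genome_bytes / 1e9:.1f} GB")   # 0.8 GB, less than one 4.7 GB DVD
```

Of course, as the surrounding discussion notes, the genome being small says nothing about the size of the developed structure it unfolds into.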
Karl @ 7: That's why the cloning analogy.
If Karl-in-a-Box is turned on while Karl-in-a-body is still alive, what you have is the clone scenario. He isn't you. When you die, your soul doesn't hop into his (silicon) body. If you walk into a room and die, and your clone walks out of the room, it's not you, even if your friends say "you look younger now." Same case if your brain is destroyed in the process of scanning its memories into a box. You walk in, the box walks out, your friends say "you look like a computer now." But either way, clone or computer, dead is dead is dead.
Anthony @ 9:
Not irreducible complexity, _irreproducible_ complexity, and that distinction makes a difference.
"Looks the same to a third party" doesn't get you eternal life. A wax museum statue of you, or a Disney "animatronic" automaton "looks the same" as you in a photo or a video.
The fact that brains don't crash on exposure to ionizing radiation does not falsify the hypothesis of quantum computing in the neurons, any more than it does for the optical systems of birds. Findings to date on that have been supportive. We'll see how it comes out. But with or without quantum computing in neurons, let's not forget _chemistry_, which is where emotions come from, and simulation is not replication (how many people here live by eating pictures of food?).
Replacement rate: What I want is for people to think critically about this rather than magically or wishfully. Replacement rate is one issue, neurochemistry is another, and there are many others.
As for legacies, I've heard that before, and usually it's covering for subconscious hopes that the immortality-magic will work.
Anneb @ 10:
"First cell at conception" doesn't get you the capacity to store a lifetime's worth of information. For that you need the rest of the meat. The fact that simple processes generate complex outcomes, also does not address the actual content of a mind, any more than the simple chemicals on a photographic film address the complexity of the object that is photographed.
The webcam analogy also fails in the same manner as the false teeth of George Washington's era relative to artificial hearts in the 21st century. Your brain runs on the equivalent of 35 watts of electricity and you could hold it in your hands. IBM's most capable supercomputer runs on 70 kilowatts (70,000 watts), it can only do one task at a time (e.g. play chess) at human or better level, and you could maybe fit it into a truck if it was dismantled first.
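Taking the figures above at face value (both are rough; the brain's power budget is more often quoted at around 20 W), the efficiency gap is straightforward to put a number on:

```python
# Power-consumption gap, using the rough figures quoted above.
BRAIN_WATTS = 35
SUPERCOMPUTER_WATTS = 70_000

ratio = SUPERCOMPUTER_WATTS / BRAIN_WATTS
print(f"the machine draws {ratio:.0f}x the power of a brain")   # 2000x
```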
Moore's law is hitting its limits beyond which further miniaturization of components gets them down to the size where QM interference becomes an issue. There won't be any shrinking of a silicon platform down to the size or energy efficiency that's anywhere within a decimal place or three of a biological brain. But in any case, that silicon platform can't support consciousness any more than an electromechanical telephone switching system can (and it wasn't too long ago that telephone switches were used as analogies for brains: same as it ever was...).
And all this, for what? A shot at forever that in the end is pseudoscience and quackery. We may as well be devoting billions of dollars to developing "better" homeopathic "remedies."
It frankly shocks me that this subject is taken at all seriously in science-oriented forums. We may as well be discussing the future development of more effective magic spells. "The eye of a newt, and a dollop of dragon dung dropped while the Moon is in Pisces!"
Alright then. What do we need to copy the "soul?"
Frankly, why would we even want to?
I mean, sure, we like to fantasize about living in a Gibsonesque world. But ultimately there's nothing particularly useful about uploading a person's brain. Why invest that much effort and money into developing something that our own meaty brains do perfectly well, and not on developing something that is a better fit for the technology (i.e. AI of some kind)?
Karl @ 12:
The soul is a hypothetical "nonphysical" (I don't like that word either;-) or supernatural (above or outside of nature) entity that is the subject of traditional religion and not accessible to empirical testing. So I don't know that there's anything we can do within the range of science or technology to address it, much less copy it. If traditional religion is correct, then it's the intrinsic, irreducible, and eternal essence of a person, that can't be copied or anything else. But again, that's outside the scope of science, just as melody and rhythm are outside the scope of sculpture.
Outeast @ 13:
The reason people want to "upload" their minds to artificial brains of one kind or another (e.g. conscious AI machines) is to achieve immortality. Some people are downright terrified of death. Most of us manage to make peace with it one way or another, either via traditional or new religions, or via scientific or naturalistic explanations. Those who live in terror of death should probably seek counseling, and anti-anxiety medication if needed, to enable themselves to live without the constant fear. Fear sucks, and it can be treated.
AI is a loaded term with two forks in the road:
There's legitimate AI research such as into pattern recognition, learning algorithms, and the like, that promises to substantially improve the ability of computers to analyze data. This has major positive implications for e.g. medicine, space exploration, and so on. A decent AI system could streamline the diagnosis of illnesses, improve detection of planets in other star systems, etc.
AI systems could also assist in formulating positions in international diplomacy, or when diplomacy fails, military AIs could improve the accuracy of weapons systems to minimize civilian casualties. They could help detect terrorist activity, such as by spotting relevant patterns of communication without having to spy on innocent people. Or on the downside, they could be used to build a panoptic dystopia of limitless surveillance, on behalf of government or corporate interests. So there is much promise there, but also substantial peril, and there is no substitute for an informed electorate who exercise their right to vote.
Then there's the other fork in the road, the Singularity and suchlike, that aim to build human-level conscious AIs: a task that is simply not possible with classical computing platforms. Most obviously, silicon platforms lack the capacity for the chemistry of emotion, that is essential to consciousness as such (and algorithms won't do the job: simulation is not replication). Theoretically, if neurons utilize quantum mechanical computation (Penrose & Hameroff), then the absence of that element in a computing platform also means no consciousness in the box.
All the claims of uploading minds to computers are pure pseudoscientific quackery, being promoted as the wishful thinking of people who are unable to come to terms with the prospect of death. However: many are the atheists who have made peace with the prospect of nothingness; and many are the adherents of conventional religions who have made peace with the prospect of divine judgement or karma or (etc.). Those who can't deal with the reality of death need counseling and possibly medication to live without being in fear. What they don't need is someone trying to sell them a ride on the Quackery Express, especially given the large sums of money that are involved (e.g. the cost of having one's head frozen cryogenically at death).
The whole Singularity thing is fraught with ethical/moral problems beyond the issue of fraud. For example the prophets of Singularity have never addressed the issue of moral limits in the means to the end: if uploading was possible, what would be OK or not-OK for someone to do, in order to pay for the cost? Is it OK to lie, cheat, manipulate others, break the law, etc., in order to get a shot at immortality?
This is just scratching the surface and there's much more to be said if anyone's interested.
Trying to simulate every detail of the brain may not even be the best approach. It's sufficient, but is clearly not feasible. Is it necessary?
The brain is complex, but it's almost certainly much more complex than it needs to be. Almost every other biological system seems to be more complex than necessary, the brain is unlikely to be an exception.
Whenever I read about how some biological system is regulated, I always end up thinking "What an ugly hack! This has way too many moving parts. Any engineer who designed something like this would be fired." But that is expected, because it *wasn't* designed by an engineer. From what we know about how evolution works, we *expect* things to be that way.
Evolution doesn't optimize (if it doesn't have to). The genome is full of "junk" DNA, and the brain probably contains lots of junk synapses, and even junk neurons which either do nothing useful at all, or worse cancel out the effects of other junk components. ("Junk" may be a smaller fraction of the brain than of the genome, and still be significant.) Even if every single part is functional, expect it to be a horrible hack with way too many moving parts, because (with only a few exceptions) everything that has evolved is.
If you could understand how it all worked, and stripped it down to the simplest system performing the same function, a complete functional simulation of a human brain (in general, or of a specific individual) might fit on existing cluster computers. But that is a big if. The understanding required to do that is much more difficult than mapping every synapse.
Will it be possible to "upload" people in the foreseeable future? Don't be absurd.
It seems as though who we are is encompassed entirely in the brain. Just basing this on how brain injuries seem to change who people are. Have an in-law that suffered brain damage in a car accident. Many would not know that she is brain damaged, but by all accounts, she is a different person. Can't hold a job, behaves like a teenager, etc. I'm sure that if the option was available, they'd love to restore her brain back to the way it was before the accident.
Ralph @ 15:
Re. strip down the brain to the basics: The idea that there are superfluous "moving parts" in the brain has been tested again and again, and failed utterly.
1) Brain damage such as through physical injury, is known to produce substantial changes in personality and capabilities.
2) The glial cells were once thought to be inert structural tissue, but were found to have a role in information processing. Credit to Stuart Hameroff for that one, his first major successful hypothesis, now part of the canon of neuroscience (and now he's turned his attention to QM processing in proteins in the cytoskeleton of neurons: another case of "structural" material in the brain that may be found to have a role in information-processing).
3) Frontal lobotomies and other forms of psychiatric brain surgery in the 1950s and early 1960s, that were found ultimately to produce highly undesirable changes in moods, personality, cognition, etc. etc. Today we look back on that period in time with horror, at the number of people who were needlessly subjected to procedures (especially lobotomies) that substantially impaired their quality of life.
We also have:
4) A report in a major medical journal of a case in France of an individual who was able to function successfully as a father and as a municipal employee (probably a laborer), who was found to have a large area of fluid in his skull, such that his brain tissue was reduced to about 10% of normal quantity, and that was distributed as a layer on the inside of the skull. He tested at IQ approximately 70.
So if you want to "strip it down to the bare essentials," you could strip it 90% and you'd wake up with an IQ of 70. I don't think you'd want to volunteer for that. One could also achieve the same result by drinking large quantities of alcohol over a period of years.
Bottom line is: We do not know what elements of the brain are essential and which ones we can safely do without. But all the findings from medical science point in the direction that all of it plays some part in our moods, personality, and cognition.
At least we agree on the conclusion, which is that upload is absurd.
Karl @ 16:
Yes, per medical science, the entirety of the mind is a product of the functioning of the brain. (And science does not purport to address the question of the soul, which is not empirically testable.)
Per philosophy, the material monist position is that there is no soul and the brain is sufficient in and of itself. The interactionist position (Chalmers et al.) is that information as such has some ontological standing in this, and a soul or equivalent is not ruled out. The dualist position is that the soul exists and occupies the body during one's lifetime.
Sorry to hear about what happened to your family member. And this is where the efforts of medical neuroscience need to be focused: on finding ways to repair damaged brain tissue and regrow lost brain tissue, so that people who have suffered injuries or relevant illnesses can recover fully. If anything, the Singularitarians and suchlike are causing much diversion and waste of resources that should instead be aimed directly at real medical advances.
I wonder what a computer analog to hormones would be? Lots of chemicals make changes quite apart from synapse structure.
"1) Brain damage such as through physical injury, is known to produce substantial changes in personality and capabilities."
Just because the brain has excess complexity, doesn't mean you can smash major sections of it and still expect it to work.
The vast majority of the human genome is junk or filler, but deleting even one chromosome out of 46 is lethal before birth (exception: missing an X only causes Turner's syndrome).
There is a big difference between "strip down the brain to the basics" and reducing its excess complexity. The latter involves a complete redesign, which requires a *thorough* understanding of the existing functionality.
Here is an analogy, don't take it too literally. Birds fly by flapping their wings using a complex system of bones, muscles, tendons, feathers, nerves etc. Copying that mechanism is not practical. A simplified version needs only a fixed wing and a propeller, and flies just as well (actually better). If you want to duplicate the exact flight abilities of a particular bird, you would have to add back some complexity, but not as much as the bird has.