Way back in 1843, John Stuart Mill wrote this:
When the laws of the original agent cease entirely, and a phenomenon makes its appearance, which, with reference to those laws, is quite heterogeneous; when, for example, two gaseous substances, hydrogen and oxygen, on being brought together, throw off their peculiar properties, and produce the substance called water---in such cases the new fact may be subjected to experimental inquiry, like any other phenomenon; and the elements which are said to compose it may be considered as the mere agents of its production; the conditions on which it depends, the facts which make up its cause.
The effects of the new phenomenon, the properties of water, for instance, are as easily found by experiment as the effects of any other cause. But to discover the cause of it, that is, the particular conjunction of agents from which it results, is often difficult enough. ... many substances, though they can be analysed, cannot by any known artificial means be recompounded. Further, even if we could have ascertained, by the Method of Agreement, that oxygen and hydrogen were both present when water is produced, no experimentation on oxygen and hydrogen separately, no knowledge of their laws, could have enabled us deductively to infer that they would produce water. We require a specific experiment on the two combined.
Mill is in effect saying that the fluidity of water is something we could not have predicted from knowledge of its constituent elements, hydrogen and oxygen. This is nowadays called emergence, though Mill called it heteropathy. Ever since, water has been the philosophical exemplar of emergent properties. But, as often happens, science overtakes philosophy - even metaphysics. A new study has shown that the microstructural properties of water are not only predictable but predictive, based solely on the quantum mechanical physics of atoms. That is, without doing any experiment, one can work out how the properties of liquid water affect, say, protein folding and other molecular processes.
What is most interesting about this is that it supports a claim Alex Rosenberg made back in the early 1990s: that what counts as "emergent" depends largely on the computational limitations of the investigators. Imagine trying to resolve this issue with pen and paper! But with modern computers we can not only model the liquidity of water, but predict what it will do in unobserved situations, with no experiment at all.
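To give a cartoon of the kind of computation involved (a sketch of my own, in reduced units, with a classical Lennard-Jones pair potential standing in for the quantum mechanical forces the actual study computes), here is the skeleton of a molecular dynamics run: specify the micro-level force law, integrate the equations of motion, and bulk behaviour falls out with no experiment in the loop.

    # Cartoon of simulating a liquid from its microphysics: classical molecular
    # dynamics with a Lennard-Jones pair potential (reduced units), integrated
    # by velocity Verlet. Real water predictions use ab initio quantum forces;
    # this shows only the classical skeleton of the method.
    import numpy as np

    rng = np.random.default_rng(1)
    N, L, dt = 27, 5.0, 0.005   # particles, box side, timestep (reduced units)

    def forces(pos):
        """Lennard-Jones forces with periodic (minimum-image) boundaries."""
        f = np.zeros_like(pos)
        for i in range(N - 1):
            d = pos[i] - pos[i + 1:]      # displacements to all later particles
            d -= L * np.round(d / L)      # minimum-image convention
            r2 = (d ** 2).sum(axis=1)
            inv6 = r2 ** -3               # r^-6
            # force on i from j, for the pair potential V = 4(r^-12 - r^-6)
            fij = (24 * inv6 * (2 * inv6 - 1) / r2)[:, None] * d
            f[i] += fij.sum(axis=0)
            f[i + 1:] -= fij              # Newton's third law
        return f

    # Start on a small cubic lattice with small random velocities.
    g = np.linspace(0, L, 4)[:3]
    pos = np.array([(x, y, z) for x in g for y in g for z in g])
    vel = 0.1 * rng.standard_normal((N, 3))

    f = forces(pos)
    for step in range(200):               # velocity-Verlet integration
        vel += 0.5 * dt * f
        pos = (pos + dt * vel) % L
        f = forces(pos)
        vel += 0.5 * dt * f

    print("mean kinetic energy per particle:", 0.5 * (vel ** 2).sum() / N)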
All of this highlights a crucial fallacy often committed by scientists and philosophers alike. I call it the fallacy of reification: the projection of our own cognitive categories and limitations onto the world. On the traditional reductionist account, all higher-level explanations, and all higher-level causal relations, are in fact epiphenomena of actual physical causes. We might diagram it thus:
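    emergent P1  - - - - - ->  emergent P2     (apparent, observer-dependent causation)
         ^                          ^
    micro m1  --------------->  micro m2       (where all the real causal work is done)

(The vertical arrows read "gives rise to".)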
The properties of emergent phenomena are effects of the microlevel phenomena, the physical realities. On this view, emergent phenomena are observer-dependent for their salience: all causation happens at the physical microlevel, and emergent phenomena do not causally interact in their own right. Emergentists, by contrast, hold not only that there is causation between emergent phenomena at the higher level, but that emergent phenomena also "downwardly cause" microlevel phenomena to behave differently.
The fact that we can now compute the so-called "ineliminable" properties of water suggests, inductively, that we should in principle be able to do this for other phenomena; but as things get more complex, the computation gets massively more complex too, and there has to be a physical limit to how much we can compute. Often, all we can do is find that something rather like the emergent phenomenon comes out of a simple toy-world computational model (an example of what I mean follows), and trust that this is warrant enough for concluding that we have a handle on the ontology. So denying for metaphysical reasons that emergent phenomena have any reality of their own doesn't automatically mean that we can stop talking about them, or stop employing the ontologies of the higher-level sciences in explanation.
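The classic toy world of this kind (a minimal sketch of my own, not any particular published model) is Conway's Game of Life: the only "physics" is a local birth-and-death rule applied to each cell, yet a coherent travelling pattern, the glider, emerges, although nothing in the rule mentions gliders.

    # A toy world: Conway's Game of Life on a small toroidal grid. The update
    # rule is purely local, yet a "glider" moves coherently across the grid,
    # reappearing one cell diagonally onward every four steps.
    import numpy as np

    def step(grid):
        """Apply one update of the Life rule on a toroidal grid."""
        # Count each cell's eight neighbours at once by rolling the grid.
        n = sum(np.roll(np.roll(grid, di, 0), dj, 1)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0))
        # A cell is alive next step iff it has 3 neighbours,
        # or has 2 neighbours and is currently alive.
        return ((n == 3) | ((n == 2) & (grid == 1))).astype(int)

    grid = np.zeros((12, 12), dtype=int)
    for i, j in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # a glider
        grid[i, j] = 1

    for t in range(5):
        print(f"t = {t}")
        print("\n".join("".join(".#"[c] for c in row) for row in grid))
        grid = step(grid)

The glider is exactly the sort of higher-level object we cannot help talking about, even though the microlevel rule does all the causal work.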
For example, many biologists object to reducing organism-level phenomena entirely to molecular causes, but the history of biology is against them. Molecular properties do indeed turn out to be crucial, so long as (and this is the important point) we can set up the boundary conditions of those systems appropriately. Often we cannot, either for lack of knowledge or of the ability to measure precisely, or because the complexity is simply intractable.
So what does this mean for higher-level entities? It's not that we must stop talking about them. It's not that they are unreal (in whatever sense biological objects are real; scientific realism is a topic for another day). It's that our explanations in biology are at best rules of thumb if we lack a fuller account of the lower-level phenomena. Ultimately it's all Schrödinger's wave equation and quantum effects, but that is no help to us in dealing with the contingent facts of biology.
Rosenberg has a number of papers on how to be a reductionist, and a book, Darwinian Reductionism: Or, How to Stop Worrying and Love Molecular Biology, which I'm working my way through right now. More on this as it hits me...
This might explain why I've had difficulty understanding precisely what emergent behaviour is: I never thought about it as a result of epistemic surprise.
Excellent. One less thing I'm confused about.
Bob
I think there's more to it than epistemic surprise. There is also the phenomenon of convergence - where radically different underlying lower-level models yield the same higher-level behavior. Perhaps the most celebrated example is the theory of universality in critical phenomena in physics - where such widely disparate phenomena as ferromagnetism, superconductivity, the liquid-vapor transition, and the isotropic-nematic transition in liquid crystals obey the same scaling laws near the critical point. The emergent behavior is determined not by the details of the microscopic models, but by certain general features of how the microscopic parts fit together into the larger picture. (It's noteworthy that condensed-matter theorists such as P. W. Anderson and R. B. Laughlin are among the strongest champions of "emergence" and opponents of naive reductionism in the scientific community. Check out Anderson's 1972 essay, "More Is Different.")
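To make that concrete (a toy sketch of my own, not a serious simulation): a Metropolis Monte Carlo run of the 2D Ising model. Near the exactly known critical temperature Tc = 2/ln(1 + sqrt(2)), about 2.269 in units of J/kB, the magnetization vanishes as (Tc - T)^(1/8), and the exponent 1/8 is shared by every member of the universality class, whatever the microscopic details.

    # Metropolis Monte Carlo for the 2D Ising model (J = kB = 1). The mean
    # magnetization is large below Tc (~2.269) and collapses above it.
    import numpy as np

    rng = np.random.default_rng(0)

    def mean_magnetization(L=32, T=2.0, sweeps=400):
        """Average |magnetization| per spin after discarding equilibration."""
        spins = rng.choice([-1, 1], size=(L, L))
        mags = []
        for sweep in range(sweeps):
            for _ in range(L * L):       # one sweep = L*L attempted flips
                i, j = rng.integers(0, L, size=2)
                # Sum of the four nearest neighbours (periodic boundaries).
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                      spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * nb        # energy cost of the flip
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] *= -1
            if sweep >= sweeps // 2:     # keep only the second half
                mags.append(abs(spins.mean()))
        return np.mean(mags)

    for T in (1.5, 2.0, 2.269, 2.8):
        print(f"T = {T:5.3f}   <|M|> = {mean_magnetization(T=T):.3f}")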
Even there, it is a matter of surprisal. What counts as "convergent similarity" depends mostly on what is salient to the observers. This goes to the question of how formal models (such as phase transition equations) explain and apply to real-world cases, but I think that they are epistemic aids, and the real work is done in finding out how each phenomenon differs from others.
Being pretty agnostic about where science might wind up, I've tended to neglect such grand themes of the philosophy of science as reductionism, but of course I ought not to, and so the following is less an argument (despite its appearance) than a request for clarification (e.g. links, or a Basic). (And apologies for the length of the following, but I'm still a novice.)
For the sake of argument (so to speak) suppose that we (or some aliens) have a materialistic theory that could explain everything about the world (as observed by scientists) in terms of fundamental physics. When you drink a cup of coffee, for example, all those physical movements, and all the chemical reactions involved in that action, would be accommodated.
But what could not be accommodated is the fact that there are such observations (not just interactions), that you taste the coffee, that you feel its warmth (and choose to drink it), that you see (and understand) the data that indicates that reactions take place, etc. Even if all the functioning of the brain had been accommodated by our theory, and even if a plausible story of how such structures could evolve by natural selection was also included, how could there be any place for our awareness of the world, for our individual minds?
Sensory organs could evolve, and a biochemical structure might be selected for acting as though it was an individual (with a social conscience, and religious beliefs etc.), but why would that structure need to have subjective experiences? Not to enhance its fitness, if all its behaviour could be explained in terms of what its neurones do (and thence what its particles do). While its fitness might be enhanced by it behaving (e.g. by it computing neurologically) as though it was an individual, why would it also have to have a mind? Such subjectivity would appear to be entirely superficial. And yet our minds certainly exist, as every scientist (and non-scientist) knows, in an incontrovertibly direct way.
By hypothesis our theory explains all our mental faculties, e.g. our perceptions and thought processes (insofar as they result in anything observable), so what in the world would not be associated with something of the sort of (superficial and ineffective) subjectivity that remains unexplained by our theory?
Would plants have, not minds, and perhaps not even perceptions (as we have them), but some primitive individual feelings? But if so then, since our brains are composed of interacting neurones (which presumably also lack minds like ours), why should a great forest not have something akin to a mind? If our feelings of choosing to drink coffee, for example, are only a superficial companion to biochemical processes in our brains, then why should the biochemical processes of complex ecosystems not also be accompanied by subjective feelings of, for example, choosing to do what occurs?
Why would something like our subjectivity not be associated with everything, and (such subjectivity being superficial and ineffective) every subset of everything? Although we naturally draw a line at the human brain (or the primate, or mammalian, or vertebrate brain), given our social and linguistic evolution, such a line does not seem to be indicated by our materialistic theory. And although our concept of subjectivity may well require that subjects are individuals (the points, of our points of view), whereas our regarding ourselves as individuals might be due to how our brains function (having evolved that way), I'm only talking about something akin to our subjectivity.
And of course, there is one especially complex and yet well-defined individual associated with the physical universe (the subject of our theory). So even if we were to draw a line at the brain, and consider subjectivity to be absent beneath it, still our theory indicates that there would be, above that line, something that was to us more or less as we are to our neurones, something that would know itself to be choosing all that occurs (as we would choose to have a cup of coffee), would be everywhere (if also, being subjective, nowhere) and which might even know everything (why not, since our neurones know nothing?) so that it seems (to me, prima facie) that materialism amounts to an unjustified belief about an existing God!
It is unjustified because we had no scientific reason for supposing that such a materialistic theory was even possible; and it is about God because (i) a way of doing science that was more agnostic about its primitives could accommodate the same scientific observations and (ii) that was indeed the right word because, for example, what you drink out of is a cup even if our concept of a cup is of a classical object in Euclidean space (whereas the object being drunk from is more likely a fuzzy set of wavefunctions in more than three relativistic dimensions).
The motivations of materialism and of atheism being, I imagine, very closely related, I would reject such beliefs as incoherent and choose something more agnostic, such as (to be going on with) Cartesian dualism; but I also presume that my argument is flawed...
...sorry for that long philosophical digression (I've put a slightly tidied-up version of that question on my new blog here)!