Why do scientific theories work? The inherent problem

In an interesting post, Think Gene poses what they call "the inherent problem" of scientific theories:

The inherent problem of scientific theories is that there exists an infinite equally valid explanations. Why? Because unlike in mathematics, we never have perfect information in science. ...

OK, so our world understanding improves as we verify models, like if the Large Hadron Collider finds the Higgs… right? Theoretically, no. An infinite number of theories that are just as “probable” as the others still exist to be tested. All that was done was eliminate some of the theories. Subtracting anything from infinity is still infinity.

This is an interesting question. Here are my conjectures.

I don't think that the question here has much to do with imperfect information. Even if we had perfect information, there would still be a very large (infinite? I guess so if the data can be continuous) number of compatible models. So, how do we make progress?

I think that while we may be left with an infinite, or at least Very Large, number of possible hypotheses, we still make progress. Here is how:

Suppose at t we have a set of possible models that are consistent with the data. Then we make a finer measurement (say, of c), with a smaller uncertainty. The volume of possible values is reduced. We now have a more accurate and precise value for the speed of light. How is that not progress? Okay, we may have a formal infinity of possible models, but if we take the volume of the metric space of alternatives at t to be 1, then at t+n that volume is reduced. Even if the remaining volume is infinitely divisible, or there are an infinite number of possible models in that metric space, we are less uncertain about the speed of light than we were before.

It doesn't matter that we can come up with an infinite number of models for the remaining metric volume. We have reduced the volume in which we must search for a solution. And to a certain degree of precision, we have a result. At one time we might have thought light travelled instantaneously. Learning that it has a speed tells us something, even if we cannot accurately measure it at first. Learning that it is around 300,000 km/s tells us something (and something we can use in telecommunications). Learning that it is invariant of the speed of the transmitter tells us something, and so on. While this may leave an infinite number of possible metrics at infinite resolution, at ordinary scales it is a definite outcome.
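
The shrinking-volume point can be put in a few lines of code. Here is a minimal sketch, with rough and purely illustrative figures for successive measurements of c (not precise historical values): each interval still contains infinitely many candidate values, but its width keeps dropping.

```python
# A rough sketch of the shrinking-volume point. The figures below are
# purely illustrative stand-ins for successive measurements of the
# speed of light, not precise historical values. Each interval still
# contains infinitely many candidate values, but its width (the
# "volume" left for viable models to hit) keeps dropping.

measurements = [
    ("early astronomical estimate", 220_000.0, 80_000.0),
    ("rotating-mirror era",         299_900.0,    500.0),
    ("modern measurement",          299_792.458,    0.001),
]

first_width = 2 * measurements[0][2]

for label, value, sigma in measurements:
    width = 2 * sigma
    print(f"{label}: c in [{value - sigma:.3f}, {value + sigma:.3f}] km/s; "
          f"volume relative to first estimate = {width / first_width:.1e}")
```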

Explanation is, in my view, still very like the old deductive-nomological account, at least in most sciences. If a model delivers, reliably and consistently and even predictively, results that are close enough to the observed results, depending on the resolution of the assay techniques - that is, if the model hits the right volume of metric space - then we can say the model is explanatory. Yes, there may be an infinite number of alternative models - that is pretty well guaranteed by the properties of model math - but science, like evolution, doesn't consider possible competitors, only actual ones, and that set is greatly restricted by the mere fact that individuals cannot come up with infinite numbers of these models. Science explains relative to competing approaches, not against all possible models.

In a way, this is like trying to count the possible bald men in that doorway; there are an infinite number of possible bald men, but all we can count are the actual observed bald men. As assays are precisified (a horrible word, like "incentivised", needed neither by logic nor English), so too the models that compete must either hit the volume of that smaller metric space or at worst be not inconsistent with it. And that is progress.

So the inherent problem is not so inherent. It is, I think, a trick of the light. If you introduce infinities into any historical process, absurdities result. Yes, logically we can generate an infinite number of slightly different or radically different models that are consistent with the data so far. But logical competitors are not real competitors. If we have several equally empirically adequate models then we seek to find a way to discriminate between them. If one is consistently better, then it gets adopted. This applies also if the model is merely semantic (like "continental plates move around"), awaiting metrication.

The problem known as the Pessimistic Meta-Induction (or the Pessimistic Induction - PI) goes like this:

Every scientific theory of the past has been proven wrong. By induction, every scientific theory we know now will also be proven wrong. Therefore we know nothing [Or, therefore we cannot believe our best scientific theories].

Is the PI worrisome, or is there a way around it? The PI was first proposed by Henri Poincaré at the turn of the twentieth century. It has become a major argument against scientific realism - the view that the objects posited by the theories of science can be said to exist. Antirealists hold that if one is a scientific realist, one has to hold that our best theories refer to real things. But this means either that past scientific theories failed to refer, in which case there's no sense in which our theories are getting more accurate about the world, or that they did refer but were wrong about the details, in which case you end up with a "metaphysical zoo" in which, for example, dephlogisticated air refers to oxygen. Either way, scientific theories then were referentially inadequate, and we have no confidence that our present theories are any better.

There's a lot of literature on this, but I am moved to comment by blogger P. D. Magnus, who has put up a draft paper on the topic.

To start with, I find arguments about the reality of theories burdensome. If one holds that one knows anything about the world, then what one knows is real enough for ordinary purposes. Antirealism and scientific realism are, I think, extremes from an epistemic perspective. But the issue here is whether we do know anything from the success of scientific theories. Or, to put it another way, whether the success of science is good enough to argue that the ontology of scientific theories - the things they posit to exist - is correct.

This resolves to a question of the reference of theoretical objects, and whether theories that are eliminated make the remaining players likely to be true. The inherent problem is therefore the same issue as the PI. How can we take cognitive heart from our best theories (for simplicity here I'm equating theories with models)? The answer is that we have learned things, things that are true no matter what subsequent models or theories we might come to adopt. No cosmology that requires the pre-Cartesian universe, with the stars as points of light on a sphere, is ever going to be viable again. It's been ruled out of contention. We know this, come what may. We know that inheritance involves molecules rather than an élan vital. We know that modern species did not come into existence magically, without prior ancestral species. We know that objects are composed of atoms, and not some undifferentiated goop, and so on. Nothing that contradicts all or most of these knowledge items we now possess is going to be (rationally) adopted by science ever again. As Koestler overstated it, we can add to our knowledge but not subtract from it. [It's not quite right - we can find mistakes, and we can have periods of loss of knowledge, but it'll do for this context.]

So I will once again make a plea for Hull's evolutionary view of science. As theories, along with techniques, protocols, and schools of thought, compete for attention and resources, those that are more empirically adequate will tend to be adopted, reducing the space of possible models. Who cares if it's still infinitely differentiable? We know more than we did.

I'm not sure I really understood exactly what you were getting at. As best I can tell, the idea is that there are an infinite number of theories that are consistent with any particular set of empirical data, and that therefore, we can never make scientific progress, because no matter how many experiments we do, there are still an infinite number of possible theories.

I don't know about the philosophical answer, but it seems to me that the practical answer lies in Occam's Razor. That is, we basically only consider theories that posit the smallest possible number of conceptual elements.

Like in my laboratory, when we have an unexpected experimental result, less mature scientists tend to think up Rube Goldberg type explanations that involve multiple novel unproven conceptual elements. More mature scientists have developed the mental discipline to focus on explanations that posit the fewest possible number of new conceptual elements.

Am I making any sense?
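
[A minimal sketch of the "fewest new conceptual elements" heuristic, added here as an illustration and not part of the comment: one common formalisation is the Akaike Information Criterion, where every extra parameter must buy enough improvement in fit to pay for itself. The data, models, and numbers below are assumptions made up for the example.]

```python
import numpy as np

# A minimal sketch (an added illustration, not the commenter's
# procedure) of the "fewest new conceptual elements" heuristic, using
# the Akaike Information Criterion as a crude stand-in: every extra
# parameter must buy enough improvement in fit to pay for itself.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # data that really is linear

def aic(y_obs, y_pred, k):
    """AIC for a least-squares fit with k free parameters."""
    n = y_obs.size
    rss = np.sum((y_obs - y_pred) ** 2)
    return n * np.log(rss / n) + 2 * k

for degree in (1, 5):  # a simple model vs. a needlessly baroque one
    coeffs = np.polyfit(x, y, degree)
    score = aic(y, np.polyval(coeffs, x), k=degree + 1)
    # with genuinely linear data the degree-1 fit usually wins (lower AIC)
    print(f"degree-{degree} polynomial: AIC = {score:.1f}")
```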

PP, what I say for data holds true for conceptual elements. We need to retain as many of the prior conceptual elements as we can (Bayesian priors, as mentioned in the Think Gene post) and add as few as we must to accommodate empirical data. We reduce the volume of concepts as we proceed, thus making the range of alternatives smaller. Yes, again there are an infinite number of alternatives possible, but the volume is reduced and we now have more precise knowledge of what must be correct.
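
[For what it's worth, here is a small sketch of that Bayesian picture, assuming Gaussian priors and Gaussian measurement errors; it is an added illustration, not anything from the Think Gene post. Each measurement multiplies the prior by a likelihood, and the posterior comes out narrower than the prior, so the volume of parameter values still in play shrinks even though it always contains infinitely many real numbers.]

```python
# A small sketch of Bayesian updating with a conjugate Gaussian prior
# and known observation noise (an added illustration). The posterior
# variance shrinks with every observation: the "volume" of live
# alternatives gets smaller, though it never reaches zero.

def update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update: returns the posterior mean and variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

mean, var = 0.0, 100.0  # a vague prior over some hypothetical parameter
for obs, obs_var in [(3.2, 4.0), (2.8, 1.0), (3.05, 0.25)]:
    mean, var = update(mean, var, obs, obs_var)
    print(f"after observing {obs}: parameter = {mean:.3f} +/- {var ** 0.5:.3f}")
```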

Consider the change from geocentric to heliocentric to the modern view of the solar system. The geocentric view reduced all physics to a set of models in which the earth was the centre of the universe. When that was abandoned, it was replaced by the set of all models in which the relation of the earth to the sun was that of satellite, but left open all elaborations (in which the sun was the only such orbital focus for a planet; in which the sun was one of two, of three, etc...) that were consistent with the observations (such as the phases of Venus, and the interplanetary comet), etc. By including a positive assertion here, it excludes the logical space in which that assertion is regarded as false. This is progress.

Popper, as I recall, tried to run a line of "verisimilitude", in which new hypotheses were more "truthlike". If the true state exists as a coordinate in the conceptual space, then reducing the volume of that space is an increase in verisimilitude in my book.

Dividing infinities by infinities sometimes extricates you from a mathematical morass. Divide the number of possible theories by the total number of possible falsehoods, and perhaps you get some manageable numbers.

Quote: "those that are more empirically adequate will tend to be adopted"

True, but in the case of the Large Hadron Collider, do we wish to find empirical evidence that micro black holes might be safe to create by creating them and observing what happens?

The primary safety argument is currently that micro black holes are expected to evaporate in a fraction of a second, based on the theory of Hawking Radiation, which has never been observed.

It is interesting that Hawking Radiation is called a "discovered" phenomenon, even though it has never been observed and is strongly disputed.

* Dr. Adam D. Helfer, "Do black holes radiate?": "no compelling theoretical case for or against radiation by black holes"

* Dr. William G. Unruh and Prof. Ralf Schützhold, "On the Universality of the Hawking Effect": "Therefore, whether real black holes emit Hawking radiation or not remains an open question"

* Prof. V. A. Belinski, "On the existence of quantum evaporation of a black hole": "the effect [Hawking Radiation] does not exist."

And unfortunately, without this "safety net", the safety of micro black hole creation is very much in doubt. Links to papers available at LHCFacts.org

Have you read the main page at LHCFacts.org?

I am only a humble armchair philosopher and failing scientist. But I think you miss something important (or, at least, I missed it in your writing if it was there): the beautiful and amazing fact that the world and universe and everything is, apparently, understandable. And moreover, not only is it understandable, but it is describable by simple and even elegant mathematics.

The universe could (seemingly) very well have been one in which the laws and behaviors of stuff in the fine details are horrendously complex, following extraordinarily inelegant rules. The beauty is that Occam's razor seems to work: among the infinity of compatible explanations for our data, the simplest and most elegant explanation is probably the one that will prove itself and survive over and over again, while the ugly explanations usually turn out wrong. It need not have been this way.

It is a mystery to me why it is this way. And maybe someday Occam's razor will be shown to be just wrong and misguided. That down deep in quantum physics things will get messier and messier, such that among competing explanations for the data the elegant ones are quickly shown to be wrong over and over again, and it is the ugly monstrosities that become the safe bet. That arbitrary constants and bizarre assumptions are the hallmarks of a good horse to bet on.

I sincerely hope not. I'd be let down. I'd blame it on ourselves, that it is just our human imaginations that haven't found the right mathematics or logic for describing it elegantly.

Note of course that I don't dispute what you said, that we do learn new things as we refine our models. But this is true even if we have the wrong models, or our models are getting "worse". A model can be more precise and yet (relatively, in a way) less accurate. Simple example: true values are A=10, B=20, C=30. Measurements say A=12+/-5, B=18+/-5, C=33+/-5. Model says linear, and predicts A=11, B=22, C=33. New data comes in showing C=30+/-2. The old model is clearly wrong. The new model says A=16, B=16, C=32. Did we learn something? Technically, I suppose so -- the new model fits the data better, and if we care about measuring C, it will make better predictions for C. But it stinks for A and B, and isn't even linear any more.
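
[A quick re-run of those toy numbers, added here as an illustration: the new model agrees with more of the round-two data, yet sits further from the true values - better precision, worse accuracy.]

```python
# A quick re-run of the toy numbers above: check which model predictions
# fall inside the quoted error bars, and how far each model sits from
# the (in practice unknowable) true values.

true = {"A": 10, "B": 20, "C": 30}

round1 = {"A": (12, 5), "B": (18, 5), "C": (33, 5)}   # value, uncertainty
round2 = {**round1, "C": (30, 2)}                     # tighter C measurement

old_model = {"A": 11, "B": 22, "C": 33}   # the linear model
new_model = {"A": 16, "B": 16, "C": 32}   # its replacement

def hits(model, data):
    """Number of predictions lying within the quoted error bars."""
    return sum(abs(model[k] - v) <= u for k, (v, u) in data.items())

def distance_from_truth(model):
    """Summed absolute error against the true values."""
    return sum(abs(model[k] - true[k]) for k in true)

for name, model in [("old", old_model), ("new", new_model)]:
    print(f"{name} model: fits {hits(model, round2)}/3 of the round-2 data, "
          f"total distance from truth = {distance_from_truth(model)}")
# The new model fits the data better (3/3 vs 2/3) while being further
# from the truth (12 vs 6): more precise, less accurate.
```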

I'm not sure I really understood exactly what you were getting at. As best I can tell, the idea is that there are an infinite number of theories that are consistent with any particular set of empirical data, and that therefore, we can never make scientific progress, because no matter how many experiments we do, there are still an infinite number of possible theories.

Let me translate. Whoever came up with the initial idea is a berk.

Yes, there's still an infinitude of possible theories that fit the experimental data - but it's still better, because every one of that remaining infinitude has greater explanatory power: it incorporates the new data.

By Peter Ellis (not verified) on 18 Jun 2008

Hmm, an infinite number of possible theories so scientific progress is impossible...

Reminds me of Zeno who explained to a simple mortal that motion was impossible because there were an infinite number of points between here and there and even if you went half way there was still an infinite number of points that had to be covered. At this point the simple mortal yawned and walked away.

Consider the change from geocentric to heliocentric to the modern view of the solar system. The geocentric view reduced all physics to a set of models in which the earth was the centre of the universe. When that was abandoned, it was replaced by the set of all models in which the relation of the earth to the sun was that of satellite, but left open all elaborations (in which the sun was the only such orbital focus for a planet; in which the sun was one of two, of three, etc...) that were consistent with the observations (such as the phases of Venus, and the interplanetary comet), etc. By including a positive assertion here, it excludes the logical space in which that assertion is regarded as false. This is progress.

Excludes the logical space in which the second view is regarded as false? Yes, but now it includes the logical space in which the first view is regarded as false. Surely it would be progress only if the geocentric view had not originally excluded the logical space in which the heliocentric view is regarded as false?

In other words, it doesn't appear that we have reduced the volume of possible explanations at all. Every view excludes the other views. In going from geocentric to heliocentric, we've just exchanged one set of inclusions/exclusions for another. I'm not seeing how we make net progress in excluding a formerly included view while including the formerly excluded view.

It seems to me that scientific theories are best measured in terms of usefulness. Usefulness to whom? Why, to us humans, of course. And in that, we do make progress.

It is not that theories are necessarily wrong, but rather that they do not offer explanations as general as once hoped. I bet your house was built on a flat earth. (Curvature of the earth is about 0.6 ft/mile.)

By Jim Thomerson (not verified) on 19 Jun 2008

The Proceedings of the conference at which "Complexity in the Paradox of Simplicity" was presented should be published in the next few months. The paper was carved loose from a 100-page draft. The (shorter) paper has also been accepted to Interjournal, the Journal of the New England Complex Systems Institute, as part of a set by overlapping authors, listed at the end of this posting.

There is a VERY extensive literature in the Philosophy of Science as to the centuries of attempts to axiomatize "Occam's razor" (which, incidentally, was not from William of Occam).

On a cursory reading, the admittedly interesting post at Think Gene seems disconnected from this history and literature, and mostly reinvents the wheel. The issues are important enough that this is fine, and the debates are worthwhile, in this and other venues.

The paradoxes are, indeed, at the root of the enterprise of Science; are wildly misunderstood by Creationists and other enemies of Science; profoundly influenced by the changes in infrastructure (especially computers and the internet); and meta-paradoxical in that there is no simple way to determine what criteria and justifications of simplicity there should be.

By Interjournal numbers:

[911] Imaginary Mass, Force, Acceleration, and Momentum (Accepted, Revised). Jonathan Vos Post, Professor Christine M. Carmichael, Andrew Carmichael Post.

[913] The Evolution of Controllability in Enzyme System Dynamics (Accepted, Revised). Jonathan Vos Post.

[1001] Adaptation and Coevolution on an Emergent Global Competitive Landscape (Accepted, Revised). Philip V. Fellman, Jonathan Vos Post, Roxana Wright, Usha Dasarari.

[1696] Complexity, competitive intelligence and the "first mover" advantage (Accepted, Revised). Philip Fellman, Jonathan Vos Post. [Note: despite rewrite by both authors, the paper is still well above the 8-page limit for the Proceedings as such.]

[1013] The Nash Equilibrium Revisited: Chaos and Complexity Hidden in Simplicity (Accepted, Revised). Philip Vos Fellman. [Jonathan Vos Post co-presented this paper at the conference.]

[1709] Path Dependence, Transformation and Convergence - A Mathematical Model of Transition to Market (Accepted, Revised). Roxana Wright, Philip Fellman, Jonathan Vos Post.

[1846] Nash equilibrium and quantum computational complexity (Accepted, Revised). Philip Fellman, Jonathan Vos Post.

[2026] Comparative Quantum Cosmology: Causality, Singularity, and Boundary Conditions (Accepted, Article). Philip Fellman, Jonathan Vos Post, Professor Christine M. Carmichael, Andrew Carmichael Post.

[2029] Quantum Computing and The Nash Bargaining Problem (Accepted, Article). Philip Fellman, Jonathan Vos Post.

[2060] Disrupting Terrorist Networks - A Dynamic Fitness Landscape Approach (Accepted, Article). Philip Fellman, Jonathan Clemens, Roxana Wright, Jonathan Vos Post.

Note also:

[2225] On a finite universe with no beginning or end (Accepted, Article). Peter Lynds. [Jonathan Vos Post reviewed this, moderated the Physical Systems session where it was presented, wrote a preface to the book-length expansion of this article, and was acknowledged in the arXiv version of the paper.]

It seems that a great deal of this anti-/realist epistemological nonsense can be dismissed by taking things on from a pragmatic frame (which, it seems to me, is what you were offering). Whether or not the "objects" (or concepts, if you will) of scientific theories are real, and whether or not the world is understandable (whatever that means), it really comes down to utility for us, the know-ers. Check out Rorty's work (especially Philosophy and the Mirror of Nature) for more on this eminently practical way of avoiding language (which is, in the end, how we come to know anything) like "progress" or "real". Not that these aren't fun conversations, but they tend to widen the perceived gap between scientists and those who talk about what they do and how science works. Terms like "progress" and "real" have serious baggage that accompanies them, making any argument in which they appear a non-starter for lots of discussions.

I have to disagree that the posting was interesting. We've known for almost a century that that's true -- and it's not just a function of imperfect or limited knowledge. It's a mathematical truth -- just ask Goedel. There are infinite axiomatic sets under which any given statement can be true, which is the ultimate source of this little theoretical quandary.

But you can go further than simply sub-dividing the spaces by elimination. You can also show that the entire space has certain qualities -- you turn it into a meta-theory and you really have gained a truth about the world. It's just a fuzzy truth.

We may not know that Maxwell's equations are complete -- but we do know that any further theories we develop must be consistent with them; that's been well worked out, even if there are an infinite number of extensions and variations to them that may someday be developed.

On top of that you have simplicity and usefulness -- you can cut down that space quite a bit, by eliminating baroque and non-productive theories (even if they are consistent). It really seems not to be a practical issue, except that we should stay aware of the limitations of theories and data, and not fall into the trap of "believing" them in a dogmatic fashion. Why should I care about the schools of skepticism, when they do no one any good other than funding philosophers?

Isaac Asimov said it very well: "[W]hen people thought the earth was flat, they were wrong. When people thought the earth was [perfectly] spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."

I guess I'm confused as to what these "antirealists" are trying to do. Are they saying that 1) there is an ontologically "real" world out there (the noumenon), but we can never know exactly what it is, or 2) there is no noumenon? If it's (1), then we have no choice but to concede (without considering pragmatics, which of course is the realist argument). If it's (2), then we can't prove them wrong either, since all experience of reality is ultimately subjective. And even in between (1) and (2), they could always say that new scientific discoveries are not ontologically real until they are discovered (observed), in which case what we perceive as reality is actually defined by observation, and science could never tell us anything about the "true" nature of the universe (if such a thing exists). This model would be experimentally indistinguishable from the realist model.

Steve, I agree that a pragmatist approach is best - I have been a pragmatist for some time. But antirealism/realism is not thereby resolved so much as evaded. What Jeff says is right: the "pomo" approach to science, which declares that we construct not just concepts but reality itself, is a category error in my view (and few if any make exactly that claim if they've thought about it for any length of time). So the antirealist generally claims that all we have to go on are our best theories, and no ontological claims can be made absolutely on the basis of theories.

It's that last move I find interesting. Since science is a fallibilist epistemology, of course we can make no claims absolutely, but realists would agree. However, they would say there remains a relation of truth and falsity between the theory and the (noumenal) world. Structuralist realists think that the overall structure of the theory is what is true, not specific entity classes as such, although I find that hard to grok.

Realism is a motivation for doing science. We are finding out about the world, not making pleasant models like a toy train enthusiast. So I am a pragmatist about scientific claims, but a realist about truth relations. My problem is that I don't know what those relations are, and all I have access to is whether or not a theory works to some degree of precision.

As a pragmatist, I think I can compress the problem by considering any currently successful scientific theory to be an approximation of the set of all valid theories that fit the known information. Almost all theories today have replaced theories that were less accurate and will, in turn, be replaced by newer theories that fit the data better and offer a tighter approximation. I doubt there are any scientists who think, in their particular field of study, that the reigning theory will not be improved or has no room for improvement.

In the silliest, most inanely technical sense, such an improvement proves that the prior theory was wrong, but in a practical sense it merely tells us that we have excluded a large number of other possibilities that were included in the older approximation. Saying that Newton was wrong in his mechanics may be technically true, but would be profoundly misleading. Newtonian mechanics is still roughly correct and highly useful to engineers every day.

By freelunch (not verified) on 20 Jun 2008

I believe that the issue goes deeper than "realist" versus "antirealist."

For one thing, SCIENCE is not a monolith. There is a social consensus enforced in Mathematics for alleged "elegance" in the one-century-old tradition of lemma-proof-theorem-poof format, quite different from the way that Mathematics used to be written and published.

There is a social pattern of "parsimony" in a statistical structure in experimental PHYSICS, which is no longer universal in the theoretical side of that field (i.e. String Theory).

BIOLOGY breaks with these patterns, because Evolution by Natural Selection is not constrained by our social preferences for pattern generation.

Consider this comment from one of your colleagues in ScienceBlogs:

Biology Will Never Be the Same Again: Scott Lanyon

"To me, this is very interesting because it is an example of science swimming upstream against parsimony ... it turns out that the simplest explanation (a Cartesian gene-trait correspondence) is not correct, and a more subtle, nuanced, and complex set of relationships involving hierarchical effects and emergent properties is correct. As a firm disbeliever in the 'Law of Parsimony' as it is usually applied, I like this."

GEOLOGY and ASTRONOMY also have foundationally faced the way that Earth and the Cosmos are not structured in a way that parallels our prejudices about what constitutes a "simple" explanation.

As mentioned in the Wikipedia entry (pretty good introduction) on Occam's Razor:

Occam's razor is not equivalent to the idea that "perfection is simplicity". Albert Einstein probably had this in mind when he wrote in 1933 that "The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience" often paraphrased as "Theories should be as simple as possible, but no simpler." Or even put more simply "make it simple, not simpler". It often happens that the best explanation is much more complicated than the simplest possible explanation because its postulations amount to less of an improbability. Thus the popular rephrasing of the razor - that "the simplest explanation is the best one" - fails to capture the gist of the reason behind it, in that it conflates a rigorous notion of simplicity and ease of human comprehension. The two are obviously correlated, but hardly equivalent.

The field is so complicated and rich in deep analyses over millennia that I find it hard to summarize my 100-page paper, or even the shorter papers cut loose from it.

Consider also radically different approaches to explanation, as in Stephen Wolfram's "A New Kind of Science."

I assert that we should not be slaves to Occam's razor.

From my perspective researching paleontology, I can see that what we think of as processes of evolution, in the sense of "X changed into Y because of Z", can sometimes also be explained as variation within species that is far from deterministic - there could have been a number of other evolutionarily valid changes, and the occurrence of one particular option is not a necessity.

How do we know "there exists an infinite equally valid explanations"? Is the tooth fairy an equally valid explanation as "maybe has to do with electrons"?

"we never have perfect information in science" If we had infinitely imperfect information there would be no science.

"All that was done was eliminate some of the theories." No, not only some but all others.

What is this guy playing at? Sounds like generic postmodern blurb to me.

Structuralist realists think that the overall structure of the theory is what is true, not specific entity classes as such, although I find that hard to grok.

I attended a talk by a Structural Realist and have been trying to get to grips with it ever since.

From what I gathered, they come at it from Quantum Mechanics (or maybe it was just the invitation to speak from the IoP that brought on that line of reasoning).

In QM, two similar objects (say, two electrons) are statistically indistinguishable. That is to say, they have no identity other than that given to them by their variable properties. If you have two electrons and two possible states for them to be in, it makes no difference which of the two is in which state - there is only one possible way for the states to be filled, with one electron in each. There is no distinction between swapping over the two electrons, because then you just end up with an identical situation.

[Sidenote - this revelation is what distinguishes the two new sets of Quantum Mechanical statistics from standard classical statistics. Fermi-Dirac covers Fermions and Bose-Einstein covers Bosons, while Maxwell-Boltzmann covers classical objects.]
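
[A tiny counting sketch of that sidenote, added as an illustration and not from the comment: with 2 particles and 2 single-particle states, the three kinds of statistics count different numbers of distinct arrangements.]

```python
from itertools import product, combinations_with_replacement, combinations

# A tiny counting illustration of the sidenote above: with 2 particles
# and 2 single-particle states, the three kinds of statistics count
# different numbers of distinct arrangements.

states = ["state1", "state2"]

# Maxwell-Boltzmann: classical, labelled particles; swapping them
# produces a genuinely different arrangement.
mb = list(product(states, repeat=2))

# Bose-Einstein: indistinguishable particles, any occupancy allowed;
# only the occupation numbers matter.
be = list(combinations_with_replacement(states, 2))

# Fermi-Dirac: indistinguishable particles, at most one per state
# (Pauli exclusion).
fd = list(combinations(states, 2))

print("Maxwell-Boltzmann:", len(mb), mb)   # 4 arrangements
print("Bose-Einstein:    ", len(be), be)   # 3 arrangements
print("Fermi-Dirac:      ", len(fd), fd)   # 1 arrangement
```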

The Structuralists take this lack of identity to suggest that the objects themselves don't exist. I'm not sure about that particular step. Instead they view them as simply nodes in a structure.

There may be some benefits to physics from such a viewpoint (notably non-local theories of quantum mechanics might offer some insights), but I personally see little difference between objects without identities and nodes with the properties of objects.

By Paul Schofield (not verified) on 24 Jun 2008

Structuralist realists think that the overall structure of the theory is what is true, not specific entity classes as such, although I find that hard to grok.

The Structuralists take this lack of identity to suggest that the objects themselves don't exist. I'm not sure about that particular step. Instead they view them as simply nodes in a structure.

This is what I don't understand. We have this fundamental mindset in science and physicalism that anything interesting (consciousness, biology, chemistry, etc) emerges from lower levels of reality, and is explained in those terms - essentially reductionism. And yet, structural realism (and possibly QM weirdness?) seems to be telling us that the higher level "structure" is ontologically more real than the elements from which they supposedly "emerge". So the question then becomes, is our reality constructed from the "bottom-up" (emergence) or from the "top-down"? Top-down has disturbing implications, such as idealism. And as another side point that I alluded to previously, if objects don't become ontologically real until they are observed, then reality must be constructed at least partially from the top-down.

Well, original Structuralism was a response to the results of constant reductionism making the fundamental objects more and more complex (while older theories turned out to be describing only crude ideas, not 'real' objects at all), and a dodge of the PI mentioned in the original post. Instead of claiming that a theory makes reality claims about the objects that are part of it, they claimed that it makes reality claims about the overarching structure that those objects are only a crude way of describing. Structure is maintained far better between theories than the objects we normally use.

Except not really. Structure changes a lot as well, so you run into the same problems that drive people to pragmatism.

The whole QM side is relatively recent and about the only part of it that makes any sense. Quantum objects are not classical philosophical objects because of that lack of identity, which opens the door for a rehashing of Structuralism.

One of the interesting things is that it does line up with some of the interpretations of quantum mechanics. Notably Bell's theorem forces us to either abandon realism (in a sense) or locality. By assuming some form of structuralism, where the wider structure is what exists rather than isolated particles, you could have some form of non-local hidden variables that allow quantum mechanics to work. I believe there are some groups working on these ideas today.

Then there is the whole thing that has been linked from the sidebar for ages now. An interesting read, but I have heard a lot of skepticism about the area, so I have no idea what to make of it.

By Paul Schofield (not verified) on 25 Jun 2008

Notably Bell's theorem forces us to either abandon realism (in a sense) or locality. By assuming some form of structuralism, where the wider structure is what exists rather than isolated particles, you could have some form of non-local hidden variables that allow quantum mechanics to work.

Then there is the whole thing that has been linked from the sidebar for ages now

I assume you've seen this (realism may be more of a problem than locality):
http://physicsworld.com/cws/article/news/27640

Very interesting. In that Seed article, it seems like either QM is wrong or we're living in a dream world. Tough choice. Zeilinger seems to be advocating some kind of neutral monism (Spinoza) or property dualism (Chalmers). They've also created a new philosophy position (they must really be confused ;)