The Work of Bill Dembski's New Best Buddy: The Law of Devolution (part 1)

As several of my fellow science-bloggers pointed out, William Dembski has written a post at Uncommon Descent extolling an "international coalition of non-religious ID scientists", and wondering how we nasty Darwinists are going to deal with them.

Alas for poor Bill. I'm forced to wonder: is there any purported ID scholar so stupid that Bill won't endorse them? In his eagerness to embrace anyone who supports ID, he didn't bother to actually check who or what he was referencing. This "international coalition" turns out to be a lone uneducated crackpot from Canada who uses his ID beliefs as a justification for running an online sex-toy shop! Several people have written about the organization; I decided to take a look at the "science" that it/he published, in the form of a sloppy paper called "In Search of a Cosmic Super-Law: The Supreme "Second Law" of Devolution".

The paper is a mess. It purports to try to address problems with the second law of thermodynamics in a way that shows that the universe must have a creator. The catch, though, is that his argument for why there's something wrong with the second law is pure rubbish. But I'm getting ahead of myself.

The paper is written in a fairly typical crackpot style. Superficially, it's just a wreck: strange emphasis, random underlining, bizarre formatting. When you read it, it's worse than the formatting: lots of jargon strewn around, most of it dreadfully misused. He desperately wants to appear serious, so he structures his paper around "theorems"... but he doesn't understand what the word theorem means. In his argument, he tries to throw in references to fancy technical ideas that he's heard mentioned in discussions of modern physics, like "the Ricci curvature tensor" - but they're the pseudo-scientific equivalent of name-dropping. He spends lots of time talking about how well his theory matches the math of relativity - but there's really no math at all in the paper - just words, alleging that things work mathematically, but never showing how.

Enough introduction: let's get to the meat. His major section is what he calls "The Physical Incompleteness Theorem". It's basically just another version of something vaguely Dembski-ish: an argument that order can't arise from randomness, therefore there must be something that created order. Same old, same old. But his argument for why is humorous. He quotes Hawking, and then attempts to refute Hawking's argument:

"This (A chaotic boundary condition) would mean that the early universe would have
probably been very chaotic and irregular because there are many more chaotic and
disordered configurations of the universe than there are smooth and ordered ones..."

"If the universe is indeed spatially infinite or if there are infinitely many universes, there
would probably be some large regions somewhere that started out in a smooth and
uniform manner. It is a bit like the well-known horde of monkeys hammering away on
typewriters--most of what they write will be garbage, but very occasionally by pure
chance (randomness) they will type out one of Shakespeare's sonnets(order). Similarly,
in the case of the universe, could it be that we are living in a region that happens by
chance (randomness) to be smooth and uniform?(ordered)"

You will notice that I have placed the words "randomness" and "order" after Hawking's words. I have done this to point out that he is apparently invoking "pure chance" (absence of constraint) as a possible source of constraint. This is, of course, impossible. Just as nothing can be lit by darkness, nothing can be ordered by chance. In the same way that light must come from a source of light, so must order come from a source of order. Such a juxtaposition of opposites is a strong indication of confusion on the part of the author. Let's look deeper...

That's the main part of his argument: light comes from a source of light, order comes from a source of order, order can't possibly come about as a result of randomness, because randomness is a source of disorder, not order. All just blithely asserted. I especially like the part about things that he doesn't understand being a strong indication of confusion on the part of the author.

But what comes next is the funny part.

"Pure chance" is certainly not pure if you only have 26 letters in the alphabet. In such a
finite system the monkey's choices are utterly constrained to the available letters. If the
alphabet contained only one letter, then monkeys would always type "Shakespeare."(and
vice versa) In such a case Hawking would have to change his "very occasionally"
position to "always." While chance is pure over the range of 26 letters it is absolutely
impure outside of that range (27 28 29...)

He goes on at great length with this argument: Given a finite alphabet, one cannot have "pure chance". The use of the alphabet implies structure - and even more, it implies an intelligent agent who defined that structure.

One of the things that we can learn from this is that Mr. Brookfield doesn't actually understand what "pure chance" means. It's simply idiotic to assert that one cannot have a string of letters which is generated by "pure chance". Depending on context, "pure chance" could mean two different things: one is the common probabilistic version, and one is the information-theoretic one.

In probabilistic terms, a string generated by "pure chance" would mean, roughly: given a pool of possible outcomes (strings) populated according to a probability distribution, the string was selected from that pool with no information guiding the selection. What makes it pure chance is the probability distribution: the choice is random within that distribution. When talking about strings over an alphabet, most people would probably take "pure chance" to mean a uniform distribution - that is, all possible sequences of characters are equally represented.

By that definition, does the use of an alphabet preclude pure chance? Absolutely not. The things that make it pure chance are the fact that there is a uniform distribution over the possible candidates (so that the distribution isn't skewed towards any particular outcome), and the fact that the selection from the pool was blind - no information was used to guide the selection.

The other possible meaning of a string generated by pure chance is the more information-theoretic approach: viewing the generation of a sequence of characters as a process. In that case, a string generated by pure chance means that as each character is generated, the possible outcomes are equally likely (again assuming the uniform probability distribution as the meaning of "pure chance"). And once again, the fact that we're restricted to an alphabet has absolutely nothing to do with whether or not we're talking about pure chance.
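To make both readings concrete, here's a quick Python sketch (purely my own illustration of the definitions above; nothing in Brookfield's paper looks anything like this). It generates a string by "pure chance" over the 26-letter alphabet, in the process sense: each character is drawn independently and uniformly.

```python
import random
import string

ALPHABET = string.ascii_lowercase  # a fixed, finite alphabet of 26 letters

def pure_chance_string(length, rng=random):
    """Draw each character independently and uniformly at random from ALPHABET."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

# Restricting ourselves to a-z does nothing to make this less "pure chance":
# every string of the given length is exactly as likely as every other.
print(pure_chance_string(20))
```

The restriction to a-z is just the sample space; it doesn't smuggle in any information about which string comes out.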

The only way that the restriction to an alphabet makes a difference is if we're trying to measure the quantity of information represented by the string. And then the alphabet matters - but only in a way unimportant for Brookfield's argument: the larger the alphabet, the more information contained in a random string of characters. But the larger alphabet - even an infinite alphabet - doesn't affect whether or not a string is generated as a result of pure chance.
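For what it's worth, here's the one place the alphabet size does show up - a back-of-the-envelope calculation (mine, not Brookfield's) of the information carried per character of a uniformly random string:

```python
import math

# Information per character of a uniformly random string is log2(alphabet size).
for size in (1, 2, 26, 1000):
    print(f"alphabet of size {size:>4}: {math.log2(size):5.2f} bits per character")

# An alphabet of size 1 carries 0 bits per character: the monkeys "always type
# Shakespeare" only because there is nothing left to choose.
```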

If you look at Brookfield's argument in detail, in fact, he's making a type error. He's arguing that because the "alphabet" isn't the set of natural numbers (or perhaps integers), the string can't be pure chance. If we were talking about all possible strings of natural numbers, and then we restricted the outcome to random strings of 1 through 26, then we wouldn't be looking at pure chance over the naturals - because we would have altered the probability distribution over the naturals, restricting it to a uniform distribution over the range 1-26, with a probability of 0 for anything else.

And that's just the most trivial part of his argument - the part that's closest to being correct. The bulk of his argument is the statement "This is, of course, impossible. Just as nothing can be lit by darkness, nothing can be ordered by chance". That argument attempts to refute Hawking, who argues that given an infinite space, with an infinite number of possible configurations, most would be highly chaotic, but some would be uniform, and that our universe is the result of one of those relatively uniform configurations. The entire use of "order" is a fabrication - and the entire argument based on it is wretchedly bad.

If you look at Hawking's argument, and translate it into Brookfield's terms: suppose you have an infinite number of infinite sequences of numbers. Most of those sequences will be chaotic - there won't be any discernible patterns or structures. But within that infinite set, there are some sequences that are monotonically increasing; there will be some that consist of lists of increasing subsequences. Most won't - but some will. Brookfield is arguing that in randomly generated sequences - truly random ones - you can't get an ordered pattern, ever.
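If you want to see the point empirically, here's a small simulation (my own sketch, not anything Hawking or Brookfield wrote): generate a pile of genuinely random sequences and count how many of them contain an "ordered" pattern - say, a strictly increasing run of five values.

```python
import random

def has_increasing_run(seq, run_length=5):
    """True if seq contains a strictly increasing run of the given length."""
    run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur > prev else 1
        if run >= run_length:
            return True
    return False

rng = random.Random(42)
trials = 10_000
hits = sum(has_increasing_run([rng.randint(1, 100) for _ in range(200)])
           for _ in range(trials))
print(f"{hits} of {trials} purely random sequences contained an increasing run of 5")
```

Most of each random sequence doesn't look like much of anything, but plenty of the sequences contain that little pocket of order - which is all that Hawking's monkeys-and-sonnets analogy claims.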

From this awful beginning, Brookfield tries to build a theory. He moves on to create his own grand "Brookfield uncertainty principle" for finite thermodynamic systems, which is just a crackpot restatement of the muddle above: "For any given finite system, there shall be uncertainty as to the source of order if one has to base one's conclusion solely on information gleaned from within that finite system." In other words, order can only come from order, so there must be a source of order; in a finite system, that source of order must be outside the system, so you can't see the source from the inside. Once again, just babbling on the same bogus riff: order can't be the result of randomness.

Next he creates the "Brookfield absolute certainty principle for infinite systems", which is really just the same thing as his so-called uncertainty principle, with an additional bit of muddled nonsense. This one says that in an infinite system, the uncertainty is eliminated. In the finite case, there's an uncertainty as to the source of order in the finite system - because the finite system is part of some larger system, and so the ordering agent may be part of that larger system. In the infinite case, that uncertainty disappears, because the infinite system eliminates any source of uncertainty: therefore, you know absolutely what the source of order is: it's something outside of the system.

I look at these and say that they're just pure silliness - and that they're actually contradictory. In the finite case, you don't know where the order comes from, because it's something outside of the system. In the infinite case, you know where the order comes from, because it's something outside of the system. But when it's finite, it's uncertain - I suppose because the source of order is outside the finite system, but it could be inside an enclosing infinite system, or outside of an enclosing infinite system. But either way, it's totally bogus: it's still based on that awful "order can't come from randomness".

The next part exhibits one of the typical hallmarks of the crackpot. As I always say: the worst math is no math. The next minisection is titled "Proof that the phase space of any universe governed by Einstein's field equations is infinite". The contents of that are "See these other references", followed by "These proofs highlight Hawking's error". This is the pattern that he follows repeatedly throughout the paper: throwing out references to mathematical or scientific terms that he doesn't understand, and foisting the work off onto other people.

And that brings us to the end of part 1 of his paper. Part 2 is funny enough to deserve a fisk of its own.


(Giggle-giggle.)

Big Bad Bill says, " Just as nothing can be lit by darkness ..."

Hmm. He has never seen shadow puppets?

Have his parents kept him locked in the cellar his whole life?

He would have to think that disorder cannot arise from order.

So I'm guessing he's never made fudge.

By Gork (again) (not verified) on 19 Jun 2007 #permalink

There seems to be a pattern in ID thought:

1) Use large, undefined, concepts to make broad claims.

2) Avoid the details, including the math. Particularly avoid understanding how the math actually works, so you don't have to encounter any serious problems.

3) Use other people's work rhetorically, to give the impression of dialog, but never engage it seriously.

It's a good recipe, if you're only preaching to the choir.

I just wonder if they secretly know that's what they're doing--somewhere deep down, that they don't even admit to themselves.

I wonder how long until, in his quest to define supporters, Dembski winds up referring to Transcendental Meditation or one of its shell groups as "non-religious". After all, it's all just quantums these days, and their argumentation is about as scientific as this cargo cult information theory you quote.

Or how about Scientology, for that matter? They're historically evolution dissenters, and historically their approach has been not to talk about such things as faith or prayer, but rather to approach their practices as "technology" and "scientific principles", with vocabulary tuned to match. Since vocabulary which makes explicit reference to science concepts but a lack of vocabulary which makes explicit reference to religious concepts appears to be all Dembski requires to consider someone a "non-religious ID supporter", this ought to be enough. Really, all that remains is to go through and replace all this problematic wording about "thetans" with something more like "non-material causal agents" and it sounds like something Dembski would yum right up.

There's a microscopic grain of truth in:

"... there are many more chaotic and disordered configurations of the universe than there are smooth and ordered ones..."

Doctor S*x-T*ys might, at this point, have dropped the term "Boltzmann Brains" into the mound of garbage. That would have required explaining what Boltzmann actually said, which would not be easy, as one can lead a horse to water, etcetera.

It appears that crackpots are, as vermin, spontaneously generated out of mounds of dirty linen resulting from application of s*x toys, and not (as widely believed) the products of biogenesis and evolution.

[asterisks to bypass filtering which prevented preview and posting previously]

Since this is a math blog, it might be worthwhile, in all of this "order cannot arise from chaos" gobbledegook, to drop a reference to Ramsey theory and similar extremal theories, which state the exact opposite: in a large enough system, small-scale order and structure are not only possible but in fact inevitable. In addition, the probabilistic approach to Ramsey-type problems demonstrates that an even larger variety of types of order and structure are, if not inevitable, at least very likely to occur out of pure randomness.
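For instance, here's a quick empirical illustration of the Erdős–Szekeres flavour of this (a sketch, not a proof): any sequence of n²+1 distinct numbers must contain a monotone subsequence of length n+1, no matter how randomly you shuffle it.

```python
import random
from bisect import bisect_left

def longest_increasing(seq):
    """Length of the longest strictly increasing subsequence (patience sorting)."""
    piles = []
    for x in seq:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

def longest_monotone(seq):
    return max(longest_increasing(seq), longest_increasing([-x for x in seq]))

rng = random.Random(0)
n = 10
for _ in range(1000):
    perm = rng.sample(range(n * n + 1), n * n + 1)  # 101 distinct numbers in random order
    assert longest_monotone(perm) >= n + 1  # guaranteed by Erdős–Szekeres
print("every random shuffle of 101 numbers contained a monotone subsequence of length 11")
```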

"For any given finite system, there shall be uncertainty as to the source of order if one has to base one's conclusion solely on information gleaned from within that finite system."

You can go on to talk about how this is an abuse of set theory....

I feel I should add this, even if it's not directly related to the arguments of the article:
The theory, that our universe is a random fluctuation of order in a great ocean of chaos, is very unlikely. Just imagine two Universes: one with all these galaxies we have, and another with only a single galaxy in an ocean of chaos. Clearly, the latter is much more probable as a result of a random fluctuation. So, either we must be very, very lucky to have all those galaxies in the sky... or, it's not a random fluctuation.
AFAIK, the current explanation is that, during the superinflation phase, the universe smoothed out to a nearly uniform density of matter, and when gravity became the dominant force, it started to form galaxies and stars.
The question is: what started the superinflation, and why is the Universe so big? Honestly, I don't think scientists have an answer ready.

The theory, that our universe is a random fluctuation of order in a great ocean of chaos, leads to stranger notions, which is why I mentioned Boltzmann Brains.

Just imagine two Universes: one with only a single galaxy in an ocean of chaos, and one with just Earth, and everyone hallucinating that they see other galaxies.

Clearly, the latter is much more probable as a result of a random fluctuation.

Really, these are not probability/cosmology arguments. They are philosophical games akin to Bertrand Russell's notion, which created a back-and-forth among theologians that still continues in the Creationist bizarro world.

Reverend Canon Brian Hebblethwaite, for example, preached against Bertrand Russell's projection of Gosse's concept:

Bertrand Russell wrote, in The Analysis of Mind: "there is no logical impossibility in the hypothesis that the world sprang into being five minutes ago, exactly as it then was, with a population that 'remembered' a wholly unreal past." But that, like much of what Russell wrote and said, is nonsense. 'Human beings', posited in being five minutes ago with built-in 'memory' traces, would not be human beings. The suggestion is logically incoherent.[Reverend Canon Brian Hebblethwaite, In Defence of Christianity, 6 March 2005]

The basis for Hebblethwaite's objection, however, is the presumption of a God that would not deceive us about our very humanity. But this takes us back to the more radically skeptical Descartes, who considered that Satan might very well be deceiving us about our very humanity, and feeding false sense impressions into our minds.

Again, this is, in my opinion, NOT science at all, but some sort of Theophysics and Theomathematics.

Those of us who consider ourselves rational (whether we are or not) might well realize the theological and philosophical context in which some others argue, regardless of whether they speak or write in deep Scientism (the trappings and vocabulary of science, without the substance).

In my opinion, MarkCC has done a very good job of walking this tightrope.

Glancing through that "paper", I found it rather humorous with its "sparticle physics" (I am a Sparticle), the "uh oh", "something is highly irregular?", "indeed you say." I find it interesting how he attributes structure to guessing a letter from an alphabet like {a ... z}. Sure, one can say structure exists in such a case, but by the same token one can say the empty set has some degree of structure strictly greater than 0 since it has a definition. With that sort of definition literally everything has structure, including sequences like kajdfkai, in which case the distinction between something structured and something disordered dissolves. In such a case, even if we assumed his pseudo-hack sophistry correct, the distinction between "devolution" and "evolution" would dissolve, and with that his argument collapses.

i just noticed something even more disappointing about that paper excerpt... the author isn't even quoting any of Hawking's actual papers - he's quoting A Brief History of Time. and he's misunderstanding even that. what the hell's wrong with this guy?

i kinda feel sorry for Demby.

Lepht

I stopped reading closely after the first sentence: The ability of the Generalized Second Law to withstand singularity.

Right: either we don't have a full description, so we have a singularity - but since we don't have the full description, we generally don't know how entropy applies - or we have a full description, presumably without singularities, and entropy is applicable.

That said, as usual I look forward to the fisking. I assume Brookfield's idiocy on 2LOT and evolution will figure prominently.

I suppose because the source of order is outside the finite system

Actually I think his problem may be that he thinks he can compare probabilities over volumes in infinite spaces, something that physicists still struggle with in spacetimes much more structured than the one he experiences. :-)

But it is ironic, since I believe Penrose and Brookfield's reference Galloway have been able to infer that a closed, but infinite, de Sitter space has an overall entropy. (I think in this particular case in spite of its initial singularity, which is presumably what Brookfield refers to when I stopped reading.)

Which goes against both Galloway's #2 and #3 principles. In #2 he asserts implicitly that entropy is an inadequate description for all infinite spacetimes. And in #3 he asserts that specifically Hawking's entropy description is an inadequate description for all infinite spacetimes.

the author isn't even quoting any of Hawking's actual papers

Yes, and much as I like Hawking's results on semiclassical singularities such as black holes and his no-boundary initial singularities for spacetimes, I feel that I can level the same criticism against his general cosmologies as I have for Tipler's on this blog: they look like improbable houses of cards. I wouldn't make such a beef about any brief popular description of his.

For example, the Boltzmann Brain/Ramsey theory probabilities are used for all sorts of initial conditions of our patch of spacetime, not just Hawking's. And part of Hawking's description comes back to the mentioned problems of comparing probabilities over volumes in infinite spaces.

By Torbjörn Lars… (not verified) on 19 Jun 2007 #permalink

this is, in my opinion, NOT science at all, but some sort of Theophysics and Theomathematics.

Agreed. What is interesting is when CS guys like Scott Aaronsson can take such ideas, for example Tegmark's probabilities of clones in infinite spaces, and make other physicists consider them as conditions on speculative anthropic and environmental principles in string landscapes and eternal inflation.

To wit, "No application of the Anthropic Principle can be valid, if its validity would give us a means to solve NP-complete problems in polynomial time."

Of course, most physicists probably think anthropic principles are Theophysics. :-)

By Torbjörn Lars… (not verified) on 19 Jun 2007 #permalink

Very interesting comments by Torbjörn Larsson.

What I predict is:

Scott Aaronson ("Scott Aaronsson" has an extra s) takes the Science Fiction course taught every other semester at MIT, now that Scott has accepted an MIT professorship.

He strikes up a conversation with the SF teacher -- Joe Haldeman -- about the following [from Wikipedia on "Omphalos (theology)"].

Jorge Luis Borges, in his 1940 work, Tlön, Uqbar, Orbis Tertius, describes a fictional world in which some essentially follow as a religious belief a philosophy much like Russell's discussion on the logical extreme of Gosse's theory:

"One of the schools of Tlön goes so far as to negate time: it reasons that the present is indefinite, that the future has no reality other than as a present hope, the past none other than present memory."

Borges had earlier written an essay, "The Creation and P. H. Gosse" that explored the rejection of Gosse's Omphalos. Borges argued that its unpopularity stemmed from Gosse's explicit (if inadvertent) outlining of what Borges characterized as absurdities in the Genesis story.

So Scott Aaronson writes a story about Tegmark's ideas, in the style of Borges and Haldeman. This wins the Hugo Award and the Nebula Award. Harlan Ellison writes the screenplay. Peter Jackson directs.

Hey, I can wish, can't I?

"Scott Aaronsson" has an extra s

Oops, I swedified him.

Well, it could be worse - now he can have access to tall blond girls and tasty beer all he wants, and all it cost him was a corny accent.

Hey, I can wish, can't I?

As long as we wish, I would like to have special effects by Richard Taylor.

By Torbjörn Lars… (not verified) on 19 Jun 2007 #permalink

Thanks for the link, Torbjörn (#17). As usual for Scott, it's instructive and amusing. Sample quote:

"[W]hen I talked before about computational complexity, I forgot to tell you that there's at least one foolproof way to solve NP-complete problems in polynomial time. The method is this: first guess a solution at random, say by measuring electron spins. Then, if the solution is wrong, kill yourself! If you accept the many-worlds interpretation of quantum mechanics, then there's certainly some branch of the wavefunction where you guessed right, and that's the only branch where you're around to ask whether you guessed right! It's a wonder more people don't try this."

I think at least part of Brookfield's paper is correct. He says the energy potential of his Topological Devolution "looks and behaves suspiciously like the Ricci curvature tensor of General Relativity." I agree if anything Brookfield came up with behaved like the Ricci curvature tensor, it *would* be suspicious.

What a shame that Brookfield didn't beat Richard Hamilton to the punch on inventing the Ricci flow equation to redefine thermodynamics with The Supreme Second Law of Devolution. Why, he might have won the Fields medal which Perelman declined. And then we'd HAVE to take ID seriously as Mathematics.

Ricci curvature also appears in the Ricci flow equation, where a time-dependent Riemannian metric is deformed in the direction of minus its Ricci curvature.

This system of partial differential equations is a non-linear analog of the heat equation, first introduced by Richard Hamilton in the early 1980s. Since heat tends to spread through a solid until the body reaches an equilibrium state of constant temperature, Ricci flow may be hoped to produce an equilibrium geometry for a manifold for which the Ricci curvature is constant. Recent contributions to the subject due to Grigori Perelman now seem to show that this program works well enough in dimension three to lead to a complete classification of compact 3-manifolds, along lines first conjectured by William Thurston in the 1970s.

If a million monkeys on a 3-manifold watch Hawking on Star Trek or the Simpsons while randomly pulling Scrabble tiles out a hat to spell: "Proof that the phase space of any universe governed by Einstein's field equations is infinite..." then God exists.

And, ooooh baby, when you power on that s*x t*y, it gets me hot and makes my Ricci flow...

If you argue that an ordered sequence can never be generated by a random process, then you're arguing that you will NEVER get ten heads in a row, no matter how often you flip ten coins in a row. Actual math says that'll happen about one time in a thousand. So his entire argument is isomorphous to the Gambler's Fallacy ("It can't come up heads again, it must come up tails!"). I think an argument from the Gambler's Fallacy can safely be circular-filed without further analysis.
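(If anyone wants to check that figure, a quick simulation will do it - this is just the obvious sketch, assuming a fair coin:)

```python
import random

rng = random.Random(1)
trials = 1_000_000
# One "trial" is ten fair coin flips; count how often all ten come up heads.
hits = sum(all(rng.random() < 0.5 for _ in range(10)) for _ in range(trials))
print(f"{hits} all-heads runs out of {trials}; expected about {trials // 1024}")
```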

By Stephen Wells (not verified) on 20 Jun 2007 #permalink

Mark CC wrote, "In that case, a string generated by pure chance means that as each character is generated, the possible outcomes are equally likely (again assuming the uniform probability distribution as the meaning of "pure chance")."

I know I'm not a mathematician, but it looks to me that a uniform probability distribution is itself a type of order?

Not the results of the distribution, that's random, but the distribution function itself is not.

I may be making a mistake somewhere.

"I know I'm not a mathematician, but it looks to me that a uniform probability distribution is itself a type of order?"

Well, you can say such in a sense, much like how I said that you can say that "structure" exists in such a case. But we use words differently than normal here. Almost always you'll find people calling such "disordered" or "pure chance." Maybe looking at it the following way helps. Let's say that we have a distribution like {.1 at a, .7 at b, .2 at c}. Not everything works out as evenly dispersed. We have a greater concentration or order at b than at any other point. Let's say we also have a distribution like {1/3 at a, 1/3 at b, 1/3 at c}. We have no point with a greater concentration or order than any other point. It lacks a greatest concentration or order. So, it qualifies as disordered. Our second distribution works out as more disordered than the first distribution. The second distribution will also have more disorder than any other distribution.

So, there does exist a "pattern" there, but there doesn't exist an "order" you might say. I don't think you made a mistake Flex. You just thought in terms very different from how science or math usually treats them and basically didn't recognize such a difference.
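If it helps, here's the same point with Shannon entropy as the usual numerical measure of "disorder" in a distribution (my framing of it, anyway):

```python
from math import log2

def entropy(dist):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

skewed = [0.1, 0.7, 0.2]       # the first distribution above
uniform = [1/3, 1/3, 1/3]      # the second distribution above

print(f"skewed:  {entropy(skewed):.3f} bits")   # about 1.157 bits
print(f"uniform: {entropy(uniform):.3f} bits")  # log2(3), about 1.585 bits - the maximum for 3 outcomes
```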

"So, there does exist a "pattern" there, but there doesn't exist an "order" you might say. I don't think you made a mistake Flex. You just thought in terms very different from how science or math usually treats them and basically didn't recognize such a difference."

Yeah -- it isn't a mistake as such, but it does illustrate how careful you have to be with the contexts in which these words are used, because they can change meaning in subtle but dramatic ways.

Having said which, I suspect that a more pathological form of this is what leads to the sort of nonsense that Mark is so rightly pulling apart in his post; our crackpot isn't careful about context or meaning, and what's more, he doesn't seem to *care*.

By Iorwerth Thomas (not verified) on 20 Jun 2007 #permalink

As long as we wish.

More tiredness. I meant as long as we are making wishes.

Sample quote

Yes, that is basically the part that connects back to Tegmark's many worlds quantum speculations and his clones. (Which, I think, in this case also can be situated classically in our universe, if it is large enough. Either way, it's a gamble. :-)

it looks to me that a uniform probability distribution is itself a type of order

Yes, in the necessary and sufficient sense of physical processes. Distributions can be described, and can even evolve, deterministically.

This is btw the way QM combines randomness and determinism. The QM wave function description of the probability distribution evolves causally between interactions with other systems ("observations"), which result in realizing one of the possible outcomes.

One of my pet peeves is that one must define "random" in each case, since there are so many possible interpretations - naive equi-probability, general stochastic, unpredictable, chaos, unordered, .... (As evidenced by Brookfield's confusions.)

In QM "randomness" means unpredictability, while determinism for once seems to mean predictability outside special cases of measure zero. (Since QM systems, in contrast to classical systems, basically evolve linearly and doesn't seem to exhibit classical deterministic unpredictability in subsequent chaos.)

By Torbjörn Lars… (not verified) on 20 Jun 2007 #permalink

his entire argument is isomorphous to the Gambler's Fallacy

Yep - first thing I thought was "I'd love to play this guy at poker".

Thanks Doug, Iorwerth, and Torbjörn,

Apparently, whether something is called "ordered" or "random" may well depend on how these terms are defined for the function being examined.

In Doug's examples, his first probability example "{.1 at a, .7 at b, .2 at c}" results in a distribution concentration at point b, and shows "order" when looking at the resulting distribution. But the probability function itself may have been generated from a somewhat random process - in this case it was likely Doug's mind (no insult intended).

While in the second example, the probability function "{1/3 at a, 1/3 at b, 1/3 at c}" itself demonstrates an easily described "order" (all probabilities are 1/3), but the results generated from that probability function are random.

These are simple examples, but doesn't Dembski's idea of being able to determine design by looking for 'order' collapse here as well? 'Random' results can be generated by easily modeled functions, while 'ordered' results can be generated by very difficult-to-model (and apparently random) functions.

If you are looking for 'order' as evidence of design, and you can find apparently 'ordered' results from random functions, your logic is faulty.
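(As a concrete example of the first half of that - an "apparently random" stream produced by a trivially simple, fully deterministic rule - here's a bare-bones linear congruential generator; the constants are the standard Numerical Recipes ones, nothing special:)

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A linear congruential generator: simple, deterministic, random-looking output."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scaled into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(8)])  # looks like noise; is pure "order"
```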

As I understand Dembski's argument (short form):

Major Premise: Order is evidence of Design
Minor Premise: Design in Nature is evidence of a deity
Conclusion: Order in Nature is evidence of a deity

Of course, he hasn't shown his major premise is true. And the above examples suggest that his major premise is false.

Flex:

Dembski's argument isn't that order, in and of itself, is sufficient to argue for intelligent design. He argues something a bit more subtle, but which is even more bogus when you cut through the obfuscation.

He argues that you can conclude design from two things: complexity (which he claims to mean in the information theoretic sense), and specification (which he's careful to be imprecise about, but he claims has information theoretic meaning. Specification implies a kind of ordered structure, for some definition of "order" and "structure"). If you find *both* specification and complexity, then according to Dembski, you can conclude design.

The giant gaping flaw in Dembski's argument is the meaning of specification. When reduced to information theoretic terms, "specification" and complexity are opposites. Specification precludes complexity; complexity precludes specification. It's meaningless - which is why Dembski is so careful to always hedge and wave his hands when pressed for a strict definition of specification.
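One crude way to see the tension, using compressibility as a rough stand-in for the information-theoretic notions (my analogy here, not Dembski's definitions): a highly patterned, "specified" string compresses well, which is to say it has low complexity; a high-complexity random string compresses far less. You can't max out both at once.

```python
import random
import string
import zlib

patterned = ("to be or not to be " * 100).encode()  # highly "specified"
noisy = "".join(random.choice(string.ascii_lowercase + " ")
                for _ in range(len(patterned))).encode()  # high complexity

for name, data in (("patterned", patterned), ("noisy", noisy)):
    print(f"{name}: {len(data)} bytes -> {len(zlib.compress(data))} bytes compressed")
```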

Thanks Mark,

I know you've covered it in detail before (as I remembered as I read your response), and I appreciate you taking the time to re-iterate.

However, I think there is some value in having a simplified (if somewhat inaccurate) version of Dembski's arguments when discussions arise with people who think there may be some truth in what Dembski says. For at the bottom, I think Dembski (and a lot of creationists) believe in design simply out of incredulity. Something looks designed, so it must have a designer.

You see Dembski's error in information-theory terms, where 'specification' and complexity are opposites. But Dembski isn't really using information theory (regardless of what he claims); Dembski wants to assume, like many others, that 'specification' (or order) and complexity are orthogonal.

If Dembski wants to make 'perceived order' and complexity orthogonal, he can't use information theory. Of course, by setting 'perceived order' and complexity along two orthogonal axes, Dembski can claim that in quadrant I there is both order and complexity. Then, if something in nature is determined to be in quadrant I, it must have been designed. Design implies .. blah .. blah .. God.

Beyond all the hand-waving, I think this is what Dembski is really trying to say. Of course I read his books with a very different background than you. I'm just a simple electrical engineer and I didn't get much information theory in my education. Which means I can't pick apart his misuse of information theory like you can (which is why I enjoy reading this blog).

Yet, what I got out of his books is his assumption that order and complexity are orthogonal, and that the only way to have both 'ordered' and complex systems are when they are designed (blah .. blah .. blah .. god).

Of course if perceived order can arise randomly, then the entire edifice collapses.

Which means Doug's random probability function can be used to show how perceived order can arise from a random process.

It's not information theory, and it's not an entirely accurate representation of Dembski's argument, but it may be an easier way to show people why Dembski is wrong.

Thanks again.

Who would have thunk that Conservapaedia would have an entry on the "Generalized Second Law"?

The Generalized Second Law of Thermodynamics was authored by William Brookfield and published on ISCID in response to the (generally accepted notion) that the conventional Second Law of Thermodynamics will sometimes admit local violations of the entropy increase, as often claimed by proponents of Darwinism. Brookfield was able to prove a much stronger version of the Second Law of Thermodynamics which removed the nonsensical allowance in the conventional Second Law for local entropy increase violations.

It's funny how brazenly they admit that Brookfield's "Law" is not physically true. I mean, if you remove the "nonsensical allowance" for local entropy decreases, you're basically disallowing reality, aren't you?

Well, if you actually *accept* the Brookfield reformulation, then the argument becomes that the definition of entropy in the standard second law is wrong, making the standard statement of the second law invalid. By that argument, what we call local decreases in entropy aren't *really* decreases in entropy; they're just artifacts of an incorrect formulation.

Of course, the problem with that whole argument is that Brookfield's argument is a pile of rubbish which isn't true, and which can't be called a law because it makes no actual precise statement of the quantities that it allegedly discusses, meaning that there is no possible way to test it.