Entropy (Basic Concepts)

This post was copied and slightly edited from a post I made a year or so ago at my blog's former location.

More bullshit has been written about entropy than about any other physical quantity.

—Prof. Dave Beeman, 1988

There is a popular-level understanding of entropy as "randomness" or "disorder". This is not a bad way of looking at it, but it brings along some associated concepts that are misleading. Creationists exploit this ambiguity by turning the argument around to information, where, even though ultimately we're talking about the same physical quantity, the implications are much less obvious, precisely because "information" has a common, colloquial meaning in everyday conversation that is different from the entropy-connected (and therefore second-law-of-thermodynamics-connected) definition of information. Much as the term "theory" is misunderstood when talking about science, so is "information".

So, if we are going to strive to be accurate: what is entropy?

Consider a very simple physical system: a row of eight switches. Each switch can either be "on" or "off". (Old-school computer nerds will recognize this as a "byte", but if you don't, don't worry; all you need is the concept of the switches.)

What is the entropy of this system? Well, each switch has two different states it can be in. There are eight switches, so the system as a whole has 2×2×2×2×2×2×2×2 = 256 different states that it can be in. That's a measure of how much randomness there is in the system; each time you come across a set of eight randomly oriented switches, it could be in any one of 256 different configurations. It's also a measure of how much information there is. You could, for instance, assign serial numbers to 256 different objects using this system of switches. You need an alphabet or language that can provide 256 different answers to fully specify the state of this system.
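
If you'd rather count than argue, here's a quick sketch in Python that just brute-forces it (the language is incidental; any loop would do):

    # Brute-force the count: every way to set eight two-position switches.
    from itertools import product

    states = list(product(("off", "on"), repeat=8))
    print(len(states))  # 256, i.e. 2**8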

Now let's take the same system, but impose some order, some structure, upon it. Let's say that the switches must alternate: if one is on, then the next one must be off, and vice versa.

Now how many different states are there that the system could have? Only 2. If the first switch is on, then the second is off, the third is on, the fourth is off, and so forth. If the first switch is off, the second is on, and so forth. You can completely specify the state of the system with an alphabet or language that can provide only two different answers; all you need to do is specify the position of the first (or, really, of any) switch, and you know the state of all the rest.
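
The same brute-force count, with the alternation rule imposed as a filter:

    # Keep only configurations where every adjacent pair of switches differs.
    from itertools import product

    alternating = [s for s in product((0, 1), repeat=8)
                   if all(a != b for a, b in zip(s, s[1:]))]
    print(len(alternating))  # 2
    print(alternating)       # [(0, 1, 0, 1, 0, 1, 0, 1), (1, 0, 1, 0, 1, 0, 1, 0)]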

Up to a multiplicative constant, here's the mathematical definition of the physical quantity of entropy, the thing that the second law of thermodynamics talks about: the number of switches whose positions you need to specify to completely describe the identity, position, and velocity of every particle in a physical system. (For these switches, it's only identity we're worried about: up or down? Position and velocity would require additional information, effectively additional switches or computer memory to keep track of where everything is, but let's keep the situation simple for now by saying that position is fixed and velocity is zero.)

In the case of eight switches which can be set any way, the entropy is (a constant times) 8. In the case of the eight switches constrained to alternate, with only two possible states between them, the entropy is (a constant times) only 1. The entropy of the system which has some structure imposed upon it is lower than the entropy of the system which has less structure imposed upon it.
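
To put rough numbers on that, here's a sketch that turns the state counts into entropies; choosing the constant as 1/ln 2 gives bits (the number of switches you'd need to specify), and choosing Boltzmann's constant gives thermodynamic units:

    # Turn state counts W into entropies S = k * ln(W). Choosing
    # k = 1/ln(2) gives bits; choosing k = k_B gives J/K.
    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    for label, W in (("unconstrained switches", 256), ("alternating switches", 2)):
        print(f"{label}: {math.log2(W):.0f} bits, {k_B * math.log(W):.2e} J/K")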

Here we can see that more "randomness" or "disorder" does mean more entropy. Note, however, that when we talk about this, we talk about the possibilities of where the switches can be; we don't talk about any particular switch alignment. The system of eight uncoupled switches has 256 different configurations it can be in, and thus has a lot more "disorder" or entropy than the system of switches whose positions must alternate.

In this way, a star and planetary system has less entropy than the big randomly mixed gas cloud from which it formed. But how can this be, if the second law of thermodynamics tells us that entropy is always supposed to increase? Well, when that gas cloud collapsed and formed a planetary system, it also released a lot of light- some visible light, but even more infrared light, which sometimes we (not entirely correctly) call "heat". All of those radiated photons carry away a lot of entropy with them; it takes a lot of switches to specify where all of those photons are, and where they are going. The entropy of some system can locally decrease, as long as it is coupled to another system whose entropy can increase. In this case, the stuff that condenses to make the star and planetary system is one system, and the photons released by that stuff are another system. The entropy of the first system can decrease as long as the overall entropy of everything considered together stays the same or increases.

Several kinds of entropy:

(1) Thermodynamic;

(2) Information Theory: Shannon et al.;

(3) Information Theory: Kolmogorov-Chaitin et al.;

(4) Topological.

The connection between (1) and (2) is what the Maxwell's Demon arguments were about. That was resolved by Bennett et al., when reversible-computing thought experiments showed that computing could be done with arbitrarily small energy and entropy per bit.
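
For scale, Landauer's bound -- the per-bit erasure cost that reversible computing evades -- is a one-liner to evaluate (standard constants only):

    # Landauer's bound: erasing one bit at temperature T costs at least
    # k_B * T * ln(2). Reversible computing ducks the bound by not erasing.
    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300.0           # room temperature, K

    print(f"{k_B * T * math.log(2):.2e} J per bit erased")  # ~2.87e-21 J, about 0.018 eV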

I could hotlink to numerous citations, but I don't want the scienceblogs filter to queue this up indefinitely.

So Google the terms above for details. But I tend to agree: "More bullshit has been written about entropy than about any other physical quantity." Especially by people who do not know the four different definitions (and there may be more), and even more especially by people who confuse two of them.

"The entropy of some system can locally decrease, as long as it is coupled to another system whose entropy can increase."

I'm no expert, but I imagine that it could also decrease by chance? Also, I wonder about the entropy of the gas cloud being greater than that of the solar system. As the gas cloud contracts under gravity, that part of space would seem at first to lose entropy without any light being emitted (since there is no star to begin with). But perhaps we could measure the entropy differently, so that the gravitational attractions that cause that apparent loss of entropy are included as "some order, some structure"?

I think that's why he specified infrared light. The gas cloud would 'heat up' as it collapsed. The flaring of the star would just mean that a lot more light is needed to keep the entropy moving outward.

By Weldon MacDonald (not verified) on 09 Mar 2007

But I'm not sure that could be the whole explanation, because to begin with the only 'heat' would be the kinetic energy of the gas particles. For simplicity, consider a hundred rocks widely spaced in a completely empty space. They collapse under gravity, so that they begin to occupy less space, and so have less entropy, but the individual rocks are no hotter because there is no friction in completely empty space.

"They collapse under gravity, so that they begin to occupy less space, and so have less entropy, but the individual rocks are no hotter because there is no friction in completely empty space."

The rocks are 'hotter'. The rocks that have gravitationally collapsed onto each other have less potential energy, having converted it to 'heat'. Jupiter actually gives off more radiation than it receives from the Sun because it is still in the process of gravitationally collapsing.

But I'm not sure that could be the whole explanation, because to begin with the only 'heat' would be the kinetic energy of the gas particles.

That is exactly what we're talking about when we use the term "temperature" -- the kinetic energy of constituent particles.

If the temperature of a gas is higher, the range of momentum states available to its particles increases, which is an increase in entropy.
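
A rough illustration, for a monatomic ideal gas at fixed volume with made-up numbers:

    # Monatomic ideal gas at fixed volume: dS = (3/2) n R ln(T2 / T1).
    # One mole warmed from 300 K to 600 K, numbers picked for illustration.
    import math

    R = 8.314  # gas constant, J/(mol K)
    n, T1, T2 = 1.0, 300.0, 600.0
    print(f"dS = {1.5 * n * R * math.log(T2 / T1):.2f} J/K")  # ~8.64 J/K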

Mike S -- is Jupiter really still gravitationally compressing? I know that the hot core still has energy left over from that, but I didn't think Jupiter was generating much energy any more doing that.

Heat generated by the Earth mostly comes from decay of radioactive elements.

-Rob

I'm no expert, but I imagine that it could also decrease by chance?

No. The particles could move to what appears to be a very ordered state by chance (although for a sufficiently complicated system, the chance is tiny). For example, all of the gas molecules in a room could, by chance, be packed into one corner of the room temporarily. The probability of that is very small, but non-zero. However, the entropy of that system hasn't changed. Entropy has to do with the states that are available, not the states that are actually occupied. The momenta and positions of all the particles packed into one corner of the room would allow for them to refill the room in a wide variety of near-indistinguishable patterns without any need for putting in external energy, so their entropy is the same as it was when they were spread throughout the room.

-Rob

Jonathan--

I do believe I've been insulted....

Obviously, I'm talking mostly about thermodynamic entropy here.

Shannon information is connected to thermodynamic entropy fairly directly, and indeed by using my "switches" example it's pretty transparently information in the sense of computer memory. In this case, information itself doesn't really need to refer to the Second Law, and reversibility and all of that isn't really the main focus; the main focus is just how much information there is. When, for example, the entropy of the system increases as organisms evolve, the extra entropy is all carried off in radiated heat and/or paid for by the energy input from the Sun; there's no need to worry about minimal-entropy-increase computation, because the entropy use is actually fairly profligate.
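
To make that connection explicit, a small sketch: Shannon's formula, evaluated for uniform distributions over the 256 free-switch states and the 2 alternating states, reproduces the 8 bits and 1 bit from the post:

    # Shannon entropy H = -sum(p * log2 p). Uniform over the 256 free-switch
    # states gives 8 bits; uniform over the 2 alternating states gives 1 bit.
    import math

    def shannon_bits(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(shannon_bits([1 / 256] * 256))  # 8.0
    print(shannon_bits([1 / 2] * 2))      # 1.0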

I know very little about the other two kinds of entropy you talk about; I know nothing about topology, other than the whole idea that a donut is the same as a coffee cup, topologically speaking.

I've just begun studying topological entropy myself, but it has to do with the open covers of a topological space, and the properties of a map which acts on that space. Speaking very roughly, it measures the complexity of a dynamical system (see here for the first example I could find).
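
For a toy case you can actually compute: the doubling map x -> 2x mod 1 is known to have topological entropy ln 2, and a crude symbolic-dynamics estimate (word length and sample count below picked arbitrarily) lands close to it:

    # Crude estimate of the topological entropy of the doubling map
    # x -> 2x mod 1 via its binary itineraries: count distinct length-n
    # symbol words and take (1/n) * ln(count). The known answer is ln 2.
    import math
    import random

    def itinerary(x, n):
        word = []
        for _ in range(n):
            word.append(0 if x < 0.5 else 1)
            x = (2.0 * x) % 1.0
        return tuple(word)

    n = 12            # keep words short: floating point loses a bit per step
    samples = 200_000
    words = {itinerary(random.random(), n) for _ in range(samples)}
    print(len(words), "distinct words of", 2 ** n, "possible")
    print(f"estimate: {math.log(len(words)) / n:.3f}  (ln 2 = {math.log(2):.3f})")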

Dear Rob Knop,

"I do believe I've been insulted....Obviously, I'm talking mostly about thermodynamic entropy here."

No insult intended to you. Your blog is a model of care and clarity. I direct it at the anonymous targets of Professor Dave Beeman's 1988 statement.

On other scienceblogs, there has been extensive BS by ID-Creationists who confuse Shannon Entropy with Thermodynamic Entropy. Blake Stacey can tell you more.

As someone who writes quite a bit for online encyclopedias of Math, I'm just erring on the side of caution. When defining the particular meaning of a word that one cares about, it's worth mentioning, and excluding, the other meanings.

Saturn is emitting more heat than expected through a mechanism of helium raining down toward its core.

As to dynamics:

http://arxiv.org/pdf/math.DS/0703167

From: Tom Meyerovitch
Date: Tue, 6 Mar 2007 17:17:22 GMT (45kb)
Date (revised v2): Wed, 7 Mar 2007 15:34:18 GMT

Entropy of cellular automata
Authors: Tom Meyerovitch
Comments: 16 pages, 11 figures
Subj-class: Dynamical Systems
MSC-class: 37B15; 37B40; 37B50

Let X = S^G, where G is a countable group and S is a finite set. A cellular automaton (CA) is an endomorphism T: X -> X (continuous, commuting with the action of G). Shereshevsky (1993) proved that for G = Z^d with d > 1 no CA can be forward expansive, raising the following conjecture: for G = Z^d, d > 1, the topological entropy of any CA is either zero or infinite. Morris and Ward (1998) proved this for linear CAs, leaving the original conjecture open. We show that this conjecture is false, proving that for any d there exists a d-dimensional CA with finite, nonzero topological entropy. We also discuss a measure-theoretic counterpart of this question for measure-preserving CAs. Our main tool is a construction of a CA by Kari (1994).

However, the entropy of that system hasn't changed. Entropy has to do with the states that are available, not the states that are actually occupied

That's a little misleading, I think. It's only the microstates corresponding to the current macrostate of the system which are relevant. But surely the half-empty room is in a different macrostate to the full one, no?

But surely the half-empty room is in a different macrostate to the full one, no?

Yeah, but given that a whole bunch of gas from the rest of the room is about to come rushing into it, it's hardly a closed system. The closed system we're talking about whose entropy is conserved (or goes up) is the entire room.

Just a question about something I have wondered about for years: the first creationist argument I ever personally demolished was the old "shine light on a pile of bricks and you still won't get a house" argument that complexity cannot spontaneously arise. I posited a container of two gases, methane and chlorine, which can remain stable a long time if not perturbed. A few UV photons, however, can start an exothermic reaction which will create a complex mix of four halomethanes, hydrogen chloride, unreacted methane and chlorine, and possibly some free hydrogen. I've often wondered which has the higher entropy, the unreacted mix of two gases or the final mix of eight?

By justawriter (not verified) on 09 Mar 2007

If Creationists can't understand the difference between Shannon entropy and thermodynamic entropy, how can we expect them to understand topological entropy, or that the entropy of a black hole is proportional to the surface area of its event horizon, or that black holes must radiate (Hawking radiation) in order to avoid violating the second law of thermodynamics?

By Wayne McCoy (not verified) on 09 Mar 2007

This discussion reminds me of when my lab partner and I were giving a presentation in college, and the professor asked my partner, "What's temperature?" My partner completely balked—his face went white—and then he launched into this long philosophical/thermodynamical/statistical discussion of different ways of trying to define "temperature." After about two minutes, the professor started laughing and changed the topic... I think he had been looking for a standard "temperature is a measure of the kinetic energy" explanation.

I've often wondered which has the higher entropy, the unreacted mix of two gases or the final mix of eight?

That's the wrong question. I have to admit to not knowing a lot about the reaction itself, but I'm guessing that it's exothermic -- that the UV light merely gets it over the activation energy.

If so, then there is energy generated by the chemical reactions. That will go into heating up the gas -- i.e. making all the molecules randomly move around faster -- and/or into radiated light. If you're going to compare entropy before and after, you have to compare everything -- not just the gases, but also (a) the entropy of the UV light you shined in, and (b) the entropy of any light radiated away.

Just comparing the entropy of the two gases isn't relevant to the second law of Thermo, because the gases aren't an isolated system. You brought energy in -- the UV -- and probably you've got energy radiating away.

As for which set of gases will have more entropy just by themselves: well, at the same temperature, probably the one with more particles. But before anybody jumps on this and says anything about the Second Law of Thermo, please reread my last paragraph. And, in any event, the temperature of the final gas is (at least initially) not likely to be the same as the temperature of the initial gas.
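
If you want a number for just the mixing part, here's a sketch that counts only the ideal entropy of mixing, assumes equimolar species, and ignores all the reaction heat I just warned about:

    # Ideal entropy of mixing per mole, S = -R * sum(x_i * ln x_i), for
    # equimolar mixtures only; reaction heat and radiation are ignored.
    import math

    R = 8.314  # gas constant, J/(mol K)

    def mixing_entropy(n_species):
        x = 1.0 / n_species
        return -R * n_species * x * math.log(x)

    print(f"2 species: {mixing_entropy(2):.2f} J/(mol K)")  # R ln 2 ~ 5.76
    print(f"8 species: {mixing_entropy(8):.2f} J/(mol K)")  # R ln 8 ~ 17.29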

-Rob

Rob Knop said:

Well, when that gas cloud collapsed and formed a planetary system, it also released a lot of light- some visible light, but even more infrared light, which sometimes we (not entirely correctly) call "heat".

Let's leave Quantum Mechanics aside for a moment, so there are no photons at all in the picture. Gravitational collapse will still occur because of Jeans instability. I think that classically there's no equilibrium position for a gravitational system so it's not possible to define an entropy either. Of course, for time lengths less than the time of collapse it's possible to define some approximate notion of entropy.

Of course, once we get quantum mechanical, the real question is not why the thermal equilibrium is not a gas, but why it isn't a black hole, since that's the state of highest entropy.

By Lord Sidious (not verified) on 09 Mar 2007

There's more to physics than the second law of Thermodynamics.... Even if a B.H. is the highest entropy state, you don't expect everything to get there immediately. The second law just tells you that entropy doesn't decrease. It doesn't trump everything else.

The approximate notion of entropy in stellar collapse is a very, very good approximation. The term "quasi-static" applies to a lot of that process.

-Rob

On "More BS..." I got this email from Forrest Bishop:

"I put Relative Motion at the top of the BS chart, as our libraries and ivory towers are choked with 100 years of rubbish on this topic."

As to the Black Hole entropy being proportional to area:

http://www.sciencedaily.com/releases/2007/02/070227105134.htm

Source: University of York
Date: March 10, 2007

A Hidden Twist In The Black Hole Information Paradox

Science Daily -- Professor Sam Braunstein, of the University of York's Department of Computer Science, and Dr Arun Pati, of the Institute of Physics, Sainik School, Bhubaneswar, India, have established that quantum information cannot be 'hidden' in conventional ways, or in Braunstein's words, "quantum information can run but it can't hide."

This is an artist's conception of an intermediate-sized black hole; such black holes exist in the hearts of spiral galaxies throughout the Universe. (Credit: NASA Goddard)

This result gives a surprising new twist to one of the great mysteries about black holes.

Conventional (classical) information can vanish in two ways, either by moving to another place (e.g. across the internet), or by "hiding", such as in a coded message. The famous Vernam cipher devised in 1917 or its relative the one-time pad cryptographic code are examples of such classical information hiding: the information resides neither in the encoded message nor in the secret key pad used to decipher it - but in correlations between the two.

For decades, physicists believed that both these mechanisms were applicable to quantum information as well, but Professor Braunstein and Dr Pati have demonstrated that if quantum information disappears from one place, it must have moved somewhere else.

In a paper published in the latest edition of Physical Review Letters, Braunstein and Pati derive their 'no-hiding theorem' and use it to study black holes which, in Einstein's Theory of Relativity, are characterized as swallowing up anything that comes too close.

In the mid-1970s, Stephen Hawking showed that black holes eventually evaporate away in a steady stream of featureless radiation containing no information. But if a black hole has completely evaporated, where has the information about it gone? This long-standing question is known as the black hole information paradox.

Now, Professor Braunstein and Dr Pati have ruled out the possibility that information might escape from the black hole but be somehow hidden in correlations between the Hawking radiation and the black hole's internal state. Braunstein and Pati's result demonstrates that the black hole information paradox is even more severe than previously believed.

Dr Pati said: "Our result shows that either quantum mechanics or Hawking's analysis must break down, but it does not choose between these two possibilities."

Professor Braunstein said: "The no-hiding theorem provides new insight into the different laws governing classical and quantum information. It shows that there's got to be new physics out there."

Note: This story has been adapted from a news release issued by University of York.

Prof. Geoffrey Landis emails:

"Ha! Very likely true, but I'll claim that the Heisenberg uncertainty principle, wavefunction collapse, quantum entanglement, and chaos theory come in a very close second, third, fourth, and fifth!"

Dr. George Hockney countercomments:

"Entropy's been around longer."

It appears to me that the 2nd law refers to the entropy of a closed system. A black hole, in and of itself, is not a closed system.

The analogy that I use with laymen, when confronted with someone saying that the 2nd law cannot be violated locally, is this: consider a refrigerator. The refrigerator, by cooling, is reducing entropy. But it is drawing power to do so from elsewhere, which results in an increase in entropy. The net is still an increase in entropy. A refrigerator is not a closed system. The refrigerator plus its power supply probably isn't either, but it comes closer.
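
With illustrative numbers (the temperatures, heat, and work below are invented; the inequality in the last comment line is the real content):

    # Entropy bookkeeping for the refrigerator: pull Q from the cold
    # interior, dump Q + W into the warm room. All numbers are made up.
    T_cold, T_hot = 275.0, 295.0  # K: fridge interior, kitchen
    Q, W = 1000.0, 150.0          # J: heat pulled from cold side, work input

    dS_cold = -Q / T_cold         # interior loses entropy
    dS_hot = (Q + W) / T_hot      # room gains more
    print(f"net dS = {dS_cold + dS_hot:+.3f} J/K")  # +0.262 J/K here
    # Net >= 0 requires W >= Q * (T_hot / T_cold - 1), ~72.7 J in this case.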

Rob, nice blog, by the way. (NB: I'm trying to encourage you to do more postings)

If the entropy of a room of gas with all the gas particles in one half is the same as the entropy of the more evenly spread gas -- because, as you say, Rob, the former gas will be rushing into the other half of the room, that space being available -- then it seems to me (ignorant as I am of much physics) that the entropy of a gas cloud that was collapsing under gravity to form a single lump, and which for simplicity was not otherwise heating up and radiating, would also stay the same. The latter scenario seems to be much the converse of the former: in it, most of the space occupied by the original gas cloud will be unavailable, because the gas will be rushing out of it.

Further, since the gravitational attractions within the original gas cause the rest of the space to be unavailable, should the entropy of such a gas cloud (one that coalesced into a lump without, for simplicity, any heating up or nuclear reactions etc.) be the same as the entropy of the resulting lump? Not only does that seem wrong, if that lump is then going to become a star and radiate photons back into space, should we therefore count that wider space as available to the lump after all, even though the lump is not yet a star? That is, how far ahead do we look when we consider availability? If not very far (so that the lump is just a lump) then it would also seem that in that former scenario (the gas in half the room) little more than half the room was available after all. So presumably, from what you say, we should look further ahead.

But for a classical deterministic system of particles there is only one possible future, nothing else is available, so it's like there would then be no entropy at all. So I'm thinking that perhaps by 'availability' you mean some sort of counterfactual availability (the places where the gas particles might have been)? Presumably such a thing might be used to model a gas cloud, but then would not the second law become little more than part of the definition of a fictional entity?

And although the universe actually appears to be indeterministic, why then should the second law be a law? Surely physical indeterminism might have been like a finite number of coin tosses, so that the number of possible futures (the number of available states) gradually decreased over time (as the coins were tossed) to zero. Or does the second law rule out that possibility? I'm wondering why entropy must increase; is it because useful energy (potential energy, ordered kinetic energy, nonzero thermal gradients and so forth) would decrease over time?

Anyway to return to the original scenario, the physics I did at school (decades ago) tells me that the entropy of a small bottle of gas placed inside a completely empty room would not, upon the removal of the bottle's top, increase suddenly, but would rather increase gradually as the gas slowly spread throughout the room. So I'm wondering, was entropy once defined in such a way, or am I just misremembering the little physics that I've done?

Yeah, but given that a whole bunch of gas from the rest of the room is about to come rushing into it, it's hardly a closed system. The closed system we're talking about whose entropy is conserved (or goes up) is the entire room

Not sure I see your point. The entropy of a system is determined by the number of microstates corresponding to the current macrostate, not the number of microstates the system is free to explore. Isn't the half-filled room just a Poincaré recurrence?

I hope that this makes things perfectly clear.

http://arxiv.org/pdf/cond-mat/0610045

Relative entropy, Haar measures and relativistic canonical velocity distributions
Authors: Jörn Dunkel, Peter Talkner, Peter Hänggi
Comments: 15 pages: extended version, references added
Subj-class: Statistical Mechanics; Mathematical Physics

The thermodynamic maximum principle for the Boltzmann-Gibbs-Shannon (BGS) entropy is reconsidered by combining elements from group and measure theory. Our analysis starts by noting that the BGS entropy is a special case of relative entropy. The latter characterizes probability distributions with respect to a pre-specified reference measure. To identify the canonical BGS entropy with a relative entropy is appealing for two reasons: (i) the maximum entropy principle assumes a coordinate invariant form; (ii) thermodynamic equilibrium distributions, which are obtained as solutions of the maximum entropy problem, may be characterized in terms of the transformation properties of the underlying reference measure (e.g., invariance under group transformations). As examples, we analyze two frequently considered candidates for the one-particle equilibrium velocity distribution of an ideal gas of relativistic particles. It becomes evident that the standard Jüttner distribution is related to the (additive) translation group on momentum space. Alternatively, imposing Lorentz invariance of the reference measure leads to a so-called modified Jüttner function, which differs from the standard Jüttner distribution by a prefactor, proportional to the inverse particle energy.

that the entropy of a gas cloud that was collapsing under gravity to form a single lump, and which for simplicity was not otherwise heating up and radiating,

Unfortunately, that situation is unphysical :)

Yes, there's a probability that all of the gas molecules will briefly be in a little lump, but that probability is infinitesimally infinitesimal. (Deliberate redundancy on purpose.)

If they are collapsing under gravity... they can't do that without radiating or heating up.

the physics I did at school (decades ago) tells me that the entropy of a small bottle of gas placed inside a completely empty room would not, upon the removal of the bottle's top, increase suddenly, but would rather increase gradually as the gas slowly spread throughout the room. So I'm wondering, was entropy once defined in such a way, or am I just misremembering the little physics that I've done?

It will increase not suddenly, but at a rate that corresponds to the velocity distribution of the particles and the available space which different fractions of the particles could have reached in the room given that distribution.

The instant you open the bottle, the particles don't have the full size of the room available to them, because they'd have to move at infinite velocity to get to that space. Over time, though, it becomes possible for a greater and greater fraction of the distribution to be anywhere in the room, and the distribution fills up all of the phase space available.
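
Comparing just the two equilibrium end states, the bookkeeping is the standard ideal-gas free-expansion result, dS = nR ln(V2/V1); the amounts and volumes below are invented for illustration:

    # Free expansion of an ideal gas, comparing end states only:
    # dS = n * R * ln(V2 / V1). Amount and volumes are made-up examples.
    import math

    R = 8.314              # gas constant, J/(mol K)
    n = 0.04               # mol of gas in the bottle
    V1, V2 = 0.001, 30.0   # m^3: bottle, room

    print(f"dS = {n * R * math.log(V2 / V1):.2f} J/K")  # ~3.43 J/K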

Many thanks, and apologies, but I still fail to see why, if the entropy of the gas in the opened bottle in the vacuous room does not increase instantaneously, the entropy of a room of gas whose molecules were all temporarily packed into one corner would be the same as for a spread-out gas.

Ignoring the walls, shouldn't uncorking a bottle in an empty room just lead to adiabatic expansion, which is isentropic? So collapsing the room back into the volume of the bottle should require the same amount of work that the gas expended by expanding.

Mind you, my thermo is pretty rusty- mostly because I've been writing bullshit about it:
http://lablemminglounge.blogspot.com/2006/12/thermodynamics-of-hot-chic…

Rob, I have to respectfully disagree with you concerning what Enigman is asking about. You have to remember, the 2nd law is only a statistical law. It is entirely possible for entropy to decrease by chance. The reason we do not see this happen is that the probability of it happening is so ridiculously small.

If by some incredible fluke, all the air in a room rushed to fill in only half of the room, we have seen the entropy decrease by chance. However, to actually see this happen, we would have to wait, well, I forgot the exact number, but much longer than the current age of the universe. Not only that, it would rush right back to filling up the whole room right away again, since that's the much more likely macrostate for it to be in.
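
The number is easy to bound, though: treat each of N molecules as independently having a 50/50 chance of being in the left half (N below is roughly a liter of air, Loschmidt-scale):

    # With N independent molecules each in the left half with probability
    # 1/2, the chance of all N being there at once is 2**(-N).
    import math

    N = 2.7e22
    print(f"P ~ 10^({-N * math.log10(2):.3g})")
    # ~10^(-8.13e21): "longer than the age of the universe" undersells it.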

Thus, we can safely say that we will never observe it happening. That doesn't mean it can't, just that it's so unlikely we can just say it doesn't, like the 2nd law does. In fact, I recently saw an interesting discussion about this on Cosmic Variance here with respect to the problem of the arrow of time.

Anyways, Enigman's objection is actually right, although something we just don't have to worry about, for the reasons stated above.

By CaptainBooshi (not verified) on 13 Mar 2007

Although, having just glanced at Penrose (The Road to Reality), who says that objects collapsing under gravity (and doing nothing else) would be increasing their entropy, not decreasing it (as I've been assuming), I'll take back most of the above...

This may be too technical for even some of the Math-literate, but it is very general, even more so than the von Neumann work from which it derives:

arXiv:0704.0667
Title: Topological Free Entropy Dimension of One Variable in C*-algebras
Authors: Don Hadwin, Junhao Shen

The notion of topological free entropy dimension of n-tuples of elements in a C*-algebra was introduced by Voiculescu. In the paper, we compute the topological free entropy dimension of one self-adjoint element and the topological orbit dimension of one self-adjoint element in a C*-algebra.

This may be too mad for you sciencebloggers, but I was wondering (for no good reason) whether or not the evolution of life (not that plus the attendant effects, just that increase in order) would be (in itself, not in the thermal spinoff) an increase in entropy?

I wonder because, when matter coalesces into lumps under the influence of gravity, it appears that that gravitational collapse (in itself) is associated with a local increase in entropy (rather than the local decrease that I would have expected, as mentioned 2 posts above). And presumably natural selection might operate in a deterministic world (were such a world able to support something like chemistry), because although evolution requires some sort of chance input, presumably that could come from infinitely precise but disorganised initial conditions (whose details were magnified by chaotic mechanisms, so that increasingly finer details affected molecular collisions). So I wonder if evolution is like a natural force, tending to organise matter (in more complicated ways than simple coalescence), much as gravity does. (??)