Sunday Function

Again, apologies for the hideously scanty posting. Been in the lab doing some really interesting research which will, with some luck, get me into a really nice journal, as well as doing the various rounds of revision on the paper for some previous research. Also putting two talks together for a conference/school. That whole wedding planning thing ain't doing wonders for my spare time either. But hey, these are all good things so I'm not complaining.

I can at least write up a Sunday Function for y'all. This is one we've mentioned before, but haven't actually derived. I think it's high time we did. As always, math-averse readers are encouraged to read on even without necessarily following everything in detail. The overall trajectory of the argument should still be pretty clear.

What we're doing is Stirling's approximation to the factorial function. The factorial of a positive integer n is the product of the integers from 1 to n, inclusive. For instance, the factorial of 4 is 1*2*3*4 = 24. Unlike most functions, this one gets a punctuation mark as its symbol: the factorial of 4 is compactly written as 4!, and that exclamation point isn't me being excited about the notation, it is the notation. So 4! = 24.
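If you'd rather poke at this on a computer than by hand, here's a minimal Python sketch of the definition (standard library only; the function name is just my own choice for illustration):

```python
from math import prod

def factorial(n):
    """Product of the integers from 1 to n, inclusive."""
    return prod(range(1, n + 1))

print(factorial(4))             # 24, i.e. 1*2*3*4
print(len(str(factorial(50))))  # 65 -- the factorial of 50 already has 65 digits
```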

This is a very fast-growing function. The factorial of 50 is more than a trillion trillion trillion trillion trillion. As you'd expect, this is a pain to calculate by hand. A pocket calculator can do it without too much trouble for factorials up to about 70!, and dedicated computer algebra systems like Mathematica can come up with exact values up to a few thousand before becoming too slow to deal with. In physics (mainly thermodynamics) we have to deal with the factorials of numbers about the size of Avogadro's number (~10^23). Actually calculating this exactly is absolutely, completely bananas. But we're not worried about knowing the exact value, we just need a good approximation. We can get one starting from the Gamma function. I know we've discussed it before, but I can't find the link. In any case you can take my word for it that this is exactly true, and not an approximation:

$$n! = \int_0^{\infty} x^n e^{-x} \, dx$$
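If you don't feel like taking my word for it, a quick numerical integration makes it easy to spot-check for a specific n. Here's a sketch that assumes you have SciPy installed; the choice n = 6 is arbitrary:

```python
import math
from scipy.integrate import quad

n = 6
integral, _ = quad(lambda x: x**n * math.exp(-x), 0, math.inf)
print(integral)           # ~720.00000...
print(math.factorial(n))  # 720 exactly
```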

Now there's really no way to evaluate that integral in closed form other than just saying, "Hey, it's equal to n!", which isn't really helpful since we're after the number that n! represents. But we can do better if we're interested in an approximation. This approximation is due to Abraham de Moivre and James Stirling, and I first saw this particular derivation in Daniel Schroeder's thermodynamics textbook.

The plan is to recognize that the integrand looks kinda sorta like a Gaussian, so we might be able to find the best-fit Gaussian and then integrate that and see what we get. So take that expression under the integral and rewrite it using the rules for logarithms:

$$x^n e^{-x} = e^{n \ln x} \, e^{-x} = e^{n \ln x - x}$$

Where we just used the logarithm rules to rewrite the power x^n in base e. Still no approximations yet. Now take the stuff in the exponent, the n ln(x) - x term, and do a bit of rewriting:

$$n \ln x - x = n \ln(n + y) - (n + y) = n \ln n + n \ln\!\left(1 + \frac{y}{n}\right) - n - y$$

Where for convenience we've defined y = x - n. The algebra to derive the above is not hard - it's about 3 lines if you remember the logarithm rules, so give it a shot if you're curious and following along at home.
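If you'd like to check those three lines numerically before (or instead of) grinding through them, here's a quick spot check in plain Python; the sample values of n and y are arbitrary:

```python
import math

n, y = 12.0, 3.5   # arbitrary test values, with x = n + y
x = n + y
original  = n * math.log(x) - x                                # n ln(x) - x
rewritten = n * math.log(n) + n * math.log(1 + y / n) - n - y  # the form above
print(original, rewritten)   # identical up to floating-point rounding
```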

Now it's time for the approximation. It's intuitively clear that the original integrand has a maximum at x = n, and you can demonstrate this via differentiation if you're so inclined. And at that point y is equal to zero, so we're justified in making a series expansion of that middle logarithm term above. This is where the approximation actually happens:

$$n \ln n + n \ln\!\left(1 + \frac{y}{n}\right) - n - y \;\approx\; n \ln n + n\left(\frac{y}{n} - \frac{y^2}{2n^2}\right) - n - y \;=\; n \ln n - n - \frac{y^2}{2n}$$
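If you have SymPy handy, it will happily do that series expansion for you; this sketch just confirms the quadratic term:

```python
import sympy as sp

n, y = sp.symbols('n y', positive=True)
print(sp.series(n * sp.log(1 + y / n), y, 0, 3))
# y - y**2/(2*n) + O(y**3)   (possibly with the terms printed in a different order)
# Substituting back in: n ln(n) + y - y**2/(2n) - n - y = n ln(n) - n - y**2/(2n)
```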

Which means that our original integrand is approximately:

$$x^n e^{-x} \approx e^{\,n \ln n - n - y^2/(2n)} = n^n \, e^{-n} \, e^{-y^2/(2n)}$$
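Here's a small numerical look at how well the Gaussian stands in for the true integrand near the peak. I'm comparing the exponents (i.e. working with logarithms) so nothing overflows; n = 100 is an arbitrary choice:

```python
import math

n = 100
for y in (-20, -10, 0, 10, 20):
    x = n + y
    exact = n * math.log(x) - x                   # ln of the true integrand x^n e^(-x)
    gauss = n * math.log(n) - n - y**2 / (2 * n)  # ln of the Gaussian approximation
    print(f"y={y:+d}  exact={exact:.3f}  gaussian={gauss:.3f}")
```

The two columns agree closely near y = 0 and only start drifting apart away from the peak, where the integrand is falling off anyway.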

But the first two factors, n^n and e^(-n), are constants with respect to x (and y), so they can be pulled outside the integral:

$$n! \approx n^n e^{-n} \int_{-\infty}^{\infty} e^{-y^2/(2n)} \, dy$$

Which is an easy integral to do. (You may wonder how I magically turned the lower limit into negative infinity. In terms of y the integral actually starts at y = -n, not 0, and for large n the Gaussian is already vanishingly small well before that point, so extending the limit to negative infinity changes essentially nothing.) Performing the integral gives:

$$n! \approx n^n e^{-n} \sqrt{2\pi n}$$

Which turns out to be a very good approximation. Keeping more terms in the series expansion would make it a better one, but this one is fine for just about every large n! approximation we need. It's very easy to evaluate for large numbers with a pocket calculator and a basic knowledge of logarithm rules. For instance, the factorial of (10^23) is about 10^(10^(10^1.38)), if I've done my math right. Which is a heck of a lot easier than doing 10^23 separate large integer multiplications.
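If you want to check the numbers, here's all of the above in a few lines of plain Python: how negligible the piece of the Gaussian below y = -n really is, how the formula stacks up against exact factorials at modest n, and the pocket-calculator arithmetic for the factorial of 10^23. The specific test values are arbitrary choices of mine:

```python
import math

# 1) The fraction of the full Gaussian integral that lies below y = -n, i.e. the
#    piece we quietly picked up by sliding the lower limit down to -infinity.
n = 100
print(0.5 * math.erfc(n / math.sqrt(2 * n)))   # ~8e-24, and it only shrinks as n grows

# 2) The approximation versus the exact factorial.
for m in (5, 10, 50, 100):
    stirling = m**m * math.exp(-m) * math.sqrt(2 * math.pi * m)
    print(m, stirling / math.factorial(m))     # the ratio creeps toward 1 as m grows

# 3) The factorial of an Avogadro-sized number, done entirely in base-10 logarithms.
N = 1e23
log10_fact = (N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)) / math.log(10)
print(log10_fact)                              # ~2.26e24, so (10^23)! ~ 10^(2.26e24)
print(math.log10(math.log10(log10_fact)))      # ~1.387 -- in line with the 10^(10^(10^1.38)) figure above
```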

All right, that's all for now. See y'all around!


You write: "You may wonder how I magically turned the 0 in the lower limit to a negative infinity, but it's justifiable because the integrand is basically zero for negative values anyway."

This causes me to ask what the ratio is of the top half of the integral to the bottom half.

You've got two compensating factor-of-two errors implied in that text.

I recall reading somewhere that Stirling derived the formula with a numerical constant. Stirling's formula was used by De Moivre about 1730 to derive the Central Limit Theorem for Bernoulli distributions. The fact that the constant is the square root of 2 pi was shown by Euler about 1740.

Stirling is also remembered for the Stirling Numbers of the First and Second Kinds.

By Annonymous (not verified) on 11 Jul 2010

It's an integral from x=0, so it's y=-n. The approximation is "n large" so it's not a factor of two.

Also, Matt, you're missing a parenthesis in the line where you do the substitution x=y+n.

By BlackGriffen (not verified) on 11 Jul 2010

After "Which means that our original integrand is approximately"

there's a missing factor of n in the denominator of the gaussian exponential.

Your integral is also from infinity to infinity, instead of from -infinity.

You have a slight error in your last integrand. It now runs from infinity to infinity. :-)

Otherwise, great Sunday Function. As usual. Thanks!