Sunday Function

There's an interesting book I'm working through called The Fall of Rome: And the End of Civilization, by the historian Bryan Ward-Perkins. He argues (against a prominent modern school of thought) that Rome did indeed fall rather than merely change, and that European civilization really was wrecked so thoroughly that, by many fairly objective numerical metrics, it took almost a thousand years to recover its former level of economic complexity and living standards. The break separating the Roman from the post-Roman world wasn't fast and sharp, but it was in fact a break.

You can make a similar argument in a positive direction about the history of mathematics. Math has existed more or less forever, but I'd say a definite new era started at roughly the end of the Renaissance, around the time Newton and Leibniz independently developed calculus. Because calculus was so brilliantly effective at solving problems, subsequent mathematics tended to focus on whatever worked rather than on whatever could be rigorously proved. In math this is a dangerous thing: if your reasoning is unclear, you can't tell under what circumstances your results stop working. Eventually the sketchier concepts of modern math were examined formally and put into a logically sound and rigorous framework. But today we're going to do this old school and assert a whopper of a supposition with no support at all. I'll tell you when we get there.

First, our Sunday Function. Unlike most other Sunday Functions, I'm not going to write this one down in the form f(x) = stuff, because this one is just a continuous function built from straight-line pieces. You could easily write it as mx + b on each piece, but it saves space to define it in terms of its graph. It's a triangle wave of period 2, and we're only going to deal with it on the interval [0, 2]. Therefore, here simultaneously is the graph and the definition:

[Figure: the triangle wave, rising linearly from 0 at x = 0 to +1 at x = 1/2, falling linearly to −1 at x = 3/2, and rising back to 0 at x = 2.]
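Since the graph is doing the work of a formula here, a minimal Python sketch of the same function may help. The exact shape (peak +1 at x = 1/2, trough −1 at x = 3/2) is my reading of the graph, inferred from the coefficient values that come out below; the function name is mine.

```python
import numpy as np

def triangle(x):
    """Triangle wave of period 2 on [0, 2]: rises from 0 at x = 0
    to +1 at x = 1/2, falls to -1 at x = 3/2, returns to 0 at x = 2."""
    x = np.mod(x, 2.0)
    return np.where(x <= 0.5, 2.0 * x,
                    np.where(x <= 1.5, 2.0 - 2.0 * x, 2.0 * x - 4.0))
```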

Now here comes our bold and utterly unsupported assertion. We know many functions can be written in terms of a power series. What if it were possible to write this one as a sort of trigonometric series? In other words, are there constants a0, a1, a2, and so on such that this is true for our particular f(x):

$$f(x) = a_0 + \sum_{n=1}^{\infty} a_n \sin(n\pi x)$$

Well, maybe. First we have to figure out what the various coefficients are in this series (which happens to be called a Fourier series, after the mathematician Jean Baptiste Joseph Fourier). First let's dispatch a0 by means of a handwaving but nonetheless valid argument. Here goes. It's clear that the average value of our function on this interval is 0 by virtue of its symmetry about the x-axis. It's also clear that all of the sine terms on the right have an average value of 0 as well for the same reason. But any nonzero constant term will have a nonzero average value. The average of a constant is just that constant, after all. Therefore in order to preserve symmetry the constant term a0 must itself be equal to zero.
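To put one equation behind the handwave: average both sides of the series over the interval. Each sine integrates to zero over a whole number of its periods, so

$$\frac{1}{2}\int_0^2 f(x)\,dx = a_0 + \sum_{n=1}^{\infty}\frac{a_n}{2}\int_0^2 \sin(n\pi x)\,dx = a_0,$$

and since the left-hand side is zero by the symmetry just described, a0 = 0.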

The other terms look harder at first. Fortunately there is a very convenient property of the sine function that can help us out with the rest. We call it orthogonality*:

$$\int_0^2 \sin(m\pi x)\,\sin(n\pi x)\,dx = \delta_{mn}$$

The delta on the right side is the Kronecker delta, which is just a symbol meaning "equals 1 if m = n, equals 0 otherwise". So we have that weird sine series, and we want to find (for instance) a2. We can just multiply both sides of the series equation by sin(2πx) and integrate. Every single one of those infinite terms will integrate out to 0 by orthogonality, except the n = 2 term, which is attached to a2. Doing that, we see that a2 (and in general any other coefficient, had we picked a different n) is equal to:

$$a_n = \int_0^2 f(x)\,\sin(n\pi x)\,dx$$
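Spelled out for a general coefficient, the multiply-and-integrate step is just the orthogonality relation applied term by term (the constant term drops out on its own, since any lone sine integrates to zero over the interval):

$$\int_0^2 f(x)\sin(m\pi x)\,dx = \sum_{n=1}^{\infty} a_n\int_0^2 \sin(n\pi x)\sin(m\pi x)\,dx = \sum_{n=1}^{\infty} a_n\,\delta_{mn} = a_m.$$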

Sweet! We can evaluate those. I won't bore you by actually doing it (you can try it yourself or take a look at the result). Once you do evaluate them, you can plot the first few terms and see what you get. It turns out that all the even-numbered coefficients equal zero. Let me give you numerical values for the first few odd ones:

a1 = 0.810569469
a3 = -0.0900632743
a5 = 0.0324227788
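If you'd rather let the computer grind through those integrals, a quick numerical check is easy; this sketch reuses the (hypothetical) triangle() function defined above:

```python
import numpy as np
from scipy.integrate import quad

# a_n = integral over [0, 2] of f(x) * sin(n*pi*x) dx
for n in range(1, 6):
    a_n, _ = quad(lambda x: triangle(x) * np.sin(n * np.pi * x), 0, 2)
    print(f"a{n} = {a_n:.9f}")
```

Up to integration tolerance this reproduces the values above, with the even-n coefficients coming out as zero. For what it's worth, the quoted values agree to every printed digit with 8/π², −8/(9π²), and 8/(25π²), i.e. with |aₙ| = 8/(n²π²) for odd n.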

Plug those in and plot, with the original overlaid for comparison. Including only the a1 term:

[Figure: the triangle wave overlaid with the one-term approximation a1 sin(πx).]

With the first two nonzero terms:

[Figure: the triangle wave overlaid with the two-term partial sum.]

With the first three nonzero terms:

[Figure: the triangle wave overlaid with the three-term partial sum.]
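If you want to regenerate plots along these lines, here's a minimal matplotlib sketch (the styling is mine, and it again leans on the triangle() function sketched earlier):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2, 500)
coeffs = {1: 0.810569469, 3: -0.0900632743, 5: 0.0324227788}

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax, n_max in zip(axes, [1, 3, 5]):
    # partial sum of the sine series through n = n_max
    partial = sum(a * np.sin(n * np.pi * x)
                  for n, a in coeffs.items() if n <= n_max)
    ax.plot(x, triangle(x), label="f(x)")
    ax.plot(x, partial, label=f"series through n = {n_max}")
    ax.legend()
plt.show()
```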

So it sure looks like our wild supposition was justified, and our function can legitimately be written in terms of trig functions. Indeed it can, and indeed pretty much any well-behaved periodic function can. (Crucially, in general you must also include cosines. In the interest of simplicity I've picked an odd function as our example; these have the property that all the cosine coefficients are zero. Conversely, even functions have all the sine coefficients equal to zero.) This property of the set of sine/cosine functions is called completeness, and unlike orthogonality it's way out of Sunday Function's league to prove from scratch. Way out of my league, to be honest. But that's what mathematicians are for. They've shown that a very broad class of orthogonal function sets are complete, and showing that a particular set of functions falls in that broad class isn't actually so hard. We might even do it one of these days.
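For the record, the fully general series on our interval, cosines included, would take the standard form (scaled to period 2 in the same way as the footnote below describes):

$$f(x) = a_0 + \sum_{n=1}^{\infty}\big[a_n\sin(n\pi x) + b_n\cos(n\pi x)\big].$$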

*Here I'm writing the statement of orthogonality in a slightly nonstandard way. More usually it's written with respect to the interval [-π, π] or [0, 2π]. Our problem lives on the interval [0, 2], so, just like on the cooking shows, I'm pulling this equation out of the oven "pre-cooked", appropriately scaled to the interval in our problem.
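The scaling is nothing deeper than a substitution. Starting from the usual statement on [0, 2π] and setting u = πx:

$$\int_0^{2\pi}\sin(mu)\sin(nu)\,du = \pi\,\delta_{mn} \quad\Longrightarrow\quad \pi\int_0^{2}\sin(m\pi x)\sin(n\pi x)\,dx = \pi\,\delta_{mn},$$

and dividing by π gives the version used above.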


Nicely done.

Your example of Fourier (early 1800s) is particularly apt as regards the history of analysis (calculus with actual proofs), which developed throughout the 1800s. Apparently, according to my analysis prof, there was quite a shakeup in the later 1800s when examples of functions were developed that were continuous everywhere but nowhere differentiable. It was the efforts of people to explore strong claims like his about what could be done with "every well-behaved function" that exposed weaknesses in the general notions about what functions were and how they behaved.

It should also interest all of us that Fourier invented this idea so he could solve a PHYSICS problem: the heat flow equation. So we can exaggerate a little bit and say that modern analysis started with a physics problem, just as the calculus started with a physics problem. That might be a good reason (home insulation being the other) for always teaching that topic in intro physics.

The exaggeration is because I think Euler's invention of the power series in the 1700s, not to mention the very concept of "function", might have been the key first step even though it lacked rigor.

By CCPhysicist (not verified) on 16 Aug 2009

CCPhys: What was even more surprising to me was that Cantor's work started with Fourier series, too!

Yes, nice. I recall coming across a recent analysis book that introduces the subject using an example very similar to this one and then uses it to motivate all the usual analysis topics. I forget the title and author; does anyone know?

(a) That's not an odd function. Odd functions are defined on intervals of the form [-a,a]. You have calculated the Fourier series for the odd extension of the given function. There is a Fourier series for the even extension that has only cosine terms.
(b) The question of what is a proof and what has to be proved is something that's been part of mathematics since Greek times. Euler, for example, had what he would view as proofs for his use of infinitesimals, although we would not accept these proofs.
(c) In response to CCPhysicist, power series were known well before Euler; they were common in the 1600's. Also, Newton's work on calculus predates his work on physics.

Fair point, Bill. The triangle wave is of course an odd function if you take this to be one period's worth of a function that extends to infinity (or at least symmetrically about the y-axis) in both directions. We could also say the cosine terms are all equal to zero by virtue of the fact that cos(0) = 1: since f(0) = 0, the cosine coefficients all have to be identically zero to fit the boundary condition.

Thanks, --bill. So it is more accurate to say that Euler popularized or generalized the use of power series for functions? What motivated Newton to develop the calculus if it wasn't calculating motion or gravitation problems?

Matt, you are still missing a key point about extending the function. Specifically, "symmetrically about the y-axis" produces an even function (equal to +1 at -0.5, etc) that is not periodic and hence messier to deal with.

By CCPhysicist (not verified) on 18 Aug 2009

I mean a symmetric interval, i.e., [-a, a]. The function would be odd and thus antisymmetric on that interval.

Extending the function to be even still gives a periodic function, only now the period is 4 rather than 2. The Fourier series in this case will indeed be a cosine series, but you need half-integer multiples of pi, i.e. cos(nπx/2), to do it (in fact only the odd n's are required). Note also that the sum of the cosine series is zero at x = 0 even though the terms themselves are not.
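In symbols, the series being described here would have the shape (the form only; the coefficients aren't worked out in this thread):

$$f_{\mathrm{even}}(x) = \sum_{n\ \mathrm{odd}} b_n \cos\!\left(\frac{n\pi x}{2}\right).$$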

Here's a problem with Fourier decomposition: the infinite series is supposed to constitute the final result like a true triangular wave. But if I "sample" the triangular wave along a given interval that doesn't include the whole thing, that's just a straight slope. There isn't any way in principle to find all the ever-smaller harmonics that correspond to a particular triangular wave frequency.

But if I had some mathematically perfect "filter" or frequency analyzer (*not* to be confused with actual physical instruments) I should be able to examine the portion and find the "real" harmonics for the actual frequency. This is contradictory.

There is nothing contradictory about this, Neil. It is not necessary that the Fourier transform of a signal be equal to the transform over finite sub-sections of that signal. In fact, many signal processing toolkits will allow you to perform a Fourier transform over a sequence of small sections of the signal. This is (often) called the "Short Time Fourier Transform."

http://en.wikipedia.org/wiki/Spectrogram
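To make the windowed-transform idea concrete, here's a minimal SciPy sketch; the signal, sampling rate, and window length are all made-up illustrative choices:

```python
import numpy as np
from scipy.signal import stft, sawtooth

fs = 1000                                   # samples per second (arbitrary)
t = np.arange(0, 2.0, 1 / fs)
x = sawtooth(2 * np.pi * 5 * t, width=0.5)  # 5 Hz triangle wave

# Fourier-transform short, overlapping windows of the signal
f, seg_times, Zxx = stft(x, fs=fs, nperseg=256)
print(Zxx.shape)                            # (frequencies, time segments)
```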

Thanks Tercel, but my point is: a superposition of many ever-higher frequency sine waves gives a "triangular wave." So that means, the composition should be contained within a small interval. If I had an ideal (not necessarily what any "real" one could do) frequency spectrometer and the triangular wave was moving past, it should show me all those frequencies that were of shorter period than the sample interval. But those frequencies are geared to the particular period of the specific triangular wave, which is "not knowable" to anyone watching it for a small interval.

I think the trouble is, the use of true infinities here. Any finite superposition will have a structure that is actually represented in the wiggles etc. that can be seen. But when we add an infinite series, we can get a "sum" by the convergence, but it is not IMHO a logically consistent function for the sake of finding derivatives.

A heck of a lot easier way to get a0 is to notice that, by definition, f(0) = 0. But since sin(0) = 0, all the terms in the Fourier sine series except a0 disappear at x = 0. Thus a0 = f(0) = 0.

By Avi Steiner (not verified) on 18 Aug 2009