Sunday Function

Most textbooks, especially ones not aimed at college math majors, give a definition of "function" that seems quite intuitive. They'll say something along the lines of: a function is a rule that takes an input x and turns it into an output f(x). Formally this isn't quite right - the essence of a function is in the set of ordered pairs (x, f(x)) and not in the specific rule that connects them. There doesn't even have to be such a rule.

But the idea of function as a machine is such a powerful and intuitive one that it tends to be used pretty universally until you have a good reason to abandon it. Non-mathematicians rarely encounter such reasons, even in the more mathematically demanding disciplines like physics, computer science, and engineering. In fact, most of the time we tend to double down and promiscuously apply the "function as machine" picture to operators. If a function is a machine that turns numbers into other numbers, an operator is a machine that turns functions into other functions. One such operator is called the Laplace transform, after the French mathematician Pierre-Simon Laplace. But I think we'll stick to calling these posts Sunday Functions, even if we take the occasional look at operators.

So pick a function f(x) - you can pick just about any f(x) you want - and multiply it by e^(-sx). That'll give you a new function. For instance, if you started off with f(x) = x^2, which looks like this:

[Graph of f(x) = x^2]

You'll end up with (x^2)*e^(-sx), which looks like this for s = 1:

[Graph of (x^2)*e^(-sx) with s = 1]

Note the different y-axis scales, and of course note that the precise shape would be a little different if we had picked a different s. Now we'll construct the Laplace transform, which we'll call F(s) or L(f(x)), where the latter denotes applying the transform L to our original function f(x). It creates a function of s based on an integral over x:

F(s) = L(f(x)) = ∫₀^∞ f(x) e^(-sx) dx

After doing the integral, we'll find that the Laplace transform of f(x) = x^2 is:

F(s) = 2/s^3
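If you'd like to check that result without grinding through the integral by hand, here's a quick sketch using Python's sympy library (any computer algebra system would do):

import sympy as sp

x, s = sp.symbols('x s', positive=True)

# The Laplace transform of x^2 should come out to 2/s^3.
print(sp.laplace_transform(x**2, x, s, noconds=True))   # 2/s**3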

All right, that's really lovely, but what does that do practically? Well, not much. What's so interesting about the Laplace transform is what happens if you take a look at how the transform relates functions to their derivatives:

L(f'(x)) = ∫₀^∞ f'(x) e^(-sx) dx
         = [f(x) e^(-sx)] (evaluated from 0 to ∞) + s ∫₀^∞ f(x) e^(-sx) dx
         = s F(s) - f(0)

Where in the second line we used integration by parts. Notice what happened: the Laplace transform of a derivative doesn't involve the derivative of the Laplace transform. It just involves the Laplace transform multiplied by s (minus the constant f(0)). This is a truly remarkable property. If you have a linear differential equation in f(x) and you take the Laplace transform, you'll just have an algebraic equation in F(s). Solve that using 9th grade algebra, do the inverse Laplace transform, and you've solved a differential equation without doing any real work at all.
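Here's a rough sympy sketch of that derivative property, taking f(x) = cos(x) as an arbitrary example (so f(0) = 1):

import sympy as sp

x, s = sp.symbols('x s', positive=True)
f = sp.cos(x)   # any reasonably tame example function will do

# Transform of the derivative...
lhs = sp.laplace_transform(sp.diff(f, x), x, s, noconds=True)
# ...versus s times the transform of f, minus f(0).
rhs = s * sp.laplace_transform(f, x, s, noconds=True) - f.subs(x, 0)

print(sp.simplify(lhs - rhs))   # 0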

Well, other than finding the inverse Laplace transform. That's actually pretty difficult to do from scratch. Fortunately large tables of functions and their inverse transforms have been compiled and for many cases of interest all you have to do is look up the one you need.
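As a small worked sketch (the equation and initial condition here are just made-up examples), take f'(x) + 2f(x) = 0 with f(0) = 3. Transforming turns it into s F(s) - 3 + 2 F(s) = 0, which algebra solves immediately, and sympy will happily look up the inverse transform for us:

import sympy as sp

x, s, F = sp.symbols('x s F', positive=True)

# Transformed equation: s*F - 3 + 2*F = 0
Fs = sp.solve(sp.Eq(s*F - 3 + 2*F, 0), F)[0]   # 3/(s + 2)

# Invert the transform to recover the solution of the differential equation.
print(sp.inverse_laplace_transform(Fs, s, x))   # 3*exp(-2*x)

Note that the initial condition f(0) = 3 was baked in at the very first step; there's no hunting through general solutions afterward.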

You can go on and do more creative things like letting s be a complex number, and then you'll discover a relationship between the Laplace and Fourier transforms. That's interesting in itself and also useful for things like the mathematics of electromagnetic field propagation in a dispersive medium.
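In sketch form: if f(x) is zero for negative x and the integral still converges, putting s = iω into the definition gives

F(iω) = ∫₀^∞ f(x) e^(-iωx) dx

which is (up to the usual convention-dependent constants) just the Fourier transform of f.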

All in all, a nice little mathematical machine. Or whatever it is!

Comments

Formally this isn't quite right - the essence of a function is in the set of ordered pairs (x, f(x)) and not in the specific rule that connects them. There doesn't even have to be such a rule.

Not true. Suppose we have a set f of tuples of points (x, y) with the property that for any x there is at most one tuple in f whose first element is x. Then we may view f as a function by adopting the rule that, for any x such that there is a tuple in f whose first element is x, we define f(x) to be the second element of that tuple.

Since, as you note, any function can be viewed as such a set of tuples, it follows that we can find a rule for the definition of any function by first transforming the (potentially rule-less) function into a set and then using that set to determine a rule which defines the function.

Quite right, if sort of outside the spirit of a "take a number and square it"-style procedural rule.

I used the function machine for 35 years of teaching high school math. No complaints yet. In fact, "gimme the rule" is a great game.

I think you didn't mention the nicest property of Laplace transforms and why it is so appealing:

In your example, the value f(0) appears. To a mathematician, this might not be really interesting, but to engineers and physicists, this means that Laplace transforms naturally incorporate initial conditions. Most other procedures to solve differential equations require you to look for the general solutions and then select the appropriate one according to the boundary or initial conditions. Laplace transforms do that in one go. If it was just about looking in tables, any other method with tables would do an equally good job.

By Raskolnikov (not verified) on 08 Nov 2010 #permalink

A functional relation is a set of ordered pairs R such that if (a, b) and (a, b') are members of R then b = b'. The notion of a "rule" is somewhat vague but could mean an algorithm for computing the function, or a definition.

At any rate, a "rule" is presumably something linguistic. Since any language has only a countable number of expressions, there are only a countable number of "rules".

So only very special functions can be defined by any sort of "rule".

The same considerations apply to real numbers themselves. No language can have names for more than a countable number of real numbers.

So nearly all real numbers are ineffable and completely hidden from us.

Science is based on a fantastic ontology of wildly Platonistic entities.

But it seems to work.

By Annonymous (not verified) on 08 Nov 2010 #permalink

Thanks for all the neat Sunday math topics. I am curious about some of the functions that we can't take a Laplace transform of. You talked about one before, x^x (another is x!), since it grows faster than e^(sx) for any s as x approaches infinity. Then the Laplace transform fails because f(x)e^(-sx) doesn't go to zero at infinity. Are there any other simple functions that grow so rapidly, and do any of these ever occur in physics or engineering applications?


lim(x-->oo) exp(x^2)/exp(sx) = oo for any s.

By Annonymous (not verified) on 10 Nov 2010 #permalink
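A quick sympy sketch of that comparison, contrasting e^(x^2) with the well-behaved x^2 from the post:

import sympy as sp

x, s = sp.symbols('x s', positive=True)

# x^2 * e^(-sx) dies off, so its transform integral converges...
print(sp.limit(x**2 * sp.exp(-s*x), x, sp.oo))          # 0
# ...but e^(x^2) * e^(-sx) blows up for every s, so no Laplace transform exists.
print(sp.limit(sp.exp(x**2) * sp.exp(-s*x), x, sp.oo))  # oo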

The Laplace transform is awesome. Try calculating anything about a third or fourth moment of a probability distribution without it.

I still don't get the "characteristic function" part about how you're wrapping it around the complex plane, but I'm only just starting Tristan Needham's book and of course I didn't understand a f**kin thing of complex analysis when I took it the first time.
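On the moment point above, a minimal sketch of how that works, assuming the exponential distribution with rate 1 as the example density (pdf e^(-x) for x ≥ 0): the Laplace transform of a density is E[e^(-sX)], and its derivatives at s = 0 give the moments.

import sympy as sp

x, s = sp.symbols('x s', positive=True)

# Laplace transform of the density e^(-x): this is E[e^(-sX)] = 1/(s + 1).
F = sp.laplace_transform(sp.exp(-x), x, s, noconds=True)

# The n-th moment is (-1)^n times the n-th derivative of F at s = 0.
third  = (-1)**3 * sp.diff(F, s, 3).subs(s, 0)
fourth = (-1)**4 * sp.diff(F, s, 4).subs(s, 0)
print(third, fourth)   # 6 24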