Sunday, August 30, 2009

Padé Approximants

I spent a little time looking into Padé approximants, including how well they work for root-finding.  These are ratios of polynomials which are used as approximations to other functions.  For instance, for both polynomials of order 2, the form is (a + bx + cx²) / (1 + dx + ex²).  Note the 1 in the denominator is chosen to avoid the ambiguity of being able to multiply through by a constant.

Just as with the derivation of a Taylor series, one can determine the coefficients of the Padé approximant by equating the derivatives of the desired function to the derivatives of the approximant.  For instance, a is always equal to F(0).
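To make that concrete, here is a small sketch (in Python, with made-up names), using the (1,1) approximant discussed below, of how the coefficients fall out of matching the value and the first two derivatives at x = 0:

    def pade_1_1_coeffs(F, F1, F2):
        # Coefficients (a, b, d) of the (1,1) Pade approximant
        # P(x) = (a + b*x) / (1 + d*x), chosen so that P matches the
        # value F, first derivative F1, and second derivative F2 of
        # the target function at x = 0.
        a = F                   # matches the value
        d = -F2 / (2 * F1)      # matches the second derivative
        b = F1 + a * d          # matches the first derivative
        return a, b, d

    # Example: for e^x at 0 (F = F' = F'' = 1) this gives
    # (1 + x/2) / (1 - x/2), the standard (1,1) approximant of e^x.
    print(pade_1_1_coeffs(1.0, 1.0, 1.0))   # (1.0, 0.5, -0.5)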

If the order of the denominator is 0 (i.e., the denominator is 1), then the Padé approximants are just the Taylor series for the function.  I looked at the approximant of order (0,1) as well (a / (1 + dx), which works out to F² / (F - F′·x)), but found that it converged worse than the Taylor series of order 1.

The useful one seems to be the approximant of order (1,1).  Since it has three free coefficients, it is equivalent to using a Taylor series up to order 2 (the x² term).  In my experiments the Padé gave closer approximations than the Taylor series.  It is also easier to work with (especially if using rational numbers) than the Taylor series since there is no squared term.  For the same reason, it is easier to solve for x and use the approximation to solve for a root.
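Here is a quick sketch of that comparison (Python; sqrt(1 + x) picked arbitrarily as the test function, and the numbers in the comments are approximate):

    import math

    def taylor_2(x, F, F1, F2):
        # Second-order Taylor polynomial about 0
        return F + F1 * x + F2 * x * x / 2

    def pade_1_1(x, F, F1, F2):
        # (1,1) Pade approximant about 0: (a + b*x) / (1 + d*x)
        a = F
        d = -F2 / (2 * F1)
        b = F1 + a * d
        return (a + b * x) / (1 + d * x)

    # sqrt(1 + x) about 0: F = 1, F' = 1/2, F'' = -1/4
    x = 0.5
    print(math.sqrt(1 + x))              # ~1.2247 (exact value)
    print(taylor_2(x, 1.0, 0.5, -0.25))  # ~1.2188
    print(pade_1_1(x, 1.0, 0.5, -0.25))  # ~1.2222, a bit closer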

Newton's method for root-finding amounts to writing the Taylor expansion for a function out to order 1, then inverting it to obtain an expression for x.  Obviously using another term in the Taylor expansion would give more accuracy, but since that produces a quadratic equation, the solution for x has a square root and is very messy (and in many cases, it is the square root function itself we are trying to solve in the first place).  Since the Padé approximant of order (1,1) only has linear terms in x, it can still be easily solved, and provides quicker convergence to the root than Newton's method.
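A minimal sketch of that step (Python; solving (a + b·Δx) / (1 + d·Δx) = 0 simply gives Δx = -a/b, with a and b as above):

    def pade_root_step(x, F, F1, F2):
        # One root-finding step from the (1,1) Pade approximant:
        # setting (a + b*dx) / (1 + d*dx) = 0 gives dx = -a/b, where
        # a = F and b = F' - F*F''/(2*F').
        return x - F / (F1 - F * F2 / (2 * F1))

    # Example: solve x**2 - 2 = 0, i.e. compute sqrt(2)
    x = 1.0
    for _ in range(4):
        x = pade_root_step(x, x * x - 2, 2 * x, 2.0)
    print(x)   # 1.41421356..., accurate to machine precision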

Once I had written out the equation for the correction term using this method (which is (F′/α + F″/(2F′))⁻¹, where α is the needed correction to the function value), I saw that it was the same as that given by Halley's method.  What I find interesting is, if you read the Wikipedia article, you'll see the derivation of Halley's method is completely different from the process I went through.  Arriving at the same expression from two different approaches increases my confidence that it is a good method for root-finding.  As an added bonus, when computing with rational terms, the size of the terms grows more slowly than with Newton's method.
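And here is the same idea carried out with exact rational arithmetic (a sketch using Python's Fraction type, again with sqrt(2) as the example), written in the correction-term form above with α = -F, since we want to drive F to zero:

    from fractions import Fraction

    def halley_step(x, F, F1, F2):
        # Correction term (F'/alpha + F''/(2*F'))**(-1), where alpha is
        # the needed change in the function value (alpha = -F here).
        alpha = -F
        return x + 1 / (F1 / alpha + F2 / (2 * F1))

    x = Fraction(3, 2)
    for _ in range(3):
        x = halley_step(x, x * x - 2, 2 * x, Fraction(2))
        print(x, float(x))
    # prints 99/70, then 3880899/2744210, then a fraction with roughly
    # 20-digit terms; each step roughly triples the correct digits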

The State of the Blog

I'm sure anyone who isn't fascinated by math has noticed I haven't posted about parenting or green living lately.  Does that mean I haven't been thinking about those things?  Or does it mean things have been going well so I haven't had a reason to reflect on them?  Well, we did replace the air conditioning, so we did think about green living, but I don't think there was anything new there.  Overall, I think it just reflects that I've been busy.

Sunday, August 16, 2009

Memoize

I recently came across the computer science concept of memoization.  What's cool is that I had made some suggestions on designs at work that amount to the same thing.

In general, memoization refers to having a computer function remember the problems it's already solved, and just return the answer again if it recognizes the problem, rather than compute it again, as most software does.  The common example is the factorial function.  Since the factorial is the product of all the numbers up to the argument, on the way to, say, 6!, you will compute the factorials of all the numbers less than 6.  Usually the function would loop over all the numbers and multiply them together.  Instead, a memoized version stores all the results up to the highest argument it has seen, and then only computes those beyond that (storing the new results) if necessary.
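A minimal sketch of that kind of factorial (Python, with a module-level cache just for illustration):

    _factorials = [1]   # _factorials[n] holds n!; the list grows as needed

    def factorial(n):
        # Memoized factorial: results up to the largest argument seen so
        # far are kept, so only the products beyond that are computed.
        while len(_factorials) <= n:
            _factorials.append(_factorials[-1] * len(_factorials))
        return _factorials[n]

    print(factorial(6))   # computes and stores 1! through 6!, prints 720
    print(factorial(4))   # answered straight from the cache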

One type of object-oriented class that I had proposed is a smart Angle class.  Since angles can be represented in many different ways, I wanted to be able to get any version the calling software wanted, without knowing what version it had been stored in.  In addition to radians and degrees, an angle can be (and often is in our work) stored as the combination of sine and cosine values.  Besides being able to use an angle without knowing what format it was in originally, I suggested that the other representations should be left uncomputed, but once they were asked for, the result should be stored to avoid unnecessary further computations.  Clearly this can be seen as memoization of the conversion.

In the general case, the "hit rate" of a memoized function may not be that high, depending on the number and valid range of the inputs.  However, for an object whose data never changes once it is created, essentially one of the inputs has been fixed, raising the chance that the same result will be requested repeatedly.  (In my design, each Angle object would be for a particular angle.  If the value of the variable changed, a new Angle object would be created to store it.)
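A sketch of that kind of class (Python; the method names and the particular set of representations are just illustrative):

    import math

    class Angle:
        # Immutable angle that remembers whichever representations
        # (radians, degrees, sine/cosine) have been asked for.

        def __init__(self, radians=None, degrees=None, sin_cos=None):
            # Exactly one representation is supplied at construction;
            # the others start out uncomputed.
            self._radians = radians
            self._degrees = degrees
            self._sin_cos = sin_cos

        def radians(self):
            if self._radians is None:
                if self._degrees is not None:
                    self._radians = math.radians(self._degrees)
                else:
                    s, c = self._sin_cos
                    self._radians = math.atan2(s, c)
            return self._radians

        def degrees(self):
            if self._degrees is None:
                self._degrees = math.degrees(self.radians())
            return self._degrees

        def sin_cos(self):
            if self._sin_cos is None:
                r = self.radians()
                self._sin_cos = (math.sin(r), math.cos(r))
            return self._sin_cos

    a = Angle(sin_cos=(0.0, 1.0))
    print(a.degrees())   # converts via radians once; both results are now cached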

By the way, I'm fascinated with the idea of "smart numbers" in computer programs - numbers as objects (or other collections of data, if not object-oriented) which store more than just their value, or that store exact representations in alternate formats.  The Angle example given, quadratic irrationals, storing numbers as rationals or continued fractions instead of decimal, and numbers that store their own error bounds, are all concepts that I've thought about over the years.  If anyone knows where one of these concepts has been used in a real program, I'd be interested to hear about it.