I spent a little time looking into Padé approximants, including how well they work for root-finding. These are ratios of polynomials used as approximations to other functions. For instance, with both polynomials of order 2, the form is (a + bx + cx²) ⁄ (1 + dx + ex²). Note the 1 in the denominator is chosen to avoid the ambiguity of being able to multiply through by a constant.
Just as with the derivation of a Taylor series, one can determine the coefficients of the Padé approximant by equating the derivatives of the desired function to the derivatives of the approximant. For instance, a is always equal to F(0).
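As a sketch of this matching process for the (1,1) case (a + bx) ⁄ (1 + dx), here is a small Python helper (the name `pade11_coeffs` and the use of exp as the test function are my own choices, not from the original):

```python
def pade11_coeffs(f0, f1, f2):
    """Coefficients (a, b, d) of the (1,1) Padé approximant
    (a + b*x) / (1 + d*x), matched to F(0)=f0, F'(0)=f1, F''(0)=f2.

    Matching conditions at x = 0:
      value:             a               = f0
      first derivative:  b - a*d         = f1
      second derivative: -2*d*(b - a*d)  = f2   =>   d = -f2 / (2*f1)
    """
    a = f0
    d = -f2 / (2 * f1)
    b = f1 + a * d
    return a, b, d

# For F(x) = exp(x): F(0) = F'(0) = F''(0) = 1,
# giving the classic (1 + x/2) / (1 - x/2).
print(pade11_coeffs(1.0, 1.0, 1.0))  # (1.0, 0.5, -0.5)
```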
If the order of the denominator is 0 (i.e., the denominator is 1), then the Padé approximants are just the Taylor series for the function. I looked at the approximant of order (0,1) as well (a ⁄ (1 + dx), which works out to F² ⁄ (F − F′·x)), but found that it converged worse than the Taylor series of order 1.
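A quick numeric check bears this out, at least for one case (exp and the evaluation point 0.5 are my choices for illustration):

```python
import math

# Approximate F(x) = exp(x) around 0, where F(0) = F'(0) = 1.
x = 0.5
exact = math.exp(x)

taylor1 = 1 + x               # Taylor series of order 1
pade01 = 1 / (1 - x)          # F² / (F − F'·x) with F = F' = 1

print(abs(taylor1 - exact))   # ≈ 0.149
print(abs(pade01 - exact))    # ≈ 0.351, worse than the Taylor series
```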
The useful one seems to be the approximant of order (1,1). Since it has three free coefficients, it is equivalent to using a Taylor series up to order 2 (the x² term). In my experiments the Padé gave closer approximations than the Taylor series. It is also easier to work with (especially if using rational numbers) than the Taylor series since there is no squared term. For the same reason, it is easier to solve for x and use the approximation to solve for a root.
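The same kind of spot check illustrates the accuracy claim; again, exp and x = 0.5 are my own example choices, and one point is not a proof:

```python
import math

# Compare the (1,1) Padé approximant of exp(x) with the order-2 Taylor
# series, both built from the same data F(0) = F'(0) = F''(0) = 1.
x = 0.5
exact = math.exp(x)

taylor2 = 1 + x + x**2 / 2          # Taylor series to order 2
pade11 = (1 + x / 2) / (1 - x / 2)  # (1,1) Padé approximant of exp

print(abs(taylor2 - exact))  # ≈ 0.0237
print(abs(pade11 - exact))   # ≈ 0.0180, slightly closer
```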
Newton's method for root-finding equates to writing the Taylor expansion for a function out to order 1, then inverting it to obtain an expression for x. Obviously using another term in the Taylor expansion would give more accuracy, but since that produces a quadratic equation, the solution for x has a square root and is very messy (and in many cases, it is the square root function itself we are trying to solve in the first place). Since the Padé approximant of order (1,1) only has linear terms of x, it can still be easily solved, and provides quicker convergence to the root than Newton's method.
Once I had written out the equation for the correction term using this method (which is (F′ ⁄ α + F″ ⁄ (2F′))⁻¹, where α is the needed correction to the function value), I saw that it was the same as that given by Halley's method. What I find interesting is, if you read the Wikipedia article, you'll see the derivation of Halley's method is completely different from the process I went through. Arriving at the same expression from two different approaches increases my confidence that it is a good method for root-finding. As an added bonus, when computing with rational terms, the size of the terms grows more slowly than with Newton's method.
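Here is a minimal sketch of that correction term in action, using exact rational arithmetic; finding √2 as the root of F(x) = x² − 2, the starting point 3/2, and the helper name `halley_step` are all my own illustrative choices:

```python
from fractions import Fraction

def halley_step(x, f, fp, fpp):
    """One correction step using (F'/α + F''/(2F'))⁻¹, where α = -F(x)
    is the change in function value needed to reach the root."""
    alpha = -f(x)
    return x + 1 / (fp(x) / alpha + fpp(x) / (2 * fp(x)))

# Find sqrt(2) as the root of F(x) = x² − 2, in exact rationals.
f = lambda x: x * x - 2
fp = lambda x: 2 * x
fpp = lambda x: Fraction(2)

x = Fraction(3, 2)
for _ in range(3):
    x = halley_step(x, f, fp, fpp)
    print(x, float(x))
```

For reference, the first step from 3/2 lands on 99/70 ≈ 1.414286, already within about 7×10⁻⁵ of √2; convergence from there is very fast.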