Beeler, M., Gosper, R.W., and Schroeppel, R. HAKMEM. MIT AI Memo 239, Feb. 29, 1972. Retyped and converted to html ('Web browser' format) by Henry Baker, April, 1995.

RECURRENCE RELATIONS


ITEM 12 (Gosper & Salamin): "the Fast Fibonacci Transform"

(motivation for next item)

Define multiplication on ordered pairs

(A,B) (C,D) = (A C + A D + B C, A C + B D).

This is just (A X + B) * (C X + D) mod X^2 - X - 1, and so is associative, etc. We note (A,B) (1,0) = (A + B, A), which is the Fibonacci iteration. Thus, (1,0)^N = (FIB(N), FIB(N-1)), which can be computed in log N steps by repeated squaring, for instance. FIB(15) is best computed using N = 16, thus pushing the minimal binary addition chain counterexample to 30 (Liknaitzky). (See Knuth vol. 2, p 398.) By the last formula,

(1,0)^-1 = (FIB(-1),FIB(-2)) = (1, -1),

which, as a multiplier, backs up one Fibonacci step (further complicating the addition chain question). Observing that

(1,0)^0 = (FIB(0), FIB(-1)) = (0,1)

= the (multiplicative) identity, equate it with scalar 1. Define addition and scalar multiplication as with ordinary vectors.

(A,B)^-1 = (-A, A + B) / (B^2 + A B - A^2),

so we can compute rational functions when the denominator isn't zero. Now, by using power series and Newton's method, we can compute fractional Fibonaccis, and even e^(X,Y) and log (X,Y). If we start with (1,0) and square iteratively, the ratio will converge to the larger root of x^2 - x - 1 (= the golden ratio) about as rapidly as with Newton's method.
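In Python, a minimal sketch of the pair arithmetic and the repeated-squaring computation (the names pair_mul, pair_pow, fib and the small checks are mine, not the memo's):

def pair_mul(p, q):
    # (A,B) (C,D) = (A C + A D + B C, A C + B D)
    a, b = p
    c, d = q
    return (a*c + a*d + b*c, a*c + b*d)

def pair_pow(p, n):
    # binary powering: about log N pair multiplications
    result = (0, 1)                      # (FIB(0), FIB(-1)), the identity
    while n > 0:
        if n & 1:
            result = pair_mul(result, p)
        p = pair_mul(p, p)
        n >>= 1
    return result

def fib(n):
    # (1,0)^N = (FIB(N), FIB(N-1))
    return pair_pow((1, 0), n)[0]

assert [fib(k) for k in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
assert pair_mul((1, -1), (1, 0)) == (0, 1)   # (1,0)^-1 times (1,0) is the identity

p = (1, 0)
for _ in range(5):
    p = pair_mul(p, p)                   # p = (FIB(2^k), FIB(2^k - 1))
print(p[0] / p[1])                       # 1.61803398875..., converging to the golden ratio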

This method generalizes for other polynomial roots, being an improvement on the method of Bernoulli and Whittaker (Rektorys, Survey of Applicable Math., p. 1172). For the general second order recurrence, F(N+1) = X F(N) + Y F(N-1), we have the multiplication rule: (A,B) (C,D) = (A D + B C + X A C, B D + Y A C).

Inverse: (A,B)^-1 = (-A, X A + B) / (B^2 + X A B - Y A^2).

Two for the price of one: (F(1), Y F(0)) (1,0)^N = (F(N+1), Y F(N)).
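The same thing for the general rule, as a hedged Python sketch (the function names and the Pell-number test values are my own):

def mul(p, q, X, Y):
    # (A,B) (C,D) = (A D + B C + X A C, B D + Y A C),
    # i.e. (A Z + B)(C Z + D) mod Z^2 - X Z - Y
    a, b = p
    c, d = q
    return (a*d + b*c + X*a*c, b*d + Y*a*c)

def power(p, n, X, Y):
    result = (0, 1)                      # multiplicative identity
    while n > 0:
        if n & 1:
            result = mul(result, p, X, Y)
        p = mul(p, p, X, Y)
        n >>= 1
    return result

def two_for_one(n, X, Y, f0, f1):
    # (F(1), Y F(0)) (1,0)^n = (F(n+1), Y F(n))
    return mul((f1, Y*f0), power((1, 0), n, X, Y), X, Y)

# Pell numbers: F(n+1) = 2 F(n) + F(n-1), F(0) = 0, F(1) = 1 -> 0, 1, 2, 5, 12, 29, 70, ...
assert two_for_one(5, 2, 1, 0, 1) == (70, 29)    # (F(6), 1*F(5))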

ITEM 13 (Salamin & Gosper): LINEAR RECURRENCE RELATIONS

Recurrence relation:
(1)     A(k) = C(n-1) A(k-1) + ... + C(0) A(k-n)

with A(0), ..., A(n-1) given as initial values.
Consider the algebra with basis vectors X^0, X^1, X^2, ..., X^(n-1) and the identification

(2)     X^n = C(n-1) X^(n-1) + ... + C(0) X^0.
Thus if U, V, W are vectors and W = U V, then componentwise

(3)     W(i) = SUM over j,k of T(i,j,k) U(j) V(k),

where the T's depend only on the C's. The following simple procedure yields A(k): express the vector X^k as a linear combination of the basis vectors, then set X^m = A(m) (0 <= m < n). Computation of X^k can be done by k-n+1 applications of (2) or by computing the T's in (3) and then applying (3) O(log k) times.

PROOF: If 0 <= k < n, X^k is already a basis vector, so we get A(k). Suppose the procedure works for k < L.

X^L = X^n X^(L-n)
    = (C(n-1) X^(n-1) + ... + C(0)) X^(L-n)
    = C(n-1) X^(L-1) + ... + C(0) X^(L-n).

The procedure evaluates each X^m to A(m), so X^L evaluates to C(n-1) A(L-1) + ... + C(0) A(L-n) = A(L).  QED
The same procedure will work for negative k using

(4)     X^(-1) = (X^(n-1) - C(n-1) X^(n-2) - ... - C(1))/C(0),

the unique vector which when multiplied by X yields X^0.
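In Python, a sketch of the procedure for k >= 0 with integer C's (mulmod and nth_term are my names; the inner reduction is just repeated use of (2), and X^k is formed by binary powering as suggested):

def mulmod(u, v, c):
    # multiply two algebra elements (coefficient lists for X^0 .. X^(n-1)),
    # then reduce the high powers with (2): X^n = c[n-1] X^(n-1) + ... + c[0] X^0
    n = len(c)
    w = [0] * (2*n - 1)
    for j in range(n):
        for k in range(n):
            w[j + k] += u[j] * v[k]
    for d in range(2*n - 2, n - 1, -1):
        for m in range(n):
            w[d - n + m] += c[m] * w[d]
        w[d] = 0
    return w[:n]

def nth_term(k, c, a):
    # a = [A(0), ..., A(n-1)]; express X^k in the basis, then set X^m = A(m)
    n = len(c)
    result = [0] * n
    result[0] = 1                        # X^0
    x = [0] * n
    x[1] = 1                             # the vector X (this sketch assumes n >= 2)
    while k > 0:
        if k & 1:
            result = mulmod(result, x, c)
        x = mulmod(x, x, c)
        k >>= 1
    return sum(r * am for r, am in zip(result, a))

# Fibonacci: A(k) = 1*A(k-1) + 1*A(k-2), so c = [1, 1], A(0) = 0, A(1) = 1
assert nth_term(10, [1, 1], [0, 1]) == 55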

Let (2) be F(X) = 0 and V be the algebra constructed above. Then V is a field iff F(X) is irreducible in the field of the coefficients of V.

PROOF: Note that an element P of V is zero iff P(X) = 0 mod F(X). If G(X) H(X) = F(X), DEG G,H < DEG F, then the product of two non-zero elements is zero and so V can't be a field.

Let P be an arbitrary non-zero element of V.

DEG(GCD(P,F)) <= DEG P < DEG F.

If F(X) is irreducible, then GCD(P,F) = 1, so there exist Q(X), R(X) such that Q(X) P(X) + R(X) F(X) = 1. Then Q(X) P(X) = 1 mod F(X). Since P has an inverse, V is a field.

ITEM 14 (Gosper & Salamin):

Yet another way to rapidly evaluate recurrences is to observe that if

F(N+1) = X * F(N) + Y * F(N-1), then

F(N+2) = (X^2 + 2 Y) * F(N) - Y^2 * F(N-2).

This rate doubling formula can be applied iteratively to compute the Nth term in about log N steps, e.g., to get the 69th term given terms 1 and 2, we form 1, 2, 3, 5, 9, 13, 21, 37, 69. This sequence is computed from right to left by iteratively subtracting half the largest possible power of 2. This is sufficient to guarantee that some term further left will differ from the left one by that same (halved) power of 2; e.g., 5, ..., 21, 37 have a common difference of 2^4, so that term 37 can be found from term 5 and term 21 using the fourth application of the rate doubling formula.
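A sketch of the worked example in Python, for the Fibonacci case X = Y = 1 (the dictionaries P and Q hold the stride-2^j coefficients, got by applying the doubling formula to the coefficients themselves; the index triples are read off the chain above):

X, Y = 1, 1                              # Fibonacci; terms 1 and 2 are both 1
P, Q = {1: X}, {1: Y}                    # F(m+s) = P[s] F(m) + Q[s] F(m-s)
for e in range(5):
    s = 1 << e
    P[2*s] = P[s]**2 + 2*Q[s]            # the rate doubling formula applied to the coefficients
    Q[2*s] = -Q[s]**2

F = {1: 1, 2: 1}
chain = [(1, 2, 3), (1, 3, 5), (1, 5, 9), (5, 9, 13),
         (5, 13, 21), (5, 21, 37), (5, 37, 69)]
for a, b, c in chain:                    # c - b == b - a, a power of 2
    F[c] = P[c - b] * F[b] + Q[c - b] * F[a]

assert F[69] == 117669030460994          # agrees with direct iteration of F(n+1) = F(n) + F(n-1)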

The rate tripling formula is F(N+3) = (X^3 + 3 X Y) * F(N) + Y^3 * F(N-3).

For the K-tupling formula:

F(N+K) = P(K) * F(N) + Q(K) * F(N-K)
P(K+1) = X * P(K) + Y * P(K-1) (the same recurrence as F)
Q(K+1) = -Y * Q(K)
P(1) = X
Q(1) = Y
P(0) = 2
Q(0) = -1
Q(K) = -(-Y)^K
P(K) = 2 (-Y)^(K/2) * T(K; X/sqrt(-4 Y)), where T(K; X) is the Kth Chebychev polynomial = cos(K arccos X).
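A quick numerical check of these formulas; the coefficients X = 3, Y = 2 and the index ranges are arbitrary test choices:

X, Y = 3, 2
F, P = [0, 1], [2, X]                    # P(0) = 2, P(1) = X; P obeys the same recurrence as F
for _ in range(30):
    F.append(X*F[-1] + Y*F[-2])
    P.append(X*P[-1] + Y*P[-2])
Q = [-(-Y)**k for k in range(32)]        # Q(K) = -(-Y)^K

for K in range(1, 10):
    for N in range(K, 20):
        assert F[N+K] == P[K]*F[N] + Q[K]*F[N-K]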

If A(I), B(I), and C(I) obey the same second order recurrence,

(I)     [ A(I)  B(I) ]^(-1)  [ C(I) ]
        [ A(J)  B(J) ]       [ C(J) ]
is independent of I and J, provided the inverse exists. (This is true even if coefficients are not constant, since any two independent sequences form a basis.)

Plugging in F and P as defined above, we get an expression for the Nth term of the general second order recurrence in terms of P(N) and P(N+1):

[ P(N)  P(N+1) ]  [ P(0)  P(1) ]^(-1)  [ F(0) ]
                  [ P(1)  P(2) ]       [ F(1) ]   =   F(N).
Setting X = Y = 1, we get FIB(N) = (2 P(N+1) - P(N))/5, which is a complex but otherwise square root free closed form. (sqrt(-4) = 2i)
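With X = Y = 1, P is the Lucas sequence, so the identity is easy to check numerically (a small test of my own):

fib, luc = [0, 1], [2, 1]                # FIB and P (the Lucas numbers) for X = Y = 1
for _ in range(20):
    fib.append(fib[-1] + fib[-2])
    luc.append(luc[-1] + luc[-2])
for n in range(20):
    assert 5 * fib[n] == 2*luc[n+1] - luc[n]     # FIB(N) = (2 P(N+1) - P(N))/5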

With constant coefficients, the invariance (I) implies:

[ A(P+I)  A(P+J) ]  [ A(Q+I+K)  A(Q+J+K) ]^(-1)  [ A(R+K) ]
                    [ A(Q+I+L)  A(Q+J+L) ]       [ A(R+L) ]   =   A(P-Q+R)
These matrix relations generalize directly for Nth order recurrences.
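A numerical spot-check of this relation on Fibonacci; the index values below are arbitrary, and the 2 by 2 inverse is expanded by hand (multiplied through by the determinant) to stay in exact integer arithmetic:

A = [0, 1]
for _ in range(40):
    A.append(A[-1] + A[-2])              # Fibonacci

P, Q, R, I, J, K, L = 7, 3, 2, 2, 6, 1, 4
a, b = A[Q+I+K], A[Q+J+K]                # the matrix to be inverted
c, d = A[Q+I+L], A[Q+J+L]
det = a*d - b*c
u = d*A[R+K] - b*A[R+L]                  # det times the first component of the inverse applied
v = -c*A[R+K] + a*A[R+L]                 # det times the second component
assert A[P+I]*u + A[P+J]*v == det * A[P-Q+R]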

ITEM 15 (Chebychev):

The Nth Chebychev polynomial T(N) = T(N; x) = cos(N arccos x).

T(0) = 1, T(1) = x, T(N+1) = 2 x T(N) - T(N-1).

T(N; T(M)) clearly = T(N M).

x^N - 2^(1-N) T(N), whose degree is N-2, is the polynomial of degree < N which stays closest to x^N in the interval (-1,1), deviating by at most 2^(1-N) at the N+1 places where x = cos(K * pi/N), K = 0, 1, ..., N.
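A small check of the recurrence against the cosine definition and the composition identity (the function name T and test values are mine):

import math

def T(n, x):
    # T(0) = 1, T(1) = x, T(N+1) = 2 x T(N) - T(N-1)
    a, b = 1.0, x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2*x*b - a
    return b

x = 0.37
for n in range(10):
    assert abs(T(n, x) - math.cos(n * math.acos(x))) < 1e-12
assert abs(T(3, T(5, x)) - T(15, x)) < 1e-9      # T(N; T(M)) = T(N M)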

Generating function:

SUM over N of T(N) S^N = (1 - x S)/(1 - 2 x S + S^2)
First order (nonlinear) recurrence:
T(N+1) = x T(N) - sqrt((1 - x^2) (1 - T(N)^2)).
(T(N+1), -T(N)) = (T(1), -T(0)) (1,0)^N,

where (A,B) (C,D) = (A D + B C + 2 x A C, B D - A C).
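And a check of the pair form with this product rule (X = 2x, Y = -1); here (1,0)^N is accumulated one factor at a time, though it could equally be done by squaring:

import math

def cheb_mul(p, q, x):
    # (A,B) (C,D) = (A D + B C + 2 x A C, B D - A C)
    a, b = p
    c, d = q
    return (a*d + b*c + 2*x*a*c, b*d - a*c)

x = 0.37
acc = (x, -1.0)                          # (T(1), -T(0))
for n in range(6):
    t_next, minus_t = acc                # should be (T(n+1), -T(n))
    assert abs(t_next - math.cos((n+1) * math.acos(x))) < 1e-12
    assert abs(minus_t + math.cos(n * math.acos(x))) < 1e-12
    acc = cheb_mul(acc, (1.0, 0.0), x)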

ITEM 16:

tan(n arctan x) = (1/i) * ((1+ix)^n - (1-ix)^n) / ((1+ix)^n + (1-ix)^n)
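A numeric spot-check with Python complex arithmetic (arbitrary x and n; for real x, (1-ix)^n is the conjugate of (1+ix)^n):

import math

x, n = 0.3, 7
z = (1 + 1j*x)**n                        # (1+ix)^n
rhs = (z - z.conjugate()) / (1j * (z + z.conjugate()))
assert abs(rhs.imag) < 1e-12
assert abs(math.tan(n * math.atan(x)) - rhs.real) < 1e-12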
