## Math 221 (02) • Final Exam, with Solution Guide

Part I gave matrices E and M as follows, calling for: (1) E^T and E^-1 ; (2, 3) E·M and E^T·M ; (4, 5) M·E^T and M·E ; and (6) related discussion:
```
        1  0  0  0  0            a1  b1  c1  d1  e1
        0  1  0  0  3            a2  b2  c2  d2  e2
    E = 0  0  1  0  0  ,     M = a3  b3  c3  d3  e3  .
        0  0  0  1  0            a4  b4  c4  d4  e4
        0  0  0  0  1            a5  b5  c5  d5  e5

                   1  0  0  0  0             1  0  0  0  0
                   0  1  0  0  0             0  1  0  0 -3
    Ans.: 1. E^T = 0  0  1  0  0  ,   E^-1 = 0  0  1  0  0  .
                   0  0  0  1  0             0  0  0  1  0
                   0  3  0  0  1             0  0  0  0  1
```
2. & 3. E·M is exactly the matrix obtained from M by adding, to row 2, three times row 5 (and leaving all rows other than row 2 unchanged); E^T·M is exactly the matrix obtained from M by adding, to row 5, three times row 2 (and leaving all rows other than row 5 unchanged).

4. & 5. M·E^T is exactly the matrix obtained from M by adding, to column 2, thrice column 5 (leaving all columns other than column 2 unchanged); M·E is exactly the matrix obtained from M by adding, to column 5, thrice column 2 (leaving all columns other than column 5 unchanged).

6. Each of E·M^T and M·E^T is simply the transpose of the other: E·M^T = (M·E^T)^T , M·E^T = (E·M^T)^T .
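The row- and column-operation claims above are easy to spot-check numerically. Here is a small sketch (my own helper names and test matrix, not part of the exam), using plain Python lists of rows as matrices.

```python
# A quick numeric spot-check of the elementary-matrix claims of Part I
# (the helpers `matmul`/`identity` and the test matrix M are my own).

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

E = identity(5)
E[1][4] = 3      # the lone off-diagonal entry: 3 in row 2, column 5

M = [[10 * i + j for j in range(1, 6)] for i in range(1, 6)]   # arbitrary 5×5 matrix

EM = matmul(E, M)   # should be M with 3×(row 5) added to row 2
ME = matmul(M, E)   # should be M with 3×(column 2) added to column 5
```

Checking `EM` row by row (and `ME` column by column) against M confirms exactly the descriptions given in answers 2-5.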

Part II, 1. Given matrices A and B , with B being k × r and A being r × n , here’s why the k × n product matrix M =def B·A has rank(M) ≤ r :

Remember that the rank of A is, among other things, the dimension of the row space of A , which, without a doubt, cannot exceed r , the number of rows A has to begin with: rank(A) ≤ r . All the rows of the product B·A are, of course, linear combinations of the rows of A , and consequently the row space of B·A must be a subspace of the row space of A , and therefore be of dimension no greater. Summing up, then, to finish this argument:

rank(M) = rank(B·A) = dim(row-space(B·A)) ≤ dim(row-space(A)) ≤ r .

2. Given a k × n matrix M with rank(M) ≤ r and fully reduced row-equivalent form R = frref(M) , let E be the invertible k × k matrix (product of elementary matrices corresponding to the individual steps in the row-reduction process) multiplication by which converts M to R : E·M = R .

Multiplying each side of this equation on the left by the inverse E^-1 of E , we obtain a factorization M = E^-1·E·M = E^-1·R of M .

Here E^-1 is an invertible k × k matrix, while the fully row-reduced matrix R , of the same rank as M , has all rows below the first r rows populated entirely with zeros. It follows that the entries after the rth entry in each row of E^-1 play no role whatsoever in computing the entries of the matrix product E^-1·R — indeed, if we remove from E^-1 all the columns after the rth , and likewise remove from R all the (fully-zero) rows after the rth , we obtain matrices, call them B and A , say, that are in fact k × r and r × n , and, more importantly, still have the same product (namely, M ) as have E^-1 and R , which is to say (concluding this proof):

B·A = E^-1·R = E^-1·E·M = M .
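The construction of questions 1 and 2 can be carried out mechanically. The following sketch (my code and variable names, not the exam's, using `fractions` for exact arithmetic) row-reduces a sample M while recording an invertible E with E·M = R, then extracts the factors B and A as described above.

```python
# Rank factorization M = B·A, following Part II: B = first r columns of E^-1,
# A = first r rows of R = frref(M).  All names here are my own.
from fractions import Fraction

def rref_with_record(M):
    """Row-reduce M over exact fractions; return (R, E, rank) with E·M = R."""
    rows, cols = len(M), len(M[0])
    R = [[Fraction(x) for x in row] for row in M]
    E = [[Fraction(int(i == j)) for j in range(rows)] for i in range(rows)]
    pivots = 0
    for c in range(cols):
        pr = next((r for r in range(pivots, rows) if R[r][c] != 0), None)
        if pr is None:
            continue
        R[pivots], R[pr] = R[pr], R[pivots]
        E[pivots], E[pr] = E[pr], E[pivots]
        s = R[pivots][c]
        R[pivots] = [x / s for x in R[pivots]]
        E[pivots] = [x / s for x in E[pivots]]
        for r in range(rows):
            if r != pivots and R[r][c] != 0:
                f = R[r][c]
                R[r] = [x - f * y for x, y in zip(R[r], R[pivots])]
                E[r] = [x - f * y for x, y in zip(E[r], E[pivots])]
        pivots += 1
    return R, E, pivots

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]      # a 3×3 example of rank 2
R, E, rank = rref_with_record(M)
_, E_inv, _ = rref_with_record(E)          # rref of invertible E is I; record = E^-1

B = [row[:rank] for row in E_inv]          # first r columns of E^-1  (k × r)
A = R[:rank]                               # first r non-zero rows of R  (r × n)
```

By the argument above, B·A reproduces M exactly, with inner dimension equal to rank(M).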

3. Why, now, should it be the case that rank(M) is the least of the integers r for which there are factorizations M = B·A for matrices A and B as envisioned in question 1? Because the results of questions 1 and 2, taken together, assure that those are exactly the integers r with rank(M) ≤ r , and it is as plain as the fact that George Washington’s white horse is white that the least integer r with rank(M) ≤ r must be rank(M) itself.

Part III, 1. All the hydrogens (H), at the points (1, 1, 1) , (–1, 1, –1) , (1, –1, –1) , and (–1, –1, 1) have distance from the carbon (C) at the origin (0, 0, 0) given by the (positive choice of) square root of the quantity (±1)² + (±1)² + (±1)² = 3 . So, indeed, the hydrogens are all equidistant from the carbon, with common distance given by √3 .

2. As to the distances of the various hydrogens from each other, notice to begin with that the distance of any of the last three (each with two negative coordinates) from the first one, at (1, 1, 1) , is the (positive) square root of ((–1) – 1)² + ((–1) – 1)² = 8 , i.e., is √8 (the two flipped coordinates each contribute 4; the coordinate the two points share contributes 0). And three very similar calculations show that the distance between any two of the last three hydrogens is likewise √8 (details left to the reader).
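The "details left to the reader" are quickly checked by machine; here is a sketch (my own code) computing every carbon-hydrogen and hydrogen-hydrogen distance.

```python
# Spot-check of Part III, questions 1-2: carbon at the origin, the four
# hydrogens at the stated points; all C-H distances are √3, all H-H are √8.
import math

C = (0.0, 0.0, 0.0)
H = [(1, 1, 1), (-1, 1, -1), (1, -1, -1), (-1, -1, 1)]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

ch_distances = [dist(C, h) for h in H]
hh_distances = [dist(H[i], H[j]) for i in range(4) for j in range(i + 1, 4)]
```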

3. For the angle at the carbon (C) between any two of the (C)(H) rays, we offer two approaches.

First approach: recall that the cosine of the angle θ between non-zero vectors u and v is given in all generality by the formula

 cos(θ) = (u • v) / (||u||·||v||) .

Where u and v are the coordinate vectors of any two (H)es, we easily calculate (in each of the six cases) that ||u||·||v|| = √3·√3 = 3 , while u • v = –1 , so that cos(θ) = (–1)/3 .
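The six cases can all be run at once; this sketch (my code) computes the cosine for every pair of hydrogen position vectors.

```python
# First-approach check: cos(θ) = u·v / (‖u‖·‖v‖) for each of the six pairs
# of hydrogen vectors; every pair should give exactly -1/3.
import math

H = [(1, 1, 1), (-1, 1, -1), (1, -1, -1), (-1, -1, 1)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

cosines = [dot(H[i], H[j]) / (norm(H[i]) * norm(H[j]))
           for i in range(4) for j in range(i + 1, 4)]
```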

Second approach:
```
        H
        ·
   √3  /|
      / |
   C •  | √8
      \ |
   √3  \|
        ·
        H
```

The distances from the single carbon to the various hydrogens all being the same, namely √3, and the distances between any two hydrogens also all being the same, namely √8 = 2√2, we may subdivide any HCH triangle, all of which are both isosceles and congruent one to another (see figure above), into two (mutually congruent) subtriangles, by erecting a perpendicular to the HH side from the C vertex, thereby not only bisecting both the angle at vertex C and the HH side, but also in fact creating two (1, √2, √3) right triangles*, each of which has, for the cosine of the angle ∠HCM at vertex C, just cos(∠HCM) = 1/√3 (see second figure, below).

```
        H
        ·
   √3  /| √2
      / |
   C •--+ M
      \ |
   √3  \| √2
        ·
        H
```

Now the angle ∠HCH we are after is just twice ∠HCM, so we may apply the standard double angle formula for cosines,

cos(∠HCH) = cos(2×∠HCM) = cos²(∠HCM) – sin²(∠HCM) = 2cos²(∠HCM) – 1 = 2×(1/√3)² – 1 = (2/3) – 1 = –1/3 ,

which again solves our problem, just as before, but without dot products.

4. As for the angles ∠HHH, notice that every three-hydrogen triangle (see figure below) is equilateral, since each hydrogen-hydrogen distance is √8. But equilateral triangles are also equiangular, so at each hydrogen we must have angle ∠HHH = π/3 (think: 60 degrees), and cos(∠HHH) = cos(π/3) = 1/2.

```
         H
         ·
        / \
   √8  /   \  √8
      /     \
  H · ——————— · H
         √8
```

* (How many other “standard” right triangles do you know of, or can you think of, besides the (1, √2, √3)? — The (1, 1, √2)? the (3, 4, 5)? the (5, 12, 13)? the (1, √3, √4)? the (1, √4, √5)? the (1, √5, √6)? ... ?)
Part IV: We record the five matrices (3 by 3, 4 by 4, 5 by 5, 6 by 6, and general n by n , with n ≥ 7) named in this part:
```
    | 1  1  0 |        | 1  1  0  0 |
A = | 0  1  1 | ,  B = | 0  1  1  0 | ,
    | 1  0  1 |        | 0  0  1  1 |
                       | 1  0  0  1 |

    | 1  1  0  0  0 |        | 1  1  0  0  0  0 |
    | 0  1  1  0  0 |        | 0  1  1  0  0  0 |
C = | 0  0  1  1  0 | ,  D = | 0  0  1  1  0  0 | ,
    | 0  0  0  1  1 |        | 0  0  0  1  1  0 |
    | 1  0  0  0  1 |        | 0  0  0  0  1  1 |
                             | 1  0  0  0  0  1 |

        | 1  1  0  0  0  . . .  0  0 |
        | 0  1  1  0  0  . . .  0  0 |
        | 0  0  1  1  0  . . .  0  0 |
        | 0  0  0  1  1  . . .  0  0 |
and M = | :  :  :  :             :  : | .
        | 0  0  0  0  . . .  1  1  0 |
        | 0  0  0  0  . . .  0  1  1 |
        | 1  0  0  0  . . .  0  0  1 |
```
1. It is easy to calculate det(A) = 2 (there are only two non-zero products involved, and they each come out to +1), so A is invertible.

As for B : row 4, if we add row 2 to it, becomes [ 1 1 1 1 ] ; and row 3, if we add row 1 to it, becomes [ 1 1 1 1 ] also; so subtracting this modified row 3 from the already modified row 4 gives a row of zeros, and thus elementary row operations row-reduce B to a row-equivalent form with a row of zeros, whence B is not invertible.
(Another approach is to compute det(B) , using the approach of expanding by minors along, say, the bottom row: the minor for the entry 1 in the SW corner is a lower triangular 3 by 3 matrix, with all ones on the main diagonal, and hence of determinant 1, while the minor for the entry 1 in the SE corner is an upper triangular 3 by 3 matrix, again with all ones on the main diagonal, and hence of determinant 1 as well. But the signs linked with those two corner entries are –1 and +1, respectively, and so det(B) = (–1)·1·1 + (+1)·1·1 = 0, and B is not invertible.)

C , however, like A , is invertible again: computing det(C) by the same bottom-row expansion used for B , we find that det(C) = 2. The minor for the entry 1 in the SW corner is a lower triangular 4 by 4 matrix, with all ones on the main diagonal, and hence of determinant 1, while the minor for the entry 1 in the SE corner is an upper triangular 4 by 4 matrix, again with all ones on the main diagonal, and hence of determinant 1 as well; but the signs linked with those two corner entries are now both +1, so that det(C) = 1 + 1 = 2.

Just as was the case for B , the matrix D is not invertible. The reasons, too, are much the same: we can, for example, add rows 2 and 4 to row 6 to obtain a row full of ones; but we can add rows 5 and 3 to row 1 to obtain another row full of ones; subtracting the new row 1 from the new row 6 delivers a row full of zeros, which shows that the frref of D has a row full of zeros, guaranteeing that D is not invertible. Alternatively, we can compute det(D) by expanding along the bottom row: just as before, the minors of the two entries 1 are lower- or upper-triangular matrices with all ones along their diagonals, hence of determinant 1, but the signs linked with the positions of those entries 1 are –1 and +1, respectively, so that (just as before) det(D) = (–1) + (+1) = 0, which again guarantees that D is not invertible.

2. Inverses for A and for C may be found either by explicitly row-reducing each of them to the corresponding identity matrix, and seeing what the sequence of row operations actually used does to the identity matrix itself, or by using the (1/det(X))·Adj(X) approach; by either method,

```
        | +½  –½  +½ |            | +½  –½  +½  –½  +½ |
        | +½  +½  –½ |            | +½  +½  –½  +½  –½ |
 A^-1 = | –½  +½  +½ | ,  C^-1 =  | –½  +½  +½  –½  +½ | .
                                  | +½  –½  +½  +½  –½ |
                                  | –½  +½  –½  +½  +½ |
```
3. Convenient (if not fully reduced) row-echelon forms of B and D are
```
      | 1  1  0  0 |            | 1  1  0  0  0  0 |
      | 0  1  1  0 |            | 0  1  1  0  0  0 |
 B' = | 0  0  1  1 |  and  D' = | 0  0  1  1  0  0 | ,
      | 0  0  0  0 |            | 0  0  0  1  1  0 |
                                | 0  0  0  0  1  1 |
                                | 0  0  0  0  0  0 |
```
respectively, so the nullspaces of these matrices have dimension 1, and it suffices to give but a single non-zero vector, in either case, to exhibit a basis. So, for example, as basis for the nullspace of B we may take the column vector transpose of  [ –1   +1   –1   +1 ] , and as basis for the nullspace of D we may take the column vector transpose of  [ –1   +1   –1   +1   –1   +1 ] .

4. In general, the n by n matrices M will be invertible when n is odd, and not invertible, but of nullity 1, when n is even. Here’s why.

When n is even, if we add to the bottom row all the even-numbered rows above it, we completely fill the bottom row with ones (think of the adjacent pair of ones in each row as a little brick, or domino — we’re just bricklaying a little wall of ones between the existing ones in the bottom row). Similarly, if we add to the top row all the odd-numbered rows below it, we completely fill the top row with ones.
We now subtract the (new) top row from the (new) bottom row, obtaining (as newest bottom row) a row full of zeros, and then proceed to subtract from the top row all the odd-numbered rows below it, thereby restoring the top row to its original condition. The end result of the row operations just described is a convenient (if not fully reduced) row-echelon form of M , whose first n – 1 rows are exactly the rows of M , while the bottom row is all-zero. The nullity of M is thus 1, and a handy basis for the nullspace of M is the n-length column whose entries alternate strictly between +1 and –1.
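The even-n "bricklaying" argument and its alternating nullspace vector can be checked directly; here is a sketch (my own construction of the matrix family, not the exam's notation).

```python
# Check of Part IV, question 4 (even case): the alternating ±1 vector lies in
# the nullspace of the n×n matrix M_n when n is even.

def build_M(n):
    """1s on the diagonal and superdiagonal, plus a 1 in the SW corner."""
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 1
        M[i][(i + 1) % n] = 1     # wraps to column 1 on the last row
    return M

def matvec(M, v):
    return [sum(r * x for r, x in zip(row, v)) for row in M]

n = 6
v = [(-1) ** (i + 1) for i in range(n)]   # [-1, +1, -1, +1, -1, +1]
image = matvec(build_M(n), v)             # should be the zero vector
```

For odd n the same vector is visibly not killed (the last row gives –2), consistent with the invertibility claim below.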

When n is odd, on the other hand, calculating det(M) the same way we did for A or C — that is, expanding by minors along the bottom row, whose two nonzero entries 1 both live in positions linked to the sign +1 and have minors that are either lower- or upper-triangular matrices having (only ones on their main diagonal, and hence) determinant 1 — shows us once again that det(M) = 1 + 1 = 2.
And as for M^-1 in these cases, we will have
```
         | +½  –½  +½  –½  +½  . . .  –½  +½ |
         | +½  +½  –½  +½  –½  . . .  +½  –½ |
         | –½  +½  +½  –½  +½  . . .  –½  +½ |
         | +½  –½  +½  +½  –½  . . .  +½  –½ |
 M^-1 =  |  :   :   :   :              :   : |
         | –½  +½  –½  +½  . . .  +½  –½  +½ |
         | +½  –½  +½  –½  . . .  +½  +½  –½ |
         | –½  +½  –½  +½  . . .  –½  +½  +½ |
```

where the entries on the main diagonal, and each entry immediately below those, are all +½ , and the remaining entries all alternate, checkerboard fashion, between +½ and –½ .
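The claimed ±½ pattern can be verified for several odd n at once. In the sketch below, the indexing rule inside `claimed_inverse` is my own reading of the pattern just described (+½ on the diagonal and subdiagonal, alternating signs above and below); the check is that M_n times the claimed inverse is the identity.

```python
# Build the claimed ±1/2 inverse for odd n and verify M_n · (claimed inverse) = I.
from fractions import Fraction

def build_M(n):
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 1
        M[i][(i + 1) % n] = 1
    return M

def claimed_inverse(n):
    """+1/2 on diagonal and subdiagonal; signs alternate away from them."""
    half = Fraction(1, 2)
    X = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j >= i:
                X[i][j] = half * (-1) ** (j - i)      # on/above the diagonal
            else:
                X[i][j] = half * (-1) ** (i - j + 1)  # below (subdiagonal is +)
    return X

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

products = {n: matmul(build_M(n), claimed_inverse(n)) for n in (3, 5, 7, 9)}
```

For n = 3 and n = 5 the generated matrices agree entry-by-entry with the A^-1 and C^-1 recorded in answer 2.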

Part V: The diagrammatic inequality (1) expresses the geometric observation that the LH figure, with its four triangles and central parallelogram, has area no greater than that of the RH figure, presumably on the grounds that the four triangles are exactly the same in the two figures, while the central rectangle within the RH figure must have area at least as great as the area of the central parallelogram within the LH figure (as the sides in both cases are exactly the same, but the altitude on the RHS is ≥ the altitude on the LHS).

Inequality (2) recapitulates inequality (1) algebraically, taking for its LHS the algebraic expression (|a| + |y|)·(|b| + |x|) for the area of the entire rectangular LH figure, and for its RHS the sum of the algebraic expressions for the areas of the four little triangles and the central rectangle making up the RH figure.

Inequality (5) is what results if, after carrying out the multiplication indicated in the RHS of inequality (2) , one then subtracts from either side the quantity |a||b| + |x||y| ; inequality (4) is the familiar triangle inequality for ordinary real numbers; and the ∴ symbol at step (3) (“&there4;” in HTML) is probably doing double duty, pointing both to the fact that inequality (5) follows from inequality (2) , and to the conclusion to be drawn from the conjunction of inequalities (4) and (5) , that |ax + by| , which, after all, is just | u • v | , is ≤ √(a² + b²)·√(x² + y²) , which, in turn, is just || u ||·|| v || , i.e., that | u • v | ≤ || u ||·|| v || .
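The chain of inequalities (4) and (5) is the two-dimensional Cauchy-Schwarz inequality, and a brute-force numeric sweep (my sketch, arbitrary sample values) makes a convincing spot-check.

```python
# Spot-check of |ax + by| ≤ |a||x| + |b||y| ≤ √(a²+b²)·√(x²+y²),
# i.e. |u • v| ≤ ‖u‖·‖v‖ for u = (a, b), v = (x, y).
import math
import itertools

vals = [-3, -1, -0.5, 0, 0.7, 2, 5]
ok = True
for a, b, x, y in itertools.product(vals, repeat=4):
    lhs = abs(a * x + b * y)
    mid = abs(a) * abs(x) + abs(b) * abs(y)
    rhs = math.sqrt(a * a + b * b) * math.sqrt(x * x + y * y)
    ok = ok and lhs <= mid + 1e-12 and mid <= rhs + 1e-12
```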

Part VI. Q. 1: F_1 through F_10 are: 1, 1, 2, 3, 5, 8, 13, 21, 34, and 55, respectively.
F_{-10} through F_{-1} are: –55, 34, –21, 13, –8, 5, –3, 2, –1, and 1, respectively.

Q. 2: The “fundamental recursion formula” (*) F_{n+1} = F_{n-1} + F_n is posited initially only for n ≥ 1 , where F_{n-1} and F_n are already known.
For n = 0 , of course, F_{n+1} = F_1 = 1 , F_n = F_0 = 0 , and F_{n-1} = F_{-1} = +F_1 = 1 , so (*) continues to hold (saying just: 1 = 1 + 0).
For n < 0 , say n = –k with k > 0 , what needs to be checked is whether F_{n+1} = F_{n-1} + F_n remains true, given, after all, that F_n is defined for negative n via the equation
F_n = (–1)^{k+1}·F_k         (k = –n > 0) .

When n = –k with k > 0 , of course, n + 1 = –k + 1 = –(k – 1) and n – 1 = –k – 1 = –(k + 1) . So it's a matter of checking, for k > 0 , whether

(–1)^{(k-1)+1}·F_{k-1} = (–1)^{k+1}·F_k + (–1)^{(k+1)+1}·F_{k+1} .

Now, when k is even (and > 0), this means checking whether F_{k-1} = –F_k + F_{k+1} ; but just adding F_k to both sides gives confirmation in the form of a known instance of (*). And when k is odd (and > 2), it means checking instead whether –F_{k-1} = F_k – F_{k+1} ; this time, we add F_{k-1} + F_{k+1} to both sides and again obtain confirmation in the form of a known instance of (*). The only case not yet treated, namely that of k = 1 , is now too easy to waste HTML bytes on (don’t believe me? — try it!).

Q. 3: In a nutshell (inducting on n): F_1² + . . . + F_n² = (F_1² + . . . + F_{n-1}²) + F_n² = F_{n-1}·F_n + F_n² = F_n·(F_{n-1} + F_n) = F_n·F_{n+1} .
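The identity is easy to test in code; this sketch (my own `fib` helper, with F_0 = 0, F_1 = 1) checks it for the first several n.

```python
# Check of Q. 3: F_1² + ... + F_n² = F_n · F_{n+1}.

def fib(n):
    a, b = 0, 1          # F_0, F_1
    for _ in range(n):
        a, b = b, a + b
    return a

checks = [sum(fib(i) ** 2 for i in range(1, n + 1)) == fib(n) * fib(n + 1)
          for n in range(1, 15)]
```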

 Q. 4: Where M = | 0 1 ; 1 1 | (rows listed between semicolons), we have: M·(1, 0)^T = 1st column of M = (0, 1)^T ; M·(0, 1)^T = 2nd column of M = (1, 1)^T ; and M·(x, y)^T = (0x + 1y, 1x + 1y)^T = (y, x + y)^T .
 Q. 5: When n ≥ 1 , M·(F_{n-1}, F_n)^T = (F_n, F_{n-1} + F_n)^T = (F_n, F_{n+1})^T , using the frf (*) and the third part of Q. 4.
Q. 6: The only Fibonacci number the NW entry 0 in M can be is F_0 .
With totally undirected thinking, the remaining entries 1 in M might be any of F_1 , F_2 , or F_{-1} .
 But, coupled with the results of Qq. 4 & 5, the suggestions to think of (1, 0)^T as (F_{-1}, F_0)^T and of (0, 1)^T as (F_0, F_1)^T lead to the insights:
 1st column of M = M·(1, 0)^T = M·(F_{-1}, F_0)^T = (F_0, F_1)^T , and
 2nd column of M = M·(0, 1)^T = M·(F_0, F_1)^T = (F_1, F_2)^T ,
 on the basis of which we are led to write: M = | F_0 F_1 ; F_1 F_2 | .

 Q. 7 (i): We wish to confirm that M^n = | F_{n-1} F_n ; F_n F_{n+1} | at least for n = 0, 1, 2 . First, for n = 0 , we have
 | F_{n-1} F_n ; F_n F_{n+1} | = | F_{-1} F_0 ; F_0 F_1 | = | 1 0 ; 0 1 | = I_{2×2} = M^0 ; for n = 1 , just see the last line of Q. 6, above; and for n = 2 , notice
 | F_{n-1} F_n ; F_n F_{n+1} | = | F_1 F_2 ; F_2 F_3 | = | 1 1 ; 1 2 | , while M^2 = | 0 1 ; 1 1 |·| 0 1 ; 1 1 | = | 1 1 ; 1 2 | , too.
 (ii) Assuming that n > 0 is an integer for which the formula M^n = | F_{n-1} F_n ; F_n F_{n+1} | happens to be valid, we have, as desired,
 M^{n+1} = M·M^n = | 0 1 ; 1 1 |·| F_{n-1} F_n ; F_n F_{n+1} | = | F_n F_{n+1} ; F_{n-1}+F_n F_n+F_{n+1} | = | F_n F_{n+1} ; F_{n+1} F_{n+2} | .
N.B.: In Q. 13 we will see that the expression given here for M^n is in fact valid for all integers n , and not just for n ≥ 0 .
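The formula of 7(i) is also easy to confirm by brute force; this sketch (my helper names) compares explicit 2×2 matrix powers against the Fibonacci entries.

```python
# Check of Q. 7: M^n = [[F_{n-1}, F_n], [F_n, F_{n+1}]] for n ≥ 1
# (and M^0 = I), using plain 2×2 integer matrices.

def matmul2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(M, n):
    P = [[1, 0], [0, 1]]
    for _ in range(n):
        P = matmul2(P, M)
    return P

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

M = [[0, 1], [1, 1]]
ok = all(mat_pow(M, n) == [[fib(n - 1), fib(n)], [fib(n), fib(n + 1)]]
         for n in range(1, 12))
```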

Q. 8, (i): det(M) = 0·1 – 1·1 = 0 – 1 = –1 ;     (ii): det(M^n) = (det(M))^n = (–1)^n ;     (iii): (using 7(i)) det(M^n) = F_{n-1}·F_{n+1} – (F_n)² ;     and
(iv): combining (ii) and (iii), we have F_{n-1}·F_{n+1} – (F_n)² = (–1)^n ; if we rebalance this by adding (F_n)² to both sides, we obtain
F_{n-1}·F_{n+1} = (F_n)² + (–1)^n , as desired.
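This is Cassini's identity, and a quick sweep (my sketch) confirms it numerically.

```python
# Check of Q. 8(iv): F_{n-1}·F_{n+1} - (F_n)² = (-1)^n.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

cassini = [fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n
           for n in range(1, 20)]
```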

 Q. 9, (i): First, divide both sides of the equation F_{n-1}·F_{n+1} = (F_n)² + (–1)^n by (F_n)² to obtain (F_{n-1}/F_n)·(F_{n+1}/F_n) = 1 + (–1)^n/(F_n)² ;
 now, multiply both sides of this result by F_n/F_{n-1} to obtain the desired F_{n+1}/F_n = (1 + (–1)^n/(F_n)²)·(F_n/F_{n-1}) .
(ii): Divide both sides of the equation F_{n-1}·F_{n+1} – (F_n)² = (–1)^n (with which 8(iv) begins) by F_n·F_{n-1} , and you have the desired equation
 F_{n+1}/F_n – F_n/F_{n-1} = (–1)^n/(F_n·F_{n-1}) .
(Alternatively, expand (multiply out) the RHS of the final result of 9(i), simplify, and then rebalance, to come to the same conclusion.)

Q. 10: The number F_{n+k+1} is the SE entry (in position (2,2)) in the matrix power M^{n+k} . But as M^{n+k} = M^n·M^k , and the SE entry in M^n·M^k is obtained by "zipping" together row 2 of M^n with column 2 of M^k , that entry is just F_n·F_k + F_{n+1}·F_{k+1} ; thus:
F_{n+k+1} = F_n·F_k + F_{n+1}·F_{k+1} .
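The addition formula just derived checks out over a grid of small n and k (my sketch).

```python
# Check of Q. 10: F_{n+k+1} = F_n·F_k + F_{n+1}·F_{k+1}.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

addition_ok = all(fib(n + k + 1) == fib(n) * fib(k) + fib(n + 1) * fib(k + 1)
                  for n in range(0, 10) for k in range(0, 10))
```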

 Q. 11: Using either direct matrix multiplication or 7(i) (with n = 2), one finds: M·M = M² = | 1 1 ; 1 2 | .
 But I_{2×2} + M = | 1 0 ; 0 1 | + | 0 1 ; 1 1 | = | 1 1 ; 1 2 | , also. So M² = I_{2×2} + M , QED.
 (BTW: What do you get if you multiply on both sides by M^{n-1} , for n > 1 ? — A.: M^{n+1} = M^{n-1} + M^n , an analogue to the frf (*).)

Q. 12: Rebalancing M² = I_{2×2} + M , we obtain M² – M = I_{2×2} . But M² – M can be factored, two different ways:

M·(M – I_{2×2}) = M² – M = (M – I_{2×2})·M , whence also: M·(M – I_{2×2}) = I_{2×2} = (M – I_{2×2})·M ,

and this shows M has (M – I_{2×2}) as two-sided inverse. (An alternate approach: by 8(i), M is invertible. Writing M^-1 for its inverse, multiply both sides of M² – M = I_{2×2} (on the left, say) by M^-1 . Simplifying, out pops M – I_{2×2} = M^-1 .)

Q. 13: Let n < 0 — say, n = –k with k = |n| > 0 . To see that the formula of 7(i) remains valid for such n , when M^n is defined (as in the problem) as (M^-1)^k , it suffices to check that both (a) the RHS of 7(i) and (b) the matrix M^n = (M^-1)^k serve as inverse for the matrix M^k . First check (a):

 M^k·| F_{n-1} F_n ; F_n F_{n+1} | = | F_{k-1} F_k ; F_k F_{k+1} |·| (–1)^{k+2}·F_{k+1}  (–1)^{k+1}·F_k ; (–1)^{k+1}·F_k  (–1)^k·F_{k-1} | =
 = | (–1)^{k+2}·(F_{k-1}F_{k+1} – (F_k)²)  (–1)^{k+1}·(F_{k-1}F_k – F_k F_{k-1}) ; (–1)^{k+2}·(F_k F_{k+1} – F_{k+1}F_k)  (–1)^{k+1}·((F_k)² – F_{k+1}F_{k-1}) | =
 = | (–1)^{k+2}·(–1)^k  (–1)^{k+1}·0 ; (–1)^{k+2}·0  (–1)^{k+1}·(–(–1)^k) | = | +1 0 ; 0 +1 | = I_{2×2} .

The other check (b) is easier, because, with M being invertible, so is each positive power M^k of M , and (M^k)^-1 = (M^-1)^k .

Q. 14: Recall that the quadratic formula identifies as roots of the polynomial ax² + bx + c the quantities λ± = (1/(2a))·(–b ± √(b² – 4ac)) , and guarantees validity of the factorization formula ax² + bx + c = a·(x – λ+)(x – λ–) .

For the polynomial p(x) = x² – x – 1 of the problem, we have a = 1 , b = –1 , and c = –1 , so that λ+ = ½(1 + √5) and λ– = ½(1 – √5) .

Alternatively, just multiply out the product (x – λ+)(x – λ–) = (x – ½(1 + √5))·(x – ½(1 – √5)) , and notice that the result simplifies, using the relations (do check these, please!) λ+ + λ– = 1 and λ+·λ– = –1 , to exactly the desired polynomial p(x) = x² – x – 1 of the problem.

 Q. 15: M – x·I_{2×2} = | 0 1 ; 1 1 | – | x 0 ; 0 x | = | –x 1 ; 1 1–x | , whence det(M – x·I_{2×2}) = x² – x – 1 ; the eigenvalues of M are the roots of this polynomial, found in Q. 14.
 Q. 16, (i): Yes, M·(1, λ+)^T = (λ+, 1 + λ+)^T and λ+·(1, λ+)^T = (λ+, (λ+)²)^T are equal, because (λ+)² = 1 + λ+ (this follows from (λ+)² – (1 + λ+) = p(λ+) = 0 ).
 (ii): Again: yes, M·(1, λ–)^T = (λ–, 1 + λ–)^T and λ–·(1, λ–)^T = (λ–, (λ–)²)^T are equal (argue as for (i) — just replace each “ λ+ ” by a “ λ– ” ).
 [Remarks: The upshot here, to be used in Qq. 18-22, is that (1, λ+)^T is an eigenvector for M corresponding to the eigenvalue λ+ , while
 (1, λ–)^T is an eigenvector for M corresponding to the eigenvalue λ– .]

Q. 17: (λ+)² = 1 + λ+ has already been checked, in 16(i) above; (λ–)² = 1 + λ– is checked the same way.
And both (λ+)^-1 = –λ– and (λ–)^-1 = –λ+ follow from the observation λ+·λ– = –1 made near the end of the discussion of Q. 14 above.

 Q. 18: The 2×2 matrices S and Λ are defined as S = | 1 1 ; λ– λ+ | and Λ = | λ– 0 ; 0 λ+ | . And the discussion of Q. 16
 provides column-by-column confirmation that M·S ( = | λ– λ+ ; 1+λ– 1+λ+ | ) = S·Λ ( = | λ– λ+ ; (λ–)² (λ+)² | ) .

Q. 19: We have det(S) = λ+ – λ– = ½(1 + √5) – ½(1 – √5) = ½√5 + ½√5 = √5 ; and det(Λ) = λ+·λ– = –1 , as in Q. 17 above.

Q. 20: Multiplying both sides of the equation (cf. Q. 18) M·S = S·Λ on the right by S^-1 delivers M = M·S·S^-1 = S·Λ·S^-1 . Raising both sides to a positive integer power n > 0 , one obtains
M^n = (S·Λ·S^-1)^n = (S·Λ·S^-1)·(S·Λ·S^-1)· . . . ·(S·Λ·S^-1)·(S·Λ·S^-1) = (S·Λ·S^-1)·(S·Λ·S^-1)· . . . ·(S·Λ²·S^-1) = . . . = (S·Λ·S^-1)·(S·Λ^{n-1}·S^-1) = S·Λ^n·S^-1 .
Thus M^n = S·Λ^n·S^-1 for all integers n > 0 . For n = 0 , this equation remains valid because M^0 = I_{2×2} , and (as Λ^0 = I_{2×2} as well, by the same convention) S·Λ^0·S^-1 = S·S^-1 = I_{2×2} , too. Finally, as each of the matrices M , S , and Λ is invertible (why?), we must have M^-1 = (S·Λ·S^-1)^-1 = S·Λ^-1·S^-1 , and raising both sides of this to the |n|th power, for n = –|n| < 0 , gives

M^n = (M^-1)^{|n|} = (S·Λ^-1·S^-1)^{|n|} = S·(Λ^-1)^{|n|}·S^-1 = S·Λ^n·S^-1 ,

which thus finishes settling the matter for all integers n .
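The diagonalization can also be checked numerically, in floating point. In this sketch (my names; the explicit 2×2 inverse of S uses det(S) = √5 from Q. 19) we compare S·Λ^n·S^-1 against directly computed integer powers of M.

```python
# Numeric check of Q. 20: M^n = S·Λ^n·S^-1 for several n.
import math

SQRT5 = math.sqrt(5)
lp = (1 + SQRT5) / 2          # λ+
lm = (1 - SQRT5) / 2          # λ-

S = [[1, 1], [lm, lp]]
S_inv = [[lp / SQRT5, -1 / SQRT5], [-lm / SQRT5, 1 / SQRT5]]   # (1/√5)·adj(S)

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def int_pow(M, n):
    P = [[1, 0], [0, 1]]
    for _ in range(n):
        P = matmul2(P, M)
    return P

M = [[0, 1], [1, 1]]
results = {}
for n in range(1, 10):
    Lam_n = [[lm ** n, 0], [0, lp ** n]]
    results[n] = matmul2(S, matmul2(Lam_n, S_inv))   # should match M^n
```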

 Q. 21: Three preliminary remarks will help. First, for notational convenience, we will write [0 1]^T in place of the corresponding column vector.
The other two: (α) whatever the 2 × m matrix A , the matrix product [0 1]·A just gives the second (= bottom) row of A ; and
(β) whatever the k × 2 matrix B , the matrix product B·[0 1]^T just gives the second (= right hand) column of B .

(i) Applying remark (α) above to A = M^n , and taking into account the formula of 7(i), we have [0 1]·M^n = [F_n F_{n+1}] ; then, by applying (β) to B = [F_n F_{n+1}] (or by just multiplying out), we have [F_n F_{n+1}]·[0 1]^T = F_{n+1} . Thus [0 1]·M^n·[0 1]^T = F_{n+1} .

(ii) With S as above, and relying on remark (α), we have [0 1]·S = the bottom row of S = [λ– λ+] .

(iii) Relying on remark (β) above, S^-1·[0 1]^T is the second column of S^-1 . Its entries are easily computed, via the cofactors / signed minors approach, as being: on top, –1/√5 , and below, +1/√5 , or [ –1/√5   +1/√5 ]^T , all told.

Q. 22: Combining the results of Qq. 20 and 21, expanding Λ^n , multiplying out (twice), and simplifying, we obtain:
 F_{n+1} = [0 1]·M^n·[0 1]^T = [0 1]·S·Λ^n·S^-1·[0 1]^T = [λ– λ+]·| (λ–)^n 0 ; 0 (λ+)^n |·[ –1/√5   +1/√5 ]^T =
 = [(λ–)^{n+1} (λ+)^{n+1}]·[ –1/√5   +1/√5 ]^T = –(λ–)^{n+1}/√5 + (λ+)^{n+1}/√5 = ((λ+)^{n+1} – (λ–)^{n+1})/√5 .

Q. 23: We establish the “closed-form” formula F_n = ((λ+)^n – (λ–)^n)/√5 for the Fibonacci numbers “out of the blue” for n ≥ 0 by (i) checking the first few cases (with n = 0, 1, & 2), and then (ii) checking the counterpart of the frf (*) as a working principle for general n > 0 .

(i0) (n = 0): F_0 = 0 ; but (λ+)^0 = 1 and (λ–)^0 = 1 , so the difference in parentheses on the RHS is zero, hence the entire RHS = 0 as well.
(i1) (n = 1): F_1 = 1 ; but (λ+)^1 = λ+ , and (λ–)^1 = λ– ; and, by Q. 19, λ+ – λ– = √5 ; so the RHS = 1 as well.
(i2) (n = 2): F_2 = 1 ; but (recall Q. 17) (λ+)² = 1 + λ+ , and (λ–)² = 1 + λ– ; so, just as before, (λ+)² – (λ–)² = (1 + λ+) – (1 + λ–) = λ+ – λ– = √5 , and the RHS again = 1 as well.
(ii) Suppose we have an integer n > 0 for which we happen already to know that F_{n-1} = ((λ+)^{n-1} – (λ–)^{n-1})/√5 and F_n = ((λ+)^n – (λ–)^n)/√5 are valid.
We shall show that, as a consequence of these assumptions, the formula F_{n+1} = ((λ+)^{n+1} – (λ–)^{n+1})/√5 is valid too.
To do so, we note first that the LHS F_{n+1} of the equation we seek to establish is the sum F_{n+1} = F_{n-1} + F_n of the LHS’s of the equations we are assuming valid; so it will suffice to establish that the RHS’s stand in a similar relation to each other.
And for this it’s clearly enough to show that ((λ+)^{n+1} – (λ–)^{n+1}) = ((λ+)^{n-1} – (λ–)^{n-1}) + ((λ+)^n – (λ–)^n) .
But, fortunately, we do know (λ+)² = 1 + λ+ , and (λ–)² = 1 + λ– . Multiplying both sides of the first of these equations by (λ+)^{n-1} , and both sides of the second by (λ–)^{n-1} , we obtain (λ+)^{n+1} = (λ+)^{n-1} + (λ+)^n and (λ–)^{n+1} = (λ–)^{n-1} + (λ–)^n ;
then subtracting, we find that, indeed, ((λ+)^{n+1} – (λ–)^{n+1}) = ((λ+)^{n-1} + (λ+)^n) – ((λ–)^{n-1} + (λ–)^n) = ((λ+)^{n-1} – (λ–)^{n-1}) + ((λ+)^n – (λ–)^n) , as desired. QED.

PS: A supplementary challenge (but an easy one) for the reader: prove that this closed-form formula for Fn remains valid also for n < 0 .
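The PS's challenge can at least be spot-checked numerically. This floating-point sketch (my function names) compares the closed form against the recursively computed F_n for both signs of n, using the sign rule F_{-k} = (–1)^{k+1}·F_k from Q. 2.

```python
# Check of the closed form F_n = ((λ+)^n - (λ-)^n)/√5, including negative n.
import math

SQRT5 = math.sqrt(5)
LAM_PLUS = (1 + SQRT5) / 2
LAM_MINUS = (1 - SQRT5) / 2

def fib(n):
    """Fibonacci numbers for any integer n, via the sign rule for n < 0."""
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def binet(n):
    return (LAM_PLUS ** n - LAM_MINUS ** n) / SQRT5

# Compared after rounding, since the powers are computed in floating point.
binet_ok = all(round(binet(n)) == fib(n) for n in range(-10, 21))
```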

 Tentatively completed 11 June 2006.