Thursday, September 18, 2014

"Inverting" Singular Matrices

You can only invert a matrix if that matrix is non-singular. Right? Actually, that's wrong.

You see, there are various sorts of inverse matrices, and most of them apply to the situation where the original matrix is singular.

Before elaborating on this, notice that this fact may be interesting in the context of estimating the coefficients of a linear regression model, y = Xβ + ε. Least squares estimation leads to the so-called "normal equations":

                X'Xb = X'y                                                  (1)

If the regressor matrix, X, has k columns, then (1) is a set of k linear equations in the k unknown elements of b. You'll recall that if X has full column rank, k, then (X'X) also has full rank, k, and so (X'X)^(-1) is well-defined. We then pre-multiply each side of (1) by (X'X)^(-1), yielding the familiar least squares estimator of β, namely b = (X'X)^(-1)X'y.
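
For anyone who'd like to see this numerically, here's a minimal Python/numpy sketch of the full-rank case. (The data are made up purely for illustration.)

    import numpy as np

    # Made-up data, purely for illustration
    rng = np.random.default_rng(42)
    n, k = 50, 3
    X = rng.standard_normal((n, k))              # full column rank (with probability 1)
    beta = np.array([1.0, -2.0, 0.5])
    y = X @ beta + 0.1 * rng.standard_normal(n)

    # Normal equations: (X'X) b = X'y
    XtX = X.T @ X
    b = np.linalg.solve(XtX, X.T @ y)            # numerically preferable to inverting X'X
    b_textbook = np.linalg.inv(XtX) @ (X.T @ y)  # the textbook formula, b = (X'X)^(-1) X'y

    print(b)                                     # close to beta
    print(np.allclose(b, b_textbook))            # True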

So, as long as we don't have "perfect multicollinearity" among the regressors (the columns of X), we can solve (1), and the least squares estimator is defined. More specifically, a unique estimator for each individual element of β is defined.

What if there is perfect multicollinearity, so that the rank of X, and of (X'X), is less than k? In that case, we can't compute (X'X)^(-1), we can't solve the normal equations in the usual way, and we can't get a unique estimator for the (full) β vector.

Let's look carefully at the last sentence above. There are two parts of it that bear closer scrutiny:
  • "we can't solve the normal equations in the usual way"
  • "we can't get a unique estimator for the (full) β vector".
I'll discuss the first of these statements in this post, but I'll leave the second one for a follow-up post.

Although we can't solve the normal equations in the usual way, we can still solve them. To do so, we need to be aware of something called a "generalized inverse" of a matrix. Sometimes this is called a "g-inverse", or a "pseudo-inverse". Also, when encountering this beast you should be aware that it can be defined in various ways.

Suppose that A is any square matrix - it may be singular or non-singular. Then any matrix, G, with the property (AGA) = A is called a generalized inverse of A. Notice that in the special case where A is non-singular, G = A^(-1) has this property, so a regular inverse (when it exists) is also a generalized inverse.

Also, notice that I said that G is a generalized inverse, not the generalized inverse, of A. That's because for an arbitrary A matrix, G isn't unique - in fact there's an infinity of G's satisfying (AGA) = A. This suggests that if this concept is going to be useful, we might want to tie it down a little by adding some additional properties that it might have.
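
To make this non-uniqueness concrete, here's a small numpy illustration, using a singular 2 × 2 matrix and two different G matrices, both of which satisfy (AGA) = A:

    import numpy as np

    A = np.array([[2., 4.],
                  [1., 2.]])           # rank 1, so A is singular

    # Two different matrices, each satisfying A G A = A:
    G1 = np.array([[0.5, 0.],
                   [0.,  0.]])
    G2 = np.array([[0., 0.],
                   [0., 0.5]])

    print(np.allclose(A @ G1 @ A, A))  # True
    print(np.allclose(A @ G2 @ A, A))  # True
    print(np.allclose(G1, G2))         # False - generalized inverses aren't unique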

However, before doing so, let's consider the special case where A is symmetric. This might be especially interesting to us, as A = (X'X) is symmetric. In this case any generalized inverse, G, of (X'X) satisfies all of the following:

  • G' is also a generalized inverse of (X'X).
  • XGX'X = X ;   i.e., GX' is a generalized inverse of X.
  • XGX' is invariant to G.
  • XGX' is symmetric, whether G is symmetric, or not.
These results are established in Theorem 7 of Searle (1971, p. 20), for example.
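
It's easy to check all four of these properties numerically. Here's a numpy sketch, using a deliberately multicollinear X of my own construction, and two different generalized inverses of (X'X), one of them non-symmetric:

    import numpy as np

    # X with perfect multicollinearity: the second column is twice the first
    X = np.array([[1., 2.],
                  [2., 4.],
                  [3., 6.]])
    S = X.T @ X                        # S = X'X is symmetric, but singular (rank 1)

    G1 = np.linalg.pinv(S)             # one generalized inverse (the Moore-Penrose one)
    G2 = np.array([[1/14, 1.],         # another one, deliberately non-symmetric;
                   [-1.,  0.]])        # it satisfies S @ G2 @ S = S

    for G in (G1, G2):
        assert np.allclose(S @ G @ S, S)                  # G is a g-inverse of X'X
        assert np.allclose(S @ G.T @ S, S)                # so is G'
        assert np.allclose(X @ G @ S, X)                  # XGX'X = X
        assert np.allclose(X @ G @ X.T, (X @ G @ X.T).T)  # XGX' is symmetric

    print(np.allclose(X @ G1 @ X.T, X @ G2 @ X.T))        # True: XGX' is invariant to G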

Now, returning to an arbitrary square matrix, A, we have the following result:

There exists a unique matrix, K, that satisfies the following four conditions:
  • AKA = A
  • KAK = K
  • (KA)' = KA
  • (AK)' = AK
The matrix, K, is called the "Moore-Penrose inverse" of A, in recognition of its development by Moore (1920) and Penrose (1955). Often, when people refer rather loosely to "the generalized inverse" of a matrix, they have in mind this Moore-Penrose inverse.

Once again, notice that if A happens to be non-singular, then K = A^(-1). A regular inverse, if it exists, is also the Moore-Penrose inverse.

There are various algorithms that can be used to calculate the generalized inverse of a matrix. These can be rather messy, and I won't go into them here. For more details, see Searle (1971, pp. 1-5). Luckily, many mathematical and statistical computer packages have simple commands for computing (at least) the Moore-Penrose inverse of a matrix. In the case of the EViews econometrics package you can use a command of the form  matrix AINV = @pinverse(A)  to get the Moore-Penrose inverse of the matrix, A.
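
If you're working in Python rather than EViews, numpy's np.linalg.pinv plays the same role as @pinverse. The sketch below computes the Moore-Penrose inverse of a singular matrix and verifies the four defining conditions listed above:

    import numpy as np

    A = np.array([[1., 3.],
                  [3., 9.]])               # singular (rank 1)
    K = np.linalg.pinv(A)                  # the Moore-Penrose inverse of A

    print(np.allclose(A @ K @ A, A))       # AKA = A
    print(np.allclose(K @ A @ K, K))       # KAK = K
    print(np.allclose((K @ A).T, K @ A))   # (KA)' = KA
    print(np.allclose((A @ K).T, A @ K))   # (AK)' = AK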

One obvious application of a generalized inverse is solving linear equations. (Remember the "normal equations" discussed earlier?) Specifically, we need to focus on what are usually called "consistent linear equations" - that is, ones which don't involve any logical inconsistencies between them. Let's denote such a set of equations in vector form as Ax = y. We want to solve these equations for the vector x. It can be shown that they have a solution, x = Gy, if and only if AGA = A. So, the generalized inverse is the key to getting a solution, even if A does not have full rank. A simple proof of this important result is given in Theorem 1 of Searle (1971, pp. 8-9).

In fact, we can go even further. Remember that there's an infinity of generalized inverses. We can show that, as G ranges over all possible generalized inverses of A, x* = Gy generates all possible solutions to the set of consistent equations, Ax = y. (See Searle, 1971, pp. 26-27.)

As an example, consider the pair of (consistent) linear equations:

                 x1 + 3x2 = 1
                3x1 + 9x2 = 3
or,

               Ax = y

where:  a11 = 1;  a12 = a21 = 3;  a22 = 9;  y1 = 1;  y2 = 3.

Note that although A has dimension 2, its rank is only 1.

Solving these equations using the EViews command,  matrix x = @pinverse(A)*y , yields the solution x1 = 0.1, x2 = 0.3. I say the solution because the Moore-Penrose inverse of A is being computed here, and it is unique. Of course, other generalized inverses would yield other, equally legitimate, solutions to this pair of equations.
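
Here's the same calculation in Python/numpy, together with a second generalized inverse of A that produces a different, but equally valid, solution:

    import numpy as np

    A = np.array([[1., 3.],
                  [3., 9.]])
    y = np.array([1., 3.])

    # The Moore-Penrose route (the numpy analogue of the EViews command):
    x_mp = np.linalg.pinv(A) @ y
    print(x_mp)                         # [0.1  0.3]
    print(np.allclose(A @ x_mp, y))     # True: it solves the equations

    # A different generalized inverse of A yields a different legitimate solution:
    G = np.array([[1., 0.],
                  [0., 0.]])
    print(np.allclose(A @ G @ A, A))    # True: G is a generalized inverse of A
    x_alt = G @ y
    print(x_alt)                        # [1.  0.]
    print(np.allclose(A @ x_alt, y))    # True as well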



References

Moore, E. H., 1920. On the reciprocal of the general algebraic matrix. Bulletin of the American Mathematical Society, 26, 394-395.

Penrose, R., 1955. A generalized inverse for matrices. Proceedings of the Cambridge Philosophical Society, 51, 406-413.

Searle, S. R., 1971. Linear Models. Wiley, New York.



© 2014, David E. Giles
