On the geometric interpretation of the determinant of a matrix

Most econometric methods are buttressed by mathematical proofs, buried somewhere in academic journals, that the methods converge to perfect reliability as sample size goes to infinity. Most arguments in econometrics are over how best to proceed when your data put you very far from that theoretical ideal. Prime examples are when your data are clustered (some villages get bednets and some don’t) and there are few clusters; and when instruments are weak (people offered microcredit were only slightly more likely to take it).

Mucking about in such debates recently, as they pertain to criminal justice studies I’m reviewing, I felt an urge to get back to basics, by which I mean to better understand the mathematics of methods such as LIML. That led me back to linear algebra. So I’ve been trying to develop stronger intuitions about such things as: how a square matrix can have two meanings (a set of basis vectors for a linear space, and the variances and covariances of a set of vectors); and what the determinant really is.

The determinant of a matrix, it turns out, is the signed volume of the parallelepiped defined by its column (or row) vectors. What good this does me, I’m not sure yet. Here’s my proof for the 2×2 case, which I made for my dad and my son:

[Figure: determinant as area — proof for the 2×2 case]
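To make the 2×2 case concrete, here’s a quick numeric check (a sketch of mine, with made-up example numbers): the determinant ad − bc of the matrix [[a, b], [c, d]] equals the signed area of the parallelogram spanned by its column vectors.

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def parallelogram_area(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v
    (the z-component of their cross product)."""
    return u[0] * v[1] - u[1] * v[0]

# The columns of [[3, 1], [1, 2]] are u = (3, 1) and v = (1, 2).
u, v = (3, 1), (1, 2)
print(det2(3, 1, 1, 2))          # 3*2 - 1*1 = 5
print(parallelogram_area(u, v))  # same parallelogram, same area: 5
```

Swapping the two columns flips the sign of both quantities, which is why “signed” area is the right notion: the determinant also records orientation.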
