Precept 8: Eigenvalue iterations

Task 1

In this precept, we will investigate the power method and the inverse power method for symmetric matrices.

We start by creating an example matrix. We generate $1000$ uniformly distributed random points on the unit sphere by generating Gaussian points using np.random.randn and dividing each by its length. Then we make a scatter plot of the points.
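A minimal sketch of this setup (assuming points in $\mathbb{R}^3$ and matplotlib for the scatter plot) might look like:

```python
import numpy as np
import matplotlib.pyplot as plt

n = 1000
# Gaussian points normalized to lie on the unit sphere
X = np.random.randn(n, 3)
X = X / np.linalg.norm(X, axis=1, keepdims=True)

# Scatter plot of the points
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], s=2)
plt.show()
```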

Your task is to compute the $n \times n$ matrix $D$ whose $(i,j)$-th entry is the distance between $X[i,:]$ and $X[j,:]$, and define $$ A(i,j) = \exp(-D(i,j)^2). $$
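One way to build $D$ and $A$ with numpy broadcasting is sketched below; a double loop (or scipy.spatial.distance.cdist) would work equally well.

```python
# Pairwise distances D[i, j] = ||X[i, :] - X[j, :]|| via broadcasting
diff = X[:, None, :] - X[None, :, :]
D = np.linalg.norm(diff, axis=2)

# Gaussian kernel matrix A(i, j) = exp(-D(i, j)^2)
A = np.exp(-D**2)
```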

Task 2

Using the built-in numpy command, compute the eigenvalues and eigenvectors of $A$. Sort the eigenvalues in descending order and sort the columns of $V$ correspondingly. Check $$ \text{norm}(A - V D V^\top )/\text{norm}(A) $$ and $$ \text{norm}(I - V^\top V), $$ where $I$ is the identity matrix, to make sure that you have decomposed $A$ as $V D V^\top$, where $V$ is orthogonal and $D$ is diagonal, which is the decomposition guaranteed by the spectral theorem.
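With np.linalg.eigh, the symmetric eigensolver, the decomposition and the two checks could be sketched as follows (reusing A and n from above):

```python
# Symmetric eigendecomposition; eigh returns eigenvalues in ascending order
d, V = np.linalg.eigh(A)

# Sort eigenvalues in descending order and reorder the eigenvectors to match
idx = np.argsort(d)[::-1]
d = d[idx]
V = V[:, idx]

# Sanity checks from the spectral theorem: A = V D V^T with V orthogonal
print(np.linalg.norm(A - V @ np.diag(d) @ V.T) / np.linalg.norm(A))
print(np.linalg.norm(np.eye(n) - V.T @ V))
```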

Task 3

Visualize the eigenvectors in $V$. In particular, the eigenvectors $V[:,j]$ are vectors in $\mathbb{R}^n$. Make a scatter plot of $X$ and color it using $V_2$, the second eigenvector (equal to $V[:,1]$ in numpy), i.e. the eigenvector associated with the second largest eigenvalue. These eigenvectors are very rough approximations of spherical harmonics: https://en.wikipedia.org/wiki/Spherical_harmonics
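A sketch of the colored scatter plot, reusing X and V from above (the colormap choice is arbitrary):

```python
# Color each point on the sphere by the corresponding entry of V_2 = V[:, 1]
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
sc = ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=V[:, 1], cmap='coolwarm', s=2)
fig.colorbar(sc)
plt.show()
```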

Task 4

First, use the power method to find $V_1$ (the first eigenvector, associated with the largest eigenvalue). That is, start with a random vector $v$ and keep applying the matrix and normalizing until the iterates converge. You can check the error via $$ \min(\text{norm}(v - V_1),\text{norm}(v + V_1)). $$ The reason that we need to check both $v$ and $-v$ is that eigenvectors are only defined up to multiplication by a constant; even when $v$ is normalized to have length $1$, there is still a sign ambiguity.
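A minimal power-method sketch (using a fixed iteration count rather than a convergence test; either is reasonable):

```python
# Power method: repeatedly apply A and renormalize
v = np.random.randn(n)
v /= np.linalg.norm(v)
for _ in range(200):
    v = A @ v
    v /= np.linalg.norm(v)

# Compare with V_1, accounting for the sign ambiguity
err = min(np.linalg.norm(v - V[:, 0]), np.linalg.norm(v + V[:, 0]))
print(err)
```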

Task 5

Next, use the power method to find $V_2$ (the second eigenvector) using the following trick: at each iteration, before multiplying by $A$, project the vector $v$ onto the space orthogonal to $V_1$. Theoretically, it would suffice to project $v$ orthogonal to $V_1$ only once at the start, but to avoid numerical issues perform the projection at each iteration. Recall that the projection orthogonal to a unit-length vector $u$ is $$ w = v - u(u^\top v). $$ Check that you are able to recover $V_2$ (that is, $V[:,1]$ in numpy).
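One possible sketch, projecting out the $V_1$ component before each multiplication by $A$:

```python
# Power method with deflation: remove the V_1 component at every iteration
u = V[:, 0]                          # first eigenvector, already unit length
v = np.random.randn(n)
v /= np.linalg.norm(v)
for _ in range(200):
    v = v - u * (u @ v)              # project orthogonal to u
    v = A @ v
    v /= np.linalg.norm(v)

# Compare with V_2, again up to sign
err = min(np.linalg.norm(v - V[:, 1]), np.linalg.norm(v + V[:, 1]))
print(err)
```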

Task 6

Next, using the eigenvalues $d$ that you computed with the built-in solver, use the inverse power method to compute $V_1$ and $V_2$.
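A sketch of inverse iteration with a shift taken from the computed eigenvalues; the shift is offset slightly so that $A - \mu I$ stays invertible, and this particular shift strategy is an assumption. Note that if the target eigenvalue is nearly repeated, convergence toward the specific column of $V$ can be slow.

```python
# Inverse power method: iterate v <- (A - mu*I)^{-1} v, normalizing each time.
def inverse_power(A, mu, iters=50):
    n = A.shape[0]
    v = np.random.randn(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.linalg.solve(A - mu * np.eye(n), v)
        v /= np.linalg.norm(v)
    return v

# Shifts near the first and second computed eigenvalues
v1 = inverse_power(A, d[0] + 1e-6)
v2 = inverse_power(A, d[1] + 1e-6)

# Errors, up to sign
print(min(np.linalg.norm(v1 - V[:, 0]), np.linalg.norm(v1 + V[:, 0])))
print(min(np.linalg.norm(v2 - V[:, 1]), np.linalg.norm(v2 + V[:, 1])))
```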