In this precept, we will investigate the power method and the inverse power method for symmetric matrices.
We start by creating an example matrix. We generate $1000$ uniformly random points on the unit sphere by generating Gaussian points using np.random.randn and dividing each point by its length. Then we make a scatter plot of the points.
Your task is to compute the $n \times n$ matrix $D$ whose $(i,j)$-th entry is the distance between $X[i,:]$ and $X[j,:]$, and define $$ A(i,j) = \exp(-D(i,j)^2). $$
# Task 1
import numpy as np
import matplotlib.pyplot as plt
# n random vectors on the sphere
n = 1000
X = np.random.randn(n,3)
X = X / np.linalg.norm(X, axis=1, keepdims=True)
# Scatter plot
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(projection='3d')
ax.scatter(X[:,0], X[:,1], X[:,2])
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
## Compute the matrix A:
A = np.zeros((n,n))
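One possible way to fill in this cell (a sketch, not the only approach) is to form all pairwise differences with NumPy broadcasting; the intermediate name diff is just for illustration:

diff = X[:, None, :] - X[None, :, :]   # shape (n, n, 3): X[i,:] - X[j,:]
D = np.linalg.norm(diff, axis=2)       # D[i, j] = distance between X[i,:] and X[j,:]
A = np.exp(-D**2)                      # symmetric kernel matrix A(i,j) = exp(-D(i,j)^2)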
Using the built-in NumPy command, compute the eigenvalues and eigenvectors of $A$. Sort the eigenvalues in descending order and sort the columns of $V$ in the corresponding way. Check $$ \text{norm}(A - V D V^\top )/\text{norm}(A) $$ and $$ \text{norm}(I - V^\top V), $$ where $I$ is the identity matrix, to make sure that you have decomposed $A$ into $V D V^\top$, where $V$ is orthogonal and $D$ is diagonal (here $D$ denotes the diagonal matrix of eigenvalues, not the distance matrix above), which is the decomposition guaranteed by the spectral theorem.
# Task 2 use the following function to compute eigenvalues and eigenvectors of A
help(np.linalg.eigh)
help(np.flip) # May want to use this function to reorder eigenvalues/vectors
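Here is one possible sketch for this task. Note that np.linalg.eigh returns the eigenvalues in ascending order, so both the eigenvalues and the columns of $V$ need to be flipped:

d, V = np.linalg.eigh(A)        # eigenvalues d (ascending) and orthonormal eigenvectors V
d = np.flip(d)                  # reorder eigenvalues in descending order
V = np.flip(V, axis=1)          # reorder the columns of V to match
print(np.linalg.norm(A - V @ np.diag(d) @ V.T) / np.linalg.norm(A))
print(np.linalg.norm(np.eye(n) - V.T @ V))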
Visualize the eigenvectors in $V$. In particular, the eigenvectors $V[:,j]$ are vectors in $\mathbb{R}^n$. Make a scatter plot of $X$ and color the scatter plot using $V_2$, the second eigenvector (equal to $V[:,1]$ in numpy), i.e. the eigenvector associated with the second-largest eigenvalue. These eigenvectors are very rough approximations of spherical harmonics: https://en.wikipedia.org/wiki/Spherical_harmonics
# Task 3: scatter plot of eigenvectors
# Scatter plot
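A possible sketch for this cell, assuming V was computed as above (the default colormap is used; any colormap works):

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(projection='3d')
sc = ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=V[:, 1])   # color by the second eigenvector
fig.colorbar(sc)
plt.show()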
First, use the power method to find $V_1$ (the first eigenvector, associated with the largest eigenvalue). That is, start with a random vector $v$ and keep applying the matrix and normalizing until the iterates converge. You can check the error by $$ \min(\text{norm}(v - V_1),\ \text{norm}(v + V_1)); $$ the reason that we need to check both $v$ and $-v$ is that eigenvectors are only defined up to multiplication by a constant: even when $v$ is normalized to have length $1$, there is still a sign ambiguity.
# Task 4 implement power method
def distance_to_eigenvector(vk, eigenvec):
    return np.min([np.linalg.norm(vk - eigenvec), np.linalg.norm(vk + eigenvec)])
err = np.zeros(100)
## add your code
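# One possible sketch (an assumption, not the only solution), using A and V from above:
v = np.random.randn(n)
v = v / np.linalg.norm(v)
for k in range(100):
    v = A @ v                          # apply the matrix
    v = v / np.linalg.norm(v)          # normalize
    err[k] = distance_to_eigenvector(v, V[:, 0])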
plt.semilogy(err)
Next, use the power method to find $V_2$ (the second eigenvector) using the following trick: at each iteration, before multiplying by $A$, project the vector $v$ onto the space orthogonal to $V_1$. Theoretically, it would suffice to project $v$ orthogonal to $V_1$ only once, at initialization, but to avoid any numerical issues perform the projection at each iteration. Recall that the projection of $v$ orthogonal to a vector $u$ which is normalized to have unit length is $$ w = v - u(u^\top v). $$ Check that you are able to recover $V_2$ (equal to $V[:,1]$ in numpy).
# Task 5 implement power method for v2
def distance_to_eigenvector(vk, eigenvec):
    return np.min([np.linalg.norm(vk - eigenvec), np.linalg.norm(vk + eigenvec)])
err = np.zeros(100)
## add your code
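# One possible sketch (an assumption, not the only solution): project out V_1 at each step.
u = V[:, 0]                            # first eigenvector, unit length
v = np.random.randn(n)
v = v / np.linalg.norm(v)
for k in range(100):
    v = v - u * (u @ v)                # project v orthogonal to V_1
    v = A @ v                          # apply the matrix
    v = v / np.linalg.norm(v)          # normalize
    err[k] = distance_to_eigenvector(v, V[:, 1])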
plt.semilogy(err)
Next, using the eigenvalues $d$ that you computed with the built-in solver, use the inverse power method to compute $V_1$ and $V_2$. Recall that the inverse power method with shift $\mu$ repeatedly applies $(A - \mu I)^{-1}$ (by solving a linear system rather than forming the inverse) and normalizes; it converges to the eigenvector whose eigenvalue is closest to $\mu$.
# Task 6 implement inverse power method
# add your code
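# One possible sketch (an assumption): shift slightly away from d[0] so that
# A - mu*I remains invertible, then repeatedly solve and normalize.
mu = d[0] + 1e-6
B = A - mu * np.eye(n)
v1 = np.random.randn(n)
v1 = v1 / np.linalg.norm(v1)
for k in range(20):                    # converges quickly since mu is very close to d[0]
    v1 = np.linalg.solve(B, v1)        # apply (A - mu*I)^{-1}
    v1 = v1 / np.linalg.norm(v1)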
print(distance_to_eigenvector(v1, V[:,0]))
# add your code
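# Same sketch with the shift taken near d[1] to target the second eigenvector.
mu = d[1] + 1e-6
B = A - mu * np.eye(n)
v2 = np.random.randn(n)
v2 = v2 / np.linalg.norm(v2)
for k in range(20):
    v2 = np.linalg.solve(B, v2)        # apply (A - mu*I)^{-1}
    v2 = v2 / np.linalg.norm(v2)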
print(distance_to_eigenvector(v2, V[:,1]))