Q1: Remember rootfinding?
Q2: Fast mat-vecs
You may use code from your homework and precepts (and their solutions) for these problems; the same will be true on the exam.
Let $T$ be the Toeplitz matrix with the vector $c$ as its first column and $r$ as its first row. Find and implement an algorithm to compute $x = u - Tuv^T T w$ for given $u, v, w, c, r$ in $\mathcal{O}(n \log n)$ time.
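The usual building block here is a fast Toeplitz matrix-vector product: embed $T$ in a $2n \times 2n$ circulant, whose eigenvalues are the FFT of its first column, and multiply via FFTs. A minimal sketch (the function name `toeplitz_matvec` is ours, not part of the assignment):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """O(n log n) product Tx, where T is Toeplitz with first column c
    and first row r (with r[0] == c[0])."""
    n = len(c)
    # First column of the embedding circulant: c, one arbitrary pad
    # entry, then the reversed first row r[n-1], ..., r[1].
    col = np.concatenate([c, [0.0], r[-1:0:-1]])
    x_pad = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x_pad))
    return y[:n].real
```

With this primitive, the target expression factors as $u - (v^\top T w)\,Tu$, since $uv^\top$ is rank one: two fast matvecs and a dot product.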
Let $A\in \mathbb{R}^{n \times n}$ be Toeplitz except in the last row. Find and implement an algorithm for computing $Ax$ in $\mathcal{O}(n \log n)$ time. For example, a $5 \times 5$ matrix of this form is:
$$\begin{bmatrix} c_1 & r_2 & r_3 & r_4 & r_5 \\ c_2 & c_1 & r_2 & r_3 & r_4 \\ c_3 & c_2 & c_1 & r_2 & r_3 \\ c_4 & c_3 & c_2 & c_1 & r_2 \\ v_1 & v_2 & v_3 & v_4 & v_5 \end{bmatrix}$$
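For the almost-Toeplitz case, one observation suffices: if $A$ agrees with a Toeplitz matrix $T$ everywhere except its last row $v$, then $Ax$ equals $Tx$ with the last entry replaced by $v^\top x$. A sketch, parameterized over any fast Toeplitz matvec (the names below are ours):

```python
import numpy as np

def almost_toeplitz_matvec_sketch(toeplitz_mv, c, r, v, x):
    # A agrees with the Toeplitz matrix T except in its last row v,
    # so Ax is Tx with the last entry replaced by the dot product v.x.
    y = toeplitz_mv(c, r, x)
    y[-1] = v @ x
    return y
```

The cost is one fast matvec plus an $\mathcal{O}(n)$ dot product, so the overall bound is unchanged.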
import scipy.linalg
import numpy as np
def T_op(c,r,u,v,w):
    ## ADD YOUR CODE HERE
    return u

def T_op_naive(c,r,u,v,w):
    T = scipy.linalg.toeplitz(c,r)
    return u - T @ np.outer(u, v) @ T @ w
# Random vectors
n = 10
c = np.random.rand(n)
r = np.random.rand(n); r[0] = c[0]
u = np.random.rand(n)
v = np.random.rand(n)
w = np.random.rand(n)
# part 1 test:
error = T_op_naive(c,r,u,v,w) - T_op(c,r,u,v,w)
print('part 1 error:', np.linalg.norm(error) / np.linalg.norm(T_op_naive(c,r,u,v,w)))
# part 2:
# The last entry of c doesn't matter here
def almost_toeplitz_mat_vec(c,r,v,x):
    ## ADD YOUR CODE HERE
    return x
# part 2 test:
A = scipy.linalg.toeplitz(c,r)
A[-1,:] = v
error = A@w - almost_toeplitz_mat_vec(c,r,v,w)
print('part 2 error:', np.linalg.norm(error) / np.linalg.norm(A@w))
# part 3:
# The last entry of c doesn't matter here
def circulant_solve(c,x):
    ## ADD YOUR CODE HERE
    return x
# part 3 test
C = scipy.linalg.circulant(c)
Cinvw = np.linalg.solve(C,w)
error = Cinvw - circulant_solve(c,w)
print('part 3 error:', np.linalg.norm(error) / np.linalg.norm(Cinvw))
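For part 3, the key fact is that every circulant matrix is diagonalized by the DFT, so solving $Cx = b$ reduces to an elementwise division in Fourier space. A sketch (function name ours; `c` is the first column of $C$, matching the `scipy.linalg.circulant` convention):

```python
import numpy as np

def circulant_solve_sketch(c, b):
    # The eigenvalues of the circulant with first column c are fft(c),
    # so C^{-1} b = ifft(fft(b) / fft(c)): O(n log n).
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real
```

This assumes $C$ is nonsingular, i.e. no entry of `fft(c)` is zero.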
Q3: Transformations
Let $$ x = \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_k \\ \vdots\\ x_n \end{bmatrix} \qquad \text{ and } \qquad y = \begin{bmatrix} 0\\ \vdots\\ 0\\ x_k \\ x_{k+1} \\ \vdots\\ x_{n} \end{bmatrix} $$ with $x_k \neq 0$. Find $u,v \in \mathbb{R}^n$ such that:
$$ (I - uv^T)x = y $$and $(I - uv^T)$ is upper triangular.
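One way to see the construction: with $v = e_k$, the update $x - u(v^\top x) = x - x_k u$ must remove the leading entries of $x$, which forces $u_i = x_i / x_k$ for the entries to be zeroed and $u_i = 0$ otherwise; the nonzeros of $uv^\top$ then sit strictly above the diagonal, so $I - uv^\top$ is upper triangular. A sketch in 0-based indexing, matching the test below (which zeroes `x[:k]` and pivots on `x[k]`; the name `mygauss_sketch` is ours):

```python
import numpy as np

def mygauss_sketch(x, k):
    # Gauss-transform-style rank-one update: v = e_k picks out the pivot
    # x[k]; u carries the multipliers x[i]/x[k] for the entries to zero.
    n = len(x)
    v = np.zeros(n)
    v[k] = 1.0
    u = np.zeros(n)
    u[:k] = x[:k] / x[k]
    return u, v
```

Here `np.eye(n) - np.outer(u, v)` is upper triangular because `u` is supported on indices below `k`, while `v` places those entries in column `k`.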
def mygauss(x,k):
    ## ADD YOUR CODE HERE
    return u, v
# Test
n = 10
k = 6
x = np.random.rand(n)
y = x.copy()
y[:k] = 0
u, v = mygauss(x,k)
error = y - (x - u * (np.dot(v,x)))
print('error Q3:', np.linalg.norm(error) / np.linalg.norm(x))
Consider the least-squares problem $\min_{x,y} \| A x + By\|_2^2 + \| Cy - d\|_2^2$. Find the normal equations and perform one step of block Gaussian elimination to find a linear system that $y$ satisfies. That is, find $H$ and $v$ such that $Hy = v$. You may assume $A$, $B$, and $C$ all have full column rank.
Implement your solution. You may use np.linalg.solve(F,H) to form matrices of the form $F^{-1} H$, and to solve $Hy = v$.
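For reference, one possible derivation to check your own work against (a sketch, not necessarily the intended route): stacking the two residuals gives
$$\min_{x,y}\left\|\begin{bmatrix} A & B \\ 0 & C\end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix} - \begin{bmatrix} 0 \\ d\end{bmatrix}\right\|_2^2,$$
with normal equations
$$\begin{bmatrix} A^\top A & A^\top B \\ B^\top A & B^\top B + C^\top C \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ C^\top d \end{bmatrix}.$$
Eliminating the $x$ block yields the Schur complement system $Hy = v$ with
$$H = B^\top B + C^\top C - B^\top A (A^\top A)^{-1} A^\top B, \qquad v = C^\top d.$$
In code (function name ours; assumes $A^\top A$ is invertible, which holds when $A$ has full column rank):

```python
import numpy as np

def solve_block_ls_sketch(A, B, C, d):
    # Schur-complement elimination of x from the normal equations.
    W = np.linalg.solve(A.T @ A, A.T @ B)     # (A^T A)^{-1} A^T B
    H = B.T @ B + C.T @ C - (B.T @ A) @ W
    v = C.T @ d
    return np.linalg.solve(H, v)
```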
# Code
def solve_block_least_square(A,B,C,d):
    ## ADD YOUR CODE HERE
    return d
# Below is for testing only.
def naive_solve(A,B,C,d):
    n, m = B.shape
    M = np.zeros((2*n, n+m))
    M[:n,:n] = A
    M[:n,n:] = B
    M[n:, n:] = C
    dpad = np.zeros(2*n)
    dpad[n:] = d
    result = np.linalg.lstsq(M, dpad, rcond=None)
    return result[0][n:]
n = 10
m = 8
A = np.random.randn(n,n)
B = np.random.randn(n,m)
C = np.random.randn(n,m)
d = np.random.randn(n)
z = solve_block_least_square(A,B,C,d)
z2 = naive_solve(A,B,C,d)
error = np.linalg.norm(z - z2) / np.linalg.norm(z2)
print('error: ', error)