Task 1: Finite Difference¶

Assume $f\in C^{5}([x-2h,x+2h])$. Find constants $a,b,c$ (independent of $f,x,h$) such that the following finite difference scheme is $4$th order accurate:

$$ \frac{a\,f(x-2h)+b\,f(x-h)+c\,f(x)-b\,f(x+h)-a\,f(x+2h)}{h} \;=\; f'(x)+O(h^4). $$
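Once you have candidate constants, one way to check your work is an empirical convergence test: fit the slope $p$ of the error on a log-log plot of error versus $h$. Below is a minimal sketch; `fd_approx` and `estimate_order` are helper names introduced here, and the constants are left as arguments so the sketch does not presuppose the answer.

```python
import numpy as np

# Hedged sketch: numerically estimate the order of accuracy of the scheme
# for candidate constants (a, b, c). Fits the slope p in err ~ C h^p.
def fd_approx(a, b, c, f, x, h):
    return (a*f(x - 2*h) + b*f(x - h) + c*f(x)
            - b*f(x + h) - a*f(x + 2*h)) / h

def estimate_order(a, b, c, f=np.sin, df=np.cos, x=0.7):
    hs = np.array([1e-1, 5e-2, 2.5e-2, 1.25e-2])
    errs = np.array([abs(fd_approx(a, b, c, f, x, h) - df(x)) for h in hs])
    # slope of log(err) vs log(h) approximates the order of accuracy
    return np.polyfit(np.log(hs), np.log(errs), 1)[0]
```

A correct choice of $a,b,c$ should yield a slope close to $4$; a first-order scheme would yield a slope close to $1$.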

Task 2: Forming Matrices¶

Let $ P_2 $ be the set of polynomials of degree $ \leq 2 $. The monomials $ \{1, x, x^2\} $ form a basis for $P_2$.

  1. Write the matrix $A \in \mathbb{R}^{4 \times 3}$ corresponding to the mapping $P_2 \to P_3$, given by multiplication by $x$.
  2. Write the matrix $A \in \mathbb{R}^{3 \times 2}$ corresponding to the mapping $P_1 \to P_2$, using the Chebyshev polynomials as a basis. The Chebyshev polynomials are defined using the recursion: $$ T_0(x) = 1, \quad T_1(x) = x, \quad T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x) $$
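For part 1, a candidate matrix can be sanity-checked numerically: applying $A$ to the monomial coefficients of $p$ should recover the monomial coefficients of $x\,p(x)$. The helper below is a sketch introduced for this purpose (`check_mult_by_x` is not part of the assignment); note that `np.polynomial.Polynomial` takes coefficients lowest-degree first, matching the basis $\{1, x, x^2\}$.

```python
import numpy as np

# Hedged sketch: sanity-check a hypothetical 4x3 candidate matrix A for the
# map p(x) -> x * p(x) in the monomial basis, by comparing values at points.
def check_mult_by_x(A, rng=np.random.default_rng(0), n_trials=5):
    xs = np.linspace(-1.0, 1.0, 9)           # sample points
    for _ in range(n_trials):
        c = rng.standard_normal(3)           # random p in P_2 (monomial basis)
        p = np.polynomial.Polynomial(c)
        q = np.polynomial.Polynomial(A @ c)  # candidate image in P_3
        if not np.allclose(q(xs), xs * p(xs)):
            return False
    return True
```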

Task 3: Characterize the induced $\ell_1$ matrix norm¶

Let $A\in\mathbb{R}^{m\times n}$ and

$$ \|A\|_1 := \sup_{x\neq 0}\frac{\|Ax\|_1}{\|x\|_1} = \sup_{\|x\|_1=1}\|Ax\|_1. $$

Show that

$$ \boxed{\ \|A\|_1 \;=\; \max_{1\le j\le n}\sum_{i=1}^m |a_{ij}|\ } \quad\text{(maximum absolute column sum).} $$
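Before writing the proof, it can help to confirm the claim numerically: random vectors with $\|x\|_1 = 1$ never exceed the maximum absolute column sum, and the standard basis vector $e_j$ for the maximizing column $j$ attains it. This is a sanity check only, not a substitute for the proof; `check_l1_norm` is a helper name introduced here.

```python
import numpy as np

# Hedged numerical sanity check (not a proof) of the column-sum formula.
def check_l1_norm(A, n_samples=2000, rng=np.random.default_rng(0)):
    col_sums = np.abs(A).sum(axis=0)
    bound = col_sums.max()                       # claimed value of ||A||_1
    m, n = A.shape
    for _ in range(n_samples):
        x = rng.standard_normal(n)
        x /= np.abs(x).sum()                     # normalize so ||x||_1 = 1
        assert np.linalg.norm(A @ x, 1) <= bound + 1e-12
    e = np.zeros(n)
    e[col_sums.argmax()] = 1.0                   # e_j for the max column j
    return np.isclose(np.linalg.norm(A @ e, 1), bound)
```

The two halves of this check mirror the two halves of the proof: an upper bound over all $x$, and a vector that achieves it.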

Task 4: Order Matters¶

Suppose $ A, B \in \mathbb{R}^{n \times n} $ and $ d, u, v, w \in \mathbb{R}^n $. Rewrite each of the following Python functions to compute the same result but with the indicated asymptotic complexity.

Confirm the improved complexity by timing the two implementations for $n = 2000$.

  • Compute $Aw$, where $A = D + u v^T$ with $D$ a diagonal matrix and $uv^T$ an outer product, in $\mathcal{O}(n)$.

  • Compute $(D + AB)w$, where $D$ is diagonal and $A$, $B$ are dense, in $\mathcal{O}(n^2)$. The current implementation is $\mathcal{O}(n^3)$ because it performs a dense matrix-matrix multiply.

In [ ]:
import numpy as np
def diag_plus_rank_one(d, u, v, w):
    # Note that adding two matrices is O(n^2)!
    A = np.diag(d) + np.outer(u, v)
    return A @ w

def diag_plus_rank_one_fast(d, u, v, w):
    # Write O(n) code to compute (diag(d) + u v^T) w
    return w


# Rewrite to run in O(n^2)
def mat_mat_vec(A, B, d, w):
    # Note: matrix-matrix multiply is O(n^3)!
    return (np.diag(d) + A @ B) @ w

def mat_mat_vec_fast(A, B, d, w):
    # Write O(n^2) code to compute (np.diag(d) + A @ B) @ w
    return w

n = 100
A = np.random.randn(n,n)
B = np.random.randn(n,n)
d = np.random.randn(n)
u = np.random.randn(n)
v = np.random.randn(n)
w = np.random.randn(n)
print('error: diag_plus_rank_one', np.linalg.norm(diag_plus_rank_one(d,u,v,w) - diag_plus_rank_one_fast(d,u,v,w)))
print('error: mat_mat_vec', np.linalg.norm(mat_mat_vec(A,B,d,w) - mat_mat_vec_fast(A,B,d,w)))
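To confirm the improved complexity by timing, a small helper like the sketch below can be used (`time_call` is a name introduced here, not part of the assignment); it takes the best of a few repeats to reduce noise.

```python
import time

# Hedged timing sketch: measure the best wall-clock time of fn(*args)
# over a few repeats, using a high-resolution monotonic clock.
def time_call(fn, *args, repeats=3):
    best = float('inf')
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best
```

For example, with $n = 2000$ one would compare `time_call(diag_plus_rank_one, d, u, v, w)` against `time_call(diag_plus_rank_one_fast, d, u, v, w)`; doubling $n$ should roughly quadruple the slow version's time but only double the fast one's.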
In [ ]:
## Some syntax help

A = np.random.rand(n, n)  # forms an n x n matrix with uniform random entries (good for testing)
x = np.random.randn(n)    # forms an n-vector with Gaussian random entries (good for testing)
b = A @ x   # computes the matrix-vector product of A and x
C = A @ B   # computes the matrix-matrix product of A and B
A = np.outer(u, v)  # computes the outer product of u and v, returns a matrix
c = np.dot(u, v)    # computes the dot product between u and v, returns a scalar
A = np.diag(d)  # forms a diagonal n x n matrix with the entries of the vector d
                # on its diagonal
c = u * v  # computes the elementwise product between vectors u and v, c_i = u_i * v_i