# HW 8: Linear Algebra

Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel → Restart) and then run all cells (in the menubar, select Cell → Run All).

Make sure you fill in any place that says "YOUR CODE HERE" or "YOUR ANSWER HERE", as well as your name and collaborators below:

In [ ]:

```python
NAME = ""
COLLABORATORS = ""
```

In [ ]:

```python
from __future__ import print_function
%matplotlib inline
%precision 16
import numpy
import matplotlib.pyplot as plt
import pandas as pd
```

### HW 8: Linear Algebra Part 2

#### Question 1 Ill-conditioned matrices

Consider the system of equations defined by

$$Ax = b.$$

The true solution of this system is

$$x = \begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$

For the following questions consider the two approximate solutions $x_1$ and $x_2$, and define the residual as

$$r(y) = b - Ay.$$

(a) (6 points) Compute the residuals $\|r(x_1)\|$ and $\|r(x_2)\|$ for the two approximate solutions using a couple of different norms (you may use numpy). Does the more accurate solution have a smaller residual? Does it matter which norm you use?
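A minimal sketch of the kind of computation part (a) asks for. The matrix, right-hand side, and approximate solution below are hypothetical stand-ins, not the ones from the assignment:

```python
import numpy

# hypothetical ill-conditioned stand-in for the assignment's A and b
A = numpy.array([[1.0, 1.0],
                 [1.0, 1.0001]])
x_true = numpy.array([1.0, -1.0])
b = A.dot(x_true)

# a hypothetical approximate solution
y = numpy.array([1.01, -1.01])

# residual r(y) = b - A y, measured in several norms
r = b - A.dot(y)
for p in [1, 2, numpy.inf]:
    print('||r||_{} = {}'.format(p, numpy.linalg.norm(r, p)))
```

Note that for an ill-conditioned matrix a visibly inaccurate solution can still produce a tiny residual, which is the point of this question.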

In [ ]:

```python
# YOUR CODE HERE
raise NotImplementedError()
```

(b) Compute $\|A\|$, $\|A^{-1}\|$, and the condition number $\kappa(A)$ of the matrix using the $L^2$ and $L^\infty$ norms. Again you can use numpy.
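A sketch of part (b) with numpy, again using a hypothetical stand-in matrix:

```python
import numpy

# hypothetical stand-in for the assignment's matrix
A = numpy.array([[1.0, 1.0],
                 [1.0, 1.0001]])

for p in [2, numpy.inf]:
    norm_A = numpy.linalg.norm(A, p)
    norm_A_inv = numpy.linalg.norm(numpy.linalg.inv(A), p)
    print('p = {}: ||A|| = {}, ||A^-1|| = {}, cond = {}'.format(
        p, norm_A, norm_A_inv, norm_A * norm_A_inv))

# numpy can also form the condition number directly
print(numpy.linalg.cond(A, 2))
```

The product $\|A\|\,\|A^{-1}\|$ should agree with `numpy.linalg.cond` for the matching norm.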

In [ ]:

```python
# YOUR CODE HERE
raise NotImplementedError()
```

(c) Calculate the SVD of $A$ and do the following:

- Check your answers for $\|A\|_2$ and $\kappa_2(A)$ in terms of the singular values of $A$.
- Compute $A_1$, the best-fit rank-1 approximation to $A$.
- Compute $A - A_1$ and calculate $\|A - A_1\|_2$; compare to the singular values.
- Given $A_1$, find its pseudo-inverse $A_1^+$.
- Find the approximate solution $x^+ = A_1^+ b$ and calculate $\|r(x^+)\|_2$.
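The pieces of part (c) can be sketched as follows, once more on a hypothetical stand-in matrix; the rank-1 and pseudo-inverse constructions are the standard SVD truncations:

```python
import numpy

# hypothetical stand-in for the assignment's matrix
A = numpy.array([[1.0, 1.0],
                 [1.0, 1.0001]])

U, sigma, V_T = numpy.linalg.svd(A)

# ||A||_2 = sigma[0] and kappa_2(A) = sigma[0] / sigma[-1]
print(sigma[0], sigma[0] / sigma[-1])

# best-fit rank-1 approximation: keep only the largest singular value
A_1 = sigma[0] * numpy.outer(U[:, 0], V_T[0, :])

# by Eckart-Young, ||A - A_1||_2 equals the first dropped singular value
print(numpy.linalg.norm(A - A_1, 2), sigma[1])

# pseudo-inverse of A_1: invert only the nonzero singular value
A_1_plus = (1.0 / sigma[0]) * numpy.outer(V_T[0, :], U[:, 0])
```

The explicit `A_1_plus` should match `numpy.linalg.pinv(A_1)`, which performs the same truncated inversion internally.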

In [ ]:

```python
# YOUR CODE HERE
raise NotImplementedError()
```

(d) Use the SVD to explain the behavior from part (a).

Hint: to understand this problem, consider the change in the residual for a change $\delta x$ in the solution vector, i.e.

$$r(x + \delta x) = b - A(x + \delta x) = -A\,\delta x,$$

and compute upper and lower bounds on

$$\frac{\|A\,\delta x\|_2}{\|\delta x\|_2},$$

and consider what values of $\delta x$ correspond to the upper and lower bounds.

Then consider the behavior of $\delta x_i = x_i - x$ for $i = 1, 2$ above and explain the difference in residuals (hint: think SVD).

In [ ]:

```python
# x is the true solution; V_T comes from the SVD of A computed in part (c)
x = numpy.array([[1], [-1]])
delta_x_1 = x_1 - x
delta_x_2 = x_2 - x

y_1 = V_T.dot(delta_x_1)
y_2 = V_T.dot(delta_x_2)

print('delta_x1 = {}'.format(delta_x_1))
print('V_T delta_x1 = {}'.format(y_1))
print('delta_x2 = {}'.format(delta_x_2))
print('V_T delta_x2 = {}'.format(y_2))
```

#### Question 2 – Hidden Figures

Okay, a little more fun with the SVD. Here we are going to use it to extract low-dimensional information embedded in high dimensions.

The following cells will read in a matrix of 46765 samples of 5-dimensional data and make a series of scatter plots comparing the data along each dimension pairwise.

In [ ]:

```python
data = pd.read_csv('data.csv.gz').values
print('shape = {}'.format(data.shape))
```

In [ ]:

```python
# plot scatter diagrams comparing the data along each pair of adjacent dimensions
fig = plt.figure(figsize=(20, 6))
for i in range(4):
    axes = fig.add_subplot(1, 4, i + 1)
    axes.scatter(data[:, i], data[:, i + 1], s=1)
    axes.set_xlabel('x_{}'.format(i), fontsize=16)
    axes.set_ylabel('x_{}'.format(i + 1), fontsize=16)
    axes.grid()
plt.show()
```

(a) Now demean the data and use the SVD to determine the dimension of the subspace of $\mathbb{R}^5$ that contains the data. Making a plot of the singular values will help. (Hint: you will also want to use the `full_matrices=False` argument to the SVD to get the skinny SVD and save a lot of computation and memory.)
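A sketch of the demean-plus-SVD approach on synthetic stand-in data. Here I fabricate 1000 points that lie, up to small noise, on a 2-D subspace of $\mathbb{R}^5$; the assignment's data set will have its own dimension:

```python
import numpy

# synthetic stand-in: 1000 samples in R^5 concentrated near a 2-D subspace
rng = numpy.random.default_rng(42)
basis = rng.standard_normal((5, 2))
samples = rng.standard_normal((1000, 2)).dot(basis.T)
samples += 0.001 * rng.standard_normal(samples.shape)   # small noise

X = samples - samples.mean(axis=0)                       # demean each coordinate
U, s, V_T = numpy.linalg.svd(X, full_matrices=False)     # skinny SVD
print(s)   # a sharp drop after the 2nd singular value reveals the dimension
```

Plotting `s` (e.g. on a log scale) makes the gap between the "signal" and "noise" singular values obvious.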

In [ ]:

```python
# YOUR CODE HERE
raise NotImplementedError()
```

(b) Principal components. Make a 2-D scatter plot that projects the data onto the plane spanned by the first two principal components (the first two right singular vectors, i.e. columns of $V$) and comment. (Extra credit: do this in 3-D.)
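The projection itself is just a matrix product with the first rows of $V^T$; a sketch on the same kind of stand-in data as above (pass the two columns of `Y` to `plt.scatter` for the plot):

```python
import numpy

# stand-in data as in part (a): points near a 2-D subspace of R^5
rng = numpy.random.default_rng(42)
samples = rng.standard_normal((1000, 2)).dot(rng.standard_normal((2, 5)))

X = samples - samples.mean(axis=0)
U, s, V_T = numpy.linalg.svd(X, full_matrices=False)

# coordinates of each sample in the plane of the first two principal components
Y = X.dot(V_T[:2].T)
print(Y.shape)   # scatter Y[:, 0] against Y[:, 1] to see the hidden figure
```

Equivalently, `Y` equals `U[:, :2] * s[:2]`, since $XV = U\Sigma$.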

In [ ]:

```python
# YOUR CODE HERE
raise NotImplementedError()
```

#### Question 3 – Extra Credit LU Factorization

Gaussian elimination is usually one of the first operations students learn in linear algebra, but we seem to have skipped it in lecture! Let us fix that here in the homework. Make sure to read the lecture notes on Gaussian elimination when doing this question; useful versions of pseudo-code for this algorithm are available in that notebook.

(a) (5 points) By hand compute the $LU$ factorization of the matrix

Make sure to compute all the way to the matrix $L$, not just the inverses of the elimination matrices. You do not have to worry about pivoting.

(b) (10 points) Write a function that computes the $LU$ factorization of a given matrix without using the numpy or scipy packages. Make sure to also return the pivoting matrix, i.e. compute the matrices $P$, $L$, and $U$ where

$$PA = LU.$$

Note that the provided function `swap_rows` may be useful, although you do not have to use it. You may also want to test your basic algorithm with the example in the notes, i.e. `A = numpy.array([[2, 1, 1, 0], [4, 3, 3, 1], [8, 7, 9, 5], [6, 7, 9, 8]], dtype=float)`, noting that the algorithm will complain without the cast to float.
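For reference, the elimination-with-partial-pivoting loop can be sketched as below. This is my own sketch, not the assignment's required from-scratch solution: it uses numpy arrays for storage and slicing (the assignment asks you to avoid numpy routines for the actual computation), and the names are hypothetical.

```python
import numpy

def lu_sketch(A):
    """Return P, L, U with P A = L U, via partial pivoting (a sketch)."""
    m = A.shape[0]
    U = A.astype(float).copy()
    L = numpy.eye(m)
    P = numpy.eye(m)
    for k in range(m - 1):
        # pivot: largest-magnitude entry in column k on or below the diagonal
        p = k + int(numpy.argmax(numpy.abs(U[k:, k])))
        if p != k:
            U[[k, p], k:] = U[[p, k], k:]     # swap trailing parts of the rows
            L[[k, p], :k] = L[[p, k], :k]     # swap already-computed multipliers
            P[[k, p], :] = P[[p, k], :]
        for i in range(k + 1, m):
            L[i, k] = U[i, k] / U[k, k]       # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]    # zero out U[i, k]
    return P, L, U
```

On the test matrix from the notes, `P @ A` should equal `L @ U` up to rounding error, with `L` unit lower triangular and `U` upper triangular.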

In [ ]:

```python
def swap_rows(i, j, A, column_indices=None):
    r"""Swap the ith and jth rows of the matrix A in place.

    Optional argument column_indices is a tuple that controls
    the columns being swapped.  Defaults to the entire row.
    """
    if column_indices is None:
        column_indices = (0, A.shape[1])

    pivot_row = A[i, column_indices[0]:column_indices[1]].copy()
    A[i, column_indices[0]:column_indices[1]] = A[j, column_indices[0]:column_indices[1]]
    A[j, column_indices[0]:column_indices[1]] = pivot_row


def LU_factorization(A):
    # YOUR CODE HERE
    raise NotImplementedError()
    return P, L, U
```

In [ ]:

```python
import scipy.linalg

# A = numpy.array([[2, 1, 1, 0], [4, 3, 3, 1], [8, 7, 9, 5], [6, 7, 9, 8]], dtype=float)
A = numpy.random.uniform(low=1.0, high=10.0, size=(25, 25))
P, L, U = LU_factorization(A)
P_s, L_s, U_s = scipy.linalg.lu(A)
numpy.testing.assert_allclose(P, numpy.linalg.inv(P_s))
numpy.testing.assert_allclose(L, L_s)
numpy.testing.assert_allclose(U, U_s, atol=1e-8)
print("Success!")
```

(c) (5 points) For this question write a function that solves the system $Ax = b$ given an $A$ and $b$, using your $LU$ factorization function and the pivot matrix. Again, do not use any of the functions from numpy or scipy other than to check your solution.
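Given the factorization $PA = LU$, the solve is forward substitution on $Ly = Pb$ followed by back substitution on $Ux = y$. A sketch (names hypothetical; numpy is used only for array storage and dot products):

```python
import numpy

def lu_solve_sketch(P, L, U, b):
    """Solve A x = b given P A = L U (a sketch)."""
    m = len(b)
    pb = P.dot(b)
    # forward substitution: L y = P b
    y = numpy.zeros(m)
    for i in range(m):
        y[i] = (pb[i] - L[i, :i].dot(y[:i])) / L[i, i]
    # back substitution: U x = y
    x = numpy.zeros(m)
    for i in range(m - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:].dot(x[i + 1:])) / U[i, i]
    return x
```

Since $L$ is unit lower triangular the division by `L[i, i]` is a no-op, but keeping it makes the sketch work for any nonsingular triangular factor.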

In [ ]:

```python
def solve(A, b):
    # YOUR CODE HERE
    raise NotImplementedError()
    return x
```

In [ ]:

```python
m = 10
A = numpy.random.uniform(size=(m, m))
b = numpy.random.uniform(size=(m))
x = solve(A, b)
x_n = numpy.linalg.solve(A, b)
numpy.testing.assert_allclose(x, x_n)
print("Success!")
```