
Matrix Computations

David Gleich

Purdue University

Fall 2024

Course number CS-51500

Tuesday-Thursday 10:30-11:45am.


Homework 3b

Please answer the following questions in complete sentences in a clearly prepared manuscript and submit the solution by the due date on Gradescope.

Remember that this is a graduate class. There may be elements of the problem statements that require you to fill in appropriate assumptions. You are also responsible for determining what evidence to include. An answer alone is rarely sufficient, but neither is an overly verbose description required. Use your judgement to focus your discussion on the most interesting pieces. The answer to "should I include 'something' in my solution?" will almost always be: Yes, if you think it helps support your answer.

Problem 0: Homework checklist

Problem 1: Recursive block elimination

In class, we saw "element-wise" variable elimination. That is, in the first step, we eliminated the first variable from the problem and built a new problem for every other variable.

In fact, we can do this for multiple variables too. There's only one small catch that has to do with pivoting, but we won't worry about that for this problem. That is, you can assume that everything you need to be non-singular is actually non-singular.

Why this matters A recent innovation is the presence of large "tensor cores" that support matrix multiplication directly on GPUs and other accelerators. These can make matrix-multiplication operations much, much faster than expected. We are going to use them to solve a linear system via block substitution. As a simple example, note that Apple's recent M-series chips include a dedicated matrix-multiplication instruction! As another example, Intel has its own extensions too.

Your problem Write down the steps of an algorithm to solve $Ax = b$ via recursive block elimination. That is, divide $A$ into blocks:
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$
where all blocks have the same size. In this case, assume that $A$ has a power-of-two dimension so that you can do this recursively as well until you get down to a $1 \times 1$ system. Then if $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ and $b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$ are the vectors that conform to the partitioning of $A$, develop an algorithm that uses as a subroutine:

    # return the matrix X = A^{-1}*B, i.e., the result of solving
    # k linear systems, one for each of the k columns of B
    function solve(A::Matrix, B::Matrix)
    end

and matrix-matrix multiplication to solve the linear system. Operations you are allowed: calls to solve on half-sized subproblems, matrix-matrix multiplication, and matrix addition and subtraction.

For testing, you can use the fact that pivoting isn't required for symmetric positive definite matrices. So just compute a random matrix $R$ and let $A = R^T R$.
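For instance, a quick way to set up such a test in Julia (a sketch; the size and the names R, A, b here are my own, not from the assignment):

    using LinearAlgebra

    n = 8            # a power-of-two dimension, per the problem setup
    R = randn(n, n)  # random square matrix
    A = R' * R       # A = R'R is symmetric positive definite (almost surely)
    b = randn(n)     # a right-hand side for testing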

Notes You may find it helpful to think about doing this with only two variables at a time first. Here, I write down a few small steps in the process to get you started. I have not fully debugged these notes, as they are meant as a supplementary guide to this problem: a bridge from the lecture notes to the question.

Let
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}.$$
And let
$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.$$
Let $A_{11} x_1 + A_{12} x_2 = b_1$ be the first block equation. Then note that $x_1 = A_{11}^{-1}(b_1 - A_{12} x_2)$. So like we found in class, we can find $x_1$ given $x_2$ and $b_1$. (Assuming that $A_{11}$ is non-singular. Which, to be clear, we are assuming in this problem!) Let $S = A_{22} - A_{21} A_{11}^{-1} A_{12}$. The last expression shows that we can write $x_1$ as a function of $x_2$. At this point, we can then substitute this into the second block equation like we did for the other cases in the class notes, giving $S x_2 = b_2 - A_{21} A_{11}^{-1} b_1$.
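To connect these notes to code, here is a minimal Julia sketch of the two-variable block step, assuming the solve(A, B) subroutine above works as specified; the name block_eliminate and the direct call to solve on the Schur complement (which the full algorithm replaces with recursion) are illustrative choices of mine:

    # Sketch of one block-elimination step; assumes A11 is non-singular.
    function block_eliminate(A11, A12, A21, A22, b1, b2)
        Y = solve(A11, A12)                      # Y = A11^{-1} A12
        z = vec(solve(A11, reshape(b1, :, 1)))   # z = A11^{-1} b1
        S = A22 - A21 * Y                        # Schur complement of A11
        # In the full algorithm, this solve becomes a recursive call.
        x2 = vec(solve(S, reshape(b2 - A21 * z, :, 1)))
        x1 = z - Y * x2                          # x1 = A11^{-1}(b1 - A12*x2)
        return [x1; x2]
    end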

Problem 2: More on Cholesky

Note, we often use either $L$ (lower triangular, with $A = LL^T$) or $R = L^T$ (upper triangular, with $A = R^T R$) as the Cholesky factor.

Your professor also boldly asserted that we preserve positive definiteness after one step of the Cholesky factorization. Recall that this was:
$$A = \begin{bmatrix} \alpha & v^T \\ v & B \end{bmatrix} = \begin{bmatrix} \sqrt{\alpha} & 0 \\ v/\sqrt{\alpha} & I \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & B - vv^T/\alpha \end{bmatrix} \begin{bmatrix} \sqrt{\alpha} & v^T/\sqrt{\alpha} \\ 0 & I \end{bmatrix}.$$
So we set $L_{11} = \sqrt{\alpha}$ and $L_{21} = v/\sqrt{\alpha}$. Then, we recursively compute the Cholesky factorization of the Schur complement $S = B - vv^T/\alpha$. This only works if $S$ is positive definite. This is discussed in the notes. (Hint: you can do it from the definition of positive definiteness: $x^T A x > 0$ for all $x \neq 0$.)
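As a concrete illustration of this recursion, here is a minimal, unoptimized Julia sketch; it is my own illustration under the assumption that $A$ is symmetric positive definite, not the course implementation:

    # One step of Cholesky, applied recursively; assumes A is symmetric
    # positive definite, so alpha > 0 at every step.
    function cholesky_recursive(A::Matrix{Float64})
        n = size(A, 1)
        L = zeros(n, n)
        alpha = A[1, 1]
        L[1, 1] = sqrt(alpha)
        if n > 1
            v = A[2:end, 1]
            L[2:end, 1] = v / sqrt(alpha)
            S = A[2:end, 2:end] - v * v' / alpha   # Schur complement
            L[2:end, 2:end] = cholesky_recursive(S)
        end
        return L   # A == L * L'
    end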

  1. Briefly explain why this step justifies that a Cholesky factorization exists for any positive definite matrix. (This is really like a one or two sentence answer.)

  2. One of the most useful aspects of the Cholesky factorization is as a way to check if a matrix is positive definite or negative definite. A matrix $A$ is negative definite if $x^T A x < 0$ for all $x \neq 0$. Change our Julia implementation to report if a matrix is not positive definite. (Hint: this relates to why a Cholesky factorization always exists for a positive definite matrix in the previous problem.)

Use this to test if the matrix for Poisson's equation from Homework 1 is positive or negative definite.
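One possible shape for such a check, sketched here as a standalone routine (names mine) rather than as a patch to the course's implementation:

    # Sketch: Cholesky succeeds exactly when every pivot is positive, so a
    # nonpositive pivot certifies that A is not positive definite.
    function ispositivedefinite(A::Matrix{Float64})
        A = copy(A)
        n = size(A, 1)
        for k in 1:n
            A[k, k] > 0 || return false            # nonpositive pivot found
            A[k, k] = sqrt(A[k, k])
            A[k+1:n, k] ./= A[k, k]
            A[k+1:n, k+1:n] .-= A[k+1:n, k] * A[k+1:n, k]'  # Schur update
        end
        return true
    end

    # A is negative definite exactly when -A is positive definite.
    isnegativedefinite(A) = ispositivedefinite(-A)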

Problem 3: Direct Methods for Tridiagonal systems (Even more on Cholesky!)

In class, we said the definition of a sparse matrix was one with enough zeros to take advantage of it. In this problem, we'll show how to take advantage of tridiagonal structure in a matrix.

Make sure you have read through the notes on the Cholesky factorization in our notes on Elimination methods.

Let $A$ be a symmetric, tridiagonal, positive definite matrix:
$$A = \begin{bmatrix} \alpha_1 & \beta_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{n-1} \\ & & \beta_{n-1} & \alpha_n \end{bmatrix}.$$

  1. Let $A_k$ be the $k \times k$ leading principal submatrix of $A$. Suppose you are given the Cholesky factorization $A_k = L_k L_k^T$. Determine the structure of $L_k$ (hint: $A_k$ must be tridiagonal too!).

  2. Now, show how to compute the Cholesky factorization of $A_{k+1}$ from that of $A_k$ in a constant number of operations.

  3. Use the result of the first two parts to write down an algorithm that computes the Cholesky factorization of $A$ in a linear amount of work (starting from scratch!). One possible shape is sketched below.
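For reference, a sketch of what parts 2 and 3 can look like in Julia, assuming $A$ is stored by its diagonal alpha and off-diagonal beta (these names and the storage convention are my own assumptions):

    # Sketch: the Cholesky factor of a tridiagonal SPD matrix is lower
    # bidiagonal; each step extends L_k to L_{k+1} with constant work.
    function tridiag_cholesky(alpha::Vector{Float64}, beta::Vector{Float64})
        n = length(alpha)
        d = zeros(n)        # diagonal of L
        e = zeros(n - 1)    # subdiagonal of L
        d[1] = sqrt(alpha[1])
        for k in 1:n-1
            e[k] = beta[k] / d[k]                # constant work per step
            d[k+1] = sqrt(alpha[k+1] - e[k]^2)   # so linear work overall
        end
        return d, e   # A == L * L' with L = Bidiagonal(d, e, :L)
    end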

Problem 4: Direct methods for eigenvectors.

Suppose you try to compute an eigenvector using the "elimination" techniques from class. This question will ask you to investigate the issues that arise.

First, we will assume we know the eigenvalue $\lambda$ and that this eigenvector is simple. (Don't worry if you don't know what this means. What it means for this problem is that the linear system we build only has the mild issue we discuss now.) Consequently, we wish to find a solution of the linear system:
$$(A - \lambda I) x = 0.$$
This linear system is singular because any scalar multiple of an eigenvector is also a solution. However, this might be considered a mild singularity, as we simply get to choose the scale of the solution. (If the eigenvalue were not simple, then the solution would be more complicated; that's why we assume simple, which formally means that $\operatorname{rank}(A - \lambda I) = n - 1$.)

Explain what happens when we try to use our elimination solver solve1_pivot2 on this system, and also alter this routine so that we are able to produce an eigenvector when given the matrix $A$ and the eigenvalue $\lambda$.
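To see the issue concretely without the course's solve1_pivot2, here is a small hypothetical demo using Julia's built-in LU; the fix-the-scale trick at the end is one standard workaround, not necessarily the intended alteration:

    using LinearAlgebra

    A = [2.0 1.0; 1.0 2.0]
    λ = 3.0                    # a simple eigenvalue of A
    M = A - λ * I
    F = lu(M, check = false)   # elimination with partial pivoting
    @show F.U[end, end]        # the last pivot is (numerically) zero

    # One workaround: fix the scale by setting the last entry of x to 1,
    # then solve the leading (n-1)-by-(n-1) system for the rest.
    n = size(M, 1)
    x = ones(n)
    x[1:n-1] = M[1:n-1, 1:n-1] \ (-M[1:n-1, n])
    @show norm(M * x)          # ≈ 0, so x is an eigenvector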

Problem 5: QR and Cholesky

Let $A$ be an $m \times n$ tall ($m > n$), full-rank matrix.

Let $A = QR$ be the QR decomposition. Let $L$ be the Cholesky factor of the symmetric, positive definite matrix $A^T A$. (So $A^T A = L L^T$.) Show that $R$ and $L$ have an extremely close relationship and discuss any uniqueness properties of Cholesky or QR that impact this relationship.
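A quick numerical sanity check of this relationship (a sketch, not a proof; the sizes here are arbitrary):

    using LinearAlgebra

    A = randn(10, 4)                  # tall, full rank (almost surely)
    R = qr(A).R                       # 4-by-4 upper triangular QR factor
    C = cholesky(Symmetric(A' * A))   # A'A == C.U' * C.U
    @show norm(abs.(R) - abs.(C.U))   # ≈ 0: they agree up to row signs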