Please answer the following questions in complete sentences in a typed manuscript and submit the solution on Gradescope by 5am on April 5th, our usual deadline.
Please identify anyone, whether or not they are in the class, with whom you discussed your homework. This problem is worth 1 point, but on a multiplicative scale.
Make sure you have included your source-code and prepared your solution according to the most recent Piazza note on homework submissions.
Derive the 2-point Gauss-Hermite quadrature rule.
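For checking your work, here is a sketch of where the derivation should land, assuming the weight function $e^{-x^2}$ and the physicists' Hermite polynomials. The nodes are the roots of $H_2(x) = 4x^2 - 2$, namely $x = \pm 1/\sqrt{2}$; symmetry together with matching the moment $\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$ gives equal weights $\sqrt{\pi}/2$:
$$\int_{-\infty}^{\infty} e^{-x^2} f(x)\, dx \approx \frac{\sqrt{\pi}}{2}\left[ f(-1/\sqrt{2}) + f(1/\sqrt{2}) \right],$$
which is exact for all polynomials of degree at most 3.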
Example application (from http://ice.uchicago.edu/2012_presentations/Faculty/Judd/Quadrature_ICE11.pdf). An investor holds one bond that will be worth 1 in the future and equity whose value is $Z = e^X$, where $X \sim N(\mu, \sigma^2)$. (So this means that the log of the value of the equity is a normally distributed random variable.) The expected utility is the expectation of the random number $u(1 + Z)$, where $u$ is a utility function; we'll use $u(W) = \frac{W^{1+\gamma}}{1+\gamma}$, where $\gamma < 0$. (This is a concave utility function because having more money doesn't give you all that much more utility.) Take the specific values of $\gamma$, $\mu$, and $\sigma$ from the linked notes. We want to find the expected utility to the investor! This involves evaluating the integral
$$E[u(1 + e^X)] = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} u(1 + e^x)\, e^{-(x-\mu)^2/(2\sigma^2)}\, dx.$$
Write a computer program to use Gauss-Hermite quadrature to approximate the value of this integral. You need to justify the number of points you use for the approximation.
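A sketch of the computation in Julia, building the Gauss-Hermite rule from the Jacobi matrix (the Golub-Welsch idea); the function names here are mine, and the parameter values passed to `expected_utility` should be whatever the assignment specifies:

```julia
using LinearAlgebra

# n-point Gauss-Hermite rule (weight e^{-t^2} on the real line) via the
# eigenvalues/eigenvectors of the Jacobi matrix for the Hermite
# polynomials: zero diagonal, off-diagonal entries sqrt(k/2).
function gausshermite(n)
    J = SymTridiagonal(zeros(n), sqrt.((1:n-1) ./ 2))
    F = eigen(J)
    return F.values, sqrt(pi) .* F.vectors[1, :].^2
end

# Utility u(c) = c^(1+gamma)/(1+gamma), concave when gamma < 0.
u(c, gamma) = c^(1 + gamma) / (1 + gamma)

# E[u(1 + e^X)] for X ~ N(mu, sigma^2), using the substitution
# x = mu + sqrt(2)*sigma*t to match the e^{-t^2} weight.
function expected_utility(mu, sigma, gamma, n)
    t, w = gausshermite(n)
    sum(w .* u.(1 .+ exp.(mu .+ sqrt(2) * sigma .* t), gamma)) / sqrt(pi)
end
```

Running `expected_utility` for increasing $n$ and watching the digits settle is one way to justify the number of points you use.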
In this problem, we will investigate multivariate quadrature for integrals such as
$$\int_{-1}^{1} \int_{-1}^{1} f(x,y)\, dy\, dx$$
using tensor product rules. Let $x_i$ and $w_i$ be the nodes and weights of an $n$-point 1-dimensional Gauss-Legendre rule. Then the multidimensional quadrature rule is:
$$\int_{-1}^{1} \int_{-1}^{1} f(x,y)\, dy\, dx \approx \sum_{i=1}^{n} \sum_{j=1}^{n} w_i w_j f(x_i, x_j).$$
We can derive this by applying the 1-dimensional rule to the outer $x$-integral first, holding $y$ fixed, and then applying it again to each of the resulting $y$-integrals:
$$\int_{-1}^{1} \int_{-1}^{1} f(x,y)\, dy\, dx \approx \sum_{i=1}^{n} w_i \int_{-1}^{1} f(x_i, y)\, dy \approx \sum_{i=1}^{n} w_i \sum_{j=1}^{n} w_j f(x_i, x_j).$$
Of course, this makes it clear we don't have to use the same number of points to integrate in $x$ and $y$! So in general, let $x_i, w_i$ be an $n_x$-point Gauss-Legendre quadrature rule for the $x$-variable and $y_j, u_j$ be an $n_y$-point Gaussian quadrature for the $y$-variable.
Implement a computer code to perform two-dimensional Gauss-Legendre quadrature using this type of construction. Your code should allow a user to input $n_x$ and $n_y$ to determine the number of points in each variable.
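A minimal sketch of such a code, assuming the Jacobi-matrix (Golub-Welsch) construction for the 1-dimensional rules; the function names are my own:

```julia
using LinearAlgebra

# 1-d Gauss-Legendre rule on [-1,1] via the eigenvalues and first
# eigenvector components of the Jacobi matrix.
function gausslegendre(n)
    beta = 0.5 ./ sqrt.(1 .- (2.0 .* (1:n-1)).^(-2))
    F = eigen(SymTridiagonal(zeros(n), beta))
    return F.values, 2 .* F.vectors[1, :].^2
end

# Tensor-product rule: nx points in x, ny points in y.
function gauss2d(f, nx, ny)
    x, wx = gausslegendre(nx)
    y, wy = gausslegendre(ny)
    sum(wx[i] * wy[j] * f(x[i], y[j]) for i in 1:nx, j in 1:ny)
end
```

For instance, `gauss2d((x,y) -> x^2*y^2, 3, 3)` should reproduce $\int_{-1}^1\int_{-1}^1 x^2 y^2\, dy\, dx = 4/9$.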
Use your code to estimate the integrals using points in each dimension.
This is an open-ended question that requires you to investigate. You can also find the answer in many textbooks, but if you do so, make sure you document your sources and demonstrate the effect that is claimed. We saw in class that an $n$-point Gaussian quadrature rule exactly integrates polynomials of degree up to $2n-1$. In this problem, I want you to generate a conjecture about the degree of exactness of multidimensional Gaussian quadrature. You should use your code from part 1, along with carefully constructed examples, to support a statement such as:
My evidence suggests that 2d Gauss-Legendre quadrature will exactly
integrate two-dimensional functions $f(x,y)$ when ...
Here are some helpful ideas that may play a role.
The total degree of a multidimensional polynomial is the maximum over its terms of the sum of the degrees in each variable. So $x^2 y^3 + x y$ has total degree $5$.
Another type of degree is the largest degree in each variable separately, so $x^2 y^3 + x y$ involves polynomials of degree at most $2$ in $x$ and at most $3$ in $y$.
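One way to gather evidence for a conjecture is to test monomials $x^p y^q$ directly, since a polynomial is integrated exactly precisely when all of its monomials are. A sketch of such an experiment (assuming a 1-d Gauss-Legendre routine built from the Jacobi matrix; the names are my own):

```julia
using LinearAlgebra

# 1-d Gauss-Legendre rule via the Jacobi matrix.
function gl(n)
    beta = 0.5 ./ sqrt.(1 .- (2.0 .* (1:n-1)).^(-2))
    F = eigen(SymTridiagonal(zeros(n), beta))
    return F.values, 2 .* F.vectors[1, :].^2
end

# Exact value of the integral of x^p y^q over [-1,1]^2.
exact(p, q) = (isodd(p) || isodd(q)) ? 0.0 : (2 / (p + 1)) * (2 / (q + 1))

# Report which monomials an n-by-n tensor-product rule gets exactly.
function exactness_table(n, maxdeg)
    x, w = gl(n)
    for p in 0:maxdeg, q in 0:maxdeg
        approx = sum(w[i] * w[j] * x[i]^p * x[j]^q for i in 1:n, j in 1:n)
        println("p=$p q=$q exact? ", abs(approx - exact(p, q)) < 1e-12)
    end
end
```

Scanning the table for several $n$ should suggest how the exact monomials relate to the per-variable degrees $p$ and $q$.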
In this problem, we'll study how different quadrature methods converge on a variety of problems. For a technical paper on this idea, see Trefethen, Is Gauss Quadrature Better than Clenshaw-Curtis?, SIAM Review, 2008. In this problem, we'll be studying and reproducing Fig. 2 from that paper.
The functions are all defined on the region $[-1,1]$ and are:
The quadrature methods are:
Monte Carlo quadrature. Monte Carlo quadrature is a randomized method. We simply sample $n$ points between $a$ and $b$ uniformly at random, take the average of all the function values, and multiply by the length of the interval. For instance, the following Julia code evaluates a Monte Carlo approximation of $\int_a^b f(x)\,dx$:
using Statistics

function montecarlo(f::Function, n::Int, a::Float64, b::Float64)
    @assert a < b
    xi = (b-a).*rand(n) .+ a      # n uniform samples in [a,b]
    (b-a)*mean(f.(xi))            # average function value times interval length
end
Clenshaw-Curtis quadrature. This quadrature uses Chebyshev points of the second kind to build an interpolatory quadrature formula instead of uniformly spaced points (as is common in Newton-Cotes quadrature). It just so happens that there is an incredibly elegant method to compute the weights associated with this quadrature based on the fast Fourier transform. See Trefethen's paper above for a 6-line Matlab code that implements Clenshaw-Curtis quadrature.
Gauss-Legendre quadrature. In Gauss-Legendre quadrature, we pick the quadrature nodes and weights together. This gives even more accuracy. To find these nodes and weights, we must evaluate the eigenvalues and one component of each eigenvector of the Jacobi matrix associated with the Legendre orthogonal polynomials. The Jacobi matrix for these polynomials is easy: it is symmetric tridiagonal with zeros on the diagonal and off-diagonal entries
$$\beta_k = \frac{1}{2\sqrt{1 - (2k)^{-2}}}, \quad k = 1, \ldots, n-1.$$
The size of the matrix should be $n \times n$ if you want an $n$-point formula. To get the nodes, we just look at the eigenvalues of the matrix. To get the weights, we need to take the first component of each normalized eigenvector, square it, and multiply by $2$. (Hint: see Trefethen's paper for a simple Matlab code.)
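A sketch of this construction in Julia, essentially a port of the short Matlab routine in Trefethen's paper (the function name is mine):

```julia
using LinearAlgebra

# n-point Gauss-Legendre rule: the eigenvalues of the Jacobi matrix give
# the nodes; 2*(first eigenvector component)^2 gives the weights.
function gauss(n)
    beta = 0.5 ./ sqrt.(1 .- (2.0 .* (1:n-1)).^(-2))   # off-diagonal entries
    J = SymTridiagonal(zeros(n), beta)
    F = eigen(J)
    return F.values, 2 .* F.vectors[1, :].^2
end
```

For example, `gauss(2)` should return nodes $\pm 1/\sqrt{3}$ with weights $1$, matching the classical 2-point rule.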
By whatever method you want, determine the exact values of these 6 integrals. (Hint: Wolfram Alpha is okay!)
Write a program to create the Jacobi matrix for Gauss-Legendre quadrature and show the eigenvalues of the matrix for .
Implement a computer program for Clenshaw-Curtis quadrature. (Hint: you can copy Trefethen's routines, with attribution, but you must explain the steps. A Julia implementation is advisable too.)
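For reference, here is a rough Julia translation of the FFT-based routine in Trefethen's paper; treat it as a sketch to check against, not a substitute for your own explanation. It assumes $n$ counts the intervals, so there are $n+1$ Chebyshev nodes:

```julia
using FFTW, LinearAlgebra

# (n+1)-point Clenshaw-Curtis approximation of the integral of f over
# [-1,1], following the FFT-based construction in Trefethen's paper.
function clenshaw_curtis(f, n)
    x = cos.(pi .* (0:n) ./ n)                  # Chebyshev points of the 2nd kind
    fx = f.(x) ./ (2n)
    g = real(fft(fx[[1:n+1; n:-1:2]]))          # mirror the samples for the FFT
    a = [g[1]; g[2:n] .+ g[2n:-1:n+2]; g[n+1]]  # Chebyshev coefficients
    w = zeros(n + 1)
    w[1:2:end] = 2 ./ (1 .- (0:2:n).^2)         # integrals of even Chebyshev polys
    return dot(w, a)
end
```

The FFT recovers the Chebyshev coefficients of the interpolant, and the dot product integrates each Chebyshev polynomial exactly.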
Implement a computer program for Gauss-Legendre quadrature.
Note that the Monte Carlo method is a randomized algorithm, so the result will change if you do it again. Each run is called a trial or sample. Use a computer implementation of Monte Carlo integration to see how much the values in a Monte Carlo integration can vary between one trial and the next. Compute the variance over repeated trials, for several sample sizes, for each of the above functions.
Prepare 6 figures like those in Trefethen's paper for the 6 functions, except use Monte Carlo integration instead of Newton-Cotes.
Empirically determine how computing the Gaussian quadrature nodes and weights scales as a function of $n$, the number of points, in terms of CPU time. (Hint: consider a wide range of $n$.) You should be able to justify your final scaling constant in terms of the runtime of a known algorithm.
(Optional) There are some recent developments in fast quadrature methods that produce the nodes and weights much more quickly. In Julia, these are implemented in the FastGaussQuadrature package and the gausslegendre function, and in Matlab, the function legpts.m from Chebfun implements them (an old simple version is here: http://www.mathworks.com/matlabcentral/fileexchange/23972-chebfun-v4-old-version--please-download-current-version-instead/content/chebfun/legpts.m).
Determine the simplest equation that describes the empirical scaling for these routines to find the quadrature points and weights. (Hint: consider a wide range of $n$.)
The following sequences converge to their limits as $n \to \infty$:
Indicate the type of convergence for each sequence in terms of the rate and order of convergence.
Using any method we've seen to solve a scalar nonlinear equation (bisection, false position, secant), develop a routine to compute $\sqrt{x}$ using only addition, subtraction, multiplication, and division (and basic control structures) to numerical precision. (Use double-precision.)
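As a sketch of one possibility, here is plain bisection on $f(x) = x^2 - a$, which uses only multiplication, division, and comparisons; the name `mysqrt` is my own:

```julia
# Compute sqrt(a) for a >= 0 by bisection on f(x) = x^2 - a.
# Only +, -, *, / and comparisons are used; the loop stops when the
# midpoint can no longer be distinguished from the endpoints in
# double precision.
function mysqrt(a)
    a < 0 && error("a must be nonnegative")
    lo = 0.0
    hi = a < 1 ? 1.0 : a         # sqrt(a) lies in [0,1] or [0,a]
    while true
        mid = (lo + hi) / 2
        (mid == lo || mid == hi) && return mid
        if mid * mid > a
            hi = mid             # root is in the left half
        else
            lo = mid             # root is in the right half
        end
    end
end
```

The termination test `mid == lo || mid == hi` exploits the fact that bisection in floating point eventually produces a midpoint identical to an endpoint, which is about as close to "numerical precision" as the method can get.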
Compare the results of your method to the Matlab/Julia/Python function sqrt.
Comment on any differences that surprise you.