CS525: Parallel Computing
Fall 2008.
Ananth Grama, ayg@cs.purdue.edu, 494 6964
MWF 11:30 - 12:20 PM
LWSN 1103
Office Hours:
W, 3:00 - 4:30, and by appointment.
TA: Karthik Kambatla, kkambatl@cs.purdue.edu
Office Hours: Thursday 3:00 - 4:00, and by appointment.
Course Announcements:
Important announcements relating to the course will be made here. Please
check this area of the web page periodically. Announcements will include
(but are not limited to) release of assignments, errata, and grades.
Please read this policy before starting, as I intend to enforce it
strictly.
Assignment 1:
Problems 2.6, 2.13, 2.24, 2.27, 2.28, and 2.29 of the text `Introduction to Parallel Computing', by Grama et al.
Deadline: Sept 26, 2008, in class.
Assignment 2:
Problems 4.5, 4.7, 4.8, 4.16, 4.19, and 4.20 of the text `Introduction to
Parallel Computing', by Grama et al. Deadline: Oct 10, 2008, in class.
Assignment 3:
Problems 5.1, 5.2, 5.5, 5.6, 5.9, and 5.13 of the text `Introduction to Parallel Computing', by Grama et al. Deadline: Oct 22, 2008, in class.
Assignment 4:
Problems 8.1, 8.4, 8.5, 8.6, 8.12, 8.17, and 8.21 of the text `Introduction to Parallel Computing', by Grama et al. Deadline: Nov 12, 2008, in class.
Assignment 5:
Code quicksort using pthreads. Your program should take as input the list
size and the number of threads, generate a random list of the required
size, and partition the list across threads. Use the first number in
any (sub)list as the pivot. Rearrange the list around the pivot; each
thread is then assigned to either the left or the right part of the list.
Recurse until no sublist has more than n/p elements, at which point it can
be sorted locally (use the Unix library function qsort if needed). Execute
the code with a varying number of threads and plot performance on
dual-/quad-core processors (if available). One possible structure is
sketched below.
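
The following is a minimal sketch of one way to structure this assignment,
not a reference solution. It assumes the list size and thread count are
passed on the command line, pivots on the first element of each sublist,
and falls back to the library qsort at or below n/p elements; names such
as psort and sort_arg are ours. Compile with cc -pthread.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

/* comparison callback for the library qsort used on small sublists */
static int cmp_long(const void *a, const void *b) {
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

typedef struct {
    long *data;    /* start of this (sub)list */
    long  n;       /* its length */
    long  cutoff;  /* sort locally at or below n/p elements */
    int   threads; /* threads still available to this call */
} sort_arg;

static void *psort(void *vp) {
    sort_arg *a = vp;
    if (a->n <= a->cutoff || a->threads <= 1) {
        /* small enough (or out of threads): sort locally */
        qsort(a->data, (size_t)a->n, sizeof(long), cmp_long);
        return NULL;
    }
    /* partition around the first element, as the assignment asks */
    long pivot = a->data[0];
    long i = 1, j = a->n - 1;
    while (i <= j) {
        while (i <= j && a->data[i] <  pivot) i++;
        while (i <= j && a->data[j] >= pivot) j--;
        if (i < j) { long t = a->data[i]; a->data[i] = a->data[j]; a->data[j] = t; }
    }
    a->data[0] = a->data[j];   /* move the pivot into its final slot j */
    a->data[j] = pivot;

    /* split the remaining threads between the two halves:
       the left half goes to a new thread, the right stays here */
    sort_arg left  = { a->data,         j,            a->cutoff, a->threads / 2 };
    sort_arg right = { a->data + j + 1, a->n - j - 1, a->cutoff, a->threads - a->threads / 2 };
    pthread_t tid;
    pthread_create(&tid, NULL, psort, &left);
    psort(&right);
    pthread_join(tid, NULL);
    return NULL;
}

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <list-size> <num-threads>\n", argv[0]);
        return 1;
    }
    long n = atol(argv[1]);
    int  p = atoi(argv[2]);                 /* assumes p >= 1 */
    long *data = malloc((size_t)n * sizeof *data);
    for (long k = 0; k < n; k++)
        data[k] = random();                 /* random input list */
    sort_arg top = { data, n, n / p, p };
    psort(&top);
    for (long k = 1; k < n; k++)            /* sanity check */
        if (data[k - 1] > data[k]) { fprintf(stderr, "not sorted\n"); return 1; }
    printf("sorted %ld elements using up to %d threads\n", n, p);
    free(data);
    return 0;
}

Note that pthread_create is called only while spare threads remain, so at
most p threads are ever active; timing the top-level psort call (not shown)
gives the numbers for the performance plot.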
Assignment 6:
Problems 9.3, 9.23, 9.30, 10.1, 10.6, 10.8, 11.3, and 11.11 of the text
`Introduction to Parallel Computing', by Grama et al.
Course Contents:
CS525 (Parallel Computing) deals with emerging trends in the use of large-scale
computing platforms, ranging from desktop multicore processors and
tightly coupled SMPs to message-passing clusters and multiclusters. The
course consists of four major parts:
-
Parallel computing platforms: This part of the class outlines parallel
computing hardware. Topics covered include processor and memory architectures,
SMP and message-passing hardware, interconnection networks, network hardware,
and evaluation metrics for architectures. Cost models for communication are
also developed (the basic model is summarized after this list).
-
Parallel Programming: Programming models and language support for programming
parallel platforms are discussed in this part. Message passing using MPI,
thread-based programming using POSIX threads, and directive-based programming
using OpenMP will be covered (a minimal message-passing example appears after
this list).
-
Parallel Algorithms: Starting from design principles for parallel algorithms,
this part develops parallel algorithms for a variety of problems. Various
metrics for evaluating these algorithms are also discussed.
-
Applications: A variety of parallel applications from diverse domains such
as data analysis, graphics and visualization, particle dynamics, and
discrete event and direct numerical simulations will be discussed.
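
For reference, the communication cost model developed in the text (Grama et
al.) charges a startup time t_s, a per-hop time t_h, and a per-word transfer
time t_w, so sending an m-word message over l links costs roughly

    t_comm = t_s + l*t_h + m*t_w

and, since the per-hop term is small on modern cut-through networks, this is
usually simplified to t_comm = t_s + m*t_w.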
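
As a minimal illustration of the message-passing model mentioned under
Parallel Programming (a sketch only, not course material), here is the
canonical MPI first program, in which every process reports its rank.
Compile with mpicc and run with, e.g., mpirun -np 4 ./hello.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}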