CS525: Parallel Computing
Fall 2014.
Ananth Grama, ayg@cs.purdue.edu, 494 6964
Tu/Th 10:30 - 11:45 AM
Lambert 105
Office Hours:
W, 1:30 - 3:00, and by appointment.
TA: Rohit Bhatia (rohit2412@gmail.com)
Course Announcements:
Important announcements relating to the course will be made here. Please
check this area of the web page periodically. Announcements will include
(but are not limited to) the release of assignments, errata, and grades.
Course Contents
CS525, Parallel Computing, deals with emerging trends in the use of large-scale
computing platforms, ranging from desktop multicore processors and tightly
coupled SMPs to message-passing platforms and state-of-the-art virtualized
cloud computing environments. The course consists of four major parts:
- Parallel Programming: Programming models and language support for programming
parallel platforms are discussed. Message passing using MPI, thread-based
programming using POSIX threads, directive-based programming using OpenMP,
and GPU programming in CUDA are covered (a brief MPI example appears after
this list).
- Parallel and Distributed Platforms: This part of the class outlines parallel
computing hardware. Topics covered include processor and memory architectures;
multicore, SMP, and message-passing hardware; interconnection networks;
and evaluation metrics for architectures. Cost models for communication are
also developed.
- Parallel and Distributed Algorithms: Starting from design principles for
parallel algorithms, this part develops parallel algorithms for a variety of
problems. Various metrics for evaluating these algorithms are also discussed.
- Applications: A variety of parallel applications from diverse domains, such
as data analysis, graphics and visualization, particle dynamics, and
discrete-event and direct numerical simulations, will be discussed.
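
To give a flavor of the programming-models unit, below is a minimal MPI
program in C in which each process reports its rank and the total number of
processes. It is an illustrative sketch only, not part of any assignment.

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv) {
      int rank, size;
      MPI_Init(&argc, &argv);               /* start the MPI runtime       */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process        */
      MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes   */
      printf("Hello from rank %d of %d\n", rank, size);
      MPI_Finalize();                       /* shut down the MPI runtime   */
      return 0;
  }

Compile and run with any MPI implementation, for example:
  mpicc hello.c -o hello
  mpirun -np 4 ./hello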
Please read this policy before starting, as I intend to enforce it strictly.
Grading Policy
To be discussed in class.