This page documents what we do in class. It contains C programs that we will discuss in class, reading assignments from our textbook, simple homework exercises that you can work on for practice and exam preparation (not for credit), and links to other sources of information.
The reading assignments and practice problems are listed by the date on which they were assigned. The sample program files are listed by the date on which they were used. You can click on a source-code link to view the code, and then use your browser's "File -> Save As..." menu item to save a copy of the file on your computer. It is a good idea for you to "play" with these example programs: compile them, run them, make simple changes to them, and compile and run them again.
- Tuesday, May 18.
- Here is the final exam (and its LaTeX source code).
- Tuesday, May 11.
- Here is the final exam (and its LaTeX source code). This is due at noon on Thursday, May 13.
- Thursday, May 6.
- Here is a reference about nested parallelism.
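- To make the idea concrete, here is a minimal sketch of nested parallelism in OpenMP: an inner parallel region opened from inside an outer one. The file name and thread counts are just for illustration, and it assumes gcc with the -fopenmp flag.

```c
/* Minimal sketch of nested parallelism: an inner parallel region opened
   from inside an outer one.  Compile with:  gcc -fopenmp nested.c */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    omp_set_nested(1);            /* allow inner regions to create new teams */

    #pragma omp parallel num_threads(2)
    {
        int outer = omp_get_thread_num();

        #pragma omp parallel num_threads(2)
        {
            int inner = omp_get_thread_num();
            printf("outer thread %d, inner thread %d\n", outer, inner);
        }
    }
    return 0;
}
```

With nesting enabled you should see four lines of output; if nesting is disabled (the default in many implementations), each inner region runs with a team of one thread and you see only two.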
- Thursday, April 29.
- Here are two references about how OpenMP compilers transform directives into threads.
- I have added more example OpenMP programs to the following zip file.
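- As a rough picture of what those references describe, here is a hand-written sketch of what a simple `#pragma omp parallel` region amounts to after the compiler "outlines" its body into a function and hands it to a team of threads. Real compilers emit calls into their own runtime library (for example, GCC's libgomp) rather than calling pthread_create directly, so treat this only as a conceptual approximation.

```c
/* A hand-written, conceptual picture of what a compiler does with
       #pragma omp parallel
       { ...body... }
   The body is "outlined" into a separate function, a team of threads is
   created to run it, and the joins act as the implicit barrier at the end.
   Compile with:  gcc outlined.c -lpthread */
#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 4

/* the outlined body of the parallel region */
static void *outlined_region(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d executing the outlined parallel region\n", id);
    return NULL;
}

int main(void)
{
    pthread_t team[NUM_THREADS];
    int ids[NUM_THREADS];
    int i;

    /* the "fork" implied by the directive */
    for (i = 0; i < NUM_THREADS; i++) {
        ids[i] = i;
        pthread_create(&team[i], NULL, outlined_region, &ids[i]);
    }

    /* the implicit barrier / "join" at the end of the region */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(team[i], NULL);

    return 0;
}
```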
- Tuesday, April 27.
- Here is a nice introduction to the main ideas behind OpenMP. It is the first chapter from a very good book on OpenMP. (But the example OpenMP program given on page 20 is wrong. Can you find the error?)
- I have added more example OpenMP programs to the following zip file.
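- The central idea the chapter introduces is the fork-join model: one pragma forks a team of threads, every thread runs the block, and they all join again at the implicit barrier. Here is a minimal sketch of that idea (the file name is arbitrary; it assumes gcc with -fopenmp).

```c
/* The fork-join model in one pragma: the master thread forks a team,
   every thread runs the block, and they all join at the implicit barrier.
   Compile with:  gcc -fopenmp hello_omp.c */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    printf("before the parallel region: one thread\n");

    #pragma omp parallel              /* fork a team of threads */
    {
        int id = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        printf("hello from thread %d of %d\n", id, nthreads);
    }                                 /* implicit barrier, then join */

    printf("after the parallel region: one thread again\n");
    return 0;
}
```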
- Thursday, April 22.
- If you would like to apply for an account on Purdue Calumet's new Miner cluster, then I have been told that you should follow the instructions given on this page. Use me as your "supervisor/advisor". My Career Account login name is "rlkraft".
- I have upgraded the version of gcc in the small development environment that we have been using in class so that it now compiles OpenMP programs.
- Here are some example OpenMP programs.
- Tuesday, April 20.
- Read pages 193-199 in Chapter 6, about OpenMP, from our textbook.
- Here is some online documentation about OpenMP.
- Here are the Pthreads and OpenMP versions of the Count3's program from the textbook.
- Two principal kinds of parallelism.
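- The Count3's program mentioned above simply counts how many elements of an array are equal to 3; it is the textbook's running example of data parallelism (the same operation applied to different parts of the data by different threads). Here is a minimal OpenMP sketch of that computation, written from scratch rather than copied from the textbook.

```c
/* Count3's: count how many elements of an array are equal to 3.
   The parallel for splits the iterations among the threads and the
   reduction combines their private counts -- data parallelism in a
   few lines.  Compile with:  gcc -fopenmp count3s_omp.c */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    int *array = malloc(N * sizeof(int));
    int i, count = 0;

    for (i = 0; i < N; i++)           /* fill the array with values 0..3 */
        array[i] = rand() % 4;

    #pragma omp parallel for reduction(+:count)
    for (i = 0; i < N; i++)
        if (array[i] == 3)
            count++;

    printf("number of 3's: %d\n", count);
    free(array);
    return 0;
}
```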
- Thursday, April 15.
- Read Chapter 4, pages 88-110, from our textbook.
- Here are sample PBS batch files for the Falcon cluster. I have not been able to completely test these, since the Falcon cluster lately has been monopolized by a single job.
- Thursday, April 8.
- Here are two overviews of the material from Chapter 3.
- Tuesday, April 6.
- See the homework page for your fifth homework assignment.
- Read Chapter 3, pages 61-84, from our textbook.
- Thursday, April 1.
- See the homework page for your fourth homework assignment.
- Here is the MPI example of "2D Successive over-relaxation" from Chapter 7, pages 224-226, that uses non-blocking communication to overlap computation with communication.
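- The pattern behind that example, reduced to a toy program: start the boundary exchange with MPI_Irecv/MPI_Isend, do computation that does not need the incoming values, and only then wait. This sketch exchanges a single double with each neighbor instead of a row of the SOR grid, and the loop labelled "interior" just stands in for real work.

```c
/* Sketch of overlapping computation with communication: start a
   non-blocking halo exchange, do work that does not need the incoming
   data, then wait before using it.
   Compile and run with:  mpicc overlap.c -o overlap ; mpiexec -n 4 overlap */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, i, nreqs = 0;
    double my_edge, halo_above = 0.0, halo_below = 0.0, interior = 0.0;
    MPI_Request reqs[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    my_edge = (double)rank;                 /* stands in for a boundary row */

    /* 1. start the halo exchange with both neighbors (non-blocking) */
    if (rank > 0) {
        MPI_Irecv(&halo_above, 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD, &reqs[nreqs++]);
        MPI_Isend(&my_edge,    1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD, &reqs[nreqs++]);
    }
    if (rank < size - 1) {
        MPI_Irecv(&halo_below, 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD, &reqs[nreqs++]);
        MPI_Isend(&my_edge,    1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD, &reqs[nreqs++]);
    }

    /* 2. "interior" computation that does not need the halo values;
          it overlaps with the communication started above            */
    for (i = 0; i < 1000000; i++)
        interior += 1.0 / (i + 1.0);

    /* 3. wait for the exchange to finish before touching the halo values */
    MPI_Waitall(nreqs, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d: halo_above = %.1f, halo_below = %.1f, interior = %.3f\n",
           rank, halo_above, halo_below, interior);

    MPI_Finalize();
    return 0;
}
```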
- Tuesday, March 30.
- Here is the Pthread example of "1D Successive over-relaxation" from Chapter 6, pages 179-186, that uses a "split-phase barrier" to overlap computation with synchronization.
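- The idea of a split-phase barrier is that "arrive" and "wait for everyone" are separate calls, so a thread can do independent work between them instead of idling at the barrier. Here is a minimal mutex/condition-variable sketch of that idea. It is not the textbook's implementation, and the names barrier_arrive and barrier_complete are made up for this example.

```c
/* A split-phase barrier: arrival and completion are separate calls, so a
   thread can overlap independent computation with the synchronization.
   Compile with:  gcc splitphase.c -lpthread */
#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 4

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  all_arrived;
    int count;     /* how many threads have arrived in the current phase */
    int phase;     /* generation number, bumped when a phase completes   */
} split_barrier_t;

split_barrier_t b = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };

/* register arrival and return immediately; remember which phase we arrived in */
int barrier_arrive(split_barrier_t *bar)
{
    int my_phase;
    pthread_mutex_lock(&bar->lock);
    my_phase = bar->phase;
    if (++bar->count == NUM_THREADS) {     /* last thread completes the phase */
        bar->count = 0;
        bar->phase++;
        pthread_cond_broadcast(&bar->all_arrived);
    }
    pthread_mutex_unlock(&bar->lock);
    return my_phase;
}

/* block until every thread has arrived in the phase we arrived in */
void barrier_complete(split_barrier_t *bar, int my_phase)
{
    pthread_mutex_lock(&bar->lock);
    while (bar->phase == my_phase)
        pthread_cond_wait(&bar->all_arrived, &bar->lock);
    pthread_mutex_unlock(&bar->lock);
}

void *worker(void *arg)
{
    int id = *(int *)arg;
    int phase;

    /* ... work that the other threads must see before the next step ... */
    phase = barrier_arrive(&b);
    /* ... independent work that needs nothing from the other threads ... */
    barrier_complete(&b, phase);
    /* ... work that may depend on every thread's first step ... */

    printf("thread %d passed the split-phase barrier\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    int ids[NUM_THREADS], i;

    for (i = 0; i < NUM_THREADS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```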
- Thursday, March 25.
- If you are not very familiar with how C (and C++) store 2-dimensional arrays, then you should read the following example program.
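- The key fact is that C stores a 2-dimensional array as one contiguous block, row after row ("row-major" order), so a[i][j] is the element at offset i*COLS + j from the start of the array. Here is a tiny program you can run to see it.

```c
/* C stores a 2-dimensional array as one contiguous block, row by row
   ("row-major" order), so a[i][j] lives at offset i*COLS + j.  This
   matters for cache behavior and for passing sub-arrays to Pthreads
   and MPI routines. */
#include <stdio.h>

#define ROWS 3
#define COLS 4

int main(void)
{
    int a[ROWS][COLS];
    int *flat = &a[0][0];     /* view the same memory as a 1-D array */
    int i, j;

    for (i = 0; i < ROWS; i++)
        for (j = 0; j < COLS; j++)
            a[i][j] = 10 * i + j;

    /* walking the memory in order prints the rows one after another */
    for (i = 0; i < ROWS * COLS; i++)
        printf("%2d ", flat[i]);
    printf("\n");

    /* a[i][j] and flat[i*COLS + j] are the same element */
    printf("a[2][3] = %d, flat[2*COLS + 3] = %d\n", a[2][3], flat[2 * COLS + 3]);
    return 0;
}
```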
- Tuesday, March 23.
- Here are the Pthread and MPI implementations of "2D Successive over-relaxation."
- Thursday, March 11.
- See the homework page for your third homework assignment.
- For after spring break, read Chapter 3, pages 61-84, from our textbook.
- Here are some example MPI programs that use messaging as a synchronization primitive.
- Here is an interesting reference page from the MPI documentation.
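- The idea in those example programs is that a message can be sent purely for its arrival, not for its data: receiving it is the signal that it is safe to proceed. Here is a minimal sketch in which the processes print in rank order because each one waits for a "go" token from its predecessor (the file name and tag value are arbitrary).

```c
/* Using a message as a synchronization primitive: the message carries no
   interesting data, its arrival is the signal.  The ranks print in order
   because each one waits for a token from the previous rank.
   Compile and run with:  mpicc token.c -o token ; mpiexec -n 4 token */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, token = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* wait for the signal from the previous rank (rank 0 goes first) */
    if (rank > 0)
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    printf("rank %d has the token\n", rank);

    /* signal the next rank that it may proceed */
    if (rank < size - 1)
        MPI_Send(&token, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```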
- Tuesday, March 9.
- For Thursday, read pages 174-186 and pages 219-226 about "Successive Over-Relaxation".
- Here are some example MPI programs.
- I have added the MPICH2 version of MPI to the small development environment that we have been using in class. If you want to try writing, compiling, and running some simple MPI programs on Windows, then download the following zip file and try turning your home computer into a supercomputer cluster. (To compile and run MPI programs, read the file called MPI-README-for-CS590A.txt in the cs590a.zip file.)
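- A minimal first MPI program for checking that the installation works might look like the following. The mpicc and mpiexec commands shown in the comment are the usual MPICH2 ones; the exact commands for this environment are spelled out in the README mentioned above.

```c
/* A minimal first MPI program for testing the setup.  With MPICH2 the
   usual commands are something like:
       mpicc hello_mpi.c -o hello_mpi
       mpiexec -n 4 hello_mpi
   (see MPI-README-for-CS590A.txt for the exact steps in this environment) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                    /* start up MPI              */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* which process am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* how many processes total? */
    MPI_Get_processor_name(name, &namelen);    /* which machine am I on?    */

    printf("hello from process %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                            /* shut down MPI             */
    return 0;
}
```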
- Thursday, March 4.
- If you have already downloaded hw2.zip, then you should download it again. I fixed a couple of bugs.
- The following zip file contains a collection of shortcuts to definitions of important words and concepts from the study of concurrency. You can use this collection as a kind of outline and study guide for concurrency.
- Here is the textbook's MPI version of the count3's program.
- Tuesday, March 2.
- See the homework page for your second homework assignment.
- Read Chapter 7, pages 202-218 (about MPI), from our textbook, "Principles of Parallel Programming".
- Here are a couple of notes about the Producer/Consumer problem.
- Here are semaphore solutions to the Readers & Writers problem.
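- For reference, here is a minimal sketch of the classic "readers preference" semaphore solution, in the style of The Little Book of Semaphores: the first reader in locks writers out, and the last reader out lets them back in. This is a from-scratch sketch, not the posted solutions.

```c
/* Readers & Writers with semaphores ("readers preference"): the first
   reader locks writers out, the last reader lets them back in.
   Compile with:  gcc rw.c -lpthread */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;          /* protects reader_count                    */
sem_t room_empty;     /* held by a writer, or by the first reader */
int   reader_count = 0;
int   shared_data  = 0;

void *reader(void *arg)
{
    sem_wait(&mutex);
    if (++reader_count == 1)        /* first reader in: lock out writers */
        sem_wait(&room_empty);
    sem_post(&mutex);

    printf("reader %ld sees %d\n", (long)arg, shared_data);   /* read */

    sem_wait(&mutex);
    if (--reader_count == 0)        /* last reader out: let writers in   */
        sem_post(&room_empty);
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&room_empty);          /* wait for exclusive access */
    shared_data++;                  /* write                     */
    printf("writer %ld wrote %d\n", (long)arg, shared_data);
    sem_post(&room_empty);
    return NULL;
}

int main(void)
{
    pthread_t r[3], w;
    long i;

    sem_init(&mutex, 0, 1);
    sem_init(&room_empty, 0, 1);

    pthread_create(&w, NULL, writer, (void *)1L);
    for (i = 0; i < 3; i++)
        pthread_create(&r[i], NULL, reader, (void *)i);

    for (i = 0; i < 3; i++)
        pthread_join(r[i], NULL);
    pthread_join(w, NULL);
    return 0;
}
```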
- Thursday, February 25.
- I have added the Pthreads-win32 library to the small development environment that we have been using in class. If you want to try writing, compiling, and running some pthreads examples on Windows, then download the following zip file and try out this GCC based environment.
- You all now have accounts on the Falcon cluster here at Purdue Calumet. Your user name is the same as your career account user name. I'll give you your initial password in class (or send me an email). You must change your password as soon as you first log in (type passwd at the command prompt). Please do this soon. You access this cluster by using SSH to log in to the "head node", falcon.calumet.purdue.edu. The best way to use SSH on Windows is to download a copy of PuTTY.
- Tuesday, February 23.
- Read about the "Readers & Writers" problem in both our textbook (pages 168-171) and in the The Little Book of Semaphores (pages 71-85).
- Here is a good reference about threads, mutexes, semaphores, and condition variables. Look at the seven chapters on Concurrency from these Notes written by Remzi H. Arpaci-Dusseau for an operating systems course at the University of Wisconsin.
- Here is another "producer-consumer" kind of example written using condition variables.
- Thursday, February 18.
- See the homework page for your first homework assignment.
- Here is a producer-consumer example written using Microsoft's implementation of condition variables.
- Tuesday, February 16.
- Here are two C programs that demonstrate cache ping-ponging. The demos are written in both Pthreads and Win32 threads.
- Here is a collection of C programs that demonstrates the producer-consumer (bounded buffer) problem.
- If you are interested in reading more about semaphores and synchronization, then you should download and look at the following book.
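- The cache ping-ponging demos above come down to false sharing: two threads update different counters that happen to live on the same cache line, so the line bounces back and forth between the cores. Here is a minimal sketch that times adjacent counters against padded ones; the 64-byte padding assumes a typical cache line size, and the timing uses gettimeofday, which may need replacing on some Windows setups.

```c
/* Cache ping-ponging (false sharing): adjacent counters share a cache
   line, padded counters do not.  Compare the two reported times on a
   multicore machine.  Compile with:  gcc pingpong.c -lpthread */
#include <stdio.h>
#include <pthread.h>
#include <sys/time.h>

#define ITERATIONS 100000000

long shared_counters[2];                                   /* same cache line  */
struct { long value; char pad[64]; } padded_counters[2];   /* far enough apart */

double wall_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

void *bump_shared(void *arg)
{
    long id = (long)arg, i;
    for (i = 0; i < ITERATIONS; i++)
        shared_counters[id]++;
    return NULL;
}

void *bump_padded(void *arg)
{
    long id = (long)arg, i;
    for (i = 0; i < ITERATIONS; i++)
        padded_counters[id].value++;
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    long i;
    double start;

    start = wall_seconds();
    for (i = 0; i < 2; i++) pthread_create(&t[i], NULL, bump_shared, (void *)i);
    for (i = 0; i < 2; i++) pthread_join(t[i], NULL);
    printf("adjacent counters: %.2f seconds\n", wall_seconds() - start);

    start = wall_seconds();
    for (i = 0; i < 2; i++) pthread_create(&t[i], NULL, bump_padded, (void *)i);
    for (i = 0; i < 2; i++) pthread_join(t[i], NULL);
    printf("padded counters:   %.2f seconds\n", wall_seconds() - start);

    return 0;
}
```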
- Thursday, February 11.
- Here are two collections of C programs that demonstrate the idea of a race condition. The demos are written in both Pthreads and Win32 threads.
- The following zip file contains a small Windows development environment built around the MinGW version of the GCC compiler.
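- The race-condition demos boil down to the following: two threads increment a shared counter, and because counter++ is really a load, an add, and a store, interleaved updates get lost unless a mutex makes the increment atomic. A minimal sketch:

```c
/* The essence of a race condition: two threads increment a shared counter.
   Without the mutex the final value is usually less than expected, because
   "counter++" is a load, an add, and a store that can interleave.
   Compile with:  gcc race.c -lpthread */
#include <stdio.h>
#include <pthread.h>

#define INCREMENTS 1000000

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *unsafe_worker(void *arg)
{
    long i;
    for (i = 0; i < INCREMENTS; i++)
        counter++;                      /* unprotected: lost updates possible */
    return NULL;
}

void *safe_worker(void *arg)
{
    long i;
    for (i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&lock);      /* one thread at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, unsafe_worker, NULL);
    pthread_create(&t2, NULL, unsafe_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("without mutex: %ld (expected %d)\n", counter, 2 * INCREMENTS);

    counter = 0;
    pthread_create(&t1, NULL, safe_worker, NULL);
    pthread_create(&t2, NULL, safe_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("with mutex:    %ld (expected %d)\n", counter, 2 * INCREMENTS);

    return 0;
}
```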
- Tuesday, February 9.
- Read Chapter 6, pages 145-167 (about Pthreads), from our textbook, "Principles of Parallel Programming".
- Here are two collections of C programs that demonstrate the idea of what a thread is. The demos are written in both Pthreads and Win32 threads.
- Here are two other references about Pthreads.
- Here are two links to the Windows thread API's.
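- The smallest useful Pthreads program is just "create some threads, each runs a function, then join them"; the Win32 versions do the same thing with CreateThread and WaitForMultipleObjects. Here is a Pthreads sketch.

```c
/* About the smallest possible Pthreads program: create a few threads that
   each run the same function, then wait for them all to finish.
   Compile with:  gcc threads.c -lpthread */
#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 4

void *say_hello(void *arg)
{
    long id = (long)arg;
    printf("hello from thread %ld\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    long i;

    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, say_hello, (void *)i);

    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);      /* wait for each thread to finish */

    printf("all threads are done\n");
    return 0;
}
```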
- Thursday, February 4.
- For next week, read Chapter 6, pages 145-167, from our textbook, "Principles of Parallel Programming".
- Tuesday, February 2.
- For today and Thursday, read Chapter 2 from our textbook, "Principles of Parallel Programming".
- Here is a page that describes the parallel programming systems that are available at Purdue West Lafayette.
- Here are some links with more information about "parallel architectures".
- Here are some links with information about the architecture of computer memory.
- Tuesday, January 19.
- For today and Thursday, read Chapter 1 from our textbook, "Principles of Parallel Programming".
- If you do not yet have a copy of the textbook, you can download a copy of Chapter 1 from the publisher's web site using the following link.
- The following links are to two fairly famous articles that spell out the reasons why parallel programming has become such an essential (yet difficult) tool for software developers. They were written by Herb Sutter.