This project analyzes and compares the performance of the OpenMPI Allgather
collective communication operation with a custom implementation based on the
All-to-All communication pattern. The Allgather
operation is commonly used in parallel computing to gather data from all processes in a communicator and distribute the combined data back to every process.
The objective is to evaluate the efficiency, scalability, and overhead of both approaches in various scenarios, such as different message sizes, numbers of processes, and communication patterns. By conducting extensive experiments on MSstate's supercomputer "Shadow," we aim to gain insight into the strengths and weaknesses of each method and to provide recommendations for choosing the appropriate approach based on specific requirements and constraints.
-
Connect to MSstate's Supercomputer "Shadow" using your preferred SSH client:
ssh -Y shadow-login
-
Load the required modules for compiling and executing MPI programs:
module load openmpi slurm
-
Copy the project files from your local machine to the "Shadow" cluster:
scp -r project-directory shadow-login:~/project-directory
Then, once logged in to the cluster, change to the project directory:
cd project-directory
-
Compile the project:
mpicc -o executable_name source_file.c
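For reference, source_file.c might resemble the following sketch, which times MPI_Allgather against a naive custom all-to-all exchange built from MPI_Sendrecv. All names and sizes here are illustrative assumptions, not the project's actual code; compiling requires an MPI installation (hence mpicc).

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 1024;  /* elements per process (assumed) */
    int *send = malloc(chunk * sizeof(int));
    int *recv = malloc((size_t)size * chunk * sizeof(int));
    for (int i = 0; i < chunk; i++) send[i] = rank;

    /* Library collective. */
    double t0 = MPI_Wtime();
    MPI_Allgather(send, chunk, MPI_INT, recv, chunk, MPI_INT, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    /* Custom all-to-all style version: each pair exchanges chunks. */
    double t2 = MPI_Wtime();
    for (int p = 0; p < size; p++) {
        if (p == rank)
            memcpy(recv + (size_t)p * chunk, send, chunk * sizeof(int));
        else
            MPI_Sendrecv(send, chunk, MPI_INT, p, 0,
                         recv + (size_t)p * chunk, chunk, MPI_INT, p, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    double t3 = MPI_Wtime();

    if (rank == 0)
        printf("MPI_Allgather: %g s, custom: %g s\n", t1 - t0, t3 - t2);

    free(send);
    free(recv);
    MPI_Finalize();
    return 0;
}
```

MPI_Sendrecv is used so that each pairwise exchange both sends and receives in one call, avoiding the deadlocks a naive blocking send/receive ordering could cause.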
-
Submit the job using Slurm:
sbatch job_script.sh
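A minimal job script might look like the following sketch; the node count, tasks per node, time limit, and file names are assumptions for illustration, not values from the project:

```shell
#!/bin/bash
#SBATCH --job-name=allgather-test   # job name shown by squeue
#SBATCH --nodes=2                   # number of nodes (assumed)
#SBATCH --ntasks-per-node=8         # MPI ranks per node (assumed)
#SBATCH --time=00:10:00             # wall-clock limit
#SBATCH --output=allgather-%j.out   # output file (%j expands to the job id)

module load openmpi slurm

# Launch the compiled executable across all allocated tasks.
mpirun ./executable_name
```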
-
Monitor the status of your job:
squeue -u username
Once the job is completed, you will find the output in the specified file.
This project was created as a class project for Mississippi State University CSE 4163 Design of Parallel Algorithms Class. All rights are reserved. Attribution:
- Dr. Ed Luke: This project was created by Dr. Ed Luke for students to complete as part of the course.
Unauthorized copying, modification, or distribution of this project is strictly prohibited. For educational use only.