...
To submit an MPI job on the cluster, the user must secure-copy the .mpi executable across the cluster nodes.
In the following example, we prepared a C program that calculates the value of pi and compiled it with:
module load openmpi
mpicc example.c -o picalc.mpi
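For reference, here is a minimal sketch of what example.c could look like; the exact source is not reproduced in this guide, so this version simply assumes the standard numerical-integration estimate of pi distributed over MPI ranks:

/* example.c - minimal sketch (assumed, not the guide's original source).
 * Estimates pi by integrating 4/(1+x^2) over [0,1] across MPI ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    const long n = 100000000;          /* number of integration intervals */
    int rank, size;
    double local_sum = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles every size-th interval, starting at its own rank. */
    double h = 1.0 / (double)n;
    for (long i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local_sum += 4.0 / (1.0 + x * x);
    }
    local_sum *= h;

    /* Combine the partial sums on rank 0 and print the result. */
    MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}

Once compiled, the job is described by a batch script such as the following: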
#!/bin/bash
#
#SBATCH --job-name=test_mpi_picalc
#SBATCH --output=res_picalc.txt
#SBATCH --nodelist=...      # or use --nodes=...
#SBATCH --ntasks=8
#SBATCH --time=5:00
#SBATCH --mem-per-cpu=1000

srun picalc.mpi
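Assuming the script above is saved as, for example, picalc.sbatch (the file name here is just a placeholder), it can be submitted from the login node with:

sbatch picalc.sbatch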
If the .mpi file is not available on the desired compute nodes, which will be the most frequent scenario if you save your files in a custom path, the computation will fail. This happens because Slurm does not autonomously take responsibility for transferring files to the compute nodes.
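One possible workaround, sketched here under the assumption that /tmp is writable on the compute nodes (this guide does not prescribe it), is to have the batch script broadcast the executable to all allocated nodes with sbcast before launching the tasks:

#!/bin/bash
#
#SBATCH --job-name=test_mpi_picalc
#SBATCH --output=res_picalc.txt
#SBATCH --ntasks=8
#SBATCH --time=5:00
#SBATCH --mem-per-cpu=1000

# Copy the executable from the submission directory to node-local storage
# on every allocated node, then run that local copy.
sbcast picalc.mpi /tmp/picalc.mpi
srun /tmp/picalc.mpi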
...