
Update installation_HPC.md
as it doesn't always work with MPItrampoline
boriskaus authored Nov 15, 2023
1 parent 98f1322 commit 9985025
docs/src/man/installation_HPC.md: 4 additions & 1 deletion
@@ -3,6 +3,9 @@
Installing LaMEM on high-performance computing (HPC) systems can be complicated, because you will have to compile PETSc with the correct dependencies for that system.
The reason is that HPC systems use MPI libraries that are specifically tailored/compiled for that particular machine.

> Warning: the explanation below is still somewhat experimental and may not work on your system.
> The best approach to running LaMEM on large HPC systems remains to install the correct version of PETSc using the locally recommended MPI libraries, and to build the matching version of LaMEM against it. You can still use LaMEM.jl to save the input setup to file for the correct number of processors; the locally generated `*.dat` file will still work (see the sketch below).

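As a rough illustration of that fallback workflow, the sketch below prepares the `*.dat` input locally with LaMEM.jl and leaves the actual run to the cluster-compiled binary. The constructor arguments and the `write_LaMEM_inputFile` helper name are assumptions here and may differ from the LaMEM.jl version you have installed.

```julia
# Hypothetical sketch: build the model setup locally with LaMEM.jl and write the
# LaMEM input file; the solver itself is then run with the LaMEM executable that
# was compiled on the HPC system against its native PETSc/MPI stack.
using LaMEM

# Define a small model (grid size and extent are placeholder values)
model = Model(Grid(nel=(64, 64, 32), x=[-1.0, 1.0], y=[-1.0, 1.0], z=[-1.0, 0.0]))

# Write the *.dat parameter file (assumed helper name)
write_LaMEM_inputFile(model, "setup.dat")

# On the cluster, the locally compiled LaMEM binary then reads this file, e.g.:
#   mpiexec -n 64 ./LaMEM -ParamFile setup.dat
```
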
Luckily there is a solution, thanks to the great work of `@eschnett` and colleagues, who developed [MPItrampoline](https://github.com/eschnett/MPItrampoline), an intermediate layer between the HPC-system-specific MPI libraries and the precompiled `LaMEM` binaries.

It essentially consists of two steps:
@@ -61,4 +64,4 @@
```julia
julia> LaMEM.LaMEM_jll.host_platform
Linux x86_64 {cxxstring_abi=cxx11, julia_version=1.8.1, libc=glibc, libgfortran_version=5.0.0, mpi=mpitrampoline}
```
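
If `host_platform` does not yet report `mpi=mpitrampoline`, the usual way to switch Julia's MPI backend is via MPIPreferences. The sketch below assumes the standard MPIPreferences mechanism and that an MPIwrapper library has already been built against the system MPI; the library path is a placeholder.

```julia
# Sketch (assumes the MPIPreferences package is installed in this environment):
# select the MPItrampoline ABI so precompiled binaries talk to the system MPI
# through MPIwrapper.
using MPIPreferences

# Record the preference; MPI.jl and MPI-dependent JLLs pick it up after a restart
MPIPreferences.use_jll_binary("MPItrampoline_jll")

# At run time MPItrampoline must find the MPIwrapper library built against the
# system MPI; this is typically exported in the job script (placeholder path):
#   export MPITRAMPOLINE_LIB=/path/to/libmpiwrapper.so
```

After restarting Julia, `LaMEM.LaMEM_jll.host_platform` should then show `mpi=mpitrampoline`, as above.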

At this stage, the precompiled version of `LaMEM` should be usable on that system.
