Cupla is a simple user interface for the platform-independent parallel kernel acceleration library alpaka. It follows a concept similar to the NVIDIA CUDA(TM) API by providing a software layer to manage accelerator devices. alpaka is used as the backend for cupla.
Please keep in mind that a first, "find & replace" port from CUDA to cupla (x86) will result in rather poor performance. To get decent performance on x86 systems you need to add the alpaka element level to your kernels.
In other words: add some tiling to your CUDA kernels by letting each thread compute a fixed number of elements (N = 4..16) instead of just one element per thread. If you also make the number of elements in your tiling a compile-time constant, your original CUDA code (N = 1) keeps the very same performance while gaining single-source performance portability for, e.g., x86 targets.
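As a rough sketch only (this is plain CUDA, not cupla's actual API; the kernel name, template parameter, and launch configuration below are illustrative assumptions), such element-level tiling could look like this:

```cuda
// Hypothetical kernel: each thread processes T_elem consecutive elements
// instead of a single one. With T_elem = 1 it behaves like a classic
// one-element-per-thread CUDA kernel.
template< int T_elem >
__global__ void addVectors( int n, float const * a, float const * b, float * c )
{
    // First element handled by this thread.
    int const first = ( blockIdx.x * blockDim.x + threadIdx.x ) * T_elem;

    // The element count is a compile-time constant, so the compiler can
    // fully unroll (and, on x86 backends, vectorize) this loop.
    #pragma unroll
    for( int i = 0; i < T_elem; ++i )
    {
        int const idx = first + i;
        if( idx < n )
            c[ idx ] = a[ idx ] + b[ idx ];
    }
}

// Launch with the grid shrunk by the element factor, e.g. for N = 8:
//   int const elemPerThread  = 8;
//   int const threadsPerBlock = 256;
//   int const blocks = ( n + threadsPerBlock * elemPerThread - 1 )
//                    / ( threadsPerBlock * elemPerThread );
//   addVectors< 8 ><<< blocks, threadsPerBlock >>>( n, a, b, c );
```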
cupla is licensed under LGPLv3 or later.
For more information see LICENSE.md.
- cmake 3.3.0
- alpaka
  - is loaded as a git submodule within cupla (see INSTALL.md)
  - for more information please read README.md
- See our notes in INSTALL.md.
- Check out the guide on how to port your project.
- Check out the tuning guide for a further step towards performance-portable code.
- Rene Widera
- Axel Huebl
- Dr. Michael Bussmann