🎉 Modern CUDA Learn Notes with PyTorch for Beginners: fp32/tf32, fp16/bf16, fp8/int8, Tensor/CUDA Cores, flash_attn, rope, embedding, sgemm, sgemv, hgemm, hgemv, warp/block reduce, dot prod, elementwise, sigmoid, relu, gelu, softmax, layernorm, rmsnorm, hist and some CUDA optimization techniques (pack LDST, cp.async, warp gemv, sliced_k/split_k/pipeline gemm, bank conflicts reduce, WMMA/MMA, block/warp swizzle, etc).
- / = not supported yet.
- ✔️ = known to work and already supported.
- ❔ = planned, but not coming soon; maybe a few weeks away.
- workflow: custom CUDA kernel implementation -> PyTorch Python binding -> run tests.
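The workflow above can be sketched as follows. This is a hypothetical minimal example (the kernel and function names `relu_kernel`/`relu` are illustrative, not taken from this repo): a custom elementwise ReLU kernel, exposed to Python through PyTorch's C++ extension mechanism.

```cuda
// Minimal sketch of the workflow: custom CUDA kernel -> PyTorch binding.
// Build with torch.utils.cpp_extension.load (names here are illustrative).
#include <torch/extension.h>

// Step 1: the custom CUDA kernel (elementwise ReLU).
__global__ void relu_kernel(const float* x, float* y, int n) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < n) y[idx] = fmaxf(x[idx], 0.0f);
}

// Step 2: a C++ wrapper that launches the kernel on a PyTorch tensor.
torch::Tensor relu(torch::Tensor x) {
  auto y = torch::empty_like(x);
  int n = x.numel();
  int threads = 256;
  int blocks = (n + threads - 1) / threads;
  relu_kernel<<<blocks, threads>>>(
      x.data_ptr<float>(), y.data_ptr<float>(), n);
  return y;
}

// Step 3: the Python binding; after building, call it from Python and
// compare against torch.relu in a test.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("relu", &relu, "elementwise ReLU (CUDA)");
}
```

From Python, such an extension is typically compiled and loaded with `torch.utils.cpp_extension.load`, then tested by comparing its output against the built-in `torch.relu`.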
- How to contribute? Please check 🌤🌤Kernel Trace & Goals & Code Style & Acknowledgements🎉🎉
👉TIP: * means the kernel uses Tensor Cores (MMA/WMMA); otherwise it uses CUDA Cores by default.
💡Note: The articles written by these experts are truly excellent, and I have learned a lot from them. Everyone is welcome to open a PR recommending more great articles!
GNU General Public License v3.0
You are welcome to 🌟👆🏻star this repo & submit a PR!