Interesting papers and links

Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU: https://huggingface.co/blog/trl-peft