Run a fast ChatGPT-like model locally on your device. The screencast below is not sped up and is running on an M2 MacBook Air with 4GB of weights.
This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tune of the base model to obey instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface.
```sh
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat
./chat
```
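`make chat` assumes a C/C++ compiler toolchain is installed. On macOS (as in the M2 MacBook Air demo above), the Xcode command line tools provide one:

```sh
# macOS only: install the compiler toolchain if it's missing
xcode-select --install
```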
You can download the weights for `ggml-alpaca-7b-q4.bin` with BitTorrent:
magnet: magnet:?xt=urn:btih:5aaceaec63b03e51a98f04fd5c42320b2a033010&dn=ggml-alpaca-7b-q4.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce
torrent: https://btcache.me/torrent/5AACEAEC63B03E51A98F04FD5C42320B2A033010
torrent: https://torrage.info/torrent.php?h=5aaceaec63b03e51a98f04fd5c42320b2a033010
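If you'd rather stay on the command line, a client such as aria2 can fetch the magnet link directly (aria2 is an assumption here; any BitTorrent client works):

```sh
# sketch: download the 7B weights with aria2, exiting once the download completes
aria2c --seed-time=0 'magnet:?xt=urn:btih:5aaceaec63b03e51a98f04fd5c42320b2a033010&dn=ggml-alpaca-7b-q4.bin'
```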
Alternatively, you can download them with IPFS.
```sh
# any of these will work
wget -O ggml-alpaca-7b-q4.bin -c https://gateway.estuary.tech/gw/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
wget -O ggml-alpaca-7b-q4.bin -c https://ipfs.io/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
wget -O ggml-alpaca-7b-q4.bin -c https://cloudflare-ipfs.com/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
```
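The `-c` flag lets `wget` resume an interrupted download. Once it finishes, a quick sanity check (per the note above, the quantized 7B weights are about 4GB):

```sh
# the file should be roughly 4GB; much smaller means a truncated download
ls -lh ggml-alpaca-7b-q4.bin
```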
Save the `ggml-alpaca-7b-q4.bin` file in the same directory as your `./chat` executable.
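If you keep the weights somewhere else, you can point `chat` at them explicitly with the `-m` flag (the same flag used for the 13B model below):

```sh
# load weights from an arbitrary path instead of the current directory
./chat -m /path/to/ggml-alpaca-7b-q4.bin
```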
The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint with a modified script and then quantized with llama.cpp the regular way.
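For reference, the "regular way" with llama.cpp at the time looked roughly like this; a sketch, assuming a LLaMA-format checkpoint in `models/7B/` (the paths and the trailing `2` type code for 4-bit quantization follow early llama.cpp convention):

```sh
# sketch: convert a PyTorch checkpoint to f16 GGML, then quantize to 4-bit
python3 convert-pth-to-ggml.py models/7B/ 1
./quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin 2
```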
- Get the `chat.exe` binary:

  a) Download a prebuilt release and extract it, or

  b) Build it yourself:
  - Download and install CMake: https://cmake.org/download/
  - Download and install `git`. If you've never used git before, consider a GUI client like https://desktop.github.com/
  - Clone this repo using your git client of choice (for GitHub Desktop, go to File -> Clone repository -> From URL and paste https://github.com/antimatter15/alpaca.cpp in as the URL)
  - Open a Windows Terminal inside the folder you cloned the repository to
  - Run the following commands one by one:

    ```
    mkdir build
    cd build
    cmake ..
    cmake --build . --config Release
    ```

- Download the weights via any of the links in "Get started" above, and save the file as `ggml-alpaca-7b-q4.bin` in the main Alpaca directory.
- In the terminal window, run this command:

  ```
  .\chat.exe
  ```

- (You can add `--help` to view the launch options; see the example after this list)
- You can now type to the AI in the terminal and it will reply. Enjoy!
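For example, to load the weights from an explicit path on Windows (the `-m` flag is the same one used for the 13B model below; run `--help` for the full set of options on your build):

```
.\chat.exe -m ggml-alpaca-7b-q4.bin
```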
TODO
You can download the weights for `ggml-alpaca-13b-q4.bin` with BitTorrent:
magnet: magnet:?xt=urn:btih:053b3d54d2e77ff020ebddf51dad681f2a651071&dn=ggml-alpaca-13b-q4.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce
torrent: https://btcache.me/torrent/053B3D54D2E77FF020EBDDF51DAD681F2A651071
torrent: https://torrage.info/torrent.php?h=053b3d54d2e77ff020ebddf51dad681f2a651071
```sh
./chat -m ggml-alpaca-13b-q4.bin
```
This combines Facebook's LLaMA, Stanford Alpaca, alpaca-lora and corresponding weights by Eric Wang (which uses Jason Phang's implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's Interactive Mode for llama.cpp. Inspired by Simon Willison's getting started guide for LLaMA.
Note that the model weights are only to be used for research purposes, as they are derivative of LLaMA and use the published instruction data from the Stanford Alpaca project, which was generated with OpenAI's models; OpenAI's terms disallow using its outputs to train competing models.