Inspired by synochat, chatgpt and ollama.
The goal is to run an LLM 100% locally and integrate it as a chatbot with Synology Chat.
Install Ollama and download llama3:8b on your Mac:
```sh
ollama pull llama3:8b
ollama serve
```
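To confirm the server is up, you can query Ollama's local HTTP API, which listens on 127.0.0.1:11434 by default. A quick sanity check:

```python
import requests

# Ollama's HTTP API listens on 127.0.0.1:11434 by default.
OLLAMA_URL = "http://127.0.0.1:11434"

# The root endpoint answers "Ollama is running" when the server is up.
print(requests.get(OLLAMA_URL).text)

# Request a one-off, non-streaming completion to verify llama3:8b loads.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3:8b", "prompt": "Say hello", "stream": False},
)
print(resp.json()["response"])
```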
The app also needs your Synology Chat bot's token and incoming URL (host); set them as environment variables before running it:
```sh
export SYNOLOGY_TOKEN='...'
export SYNOLOGY_INCOMING_URL='...'
```
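As a reference for how these variables are consumed, here is a minimal sketch (not the app's actual code) that reads them and pushes a test message through the bot's incoming webhook. Synology Chat's incoming webhook expects a form field named `payload` holding a JSON string; passing the token as a query parameter is an assumption, so adjust if your incoming URL already embeds it:

```python
import json
import os

import requests

# Both variables must be exported before the app starts.
token = os.environ["SYNOLOGY_TOKEN"]
incoming_url = os.environ["SYNOLOGY_INCOMING_URL"]

def send_to_chat(text: str) -> None:
    # Synology Chat reads a form field "payload" containing a JSON string.
    resp = requests.post(
        incoming_url,
        params={"token": token},  # assumption: token travels as a query param
        data={"payload": json.dumps({"text": text})},
    )
    resp.raise_for_status()

send_to_chat("synochatgpt is online")
```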
If needed, disable the proxy for localhost HTTP access:

```sh
export NO_PROXY=127.0.0.1
```
Install the Python dependencies and start the bot:

```sh
pip install -r requirements.txt
python synochatgpt.py
```
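For orientation, the overall request flow looks roughly like the sketch below (illustrative only, not the synochatgpt source): Synology Chat's outgoing webhook delivers the user's message to the bot, the bot queries the local Ollama API, and the answer is pushed back through the incoming webhook. The `/webhook` route and port 5000 are assumptions.

```python
import json
import os

import requests
from flask import Flask, request

app = Flask(__name__)

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"
TOKEN = os.environ["SYNOLOGY_TOKEN"]
INCOMING_URL = os.environ["SYNOLOGY_INCOMING_URL"]

@app.route("/webhook", methods=["POST"])
def webhook():
    # The outgoing webhook posts form data; the user's message is in "text".
    question = request.form.get("text", "")

    # Ask the local llama3:8b model for a non-streaming completion.
    answer = requests.post(
        OLLAMA_URL,
        json={"model": "llama3:8b", "prompt": question, "stream": False},
    ).json()["response"]

    # Push the answer back through the bot's incoming webhook.
    requests.post(
        INCOMING_URL,
        params={"token": TOKEN},
        data={"payload": json.dumps({"text": answer})},
    )
    return "", 200

if __name__ == "__main__":
    app.run(port=5000)
```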
TODO:

- Fine-tuning
- Docker
- RAG