Before you get started with Local RAG, ensure you have:
- A local Ollama instance
- At least one model available within Ollama (`llama3:8b` or `llama2:7b` are good starter models; see the pull example below)
- Python 3.10+
WARNING: This application is untested on Windows Subsystem for Linux. For best results, use a Linux host if possible.
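If you do not already have a model pulled, the model prerequisite can be satisfied from the Ollama CLI. The commands below are a quick sketch; `llama3:8b` is just one of the suggested starter models, and 11434 is Ollama's default port, so adjust if your instance listens elsewhere.

```bash
# pull a starter model into your local Ollama instance
ollama pull llama3:8b

# confirm the model is available
ollama list

# optionally verify the Ollama API is reachable (11434 is the default port)
curl http://localhost:11434/api/tags
```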
To run Local RAG directly on your host, install dependencies with Pipenv and launch the Streamlit app:

```bash
pip install pipenv && pipenv install
pipenv shell && streamlit run main.py
```
Alternatively, to run Local RAG with Docker:

```bash
docker compose up -d
```
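Once the stack is up, standard Docker Compose commands can confirm the container started and surface any errors; the service name shown comes from the project's docker-compose.yml.

```bash
# show the status of the services defined in docker-compose.yml
docker compose ps

# follow the container logs to confirm the app started cleanly
docker compose logs -f
```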
If you are running Ollama as a service, you may need to add an additional configuration to your docker-compose.yml file:
```yaml
extra_hosts:
  - 'host.docker.internal:host-gateway'
```
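For reference, here is a rough sketch of where that snippet sits inside a compose file. The service name, build context, and port mapping below are illustrative assumptions only; the project's actual docker-compose.yml is the source of truth.

```yaml
# illustrative sketch -- not the project's actual docker-compose.yml
services:
  local-rag:                    # hypothetical service name
    build: .
    ports:
      - '8501:8501'             # Streamlit's default port; adjust if configured differently
    extra_hosts:
      # allows the container to reach an Ollama service running on the Docker host
      - 'host.docker.internal:host-gateway'
```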