Merge pull request #62 from mborne/open-webui-proxy
ollama / open-webui - split and add proxy support
mborne authored Jul 17, 2024
2 parents 62e99b0 + 29a26a3 commit 4310698
Showing 9 changed files with 114 additions and 254 deletions.
1 change: 1 addition & 0 deletions ollama/.env
@@ -0,0 +1 @@
DEVBOX_HOSTNAME=dev.localhost
55 changes: 55 additions & 0 deletions ollama/README.md
@@ -0,0 +1,55 @@
# ollama

Container running [Ollama](https://hub.docker.com/r/ollama/ollama)

## Usage with docker

* Ensure that GPU support is enabled in Docker (or adapt [docker-compose.yaml](docker-compose.yaml)):

```bash
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
```

* To use the Ollama CLI:

```bash
# pull models from https://ollama.com/library
docker compose exec ollama ollama pull llama3
docker compose exec ollama ollama pull gemma2
# run a model interactively
docker compose exec ollama ollama run llama3
```
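
To see which models are already available locally, the CLI also offers a `list` subcommand (shown here as a quick sanity check):

```bash
# list the models already downloaded into the ollama volume
docker compose exec ollama ollama list
```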

* To use the [Ollama API](https://github.com/ollama/ollama/blob/main/docs/api.md#api):

```bash
# list models
curl -sS http://localhost:11434/api/tags | jq -r '.models[].name'

# pull model from https://ollama.com/library
curl http://localhost:11434/api/pull -d '{
"name": "llama3"
}'

# use model
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "Why is the sky blue?"
}'
```
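
By default `/api/generate` streams the answer token by token as separate JSON objects. For scripting it is often handier to request a single JSON document; the sketch below uses the documented `stream` parameter and extracts the text with `jq`:

```bash
# disable streaming to get one JSON document, then extract the answer
curl -sS http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}' | jq -r '.response'
```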

* To create a **custom model** from an [Ollama Modelfile](https://github.com/ollama/ollama/tree/main?tab=readme-ov-file#customize-a-prompt), a sample [models/geoassistant](models/geoassistant/README.md) is available:

```bash
docker compose exec ollama /bin/bash
ollama create geoassistant -f /models/geoassistant/Modelfile
ollama run geoassistant
# Do you know the most visited museums in Paris?
```
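
For orientation, a Modelfile usually combines a base model, sampling parameters and a system prompt. The sketch below is illustrative only (the model name `geoassistant-demo` and the prompt text are made up; the real definition lives in [models/geoassistant](models/geoassistant/README.md)):

```bash
# inside the ollama container: write an illustrative Modelfile and build it
cat > /tmp/Modelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.7
SYSTEM "You are a geography assistant answering questions about places and museums."
EOF
ollama create geoassistant-demo -f /tmp/Modelfile
ollama run geoassistant-demo
```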

## Resources

* [ollama](https://github.com/ollama/ollama/tree/main?tab=readme-ov-file#ollama)
* [hub.docker.com - ollama/ollama](https://hub.docker.com/r/ollama/ollama)
* [ollama - API](https://github.com/ollama/ollama/blob/main/docs/api.md#api)
* [mborne/toolbox - cuda-toolkit](https://github.com/mborne/toolbox/tree/master/cuda-toolkit#ressources)
38 changes: 38 additions & 0 deletions ollama/docker-compose.yaml
@@ -0,0 +1,38 @@
services:
# https://hub.docker.com/r/ollama/ollama
ollama:
image: ollama/ollama:0.2.6
container_name: ollama
ports:
- 11434:11434
volumes:
- ollama:/root/.ollama
- ./models:/models:ro
environment:
#- OLLAMA_DEBUG=1
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY=0.0.0.0,ollama,${NO_PROXY:-127.0.0.1,localhost}
deploy:
resources:
reservations:
devices:
- driver: nvidia
capabilities: ["gpu"]
count: all
labels:
- "traefik.enable=true"
# https://ollama.dev.localhost
- "traefik.http.routers.ollama.rule=Host(`ollama.${DEVBOX_HOSTNAME}`)"
- "traefik.http.routers.ollama.service=ollama-service@docker"
- "traefik.http.services.ollama-service.loadbalancer.server.port=11434"
restart: unless-stopped

volumes:
ollama:
name: ollama

networks:
default:
name: devbox
external: true
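
Note that `HTTP_PROXY` and `HTTPS_PROXY` are declared without values, so Docker Compose forwards whatever is set in the host environment, while `NO_PROXY` keeps localhost and container-to-container traffic direct. A possible invocation behind a corporate proxy (the proxy URL is a placeholder):

```bash
# export the host proxy settings before starting the stack
export HTTP_PROXY=http://proxy.example.net:3128   # placeholder URL
export HTTPS_PROXY=$HTTP_PROXY
docker compose up -d

# model pulls from ollama.com now go through the proxy; local API
# calls stay direct thanks to the NO_PROXY entries
curl -sS http://localhost:11434/api/version
```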
1 change: 1 addition & 0 deletions open-webui/.env
@@ -2,3 +2,4 @@ DEVBOX_HOSTNAME=dev.localhost
WEBUI_AUTH=False
OPENAI_API_KEY=
PIPELINES_API_KEY=0p3n-w3bu!
OLLAMA_BASE_URL=http://ollama:11434
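
Since `OLLAMA_BASE_URL` now points at the `ollama` container over the shared `devbox` network, connectivity can be verified from inside the open-webui container (a sketch; it assumes `curl` is present in the open-webui image):

```bash
# confirm that open-webui can reach the ollama API by service name
docker compose exec open-webui curl -sS http://ollama:11434/api/version
```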
49 changes: 9 additions & 40 deletions open-webui/README.md
@@ -1,6 +1,10 @@
# Open WebUI

Containers running [Open WebUI](https://github.com/open-webui/open-webui?tab=readme-ov-file#open-webui-formerly-ollama-webui-) and [ollama](https://hub.docker.com/r/ollama/ollama) to get started with LLM locally.
Container running [Open WebUI](https://github.com/open-webui/open-webui?tab=readme-ov-file#open-webui-formerly-ollama-webui-) for [Ollama](../ollama/README.md).

## Requirements

* [ollama](../ollama/README.md)

## Usage with docker

@@ -12,49 +16,14 @@ docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

* Start: `docker compose up -d`
* Open https://open-webui.dev.localhost
* To use [ollama API](https://github.com/ollama/ollama/blob/main/docs/api.md#api) :

```bash
# list models
curl -sS http://localhost:11434/api/tags | jq -r '.models[].name'

# pull model from https://ollama.com/library
curl http://localhost:11434/api/pull -d '{
"name": "llama3"
}'

# use model
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "Why is the sky blue?"
}'
```

## Custom model

To create custom model from [OLLAMA Modelfile](https://github.com/ollama/ollama/tree/main?tab=readme-ov-file#customize-a-prompt), a sample [models/geoassistant](models/geoassistant/README.md) is available :

```bash
docker compose exec ollama
ollama create geoassistant -f /models/geoassistant/Modelfile
ollama run geoassistant
# Do you know the most visited museums in Paris?
```


## Resources

* [Open WebUI - Getting Started](https://docs.openwebui.com/getting-started/)
* [mborne/toolbox - cuda-toolkit](https://github.com/mborne/toolbox/tree/master/cuda-toolkit#ressources)
* [Pipelines](https://docs.openwebui.com/pipelines) :
* https://docs.openwebui.com/pipelines/#-quick-start-with-docker
* https://ikasten.io/2024/06/03/getting-started-with-openwebui-pipelines/
* https://raw.githubusercontent.com/open-webui/pipelines/main/examples/filters/function_calling_filter_pipeline.py

[OLLAMA](https://github.com/ollama/ollama) :

* [ollama](https://github.com/ollama/ollama/tree/main?tab=readme-ov-file#ollama)
* [hub.docker.com - ollama/ollama](https://hub.docker.com/r/ollama/ollama)
* [ollama - API](https://github.com/ollama/ollama/blob/main/docs/api.md#api)

[Pipelines](https://docs.openwebui.com/pipelines) :

* https://docs.openwebui.com/pipelines/#-quick-start-with-docker
* https://ikasten.io/2024/06/03/getting-started-with-openwebui-pipelines/
* https://raw.githubusercontent.com/open-webui/pipelines/main/examples/filters/function_calling_filter_pipeline.py
43 changes: 10 additions & 33 deletions open-webui/docker-compose.yaml
@@ -8,20 +8,23 @@ services:
- 3000:8080
environment:
- WEBUI_AUTH=False
- OLLAMA_BASE_URL=http://ollama:11434
- OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
- GLOBAL_LOG_LEVEL="DEBUG"
# add pipelines
- OPENAI_API_BASE_URL=https://api.openai.com/v1;http://pipelines:9099
- OPENAI_API_KEYS=${OPENAI_API_KEY};${PIPELINES_API_KEY}
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY=ollama,pipelines,$NO_PROXY
volumes:
- open-webui:/app/backend/data
deploy:
resources:
reservations:
devices:
- driver: nvidia
capabilities: ["gpu"]
count: all
resources:
reservations:
devices:
- driver: nvidia
capabilities: ["gpu"]
count: all
labels:
- "traefik.enable=true"
# https://open-webui.dev.localhost
@@ -30,30 +33,6 @@ services:
- "traefik.http.services.open-webui-service.loadbalancer.server.port=8080"
restart: unless-stopped

# https://hub.docker.com/r/ollama/ollama
ollama:
image: ollama/ollama:latest
container_name: ollama
ports:
- 11434:11434
volumes:
- ollama:/root/.ollama
- ./models:/models:ro
deploy:
resources:
reservations:
devices:
- driver: nvidia
capabilities: ["gpu"]
count: all
labels:
- "traefik.enable=true"
# https://ollama.dev.localhost
- "traefik.http.routers.ollama.rule=Host(`ollama.${DEVBOX_HOSTNAME}`)"
- "traefik.http.routers.ollama.service=ollama-service@docker"
- "traefik.http.services.ollama-service.loadbalancer.server.port=11434"
restart: unless-stopped

# https://docs.openwebui.com/pipelines/#-quick-start-with-docker
# https://ikasten.io/2024/06/03/getting-started-with-openwebui-pipelines/
# https://raw.githubusercontent.com/open-webui/pipelines/main/examples/filters/function_calling_filter_pipeline.py
@@ -69,8 +48,6 @@ services:
restart: unless-stopped

volumes:
ollama:
name: ollama
open-webui:
name: open-webui
pipelines:
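
The semicolon-separated `OPENAI_API_BASE_URL` and `OPENAI_API_KEYS` lists appear to pair positionally, so `${PIPELINES_API_KEY}` authenticates the `http://pipelines:9099` endpoint. A quick check against the pipelines server (a sketch assuming port 9099 is published to the host and that the server exposes the OpenAI-compatible `/v1/models` route):

```bash
# list the pipelines visible through the OpenAI-compatible API
curl -sS http://localhost:9099/v1/models \
  -H "Authorization: Bearer 0p3n-w3bu!" | jq
```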
14 changes: 0 additions & 14 deletions open-webui/models/geoassistant/Modelfile

This file was deleted.

74 changes: 0 additions & 74 deletions open-webui/models/geoassistant/README.md

This file was deleted.
