
improve gpu requirements in readme

This commit is contained in:
Massaki Archambault 2024-11-13 20:40:51 -05:00
parent 938ab69322
commit 17439b079a
2 changed files with 23 additions and 22 deletions

View File

@@ -11,35 +11,36 @@ A quick prototype to self-host [Open WebUI](https://docs.openwebui.com/) backed
### Steps for NVIDIA GPU
-1. Make sure your drivers are up to date.
-2. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
-3. Clone the repo.
-4. Symlink the NVIDIA compose spec to select it. `ln -s docker-compose.nvidia.yml docker.compose.yml`
-5. Run `docker compose up`.
-6. Browse http://localhost:8080/
-7. Add a model and start chatting!
+1. Check if your GPU is supported: https://github.com/ollama/ollama/blob/main/docs/gpu.md#nvidia. You need CUDA compute capability 5.0+. For reference, the oldest cards I managed to run it on are a GeForce GTX 970Ti and a Quadro M4000 (both were quite slow, though).
+2. Make sure your drivers are up to date. If you are on Windows, update the drivers on the Windows host.
+3. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
+4. Clone the repo.
+5. Symlink the NVIDIA compose spec to select it: `ln -s docker-compose.nvidia.yml docker-compose.yml`
+6. Run `docker compose up`.
+7. Browse http://localhost:8080/
+8. Add a model and start chatting!
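The spec-selection step above can be sketched as a shell session. Note that `docker compose` looks for a file named `docker-compose.yml` (or `compose.yml`) by default, so the link name matters; the spec file is created as a stand-in here so the sketch is self-contained, while in the real repo it already exists.

```shell
# Stand-in for the spec that ships with the repo.
touch docker-compose.nvidia.yml

# Optional sanity check on a real machine (not run here):
# nvidia-smi --query-gpu=compute_cap --format=csv

# Select the NVIDIA spec by pointing the default compose file name at it.
ln -s docker-compose.nvidia.yml docker-compose.yml

readlink docker-compose.yml   # -> docker-compose.nvidia.yml
# docker compose up           # then browse http://localhost:8080/
```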
### Steps for AMD GPU
**Warning: AMD GPUs are *not* supported on Windows at the moment. Use Linux.**
-1. Make sure your drivers are up to date.
-2. Clone the repo.
-3. Symlink the AMD compose spec to select it. `ln -s docker-compose.amd.yml docker.compose.yml`
-4. Run `docker compose up`.
-5. Browse http://localhost:8080/
-6. Add a model and start chatting!
+1. Check if your GPU is supported: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon. It may be possible to run even with an unsupported GPU by setting the `HSA_OVERRIDE_GFX_VERSION` environment variable (I once managed to make it run on a 5700XT this way), but you are on your own. You can add this environment variable by editing the file `docker-compose.amd.yml`.
+2. Make sure your drivers are up to date.
+3. Clone the repo.
+4. Symlink the AMD compose spec to select it: `ln -s docker-compose.amd.yml docker-compose.yml`
+5. Run `docker compose up`.
+6. Browse http://localhost:8080/
+7. Add a model and start chatting!
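As a sketch of the override mentioned in step 1, the `environment:` block in `docker-compose.amd.yml` might look like the fragment below. The service name `ollama` and the value `10.3.0` are illustrative assumptions, not something this commit sets; `10.3.0` is the override commonly reported to work for RDNA1 cards such as the RX 5700XT.

```yaml
services:
  ollama:
    # ...image, devices, etc. as in the real spec...
    environment:
      # https://github.com/ROCm/ROCm/issues/2788#issuecomment-1915765846
      # Illustrative value only; unsupported configuration, you are on your own.
      HSA_OVERRIDE_GFX_VERSION: "10.3.0"
```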
### Steps for NO GPU (use CPU)
**Warning: This may be very slow depending on your CPU and may use a lot of RAM depending on the model.**
-1. Make sure your drivers are up to date.
-2. Clone the repo.
-3. Symlink the CPU compose spec to select it. `ln -s docker-compose.cpu.yml docker.compose.yml`
-4. Run `docker compose up`.
-5. Browse http://localhost:8080/
-6. Add a model and start chatting!
+1. Clone the repo.
+2. Symlink the CPU compose spec to select it: `ln -s docker-compose.cpu.yml docker-compose.yml`
+3. Run `docker compose up`.
+4. Browse http://localhost:8080/
+5. Add a model and start chatting!
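Switching between specs later is just re-pointing the symlink. A sketch, using stand-in files for the specs the repo ships:

```shell
# Stand-ins so the sketch is self-contained; the real repo provides these.
touch docker-compose.nvidia.yml docker-compose.cpu.yml

# Suppose the NVIDIA spec was selected earlier...
ln -s docker-compose.nvidia.yml docker-compose.yml

# ...then switch to the CPU spec: -f replaces the existing link,
# -n treats the existing link itself as the destination.
ln -sfn docker-compose.cpu.yml docker-compose.yml

readlink docker-compose.yml   # -> docker-compose.cpu.yml
```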
## Adding models

View File

@@ -19,9 +19,9 @@ services:
- SYS_PTRACE
security_opt:
- seccomp=unconfined
-# environment:
-#   # https://github.com/ROCm/ROCm/issues/2788#issuecomment-1915765846
-#   HSA_OVERRIDE_GFX_VERSION: 11.0.0
+environment:
+  # https://github.com/ROCm/ROCm/issues/2788#issuecomment-1915765846
+  HSA_OVERRIDE_GFX_VERSION:
# end of section for AMD GPU support
volumes: