Compare commits
5 Commits

7 changed files with 51 additions and 11 deletions

.env

@@ -1,3 +1,8 @@
# If set, HTTP_PROXY messes with inter-container communication in the deployment.
# Ollama downloads the models via HTTPS anyway, so it should be safe to unset it.
HTTP_PROXY=
http_proxy=
#=============================================================#
# LibreChat Configuration #
#=============================================================#
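Since the blank values are injected via the shared `.env` file, a quick way to confirm that no proxy variables leak into the containers is to inspect a running service's environment. A minimal check, assuming the `ollama` service name used in the compose files:

```bash
# Print any proxy-related variables inside the ollama container.
# Empty HTTP_PROXY / http_proxy values (or no output at all) mean the
# override from .env took effect.
docker compose exec ollama env | grep -i proxy
```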

README.md

@@ -1,6 +1,6 @@
# librechat-mistral
A quick prototype to self-host [LibreChat](https://github.com/danny-avila/LibreChat) with [Mistral](https://mistral.ai/news/announcing-mistral-7b/), and an OpenAI-like api provided by [LiteLLM](https://github.com/BerriAI/litellm) on the side.
A quick prototype to self-host [LibreChat](https://github.com/danny-avila/LibreChat) backed by a locally-run [Mistral](https://mistral.ai/news/announcing-mistral-7b/) model, and an OpenAI-like API provided by [LiteLLM](https://github.com/BerriAI/litellm) on the side.
## Goals
@@ -38,7 +38,16 @@ A quick prototype to self-host [LibreChat](https://github.com/danny-avila/LibreC
6. Browse http://localhost:3080/
7. Create an admin account and start chatting!
The API along with the APIDoc will be available at http://localhost:8000/
### Steps for NO GPU (use CPU)
**Warning: This may be very slow depending on your CPU and may use a lot of RAM depending on the model**
1. Make sure your drivers are up to date.
2. Clone the repo.
3. Copy the CPU compose spec to select it: `cp docker-compose.cpu.yml docker-compose.yml`
4. Run `docker compose up` and wait a few minutes for the model to be downloaded and served (a condensed command recap follows after these steps).
5. Browse http://localhost:3080/
6. Create an admin account and start chatting!
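For reference, steps 3 and 4 boil down to the following commands. This is a sketch that assumes the repository is already cloned and you are in its root directory; it uses detached mode plus `logs -f` as one way to watch the model download:

```bash
# Select the CPU-only compose spec and start the stack in the background.
cp docker-compose.cpu.yml docker-compose.yml
docker compose up -d

# Follow the Ollama logs while the model is downloaded and served.
docker compose logs -f ollama
```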
## Configuring additional models
@@ -85,3 +94,8 @@ eg:
* [LibreChat](https://github.com/danny-avila/LibreChat) is a ChatGPT clone with support for multiple AI endpoints. It's deployed alongside a [MongoDB](https://github.com/mongodb/mongo) database and [Meilisearch](https://github.com/meilisearch/meilisearch) for search. It's exposed on http://localhost:3080/.
* [LiteLLM](https://github.com/BerriAI/litellm) is an OpenAI-like API. It is exposed on http://localhost:8000/ without any authentication by default.
* [Ollama](https://github.com/ollama/ollama) manages and serves the local models (see the quick check below).
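Once the stack is up, the two HTTP endpoints listed above can be smoke-tested from the host. A rough sketch, assuming the default port mappings from the compose files and that LiteLLM exposes the usual OpenAI-style model-listing route:

```bash
# Ollama: list the models it has pulled (port 11434).
curl -s http://localhost:11434/api/tags

# LiteLLM: list the models it proxies (port 8000, no auth by default).
curl -s http://localhost:8000/v1/models
```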
## Alternatives
Check out [LM Studio](https://lmstudio.ai/) for a more integrated, but non-web-based, alternative!

docker-compose.base.yml

@@ -5,6 +5,11 @@ services:
    command: --config /config.yaml
    ports:
      - 8000:8000
    env_file:
      - .env
    environment:
      - HOST=0.0.0.0
      - PORT=8000
    volumes:
      - ./litellm/config.yaml:/config.yaml:ro
@@ -33,6 +38,8 @@ services:
    volumes:
      - librechat_mongodb_data:/data/db
    command: mongod --noauth
    env_file:
      - .env
  meilisearch:
    image: getmeili/meilisearch:v1.5
    restart: unless-stopped
@@ -40,6 +47,8 @@
    environment:
      - MEILI_HOST=http://meilisearch:7700
      - MEILI_NO_ANALYTICS=true
    env_file:
      - .env
    volumes:
      - librechat_meilisearch_data:/meili_data

docker-compose.cpu.yml

@@ -0,0 +1,17 @@
include:
  - docker-compose.base.yml

services:
  # Begin Ollama service
  ollama:
    image: ollama/ollama:0.1.23
    restart: unless-stopped
    entrypoint: /bootstrap.sh
    command: mistral
    env_file:
      - .env
    ports:
      - 11434:11434
    volumes:
      - ./ollama/bootstrap.sh:/bootstrap.sh:ro
      - ./ollama:/root/.ollama
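The `entrypoint`/`command` pair hands `mistral` to `bootstrap.sh` as its argument, so the model is fetched on first start. One way to verify what the container ended up with, assuming the service keeps the `ollama` name shown above:

```bash
# List the models available inside the Ollama container.
docker compose exec ollama ollama list
```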

librechat.yaml

@@ -7,12 +7,12 @@ endpoints:
      apiKey: "noUse"
      baseURL: "http://litellm:8000"
      models:
        default: ["mistral-7b"]
        default: ["mistral"]
        fetch: true
      titleConvo: true
      titleModel: "mistral-7b"
      titleModel: "mistral"
      summarize: true
      summaryModel: "mistral-7b"
      summaryModel: "mistral"
      forcePrompt: false
      modelDisplayLabel: "Ollama"
      # addParams:

litellm/config.yaml

@@ -1,5 +1,5 @@
model_list:
  - model_name: mistral-7b
  - model_name: mistral
    litellm_params:
      model: ollama/mistral
      api_base: http://ollama:11434
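With the LiteLLM alias renamed from `mistral-7b` to `mistral`, any OpenAI-style client (LibreChat included) has to use the new name. A minimal sketch of a direct request against the proxy, assuming the default `8000:8000` port mapping and the standard OpenAI-compatible chat route:

```bash
# Ask the proxied Ollama/Mistral model a question through LiteLLM.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```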

ollama/bootstrap.sh

@@ -1,10 +1,5 @@
#!/bin/bash -x
# Ollama has trouble handling HTTP_PROXY
# https://github.com/ollama/ollama/issues/2168
unset HTTP_PROXY
unset http_proxy
ollama serve &
sleep 1