docker/README.md
## Services

- wren-engine: the engine service. Check out an example here: wren-engine/example.
- wren-ai-service: the AI service.
- qdrant: the vector store that the AI service uses.
- wren-ui: the UI service.
- bootstrap: puts the required files into the volume for the engine service.

## Volume

Shared data is stored in a data volume.
The path structure is as follows:

- /mdl
  - *.json (sample.json is put here during bootstrap)
- accounts
- config.properties

## Network

- bridge network driver

## How to start with OpenAI

1. Copy `.env.example` to `.env` and set your OpenAI API key.
2. Copy `config.example.yaml` to `config.yaml` for the AI service configuration.
3. Start all services: `docker-compose --env-file .env up -d`
4. Stop all services: `docker-compose --env-file .env down`

You can change the exposed port by setting `HOST_PORT` in `.env`.
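Put together, the setup might look like this (a minimal sketch, assuming you run the commands from this docker/ directory and have Docker Compose installed):

```bash
# Create the environment and configuration files from the provided examples.
cp .env.example .env                  # then edit .env and set your OpenAI API key
cp config.example.yaml config.yaml

# Start all services in the background.
docker-compose --env-file .env up -d

# Stop all services when you are done.
docker-compose --env-file .env down
```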
## How to start with a custom LLM

To start with a custom LLM, the process is similar to starting with OpenAI. The main difference is that you need to modify the `config.yaml` file that we created in the previous step. After modifying the file, you can restart the services by running `docker-compose --env-file .env up -d --force-recreate wren-ai-service`.
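For example, assuming you edit `config.yaml` with an editor of your choice, applying a custom LLM configuration could look like this:

```bash
# Adjust the LLM provider/model settings in config.yaml
# (see the AI Service Configuration guide for the available options).
vim config.yaml

# Recreate only the AI service so it picks up the new configuration.
docker-compose --env-file .env up -d --force-recreate wren-ai-service
```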
For detailed information on how to modify the configuration for different LLM providers and models, please refer to the AI Service Configuration. This guide provides comprehensive instructions on setting up various LLM providers, embedders, and other components of the AI service.