docs/Installation/ollama.md
This guide will help you set up Ollama for Devika. Ollama is a tool that lets you run open-source large language models (LLMs) locally on your machine. It supports a variety of models, such as Llama 2, Mistral, Code Llama, and many more.
Common commands:

- `ollama run llama2` downloads the Llama 2 model (if it is not already present) and starts an interactive session with it.
- `ollama list` shows the models you have downloaded.
- `ollama serve` starts the Ollama server. The default address is `http://localhost:11434`.
- `ollama [command] --help` shows the help menu for a command. For example, `ollama run --help` shows the help menu for the `run` command.

You can set the Ollama server address in Devika's `config.toml` file, or change it via the UI.
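Once the server is running, Devika (or any other client) talks to Ollama over its HTTP API. The sketch below shows how a prompt can be sent to the `/api/generate` endpoint; it assumes the server is listening at the default address and that the `llama2` model has been pulled.

```python
import json
import urllib.request

# Default Ollama server address; change this if you configured a different one.
OLLAMA_URL = "http://localhost:11434"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an HTTP request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for the full response as a single JSON object
    }).encode("utf-8")
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Requires a running `ollama serve` and a downloaded llama2 model.
    req = build_generate_request("llama2", "Why is the sky blue?")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Setting `"stream": True` instead would return the response token-by-token as newline-delimited JSON, which is how chat UIs usually consume it.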