docs/content/installation/linux.md
You can manually download the appropriate binary for your system from the releases page, then make it executable and run it:
```bash
chmod +x local-ai-*
./local-ai-*
```
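As a sketch of the full download-and-run flow, the snippet below builds a download URL from `uname` output. The asset naming scheme (`local-ai-<OS>-<arch>`) is an assumption for illustration; check the releases page for the exact file names published for your platform.

```shell
# Detect the current platform.
OS="$(uname -s)"        # e.g. Linux
ARCH="$(uname -m)"      # e.g. x86_64

# Assumed asset naming scheme; verify against the releases page.
URL="https://github.com/mudler/LocalAI/releases/latest/download/local-ai-${OS}-${ARCH}"

echo "Download with: curl -L -o local-ai ${URL}"
```

Piping the printed `curl` command into a shell (or running it by hand) leaves a `local-ai` binary in the current directory, which you can then `chmod +x` and start as shown above.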
Hardware requirements vary based on:
- the size of the model you want to run
- the quantization level used
- the backend selected (e.g. CPU-only vs. GPU-accelerated)
For performance benchmarks with different backends, such as llama.cpp, visit this link.
After installation, the LocalAI API is available at:

http://localhost:8080
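A quick way to confirm the server is up is to probe it with `curl`. This sketch assumes the default port (8080) and uses the `/readyz` health endpoint; `000` is reported when nothing is listening yet.

```shell
# Probe LocalAI's readiness endpoint on the default port.
# "|| true" keeps the script going if nothing is listening;
# curl itself reports 000 when it gets no HTTP response.
STATUS="$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 http://localhost:8080/readyz || true)"
echo "readyz returned HTTP ${STATUS}"
```

A `200` response means LocalAI is ready to serve requests; anything else suggests the server is still loading or not running.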