import releaseImage from './assets.png';
import successImage from '../windows/success.png';
Running Tabby on Linux using Tabby's standalone executable distribution.
Tips:
- If you want to run Tabby on an NVIDIA GPU, install the CUDA toolkit: `sudo apt install nvidia-cuda-toolkit`. You can verify the installation with `nvcc --version`.
- If you want to use the Vulkan backend, install the Vulkan loader: `sudo apt install libvulkan1`.
- Tabby utilizes Katana as a crawling backend for the developer docs context provider. Katana is required to import and analyze developer docs when `llms.txt` is unavailable at the specified link. Please be aware that the minimum Katana version required is 1.1.2.
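If you are unsure which of these prerequisites are already present, a small probe can report them. This is a sketch: `check_tool` is a hypothetical helper, and it only tests whether each command is on `PATH`, not whether the underlying driver or GPU actually works.

```shell
#!/bin/sh
# check_tool NAME: report whether an optional prerequisite is installed
# (only checks for the command on PATH, not for working drivers).
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: not found"
  fi
}

check_tool nvcc        # CUDA compiler (only relevant for --device cuda)
check_tool vulkaninfo  # Vulkan tooling (only relevant for --device vulkan)
check_tool katana      # docs crawler (optional, see below)
```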
The simplest way to install Katana is to download a prebuilt binary from the official releases:

```shell
curl -L https://github.com/projectdiscovery/katana/releases/download/v1.1.2/katana_1.1.2_linux_amd64.zip -o katana.zip
unzip katana.zip katana
sudo mv katana /usr/bin/
rm katana.zip
```
This works well for most Linux environments. If you're using a different environment, please refer to the official installation instructions.
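Since Tabby requires Katana 1.1.2 or newer, you may want to verify the installed version programmatically. The comparison below uses `sort -V` from GNU coreutils; the `-version` flag is common across ProjectDiscovery tools, but treat the exact output format in the usage comment as an assumption.

```shell
#!/bin/sh
# version_ok MINIMUM ACTUAL -> exit 0 if ACTUAL >= MINIMUM.
# sort -V orders dotted version strings; if MINIMUM sorts first
# (or the two are equal), ACTUAL satisfies the requirement.
version_ok() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Example usage once katana is installed (output parsing is an assumption):
# installed=$(katana -version 2>&1 | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
# version_ok 1.1.2 "$installed" && echo "katana is new enough"
```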
1. Extract the downloaded release; the `tabby` executable will be in a subdirectory of `dist`.
2. Move `tabby` to a folder of your choice.
3. Make the binaries executable: `chmod +x tabby llama-server`.
4. Run the following command:
```shell
# For CPU-only environments
./tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct

# For GPU-enabled environments (where DEVICE is cuda or vulkan)
./tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device $DEVICE
```
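The extract-and-move steps above can also be sketched as a small helper. The directory layout inside the release archive (a `tabby` binary somewhere under the extracted tree, alongside `llama-server`) is an assumption; adjust the paths to match what you actually unzip.

```shell
#!/bin/sh
# install_tabby EXTRACTED_DIR TARGET_DIR (hypothetical helper):
# locate the tabby binary inside the extracted dist tree, copy the
# contents of its directory into TARGET_DIR, and mark the binaries
# executable.
install_tabby() {
  src=$(dirname "$(find "$1" -type f -name tabby | head -n1)")
  mkdir -p "$2"
  cp "$src"/* "$2"/
  chmod +x "$2"/tabby "$2"/llama-server
}

# Example (after unzipping the release into ./dist):
# install_tabby ./dist "$HOME/tabby"
```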
You can choose different models, as shown in the model registry.
You should see a success message similar to the one in the screenshot below. After that, you can visit http://localhost:8080 to access your Tabby instance.
<div align="left"> <img src={successImage} alt="Tabby server startup success message" /> </div>
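If you prefer scripting the "is it up yet?" check over refreshing the browser, a minimal polling sketch follows, assuming the web UI answers on the same http://localhost:8080 address mentioned above. `wait_for_url` is a hypothetical helper built on `curl`.

```shell
#!/bin/sh
# wait_for_url URL TRIES: poll URL with curl once per second until it
# answers (return 0) or TRIES attempts are exhausted (return 1).
wait_for_url() {
  url=$1; tries=$2; i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait up to 30 seconds for the Tabby server started above.
# wait_for_url "http://localhost:8080" 30 && echo "Tabby is up"
```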