import Collapse from '@site/src/components/Collapse';

# Docker

This guide explains how to launch Tabby using Docker.

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs> <TabItem value="cuda" label="CUDA" default>

To run Tabby with CUDA support in Docker, please install the NVIDIA Container Toolkit. Once installed, you can launch Tabby using the command below:

```bash
docker run -d \
  --name tabby \
  --gpus all \
  -p 8080:8080 \
  -v $HOME/.tabby:/data \
  registry.tabbyml.com/tabbyml/tabby \
    serve \
    --model StarCoder-1B \
    --chat-model Qwen2-1.5B-Instruct \
    --device cuda
```
<Collapse title="For systems with SELinux enabled, you may need to add the `:Z` option to the volume mount">

```bash
docker run -d \
  --name tabby \
  --gpus all \
  -p 8080:8080 \
  -v $HOME/.tabby:/data:Z \
  registry.tabbyml.com/tabbyml/tabby \
    serve \
    --model StarCoder-1B \
    --chat-model Qwen2-1.5B-Instruct \
    --device cuda
```

</Collapse>

After Tabby is running, you can access it at http://localhost:8080.
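You can also verify from the command line that the server is up before opening the browser; a minimal sketch, assuming the server responds on port 8080 (the exact health endpoint path is an assumption — check your server's API docs if it differs):

```bash
# Probe the server port; a 2xx/3xx/4xx HTTP response means the
# container is listening (endpoint path /v1/health is assumed).
curl -sf http://localhost:8080/v1/health || echo "Tabby is not responding yet"
```

On a fresh start the first request may take a while, since the container downloads the model weights into `$HOME/.tabby` before serving.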

To view the logs, you can use the following command:

```bash
docker logs -f tabby
```
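As a quick smoke test, you can request a code completion directly over HTTP. The request shape below is a sketch assuming a `/v1/completions` endpoint that accepts a language and a prefix/suffix pair; adjust it to match your server's API documentation if it differs:

```bash
# Ask the running Tabby server to complete a Python snippet
# (endpoint and JSON shape are assumptions, not confirmed here).
curl -s -X POST http://localhost:8080/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "language": "python",
    "segments": {
      "prefix": "def fib(n):\n    ",
      "suffix": "\n"
    }
  }'
```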
</TabItem>

{false && <TabItem value="cpu" label="CPU" default>

```bash
docker run --entrypoint /opt/tabby/bin/tabby-cpu -it \
  -p 8080:8080 -v $HOME/.tabby:/data \
  registry.tabbyml.com/tabbyml/tabby \
  serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
```

</TabItem>} </Tabs>