
+++
disableToc = false
title = "Run with container images"
weight = 6
url = '/basics/container/'
ico = "rocket_launch"
+++

LocalAI provides a variety of images to support different environments. These images are available on quay.io and Docker Hub.

For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don't have a GPU, use the CPU images. If you have an AMD GPU or Apple Silicon, see the [build section]({{%relref "installation/build" %}}).
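As a minimal sketch (assuming Docker is installed and port 8080 is free on the host), starting the CPU-only image looks like this; pick the tag that matches your hardware from the tables below:

```shell
# Pull and run the latest CPU image, exposing the LocalAI API on port 8080
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
```

Once the container is up, the API is reachable at http://localhost:8080.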

{{% notice tip %}}

Available image types:

  • Images ending with -core are smaller images without pre-downloaded Python dependencies. Use these images if you plan to use the llama.cpp, stablediffusion-ncn, or rwkv backends; if you are not sure which one to use, do not use these images.

{{% /notice %}}

Prerequisites

Before you begin, ensure you have a container engine installed if you are not using the binaries. Suitable options include Docker or Podman. For installation instructions, refer to the official Docker or Podman documentation.
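As a quick sanity check (assuming Docker; the Podman equivalent is commented out), verify the engine is installed and the daemon is reachable before pulling any images:

```shell
# Prints client and server versions; fails if the daemon is not running
docker version

# Podman equivalent:
# podman version
```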

{{% notice tip %}}

Hardware Requirements: The hardware requirements for LocalAI vary with the model size and quantization method used. For performance benchmarks with different backends, such as llama.cpp, see the respective backend's documentation. The rwkv backend is noted for its lower resource consumption.

{{% /notice %}}

Standard container images

Standard container images do not have pre-installed models. Use these if you want to configure models manually.
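Since the standard images ship without models, a common pattern is to mount a local models directory into the container. A hedged example follows; the in-container models path (assumed here to be `/build/models`) may differ between releases, so check the image documentation for your tag:

```shell
# Mount a local ./models directory so the container can load
# model files and YAML configurations placed there
docker run -ti -p 8080:8080 \
  -v $PWD/models:/build/models \
  localai/localai:latest
```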

{{< tabs >}} {{% tab title="Vanilla / CPU Images" %}}

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master | localai/localai:master |
| Latest tag | quay.io/go-skynet/local-ai:latest | localai/localai:latest |
| Versioned image | quay.io/go-skynet/local-ai:{{< version >}} | localai/localai:{{< version >}} |

{{% /tab %}}

{{% tab title="GPU Images CUDA 12" %}}

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-nvidia-cuda-12 | localai/localai:master-gpu-nvidia-cuda-12 |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-nvidia-cuda-12 | localai/localai:latest-gpu-nvidia-cuda-12 |
| Versioned image | quay.io/go-skynet/local-ai:{{< version >}}-gpu-nvidia-cuda-12 | localai/localai:{{< version >}}-gpu-nvidia-cuda-12 |

{{% /tab %}}

{{% tab title="GPU Images CUDA 13" %}}

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-nvidia-cuda-13 | localai/localai:master-gpu-nvidia-cuda-13 |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-nvidia-cuda-13 | localai/localai:latest-gpu-nvidia-cuda-13 |
| Versioned image | quay.io/go-skynet/local-ai:{{< version >}}-gpu-nvidia-cuda-13 | localai/localai:{{< version >}}-gpu-nvidia-cuda-13 |

{{% /tab %}}

{{% tab title="Intel GPU" %}}

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-intel | localai/localai:master-gpu-intel |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-intel | localai/localai:latest-gpu-intel |
| Versioned image | quay.io/go-skynet/local-ai:{{< version >}}-gpu-intel | localai/localai:{{< version >}}-gpu-intel |

{{% /tab %}}

{{% tab title="AMD GPU" %}}

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-hipblas | localai/localai:master-gpu-hipblas |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-hipblas | localai/localai:latest-gpu-hipblas |
| Versioned image | quay.io/go-skynet/local-ai:{{< version >}}-gpu-hipblas | localai/localai:{{< version >}}-gpu-hipblas |

{{% /tab %}}

{{% tab title="Vulkan Images" %}}

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-vulkan | localai/localai:master-vulkan |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-vulkan | localai/localai:latest-gpu-vulkan |
| Versioned image | quay.io/go-skynet/local-ai:{{< version >}}-vulkan | localai/localai:{{< version >}}-vulkan |
{{% /tab %}}

{{% tab title="Nvidia Linux for Tegra (CUDA 12)" %}}

These images are compatible with Nvidia ARM64 devices with CUDA 12, such as the Jetson Nano, Jetson Xavier NX, and Jetson AGX Orin. For more information, see the [Nvidia L4T guide]({{%relref "reference/nvidia-l4t" %}}).

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64 | localai/localai:master-nvidia-l4t-arm64 |
| Latest tag | quay.io/go-skynet/local-ai:latest-nvidia-l4t-arm64 | localai/localai:latest-nvidia-l4t-arm64 |
| Versioned image | quay.io/go-skynet/local-ai:{{< version >}}-nvidia-l4t-arm64 | localai/localai:{{< version >}}-nvidia-l4t-arm64 |

{{% /tab %}}

{{% tab title="Nvidia Linux for Tegra (CUDA 13)" %}}

These images are compatible with Nvidia ARM64 devices with CUDA 13, such as the Nvidia DGX Spark. For more information, see the [Nvidia L4T guide]({{%relref "reference/nvidia-l4t" %}}).

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64-cuda-13 | localai/localai:master-nvidia-l4t-arm64-cuda-13 |
| Latest tag | quay.io/go-skynet/local-ai:latest-nvidia-l4t-arm64-cuda-13 | localai/localai:latest-nvidia-l4t-arm64-cuda-13 |
| Versioned image | quay.io/go-skynet/local-ai:{{< version >}}-nvidia-l4t-arm64-cuda-13 | localai/localai:{{< version >}}-nvidia-l4t-arm64-cuda-13 |

{{% /tab %}}

{{< /tabs >}}
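As a sketch of using one of the GPU images above: with Docker, exposing Nvidia GPUs to a container typically requires the NVIDIA Container Toolkit on the host and the `--gpus` flag (the CUDA 12 tag shown here is one of the tags from the tables; verify toolkit setup against your environment):

```shell
# Requires the NVIDIA Container Toolkit to be installed on the host;
# --gpus all makes every host GPU visible inside the container
docker run -ti -p 8080:8080 --gpus all \
  localai/localai:latest-gpu-nvidia-cuda-12
```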

See Also

  • [GPU acceleration]({{%relref "features/gpu-acceleration" %}})