# ggml-virtgpu Backend Configuration

This document describes the environment variables used by the ggml-virtgpu backend system, covering both the frontend (guest-side) and backend (host-side) components.

The ggml-virtgpu backend uses environment variables for configuration across three main components: the guest-side frontend (`ggml/src/ggml-virtgpu`), the virglrenderer APIR component (`virglrenderer/src/apir`), and the host-side GGML backend library (`ggml/src/ggml-virtgpu/backend`).
## Frontend (Guest-Side) Configuration

### `GGML_REMOTING_USE_APIR_CAPSET`

Read in `ggml/src/ggml-virtgpu/virtgpu.cpp`. Selects which virtio-gpu capset the guest frontend uses to reach the host.

```bash
export GGML_REMOTING_USE_APIR_CAPSET=1  # Use APIR capset
# or leave unset for Venus capset
```

This environment variable is used during the transition phase for running with an unmodified hypervisor (one that does not support the virglrenderer APIR component). It will be removed in the future; the hypervisor will instead configure virglrenderer with the APIR Configuration Key.
## virglrenderer APIR Configuration

### `VIRGL_APIR_BACKEND_LIBRARY`

Read in `virglrenderer/src/apir/apir-context.c`; mapped to the `apir.load_library.path` configuration key. Path of the APIR backend library that virglrenderer loads.

```bash
export VIRGL_APIR_BACKEND_LIBRARY="/path/to/libggml-remotingbackend.so"
```

### `VIRGL_ROUTE_VENUS_TO_APIR`

Read in `virglrenderer/src/apir/apir-renderer.h`. Routes Venus capset traffic to the APIR component.

```bash
export VIRGL_ROUTE_VENUS_TO_APIR=1  # For testing with an unmodified hypervisor
```

### `VIRGL_APIR_LOG_TO_FILE`

Read in `virglrenderer/src/apir/apir-renderer.c`. Redirects the APIR log output to the given file instead of the default `stderr`.

```bash
export VIRGL_APIR_LOG_TO_FILE="/tmp/apir-debug.log"
```

These environment variables are used during the transition phase for running with an unmodified hypervisor (one that does not support the virglrenderer APIR component). They will be removed in the future; the hypervisor will instead configure virglrenderer with the APIR Configuration Key.
## Host Backend (GGML Library) Configuration

### `APIR_LLAMA_CPP_GGML_LIBRARY_PATH`

Read in `ggml/src/ggml-virtgpu/backend/backend.cpp`; mapped to the `ggml.library.path` configuration key. Path of the GGML library that the backend loads on the host.

```bash
# macOS with Metal backend
export APIR_LLAMA_CPP_GGML_LIBRARY_PATH="/opt/llama.cpp/lib/libggml-metal.dylib"

# Linux with CUDA backend
export APIR_LLAMA_CPP_GGML_LIBRARY_PATH="/opt/llama.cpp/lib/libggml-cuda.so"

# macOS or Linux with Vulkan backend
export APIR_LLAMA_CPP_GGML_LIBRARY_PATH="/opt/llama.cpp/lib/libggml-vulkan.so"
```
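Since the library filename depends on both the compute backend and the platform, it can be convenient to derive it in a wrapper script. The helper below is a hypothetical sketch (not part of the project); the function name and the `metal`/`cuda`/`vulkan` labels are illustrative assumptions, while the filenames match the examples above.

```bash
# Hypothetical helper: map a compute backend name and install prefix to the
# GGML library path used in the examples above.
apir_ggml_library_path() {
  case "$1" in
    metal)  printf '%s/lib/libggml-metal.dylib' "$2" ;;   # macOS
    cuda)   printf '%s/lib/libggml-cuda.so' "$2" ;;       # Linux
    vulkan) printf '%s/lib/libggml-vulkan.so' "$2" ;;     # macOS or Linux
    *)      return 1 ;;                                   # unknown backend
  esac
}

export APIR_LLAMA_CPP_GGML_LIBRARY_PATH="$(apir_ggml_library_path metal /opt/llama.cpp)"
```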
### `APIR_LLAMA_CPP_GGML_LIBRARY_REG`

Read in `ggml/src/ggml-virtgpu/backend/backend.cpp`; mapped to the `ggml.library.reg` configuration key. Name of the registration function to resolve in the GGML library (defaults to `ggml_backend_init`).

```bash
# Metal backend
export APIR_LLAMA_CPP_GGML_LIBRARY_REG="ggml_backend_metal_reg"

# CUDA backend
export APIR_LLAMA_CPP_GGML_LIBRARY_REG="ggml_backend_cuda_reg"

# Vulkan backend
export APIR_LLAMA_CPP_GGML_LIBRARY_REG="ggml_backend_vulkan_reg"

# Generic fallback (default)
# export APIR_LLAMA_CPP_GGML_LIBRARY_REG="ggml_backend_init"
```
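The registration symbols above follow a `ggml_backend_<name>_reg` pattern. A small hypothetical helper (not part of the project; the function name is an assumption) can derive the symbol from a backend name, falling back to the generic entry point:

```bash
# Hypothetical helper: derive the registration symbol for a backend name.
apir_reg_symbol() {
  case "$1" in
    metal|cuda|vulkan) printf 'ggml_backend_%s_reg' "$1" ;;
    *)                 printf 'ggml_backend_init' ;;  # generic fallback (default)
  esac
}

export APIR_LLAMA_CPP_GGML_LIBRARY_REG="$(apir_reg_symbol metal)"
```

Before launching, you can check that the chosen symbol is actually exported by the library, e.g. with `nm` on the file pointed to by `APIR_LLAMA_CPP_GGML_LIBRARY_PATH`.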
### `APIR_LLAMA_CPP_LOG_TO_FILE`

Read in `ggml/src/ggml-virtgpu/backend/backend.cpp:62`. Redirects the backend log output to the given file.

```bash
export APIR_LLAMA_CPP_LOG_TO_FILE="/tmp/ggml-backend-debug.log"
```
## Configuration Flow

The configuration system works as follows:

1. **Hypervisor setup**: virglrenderer loads the APIR backend library specified by `VIRGL_APIR_BACKEND_LIBRARY`.
2. **Context creation**: when an APIR context is created, it populates a configuration table from environment variables:
   - `apir.load_library.path` ← `VIRGL_APIR_BACKEND_LIBRARY`
   - `ggml.library.path` ← `APIR_LLAMA_CPP_GGML_LIBRARY_PATH`
   - `ggml.library.reg` ← `APIR_LLAMA_CPP_GGML_LIBRARY_REG`
3. **Backend initialization**: the backend queries the configuration via callbacks:
   - `virgl_cbs->get_config(ctx_id, "ggml.library.path")` returns the library path
   - `virgl_cbs->get_config(ctx_id, "ggml.library.reg")` returns the registration function
4. **Library loading**: the backend dynamically loads and initializes the specified GGML library.
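The context-creation step above amounts to copying environment variables into a per-context key/value table. The bash sketch below illustrates that mapping only; the actual table lives in C inside virglrenderer, and whether the `ggml_backend_init` default is applied at this point or later in the backend is an assumption.

```bash
# Illustrative sketch of the env-var -> configuration-key mapping (step 2).
# This is not project code; it only mirrors the table described above.
declare -A apir_config
apir_config["apir.load_library.path"]="${VIRGL_APIR_BACKEND_LIBRARY:-}"
apir_config["ggml.library.path"]="${APIR_LLAMA_CPP_GGML_LIBRARY_PATH:-}"
# Assumed: the generic registration entry point is the default.
apir_config["ggml.library.reg"]="${APIR_LLAMA_CPP_GGML_LIBRARY_REG:-ggml_backend_init}"
```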
## Error Messages

Common error scenarios and their messages:

- `cannot open the GGML library: env var 'APIR_LLAMA_CPP_GGML_LIBRARY_PATH' not defined`
- `cannot register the GGML library: env var 'APIR_LLAMA_CPP_GGML_LIBRARY_REG' not defined`

## Complete Example

Here's an example configuration for a macOS host with the Metal backend:
```bash
# Hypervisor environment
export VIRGL_APIR_BACKEND_LIBRARY="/opt/llama.cpp/lib/libggml-virtgpu-backend.dylib"

# Backend configuration
export APIR_LLAMA_CPP_GGML_LIBRARY_PATH="/opt/llama.cpp/lib/libggml-metal.dylib"
export APIR_LLAMA_CPP_GGML_LIBRARY_REG="ggml_backend_metal_reg"

# Optional logging
export VIRGL_APIR_LOG_TO_FILE="/tmp/apir.log"
export APIR_LLAMA_CPP_LOG_TO_FILE="/tmp/ggml.log"

# Guest configuration
export GGML_REMOTING_USE_APIR_CAPSET=1
```
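Since a missing variable only surfaces as an error after the context is created, a pre-flight check in the launch script can fail faster. The sketch below is hypothetical (not part of the project; the function name is an assumption). It treats `APIR_LLAMA_CPP_GGML_LIBRARY_REG` as required to match the error message quoted earlier, although the generic `ggml_backend_init` fallback may make it optional in practice.

```bash
# Hypothetical pre-flight check: report every required variable that is unset,
# mirroring the backend's "env var '...' not defined" error messages.
apir_check_env() {
  local var missing=0
  for var in VIRGL_APIR_BACKEND_LIBRARY \
             APIR_LLAMA_CPP_GGML_LIBRARY_PATH \
             APIR_LLAMA_CPP_GGML_LIBRARY_REG; do
    if [ -z "${!var:-}" ]; then          # bash indirect expansion
      echo "env var '$var' not defined" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Running `apir_check_env` before starting the hypervisor returns non-zero and prints one line per missing variable.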