docs/backend/SYCL.md
SYCL is a high-level parallel programming model designed to improve developer productivity when writing code across various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing and based on standard C++17.
oneAPI is an open ecosystem and a standards-based specification, supporting multiple architectures including but not limited to Intel CPUs, GPUs and FPGAs. Its key components include the DPC++/C++ compiler and a set of libraries such as oneMKL, oneDNN and oneDPL (introduced below).
The llama.cpp SYCL backend is primarily designed for Intel GPUs; SYCL's cross-platform capabilities enable support for other vendor GPUs as well.
The following releases are verified and recommended:
| Commit ID | Tag | Release | Verified Platform | Update date |
|---|---|---|---|---|
| 24e86cae7219b0f3ede1d5abdf5bf3ad515cccb8 | b5377 | llama-b5377-bin-win-sycl-x64.zip | Arc B580/Linux/oneAPI 2025.1<br>LNL Arc GPU/Windows 11/oneAPI 2025.1.1 | 2025-05-15 |
| 3bcd40b3c593d14261fb2abfabad3c0fb5b9e318 | b4040 | llama-b4040-bin-win-sycl-x64.zip | Arc A770/Linux/oneAPI 2024.1<br>MTL Arc GPU/Windows 11/oneAPI 2024.1 | 2024-11-19 |
| fb76ec31a9914b7761c1727303ab30380fd4f05c | b3038 | llama-b3038-bin-win-sycl-x64.zip | Arc A770/Linux/oneAPI 2024.1<br>MTL Arc GPU/Windows 11/oneAPI 2024.1 | |
- 2026.03
- 2026.02
- 2025.11
- 2025.2
| GPU | Base tokens/s | Increased tokens/s | Percent |
|---|---|---|---|
| PVC 1550 | 39 | 73 | +87% |
| Flex 170 | 39 | 50 | +28% |
| Arc A770 | 42 | 55 | +30% |
| MTL | 13 | 16 | +23% |
| ARL-H | 14 | 17 | +21% |
- 2024.11
- 2024.8
- 2024.5
- 2024.4
- 2024.3
- 2024.1
| OS | Status | Verified |
|---|---|---|
| Linux | Support | Ubuntu 22.04, Fedora Silverblue 39, Arch Linux |
| Windows | Support | Windows 11 |
The SYCL backend supports the following Intel GPU families:
On older Intel GPUs you may try OpenCL, although performance is not optimal, and some GPUs may not support OpenCL or have any GPGPU capability.
| Intel GPU | Status | Verified Model |
|---|---|---|
| Intel Data Center Max Series | Support | Max 1550, 1100 |
| Intel Data Center Flex Series | Support | Flex 170 |
| Intel Arc A-Series | Support | Arc A770, Arc A730M, Arc A750 |
| Intel Arc B-Series | Support | Arc B580 |
| Intel built-in Arc GPU | Support | built-in Arc GPU in Meteor Lake, Arrow Lake, Lunar Lake |
| Intel iGPU | Support | iGPU in 13700k, 13400, i5-1250P, i7-1260P, i7-1165G7 |
Notes:
- Memory: device memory is a limitation when running a large model. The loaded model size, llm_load_tensors: buffer_size, is displayed in the log when running ./bin/llama-completion.
- Execution Unit (EU): if the iGPU has fewer than 80 EUs, the inference speed will likely be too slow for practical use.
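For a quick check of the reported size, you can filter the load log (a sketch; the model path is illustrative):

```sh
# load the model once and surface the buffer-size line from the log
./build/bin/llama-completion -no-cnv -m models/llama-2-7b.Q4_0.gguf -p "hi" -n 1 -ngl 99 2>&1 | grep buffer_size
```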
The Docker build option is currently limited to Intel GPU targets.
```sh
# Using FP32
docker build -t llama-cpp-sycl --build-arg="GGML_SYCL_F16=OFF" --target light -f .devops/intel.Dockerfile .
# Using FP16
docker build -t llama-cpp-sycl --build-arg="GGML_SYCL_F16=ON" --target light -f .devops/intel.Dockerfile .
```
Notes:
You can also use the .devops/llama-server-intel.Dockerfile, which builds the "server" alternative.
Check the documentation for Docker to see the available images.
```sh
# First, find all the DRI cards
ls -la /dev/dri
# Then, pick the card that you want to use (e.g. /dev/dri/card0 below).
docker run -it --rm -v "/path/to/models:/models" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card0:/dev/dri/card0 llama-cpp-sycl -m /models/7B/ggml-model-q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -c 4096 -s 0
```
Notes:
The installation guide and download page for Intel data center GPU drivers can be found here: Get Intel dGPU Drivers.
Note: for client GPUs (iGPU & Arc A-Series), please refer to the client iGPU driver installation.
Once installed, add the user(s) to the video and render groups.
```sh
sudo usermod -aG render $USER
sudo usermod -aG video $USER
```
Note: logout/re-login for the changes to take effect.
Verify installation through clinfo:
```sh
sudo apt install clinfo
sudo clinfo -l
```
Sample output:
```
Platform #0: Intel(R) OpenCL Graphics
 `-- Device #0: Intel(R) Arc(TM) A770 Graphics

Platform #0: Intel(R) OpenCL HD Graphics
 `-- Device #0: Intel(R) Iris(R) Xe Graphics [0x9a49]
```
The SYCL backend depends on:
- Intel® oneAPI DPC++/C++ compiler and runtime.
- Intel® oneAPI DPC++ Library (oneDPL).
- Intel® oneAPI Deep Neural Network Library (oneDNN).
- Intel® oneAPI Math Kernel Library (oneMKL).
For Intel GPUs, all of the above are included in both the Intel® oneAPI Base Toolkit and the Intel® Deep Learning Essentials packages.
It is recommended to install Intel® Deep Learning Essentials, which provides only the necessary libraries and has a smaller footprint.
The Intel® oneAPI Base toolkit and Intel® Deep Learning Essentials can be obtained from the official Intel® oneAPI Base Toolkit page.
Please follow the instructions for downloading and installing the Toolkit for Linux, and preferably keep the default installation values unchanged, notably the installation path (/opt/intel/oneapi by default).
Following guidelines/code snippets assume the default installation values. Otherwise, please make sure the necessary changes are reflected where applicable.
Upon a successful installation, SYCL is enabled for the available Intel devices, along with relevant libraries such as oneDNN for Intel GPUs.
| Verified release |
|---|
| 2025.2.1 |
| 2025.1 |
| 2024.1 |
In order to check the available SYCL devices on the machine, please use the sycl-ls command.
```sh
source /opt/intel/oneapi/setvars.sh
sycl-ls
```
When targeting an Intel GPU, the user should expect one or more devices among the available SYCL devices. Please make sure that at least one GPU is present via sycl-ls, for instance [level_zero:gpu] in the sample output below:
```
[level_zero:gpu][level_zero:0] Intel(R) oneAPI Unified Runtime over Level-Zero, Intel(R) Arc(TM) A770 Graphics 12.55.8 [1.3.29735+27]
[level_zero:gpu][level_zero:1] Intel(R) oneAPI Unified Runtime over Level-Zero, Intel(R) UHD Graphics 730 12.2.0 [1.3.29735+27]
[opencl:cpu][opencl:0] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i5-13400 OpenCL 3.0 (Build 0) [2025.20.8.0.06_160000]
[opencl:gpu][opencl:1] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO [24.39.31294]
[opencl:gpu][opencl:2] Intel(R) OpenCL Graphics, Intel(R) UHD Graphics 730 OpenCL 3.0 NEO [24.39.31294]
```
```sh
./examples/sycl/build.sh
```
or
```sh
# Export relevant ENV variables
source /opt/intel/oneapi/setvars.sh

# Option 1: Use FP32 (recommended for better performance in most cases)
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

# Option 2: Use FP16
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL_F16=ON

# Build all binaries
cmake --build build --config Release -j -v
```
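If you only need the inference tool, you can restrict the build to a single target, mirroring the minimal setup shown in the Windows section below:

```sh
cmake --build build --config Release -j --target llama-completion
```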
Some tests may show precision issues that stem from the use of faster math instructions. These can be circumvented by setting the environment variable SYCL_PROGRAM_COMPILE_OPTIONS to -cl-fp32-correctly-rounded-divide-sqrt.
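For example (a sketch; assumes the tests were built):

```sh
source /opt/intel/oneapi/setvars.sh
export SYCL_PROGRAM_COMPILE_OPTIONS="-cl-fp32-correctly-rounded-divide-sqrt"
ctest --test-dir build --output-on-failure
```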
You can refer to the general Obtaining and quantizing models guide for model preparation, or download an already quantized model like llama-2-7b.Q4_0.gguf or Meta-Llama-3-8B-Instruct-Q4_0.gguf.
```sh
source /opt/intel/oneapi/setvars.sh
```
Similar to the native sycl-ls, the available SYCL devices can be queried as follows:
```sh
./build/bin/llama-ls-sycl-device
```
This command only displays devices of the selected backend that is supported by SYCL. The default backend is level_zero. For example, on a system with two Intel GPUs the output would look like the following:
```
found 2 SYCL devices:
| | | |Compute |Max compute|Max work|Max sub| |
|ID| Device Type| Name|capability|units |group |group |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]| Intel(R) Arc(TM) A770 Graphics| 1.3| 512| 1024| 32| 16225243136|
| 1|[level_zero:gpu:1]| Intel(R) UHD Graphics 770| 1.3| 32| 512| 32| 53651849216|
```
| Chosen Device ID | Setting |
|---|---|
| 0 | export ONEAPI_DEVICE_SELECTOR="level_zero:0" or no action |
| 1 | export ONEAPI_DEVICE_SELECTOR="level_zero:1" |
| 0 & 1 | export ONEAPI_DEVICE_SELECTOR="level_zero:0;level_zero:1" |
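For example, to pin execution to the second Level-Zero device and run (a sketch; the model path is illustrative):

```sh
export ONEAPI_DEVICE_SELECTOR="level_zero:1"
./build/bin/llama-completion -no-cnv -m models/llama-2-7b.Q4_0.gguf -p "hi" -n 32 -ngl 99 -sm none -mg 0
```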
Choose one of the following methods to run.
```sh
# run on the GPU selected with -mg
./examples/sycl/test.sh -mg 0
# or run with the default device selection
./examples/sycl/test.sh
```
There are two device selection modes, shown in the table below. In both modes the default SYCL backend is level_zero; you can choose another backend supported by SYCL by setting the environment variable ONEAPI_DEVICE_SELECTOR.
| Device selection | Parameter |
|---|---|
| Single device | --split-mode none --main-gpu DEVICE_ID |
| Multiple devices | --split-mode layer (default) |
Examples:
```sh
# single device
ZES_ENABLE_SYSMAN=1 ./build/bin/llama-completion -no-cnv -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 99 -sm none -mg 0 --mmap

# multiple devices
ZES_ENABLE_SYSMAN=1 ./build/bin/llama-completion -no-cnv -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 99 -sm layer --mmap
```
Notes:
Upon execution, the selected device(s) are reported in the log, e.g.:
```
detect 1 SYCL GPUs: [0] with top Max compute units:512
```
or
```
use 1 SYCL GPUs: [0] with Max compute units:512
```
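To surface these lines quickly, you can filter a short run's log (a sketch; the model path is illustrative):

```sh
./build/bin/llama-completion -no-cnv -m models/llama-2-7b.Q4_0.gguf -p "hi" -n 8 -ngl 99 2>&1 | grep "SYCL GPUs"
```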
The instruction guide and download page for Intel GPU drivers can be found here: Get Intel GPU Drivers.
Download the binary package for Windows from: https://github.com/ggml-org/llama.cpp/releases.
Extract the package to a local folder and run the llama tools directly. Refer to Run the inference.
Note: the package includes the SYCL runtime and all dependent DLL files, so there is no need to install the oneAPI package or activate its environment.
If you already have a recent version of Microsoft Visual Studio, you can skip this step. Otherwise, please refer to the official download page for Microsoft Visual Studio.
The SYCL backend depends on the same oneAPI components listed in the Linux section above.
All of them are included in both the Intel® oneAPI Base Toolkit and the Intel® Deep Learning Essentials packages.
It is recommended to install Intel® Deep Learning Essentials, which provides only the necessary libraries and has a smaller footprint.
The Intel® oneAPI Base toolkit and Intel® Deep Learning Essentials can be obtained from the official Intel® oneAPI Base Toolkit page.
Please follow the instructions for downloading and installing the Toolkit for Windows, and preferably keep the default installation values unchanged, notably the installation path (C:\Program Files (x86)\Intel\oneAPI by default).
Following guidelines/code snippets assume the default installation values. Otherwise, please make sure the necessary changes are reflected where applicable.
b. Enable the oneAPI runtime environment:
Type "oneAPI" in the search bar, then open the Intel oneAPI command prompt for Intel 64 for Visual Studio 2022 App.
On the command prompt, enable the runtime environment with the following:
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
cmd.exe "/K" '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'
c. Verify installation
In the oneAPI command line, run the following to print the available SYCL devices:
```bat
sycl-ls.exe
```
There should be one or more Level-Zero GPU devices displayed as [ext_oneapi_level_zero:gpu]. Below is an example of such output, detecting an Intel Iris Xe GPU as a Level-Zero SYCL device:
Output (example):
```
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Iris(R) Xe Graphics OpenCL 3.0 NEO [31.0.101.5186]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Iris(R) Xe Graphics 1.3 [1.3.28044]
```
a. Download & install cmake for Windows: https://cmake.org/download/ (CMake can also be installed from Visual Studio Installer) b. The new Visual Studio will install Ninja as default. (If not, please install it manually: https://ninja-build.org/)
You can download the release package for Windows directly; it includes the binaries and the dependent oneAPI DLL files.
Choose one of the following methods to build from source code.
```bat
.\examples\sycl\win-build-sycl.bat
```
On the oneAPI command line window, step into the llama.cpp main directory and run the following:
@call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force
# Option 1: Use FP32 (recommended for better performance in most cases)
cmake -B build -G "Ninja" -DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
# Option 2: Or FP16
cmake -B build -G "Ninja" -DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release -DGGML_SYCL_F16=ON
cmake --build build --config Release -j
Or, use CMake presets to build:
```bat
:: Release
cmake --preset x64-windows-sycl-release
cmake --build build-x64-windows-sycl-release -j --target llama-completion

:: Release with FP16
cmake -DGGML_SYCL_F16=ON --preset x64-windows-sycl-release
cmake --build build-x64-windows-sycl-release -j --target llama-completion

:: Debug
cmake --preset x64-windows-sycl-debug
cmake --build build-x64-windows-sycl-debug -j --target llama-completion
```
You have two options to use Visual Studio to build llama.cpp:
Note:
All following commands are executed in PowerShell.
You can use Visual Studio to open the llama.cpp folder directly as a CMake project. Before compiling, select one of the SYCL CMake presets:
- x64-windows-sycl-release
- x64-windows-sycl-debug
Notes:
For a minimal experimental setup, you can build only the inference executable using:
```
cmake --build build --config Release -j --target llama-completion
```
You can use a Visual Studio solution to build and work on llama.cpp on Windows. You need to convert the CMake project into a .sln file.
If you want to use the Intel C++ Compiler for the entire llama.cpp project, run the following command:
cmake -B build -G "Visual Studio 17 2022" -T "Intel C++ Compiler 2025" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
If you prefer to use the Intel C++ Compiler only for ggml-sycl, ensure that ggml and its backend libraries are built as shared libraries (i.e. -DBUILD_SHARED_LIBS=ON, which is the default behaviour):
cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release \
-DSYCL_INCLUDE_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include" \
-DSYCL_LIBRARY_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\lib"
If successful, the build files are written to path/to/llama.cpp/build. Open the project file build/llama.cpp.sln with Visual Studio.
Once the Visual Studio solution is created, follow these steps:
1. Open the solution in Visual Studio.
2. Right-click on ggml-sycl and select Properties.
3. In the left column, expand C/C++ and select DPC++.
4. In the right panel, find Enable SYCL Offload and set it to Yes.
5. Apply the changes and save.
Navigation Path:
Properties -> C/C++ -> DPC++ -> Enable SYCL Offload (Yes)
Now, you can build llama.cpp with the SYCL backend as a Visual Studio project.
From the menu: Build -> Build Solution.
Once the build completes, the final binaries are in build/Release/bin.
Additional Note
You can avoid specifying SYCL_INCLUDE_DIR and SYCL_LIBRARY_DIR in the CMake command by setting the environment variables:
- SYCL_INCLUDE_DIR_HINT
- SYCL_LIBRARY_DIR_HINT
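For example, in PowerShell (a sketch assuming the default oneAPI installation path):

```powershell
$env:SYCL_INCLUDE_DIR_HINT = "C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include"
$env:SYCL_LIBRARY_DIR_HINT = "C:\Program Files (x86)\Intel\oneAPI\compiler\latest\lib"
cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
```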
The above instructions have been tested with Visual Studio 17 (2022) Community edition and oneAPI 2025.0. We expect them to also work with future versions if adapted accordingly.
You can refer to the general Obtaining and quantizing models guide for model preparation, or download an already quantized model like llama-2-7b.Q4_0.gguf or Meta-Llama-3-8B-Instruct-Q4_0.gguf.
On the oneAPI command line window, run the following and step into the llama.cpp directory:
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
Similar to the native sycl-ls, the available SYCL devices can be queried as follows:
```bat
build\bin\llama-ls-sycl-device.exe
```
This command only displays devices of the selected backend that is supported by SYCL. The default backend is level_zero. For example, on a system with two Intel GPUs the output would look like the following:
```
found 2 SYCL devices:
| | | |Compute |Max compute|Max work|Max sub| |
|ID| Device Type| Name|capability|units |group |group |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]| Intel(R) Arc(TM) A770 Graphics| 1.3| 512| 1024| 32| 16225243136|
| 1|[level_zero:gpu:1]| Intel(R) UHD Graphics 770| 1.3| 32| 512| 32| 53651849216|
```
| Chosen Device ID | Setting |
|---|---|
| 0 | Default option. You may also want to set ONEAPI_DEVICE_SELECTOR="level_zero:0" |
| 1 | set ONEAPI_DEVICE_SELECTOR="level_zero:1" |
| 0 & 1 | set ONEAPI_DEVICE_SELECTOR="level_zero:0;level_zero:1" or set ONEAPI_DEVICE_SELECTOR="level_zero:*" |
Choose one of the following methods to run.
```bat
examples\sycl\win-test.bat
```
Launch inference
There are two device selection modes, shown in the table below. In both modes the default SYCL backend is level_zero; you can choose another backend supported by SYCL by setting the environment variable ONEAPI_DEVICE_SELECTOR.
| Device selection | Parameter |
|---|---|
| Single device | --split-mode none --main-gpu DEVICE_ID |
| Multiple devices | --split-mode layer (default) |
Examples:
```bat
:: single device
build\bin\llama-completion.exe -no-cnv -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 99 -sm none -mg 0 --mmap

:: multiple devices
build\bin\llama-completion.exe -no-cnv -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 99 -sm layer --mmap
```
Note:
Upon execution, the selected device(s) are reported in the log, e.g.:
```
detect 1 SYCL GPUs: [0] with top Max compute units:512
```
or
```
use 1 SYCL GPUs: [0] with Max compute units:512
```
| Name | Value | Function |
|---|---|---|
| GGML_SYCL | ON (mandatory) | Enable build with SYCL code path. |
| GGML_SYCL_TARGET | INTEL (default) / NVIDIA / AMD | Set the SYCL target device type. |
| GGML_SYCL_DEVICE_ARCH | Optional | Set the SYCL device architecture; this can improve performance. See the --offload-arch table for a list of valid architectures. |
| GGML_SYCL_F16 | OFF (default) / ON (optional) | Enable FP16 build with the SYCL code path. |
| GGML_SYCL_GRAPH | OFF (default) / ON (optional) | Enable build with the SYCL Graph extension. |
| GGML_SYCL_DNN | ON (default) / OFF (optional) | Enable build with oneDNN. |
| CMAKE_C_COMPILER | icx (Linux), icx/cl (Windows) | Set the icx compiler for the SYCL code path. |
| CMAKE_CXX_COMPILER | icpx (Linux), icx (Windows) | Set the icpx/icx compiler for the SYCL code path. |
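For example, a Linux build that also pins the device architecture (a sketch; intel_gpu_acm_g10 is an illustrative Arc A-Series value, check your compiler's --offload-arch list for the right name):

```sh
source /opt/intel/oneapi/setvars.sh
cmake -B build -DGGML_SYCL=ON -DGGML_SYCL_DEVICE_ARCH=intel_gpu_acm_g10 \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j
```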
The following environment variables can be set at runtime:

| Name | Value | Function |
|---|---|---|
| GGML_SYCL_DEBUG | 0 (default) or 1 | Enable debug logging via the GGML_SYCL_DEBUG macro. |
| GGML_SYCL_ENABLE_FLASH_ATTN | 1 (default) or 0 | Enable Flash Attention; it can reduce memory usage. The performance impact depends on the LLM. |
| GGML_SYCL_DISABLE_OPT | 0 (default) or 1 | Disable optimized features for Intel GPUs. (Recommended: set to 1 for Intel devices older than Gen 10.) |
| GGML_SYCL_DISABLE_GRAPH | 0 or 1 (default) | Disable running computations through the SYCL Graph feature. Disabled by default because SYCL Graph support is still in development and does not yet improve performance. |
| GGML_SYCL_DISABLE_DNN | 0 (default) or 1 | Disable running computations through oneDNN and always use oneMKL. |
| ZES_ENABLE_SYSMAN | 0 (default) or 1 | Report the GPU's free memory via sycl::aspect::ext_intel_free_memory. Recommended when --split-mode = layer. |
| UR_L0_ENABLE_RELAXED_ALLOCATION_LIMITS | 0 (default) or 1 | Allow single device-memory allocations larger than 4GB. |
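For example, a run combining several of these variables (a sketch; the values and model path are illustrative):

```sh
export ZES_ENABLE_SYSMAN=1       # report free GPU memory (useful with --split-mode layer)
export GGML_SYCL_DISABLE_OPT=1   # e.g. for an Intel device older than Gen 10
./build/bin/llama-completion -no-cnv -m models/llama-2-7b.Q4_0.gguf -p "hi" -n 32 -ngl 99 -sm layer
```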
Open to all contributors.
All code changes should be useful to users:
Code for the following cases will not be accepted:
Using environment variables to toggle features on and off is encouraged.
Design the code based on the published official releases of oneAPI packages: compiler, library, driver, OS kernel.
Developers need to maintain the code they submit.
Split-mode [row] is not supported.
AOT (Ahead-of-Time) compilation is missing from the build.
Error: error while loading shared libraries: libsycl.so: cannot open shared object file: No such file or directory.
Fix: activate the oneAPI runtime first: source /opt/intel/oneapi/setvars.sh.

General compiler error: remove the build folder and try a clean build.
I cannot see [ext_oneapi_level_zero:gpu] after installing the GPU driver on Linux.
Please double-check with sudo sycl-ls.
If it's present in the list, please add the video and render groups to your user, then logout/login or restart your system:
```sh
sudo usermod -aG render $USER
sudo usermod -aG video $USER
```
Otherwise, please double-check the GPU driver installation steps.
Can I report Ollama issue on Intel GPU to llama.cpp SYCL backend?
No. We can't support Ollama issues directly, because we aren't familiar with Ollama.
We suggest reproducing the issue on llama.cpp and reporting a similar issue against llama.cpp; we will support that.
The same applies to other projects built on the llama.cpp SYCL backend.
Native API failed. Native API returns: 39 (UR_RESULT_ERROR_OUT_OF_DEVICE_MEMORY), ggml_backend_sycl_buffer_type_alloc_buffer: can't allocate 3503030272 Bytes of memory on device, or failed to allocate SYCL0 buffer
You are running out of Device Memory.
| Reason | Solution |
|---|---|
| The default context is too big. It leads to excessive memory usage. | Set -c 8192 or a smaller value. |
| The model is too big and requires more memory than what is available. | Choose a smaller model or a smaller quantization, like Q5 -> Q4. Alternatively, use more than one device to load the model. |
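In shell form, the two mitigations look like this (a sketch; the model path is illustrative):

```sh
# shrink the context window
./build/bin/llama-completion -no-cnv -m models/llama-2-7b.Q4_0.gguf -c 8192 -ngl 99 -p "hi" -n 32
# or spread the model across two devices
export ONEAPI_DEVICE_SELECTOR="level_zero:0;level_zero:1"
./build/bin/llama-completion -no-cnv -m models/llama-2-7b.Q4_0.gguf -ngl 99 -sm layer -p "hi" -n 32
```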
ggml_backend_sycl_buffer_type_alloc_buffer: can't allocate 5000000000 Bytes of memory on device
You need to enable device allocations larger than 4GB:

```sh
# Linux
export UR_L0_ENABLE_RELAXED_ALLOCATION_LIMITS=1
```

```bat
:: Windows
set UR_L0_ENABLE_RELAXED_ALLOCATION_LIMITS=1
```
Please add the [SYCL] prefix/tag in issue/PR titles to help the SYCL contributors check and address them without delay.