# CI

In addition to GitHub Actions, whisper.cpp also uses a custom CI framework:

https://github.com/ggml-org/ci

It monitors the master branch for new commits and runs the `ci/run.sh` script on dedicated cloud instances. This allows us to execute heavier workloads compared to just using GitHub Actions. Also with time, the cloud instances will be scaled to cover various hardware architectures, including GPU and Apple Silicon instances.

Collaborators can optionally trigger the CI run by adding the `ggml-ci` keyword to their commit message. Only the branches of this repo are monitored for this keyword.
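The trigger is the literal keyword `ggml-ci` anywhere in the commit message. A check equivalent to what the monitor performs can be sketched as follows (an illustration only; the real matching logic lives in the ggml-org/ci framework, and the commit message here is made up):

```shell
# Hypothetical commit message containing the trigger keyword
msg="whisper : improve beam search (ggml-ci)"

# The custom CI runs only when the keyword is present
if printf '%s' "$msg" | grep -q "ggml-ci"; then
  echo "custom CI triggered"
fi
```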

It is good practice, before publishing changes, to execute the full CI locally on your machine:

```bash
mkdir tmp

# CPU-only build
bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with CUDA support
GG_BUILD_CUDA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
```

## Environment Variables

The CI script supports several environment variables to control the build:

| Variable | Description |
| --- | --- |
| `GG_BUILD_CUDA` | Enable NVIDIA CUDA GPU acceleration |
| `GG_BUILD_SYCL` | Enable Intel SYCL acceleration |
| `GG_BUILD_VULKAN` | Enable Vulkan GPU acceleration |
| `GG_BUILD_METAL` | Enable Metal acceleration on Apple Silicon |
| `GG_BUILD_BLAS` | Enable BLAS CPU acceleration |
| `GG_BUILD_OPENVINO` | Enable OpenVINO support |
| `GG_BUILD_COREML` | Enable Core ML support for Apple Neural Engine |
| `GG_BUILD_LOW_PERF` | Limit tests for low-performance hardware |
| `GG_BUILD_TEST_MODELS` | Comma-separated list of models to test (e.g. `"tiny.en,tiny,base,medium"`; defaults to all models unless `GG_BUILD_LOW_PERF` is set) |
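As an illustration of how a comma-separated `GG_BUILD_TEST_MODELS` value can be consumed, here is a minimal shell sketch; this mirrors a typical pattern and is an assumption, not the actual parsing code in `ci/run.sh`:

```shell
# Hypothetical example value; the model names match those in the table above
GG_BUILD_TEST_MODELS="tiny.en,tiny,base"

# Split the comma-separated list and iterate over each model name
for model in $(printf '%s' "$GG_BUILD_TEST_MODELS" | tr ',' ' '); do
  echo "testing model: $model"
done
```

A full local run restricted to small models would then look like `GG_BUILD_TEST_MODELS="tiny.en,base" bash ./ci/run.sh ./tmp/results ./tmp/mnt`.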