Overview of CMake Option for Compilation

cpp-package/inspireface/doc/CMake-Option.md


The following table describes the CMake options available when compiling the project:

| Parameter | Default Value | Description |
| --- | --- | --- |
| `ISF_THIRD_PARTY_DIR` | `3rdparty` | Path to the required third-party libraries |
| `ISF_SANITIZE_ADDRESS` | `OFF` | Enable AddressSanitizer for memory error detection |
| `ISF_SANITIZE_LEAK` | `OFF` | Enable LeakSanitizer to detect memory leaks |
| `ISF_ENABLE_SYMBOL_HIDING` | `ON` | Enable symbol hiding by default for better security and performance; only symbols explicitly marked for export will be visible |
| `ISF_INSTALL_CPP_HEADER` | `OFF` | Whether to install the header files for the C++ API (the default C API, which has wider compatibility, is recommended) |
| `ISF_ENABLE_RKNN` | `OFF` | Enable RKNN for Rockchip embedded devices |
| `ISF_RK_DEVICE_TYPE` | `RV1109RV1126` | Target device model for Rockchip (supports RV1109RV1126, RV1106, RV356X) |
| `ISF_RK_COMPILER_TYPE` | `armhf` | Cross-compiler to use; `armhf`, `armhf-uclibc`, and `aarch64` are supported, so select one to match your toolchain |
| `ISF_ENABLE_RGA` | `OFF` | Enable RGA image acceleration on Rockchip devices (currently only supported on devices using RKNPU2) |
| `ISF_BUILD_LINUX_ARM7` | `OFF` | Compile for the ARMv7 architecture |
| `ISF_BUILD_LINUX_AARCH64` | `OFF` | Compile for the AArch64 architecture |
| `ISF_BUILD_WITH_TEST` | `OFF` | Compile the test-case programs |
| `ISF_BUILD_WITH_SAMPLE` | `ON` | Compile the sample programs |
| `ISF_BUILD_SHARED_LIBS` | `ON` | Compile shared libraries |
| `ISF_ENABLE_BENCHMARK` | `OFF` | Enable benchmark tests in the test cases |
| `ISF_ENABLE_USE_LFW_DATA` | `OFF` | Enable use of LFW data in the test cases |
| `ISF_ENABLE_TEST_EVALUATION` | `OFF` | Enable evaluation functionality in the test cases; must be used together with `ISF_ENABLE_USE_LFW_DATA` |
| `ISF_ENABLE_TEST_INTERNAL` | `OFF` | Enable test cases for some internal functions; requires disabling `ISF_ENABLE_SYMBOL_HIDING` at compile time |
| `ISF_BUILD_SAMPLE_INTERNAL` | `OFF` | Enable executable examples for some internal functions; requires disabling `ISF_ENABLE_SYMBOL_HIDING` at compile time |
| `ISF_ENABLE_TENSORRT` | `OFF` | Enable the TensorRT inference backend; on Linux this requires an NVIDIA device with CUDA and TensorRT-10 installed |
| `TENSORRT_ROOT` | `/usr/local/TensorRT` | TensorRT-10 installation path |
| `ISF_GLOBAL_INFERENCE_BACKEND_USE_MNN_CUDA` | `OFF` | Enable the global MNN_CUDA inference mode; requires a device with CUDA support |
| `ISF_LINUX_MNN_CUDA` | `""` | Path to a specific MNN library; requires a pre-compiled MNN library with MNN_CUDA support, and is only effective when `ISF_GLOBAL_INFERENCE_BACKEND_USE_MNN_CUDA` is enabled |
| `ISF_ENABLE_APPLE_EXTENSION` | `OFF` | On Apple devices (macOS/iOS), enable this to allow some of the SDK's models to switch to Apple-specific neural-network acceleration backends such as Metal and the ANE |
| `ISF_MNN_CUSTOM_SOURCE` | `""` | Use this option to substitute a different version of MNN; the value must be the root directory of the MNN source code |
| `INSPIRECV_BACKEND_OPENCV` | `OFF` | Use OpenCV as the image-processing backend (not recommended) |
| `ISF_ENABLE_COST_TIME` | `OFF` | Print the execution time of some important compute nodes in Debug builds |
| `ISF_NEVER_USE_OPENCV` | `ON` | Turn this option off when you need to use OpenCV as the image-processing engine |
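
These options are passed to CMake as `-D` cache definitions at configure time. As an illustrative sketch (the out-of-source `build` directory and relative third-party path are assumptions, not part of this document), a typical configure-and-build on a Linux host that keeps the defaults but disables samples and points at the bundled third-party directory might look like:

```shell
# Hypothetical example: configure from the repository root.
# Directory layout and chosen option values are assumptions;
# adjust them to your environment.
mkdir -p build && cd build

# Configure with a few of the options from the table above.
cmake .. \
  -DCMAKE_BUILD_TYPE=Release \
  -DISF_BUILD_SHARED_LIBS=ON \
  -DISF_BUILD_WITH_SAMPLE=OFF \
  -DISF_THIRD_PARTY_DIR=../3rdparty

# Build using all available cores.
cmake --build . -j"$(nproc)"
```

Boolean options such as `ISF_ENABLE_RKNN` or `ISF_ENABLE_TENSORRT` are toggled the same way (`-DISF_ENABLE_RKNN=ON`); note that some combinations have prerequisites listed in the table, e.g. `ISF_ENABLE_TEST_EVALUATION` requires `ISF_ENABLE_USE_LFW_DATA`, and the internal test/sample options require symbol hiding to be disabled.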