.. meta::
   :description: Develop and implement independent inference solutions for
                 different devices with the components of plugin architecture
                 of OpenVINO.
.. toctree::
   :maxdepth: 1
   :caption: Converting and Preparing Models
   :hidden:

   Implement Plugin Functionality <openvino-plugin-library/plugin>
   Implement Compiled Model Functionality <openvino-plugin-library/compiled-model>
   Implement Synchronous Inference Request <openvino-plugin-library/synch-inference-request>
   Implement Asynchronous Inference Request <openvino-plugin-library/asynch-inference-request>
   Provide Plugin Specific Properties <openvino-plugin-library/plugin-properties>
   Implement Remote Context <openvino-plugin-library/remote-context>
   Implement Remote Tensor <openvino-plugin-library/remote-tensor>
   openvino-plugin-library/build-plugin-using-cmake
   openvino-plugin-library/plugin-testing
   openvino-plugin-library/advanced-guides
   openvino-plugin-library/plugin-api-references
The plugin architecture of OpenVINO allows you to develop and plug in independent inference
solutions dedicated to different devices. Physically, a plugin is represented as a dynamic library
exporting the single ``create_plugin_engine`` function, which creates a new plugin instance.
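The entry-point pattern described above can be sketched in a simplified, self-contained form. Note that this is an illustrative stand-in, not the real API: an actual OpenVINO plugin derives from ``ov::IPlugin`` and uses the ``OV_DEFINE_PLUGIN_CREATE_FUNCTION`` macro to generate the exported factory, while ``IPluginStub`` and ``TemplatePluginStub`` below are hypothetical names invented for this sketch.

.. code-block:: cpp

   #include <memory>
   #include <string>

   // Hypothetical stand-in for the plugin interface (the real one is ov::IPlugin).
   struct IPluginStub {
       virtual ~IPluginStub() = default;
       virtual std::string device_name() const = 0;
   };

   // Hypothetical device-specific plugin implementation.
   struct TemplatePluginStub : IPluginStub {
       std::string device_name() const override { return "TEMPLATE"; }
   };

   // The dynamic library exports a single C-linkage factory so the core runtime
   // can locate it (via dlsym/GetProcAddress) and create a new plugin instance.
   extern "C" IPluginStub* create_plugin_engine() {
       return new TemplatePluginStub();
   }

The ``extern "C"`` linkage is what makes the symbol discoverable by name at load time; the core runtime only needs to know this one entry point to bootstrap the whole plugin.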
OpenVINO Plugin Library
#######################
An OpenVINO plugin dynamic library consists of several main components:
1. :doc:`Plugin class <openvino-plugin-library/plugin>`:

   * Provides information about devices of a specific type.
   * Can create a :doc:`compiled model <openvino-plugin-library/compiled-model>` instance which represents a
     Neural Network backend specific graph structure for a particular device, in opposite to the ov::Model,
     which is backend-independent.
   * Can import an already compiled graph structure from an input stream to a
     :doc:`compiled model <openvino-plugin-library/compiled-model>` object.

2. :doc:`Compiled Model class <openvino-plugin-library/compiled-model>`:

   * Is an execution configuration compiled for a particular device and takes into account its capabilities.
   * Holds a reference to a particular device and a task executor for this device.
   * Can create several instances of :doc:`Inference Request <openvino-plugin-library/synch-inference-request>`.
   * Can export an internal backend specific graph structure to an output stream.

3. :doc:`Inference Request class <openvino-plugin-library/synch-inference-request>`:

   * Runs an inference pipeline serially.
   * Can extract performance counters for an inference pipeline execution profiling.

4. :doc:`Asynchronous Inference Request class <openvino-plugin-library/asynch-inference-request>`:

   * Wraps the :doc:`Inference Request <openvino-plugin-library/synch-inference-request>` class and runs
     pipeline stages in parallel on several task executors based on a device-specific pipeline structure.

5. :doc:`Plugin specific properties <openvino-plugin-library/plugin-properties>`:

   * Provides the plugin specific properties.

6. :doc:`Remote Context <openvino-plugin-library/remote-context>`:

   * Provides the device specific remote context. Context allows to create remote tensors.

7. :doc:`Remote Tensor <openvino-plugin-library/remote-tensor>`:

   * Provides the device specific remote tensor API and implementation.
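How the main components relate to one another (plugin compiles a model, the compiled model creates inference requests, and an asynchronous request wraps a synchronous one to run it on a task executor) can be sketched with simplified, self-contained classes. This is illustrative only: real plugins derive from ``ov::IPlugin``, ``ov::ICompiledModel``, ``ov::ISyncInferRequest``, and ``ov::IAsyncInferRequest``, while every ``*Stub`` name below is a hypothetical stand-in, and ``std::async`` stands in for the device-specific task executor.

.. code-block:: cpp

   #include <future>
   #include <memory>
   #include <vector>

   // Runs the inference pipeline serially (stand-in for ov::ISyncInferRequest).
   struct InferRequestStub {
       std::vector<int> stages_run;
       void infer() {
           // Hypothetical three-stage pipeline: preprocess, execute, postprocess.
           for (int stage = 0; stage < 3; ++stage)
               stages_run.push_back(stage);
       }
   };

   // Wraps a synchronous request and runs it on a task executor
   // (stand-in for ov::IAsyncInferRequest; std::async plays the executor role).
   struct AsyncInferRequestStub {
       std::shared_ptr<InferRequestStub> sync_request;
       std::future<void> start_async() {
           return std::async(std::launch::async, [this] { sync_request->infer(); });
       }
   };

   // Device-specific compiled graph; can create several inference requests
   // (stand-in for ov::ICompiledModel).
   struct CompiledModelStub {
       std::shared_ptr<InferRequestStub> create_infer_request() {
           return std::make_shared<InferRequestStub>();
       }
   };

   // Entry point of the component hierarchy (stand-in for ov::IPlugin).
   struct PluginStub {
       std::shared_ptr<CompiledModelStub> compile_model() {
           return std::make_shared<CompiledModelStub>();
       }
   };

A typical call chain then reads ``plugin.compile_model()`` → ``compiled->create_infer_request()`` → wrap in an async request and ``start_async()``, mirroring the ownership order of the components listed above.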
.. note::

   This documentation is written based on the Template plugin, which demonstrates plugin
   development details. Find the complete code of the Template, which is fully compilable
   and up-to-date, at ``<openvino source dir>/src/plugins/template``.
Detailed Guides
###############
* :doc:`Build <openvino-plugin-library/build-plugin-using-cmake>` a plugin library using CMake
* Plugin and its components :doc:`testing <openvino-plugin-library/plugin-testing>`
* :doc:`Quantized networks <openvino-plugin-library/advanced-guides/quantized-models>`
* :doc:`Low precision transformations <openvino-plugin-library/advanced-guides/low-precision-transformations>` guide
* :doc:`Writing OpenVINO™ transformations <transformation-api>` guide
* `Integration with AUTO Plugin <https://github.com/openvinotoolkit/openvino/blob/master/src/plugins/auto/docs/integration.md>`__

API References
##############
* `OpenVINO Plugin API <https://docs.openvino.ai/2026/api/c_cpp_api/group__ov__dev__api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2026/api/c_cpp_api/group__ie__transformation__api.html>`__