Hello Classification Sample
===========================
.. meta::
   :description: Learn how to do inference of image classification models
                 using Synchronous Inference Request API (Python, C++, C).
This sample demonstrates how to do inference of image classification models using
Synchronous Inference Request API. Before using the sample, refer to the following requirements:

- The sample accepts models in any file format supported by ``core.read_model``.
- To build the sample, use the instructions available in the :doc:`Build the Sample Applications <build-samples>`
  section of the "Get Started with Samples" guide.

How It Works
####################
At startup, the sample application sets a log message capturing callback and reads command-line parameters. Then it prepares input data, loads a specified model and image to the OpenVINO™ Runtime plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.
.. tab-set::

   .. tab-item:: Python
      :sync: python

      .. scrollbox::

         .. doxygensnippet:: samples/python/hello_classification/hello_classification.py
            :language: python

   .. tab-item:: C++
      :sync: cpp

      .. scrollbox::

         .. doxygensnippet:: samples/cpp/hello_classification/main.cpp
            :language: cpp

   .. tab-item:: C
      :sync: c

      .. scrollbox::

         .. doxygensnippet:: samples/c/hello_classification/main.c
            :language: c
You can see the explicit description of each sample step in the
:doc:`Integration Steps <../../../openvino-workflow/running-inference>`
section of the "Integrate OpenVINO™ Runtime with Your Application" guide.
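In essence, the samples above boil down to a few OpenVINO Runtime calls. The following condensed Python sketch illustrates them; the model path, device name, and input shape are placeholders, and the real samples additionally handle image loading and preprocessing.

.. code-block:: python

   import numpy as np
   import openvino as ov

   core = ov.Core()                                     # create OpenVINO Runtime Core
   model = core.read_model("model.xml")                 # read a model in any supported format
   compiled_model = core.compile_model(model, "CPU")    # load the model to a device plugin

   # Dummy input matching an example input shape of {1, 3, 224, 224}.
   input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

   results = compiled_model(input_tensor)               # synchronous inference
   probs = results[compiled_model.output(0)].flatten()  # output as a flat numpy array
   top_10 = np.argsort(probs)[-10:][::-1]               # class ids of the top-10 results
   print(top_10)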
Running
####################
.. tab-set::

   .. tab-item:: Python
      :sync: python

      .. code-block:: console

         python hello_classification.py <path_to_model> <path_to_image> <device_name>

   .. tab-item:: C++
      :sync: cpp

      .. code-block:: console

         hello_classification <path_to_model> <path_to_image> <device_name>

   .. tab-item:: C
      :sync: c

      .. code-block:: console

         hello_classification_c <path_to_model> <path_to_image> <device_name>
To run the sample, you need to specify a model and an image:

- You can use images from the media files collection available at
  `the storage <https://storage.openvinotoolkit.org/data/test_data>`__.

.. note::

   - By default, OpenVINO™ Toolkit samples and demos expect input with BGR channel
     order. If you trained your model to work with RGB order, you need to reconvert
     it using model conversion API with the ``reverse_input_channels`` argument
     specified. For more information about the argument, refer to the
     **Color Conversion** section of
     :doc:`Preprocessing API <../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`.
   - Before running the sample with a trained model, make sure the model is converted
     to OpenVINO IR format using
     :doc:`model conversion API <../../../openvino-workflow/model-preparation/convert-model-to-ir>`.
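As an illustration only (it is not part of the sample code), a comparable channel reversal can also be applied at runtime with the Preprocessing API. The sketch below assumes a single-input model; the model path is a placeholder.

.. code-block:: python

   import openvino as ov
   from openvino.preprocess import PrePostProcessor

   core = ov.Core()
   model = core.read_model("model.xml")          # placeholder path

   ppp = PrePostProcessor(model)
   ppp.input().preprocess().reverse_channels()   # swap the input channel order (e.g. RGB -> BGR)
   model = ppp.build()                           # the model now embeds the preprocessing step

   compiled_model = core.compile_model(model, "CPU")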
Example
++++++++++++++++++++

Download a pre-trained model.
You can convert it by using:
.. tab-set::

   .. tab-item:: Python
      :sync: python

      .. code-block:: python

         import openvino as ov

         ov_model = ov.convert_model('./models/alexnet')
         # or, when model is a Python model object
         ov_model = ov.convert_model(alexnet)

   .. tab-item:: CLI
      :sync: cli

      .. code-block:: console

         ovc ./models/alexnet
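Note that ``ov.convert_model`` keeps the converted model in memory. To obtain the ``.xml``/``.bin`` files used in the commands below from Python, the model can be serialized with ``ov.save_model``; the output path in this sketch is just an example.

.. code-block:: python

   import openvino as ov

   ov_model = ov.convert_model('./models/alexnet')
   # Serialize to OpenVINO IR (.xml + .bin); the output path is illustrative.
   ov.save_model(ov_model, './models/alexnet/alexnet.xml')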
Perform inference of an image, using a model on a GPU, for example:
.. tab-set::

   .. tab-item:: Python
      :sync: python

      .. code-block:: console

         python hello_classification.py ./models/alexnet/alexnet.xml ./images/banana.jpg GPU

   .. tab-item:: C++
      :sync: cpp

      .. code-block:: console

         hello_classification ./models/googlenet-v1.xml ./images/car.bmp GPU

   .. tab-item:: C
      :sync: c

      .. code-block:: console

         hello_classification_c alexnet.xml ./opt/intel/openvino/samples/scripts/car.png GPU
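The last argument is the device name (``GPU`` in these examples) and must correspond to a device available on your machine. If you are unsure which devices OpenVINO can see, you can list them, for example:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   # Prints the device names available to OpenVINO on this machine, e.g. ['CPU', 'GPU'].
   print(core.available_devices)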
Sample Output
#############
.. tab-set::

   .. tab-item:: Python
      :sync: python

      The sample application logs each step in a standard output stream and
      outputs top-10 inference results.

      .. code-block:: console

         [ INFO ] Creating OpenVINO Runtime Core
         [ INFO ] Reading the model: /models/alexnet/alexnet.xml
         [ INFO ] Loading the model to the plugin
         [ INFO ] Starting inference in synchronous mode
         [ INFO ] Image path: /images/banana.jpg
         [ INFO ] Top 10 results:
         [ INFO ] class_id probability
         [ INFO ] --------------------
         [ INFO ] 954      0.9703885
         [ INFO ] 666      0.0219518
         [ INFO ] 659      0.0033120
         [ INFO ] 435      0.0008246
         [ INFO ] 809      0.0004433
         [ INFO ] 502      0.0003852
         [ INFO ] 618      0.0002906
         [ INFO ] 910      0.0002848
         [ INFO ] 951      0.0002427
         [ INFO ] 961      0.0002213
         [ INFO ]
         [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool

   .. tab-item:: C++
      :sync: cpp

      The application outputs top-10 inference results.

      .. code-block:: console

         [ INFO ] OpenVINO Runtime version ......... <version>
         [ INFO ] Build ........... <build>
         [ INFO ]
         [ INFO ] Loading model files: /models/googlenet-v1.xml
         [ INFO ] model name: GoogleNet
         [ INFO ]     inputs
         [ INFO ]         input name: data
         [ INFO ]         input type: f32
         [ INFO ]         input shape: {1, 3, 224, 224}
         [ INFO ]     outputs
         [ INFO ]         output name: prob
         [ INFO ]         output type: f32
         [ INFO ]         output shape: {1, 1000}

         Top 10 results:

         Image /images/car.bmp

         classid probability
         ------- -----------
         656     0.8139648
         654     0.0550537
         468     0.0178375
         436     0.0165405
         705     0.0111694
         817     0.0105820
         581     0.0086823
         575     0.0077515
         734     0.0064468
         785     0.0043983

   .. tab-item:: C
      :sync: c

      The application outputs top-10 inference results.

      .. code-block:: console

         Top 10 results:

         Image /opt/intel/openvino/samples/scripts/car.png

         classid probability
         ------- -----------
         656     0.666479
         654     0.112940
         581     0.068487
         874     0.033385
         436     0.026132
         817     0.016731
         675     0.010980
         511     0.010592
         569     0.008178
         717     0.006336

         This sample is an API example, for any performance measurements use the dedicated benchmark_app tool.
Additional Resources
####################
- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference>`
- :doc:`Get Started with Samples <get-started-demos>`
- :doc:`Using OpenVINO Samples <../openvino-samples>`
- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
- `OpenVINO Runtime C API <https://docs.openvino.ai/2026/api/c_cpp_api/group__ov__c__api.html>`__
- `Hello Classification Python Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/README.md>`__
- `Hello Classification C++ Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/hello_classification/README.md>`__
- `Hello Classification C Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/c/hello_classification/README.md>`__