.. _benchmarking:

Benchmarking 3D
===============
This document introduces benchmarking concepts for 3D algorithms. By benchmarking we refer here to the ability to test different computational pipelines in an easy manner. The goal is to test their reproducibility with respect to a particular problem of general interest.

For the general problem of Object Recognition (identification, categorization, detection, etc. -- all fall in the same category here), we identify the following steps:

Users should be able to acquire training data from different inputs, including but not limited to:

* full triangle meshes (CAD models);
* 360-degree full point cloud models;
* partial point cloud views.
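
As an illustration of the last input type, a partial view can be synthesized from a 360-degree model by simple back-face culling. The sketch below is plain C++ with hypothetical names (``PointN``, ``partialView``), not an actual PCL interface:

```cpp
#include <vector>

// Hypothetical minimal point type: position plus an outward surface normal.
struct PointN {
    double x, y, z;      // position
    double nx, ny, nz;   // unit surface normal
};

// Synthesize a partial view from a full 360-degree model by keeping only
// the points whose normals face the sensor (back-face culling).
// view_{x,y,z} is the direction FROM the sensor TOWARD the object.
std::vector<PointN> partialView(const std::vector<PointN>& model,
                                double view_x, double view_y, double view_z) {
    std::vector<PointN> visible;
    for (const PointN& p : model) {
        // A point is (approximately) visible when its normal points
        // against the viewing direction: dot(normal, view) < 0.
        double dot = p.nx * view_x + p.ny * view_y + p.nz * view_z;
        if (dot < 0.0)
            visible.push_back(p);
    }
    return visible;
}
```

A real acquisition pipeline would also have to model occlusion and sensor noise; this toy version only keeps points whose normals face the sensor.
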
Computing a higher level representation from the object's appearance (texture + depth) should be done:

The detected keypoints might also contain meta-information required by some descriptors, such as scale or orientation.

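
To make that meta-information concrete, the toy detector below attaches a scale value to the keypoint it reports. All names (``Keypoint``, ``detectFarthestKeypoint``) are hypothetical, and real detectors derive scale from a scale-space analysis rather than from the centroid distance used here:

```cpp
#include <cmath>
#include <vector>

struct PointXYZ { double x, y, z; };

// Hypothetical keypoint type: position plus the meta-information
// (here a scale estimate) that some descriptors require.
struct Keypoint {
    double x, y, z;
    double scale;   // meta-information attached by the detector
};

// Toy "detector": selects the point farthest from the centroid and stores
// its centroid distance as the keypoint's scale.
Keypoint detectFarthestKeypoint(const std::vector<PointXYZ>& cloud) {
    double cx = 0, cy = 0, cz = 0;
    for (const PointXYZ& p : cloud) { cx += p.x; cy += p.y; cz += p.z; }
    cx /= cloud.size(); cy /= cloud.size(); cz /= cloud.size();

    Keypoint best{0, 0, 0, -1.0};
    for (const PointXYZ& p : cloud) {
        double d = std::sqrt((p.x - cx) * (p.x - cx) +
                             (p.y - cy) * (p.y - cy) +
                             (p.z - cz) * (p.z - cz));
        if (d > best.scale)
            best = Keypoint{p.x, p.y, p.z, d};
    }
    return best;
}
```
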
The higher level representation mentioned above will herein be represented by a feature descriptor. Feature descriptors can be:

In addition, feature descriptors can be:

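
A minimal sketch of a *global* descriptor, assuming nothing beyond the standard library: a normalized histogram of point distances to the centroid, producing one fixed-length signature per object. The names are hypothetical and the descriptor itself is a toy, but it shows the pooling idea behind real global 3D descriptors:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

// Toy GLOBAL descriptor: an 8-bin histogram of point distances to the
// cloud centroid, normalized so that the bins sum to 1.
std::array<double, 8> globalDistanceHistogram(const std::vector<Pt>& cloud) {
    double cx = 0, cy = 0, cz = 0;
    for (const Pt& p : cloud) { cx += p.x; cy += p.y; cz += p.z; }
    cx /= cloud.size(); cy /= cloud.size(); cz /= cloud.size();

    // Per-point distances to the centroid, and their maximum for
    // normalization (this makes the signature scale-invariant).
    std::vector<double> dists;
    double dmax = 0.0;
    for (const Pt& p : cloud) {
        double d = std::sqrt((p.x - cx) * (p.x - cx) +
                             (p.y - cy) * (p.y - cy) +
                             (p.z - cz) * (p.z - cz));
        dists.push_back(d);
        dmax = std::max(dmax, d);
    }

    std::array<double, 8> hist{};
    if (dmax == 0.0) return hist;   // degenerate cloud: all points coincide
    for (double d : dists) {
        int bin = std::min(7, static_cast<int>(8.0 * d / dmax));
        hist[bin] += 1.0 / dists.size();
    }
    return hist;
}
```

A local descriptor would instead compute such a signature from the neighborhood of each keypoint rather than from the whole cloud.
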
The distribution of features should be classifiable into distinct, separable classes. For local features, we identify two sets of techniques:

For global features, any general purpose classification technique should work (e.g., SVMs, nearest neighbors, etc.).

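
For global features, nearest-neighbor classification can be sketched in a few lines. The brute-force scan below (hypothetical names, plain C++) is where an accelerated index such as a kd-tree would be substituted in practice:

```cpp
#include <limits>
#include <string>
#include <vector>

// One labeled training object: a fixed-length descriptor plus its class.
struct TrainingExample {
    std::vector<double> descriptor;
    std::string label;
};

// Toy 1-nearest-neighbor classifier using squared Euclidean distance.
// All descriptors are assumed to have the same length as the query.
std::string classifyNearest(const std::vector<TrainingExample>& training,
                            const std::vector<double>& query) {
    double bestDist = std::numeric_limits<double>::infinity();
    std::string bestLabel;
    for (const TrainingExample& ex : training) {
        double d2 = 0;
        for (size_t i = 0; i < query.size(); ++i) {
            double diff = ex.descriptor[i] - query[i];
            d2 += diff * diff;
        }
        if (d2 < bestDist) { bestDist = d2; bestLabel = ex.label; }
    }
    return bestLabel;
}
```
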
In addition to classification, registration can be considered one of its substeps: here the classification results are refined using, for example, iterative closest point (ICP) techniques.

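
A translation-only sketch of such a refinement step, assuming the two clouds overlap well; full ICP additionally estimates a rotation at each iteration (typically via SVD). All names are hypothetical:

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct P3 { double x, y, z; };

// Toy ICP-style refinement restricted to translation: repeatedly match each
// source point to its closest target point, then shift the source cloud by
// the mean residual. Returns the accumulated translation source -> target.
P3 refineTranslation(std::vector<P3> source, const std::vector<P3>& target,
                     int iterations = 10) {
    P3 total{0, 0, 0};
    for (int it = 0; it < iterations; ++it) {
        P3 shift{0, 0, 0};
        for (const P3& s : source) {
            // Closest target point (brute force; a kd-tree in practice).
            double best = std::numeric_limits<double>::infinity();
            P3 match{};
            for (const P3& t : target) {
                double d = (t.x - s.x) * (t.x - s.x) +
                           (t.y - s.y) * (t.y - s.y) +
                           (t.z - s.z) * (t.z - s.z);
                if (d < best) { best = d; match = t; }
            }
            shift.x += match.x - s.x;
            shift.y += match.y - s.y;
            shift.z += match.z - s.z;
        }
        shift.x /= source.size();
        shift.y /= source.size();
        shift.z /= source.size();
        for (P3& s : source) { s.x += shift.x; s.y += shift.y; s.z += shift.z; }
        total.x += shift.x; total.y += shift.y; total.z += shift.z;
    }
    return total;
}
```
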
This pipeline should be able to evaluate an algorithm's performance at different tasks. Here are some requested tasks to support:


5.1 Metrics
"""""""""""

This pipeline should provide different metrics, since algorithms excel in different areas. Here are some requested metrics:

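
Whatever metrics are chosen, they reduce to simple arithmetic over confusion counts. A sketch for precision, recall and F1 (hypothetical names; which metrics the benchmark will actually expose is left open here):

```cpp
// Toy per-class evaluation metrics from confusion counts.
struct Metrics { double precision, recall, f1; };

Metrics computeMetrics(int truePos, int falsePos, int falseNeg) {
    Metrics m{0, 0, 0};
    // precision: fraction of reported detections that were correct
    if (truePos + falsePos > 0)
        m.precision = static_cast<double>(truePos) / (truePos + falsePos);
    // recall: fraction of ground-truth objects that were found
    if (truePos + falseNeg > 0)
        m.recall = static_cast<double>(truePos) / (truePos + falseNeg);
    // F1: harmonic mean of precision and recall
    if (m.precision + m.recall > 0)
        m.f1 = 2 * m.precision * m.recall / (m.precision + m.recall);
    return m;
}
```
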
Here we describe a proposed set of classes that could be easily extended and used for the purpose of benchmarking object recognition tasks.

Training
^^^^^^^^^^^

Keypoints
^^^^^^^^^^^^

Descriptors
^^^^^^^^^^^^^^

Classification
^^^^^^^^^^^^^^^^^

Evaluation
^^^^^^^^^^^^^

The evaluation output needs to be one of the following: