docs/jupyter/t_pipelines/t_icp_registration.ipynb
import open3d as o3d
import open3d.core as o3c
if o3d.__DEVICE_API__ == 'cuda':
    import open3d.cuda.pybind.t.pipelines.registration as treg
else:
    import open3d.cpu.pybind.t.pipelines.registration as treg
import numpy as np
import sys
import os
import time
# monkey patches visualization and provides helpers to load geometries
sys.path.append('..')
import open3d_tutorial as o3dtut
# change to True if you want to interact with the visualization windows
o3dtut.interactive = "CI" not in os.environ
This tutorial demonstrates the ICP (Iterative Closest Point) registration algorithm. It has been a mainstay of geometric registration in both research and industry for many years. The inputs are two point clouds and an initial transformation that roughly aligns the source point cloud to the target point cloud. The output is a refined transformation that tightly aligns the two point clouds. A helper function draw_registration_result visualizes the alignment during the registration process. In this tutorial, we show different ICP variants, and the API for using them.
The function below visualizes a target point cloud and a source point cloud transformed with an alignment transformation. The target point cloud is painted cyan and the source point cloud yellow. The more and the tighter the two point clouds overlap, the better the alignment result.
def draw_registration_result(source, target, transformation):
    source_temp = source.clone()
    target_temp = target.clone()
    source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
    o3d.visualization.draw_geometries(
        [source_temp.to_legacy(),
         target_temp.to_legacy()],
        zoom=0.4459,
        front=[0.9288, -0.2951, -0.2242],
        lookat=[1.6784, 2.0612, 1.4451],
        up=[-0.3402, -0.9189, -0.1996])
In general, the ICP algorithm iterates over two steps:

1. Find the correspondence set $\mathcal{K} = \{(\mathbf{p}, \mathbf{q})\}$ from the target point cloud $\mathbf{P}$, and the source point cloud $\mathbf{Q}$ transformed with the current transformation matrix $\mathbf{T}$.
2. Update the transformation $\mathbf{T}$ by minimizing an objective function $E(\mathbf{T})$ defined over the correspondence set $\mathcal{K}$.
Different variants of ICP use different objective functions $E(\mathbf{T})$ [BeslAndMcKay1992] [ChenAndMedioni1992] [Park2017].
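The two-step loop can be sketched in plain NumPy (a toy point-to-point variant with brute-force nearest-neighbour search; not the Open3D implementation, and only practical for small point clouds):

```python
import numpy as np

def icp_point_to_point(src, tgt, T, iterations=20):
    """Toy ICP: src, tgt are (N, 3) arrays; T is a 4x4 initial pose."""
    for _ in range(iterations):
        # Step 1: correspondences via brute-force nearest neighbour.
        p = src @ T[:3, :3].T + T[:3, 3]
        dists = np.linalg.norm(p[:, None, :] - tgt[None, :, :], axis=2)
        q = tgt[dists.argmin(axis=1)]
        # Step 2: closed-form update minimizing sum |R p_i + t - q_i|^2 (Kabsch).
        mp, mq = p.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((p - mp).T @ (q - mq))
        if np.linalg.det(Vt.T @ U.T) < 0:  # avoid reflections
            Vt[-1] *= -1
        R = Vt.T @ U.T
        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = R, mq - R @ mp
        T = dT @ T
    return T
```

With identical, fully overlapping point sets and a mild initial misalignment this converges to the exact transform in a few iterations; real data needs the downsampling, correspondence-distance pruning, and convergence criteria discussed below.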
Note:
The Tensor-based ICP implementation API differs slightly from the Eigen-based ICP implementation, in order to support additional functionality.
source and target: the point clouds between which the transformation is to be estimated [`open3d.t.geometry.PointCloud`].
Note:
The initial alignment is usually obtained by a global registration algorithm.
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# For Colored-ICP, the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# Initial guess transform between the two point clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
`max_correspondence_distance`: `double` for `icp`, and `utility.DoubleVector` for `multi_scale_icp`; 1.0x - 3.0x the voxel size for each scale is a good heuristic.

# For Vanilla ICP (double)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# For Multi-Scale ICP (o3d.utility.DoubleVector):
# `max_correspondence_distances` is proportional to the resolution or the `voxel_sizes`.
# In general it is recommended to use values between 1x - 3x of the corresponding `voxel_sizes`.
# We may use a larger `max_correspondence_distances` value for the first coarse
# scale, as it is not very expensive, and it gives us more tolerance to the initial alignment.
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
`init_source_to_target`: the initial transformation estimate, of dtype `Float64` on the `CPU:0` device.

# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
Options:

- `TransformationEstimationPointToPoint()`
- `TransformationEstimationPointToPlane()`: requires the target point cloud to have a `normals` attribute (of the same dtype as the `positions` attribute).
- `TransformationEstimationForColoredICP()`: requires the target point cloud to have a `normals` attribute (of the same dtype as the `positions` attribute), and the source and target point clouds to have a `colors` attribute (of the same dtype as the `positions` attribute).

# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
The Estimation Method also supports Robust Kernels. Robust kernels are used for outlier rejection; more on this in the Robust Kernel section.
robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)
Method options:

- `RobustKernelMethod.L2Loss` (default)
- `RobustKernelMethod.L1Loss`
- `RobustKernelMethod.HuberLoss`
- `RobustKernelMethod.CauchyLoss`
- `RobustKernelMethod.GMLoss`
- `RobustKernelMethod.TukeyLoss`
- `RobustKernelMethod.GeneralizedLoss`
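For intuition, the re-weighting implied by two common kernels can be written out in NumPy (illustrative formulas for Huber and Tukey weights, not the Open3D implementation; `k` plays the role of the scaling parameter):

```python
import numpy as np

def huber_weight(r, k):
    # Quadratic inside the band |r| <= k, linear outside: weight = k / |r|.
    return np.where(np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-12))

def tukey_weight(r, k):
    # Smoothly down-weights residuals, fully rejecting those with |r| > k.
    return np.where(np.abs(r) <= k, (1.0 - (r / k) ** 2) ** 2, 0.0)
```

In a robust ICP iteration, each correspondence's squared residual is multiplied by such a weight, so gross outliers contribute little (or nothing) to the pose update.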
For Multi-Scale ICP, the criteria is a list of `ICPConvergenceCriteria`, one for each scale of ICP, to provide finer control over performance. Keep `relative_fitness` and `relative_rmse` high (loose) for the initial coarse scales, where we only want a rough estimate of the transformation, and low (tight) for the later scales, to fine-tune the alignment.

# Convergence-Criteria for Vanilla ICP:
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:
# We can control `ConvergenceCriteria` of each `scale` individually.
# We want to keep `relative_fitness` and `relative_rmse` high (more error tolerance)
# for the initial scales, i.e. we will be happy to consider ICP converged when the
# difference between 2 successive iterations for that scale is smaller than this value.
# We expect less accuracy (more error tolerance) in the initial coarse-scale iterations,
# and want the later scales to converge more accurately (less error tolerance).
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
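The termination rule these criteria encode can be sketched as (an illustrative check; the actual implementation also stops once `max_iteration` is reached):

```python
def converged(prev_fitness, fitness, prev_rmse, rmse,
              relative_fitness=1e-6, relative_rmse=1e-6):
    # ICP stops early when both metrics change less than the thresholds
    # between two successive iterations.
    return (abs(fitness - prev_fitness) < relative_fitness and
            abs(rmse - prev_rmse) < relative_rmse)
```

A loose threshold (large `relative_fitness` / `relative_rmse`) therefore declares convergence sooner, which is exactly why the coarse scales above use larger values.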
# Vanilla ICP
voxel_size = 0.025
# A lower `voxel_size` is equivalent to a higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
`callback_after_iteration`: an optional lambda function that receives a dictionary mapping strings to tensors with attributes such as "iteration_index", "scale_index", "scale_iteration_index", "inlier_rmse", "fitness", and "transformation", on the CPU device, updated after each iteration.
# Example callback_after_iteration lambda function:
callback_after_iteration = lambda updated_result_dict : print("Iteration Index: {}, Fitness: {}, Inlier RMSE: {},".format(
updated_result_dict["iteration_index"].item(),
updated_result_dict["fitness"].item(),
updated_result_dict["inlier_rmse"].item()))
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# Down-sampling voxel-size.
voxel_size = 0.025
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, callback_after_iteration)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
Now let's try with a poor initial alignment:
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.07
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
As we can see, a poor initial alignment may cause ICP to fail to converge. Using a larger max_correspondence_distance may resolve this issue, but it will take longer to process.
init_source_to_target = o3d.core.Tensor.eye(4, o3c.float32)
max_correspondence_distance = 0.5
s = time.time()
# It is highly recommended to down-sample the point-cloud before using
# ICP algorithm, for better performance.
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
We may resolve the above issues, and get even better accuracy, by using Multi-Scale ICP.

Problems with using Vanilla ICP:

- Running many iterations directly on a dense point cloud is expensive.
- The convergence may get stuck in a local minimum when the initial alignment is poor.
- It requires a larger max_correspondence_distance if the aligned point clouds do not have sufficient overlap.

These drawbacks can be solved using Multi-Scale ICP.
In Multi-Scale ICP, we perform the initial iterations on a coarse point cloud to get a better estimate of the initial alignment, and use this alignment for convergence on a denser point cloud. ICP on a coarse point cloud is inexpensive, and allows us to use a larger max_correspondence_distance. It is also less likely for the convergence to get stuck in a local minimum. As we get a good estimate, it takes fewer iterations on the dense point cloud to converge to a more accurate transform.
It is recommended to use Multi-Scale ICP over ICP, for efficient convergence, especially for large point clouds.
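The coarse-to-fine scheme can be sketched as (toy NumPy, keeping one point per occupied voxel; `run_icp` stands in for a single-scale ICP routine and is an assumption of this sketch):

```python
import numpy as np

def voxel_down_sample(points, voxel_size):
    # Keep one representative point per occupied voxel.
    keys = np.floor(points / voxel_size).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def multi_scale_align(src, tgt, voxel_sizes, T, run_icp):
    # Refine T on progressively denser down-samplings
    # (voxel_sizes strictly decreasing: coarse -> fine).
    for v in voxel_sizes:
        T = run_icp(voxel_down_sample(src, v), voxel_down_sample(tgt, v), T)
    return T
```

Each scale starts from the transform estimated at the previous, coarser scale, so the expensive fine-scale ICP only has to make small corrections.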
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
# List of Convergence-Criteria for Multi-Scale ICP:
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
# `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
# Initial alignment or source to target transform.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Callback lambda to log iteration-wise `fitness`, `inlier_rmse`, etc. to analyse and tune the result.
callback_after_iteration = lambda loss_log_map : print("Iteration Index: {}, Scale Index: {}, Scale Iteration Index: {}, Fitness: {}, Inlier RMSE: {},".format(
loss_log_map["iteration_index"].item(),
loss_log_map["scale_index"].item(),
loss_log_map["scale_iteration_index"].item(),
loss_log_map["fitness"].item(),
loss_log_map["inlier_rmse"].item()))
# Setting verbosity to Debug helps in fine-tuning the performance.
# o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
callback_after_iteration)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source, target, registration_ms_icp.transformation)
# The algorithm runs on the same device as the source and target point-cloud.
source_cuda = source.cuda(0)
target_cuda = target.cuda(0)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source_cuda, target_cuda,
voxel_sizes, criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source.cpu(), target.cpu(),
registration_ms_icp.transformation)
ICP may find no correspondences, for example when the initial alignment is far off or max_correspondence_distance is too small. In the case of no correspondences, the fitness and inlier_rmse are 0.
max_correspondence_distance = 0.02
init_source_to_target = np.asarray([[1.0, 0.0, 0.0, 5], [0.0, 1.0, 0.0, 7],
[0.0, 0.0, 1.0, 10], [0.0, 0.0, 0.0, 1.0]])
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
print("Transformation: \n", registration_icp.transformation)
if registration_icp.fitness == 0 and registration_icp.inlier_rmse == 0:
    print("ICP Convergence Failed, as no correspondences were found")
The information matrix gives us further information about how well the point clouds are aligned.
information_matrix = treg.get_information_matrix(
source, target, max_correspondence_distances[2],
registration_ms_icp.transformation)
print(information_matrix)
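For intuition, the 6x6 information matrix behaves like a Gauss-Newton term $J^\top J$ of the alignment residuals with respect to the 6-DoF pose: large eigenvalues mean the corresponding pose directions are well constrained by the data. A toy version over point-to-plane residuals (an illustrative approximation, not the Open3D computation):

```python
import numpy as np

def toy_information_matrix(points, normals):
    # Per-point Jacobian of a point-to-plane residual w.r.t. (rotation, translation).
    J = np.hstack([np.cross(points, normals), normals])  # (N, 6)
    return J.T @ J                                       # (6, 6), symmetric PSD
```

A near-singular matrix (e.g. all normals parallel) indicates pose directions that the correspondences cannot constrain, such as sliding along a plane.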
Now that we have a basic understanding of the ICP algorithm and the API, let's experiment with the different versions to understand the differences.
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Initial guess transform between the two point clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
print("Initial alignment")
evaluation = treg.evaluate_registration(source, target,
max_correspondence_distance, trans_init)
print("Fitness: ", evaluation.fitness)
print("Inlier RMSE: ", evaluation.inlier_rmse)
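For intuition, `fitness` (the ratio of inlier correspondences to source points) and `inlier_rmse` (RMSE over the inlier correspondences) can be sketched with a brute-force nearest-neighbour search (a toy version; Open3D uses an accelerated hybrid search):

```python
import numpy as np

def toy_evaluate(src, tgt, max_correspondence_distance):
    # Distance from each source point to its nearest target point.
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2).min(axis=1)
    inlier = d < max_correspondence_distance
    fitness = inlier.mean()
    inlier_rmse = float(np.sqrt((d[inlier] ** 2).mean())) if inlier.any() else 0.0
    return fitness, inlier_rmse
```

Higher fitness means more of the source found a correspondence within `max_correspondence_distance`; lower inlier_rmse means those correspondences are tight.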
We first show a point-to-point ICP algorithm [BeslAndMcKay1992] using the objective
\begin{equation} E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}|\mathbf{p} - \mathbf{T}\mathbf{q}|^{2} \end{equation}
The class TransformationEstimationPointToPoint provides functions to compute the residuals and Jacobian matrices of the point-to-point ICP objective.
# Input point-clouds
demo_icp_pcds = o3d.data.DemoICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_icp_pcds.paths[1])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPoint()
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
# Initial alignment or source to target transform.
init_source_to_target = trans_init
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
# Down-sampling voxel-size. If voxel_size < 0, original scale is used.
voxel_size = -1
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
The fitness score increases from 0.174722 to 0.372474. The inlier_rmse reduces from 0.011771 to 0.007761.
By default, icp runs until convergence or until it reaches the maximum number of iterations (30 by default). The criteria can be changed to allow more computation time and to improve the results further.
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=1000)
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
The final alignment is tight. The fitness score improves to 0.620972. The inlier_rmse reduces to 0.006581.
The point-to-plane ICP algorithm [ChenAndMedioni1992] uses a different objective function
\begin{equation} E(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2}, \end{equation}
where $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$. [Rusinkiewicz2001] has shown that the point-to-plane ICP algorithm has a faster convergence speed than the point-to-point ICP algorithm.
The class TransformationEstimationPointToPlane provides functions to compute the residuals and Jacobian matrices of the point-to-plane ICP objective.
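A single linearized solve of this objective can be sketched as (a small-angle Gauss-Newton step in NumPy; a toy version, not the Open3D solver):

```python
import numpy as np

def point_to_plane_step(src, tgt, normals):
    """One Gauss-Newton step for sum(((src_i - tgt_i) . n_i)^2).
    Returns (omega, tau): small rotation vector and translation to apply to src."""
    r = np.einsum('ij,ij->i', src - tgt, normals)        # residuals
    # Linearization: moving src by (omega x src + tau) changes r_i by
    # omega . (src_i x n_i) + tau . n_i.
    J = np.hstack([np.cross(src, normals), normals])     # (N, 6) Jacobian
    x = np.linalg.solve(J.T @ J, -J.T @ r)
    return x[:3], x[3:]
```

Because each residual is linear in the pose increment, a handful of such steps typically converges faster than point-to-point, matching the observation from [Rusinkiewicz2001] above.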
estimation = treg.TransformationEstimationPointToPlane()
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
The point-to-plane ICP reaches tight alignment within 30 iterations (a fitness score of 0.620972 and an inlier_rmse score of 0.006581).
This tutorial demonstrates an ICP variant that uses both geometry and color for registration. It implements the algorithm of [Park2017]. The color information locks the alignment along the tangent plane. Thus this algorithm is more accurate and more robust than prior point cloud registration algorithms, while the running speed is comparable to that of ICP registration.
# Override the visualization function with the best camera view for the colored-ICP sample data.
def draw_registration_result(source, target, transformation):
    source_temp = source.clone()
    target_temp = target.clone()
    source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
    o3d.visualization.draw_geometries(
        [source_temp.to_legacy(),
         target_temp.to_legacy()],
        zoom=0.5,
        front=[-0.2458, -0.8088, 0.5342],
        lookat=[1.7745, 2.2305, 0.9787],
        up=[0.3109, -0.5878, -0.7468])
print("1. Load two point clouds and show initial pose")
demo_cicp_pcds = o3d.data.DemoColoredICPPointClouds()
source = o3d.t.io.read_point_cloud(demo_cicp_pcds.paths[0])
target = o3d.t.io.read_point_cloud(demo_cicp_pcds.paths[1])
# For Colored-ICP, the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# draw initial alignment
current_transformation = np.identity(4)
draw_registration_result(source, target, current_transformation)
We first run Point-to-plane ICP as a baseline approach. The visualization below shows misaligned green triangle textures. This is because a geometric constraint does not prevent two planar surfaces from slipping.
estimation = treg.TransformationEstimationPointToPlane()
max_correspondence_distance = 0.02
init_source_to_target = np.identity(4)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
The core function for colored point cloud registration is multi_scale_icp with TransformationEstimationForColoredICP. Following [Park2017], it runs ICP iterations (see Point-to-point ICP for details) with a joint optimization objective
\begin{equation} E(\mathbf{T}) = (1-\delta)E_{C}(\mathbf{T}) + \delta E_{G}(\mathbf{T}) \end{equation}
where $\mathbf{T}$ is the transformation matrix to be estimated. $E_{C}$ and $E_{G}$ are the photometric and geometric terms, respectively. $\delta\in[0,1]$ is a weight parameter that has been determined empirically.
The geometric term $E_{G}$ is the same as the Point-to-plane ICP objective
\begin{equation} E_{G}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big((\mathbf{p} - \mathbf{T}\mathbf{q})\cdot\mathbf{n}_{\mathbf{p}}\big)^{2}, \end{equation}
where $\mathcal{K}$ is the correspondence set in the current iteration. $\mathbf{n}_{\mathbf{p}}$ is the normal of point $\mathbf{p}$.
The color term $E_{C}$ measures the difference between the color of point $\mathbf{q}$ (denoted as $C(\mathbf{q})$) and the color of its projection on the tangent plane of $\mathbf{p}$.
\begin{equation} E_{C}(\mathbf{T}) = \sum_{(\mathbf{p},\mathbf{q})\in\mathcal{K}}\big(C_{\mathbf{p}}(\mathbf{f}(\mathbf{T}\mathbf{q})) - C(\mathbf{q})\big)^{2}, \end{equation}
where $C_{\mathbf{p}}(\cdot)$ is a precomputed function continuously defined on the tangent plane of $\mathbf{p}$. The function $\mathbf{f}(\cdot)$ projects a 3D point to the tangent plane. For more details, refer to [Park2017].
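Given per-correspondence geometric and photometric residuals, evaluating the joint objective is a one-liner (an illustrative evaluator; the default geometric weight $\delta$ in Open3D's colored-ICP implementation is 0.968, but treat that value as an assumption here):

```python
import numpy as np

def colored_icp_objective(geometric_residuals, color_residuals, delta=0.968):
    # E(T) = (1 - delta) * E_C + delta * E_G
    e_c = np.sum(np.asarray(color_residuals) ** 2)
    e_g = np.sum(np.asarray(geometric_residuals) ** 2)
    return (1.0 - delta) * e_c + delta * e_g
```

Setting `delta=1.0` recovers pure point-to-plane ICP, while smaller values give the color term more influence on the alignment.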
To further improve efficiency, [Park2017] proposes a multi-scale registration scheme.
estimation = treg.TransformationEstimationForColoredICP()
current_transformation = np.identity(4)
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=50),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 30),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 14)
]
max_correspondence_distances = o3d.utility.DoubleVector([0.08, 0.04, 0.02])
voxel_sizes = o3d.utility.DoubleVector([0.04, 0.02, 0.01])
# Colored point-cloud registration.
# This implements the following paper:
# J. Park, Q.-Y. Zhou, V. Koltun,
# Colored Point Cloud Registration Revisited, ICCV 2017
print("Colored point cloud registration")
s = time.time()
reg_multiscale_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Colored ICP: ", icp_time)
print("Fitness: ", reg_multiscale_icp.fitness)
print("Inlier RMSE: ", reg_multiscale_icp.inlier_rmse)
draw_registration_result(source, target, reg_multiscale_icp.transformation)