
Tuning and Evaluation

docs/manual/source/evaluation/index.html.md

<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

PredictionIO's evaluation module streamlines the process of testing many combinations of engine parameters and deploying the best one, using statistically sound cross-validation methods.

There are two key components:

Engine

The engine is our evaluation target. During evaluation, in addition to the train and deploy modes described in earlier sections, the engine also generates a list of testing data points: a sequence of (Query, Actual Result) tuples. Each Query is sent to the engine, and the engine responds with a Predicted Result, just as it would when serving a live query.
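To make this concrete, here is a minimal sketch (in Python, not the actual Scala-based PredictionIO API) of how testing data points might be produced: labeled data is split into folds, and each held-out fold becomes a sequence of (query, actual) tuples. The function name `k_fold_splits` and the data shapes are illustrative assumptions.

```python
# Illustrative sketch, NOT the PredictionIO API: split labeled data into
# k folds; each held-out fold yields (query, actual) evaluation tuples.
def k_fold_splits(data, k=3):
    """Yield (train, test) pairs; items in test are (query, actual) tuples."""
    for i in range(k):
        train = [d for j, d in enumerate(data) if j % k != i]
        test = [d for j, d in enumerate(data) if j % k == i]
        yield train, test

# Hypothetical labeled data: each item pairs a query with its actual result.
data = [({"user": u}, {"rating": u % 2}) for u in range(6)]

for train, test in k_fold_splits(data, k=3):
    # Every data point lands in exactly one of train or test per fold.
    assert len(train) + len(test) == len(data)
```

During evaluation, the queries from each test fold would be sent to an engine trained on the corresponding train fold.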

Evaluator

The evaluator joins each Query with its Predicted Result and Actual Result, then evaluates the quality of the engine. PredictionIO enables you to implement any metric with just a few lines of code.
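As a sketch of the idea (again in Python rather than the actual Scala Evaluator API), a metric is just a function over the joined (query, predicted, actual) tuples. The accuracy metric and the sample data below are illustrative assumptions.

```python
# Illustrative sketch, NOT the PredictionIO Evaluator API: an accuracy
# metric computed over joined (query, predicted, actual) tuples.
def accuracy(joined):
    """Fraction of data points where the predicted result matches the actual."""
    correct = sum(1 for _query, predicted, actual in joined if predicted == actual)
    return correct / len(joined)

# Hypothetical joined evaluation results.
joined = [
    ({"user": 1}, "like", "like"),
    ({"user": 2}, "like", "dislike"),
    ({"user": 3}, "dislike", "dislike"),
    ({"user": 4}, "like", "like"),
]

print(accuracy(joined))  # 3 of 4 predictions match: 0.75
```

A real metric in PredictionIO plays the same role: it reduces the joined sequence to a score that can be compared across engine parameter sets.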

We will discuss various aspects of evaluation with PredictionIO.