
Copyright (c) Recommenders contributors.

Licensed under the MIT License.

Hyperparameter Tuning for Matrix Factorization Using the Neural Network Intelligence Toolkit

This notebook shows how to use the Neural Network Intelligence toolkit (NNI) for tuning hyperparameters of a matrix factorization model. In particular, we optimize the hyperparameters of Surprise SVD.

NNI is a toolkit that helps users design and tune machine learning models (e.g., hyperparameters), neural network architectures, or a complex system's parameters in an efficient and automatic way. NNI has several appealing properties: ease of use, scalability, flexibility, and efficiency. It comes with several built-in tuning algorithms and also allows users to define their own general-purpose tuners. NNI can be executed in a distributed way on a local machine, a remote server, or a large-scale training platform such as OpenPAI or Kubernetes.

In this notebook we execute several NNI experiments on the same MovieLens data with a training-validation-test split. Each experiment corresponds to one of the built-in tuning algorithms and consists of many parallel trials, each of which corresponds to a choice of hyperparameters sampled by the tuning algorithm. All the experiments call the same Python script, which trains the SVD model and evaluates rating and ranking metrics on the validation data. This script has been adapted from the Surprise SVD notebook with only a few changes. In all experiments, we maximize precision@10.

For this notebook we use a local machine as the training platform (this can be any machine running the reco_base conda environment). In this case, NNI uses the available processors of the machine to parallelize the trials, subject to the value of trialConcurrency we specify in the configuration. Our runs and the results we report were obtained on a Standard_D16_v3 virtual machine with 16 vCPUs and 64 GB of memory.

1. Global Settings

python
import sys
import json
import os
import surprise
import pandas as pd
import shutil
import subprocess
import yaml
import pkg_resources
from tempfile import TemporaryDirectory

import recommenders
from recommenders.utils.timer import Timer
from recommenders.datasets import movielens
from recommenders.datasets.python_splitters import python_random_split
from recommenders.evaluation.python_evaluation import rmse, precision_at_k, ndcg_at_k
from recommenders.tuning.nni.nni_utils import (
    check_experiment_status,
    check_stopped,
    check_metrics_written,
    get_trials,
    stop_nni,
    start_nni,
)
from recommenders.models.surprise.surprise_utils import predict, compute_ranking_predictions

print("System version: {}".format(sys.version))
print("Surprise version: {}".format(surprise.__version__))
print("NNI version: {}".format(pkg_resources.get_distribution("nni").version))

%load_ext autoreload
%autoreload 2

2. Prepare Dataset

  1. Download the data and split it into training, validation, and test sets.
  2. Store the splits in a local directory.
python
# Parameters used by papermill

# Select MovieLens data size: 100k, 1m
MOVIELENS_DATA_SIZE = '100k'
SURPRISE_READER = 'ml-100k'
tmp_dir = TemporaryDirectory()
TMP_DIR = tmp_dir.name
NUM_EPOCHS = 30
MAX_TRIAL_NUM = 10

# time (in seconds) to wait between checks of a tuning experiment's status
WAITING_TIME = 20
MAX_RETRIES = 40  # it is recommended to have MAX_RETRIES >= 4*MAX_TRIAL_NUM
python
data = movielens.load_pandas_df(
    size=MOVIELENS_DATA_SIZE,
    header=["userID", "itemID", "rating"]
)

data.head()
python
train, validation, test = python_random_split(data, [0.7, 0.15, 0.15])
python
LOG_DIR = os.path.join(TMP_DIR, "experiments")
os.makedirs(LOG_DIR, exist_ok=True)

DATA_DIR = os.path.join(TMP_DIR, "data") 
os.makedirs(DATA_DIR, exist_ok=True)

TRAIN_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_train.pkl"
train.to_pickle(os.path.join(DATA_DIR, TRAIN_FILE_NAME))

VAL_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_val.pkl"
validation.to_pickle(os.path.join(DATA_DIR, VAL_FILE_NAME))

TEST_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_test.pkl"
test.to_pickle(os.path.join(DATA_DIR, TEST_FILE_NAME))
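
For reference, the training script reads these pickles back and converts the DataFrames into Surprise data. A minimal sketch of that conversion, assuming the column layout above (the actual code lives in svd_training.py):

python
# Assumption: this mirrors how svd_training.py loads a pickled split
# and builds a Surprise trainset from it.
import os
import pandas as pd
import surprise

train_df = pd.read_pickle(os.path.join(DATA_DIR, TRAIN_FILE_NAME))

# 'ml-100k' is a built-in Surprise Reader preset (rating scale 1-5)
reader = surprise.Reader(SURPRISE_READER)
train_set = surprise.Dataset.load_from_df(
    train_df[["userID", "itemID", "rating"]], reader
).build_full_trainset()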

3. Prepare Hyperparameter Tuning

We now prepare the training script svd_training.py for the hyperparameter tuning; it logs our target metrics such as precision, NDCG, and RMSE. We define the arguments of the script and the search space for the hyperparameters. All the parameter values will be passed to our training script.

Note that we specify precision@10 as the primary metric. We will also instruct NNI (in the configuration file) to maximize the primary metric. This is passed as an argument to the training script, and the evaluated metric is returned through the NNI Python library. In addition, we also evaluate RMSE and NDCG@10.

The script_params below are the parameters of the training script that are fixed (unlike hyper_params, which are tuned). In particular, VERBOSE, BIASED, RANDOM_STATE, and NUM_EPOCHS are parameters used by the SVD method, and REMOVE_SEEN removes the training data from the recommended items.

python
EXP_NAME = "movielens_" + MOVIELENS_DATA_SIZE + "_svd_model"
PRIMARY_METRIC = "precision_at_k"
RATING_METRICS = ["rmse"]
RANKING_METRICS = ["precision_at_k", "ndcg_at_k"]  
USERCOL = "userID"
ITEMCOL = "itemID"
REMOVE_SEEN = True
K = 10
RANDOM_STATE = 42
VERBOSE = True
BIASED = True

script_params = " ".join([
    "--datastore", DATA_DIR,
    "--train-datapath", TRAIN_FILE_NAME,
    "--validation-datapath", VAL_FILE_NAME,
    "--surprise-reader", SURPRISE_READER,
    "--rating-metrics", " ".join(RATING_METRICS),
    "--ranking-metrics", " ".join(RANKING_METRICS),
    "--usercol", USERCOL,
    "--itemcol", ITEMCOL,
    "--k", str(K),
    "--random-state", str(RANDOM_STATE),
    "--epochs", str(NUM_EPOCHS),
    "--primary-metric", PRIMARY_METRIC
])

if BIASED:
    script_params += " --biased"
if VERBOSE:
    script_params += " --verbose"
if REMOVE_SEEN:
    script_params += " --remove-seen"
python
# hyperparameters search space
# We do not set 'lr_all' and 'reg_all' because they would be overridden by the other lr_ and reg_ parameters

hyper_params = {
    'n_factors': {"_type": "choice", "_value": [10, 50, 100, 150, 200]},
    'init_mean': {"_type": "uniform", "_value": [-0.5, 0.5]},
    'init_std_dev': {"_type": "uniform", "_value": [0.01, 0.2]},
    'lr_bu': {"_type": "uniform", "_value": [1e-6, 0.1]}, 
    'lr_bi': {"_type": "uniform", "_value": [1e-6, 0.1]}, 
    'lr_pu': {"_type": "uniform", "_value": [1e-6, 0.1]}, 
    'lr_qi': {"_type": "uniform", "_value": [1e-6, 0.1]}, 
    'reg_bu': {"_type": "uniform", "_value": [1e-6, 1]},
    'reg_bi': {"_type": "uniform", "_value": [1e-6, 1]}, 
    'reg_pu': {"_type": "uniform", "_value": [1e-6, 1]}, 
    'reg_qi': {"_type": "uniform", "_value": [1e-6, 1]}
}
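
Each entry follows NNI's search space format: choice selects one element from the given list, while uniform samples a float between the two bounds. For illustration, here is a hypothetical configuration that a tuner might sample from this space and hand to the training script (the values are made up):

python
# Illustrative only: one possible sample from the search space above
sampled_params = {
    'n_factors': 100,      # one of [10, 50, 100, 150, 200]
    'init_mean': 0.03,     # uniform in [-0.5, 0.5]
    'init_std_dev': 0.08,  # uniform in [0.01, 0.2]
    'lr_bu': 0.005, 'lr_bi': 0.002, 'lr_pu': 0.01, 'lr_qi': 0.004,
    'reg_bu': 0.10, 'reg_bi': 0.05, 'reg_pu': 0.20, 'reg_qi': 0.15,
}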
python
with open(os.path.join(TMP_DIR, 'search_space_svd.json'), 'w') as fp:
    json.dump(hyper_params, fp)

We also create a YAML file with the configuration of the trials and of the tuning algorithm to be used (in this first experiment, the TPE tuner).

python
config = {
    "authorName": "default",
    "experimentName": "surprise_svd",
    "trialConcurrency": 8,
    "maxExecDuration": "1h",
    "maxTrialNum": MAX_TRIAL_NUM,
    "trainingServicePlatform": "local",
    # The path to Search Space
    "searchSpacePath": "search_space_svd.json",
    "useAnnotation": False,
    "logDir": LOG_DIR,
    "tuner": {
        "builtinTunerName": "TPE",
        "classArgs": {
            #choice: maximize, minimize
            "optimize_mode": "maximize"
        }
    },
    # The path and the running command of trial
    "trial":  {
      "command": sys.prefix + "/bin/python svd_training.py" + " " + script_params,
      "codeDir": os.path.join(os.path.split(os.path.abspath(recommenders.__file__))[0], "tuning", "nni"),
      "gpuNum": 0
    }
}
 
with open(os.path.join(TMP_DIR, "config_svd.yml"), "w") as fp:
    fp.write(yaml.dump(config, default_flow_style=False))

4. Execute NNI Trials

The conda environment comes with NNI installed, which includes the command-line tool nnictl for controlling and getting information about NNI experiments.

To start the NNI tuning trials from the command line, execute the following command:

nnictl create --config <path of config_svd.yml>

In the cell below, we call this command programmatically.

You can follow the progress of the experiment through the URL links that this command outputs.
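
nnictl offers further subcommands that are useful while experimenting, for example

nnictl experiment list

to show the running experiments, and

nnictl stop

to shut them down (the stop_nni helper used below has the same effect).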

python
# Make sure that there is no experiment running
stop_nni()
python
config_path = os.path.join(TMP_DIR, 'config_svd.yml')
nni_env = os.environ.copy()
nni_env['PATH'] = sys.prefix + '/bin:' + nni_env['PATH']
proc = subprocess.run([sys.prefix + '/bin/nnictl', 'create', '--config', config_path], env=nni_env)
if proc.returncode != 0:
    raise RuntimeError("'nnictl create' failed with code %d" % proc.returncode)
python
with Timer() as time_tpe:
    check_experiment_status(wait=WAITING_TIME, max_retries=MAX_RETRIES)

5. Show Results

The trial with the best primary metric, together with its metrics and hyperparameters, can be read from the Web UI or from the JSON file created by the training script. Below, we retrieve it programmatically using nni_utils.py.

python
trials, best_metrics, best_params, best_trial_path = get_trials('maximize')
python
best_metrics
python
best_params
python
best_trial_path

This directory is where information about the trial can be found, including logs, parameters, and the learned model. To evaluate the metrics on the test data, we load the SVD model that the training script saved as model.dump.

python
# surprise.dump.load returns a (predictions, algo) tuple; we want the fitted algorithm
svd = surprise.dump.load(os.path.join(best_trial_path, "model.dump"))[1]

The following function computes all the metrics given an SVD model.

python
def compute_test_results(svd):
    """Compute the rating and ranking metrics of an SVD model on the test set."""
    test_results = {}

    # Rating metrics are computed on predictions for the (user, item) pairs in the test set
    predictions = predict(svd, test, usercol="userID", itemcol="itemID")
    for metric in RATING_METRICS:
        # eval() maps the metric name (e.g. "rmse") to the imported function
        test_results[metric] = eval(metric)(test, predictions)

    # Ranking metrics are computed on top-k recommendations over all unseen items
    all_predictions = compute_ranking_predictions(svd, train, usercol="userID", itemcol="itemID", remove_seen=REMOVE_SEEN)
    for metric in RANKING_METRICS:
        test_results[metric] = eval(metric)(test, all_predictions, col_prediction='prediction', k=K)
    return test_results
python
test_results_tpe = compute_test_results(svd)
print(test_results_tpe)

6. More Tuning Algorithms

We now apply other tuning algorithms supported by NNI to the same problem. For details about these tuners, see the NNI docs. The only change needed is to the tuner (or, for Hyperband, advisor) entry of the configuration file.

In summary, the tuners used in this notebook are the following:

  • the Tree-structured Parzen Estimator (TPE), an instance of the Sequential Model-Based Optimization (SMBO) framework,
  • SMAC, also an instance of SMBO,
  • Hyperband,
  • Metis, an implementation of Bayesian optimization with Gaussian Processes,
  • a naive evolutionary algorithm,
  • an annealing method for sampling, and
  • plain random search as a baseline.

For more details and references to the relevant literature, see the NNI GitHub repository.
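
Each experiment below repeats the same recipe: point the configuration at a different tuner, rewrite the YAML, restart NNI, and score the best trial's model on the test set. The cells spell these steps out one tuner at a time; if you prefer, they can also be wrapped in a convenience helper along the lines of the sketch below (our own sketch, not part of the NNI or Recommenders API):

python
def run_tuner(tuner_name, class_args=None):
    """Sketch: run one NNI experiment with a built-in tuner and return
    (test_results, elapsed_seconds) for its best trial."""
    config["tuner"] = {"builtinTunerName": tuner_name}
    if class_args:  # e.g. the Random tuner takes no classArgs
        config["tuner"]["classArgs"] = class_args
    with open(config_path, "w") as fp:
        fp.write(yaml.dump(config, default_flow_style=False))

    stop_nni()
    with Timer() as t:
        start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
    check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)

    # get_trials returns (trials, best_metrics, best_params, best_trial_path)
    best_trial_path = get_trials("maximize")[3]
    model = surprise.dump.load(os.path.join(best_trial_path, "model.dump"))[1]
    return compute_test_results(model), t.interval

For example, run_tuner('Anneal', {'optimize_mode': 'maximize'}) would reproduce the Annealing experiment below in one call.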

python
# Random search
config['tuner']['builtinTunerName'] = 'Random'
if 'classArgs' in config['tuner']:
    config['tuner'].pop('classArgs')
    
with open(config_path, 'w') as fp:
    fp.write(yaml.dump(config, default_flow_style=False))
python
stop_nni()
with Timer() as time_random:
    start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
python
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
# get_trials returns (trials, best_metrics, best_params, best_trial_path); index 3 is the path
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_random = compute_test_results(svd)
python
# Annealing
config['tuner']['builtinTunerName'] = 'Anneal'
if 'classArgs' not in config['tuner']:
    config['tuner']['classArgs'] = {'optimize_mode': 'maximize'}
else:
    config['tuner']['classArgs']['optimize_mode'] = 'maximize'
    
with open(config_path, 'w') as fp:
    fp.write(yaml.dump(config, default_flow_style=False))
python
stop_nni()
with Timer() as time_anneal:
    start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
python
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_anneal = compute_test_results(svd)
python
# Naive evolutionary search
config['tuner']['builtinTunerName'] = 'Evolution'
with open(config_path, 'w') as fp:
    fp.write(yaml.dump(config, default_flow_style=False))
python
stop_nni()
with Timer() as time_evolution:
    start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
python
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_evolution = compute_test_results(svd)

The SMAC tuner must first be installed with the following command:

nnictl package install --name=SMAC

python
# SMAC
config['tuner']['builtinTunerName'] = 'SMAC'
with open(config_path, 'w') as fp:
    fp.write(yaml.dump(config, default_flow_style=False))
python
# Check if installed
proc = subprocess.run([sys.prefix + '/bin/nnictl', 'package', 'show'], stdout=subprocess.PIPE)
if proc.returncode != 0:
    raise RuntimeError("'nnictl package show' failed with code %d" % proc.returncode)
if 'SMAC' not in proc.stdout.decode().strip().split():
    proc = subprocess.run([sys.prefix + '/bin/nnictl', 'package', 'install', '--name=SMAC'])
    if proc.returncode != 0:
        raise RuntimeError("'nnictl package install' failed with code %d" % proc.returncode)
python
# Skipping SMAC optimization for now
# stop_nni()
with Timer() as time_smac:
#    start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
    pass

python
#check_metrics_written()
#svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
#test_results_smac = compute_test_results(svd)
python
# Metis
config['tuner']['builtinTunerName'] = 'MetisTuner'
with open(config_path, 'w') as fp:
    fp.write(yaml.dump(config, default_flow_style=False))
python
stop_nni()
with Timer() as time_metis:
    start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
python
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_metis = compute_test_results(svd)

Hyperband follows a different configuration style from the other tuners; see the NNI documentation. Note that the training script needs to be adjusted as well, since each Hyperband trial receives an additional parameter STEPS, which corresponds to the resource allocation r_i in the Hyperband algorithm. In this example, we use STEPS in combination with R to determine the number of epochs that SVD will run for in every trial.
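
As a rough sketch (an assumption about the shape of the logic; the exact mapping lives in the adjusted training script), the trial code might cap the number of SVD epochs by the allocated resource:

python
# Hypothetical sketch of the STEPS handling inside the training script:
# Hyperband injects STEPS into the parameters of every trial.
import nni
import surprise

params = nni.get_next_parameter()
steps = params.pop("STEPS", NUM_EPOCHS)   # resource allocation r_i
n_epochs = min(int(steps), NUM_EPOCHS)    # never exceed the budget R
svd = surprise.SVD(n_epochs=n_epochs, random_state=RANDOM_STATE, **params)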

python
# Hyperband
config['advisor'] = {
  'builtinAdvisorName': 'Hyperband',
  'classArgs': {
    'R': NUM_EPOCHS,
    'eta': 3,
    'optimize_mode': 'maximize'
  }
}
config.pop('tuner')
with open(config_path, 'w') as fp:
    fp.write(yaml.dump(config, default_flow_style=False))
python
stop_nni()
with Timer() as time_hyperband:
    start_nni(config_path, wait=WAITING_TIME, max_retries=MAX_RETRIES)
python
check_metrics_written(wait=WAITING_TIME, max_retries=MAX_RETRIES)
svd = surprise.dump.load(os.path.join(get_trials('maximize')[3], "model.dump"))[1]
test_results_hyperband = compute_test_results(svd)
python
test_results_tpe.update({'time': time_tpe.interval})
test_results_random.update({'time': time_random.interval})
test_results_anneal.update({'time': time_anneal.interval})
test_results_evolution.update({'time': time_evolution.interval})
#test_results_smac.update({'time': time_smac.interval})
test_results_metis.update({'time': time_metis.interval})
test_results_hyperband.update({'time': time_hyperband.interval})
python
algos = ["TPE", 
         "Random Search", 
         "Annealing", 
         "Evolution", 
         #"SMAC", 
         "Metis", 
         "Hyperband"]
res_df = pd.DataFrame(index=algos,
                      data=[test_results_tpe,
                            test_results_random,
                            test_results_anneal,
                            test_results_evolution,
                            #test_results_smac,
                            test_results_metis,
                            test_results_hyperband])
python
res_df.sort_values(by="precision_at_k", ascending=False).round(3)

As the table above shows, TPE performs best with respect to the primary metric (precision@10) that all the tuners optimized. The best NDCG@10 is also obtained with TPE and correlates well with precision@10. RMSE, on the other hand, does not correlate well and is not optimized under TPE, since finding the top-k recommendations in the right order is a different task from predicting ratings (high and low) accurately.

We have also observed that the above ranking of the tuners is not consistent and may change across repeated runs of these experiments. Since some of these tuners rely heavily on randomized sampling, a larger number of trials is required to obtain more consistent metrics. In addition, some of the tuning algorithms themselves take parameters, which can affect their performance.

python
# Stop the NNI experiment 
stop_nni()
python
tmp_dir.cleanup()

7. Concluding Remarks

We showed how to tune all the hyperparameters accepted by Surprise SVD simultaneously by utilizing the NNI toolkit. For perspective, training and evaluating a single SVD model takes about 50 seconds on the MovieLens 100k data on a Standard D2_V2 VM, so searching through 100 combinations of hyperparameters sequentially would take about 80 minutes, whereas each of the above experiments took about 10 minutes by exploiting parallelization on a single D16_v3 VM. With NNI, one can take advantage of concurrency and multiple processors on a virtual machine, and use a variety of tuning methods to navigate efficiently through a large space of hyperparameters.

For examples of scaling larger tuning workloads on clusters of machines, see the notebooks that employ the Azure Machine Learning service.

References