docs/update_log/v0_0_1-to-v0_1_0.md
This section explains the API changes from v0.0.1 to v0.1.0.
In v0.0.1, the loss function and the optimization algorithm were treated as template parameters of the `network` class:
```cpp
// v0.0.1
network<mse, adagrad> net;
net.train(x_data, y_label, n_batch, n_epoch);
```
From v0.1.0, they are passed as parameters to the `fit`/`train` functions instead:
```cpp
// v0.1.0
network<sequential> net;
adagrad opt;
net.fit<mse>(opt, x_data, y_label, n_batch, n_epoch);
```
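The effect of this change can be sketched with a minimal stand-alone example. All types below (`mse`, `adagrad`, `network`) are simplified stand-ins for illustration, not tiny-dnn's real internals: the point is that the loss and optimizer are now chosen per call rather than baked into the network type.

```cpp
#include <cassert>
#include <string>

// Illustrative stand-ins; the real classes live in tiny-dnn.
struct mse     { static const char* name() { return "mse"; } };
struct adagrad { static const char* name() { return "adagrad"; } };

// v0.1.0-style design: the network type no longer encodes the loss or
// the optimizer, so one object can be trained with different choices
// of either without changing its type.
struct network {
    std::string last_loss, last_opt; // record what the last fit used

    template <typename Loss, typename Optimizer>
    void fit(Optimizer& /*opt*/, int /*n_batch*/, int /*n_epoch*/) {
        last_loss = Loss::name();
        last_opt  = Optimizer::name();
        // ...a real implementation would run the training loop here...
    }
};
```

With the v0.0.1 design, switching from `adagrad` to another optimizer meant a different `network<...>` type; here it is just a different argument to `fit`.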
In v0.0.1, regression and classification share the same API:
```cpp
// v0.0.1
net.train(x_data, y_data, n_batch, n_epoch);  // regression
net.train(x_data, y_label, n_batch, n_epoch); // classification
```
From v0.1.0, these are separated into `fit` (regression) and `train` (classification):
```cpp
// v0.1.0
net.fit<mse>(opt, x_data, y_data, n_batch, n_epoch);    // regression
net.train<mse>(opt, x_data, y_label, n_batch, n_epoch); // classification
```
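The split can be sketched by giving the two entry points different target types. The type names below (`vec_t`, `label_t`) and the simplified signatures are assumptions for illustration, not tiny-dnn's exact API: the idea is that regression targets are real-valued vectors while classification targets are integer class labels.

```cpp
#include <cassert>
#include <string>
#include <vector>

using vec_t   = std::vector<float>; // regression target: real-valued vector
using label_t = int;                // classification target: class index

struct mse {};
struct adagrad {};

struct network {
    std::string last_call; // record which entry point was used

    // fit: real-valued targets (regression)
    template <typename Loss, typename Optimizer>
    void fit(Optimizer&, const std::vector<vec_t>&, const std::vector<vec_t>&) {
        last_call = "fit";
    }

    // train: class-label targets (classification)
    template <typename Loss, typename Optimizer>
    void train(Optimizer&, const std::vector<vec_t>&, const std::vector<label_t>&) {
        last_call = "train";
    }
};
```

Separating the names lets the compiler reject a call that mixes regression targets into a classification run, which the single overloaded `train` of v0.0.1 could not always do.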
In v0.0.1, the default weight-initialization mode of the train function is `reset_weights=true`, so loading a model does not stop train from discarding its weights:
```cpp
// v0.0.1
std::ifstream is("model");
is >> net;
net.train(x_data, y_data, n_batch, n_epoch); // resets the loaded weights automatically
net.train(x_data, y_data, n_batch, n_epoch); // again, the parameters from the previous training are lost
```
From v0.1.0, train keeps the existing weights:

```cpp
// v0.1.0
std::ifstream is("model");
is >> net;
net.train<mse>(opt, x_data, y_data, n_batch, n_epoch); // holds the loaded weights
net.train<mse>(opt, x_data, y_data, n_batch, n_epoch); // continues from the previous training
// weights are initialized automatically only if we have neither loaded a model nor trained yet
net2.train<mse>(opt, x_data, y_data, n_batch, n_epoch);
```
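The v0.1.0 policy above can be sketched as a lazy, once-only initialization. Everything here (`initialized`, `reset_count`, `load`) is an illustrative assumption, not tiny-dnn's real internals: weights are reset only when the network has neither been loaded from a stream nor trained before.

```cpp
#include <cassert>

struct network {
    bool initialized = false; // set by loading a model or by a prior train()
    int  reset_count = 0;     // how many times weights were (re)initialized

    void load() { initialized = true; } // stands in for `is >> net`

    void train() {
        if (!initialized) {   // fresh net: initialize weights exactly once
            ++reset_count;
            initialized = true;
        }
        // ...run the epochs; weights from a loaded model or a previous
        // train() call are kept and training continues from them...
    }
};
```

A loaded network is never reset, while a fresh network is initialized on its first `train` call and then keeps its parameters across further calls.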