Tutorials/FunctionalAPI/CNTK_200_GuidedTour.ipynb
from __future__ import print_function
import cntk
import numpy as np
import scipy.sparse
import cntk.tests.test_utils
cntk.tests.test_utils.set_device_from_pytest_env() # (only needed for our build system)
cntk.cntk_py.set_fixed_random_seed(1) # fix the random seed so that LR examples are repeatable
from IPython.display import Image
import matplotlib.pyplot
%matplotlib inline
matplotlib.pyplot.rcParams['figure.figsize'] = (40,40)
This tutorial exposes many advanced features of CNTK and is aimed towards people who have had some previous exposure to deep learning and/or other deep learning toolkits. If you are a complete beginner we suggest you start with the CNTK 101 Tutorial and come here after you have covered most of the 100 series.
Welcome to CNTK and the wonders of deep learning! Deep neural networks are redefining how computer programs are created. In addition to imperative, functional, declarative programming, we now have differentiable programming which effectively 'learns' programs from data. With CNTK, you can be part of this revolution.
CNTK is the prime tool that Microsoft product groups use to create deep models for a whole range of products, from speech recognition and machine translation via various image-classification services to Bing search ranking.
This tutorial is a guided tour of CNTK. It is primarily meant for users that are new to CNTK but have some experience with deep neural networks. The focus will be on how the basic steps of deep learning are done in CNTK, which we will show predominantly by example. This tour is not a complete API description. Instead, we refer the reader to the documentation and task-specific tutorials for more detailed information.
To train a deep model, you will need to define your model structure, prepare your data so that it can be fed to CNTK, train the model and evaluate its accuracy, and deploy it.
This guided tour is organized as follows:
- CNTK's programming model: networks as function objects
- CNTK's data model: tensors and sequences of tensors
- Your first CNTK network: logistic regression
- Your second CNTK network: MNIST digit recognition
- The graph API: MNIST digit recognition once more
- Feeding your data: directly from RAM, via the MinibatchSource class, or via your own minibatch loop
- Training: distributed training, logging, checkpointing, cross-validation-based control, and final evaluation
- Deployment

To run this tutorial, you will need CNTK v2 and ideally a CUDA-capable GPU (deep learning is no fun without GPUs).
So let us dive right in. Below we will introduce CNTK's programming model (networks are function objects) and CNTK's data model. We will put that into action for logistic regression and MNIST digit recognition, using CNTK's Functional API. Lastly, CNTK also has a lower-level, TensorFlow/Theano-like graph API, with which we will replicate one example.
In CNTK, a neural network is a function object. On the one hand, a neural network in CNTK is just a function that you can call to apply it to data. On the other hand, a neural network contains learnable parameters that can be accessed like object members. Complicated function objects can be composed as hierarchies of simpler ones, which, for example, represent layers. The function-object approach is similar to Keras, Chainer, DyNet, PyTorch, and Sonnet.
The following illustrates the function-object approach with pseudo-code, using the example
of a fully-connected layer (called Dense in CNTK):
# numpy *pseudo-code* for CNTK Dense layer (simplified, e.g. no back-prop)
def Dense(out_dim, activation):
    # create the learnable parameters
    b = np.zeros(out_dim)
    W = np.ndarray((0,out_dim)) # input dimension is unknown
    # define the function itself
    def dense(x):
        if len(W) == 0: # first call: reshape and initialize W
            W.resize((x.shape[-1], W.shape[-1]), refcheck=False)
            W[:] = np.random.randn(*W.shape) * 0.05
        return activation(x.dot(W) + b)
    # return as function object: can be called & holds parameters as members
    dense.W = W
    dense.b = b
    return dense

d = Dense(5, np.tanh)    # create the function object
y = d(np.array([1, 2]))  # apply it like a function
W = d.W                  # access member like an object
print('W =', d.W)
print('y =', y)
Again, this is only pseudo-code. In reality, CNTK function objects are not actual Python lambdas.
Rather, they are represented internally as graph structures in C++ that encode the formula,
similar to TensorFlow and Theano.
This graph structure is wrapped in the Python class Function that exposes __call__() and __getattr__() methods.
The function object is CNTK's single abstraction used to represent different levels of neural networks, which are only distinguished by convention:
- basic operations without learnable parameters (times(), __add__(), sigmoid(), ...)
- layers (Dense(), Embedding(), Convolution(), ...). Layers map one input to one output.
- recurrent step functions (LSTM(), GRU(), RNNStep()). Step functions map a previous state and a new input to a new state.
- loss and metric functions (cross_entropy_with_softmax(), binary_cross_entropy(), squared_error(), classification_error(), ...). In CNTK, losses and metrics are not special, just functions.

Higher-order layers compose objects into more complex ones, including:

- layer sequencing (Sequential(), For())
- recurrence (Recurrence(), Fold(), UnfoldFrom(), ...)

Networks are commonly defined by using existing CNTK functions (such as specific types of neural-network layers) and composing them using Sequential().
In addition, users can write their own functions
as arbitrary Python expressions, as long as those consist of CNTK operations
over CNTK data types.
Python expressions get converted into the internal representation by wrapping them in a call to
Function(). This is similar to Keras' Lambda().
Expressions can be written as multi-line functions through decorator syntax (@Function).
Lastly, function objects enable parameter sharing. If you call the same function object at multiple places, all invocations will naturally share the same learnable parameters.
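The effect of parameter sharing can be illustrated with plain numpy, in the spirit of the pseudo-Dense layer above (a simplified sketch, not CNTK code):

```python
import numpy as np

np.random.seed(0)

def Dense(out_dim):
    # minimal sketch of a function object with learnable parameters as members
    W = np.random.randn(3, out_dim) * 0.05
    b = np.zeros(out_dim)
    def dense(x):
        return x.dot(W) + b
    dense.W = W  # expose parameters as members
    dense.b = b
    return dense

d = Dense(5)         # ONE function object ...
y1 = d(np.ones(3))   # ... invoked in two places:
y2 = d(np.zeros(3))  # both calls naturally share the same W and b
d.W[:] = 0           # updating the shared parameters in place ...
y3 = d(np.ones(3))   # ... affects every call site
```

Because both invocations close over the same W and b, a gradient update to the parameters is seen by all call sites; this is all there is to parameter sharing with function objects.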
In summary, the function object is CNTK's single abstraction for conveniently defining simple and complex models, parameter sharing, and training objectives.
(Note that it is possible to define CNTK networks directly in terms of its underlying graph representation similar to TensorFlow and Theano. This is discussed further below.)
CNTK can operate on two types of data:

- tensors (that is, multi-dimensional arrays), either dense or sparse
- sequences of tensors

The distinction is that the shape of a tensor is static during operation, while the length of a sequence depends on the data. Tensors have static axes, while a sequence has an additional dynamic axis.
In CNTK, categorical data is represented as sparse one-hot tensors, not as integer vectors. This allows embeddings and loss functions to be written in a unified fashion as matrix products.
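To see why one-hot encoding unifies embedding lookup with matrix products, consider this numpy/scipy sketch (with the 300k dictionary shrunk to 5 entries for illustration):

```python
import numpy as np
import scipy.sparse

vocab_size, emb_dim = 5, 3
# embedding matrix: one row per dictionary entry
E = np.arange(vocab_size * emb_dim, dtype=np.float64).reshape(vocab_size, emb_dim)

word_id = 2
one_hot = scipy.sparse.csr_matrix(
    (np.ones(1), ([0], [word_id])), shape=(1, vocab_size))  # sparse one-hot row vector

# multiplying the one-hot vector with E selects row `word_id` of E:
embedded = one_hot.dot(E)
assert np.allclose(embedded, E[word_id])
```

The same trick makes cross entropy against a one-hot label a single (sparse) dot product with the log-probability vector.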
CNTK adopts Python's type-annotation syntax to declare CNTK types (this works even with Python 2.7). For example,

- Tensor[(13,42)] denotes a tensor with 13 rows and 42 columns, and
- Sequence[SparseTensor[300000]] denotes a sequence of sparse vectors, each of which could, for example, represent one word out of a 300k-word dictionary.

Note the absence of a batch dimension. CNTK hides batching from the user. We want users to think in tensors and sequences, and leave mini-batching to CNTK. Unlike other toolkits, CNTK can also automatically batch sequences of different lengths into one minibatch, and handles all necessary padding and packing. Workarounds like 'bucketing' are not needed.
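What such automatic batching does internally can be pictured as padding plus masking, roughly like the following numpy sketch (CNTK's real packing logic is more elaborate):

```python
import numpy as np

# three sequences of different lengths, each token a 2-dim vector
seqs = [np.ones((4, 2)), np.ones((2, 2)), np.ones((3, 2))]

max_len = max(len(s) for s in seqs)
batch = np.zeros((len(seqs), max_len, 2), dtype=np.float32)  # padded minibatch
mask  = np.zeros((len(seqs), max_len), dtype=bool)           # marks the valid positions
for i, s in enumerate(seqs):
    batch[i, :len(s)] = s
    mask[i, :len(s)] = True
```

The mask lets downstream operations ignore the padded positions, which is why users never need to pad or bucket their sequences themselves.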
Let us put all of this in action for a very simple example of logistic regression. For this example, we create a synthetic data set of 2-dimensional normal-distributed data points, which should be classified into belonging to one of two classes. Note that CNTK expects the labels as one-hot encoded.
input_dim_lr = 2 # classify 2-dimensional data
num_classes_lr = 2 # into one of two classes
# This example uses synthetic data from normal distributions,
# which we generate in the following.
# X_lr[corpus_size,input_dim] - input data
# Y_lr[corpus_size] - labels (0 or 1), one-hot-encoded
np.random.seed(0)
def generate_synthetic_data(N):
    Y = np.random.randint(size=N, low=0, high=num_classes_lr)  # labels
    X = (np.random.randn(N, input_dim_lr)+3) * (Y[:,None]+1)   # data
    # Our model expects float32 features, and cross-entropy
    # expects one-hot encoded labels.
    Y = scipy.sparse.csr_matrix((np.ones(N,np.float32), (range(N), Y)), shape=(N, num_classes_lr))
    X = X.astype(np.float32)
    return X, Y
X_train_lr, Y_train_lr = generate_synthetic_data(20000)
X_test_lr, Y_test_lr = generate_synthetic_data(1024)
print('data =\n', X_train_lr[:4])
print('labels =\n', Y_train_lr[:4].todense())
We now define the model function. The model function maps input data to predictions. It is the final product of the training process. In this example, we use the simplest of all models: logistic regression.
model_lr = cntk.layers.Dense(num_classes_lr, activation=None)
Next, we define the criterion function. The criterion function is
the harness through which the trainer optimizes the model:
It maps (input vectors, labels) to (loss, metric).
The loss is used for the SGD updates. We choose cross entropy.
Specifically, cross_entropy_with_softmax() first applies
the softmax() function to the network's output, as
cross entropy expects probabilities.
We do not include softmax() in the model function itself, because
it is not necessary for using the model.
As the metric, we count classification errors (this metric is not differentiable).
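In numpy terms, the combination that cross_entropy_with_softmax() and classification_error() compute looks roughly like this (a simplified sketch, without CNTK's internal shortcuts):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy_with_softmax(z, one_hot):
    # cross entropy between softmax(z) and the one-hot label distribution
    return -(one_hot * np.log(softmax(z))).sum(axis=-1)

def classification_error(z, one_hot):
    # 1 if the highest-scoring class differs from the label, else 0
    return (z.argmax(axis=-1) != one_hot.argmax(axis=-1)).astype(float)

z = np.array([[2.0, 1.0, 0.0]])  # non-normalized scores for 3 classes
y = np.array([[1.0, 0.0, 0.0]])  # correct label is class 0
loss = cross_entropy_with_softmax(z, y)
err  = classification_error(z, y)
```

Note that the metric uses argmax and is therefore not differentiable, which is exactly why it serves as a metric and not as the loss.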
We define the criterion function as Python code and convert it to a Function object.
A single expression can be written as Function(lambda x, y: expression of x and y),
similar to Keras' Lambda().
To avoid evaluating the model twice, we use a Python function definition
with decorator syntax. This is also a good time to tell CNTK about the
data types of our inputs, which is done via the decorator @Function.with_signature(argument types):
@cntk.Function.with_signature(cntk.layers.Tensor[input_dim_lr], cntk.layers.SparseTensor[num_classes_lr])
def criterion_lr(data, label_one_hot):
    z = model_lr(data)  # apply model. Computes a non-normalized log probability for every output class.
    loss = cntk.cross_entropy_with_softmax(z, label_one_hot)  # applies softmax to z under the hood
    metric = cntk.classification_error(z, label_one_hot)
    return loss, metric
print('criterion_lr:', criterion_lr)
print('W =', model_lr.W.value) # W now has known shape and thus gets initialized
The decorator will 'compile' the Python function into CNTK's internal graph representation.
Thus, the resulting criterion is not a Python function but a CNTK Function object.
We are now ready to train our model.
learner = cntk.sgd(model_lr.parameters,
                   cntk.learning_parameter_schedule(0.1))
progress_writer = cntk.logging.ProgressPrinter(50)
criterion_lr.train((X_train_lr, Y_train_lr), parameter_learners=[learner],
                   callbacks=[progress_writer])
print(model_lr.W.value) # peek at updated W
The learner is the object that actually performs the model update. Alternative learners include momentum_sgd() and adam(). The progress_writer is a stock logging callback that prints the output you see above; it can be replaced by your own, or by the stock TensorBoardProgressWriter, to visualize training progress in TensorBoard.
The train() function feeds our data (X_train_lr, Y_train_lr) minibatch by minibatch to the model and updates it, where the data is a tuple in the same order as the arguments of criterion_lr().
Let us test how we are doing on our test set (this will also run minibatch by minibatch).
test_metric_lr = criterion_lr.test((X_test_lr, Y_test_lr),
                                   callbacks=[progress_writer]).metric
And lastly, let us run a few samples through our model and see how it is doing.
Oops! The criterion function knew its input types, but model_lr does not,
so we must tell it, using update_signature().
model_lr.update_signature(cntk.layers.Tensor[input_dim_lr])
print('model_lr:', model_lr)
Now we can call it like any Python function:
z = model_lr(X_test_lr[:20])
print("Label :", [label.todense().argmax() for label in Y_test_lr[:20]])
print("Predicted:", [z[i,:].argmax() for i in range(len(z))])
Let us do the same thing as above on an actual task--the MNIST benchmark, which is sort of the "hello world" of deep learning. The MNIST task is to recognize scans of hand-written digits. We first download and prepare the data.
input_shape_mn = (28, 28) # MNIST digits are 28 x 28
num_classes_mn = 10 # classify as one of 10 digits
# Fetch the MNIST data. Best done with scikit-learn.
try:
    from sklearn import datasets, utils
    mnist = datasets.fetch_mldata("MNIST original")
    X, Y = mnist.data / 255.0, mnist.target
    X_train_mn, X_test_mn = X[:60000].reshape((-1,28,28)), X[60000:].reshape((-1,28,28))
    Y_train_mn, Y_test_mn = Y[:60000].astype(int), Y[60000:].astype(int)
except: # workaround if scikit-learn is not present
    import requests, io, gzip
    X_train_mn, X_test_mn = (np.fromstring(gzip.GzipFile(fileobj=io.BytesIO(requests.get('http://yann.lecun.com/exdb/mnist/' + name + '-images-idx3-ubyte.gz').content)).read()[16:], dtype=np.uint8).reshape((-1,28,28)).astype(np.float32) / 255.0 for name in ('train', 't10k'))
    Y_train_mn, Y_test_mn = (np.fromstring(gzip.GzipFile(fileobj=io.BytesIO(requests.get('http://yann.lecun.com/exdb/mnist/' + name + '-labels-idx1-ubyte.gz').content)).read()[8:], dtype=np.uint8).astype(int) for name in ('train', 't10k'))
# Shuffle the training data.
np.random.seed(0) # always use the same reordering, for reproducibility
idx = np.random.permutation(len(X_train_mn))
X_train_mn, Y_train_mn = X_train_mn[idx], Y_train_mn[idx]
# Further split off a cross-validation set
X_train_mn, X_cv_mn = X_train_mn[:54000], X_train_mn[54000:]
Y_train_mn, Y_cv_mn = Y_train_mn[:54000], Y_train_mn[54000:]
# Our model expects float32 features, and cross-entropy expects one-hot encoded labels.
Y_train_mn, Y_cv_mn, Y_test_mn = (scipy.sparse.csr_matrix((np.ones(len(Y),np.float32), (range(len(Y)), Y)), shape=(len(Y), 10)) for Y in (Y_train_mn, Y_cv_mn, Y_test_mn))
X_train_mn, X_cv_mn, X_test_mn = (X.astype(np.float32) for X in (X_train_mn, X_cv_mn, X_test_mn))
# Have a peek.
matplotlib.pyplot.rcParams['figure.figsize'] = (5, 0.5)
matplotlib.pyplot.axis('off')
_ = matplotlib.pyplot.imshow(np.concatenate(X_train_mn[0:10], axis=1), cmap="gray_r")
Let's define the CNTK model function to map (28x28)-dimensional images to a 10-dimensional score vector. We wrap that in a function so that later in this tutorial we can easily recreate it.
def create_model_mn():
    with cntk.layers.default_options(activation=cntk.ops.relu, pad=False):
        return cntk.layers.Sequential([
            cntk.layers.Convolution2D((5,5), num_filters=32, reduction_rank=0, pad=True), # reduction_rank=0 for B&W images
            cntk.layers.MaxPooling((3,3), strides=(2,2)),
            cntk.layers.Convolution2D((3,3), num_filters=48),
            cntk.layers.MaxPooling((3,3), strides=(2,2)),
            cntk.layers.Convolution2D((3,3), num_filters=64),
            cntk.layers.Dense(96),
            cntk.layers.Dropout(dropout_rate=0.5),
            cntk.layers.Dense(num_classes_mn, activation=None) # no activation in final layer (softmax is done in criterion)
        ])
model_mn = create_model_mn()
This model is a tad more complicated! It consists of several convolution-pooling layers and two fully-connected layers for classification, which is typical for MNIST. This demonstrates several aspects of CNTK's Functional API.
First, we create each layer using a function from CNTK's layers library (cntk.layers).
Second, the higher-order layer Sequential() creates a new function that applies all those layers
one after another. This is known as forward function composition.
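Forward function composition itself is easy to sketch in plain Python; Sequential() behaves roughly like this (ignoring CNTK's graph machinery; the reduce-based helper below is ours, not CNTK's):

```python
from functools import reduce

def Sequential(layers):
    # compose layers front to back: Sequential([f, g, h])(x) == h(g(f(x)))
    return lambda x: reduce(lambda v, f: f(v), layers, x)

model = Sequential([lambda x: x + 1,
                    lambda x: x * 2,
                    lambda x: x - 3])
assert model(5) == (5 + 1) * 2 - 3  # == 9
```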
Note that unlike some other toolkits, you cannot Add() more layers afterwards to a sequential layer.
CNTK's Function objects are immutable, besides their learnable parameters (to edit a Function object, you can clone() it).
If you prefer that style, create your layers as a Python list and pass that to Sequential().
Third, the context manager default_options() lets us specify defaults for various optional layer arguments,
such as making the activation function always relu, unless overridden.
Lastly, note that relu is passed as the actual function, not a string.
Any function can be an activation function.
It is also allowed to pass a Python lambda directly; for example, relu could also be
realized manually as activation=lambda x: cntk.ops.element_max(x, 0).
The criterion function is defined as in the previous example, mapping (28x28)-dimensional features and the corresponding labels to loss and metric.
@cntk.Function.with_signature(cntk.layers.Tensor[input_shape_mn], cntk.layers.SparseTensor[num_classes_mn])
def criterion_mn(data, label_one_hot):
    z = model_mn(data)
    loss = cntk.cross_entropy_with_softmax(z, label_one_hot)
    metric = cntk.classification_error(z, label_one_hot)
    return loss, metric
For the training, let us throw momentum into the mix.
N = len(X_train_mn)
lrs = cntk.learning_parameter_schedule_per_sample([0.001]*12 + [0.0005]*6 + [0.00025]*6 + [0.000125]*3 + [0.0000625]*3 + [0.00003125], epoch_size=N)
momentums = cntk.learners.momentum_schedule([0]*5 + [0.7788007830714049], epoch_size=N, minibatch_size=256)
minibatch_sizes = cntk.minibatch_size_schedule([256]*6 + [512]*9 + [1024]*7 + [2048]*8 + [4096], epoch_size=N)
learner = cntk.learners.momentum_sgd(model_mn.parameters, lrs, momentums)
This looks a bit unusual.
First, the learning rate is specified as a list ([0.001]*12 + [0.0005]*6 + ...). Together with the epoch_size parameter, this tells CNTK to use 0.001 for 12 epochs, then continue with 0.0005 for another 6, etc.
Second, the learning rate is specified per sample, and momentum per 256 samples (i.e. relative to a reference minibatch size). These values directly specify the weight with which each sample's gradient contributes to the model, and how its contribution decays as training progresses, independent of the minibatch size, which is crucial for the efficiency of GPUs and parallel training. This unique CNTK feature makes it possible to adjust the minibatch size without retuning those parameters. Here, we grow it from 256 to 4096, leading to 3 times faster operation towards the end (on a Titan-X).
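The arithmetic behind per-sample learning rates can be checked on a toy example: with a per-sample rate, the total weight change after processing a fixed number of samples is the same regardless of how they are grouped into minibatches (the numbers below are hypothetical, chosen only to illustrate the invariance):

```python
per_sample_lr = 0.001
grad_per_sample = 2.0   # assume, for simplicity, every sample contributes the same gradient

def total_update(minibatch_size, num_samples):
    # each minibatch applies lr * (sum of per-sample gradients in the minibatch)
    num_minibatches = num_samples // minibatch_size
    return num_minibatches * per_sample_lr * (minibatch_size * grad_per_sample)

# processing 1024 samples yields the same total weight change
# whether we use minibatches of 256 or of 1024:
assert total_update(256, 1024) == total_update(1024, 1024)
```

A learning rate specified per minibatch, in contrast, would have to be retuned whenever the minibatch size changes.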
Alright, let us now train the model. On a Titan-X, this will run for about a minute.
progress_writer = cntk.logging.ProgressPrinter()
criterion_mn.train((X_train_mn, Y_train_mn), minibatch_size=minibatch_sizes,
                   max_epochs=40, parameter_learners=[learner], callbacks=[progress_writer])
test_metric_mn = criterion_mn.test((X_test_mn, Y_test_mn), callbacks=[progress_writer]).metric
CNTK also allows networks to be written in graph style like TensorFlow and Theano. The following defines the same model and criterion function as above, and will get the same result.
images = cntk.input_variable(input_shape_mn, name='images')
with cntk.layers.default_options(activation=cntk.ops.relu, pad=False):
    r = cntk.layers.Convolution2D((5,5), num_filters=32, reduction_rank=0, pad=True)(images)
    r = cntk.layers.MaxPooling((3,3), strides=(2,2))(r)
    r = cntk.layers.Convolution2D((3,3), num_filters=48)(r)
    r = cntk.layers.MaxPooling((3,3), strides=(2,2))(r)
    r = cntk.layers.Convolution2D((3,3), num_filters=64)(r)
    r = cntk.layers.Dense(96)(r)
    r = cntk.layers.Dropout(dropout_rate=0.5)(r)
    model_mn = cntk.layers.Dense(num_classes_mn, activation=None)(r)
label_one_hot = cntk.input_variable(num_classes_mn, is_sparse=True, name='labels')
loss = cntk.cross_entropy_with_softmax(model_mn, label_one_hot)
metric = cntk.classification_error(model_mn, label_one_hot)
criterion_mn = cntk.combine([loss, metric])
print('criterion_mn:', criterion_mn)
Once you have decided on your model structure and defined it, you face the question of how to feed your training data to the CNTK training process.
The above examples simply feed the data as numpy/scipy arrays. That is only one of three ways CNTK provides for feeding data to the trainer:
The train() and test() functions accept a tuple of numpy or scipy arrays for their minibatch_source arguments.
The tuple members must be in the same order as the arguments of the criterion function that train() or test() are called on.
For dense tensors, use numpy arrays, while sparse data should have the type scipy.sparse.csr_matrix.
Each of the arguments should be a Python list of numpy/scipy arrays, where each list entry represents a data item. For arguments declared as Sequence[...], the first axis of the numpy/scipy array is the sequence length, while the remaining axes are the shape of each token of the sequence. Arguments that are not sequences consist of a single tensor. The shapes, data types (np.float32/float64) and sparseness must match the argument types as declared in the criterion function.
As an optimization, arguments that are not sequences can also be passed as a single large numpy/scipy array (instead of a list). This is what is done in the examples above.
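For example, a criterion with one Sequence[Tensor[2]] argument and one plain (non-sequence) label argument would be fed data shaped roughly like this (the variable names are made up for illustration):

```python
import numpy as np
import scipy.sparse

# sequence argument: one list entry per data item; sequences may differ in length,
# and the first axis of each array is the sequence length
sequence_arg = [np.zeros((4, 2), dtype=np.float32),   # a sequence of length 4
                np.zeros((7, 2), dtype=np.float32)]   # a sequence of length 7

# non-sequence argument: may be passed as one large array/matrix whose
# first axis is the item index (the optimization mentioned above)
label_arg = scipy.sparse.csr_matrix(
    (np.ones(2, np.float32), (range(2), [0, 1])), shape=(2, 2))  # one-hot labels
```

The shapes, dtypes (np.float32 here), and sparseness must match the argument types declared on the criterion function.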
Note that it is the responsibility of the user to randomize the data.
For reading data, CNTK provides the MinibatchSource class. Production-scale training data sometimes does not fit into RAM; for example, a typical speech corpus may be several hundred GB in size. For such cases, the MinibatchSource class provides, among other things, chunked randomization that holds only part of the data in RAM at any given time.
At present, the MinibatchSource class implements a limited set of data types in the form of "deserializers":

- image data (ImageDeserializer)
- speech data (HTKFeatureDeserializer, HTKMLFDeserializer)
- text in CNTK's canonical text format (CTFDeserializer)

The following example of using the ImageDeserializer class shows the general pattern. For the specific input-file formats, please consult the documentation or the data-type-specific tutorials.
image_width, image_height, num_channels = (32, 32, 3)
num_classes = 1000
def create_image_reader(map_file, is_training):
    transforms = []
    if is_training: # train uses data augmentation (translation only)
        transforms += [
            cntk.io.transforms.crop(crop_type='randomside', side_ratio=0.8) # random translation+crop
        ]
    transforms += [ # to fixed size
        cntk.io.transforms.scale(width=image_width, height=image_height, channels=num_channels, interpolations='linear'),
    ]
    # deserializer
    return cntk.io.MinibatchSource(cntk.io.ImageDeserializer(map_file, cntk.io.StreamDefs(
        features = cntk.io.StreamDef(field='image', transforms=transforms),
        labels   = cntk.io.StreamDef(field='label', shape=num_classes)
    )), randomize=is_training, max_sweeps = cntk.io.INFINITELY_REPEAT if is_training else 1)
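For reference, the map_file passed above is a plain-text file with one line per image: the image path and its numeric class label, separated by a tab. The paths and labels below are made up for illustration:

```
/data/images/cat_001.jpg	281
/data/images/dog_042.jpg	207
```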
Instead of feeding your data as a whole to CNTK's train() and test() functions, which implement a minibatch loop internally,
you can realize your own minibatch loop and call the lower-level APIs train_minibatch() and test_minibatch().
This is useful when your data is not in a form suitable for the above, such as being generated on the fly as in variants of reinforcement learning. The train_minibatch() and test_minibatch() methods require you to instantiate an object of class Trainer that takes a subset of the arguments of train(). The following implements the logistic-regression example from above through explicit minibatch loops:
# Recreate the model, so that we can start afresh. This is a direct copy from above.
model_lr = cntk.layers.Dense(num_classes_lr, activation=None)
@cntk.Function.with_signature(cntk.layers.Tensor[input_dim_lr], cntk.layers.SparseTensor[num_classes_lr])
def criterion_lr(data, label_one_hot):
    z = model_lr(data)  # apply model. Computes a non-normalized log probability for every output class.
    loss = cntk.cross_entropy_with_softmax(z, label_one_hot)  # this applies softmax to z under the hood
    metric = cntk.classification_error(z, label_one_hot)
    return loss, metric

# Create the learner; same as above.
learner = cntk.sgd(model_lr.parameters, cntk.learning_parameter_schedule(0.1))

# This time we must create a Trainer instance ourselves.
trainer = cntk.Trainer(None, criterion_lr, [learner], [cntk.logging.ProgressPrinter(50)])

# Train the model by spoon-feeding it minibatch by minibatch.
minibatch_size = 32
for i in range(0, len(X_train_lr), minibatch_size): # loop over minibatches
    x = X_train_lr[i:i+minibatch_size] # get one minibatch worth of data
    y = Y_train_lr[i:i+minibatch_size]
    trainer.train_minibatch({criterion_lr.arguments[0]: x, criterion_lr.arguments[1]: y}) # update model from one minibatch
trainer.summarize_training_progress()

# Test error rate minibatch by minibatch
evaluator = cntk.Evaluator(criterion_lr.outputs[1], [progress_writer]) # metric is the second output of criterion_lr()
for i in range(0, len(X_test_lr), minibatch_size): # loop over minibatches
    x = X_test_lr[i:i+minibatch_size] # get one minibatch worth of data
    y = Y_test_lr[i:i+minibatch_size]
    evaluator.test_minibatch({criterion_lr.arguments[0]: x, criterion_lr.arguments[1]: y}) # test one minibatch
evaluator.summarize_test_progress()
In our examples above, we use the train() function to train, and test() for evaluating.
In this section, we walk you through the advanced options of train(): distributed training, and controlling the training process through callbacks for logging, checkpointing, cross-validation-based control, and final testing.
CNTK makes distributed training easy. Out of the box, it supports three methods of distributed training:
Simple data-parallel training distributes each minibatch over N worker processes, where each process utilizes one GPU. After each minibatch, sub-minibatch gradients from all workers are aggregated before updating each model copy. This is often sufficient for convolutional networks, which have a high computation/communication ratio.
1-bit SGD uses 1-bit data compression with residual feedback to speed up data-parallel training by reducing the data exchanged to 1 bit per gradient value. To avoid affecting convergence, each worker keeps a quantization-error residual which is added to the next minibatch's gradient. This way, all gradient values are eventually transmitted with full accuracy, albeit with a delay. This method has been found effective for networks where communication cost is the dominating factor, such as fully-connected networks and some recurrent ones, degrading accuracy only minimally at good speed-ups.
BlockMomentum reduces communication bandwidth by exchanging gradients only every N minibatches. To avoid affecting convergence, BlockMomentum combines "model averaging" with the residual technique of 1-bit SGD: after N minibatches, block gradients are aggregated across workers and added to all model copies at a weight of 1/N, while a residual keeps (N-1)/N of the block gradient; that residual is added to the next block gradient, which in turn is applied at a weight of 1/N, and so on.
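The residual-feedback idea behind 1-bit SGD can be verified with a few lines of numpy: quantize each gradient to one bit (its sign, at a fixed scale), carry the quantization error over to the next step, and observe that the transmitted values account for the true gradients up to the final residual. This is a toy sketch, not CNTK's actual implementation:

```python
import numpy as np

np.random.seed(1)
grads = np.random.randn(100)     # a stream of scalar gradient values
residual = 0.0                   # quantization-error residual carried across steps
transmitted_total = 0.0
scale = 1.0                      # fixed quantization step for this toy example

for g in grads:
    v = g + residual             # add back the previous quantization error
    q = scale if v >= 0 else -scale  # 1-bit quantization: only the sign is sent
    residual = v - q             # keep the error for the next minibatch
    transmitted_total += q

# everything except the last residual was eventually transmitted:
assert abs(transmitted_total + residual - grads.sum()) < 1e-9
```

The invariant `transmitted + residual == sum of true gradients` holds after every step, which is why the compression delays, but does not lose, gradient information.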
Processes are started with and communicate through MPI. Hence, CNTK's distributed training works both within a single server and across multiple servers. All you need to do is
- wrap your learner in a distributed_learner object, and
- launch your script with mpiexec.

Please see the example below where we put it all together.
The callbacks parameter of train() specifies actions that the train() function
executes periodically, typically every epoch.
The callbacks parameter is a list of objects, where the object type decides the specific callback action.
Progress trackers allow logging of progress (average loss and metric)
periodically after every N minibatches and after completing each epoch.
Optionally, all of the first few minibatches can be logged.
The ProgressPrinter callback logs to stderr and file, while TensorBoardProgressWriter
logs events for visualization in TensorBoard.
You can also write your own progress tracker class.
Next, the CheckpointConfig class denotes a callback that writes a checkpoint file every epoch, and automatically restarts training at the latest available checkpoint.
The CrossValidationConfig class tells CNTK to periodically evaluate the model on a cross-validation data set,
and then call a user-specified callback function, which can then update the learning rate or return False to indicate early stopping.
Lastly, TestConfig instructs CNTK to evaluate the model at the end on a given test set.
This is the same as the explicit test() call in our examples above.
Let us now put all of the above examples together into a single training. The following example runs our MNIST example from above with logging, TensorBoard events, checkpointing, CV-based training control, and a final test.
# Create model and criterion function.
model_mn = create_model_mn()
@cntk.Function.with_signature(cntk.layers.Tensor[input_shape_mn], cntk.layers.SparseTensor[num_classes_mn])
def criterion_mn(data, label_one_hot):
    z = model_mn(data)
    loss = cntk.cross_entropy_with_softmax(z, label_one_hot)
    metric = cntk.classification_error(z, label_one_hot)
    return loss, metric

# Create the learner.
learner = cntk.learners.momentum_sgd(model_mn.parameters, lrs, momentums)

# Wrap learner in a distributed learner for 1-bit SGD.
# In this example, distributed training kicks in after a warm-start period of one epoch.
learner = cntk.train.distributed.data_parallel_distributed_learner(learner, distributed_after=1, num_quantization_bits=1)

# Create progress callbacks for logging to file and TensorBoard event log.
# Prints statistics for the first 10 minibatches, then for every 50th, to a log file.
progress_writer = cntk.logging.ProgressPrinter(50, first=10, log_to_file='my.log')
tensorboard_writer = cntk.logging.TensorBoardProgressWriter(50, log_dir='my_tensorboard_logdir',
                                                            rank=cntk.train.distributed.Communicator.rank(), model=criterion_mn)

# Create a checkpoint callback.
# Set restore=True to restart from available checkpoints.
epoch_size = len(X_train_mn)
checkpoint_callback_config = cntk.CheckpointConfig('model_mn.cmf', epoch_size, preserve_all=True, restore=False)

# Create a cross-validation based training control.
# This callback function halves the learning rate each time the cross-validation metric
# improved less than 5% relative, and stops after 6 adjustments.
prev_metric = 1 # metric from previous call to the callback. Error=100% at start.
def adjust_lr_callback(index, average_error, cv_num_samples, cv_num_minibatches):
    global prev_metric
    if (prev_metric - average_error) / prev_metric < 0.05: # did metric improve by at least 5% rel?
        learner.reset_learning_rate(cntk.learning_parameter_schedule_per_sample(learner.learning_rate() / 2))
        if learner.learning_rate() < lrs[0] / (2**7-0.1): # we are done after the 6th LR cut
            print("Learning rate {} too small. Training complete.".format(learner.learning_rate()))
            return False # means we are done
        print("Improvement of metric from {:.3f} to {:.3f} insufficient. Halving learning rate to {}.".format(prev_metric, average_error, learner.learning_rate()))
    prev_metric = average_error
    return True # means continue
cv_callback_config = cntk.CrossValidationConfig((X_cv_mn, Y_cv_mn), 3*epoch_size, minibatch_size=256,
                                                callback=adjust_lr_callback, criterion=criterion_mn)

# Callback for testing the final model.
test_callback_config = cntk.TestConfig((X_test_mn, Y_test_mn), criterion=criterion_mn)

# Train!
callbacks = [progress_writer, tensorboard_writer, checkpoint_callback_config, cv_callback_config, test_callback_config]
progress = criterion_mn.train((X_train_mn, Y_train_mn), minibatch_size=minibatch_sizes,
                              max_epochs=50, parameter_learners=[learner], callbacks=callbacks)

# Training progress is available from the return value.
losses = [summ.loss for summ in progress.epoch_summaries]
print('loss progression =', ", ".join(["{:.3f}".format(loss) for loss in losses]))
Unfortunately, MPI cannot be used from a Jupyter notebook; hence, the distributed_learner above actually has no effect.
You can find the same example
as a standalone Python script under Examples/1stSteps/MNIST_Complex_Training.py to run under MPI, for example under MSMPI as
mpiexec -n 4 -lines python -u Examples/1stSteps/MNIST_Complex_Training.py
Your ultimate purpose of training a deep neural network is to deploy it as part of your own program or product. Since this involves programming languages other than Python, we will only give a high-level overview here, and refer you to specific examples.
Once you have completed training your model, it can be deployed in a number of ways.
The first step in all cases is to make sure your model's input types are known by calling update_signature(), and then to save your model to disk after training:
model_mn.update_signature(cntk.layers.Tensor[input_shape_mn])
model_mn.save('mnist.cmf')
Deploying your model in a Python-based program is easy: since networks are callable function objects, simply load the model and call it on your inputs, as we have already shown above:
# At program start, load the model.
classify_digit = cntk.Function.load('mnist.cmf')
# To apply model, just call it.
image_input = X_test_mn[8345] # (pick a random test digit for illustration)
scores = classify_digit(image_input) # call the model function with the input data
image_class = scores.argmax() # find the highest-scoring class
# And that's it. Let's have a peek at the result
print('Recognized as:', image_class)
matplotlib.pyplot.axis('off')
_ = matplotlib.pyplot.imshow(image_input, cmap="gray_r")
Models can be deployed directly from programs written in other programming languages for which bindings exist. Please see the following example programs for an example similar to the Python one above:
- C++: Examples/Evaluation/CNTKLibraryCPPEvalCPUOnlyExamples/CNTKLibraryCPPEvalCPUOnlyExamples.cpp
- C#: Examples/Evaluation/CNTKLibraryCSEvalCPUOnlyExamples/CNTKLibraryCSEvalExamples.cs

To deploy a model from your own web service, load and invoke the model in the same way.
To deploy a model via an Azure web service, follow this tutorial: Examples/Evaluation/CNTKAzureTutorial01
This tutorial provided an overview of the five main tasks of creating and using a deep neural network with CNTK.
We first examined CNTK's Functional programming and its tensor/sequence-based data model.
Then we considered the possible ways of feeding data to CNTK, including directly from RAM,
through CNTK's data-reading infrastructure (MinibatchSource), and spoon-feeding through a custom minibatch loop.
We then took a look at CNTK's advanced training options, including distributed training, logging to TensorBoard, checkpointing, CV-based training control, and final model evaluation.
Lastly, we briefly looked into model deployment.
We hope this guided tour gives you a good starting point for your own ventures with CNTK. Enjoy!