[View code on Github](https://github.com/labmlai/annotated_deep_learning_paper_implementations/tree/master/labml_nn/capsule_networks/__init__.py)
This is a PyTorch implementation/tutorial of Dynamic Routing Between Capsules.
A capsule network is a neural network architecture that embeds features as capsules and routes them with a voting mechanism to the next layer of capsules.
Unlike other model implementations, we've included a sample, because it is difficult to understand some concepts with just the modules. This is the annotated code for a model that uses capsules to classify the MNIST dataset.
This file holds the implementations of the core modules of Capsule Networks.
I used jindongwang/Pytorch-CapsuleNet to clarify some of the confusion I had with the paper.
Here's a notebook for training a Capsule Network on the MNIST dataset.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data
```
This is the squashing function from the paper, given by equation (1).

$$\mathbf{v}_j = \frac{\|\mathbf{s}_j\|^2}{1 + \|\mathbf{s}_j\|^2} \frac{\mathbf{s}_j}{\|\mathbf{s}_j\|}$$

$\frac{\mathbf{s}_j}{\|\mathbf{s}_j\|}$ normalizes the length of all the capsules, whilst $\frac{\|\mathbf{s}_j\|^2}{1 + \|\mathbf{s}_j\|^2}$ shrinks the capsules that have a length smaller than one.
```python
class Squash(nn.Module):
    def __init__(self, epsilon=1e-8):
        super().__init__()
        self.epsilon = epsilon
```
The shape of `s` is `[batch_size, n_capsules, n_features]`.
```python
    def forward(self, s: torch.Tensor):
```
$\|\mathbf{s}_j\|^2$
```python
        s2 = (s ** 2).sum(dim=-1, keepdim=True)
```
We add an epsilon when calculating $\|\mathbf{s}_j\|$ to make sure it doesn't become zero. If this becomes zero it starts giving out `nan` values and training fails.

$$\mathbf{v}_j = \frac{\|\mathbf{s}_j\|^2}{1 + \|\mathbf{s}_j\|^2} \frac{\mathbf{s}_j}{\sqrt{\|\mathbf{s}_j\|^2 + \epsilon}}$$
```python
        return (s2 / (1 + s2)) * (s / torch.sqrt(s2 + self.epsilon))
```
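As a quick sanity check, here is a minimal usage sketch (not part of the original file; the batch size and capsule dimensions are hypothetical) that squashes a batch of random capsules and confirms that every output capsule has length below one.

```python
import torch

# Hypothetical shapes: a batch of 2 samples, 5 capsules, 8 features each.
s = torch.randn(2, 5, 8)
v = Squash()(s)

# Squashing keeps each capsule's direction but maps its length into [0, 1).
lengths = v.norm(dim=-1)
print(lengths.shape)        # torch.Size([2, 5])
print((lengths < 1).all())  # tensor(True)
```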
This is the routing mechanism described in the paper. You can use multiple routing layers in your models.
This combines calculating $\mathbf{s}_j$ for this layer and the routing algorithm described in Procedure 1.
```python
class Router(nn.Module):
```
`in_caps` is the number of capsules, and `in_d` is the number of features per capsule, from the layer below. `out_caps` and `out_d` are the same for this layer.
`iterations` is the number of routing iterations, symbolized by $r$ in the paper.
```python
    def __init__(self, in_caps: int, out_caps: int, in_d: int, out_d: int, iterations: int):
        super().__init__()
        self.in_caps = in_caps
        self.out_caps = out_caps
        self.iterations = iterations
        self.softmax = nn.Softmax(dim=1)
        self.squash = Squash()
```
This is the weight matrix $\mathbf{W}_{ij}$. It maps each capsule in the lower layer to each capsule in this layer.
```python
        self.weight = nn.Parameter(torch.randn(in_caps, out_caps, in_d, out_d), requires_grad=True)
```
The shape of `u` is `[batch_size, n_capsules, n_features]`. These are the capsules from the lower layer.
```python
    def forward(self, u: torch.Tensor):
```
$$\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{ij} \mathbf{u}_i$$

Here $j$ is used to index capsules in this layer, whilst $i$ is used to index capsules in the layer below (previous).
```python
        u_hat = torch.einsum('ijnm,bin->bijm', self.weight, u)
```
Initial logits $b_{ij}$ are the log prior probabilities that capsule $i$ should be coupled with $j$. We initialize these to zero.
```python
        b = u.new_zeros(u.shape[0], self.in_caps, self.out_caps)

        v = None
```
Iterate
```python
        for i in range(self.iterations):
```
Routing softmax $$c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}$$
```python
            c = self.softmax(b)
```
$$\mathbf{s}_j = \sum_i c_{ij} \hat{\mathbf{u}}_{j|i}$$
```python
            s = torch.einsum('bij,bijm->bjm', c, u_hat)
```
$$\mathbf{v}_j = \mathrm{squash}(\mathbf{s}_j)$$
```python
            v = self.squash(s)
```
$$a_{ij} = \mathbf{v}_j \cdot \hat{\mathbf{u}}_{j|i}$$
```python
            a = torch.einsum('bjm,bijm->bij', v, u_hat)
```
$$b_{ij} \gets b_{ij} + \mathbf{v}_j \cdot \hat{\mathbf{u}}_{j|i}$$
```python
            b = b + a

        return v
```
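Here is a minimal usage sketch (not from the original file; the capsule counts and feature sizes are hypothetical) showing how the routing layer maps lower-layer capsules to this layer's capsules:

```python
import torch

# Hypothetical sizes: 32 lower-layer capsules with 8 features each,
# routed into 10 capsules with 16 features each, using 3 routing iterations.
router = Router(in_caps=32, out_caps=10, in_d=8, out_d=16, iterations=3)

u = torch.randn(4, 32, 8)  # [batch_size, in_caps, in_d]
v = router(u)

print(v.shape)  # torch.Size([4, 10, 16]), i.e. [batch_size, out_caps, out_d]
```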
A separate margin loss is used for each output capsule and the total loss is the sum of them. The length of each output capsule is the probability that its class is present in the input.
The loss for each output capsule or class $k$ is
$$\mathcal{L}_k = T_k \max(0, m^+ - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^-)^2$$
$T_k$ is $1$ if class $k$ is present and $0$ otherwise. The first component of the loss is $0$ when the class is not present, and the second component is $0$ if the class is present. The $\max(0, x)$ is used to avoid predictions going to extremes. $m^+$ is set to $0.9$ and $m^-$ to $0.1$ in the paper.
The $\lambda$ down-weighting is used to stop the length of all capsules from falling during the initial phase of training.
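As a worked example of this formula (using the paper's values $m^+ = 0.9$, $m^- = 0.1$, $\lambda = 0.5$): if class $k$ is present ($T_k = 1$) and its capsule has length $\|\mathbf{v}_k\| = 0.8$, only the first term contributes,
$$\mathcal{L}_k = \max(0, 0.9 - 0.8)^2 = 0.01$$
whereas if that class is absent ($T_k = 0$) and the capsule still has length $0.8$, the second term gives $\lambda \max(0, 0.8 - 0.1)^2 = 0.5 \cdot 0.49 = 0.245$.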
```python
class MarginLoss(nn.Module):
    def __init__(self, *, n_labels: int, lambda_: float = 0.5, m_positive: float = 0.9, m_negative: float = 0.1):
        super().__init__()

        self.m_negative = m_negative
        self.m_positive = m_positive
        self.lambda_ = lambda_
        self.n_labels = n_labels
```
`v`, $\mathbf{v}_j$ are the squashed output capsules. This has shape `[batch_size, n_labels, n_features]`; that is, there is a capsule for each label.
`labels` are the labels, and have shape `[batch_size]`.
```python
    def forward(self, v: torch.Tensor, labels: torch.Tensor):
```
$\|\mathbf{v}_j\|$
```python
        v_norm = torch.sqrt((v ** 2).sum(dim=-1))
```
Here `labels` becomes the one-hot encoded labels of shape `[batch_size, n_labels]`.
```python
        labels = torch.eye(self.n_labels, device=labels.device)[labels]
```
$$\mathcal{L}_k = T_k \max(0, m^+ - \|\mathbf{v}_k\|)^2 + \lambda (1 - T_k) \max(0, \|\mathbf{v}_k\| - m^-)^2$$
`loss` has shape `[batch_size, n_labels]`. We have parallelized the computation of $\mathcal{L}_k$ for all $k$.
```python
        loss = labels * F.relu(self.m_positive - v_norm) + \
               self.lambda_ * (1.0 - labels) * F.relu(v_norm - self.m_negative)
```
$$\sum_k \mathcal{L}_k$$
```python
        return loss.sum(dim=-1).mean()
```
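A minimal usage sketch (not from the original file; the batch size and capsule dimension are hypothetical) computing the margin loss from squashed output capsules and integer class labels:

```python
import torch

margin_loss = MarginLoss(n_labels=10)

# Hypothetical outputs: 4 samples, one 16-dimensional capsule per label.
v = torch.randn(4, 10, 16)
labels = torch.tensor([3, 7, 0, 9])  # integer class labels, shape [batch_size]

loss = margin_loss(v, labels)
print(loss)  # a scalar tensor, averaged over the batch
```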