Train a Graph Attention Network v2 (GATv2) on Cora dataset

import torch
from torch import nn

from labml import experiment
from labml.configs import option
from labml_nn.graphs.gat.experiment import Configs as GATConfigs
from labml_nn.graphs.gatv2 import GraphAttentionV2Layer

Graph Attention Network v2 (GATv2)

This graph attention network has two graph attention layers.
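For context, the two layer types differ only in how the attention score is computed. In GAT the attention vector $\mathbf{a}$ is applied inside the LeakyReLU, which makes the attention ranking static across query nodes; GATv2 moves $\mathbf{a}$ outside the nonlinearity, making the attention dynamic (Brody et al., 2022, "How Attentive are Graph Attention Networks?"):

$$e_{ij}^{\mathrm{GAT}} = \mathrm{LeakyReLU}\left(\mathbf{a}^\top \left[\mathbf{W}\vec{h}_i \,\Vert\, \mathbf{W}\vec{h}_j\right]\right), \qquad e_{ij}^{\mathrm{GATv2}} = \mathbf{a}^\top \mathrm{LeakyReLU}\left(\mathbf{W} \left[\vec{h}_i \,\Vert\, \vec{h}_j\right]\right)$$

The share_weights option below controls whether the same $\mathbf{W}$ transforms both the source and the target node in that concatenation.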

class GATv2(nn.Module):

  • in_features is the number of features per node
  • n_hidden is the number of features in the first graph attention layer
  • n_classes is the number of classes
  • n_heads is the number of heads in the graph attention layers
  • dropout is the dropout probability
  • share_weights: if set to True, the same matrix will be applied to the source and the target node of every edge
    def __init__(self, in_features: int, n_hidden: int, n_classes: int, n_heads: int, dropout: float,
                 share_weights: bool = True):

        super().__init__()

First graph attention layer where we concatenate the heads

        self.layer1 = GraphAttentionV2Layer(in_features, n_hidden, n_heads,
                                            is_concat=True, dropout=dropout, share_weights=share_weights)

Activation function after first graph attention layer

        self.activation = nn.ELU()

Final graph attention layer where we average the heads

        self.output = GraphAttentionV2Layer(n_hidden, n_classes, 1,
                                            is_concat=False, dropout=dropout, share_weights=share_weights)

Dropout

        self.dropout = nn.Dropout(dropout)
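
As a quick illustration, a model for Cora could be constructed as follows. The input and output sizes match the dataset (Cora has 1,433 bag-of-words features per node and 7 classes); the hidden size, head count, and dropout are illustrative values, not ones fixed by this experiment.

# Sketch only: 1433 and 7 match Cora; the other values are assumed
model = GATv2(in_features=1433, n_hidden=64, n_classes=7,
              n_heads=8, dropout=0.6, share_weights=True)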

  • x is the feature vectors of shape [n_nodes, in_features]
  • adj_mat is the adjacency matrix of the form [n_nodes, n_nodes, n_heads] or [n_nodes, n_nodes, 1]
    def forward(self, x: torch.Tensor, adj_mat: torch.Tensor):

Apply dropout to the input

        x = self.dropout(x)

First graph attention layer

        x = self.layer1(x, adj_mat)

Activation function

        x = self.activation(x)

Dropout

        x = self.dropout(x)

Output layer (without activation) for logits

        return self.output(x, adj_mat)
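
Continuing the sketch above, a forward pass on a toy graph would look like this. The adjacency matrix is a boolean mask with self-loops, shaped [n_nodes, n_nodes, 1] to match the form described for adj_mat; the graph and the feature values are made up for illustration.

# Toy graph: 3 nodes, undirected edges 0-1 and 1-2, plus self-loops
n_nodes = 3
adj_mat = torch.eye(n_nodes, dtype=torch.bool)
adj_mat[0, 1] = adj_mat[1, 0] = True
adj_mat[1, 2] = adj_mat[2, 1] = True
adj_mat = adj_mat.unsqueeze(-1)  # [n_nodes, n_nodes, 1]

x = torch.randn(n_nodes, 1433)   # stand-in node features
logits = model(x, adj_mat)       # [n_nodes, n_classes]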

Configurations

Since the experiment is the same as the GAT experiment but with the GATv2 model, we extend the same configs and change only the model.

class Configs(GATConfigs):

Whether to share weights for source and target nodes of edges

    share_weights: bool = False

Set the model; the string names the option function below that creates it

    model: GATv2 = 'gat_v2_model'

Create GATv2 model

@option(Configs.model)
def gat_v2_model(c: Configs):

    return GATv2(c.in_features, c.n_hidden, c.n_classes, c.n_heads, c.dropout, c.share_weights).to(c.device)

def main():

Create configurations

    conf = Configs()

Create an experiment

    experiment.create(name='gatv2')

Calculate configurations.

    experiment.configs(conf, {

Adam optimizer

        'optimizer.optimizer': 'Adam',
        'optimizer.learning_rate': 5e-3,
        'optimizer.weight_decay': 5e-4,

        'dropout': 0.7,
    })

Start and watch the experiment

    with experiment.start():

Run the training

        conf.run()

if __name__ == '__main__':
    main()
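
Other configuration values inherited from the GAT experiment can be overridden through the same dictionary. For instance, assuming the share_weights config defined above is addressable by its attribute name, just as 'dropout' is, weight sharing could be enabled when calculating the configurations:

experiment.configs(conf, {
    'optimizer.optimizer': 'Adam',
    'optimizer.learning_rate': 5e-3,
    'optimizer.weight_decay': 5e-4,
    'dropout': 0.7,
    # assumption: plain config fields are overridable by name
    'share_weights': True,
})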
