layer

Single graph attention network for predicting crystal properties.

Note

Some abbreviations used in the Layer class:

Abbreviation    Full name
dist            distance matrix
feat            features
ft              features
src             source node
dst             destination node
e               e_i_j; refer to https://arxiv.org/abs/1710.10903
a               alpha_i_j; refer to https://arxiv.org/abs/1710.10903
att             attention mechanism
act             activation function

class Layer
__init__(self, in_dim, out_dim, num_heads, device='cuda', bias=True, negative_slope=0.2)
Parameters:
  • in_dim (int) – Depth of the node representation at the input of this AGAT layer.

  • out_dim (int) – Depth of the node representation at the output of this AGAT layer.

  • num_heads (int) – Number of attention heads.

  • device (str) – Device on which tensor calculations are performed and parameters are stored.

  • bias (bool) – Whether the dense layer uses a bias vector.

  • negative_slope (float) – Negative slope coefficient of the LeakyReLU activation function.
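
A minimal instantiation sketch, assuming the class is importable from this layer module; the import path and argument values are illustrative assumptions, not the package's documented usage:

```python
# Hedged sketch: the import path and dimensions below are assumptions.
from layer import Layer  # hypothetical import path for this module

layer = Layer(in_dim=64, out_dim=32, num_heads=4,
              device='cpu', bias=True, negative_slope=0.2)
```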

forward(self, feat, dist, graph)

Forward pass of this AGAT layer.

Parameters:
  • feat (torch.tensor) – Input features of all nodes (atoms).

  • dist (torch.tensor) – Distances between connected atoms.

  • graph (DGL.graph) – A graph built with DGL.

Returns:
  dst – Output features of all nodes.

Return type:
  torch.tensor
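
A hedged end-to-end sketch of a forward pass. The graph construction, tensor shapes, and the layout of dist are assumptions for illustration only:

```python
# Hedged sketch: the import path, shapes, and dist layout are assumptions.
import dgl
import torch
from layer import Layer  # hypothetical import path for this module

# Toy 3-atom structure with bidirectional edges.
src = torch.tensor([0, 1, 1, 2])
dst = torch.tensor([1, 0, 2, 1])
graph = dgl.graph((src, dst))

layer = Layer(in_dim=64, out_dim=32, num_heads=4, device='cpu')

feat = torch.randn(graph.num_nodes(), 64)   # node (atom) features, depth = in_dim
dist = torch.rand(graph.num_edges())        # one distance per connected atom pair (assumed shape)

out = layer.forward(feat, dist, graph)      # dst: output features of all nodes
```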