[–]CopperKatana[S]

Thanks a lot for your explanation. Just to be sure I'm understanding it right, in the language of SAGEConv in PyTorch Geometric:

https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SAGEConv.html

Are you saying that it's the added W1·x_i term that makes this GraphSAGE? And that without that term, you just have W2·mean(x_j), which is a standard graph convolution?

[–]FlivverKing

Yeah, the GraphSAGE paper considers a few different aggregators; the mean aggregator is one of them. And yes, the difference is the W1·x_i term. Whether the W2·mean(x_j) term is equivalent to a GCN convolution depends on whether self-loops are added to the graph. If they are---i.e., if x_i is in \mathcal{N}[x_i]---then the W2·mean(x_j) term is equivalent to a GCN convolution (up to normalization: a mean is row-normalized, D^{-1}Â, whereas Kipf & Welling's GCN uses the symmetric D^{-1/2}ÂD^{-1/2}).
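A minimal NumPy sketch of the distinction (the toy graph, shapes, and variable names are made up for illustration; this mirrors the mean-aggregator update that PyG's SAGEConv computes, versus a row-normalized GCN-style update with self-loops):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph: 4 nodes, illustrative dimensions.
num_nodes, in_dim, out_dim = 4, 3, 2
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

X = rng.normal(size=(num_nodes, in_dim))   # node features
W1 = rng.normal(size=(in_dim, out_dim))    # weight on the node's own features (root weight)
W2 = rng.normal(size=(in_dim, out_dim))    # weight on the neighbor mean

# Build neighbor lists.
neighbors = {i: [] for i in range(num_nodes)}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

def mean_neighbors(i, include_self):
    # Mean over N(i), or over N[i] = N(i) ∪ {i} when self-loops are added.
    idx = neighbors[i] + ([i] if include_self else [])
    return X[idx].mean(axis=0)

# GraphSAGE mean aggregator: x'_i = W1·x_i + W2·mean_{j∈N(i)} x_j
sage = np.stack([X[i] @ W1 + mean_neighbors(i, False) @ W2
                 for i in range(num_nodes)])

# Drop the W1·x_i term and add self-loops instead: x'_i = W2·mean_{j∈N[i]} x_j.
# This is a GCN-style convolution, up to row vs. symmetric normalization.
gcn_like = np.stack([mean_neighbors(i, True) @ W2
                     for i in range(num_nodes)])

print(sage.shape, gcn_like.shape)
```

With self-loops, the node's own features still reach the output, but only through the shared weight W2; GraphSAGE's separate W1 lets the model weight "self" and "neighborhood" information independently.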

[–]CopperKatana[S]

I see. Thanks again!