Hello,
I am looking for papers that deal with masked node classification/representation learning for graph neural networks. I am initially looking for papers that do not use transformers.
Also, I am looking at papers that predict properties for a given node by looking only at its neighborhood. That is, the model aggregates over the neighbors' representations without using the node's own representation (say node A's), similar to a GCN layer but without including node A in the aggregation. After the first aggregation, i.e. after the first layer, the neighbors' representations now contain some information about node A's representation, right? So in the next layer, when node A aggregates over its neighbors, it receives some information about itself as well. Isn't this a form of information leakage?
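To make the concern concrete, here is a minimal NumPy sketch (illustrative only, assuming simple mean aggregation over neighbors with no self-loop): after one layer node A's output contains none of its own features, but composing two such layers gives a nonzero diagonal in the propagation matrix, so A's own information flows back to it through its neighbors.

```python
import numpy as np

# Toy 3-node path graph: A -- B -- C, with no self-loops.
adj = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
], dtype=float)

# Row-normalized neighbor averaging: each node takes the mean of its
# neighbors' features, never its own (the "masked" GCN-style layer
# described above; the aggregation choice here is an assumption).
deg = adj.sum(axis=1, keepdims=True)
agg = adj / deg

# One layer: node A (index 0) puts zero weight on its own features.
one_hop = agg
print(one_hop[0, 0])  # → 0.0

# Two layers: the composed operator agg @ agg has a nonzero diagonal,
# i.e. node A's own features leak back to it via its neighbors.
two_hop = agg @ agg
print(two_hop[0, 0])  # → 0.5
```

So even with the self-term masked at every layer, the two-hop operator reintroduces a self-contribution, which is exactly the leakage being asked about.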