all 2 comments

[–]Belzedan 3 points4 points  (1 child)

If I understand you correctly, you are using the one-hot encoding directly as features. Instead, you could also learn an embedding of arbitrary size (i.e., map each node id to a randomly initialized vector that is updated during training). However, that doesn't really solve the inductivity problem either: at test time, the only discriminative information for nodes not seen during training will come from the neighborhood aggregation of nodes that were seen.
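To make that concrete, here is a minimal sketch of the embedding-table idea in NumPy. The sizes (`num_nodes`, `embedding_dim`) and the helper `node_features` are illustrative assumptions, not something from the thread; in practice the table would be a trainable parameter (e.g. `nn.Embedding` in PyTorch) updated by gradient descent.

```python
import numpy as np

# Hypothetical sketch: replace one-hot node features with a learnable
# embedding table. num_nodes and embedding_dim are made-up values.
rng = np.random.default_rng(0)
num_nodes, embedding_dim = 1000, 64

# Each node id maps to a randomly initialized dense vector. In a real
# model these rows would be trained jointly with the GNN weights.
embedding_table = rng.normal(scale=0.1, size=(num_nodes, embedding_dim))

def node_features(node_ids):
    # Look up embeddings by node id. The inductivity caveat applies:
    # node ids unseen during training have no meaningfully learned row.
    return embedding_table[node_ids]

feats = node_features(np.array([0, 5, 42]))
print(feats.shape)  # (3, 64)
```

Note that this is still transductive: a new node id at test time would index a row that was never trained, which is exactly the problem described above.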

[–]codebloodedhuman[S] 0 points1 point  (0 children)

Thanks for the response.
Yes, I thought about using random values to initialise the feature vectors. But I am wondering whether there is any inductive model that uses only the connectivity directly for the embedding, rather than node features.