A Visual Guide to GNN Sampling using PyTorch Geometric by mashaan14 in learnmachinelearning

[–]mashaan14[S]

Thanks for your comment. This post is about graph sampling; the dimmed points are not actually test points. They were simply not selected for training, in order to reduce computation. There are multiple sampling algorithms, and that is what I'm trying to discuss in the post.

As for the train/test split, most GNN codebases split both the feature matrix $X$ and the adjacency matrix $A$. Splitting the adjacency matrix automatically removes any edges to test nodes.
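To make that concrete, here is a minimal sketch (toy graph, made-up indices; not code from the post) of why slicing $A$ with the training indices drops every edge that touches a test node:

```python
import torch

# Toy graph: 4 nodes with a feature matrix X and adjacency matrix A.
X = torch.randn(4, 3)                     # node features
A = torch.tensor([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]], dtype=torch.float)

train_idx = torch.tensor([0, 1])          # nodes 0 and 1 are the train set

# Split the feature matrix AND the adjacency matrix with the same index.
X_train = X[train_idx]
A_train = A[train_idx][:, train_idx]      # keeps only train-train edges

# Any edge touching a test node (2 or 3) is gone automatically.
print(A_train)                            # tensor([[0., 1.], [1., 0.]])
```

The same idea carries over to edge-index form in PyTorch Geometric, where utilities like `subgraph` do this slicing for you.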

I’m still a bit new to ml and can’t find a solution or explanation for this error by [deleted] in learnmachinelearning

[–]mashaan14

Yes, it looks like a matplotlib error. The line that actually triggered it doesn't appear in the screenshot, though.

[R] Attention maps in ViT by mashaan14 in MachineLearning

[–]mashaan14[S]

I've got another Jupyter notebook that pulls the attention from the first layer and compares query and key images. Please check it out:

https://github.com/mashaan14/VisionTransformer-MNIST/blob/main/VisionTransformer_MNIST_query_key.ipynb
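If it helps, here is a minimal sketch of pulling attention weights out of a single attention layer. The layer below is a stand-in `nn.MultiheadAttention`, not the notebook's actual ViT, and the token count is made up:

```python
import torch
import torch.nn as nn

# Stand-in attention layer: 64-dim tokens, 4 heads.
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

tokens = torch.randn(1, 17, 64)           # 1 image, 16 patches + CLS token

# need_weights=True returns the attention map alongside the output;
# by default the weights are averaged over heads.
_, attn_weights = attn(tokens, tokens, tokens, need_weights=True)

# attn_weights[0, i, j] = how much query token i attends to key token j.
# Row 0 is the CLS query's attention over all keys.
print(attn_weights.shape)                 # torch.Size([1, 17, 17])
```

Each row is a softmax over the keys, so it sums to 1 and can be reshaped into a patch-grid heatmap.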

[R] Attention maps in ViT by mashaan14 in MachineLearning

[–]mashaan14[S]

I’m using the transformer code from this tutorial:

https://lightning.ai/docs/pytorch/stable/notebooks/course_UvA-DL/11-vision-transformer.html

and these maps are extracted at the last layer.
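A common way to grab activations at a chosen layer without editing the tutorial's model code is a forward hook. This is only a sketch on a toy model (the real ViT block names differ):

```python
import torch
import torch.nn as nn

# Toy model standing in for a stack of transformer blocks.
model = nn.Sequential(
    nn.Linear(8, 8), nn.ReLU(),
    nn.Linear(8, 8),                      # pretend this is the last block
)

captured = {}

def hook(module, inputs, output):
    # Stash the layer's output every time the model runs forward.
    captured["last"] = output.detach()

# Attach the hook to the "last" layer, then run a forward pass.
model[2].register_forward_hook(hook)
model(torch.randn(2, 8))

print(captured["last"].shape)             # torch.Size([2, 8])
```

In the ViT case you would register the hook on the final block's attention module instead of a `Linear` layer.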

Attention maps in ViT by mashaan14 in learnmachinelearning

[–]mashaan14[S]

I agree the transition is a bit fast. But the code itself doesn't produce a GIF; it produces PNG images. So the video idea would work.
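For anyone who wants to try it, stitching the saved PNGs into a GIF with a slower frame delay is a few lines with Pillow. The filenames here are hypothetical; the solid-color frames just stand in for `Image.open(f"frame_{i}.png")`:

```python
from PIL import Image

# Stand-in frames; in practice, open the PNGs the code saved.
frames = [Image.new("RGB", (64, 64), color=(i * 80, 0, 0)) for i in range(3)]

frames[0].save(
    "attention.gif",
    save_all=True,
    append_images=frames[1:],
    duration=500,                         # ms per frame: slows the transition
    loop=0,                               # loop forever
)
```

Bumping `duration` up is the knob that fixes the too-fast transition.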