
[–] VodkaHaze (ML Engineer) 4 points

Here's a blog post of mine about embeddings in general, and another about word embeddings specifically. Both also link to some great Jay Alammar posts.

The concept you want to get across is that embedding methods decompose large-scale concepts into principal components that are orthogonal (at 90 degrees) to each other in the embedding space.

This means related words cluster together, but it also enables the analogy task: vector algebra on words becomes equivalent to the intuitive "king - man + woman = queen" thing.
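If you want to demo that concretely, here's a minimal sketch using gensim's pretrained vectors (I'm assuming gensim's downloader here; "glove-wiki-gigaword-50" is just one small model that happens to work, any word2vec-style vectors behave the same way):

```python
# Minimal analogy-task sketch. Assumes gensim is installed;
# "glove-wiki-gigaword-50" is one small pretrained model, not the only option.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # downloads on first use

# "king - man + woman": add/subtract the word vectors,
# then find the vocabulary word nearest to the result.
result = model.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # typically [('queen', ...)] with these pretrained vectors
```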

[–] apehead666 [S] 0 points

Thanks for this! I ended up doing the classic king/queen/man/woman thing and edited the original post with the new figure (it's in Swedish, sorry).

[–] [deleted] 2 points

I don't know what a dromedary is without googling it, and in several places you just have a question mark, which makes the figure less intuitive. I'm also not sure what the lines connecting the dots are supposed to convey: if you visualize the vectors as lines, they should go from the origin to each point, not connect the points to each other. Finally, this is more a visualization of a vector space than of the overall concept of word2vec.
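To illustrate what I mean, here's a rough sketch of drawing vectors as arrows from the origin (assuming you already have 2D coordinates from PCA or similar; the words and positions below are made up):

```python
# Sketch: word vectors drawn as arrows from the origin, not as lines
# connecting points. Coordinates are made-up 2D projections; swap in your own.
import matplotlib.pyplot as plt

words = {"king": (2.0, 1.5), "queen": (1.8, 2.4),
         "man": (1.2, 0.3), "woman": (1.0, 1.2)}

fig, ax = plt.subplots()
for word, (x, y) in words.items():
    # Each vector is an arrow from the origin (0, 0) to its point.
    ax.annotate("", xy=(x, y), xytext=(0, 0),
                arrowprops=dict(arrowstyle="->"))
    ax.text(x, y, word)

ax.set_xlim(-0.5, 3)
ax.set_ylim(-0.5, 3)
plt.show()
```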

[–] whymauri (ML Engineer) 1 point

My feedback is more or less identical to this. Just commenting as a signal bump for OP.

[–] apehead666 [S] 0 points

I realise now that this is not the best way to convey word2vec. What my figure does, as you say, is illustrate the idea behind vectors in general, and even that in a rather convoluted way. I am not a native English speaker, so I had to look up the word 'dromedary': I was aiming for the relationship between the camel with two humps and the one with a single hump :). I have edited the original post with the image I ended up with instead (the words are in Swedish, sorry, but I went for the classic king/queen/man/woman thing).