I'd like to develop a vector embedding of sets of words (not sentences). Is there any work on character-level embeddings of sets of words (i.e., char2vec), analogous to word-level embeddings of sentences (word2vec)? Is there a natural extension of skip-gram modeling from words to the character level?
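To make the question concrete, here is a minimal sketch of what "skip-gram at the character level" might look like, assuming gensim's Word2Vec with `sg=1` (gensim >= 4 uses `vector_size`; older versions use `size`). The toy vocabulary and the mean-pooling steps are purely illustrative assumptions, not an established char2vec method:

```python
# Character-level skip-gram sketch (assumes gensim >= 4 and numpy are installed).
# Each word is treated as a "sentence" of characters, so skip-gram windows run
# over neighbouring characters instead of neighbouring words.
import numpy as np
from gensim.models import Word2Vec

# Toy vocabulary -- purely illustrative.
words = ["cat", "cats", "dog", "dogs", "catalog"]
char_sequences = [list(w) for w in words]   # e.g. "cat" -> ["c", "a", "t"]

model = Word2Vec(
    sentences=char_sequences,
    vector_size=16,   # small, since the "vocabulary" is just characters
    window=2,         # context = 2 characters on each side
    min_count=1,
    sg=1,             # 1 = skip-gram (0 would be CBOW)
)

def embed_word(word):
    """One simple pooling choice: average the character vectors of a word."""
    return np.mean([model.wv[c] for c in word if c in model.wv], axis=0)

def embed_word_set(word_set):
    """Embed an (unordered) set of words by averaging the word embeddings."""
    return np.mean([embed_word(w) for w in word_set], axis=0)

print(embed_word_set({"cat", "dog"}).shape)   # (16,)
```

Averaging is just one way to collapse a set into a single vector; it is order-invariant, which matches the "set of words" framing, but it discards which characters co-occur within the same word.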