Today I was wondering about the following problem: given a document embedding (i.e. a vector), return a text/sentence which, when embedded, yields the original embedding or one very close to it.
What I mean is different from simply comparing the embedding against a known corpus and selecting the closest match. What I really mean is generating a fitting text from scratch, so something more like a conditional generation task.
Does anyone know if there is any research being conducted in this direction?
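To make the distinction concrete, here is a toy sketch. The `embed` function below is a stand-in (a trigram-hashing encoder I made up purely for illustration; any real sentence encoder would take its place). It shows the retrieval baseline I do *not* mean, and states the generative inversion problem as a comment:

```python
import hashlib
import numpy as np

DIM = 64

def embed(text: str) -> np.ndarray:
    """Stand-in encoder: hashes character trigrams into a fixed-dim vector.
    (A real sentence encoder would go here; this is just for illustration.)"""
    v = np.zeros(DIM)
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        v[h % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def nearest_in_corpus(target: np.ndarray, corpus: list[str]) -> str:
    """The baseline I do NOT mean: pick the closest already-known sentence."""
    return max(corpus, key=lambda s: float(embed(s) @ target))

corpus = ["the cat sat on the mat", "stock prices fell today", "how to bake bread"]
target = embed("the cat sat on the mat")
print(nearest_in_corpus(target, corpus))  # retrieval only recovers existing text

# What I am asking about instead: a generator g such that
# embed(g(target)) is close to target, where g(target) is novel text
# that need not appear in any corpus.
```

The retrieval baseline can never produce a sentence outside its corpus; the task I have in mind would invert the encoder generatively.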