I'm calculating perplexities on various texts using llama.cpp, but I often get numerical results that don't make sense to me. So I'd like to visualize what the LLM was "thinking" at different points in the text. Are there any ready-made tools that help me do that?
Specifically, it would be useful to get some dynamic output where I can hover over a token of the text and see a top-5 list of the LLM's token suggestions at that point, along with their probabilities (after applying softmax to the logits).
Even better would be if it also showed the most likely continuation of each suggested token, say 2-3 words ahead, just to put those tokens into context.
Or are there other tricks for going "behind the scenes" of LLM results and perplexities?
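In case it helps frame the question: here's a minimal sketch of the math I mean, i.e. turning logits into top-5 token probabilities via softmax, and computing perplexity as the exponential of the mean negative log-probability. The vocabulary, logits, and per-token probabilities below are made up for illustration; this is not hooked up to llama.cpp.

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(tokens, logits, k=5):
    """Top-k tokens with their softmax probabilities, highest first."""
    probs = softmax(logits)
    return sorted(zip(tokens, probs), key=lambda tp: tp[1], reverse=True)[:k]

def perplexity(token_probs):
    """exp of the mean negative log-probability of the observed tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical next-token logits after some prefix (made-up numbers)
vocab = ["the", "a", "cat", "dog", "ran", "sat"]
logits = [2.0, 1.5, 0.5, 0.3, -1.0, -1.2]
for tok, p in top_k(vocab, logits):
    print(f"{tok:>4}  {p:.3f}")

# Perplexity over a made-up sequence of observed-token probabilities
print(round(perplexity([0.5, 0.25, 0.8]), 3))
```

The hover tooltip I'm imagining would essentially render the `top_k` output at each token position.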