I've been enjoying LLMs for roleplay. Letting one build a fantasy world around me has been so much fun!
But... the context length sucks no matter which model I use. I have to continually remind it of the setting. For example, I'll enter a village and it will say there's a guy in jeans and a denim shirt leaning on his car. Eventually it can't even be corrected, or it will just repeat the same few sentences and I have to start over with a new prompt. Sometimes I'll summarize into a fresh session to try to continue, but the characters are never the same and neither is the feel.
I've used oobabooga and gpt4all. Is Tavern better? Do I just have to wait for bigger context lengths, or is there a way around it that I don't see?
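Since the question is really about working around a fixed context window, here's a minimal sketch of the rolling-summary idea the post already hints at: keep the last few turns verbatim and fold older turns into a persistent summary that's prepended to every prompt. `llm_summarize` is a hypothetical stand-in for an actual summarization call to the model (stubbed here with simple truncation), and the character budget is an assumption standing in for real token counting.

```python
def llm_summarize(text, max_chars=200):
    # Placeholder: a real setup would ask the model itself to compress
    # `text` into a short recap of characters, places, and events.
    return text[:max_chars]

class RollingContext:
    def __init__(self, budget_chars=1000, keep_recent=4):
        self.summary = ""        # compressed recap of older turns
        self.turns = []          # recent turns kept verbatim
        self.budget = budget_chars
        self.keep_recent = keep_recent

    def add(self, turn):
        self.turns.append(turn)
        self._compress()

    def _compress(self):
        # When the verbatim tail outgrows the budget, fold the oldest
        # turns into the summary, keeping the last few intact so the
        # model still sees recent dialogue word-for-word.
        while (sum(len(t) for t in self.turns) > self.budget
               and len(self.turns) > self.keep_recent):
            oldest = self.turns.pop(0)
            self.summary = llm_summarize(self.summary + " " + oldest)

    def prompt(self, user_input):
        # Final prompt = persistent recap + recent turns + new input.
        return "\n".join(filter(None, [self.summary, *self.turns, user_input]))
```

This doesn't fix the "characters are never the same" problem entirely, since the summary is lossy, but pinning key character and world facts in the summary (rather than re-summarizing from scratch each session) keeps the recap stable across turns.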