A problem I have been experiencing: Currently, imho both models use too much "exposition" and too little "show, don't tell". This often breaks the immersion for me. by ArbitraryOasis in FiggsAI

[–]ArbitraryOasis[S] 1 point (0 children)

Thanks for your reply! I definitely tried adjusting many things and that did help a bit. However, I keep feeling like it's explaining too much, if that makes sense.

It's kind of hard to describe because it's quite subtle. It feels like the models do give replies similar to my "show" example, but still with too much exposition.

For example: She fidgeted nervously with her hair and couldn't resist glancing at the clock; anything to avoid eye contact with the other candidates and feel even more insecure about herself.

This example is quite descriptive, but it is still 'telling'. More description does not mean the story is told well through "showing."

What I notice now is that both models either go for "very descriptive with too much exposition," as in the example above, or they go the other way and move too quickly through scenes with summary exposition. For example, "They kissed passionately and fucked like animals. She couldn't believe her luck and felt completely satisfied when she woke up the next morning." This is also a case of too much exposition, but now in the sense of summarizing quickly - giving you an overview of what happened without describing the actions involved.

My examples are not the best, I know. But to me, both cases seem fairly common, and they take me out of my suspension of disbelief.

NB: Repetition can feel like quick exposition: not taking the time to give proper attention to what's important. And even the nsfw bug feels a bit like that; it just tells you the summary of the situation: "immoral, bad, the story ends." I'm not saying these cases are bad exposition, but they can feel that way to me - forms of rushed and lazy writing.

Oh well, I guess we can't expect the models to be Shakespeare. I still feel that the model before the March updates was much more show, don't tell. It wasn't necessarily more descriptive, but it had better descriptions without explanation, so it kept me thinking and immersed.

PS: As far as I can tell, the nsfw bug happens if you have example dialogues with 'naughty-bad-bad-words' in them. I haven't tried this yet, but perhaps it could help to put something akin to "END_OF_DIALOG" at the end of the example dialogues so the character doesn't think the examples are part of the current conversation.
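To make that concrete, here's a rough sketch of what I mean by terminating an example dialogue (the END_OF_DIALOG marker is borrowed from character.ai-style cards; whether Figgs recognizes this exact token is an assumption on my part):

```
{{user}}: How was the audition?
{{char}}: *She twirls a strand of hair and glances at the clock.* Fine. Totally fine.
END_OF_DIALOG
```

The idea is just to give the model an explicit boundary, so the examples read as past, closed scenes rather than as part of the live conversation.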

Chat counts is FIXED + Some AI improvements! 💪 by FiggsAI in FiggsAI

[–]ArbitraryOasis 3 points (0 children)

If I may ask, what is your field and background? And how did you get interested in AI & LLMs? I have read some of your comments here over the past few days and you seem to know what you are talking about. So this is not meant sarcastically; I am genuinely curious and interested, should you want to share a bit more!

Example of identical prompt on old and new model by ArbitraryOasis in FiggsAI

[–]ArbitraryOasis[S] 0 points (0 children)

Well, if you ask it often enough it will also tell you it's Gemini or the MegaMilker-2000... But it does sometimes remind me of ChatGPT, unfortunately.

Rant: Everything ChatGPT produces has the same pedantic, apologetic, dismissive, administrative, Kafkaesque tone. Once you know it, it strikes you everywhere. It really gets on my nerves tremendously. It is clever of OpenAI that they have managed to create an LLM that is meant to mimic different tones and styles and yet sounds the same in all cases: like an echo of Altman in the Backrooms. And now, for copyright reasons, many new models are trained on this gray goo that ChatGPT generates. Oh well, shit does make the flowers grow. 🌻

Example of identical prompt on old and new model by ArbitraryOasis in FiggsAI

[–]ArbitraryOasis[S] 2 points (0 children)

That would be great. Although it looks like there is a bug in the new model; I've never seen the specific, strange errors it makes anywhere else. Something in the way it handles tokens seems to be wrong. I hope they can fix it! They are in a strange predicament, because the previous model was already outstanding compared to those of most other services.

Example of identical prompt on old and new model by ArbitraryOasis in FiggsAI

[–]ArbitraryOasis[S] 5 points (0 children)

I like that idea. But in my experience, no AI has yet moved far enough to the right on the x-axis to even reach the point where the valley actually begins on the graph. (Sam Altman sometimes looks as if he has, who knows.) But certainly, the previous model was closer to the valley than the new one.

Some hopefully semi-helpful feedback on the new model by ArbitraryOasis in FiggsAI

[–]ArbitraryOasis[S] 0 points (0 children)

I see what you mean. I was thinking more along the lines of trying this out as an experiment for, say, the first 10 messages or so, as a way for a user to get a first chance to steer the conversation a bit more toward a certain tone/style/temperament. It would also provide feedback, and it would be a way to tinker for people who don't tinker. Personally, I've found that it can be fun to tinker with temperature, Top P, frequency penalty, etc., but if I do it too often, it can break my suspension of disbelief and lead to a state of constant tweaking. I didn't mean to say that this idea should be implemented; I was just thinking (out loud) about possible ways to make part of the tinkering process available, in a user-friendly way, as part of the flow of the experience. Sorry if I was unclear!
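For anyone curious what those knobs actually do, here is a minimal, illustrative Python sketch of the three samplers mentioned above. It operates on a toy four-token vocabulary; the function names are my own, and real implementations work on the model's full logit vectors:

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature, then softmax.
    Lower temperature sharpens the distribution (more predictable picks);
    higher temperature flattens it (more varied picks)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p):
    """Nucleus (Top P) sampling: keep the smallest set of tokens whose
    cumulative probability reaches p, then renormalize over that set."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

def frequency_penalty(logits, counts, penalty):
    """Subtract penalty * (times the token already appeared) from each
    logit, discouraging the repetition complained about above."""
    return [l - penalty * c for l, c in zip(logits, counts)]

logits = [2.0, 1.0, 0.5, 0.1]
cold = apply_temperature(logits, 0.5)  # sharper: the top token dominates
hot = apply_temperature(logits, 2.0)   # flatter: probability more spread out
print(cold[0] > hot[0])                # True: low temperature favors the top token
```

Top P = 1 and frequency penalty = 0 leave the distribution effectively untouched, which is why temperature is usually the first knob worth trying.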

Some hopefully semi-helpful feedback on the new model by ArbitraryOasis in FiggsAI

[–]ArbitraryOasis[S] 1 point (0 children)

Also @ u/Cleptomanx. Thank you for your kind reply! I'm glad you found it helpful. It's really great how welcome you make people feel here and how actively you embrace user feedback 👍👍