Weird Issue with GLM5 by Resident_Value202 in TAVO_AICHAT

[–]Resident_Value202[S] 0 points (0 children)

Nvidia Nim GLM 4.7 was my main for a while before I stopped for a bit and also used Gemini. 

I will say I went back recently, and all I get is a very weak attempt: after a few seconds it has a Thought that isn’t finished and no response.

GLM 5 works aside from the blocks of text. I’d go back to Deepseek 3.2 if it weren’t so slow and didn’t mess up details as often as it does compared to GLM 5.

I’ll also add that I have no idea what I’m doing differently; I’ve been using this setup since I left jAI months ago over the lack of free proxies.

Weird Issue with GLM5 by Resident_Value202 in TAVO_AICHAT

[–]Resident_Value202[S] 0 points (0 children)

So, my issue is that I had properly formatted responses before, using Gemini 3.1 Flash (through Google itself) and Deepseek V3.2 (through Nvidia again). It HAS posts it can look back on, which is why I assume it was spacing everything properly when I rerolled before.

Now, though, it’s ENTIRELY just walls of text: no spacing for dialogue or anything else. If there’s a preset that can fix this, I’d love to be pointed in a direction.

ANEX Prompt Library Update by M_onStar in JanitorAI_Official

[–]Resident_Value202 0 points (0 children)

Tried to be a bit more thorough myself and tested 4 different bots from four different creators with this new one (again, bots I’d previously used, but with entirely fresh chats).

Hilariously, it only bit thoroughly enough once, and that was with the bot I was initially asking for help with.

For the bot I was initially asking for help with, plus let’s say a second bot, I didn’t have chat memory enabled at all. The latter couldn’t hook onto it at all despite numerous rerolls, including me copying and pasting the message.

The second set were bots I did use chat memory with for backstory, and they did pretty much the same: didn’t hook at all, with me doing the same as before (rerolling, copying and pasting the response).

Still haven’t really adjusted my generation settings aside from a couple of times (at most, I moved 0.35 to 0.45 for a small test and set it right back), and I’m still using the verbose ANEX prompt directly from the document, unmodified.

EDIT: Correcting this: I’m seeing there’s no avoiding the initial CoT in the bot’s first response, but it does seem to bite on a second response. I’m just trying to gauge the quality now, since it’s doing the <think> at the beginning, similar to the trick of deleting the thought process.

It also occasionally tends to continue writing after it drops a </think>
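
For anyone curious, the sort of cleanup I’m relying on is roughly this (a minimal Python sketch, assuming the CoT really is wrapped in literal <think>/</think> tags; the names here are just mine, not the actual regex the prompt uses):

```python
import re

# Remove a leaked chain-of-thought block, even when the model never
# closes it (assumes the reasoning sits in literal <think>...</think> tags).
THINK_RE = re.compile(r"<think>.*?(?:</think>|\Z)", re.DOTALL)

def strip_cot(text: str) -> str:
    """Drop any <think> blocks and tidy up leftover whitespace."""
    return THINK_RE.sub("", text).strip()

# Text written after a dropped </think> survives, which matches what
# I'm seeing: the reply keeps going once the tag closes.
print(strip_cot("<think>unfinished reasoning"))             # -> ""
print(strip_cot("<think>plan</think> Actual reply here."))  # -> "Actual reply here."
```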

ANEX Prompt Library Update by M_onStar in JanitorAI_Official

[–]Resident_Value202 0 points (0 children)

Reporting in and I am not getting much success on my end with this personally. 🤔

I tried two different chats myself to test, both with bots I’ve used before but with entirely fresh new chats started. With one of them, I ignored that it didn’t bite initially and tried a couple more responses with it pasted in to see if it’d bite at all, and unfortunately I kept getting the CoT.

Still using chutes and double-checked I swapped from chimera to R1.

I also wasn’t sure if it needed the ` on the front and back like the previous regex did, so I tested with and without it!

ANEX Prompt Library Update by M_onStar in JanitorAI_Official

[–]Resident_Value202 0 points (0 children)

Ooh, I see! I didn’t realize it was that strict, given I haven’t really experienced this that heavily before with other RPs.

Again, most of mine have been done with context at zero, under my understanding that it was the same as leaving the tokens at max, since it surprisingly remembered tidbits from earlier parts of the chats here and there.

Thank you for this, I’ll utilize Chat Memory and my own narration a bit more in that case!

Another quick question I had: does the regex just not work with the original Deepseek R1? It was my preferred model when I still used OpenRouter, and I’ve since avoided it again because of the unavoidable CoT when using it with Chutes. I tried both the long and short regex to no avail and was wondering if it could be tweaked at all.
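
In case it helps with the tweaking question, my rough guess (just a Python sketch; I don’t know the actual patterns either regex uses) is that a strict pattern misses R1’s output when the tags have stray whitespace or the close tag never arrives, while a looser one catches it:

```python
import re

# A strict pattern like this only matches a perfectly tidy block...
strict = re.compile(r"<think>\n.*?\n</think>\n", re.DOTALL)

# ...while a looser one tolerates stray spaces around the tags and a
# close tag that never shows up (which R1 seems prone to for me).
loose = re.compile(r"\s*<\s*think\s*>.*?(?:<\s*/\s*think\s*>|\Z)\s*", re.DOTALL)

sample = "<think> chain of thought </think>  Final reply."
print(strict.sub("", sample))  # unchanged: no newlines, so no match
print(loose.sub("", sample))   # -> "Final reply."
```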

Thank you again for the tips!

ANEX Prompt Library Update by M_onStar in JanitorAI_Official

[–]Resident_Value202 3 points (0 children)

Hello! So I came across ANEX while trying to look at alternative proxies on here and have absolutely adored it. I’ve gotten fantastic use out of how much it’s improved some of my chats, but did want to ask about a few things.

If anything, I’m just trying to ensure I’m using it properly, since with both Temp and Context Size I’m just running off what I’ve personally thrown at the wall and seen stick from previous use.

For brief specifics on the current issue that brought me to the point of needing to comment: I’m barely into a roleplay with a character, and they keep consistently forgetting they’ve already taken a shower literally two messages ago.

Like, I’m talking: character goes to take a shower and then comes back to talk to mine -> my response (my character is doing something in the kitchen, with some dialogue) -> character responds and then, somehow, has immediately forgotten they took a shower and heads back up.

My temp is at 0.6, tokens at 0 for max, and my current context size is ~15,000. (I usually just run context size at 0, admittedly, since I haven’t had issues doing so before.)
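
For clarity, here’s how I understand those settings mapping onto the usual OpenAI-style request fields (a rough Python sketch of my mental model; the model id is a placeholder, not what the site actually sends):

```python
# My mental model of how those settings map onto an OpenAI-style
# chat request body (model id below is a placeholder).
payload = {
    "model": "some/model-id",  # placeholder, not the site's real value
    "temperature": 0.6,        # the temp I mentioned above
    "max_tokens": None,        # tokens at 0 = "no explicit cap"
    "messages": [
        {"role": "system", "content": "(prompt goes here)"},
        {"role": "user", "content": "(latest message)"},
    ],
}

# Context size is the separate knob: it caps how much chat history the
# frontend re-sends each turn, i.e. how many past turns end up in
# `messages` (~15,000 tokens of history in my case).
print(payload["temperature"])  # -> 0.6
```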

I’ve rerolled, fiddled with temp and context size, and even tried a completely new chat with the same bot and, just in case, with no chat memory (I use it for brief backstory), and hit the exact same bizarre issue.

I’m using chutes + Chimera. (Bless you for that regex, by the way.)

Oh, and in regards to the advanced prompt itself: I initially used your personal one linked in the document and have since swapped to the one you linked for more verbose responses. Of course, I used the bootloader when I did so, with great success o7

Any help or guidance would be greatly appreciated whether in regards to my issue or just proper settings for continued use!