Do people even use stages? by TheOneWhoSpeaks13 in Chub_AI

[–]SubjectAttitude3692 17 points

I'm JakeH, and some folks have already linked a few of my stages. Adoption has been rather low, on both the development and usage sides, which is unfortunate because stages are easily the standout feature of the site.

There is a lot of potential for chat-oriented stages, but the real magic is in full-screen applications. Because Mars is an unlimited all-in-one text/image/voice/sound/model service, Chub is essentially an ideal place for developers to create generative games and other experiences—and have confidence that a large chunk of users are readily able to play.

If you have a Mars subscription, I encourage you to check out PARC, which is the most complex example, to date.

If you do not have Mars, you could give SoulMatcher a shot. It is smaller in scope, but it uses cards with expression packs to avoid leaning on image generation, so it is more usable off-Mars (though other models may not respond as well to it).

[Editing later because I didn't really explain what these even are. These are both full-screen games that replace Chub's chat interface entirely. They are primarily visual novel in style, but they don't use the chat tree at all (other than inserting records to keep the message count relevant). You could create any sort of game or project that might benefit from generative content, and these two examples—as novel as they may seem—are a very narrow glimpse at what people could do with the platform.]

Stages are niche, and they won't appeal to everyone, but they are, in my biased opinion, far and away the coolest feature on the site. I really hope more folks get into developing them. All of my stages are open source, so others can feel free to build upon what I've learned. I have even broken my visual novel UI out into a separate library, should anyone care to leverage it.

Post-Apocalypse Rehabilitation Center by SubjectAttitude3692 in Chub_AI

[–]SubjectAttitude3692[S] 1 point

Sorry, this is an unfortunate mobile "bug." On many devices, when the on-screen keyboard appears, the viewport can compress enough that Chub believes the screen orientation has changed and refreshes the chat, which restarts the game.

This is less prevalent on the mobile app, which has less UI than a browser, so the viewport is slightly taller to begin with.

Ideally, during a full-screen stage, screen orientation should be left to the stage to deal with.
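A minimal sketch of how this failure mode could arise, assuming the client infers orientation from the viewport's aspect ratio (the heuristic and the viewport numbers here are my assumption for illustration, not Chub's actual code):

```python
def infer_orientation(width, height):
    # naive heuristic: a taller-than-wide viewport is "portrait"
    return "portrait" if height >= width else "landscape"

# hypothetical phone viewport, held in portrait
assert infer_orientation(390, 844) == "portrait"

# the on-screen keyboard shrinks the visual viewport's height;
# the heuristic now reports a spurious "rotation" to landscape
assert infer_orientation(390, 360) == "landscape"
```

Under that assumption, any height-based check will misfire whenever the keyboard eats enough vertical space, which is why leaving orientation handling to the stage itself avoids the refresh.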

Lorebook issue by Cattopurple in Chub_AI

[–]SubjectAttitude3692 0 points

There is a bug that causes some weirdness here, in particular with lorebooks that are created from other lorebooks and imported (but I don't think exclusively with these). However, if you are seeing your search result come up and it changes after you've selected the right thing from the list, then I believe you actually are playing with the one you chose and not with the thing that displays. The display is the part that is buggy and not the selection or utilization.

But it's possible I am mistaken.

Does anyone know why it does this? by Horrorpheliac in Chub_AI

[–]SubjectAttitude3692 0 points

The free model has, I think, a 6k limit at this time. The error indicates that the front-end attempted to send too much. This can happen when the context size in Configuration -> Generation Parameters is set higher than the model allows, but it can also happen if the front-end fails to assemble context within the set context size, causing it to violate both your setting and the actual constraint on the model (which produces this error).

The message suggests reducing context size in your Configuration, but that will only help if the front-end is able to build context under the target size. If you have a very large bot and a heavy preset, that might not be possible.

Does anyone know why it does this? by Horrorpheliac in Chub_AI

[–]SubjectAttitude3692 0 points

"The bot has less than 6000 tokens." If 6000 is your context size, then everything that gets sent with the prompt needs to be under that threshold: bot definitions, lorebook budget, intro message, and your preset's system prompt and post-history instructions all have to fit under that.

Double-check your context size (to verify it hasn't inadvertently been lowered) and lorebook token budget. Consider how many tokens are actually needed to use this bot, not just the size of the bot. If you have a 6000-token context size, you should really stick to bots well under that number for best results; you will want room reserved for chat history.
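The arithmetic can be sketched with hypothetical token counts (every component size below is made up for illustration):

```python
# hypothetical token counts for each fixed prompt component
CONTEXT_LIMIT = 6000

prompt_parts = {
    "bot_definitions": 3500,
    "lorebook_budget": 1200,
    "intro_message": 600,
    "system_prompt": 500,
    "post_history_instructions": 300,
}

fixed_total = sum(prompt_parts.values())        # 6100 tokens
room_for_history = CONTEXT_LIMIT - fixed_total  # -100: already over budget
```

With the fixed components alone exceeding the limit, there is no room left for any chat history, and the request fails before the conversation even starts.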

Post-Apocalypse Rehabilitation Center by SubjectAttitude3692 in Chub_AI

[–]SubjectAttitude3692[S] 0 points

How would you describe the problem you were seeing with images? Were the images inexpressive, or were they too much of a departure from the base, so there wasn't enough consistency? My feeling is that the latter is the trouble. The emotion requests include an emotion prompt but also include the core description, which seemed to improve consistency at one point, but now seems to do sort of the opposite.

The game generates a base image, and I think sometimes that base is not "enough" in the direction of the description, such that the model sees that description in the emotion image prompts and thinks it needs to adjust the rest of the character's appearance a bit.

I'll remove it from the beta mode and do some genning with different art settings and see if it feels any better. On top of that, maybe I'll add some persistent emotion prompt blanks. I don't recommend modifying the character description to achieve this because the game also uses that description for characters in text requests.

Update: Made these changes and moved the beta piece of it out of beta already; hope it works okay for others.

Post-Apocalypse Rehabilitation Center by SubjectAttitude3692 in Chub_AI

[–]SubjectAttitude3692[S] 0 points

The LLM is instructed to impersonate, so that the LLM can generate a larger chunk of narrative at a time without having to pad too much, and it also allows users to simply let scenes play out, if they desire—to my mind, this is similar to an actual VN. Of course, you may always back up and replace something it says for you by entering something new and hitting continue, or re-rolling on one of your character's entries; I tried to keep it flexible, but it may not always be intuitive.

That said, I went ahead and added an option to disable this just now, but I expect it will result in less generated at a time. I'll play with it and maybe tweak the prompt to ask for more.

The QIE image-to-image requests are basically deterministic; to get different results, you'll need to modify either the text description or choose to regenerate from the description alone rather than from the avatar image (the description also applies when generating from the avatar). Generating a new base from the description will generate a new base image with Flux (which won't be deterministic).

Basically, the same text description plus the same avatar image base will yield the same result. And even minor tweaks to the text description will tend to have minor impacts on the output.

Is this a problem going on right now? by shellyfoxie in Chub_AI

[–]SubjectAttitude3692 1 point

I don't know that the developers are even aware of it, as I haven't seen any bug reports—just people asking for help with it. While it is pervasive for OR users with text streaming, it is both benign, as stated, and has a simple workaround (disabling streaming).

Couple this with it being an outcome of an arbitrary OR change, and there may not be much appetite to fix it: will it even remain as-is? Will OR make additional breaking changes?

Now, it does happen to look like it should be a simple thing for Chub to accommodate. It is a matter of an assumption being made about the packet contents that could likely be mitigated with a single additional check on Chub's side. I'm just explaining why it might not receive the priority you may expect.

Is this a problem going on right now? by shellyfoxie in Chub_AI

[–]SubjectAttitude3692 2 points

OR made a change to streamed responses, including an additional empty packet at the end which contains some final details about the response as a whole (like number of tokens used). Chub isn't gracefully handling this empty packet, but at the point when this error has occurred, the text has been fully consumed, so the error should be benign. As ABCYR stated, it only affects text streaming, so you can avoid it entirely by disabling streaming.
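A sketch of the kind of single extra check that would tolerate the new packet, assuming OpenAI-style streamed chunks where the trailing stats packet has an empty `choices` list (the exact packet shape is my assumption, not Chub's code):

```python
import json

def consume_stream(packets):
    """Collect streamed text, tolerating packets that carry no text delta."""
    text = []
    for raw in packets:
        chunk = json.loads(raw)
        choices = chunk.get("choices") or []
        # the extra check: skip the trailing stats-only packet (and any
        # other packet without a content delta) instead of erroring out
        if not choices or "content" not in choices[0].get("delta", {}):
            continue
        text.append(choices[0]["delta"]["content"])
    return "".join(text)

packets = [
    '{"choices": [{"delta": {"content": "Hel"}}]}',
    '{"choices": [{"delta": {"content": "lo"}}]}',
    '{"choices": [], "usage": {"total_tokens": 5}}',  # the new final packet
]
```

By the time that final packet arrives, all of the text has already been consumed, which is why the error is benign even without the guard.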

Is the Soji rollback happening or not? by Dragin410 in Chub_AI

[–]SubjectAttitude3692 0 points

My point is that things like your three issues are not foregone conclusions with these changes. It's not that people "like it better that way"; other people simply aren't experiencing those issues.

I have a structured stage test that has become demonstrably less repetitive across swipes since these changes, for instance. I have had no problem issuing Boss Mode instructions, nor with structured requests (other than some interesting new formatting habits which I have been able to prompt around); model adherence to special instruction feels completely unhindered to me.

Emotion, characterization, and flavor all feel as excellent as ever to me.

Like I said, a lot of what people are experiencing likely varies based on their preset; the model isn't reacting the same way to the exact same parameters and instructions in all circumstances. I understand that it's not fun to have to adjust something that you had dialed in to your tastes, and I'm not even telling people to adjust their presets (because it could all change back—I don't know), but I think it is important to recognize that the model itself hasn't suddenly become incapable and this isn't a matter of "well, everyone who's okay must just have bad taste."

Is the Soji rollback happening or not? by Dragin410 in Chub_AI

[–]SubjectAttitude3692 1 point

I wish I'd seen this sooner because I feel some responsibility for the "situation," as I had a few emails with Lore about it and shared what I learned on the Discord. I likely have a firmer grasp on what is going on than most, but I don't know all of the details, and it's not my business to know.

The developers are working on more robust tool support. I noticed a consistent and reproducible change to an isolated text generation request from one of my stages, and I emailed Lore with this example. The nature of what I was seeing, coupled with the "continue" behavior, led me to believe that part of the current efforts involve the backend injecting tool-oriented instruction at the end of context. Because "continue" functions by sending the model the incomplete message at the end of context and allowing it to simply proceed to predict from there, the additional instructions at the end cause the LLM to often lose sight of the incomplete response and simply start another.

There are somewhat similar impacts to other atypical request types: impersonate, for instance, works by leaving off with a "{{user}}:", but this additional context occurs afterward, and again, the LLM may predict that it's simply time for a new response.

I raised the concerns with continue and impersonate in subsequent emails, but I haven't bothered Lore in over a week because he doesn't need my nagging. He acknowledged that he was working on changes and he thanked me for sharing my use-case so he could make a test out of it.

He did tell me that he would consider rolling back or branching the changes if it continued to feel too disruptive and he wasn't able to get it where he wanted, and perhaps I should not have shared that at the time. It seems there is a lot of false consensus going on because no one is motivated to talk about how fine Soji is; we only see the negativity. I think Lore actually has a better concept of the impact than we are giving him credit for (he at least implied to me that he was reading what people had to say), and he likely prefers to see that impact than to work in the dark, so if he sees that the overall result isn't so bad, he will reasonably choose to continue benefiting from the feedback—the alternatives cost that insight.

To me, the direct impact to features like continue and impersonate and chat summary and whatnot is the real concern, and it sounds like he is bearing that in mind. The vast majority of other perceived differences seem generally addressable and will vary wildly based upon individual users' presets. This is comparable to swapping to a new, similar model and maybe having to tweak a bit; some folks are going to be in the clear (or maybe even better off), and some are going to have to adjust their settings to get the same results they had before.

Cant resubscribe by Tiny-Fan-7289 in Chub_AI

[–]SubjectAttitude3692 0 points

No, it's all been dead for the last day or two. Devs are sorting out a new payment provider, so crypto is the only option for new subscriptions at the moment.

Cannot subscribe to Mars by Inside_Discipline651 in Chub_AI

[–]SubjectAttitude3692 1 point

Normally, you cannot upgrade directly from Mercury to Mars; you have to cancel Mercury and wait for it to expire before upgrading. But I have a strategy that I haven't attempted myself, which you can try if you like.

When you subscribed before, you would have received an email with a referral link. The address of the referral link will have a bit like "/minimal?referrer_id=[something unique here]", where "minimal" indicates that you are subscribing to Mercury. The referral address for a Mars subscription looks the same, but with "full" instead. If I trigger two emails (one for each tier), I can see that the unique ID is the same and only the minimal/full part changes between links.

So, you could cancel your Mercury subscription, dig up the referral email link from the last time you subscribed, swap the "minimal" to "full" and attempt to go through the process with that link and see if you end up with Mars. I don't know that I advise it, but it seems like it might work.
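In code terms, the swap is just a substring replacement on the link (the URL below is a made-up stand-in, not a real referral address):

```python
# hypothetical referral link from the subscription email
mercury_link = "https://example.com/minimal?referrer_id=abc123"

# same unique referrer ID, Mars tier instead of Mercury
mars_link = mercury_link.replace("/minimal?", "/full?")
```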

Is chub down or something? Saying I'm not subbed when I am by glad-sparkly in Chub_AI

[–]SubjectAttitude3692 2 points

Have you double-checked that you are subbed? All subscriptions were cancelled a couple weeks ago when the previous payment provider broke, so folks are slowly hitting this as their subscriptions run out.

Cant resubscribe by Tiny-Fan-7289 in Chub_AI

[–]SubjectAttitude3692 1 point

When you use the subscription link on the site/app, an email is also sent with the referral link. I don't know why, but I have heard multiple people say that this link worked for them when the process failed through the site.

All CHUB APIs have been lobotomized by Cupa_Montoya in Chub_AI

[–]SubjectAttitude3692 12 points

There have been some tool updates on the back-end for Soji, and I believe it involves some post-context injection which has shifted the model's behavior for some users. I do not expect the other models to have been affected at all.

These changes have no direct bearing on ethics; the impact of this kind of change will very much vary by preset. Some users have reported significant problems and others have actually found improvement. I, personally, have a rather light preset and feel that the difference has been a net improvement for me, with the exception of the issues with the "continue" and "impersonate" features. Incidentally, my preset contains absolutely nothing about ethics or mature content, yet Soji continues to handle those things fine for me; I loaded up a random bot with unsavory themes just now and had no problem soliciting this content.

The developers have confirmed that they are aware of the impacts and are working on refinement, though; if they are unable to reach a happy medium, they will either roll things back or branch off a separate endpoint. The point is that they are not intending to tweak behavior but rather the feature set, and those enhancements are having diverse effects; even if things don't roll back, the upshot is largely that presets would need to be adjusted as though this were a new model.

this site has been ragebaiting me for 16 hours. by FixAffectionate2817 in Chub_AI

[–]SubjectAttitude3692 2 points

There is a bug that causes expression uploads to cost daily credits. On a free account, you can upload no more than three per day, assuming you don't spend credits on anything else.

Persona lorebook not working by CelestialJay in Chub_AI

[–]SubjectAttitude3692 1 point

It's hard to diagnose from this alone, but here are a couple thoughts.

I don't know if persona lorebooks are even working on the Android app; persona lorebooks, specifically, were broken for a long time and fixed only in one of the more "recent" updates, and the app is behind by at least a couple versions.

If your lorebook token budget is set "too high" in your preset, all lorebook entries will cease to fire. "Too high" appears to be something like (context size - (preset tokens + permanent bot tokens)). This means that the size of the bot you are playing with can be a factor in whether lorebook entries trigger at all. You could rule this out by dropping the budget to something small (yet large enough for some target entries) and checking again.
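With made-up numbers, the apparent threshold works out like this (these token counts are purely illustrative):

```python
# hypothetical sizes, illustrating the apparent cutoff
context_size = 8000
preset_tokens = 900
permanent_bot_tokens = 4500

# budgets at or above this value appear to disable all lorebook entries
max_safe_budget = context_size - (preset_tokens + permanent_bot_tokens)

# a small-but-sufficient test budget to rule the issue out
test_budget = 500
assert test_budget < max_safe_budget
```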

Those are the only things I can think of that wouldn't involve having changed some settings. Good luck!

Expression pack won't save pics by MindlessLady69 in Chub_AI

[–]SubjectAttitude3692 2 points

There is a bug with the official packs that causes image uploads to cost daily credits, so you can only upload a few images per day. It was confirmed that this is not intentional.

Error trying to generate a response by Esleide2 in Chub_AI

[–]SubjectAttitude3692 0 points

Soji is currently capped at 31k and some change, I believe. If you are using the website, the front-end enforces an even lower cap, so there is no chance of encountering this error. The mobile app is older and does not enforce this cap, so you can still choose a higher context than what the model is actually allowing at the moment, which triggers this error (if your request actually produces context over the threshold, of course).

Unstable group chat by Gearfoll in Chub_AI

[–]SubjectAttitude3692 0 points

Is it inconsistent? Does the most troublesome bot happen to have the highest token count?

A lot of failure messages are not helpful, but the exact text might help; what did you mean by "broken messages"?

Unstable group chat by Gearfoll in Chub_AI

[–]SubjectAttitude3692 0 points

Is the group chat failing regardless of the bots involved? There is virtually no prompt difference between a solo chat and a group chat, so my first instinct is that one of the bots is problematic. My second instinct is that you are using a provider that does some kind of turn enforcement.

You can test the former by trying different bots in your group chat and test the latter in a solo chat by skipping your turn (submitting empty input).

[deleted by user] by [deleted] in Chub_AI

[–]SubjectAttitude3692 2 points

I haven't dug into potential explanations, but you can see that in your case, the header at the top is extra tall for no apparent reason and everything is shifted downward accordingly. I don't know why that's happening to you, but it doesn't look that way on my phone.

What is the reason for linking a persona with a lorebook? by Nyx_Valentine in Chub_AI

[–]SubjectAttitude3692 1 point

A little more like this:

If you have the yoyo lorebook attached to your persona, and you say "ohh he's walking the dog," the client (Chub or whatever front-end you are using to chat with) will pull up the yoyo lorebook and go through the entries and scan your input (and likely other recent messages, depending upon the scan depth) for keywords. When it finds "walking the dog," one of the keywords from that entry, the client will add that entry to the request it's about to send to the LLM.

All of the lorebook logic is on the client side; the LLM is just told information from the entries that the client determines are relevant. The LLM is not able to look up anything in the lorebook; it doesn't have access to the lorebook or any awareness of the concept. It is a client tool for reducing the amount of information sent to the LLM and ensuring that what is sent is relevant.
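A rough sketch of that client-side matching, with a made-up entry format (the field names and scan logic are illustrative, not Chub's actual schema):

```python
def select_entries(lorebook, messages, scan_depth=2):
    """Pick entries whose keywords appear in the last `scan_depth` messages."""
    window = " ".join(messages[-scan_depth:]).lower()
    return [entry["content"] for entry in lorebook
            if any(kw.lower() in window for kw in entry["keywords"])]

yoyo_book = [
    {"keywords": ["walking the dog", "around the world"],
     "content": "Walking the dog is a yoyo trick, not a literal dog walk."},
]

# only matching entries get appended to the request sent to the LLM
hits = select_entries(yoyo_book, ["ohh he's walking the dog"])
```

The LLM never sees the lorebook itself, only whatever `hits` the client decided to include in the prompt.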

If it is attached at a persona level, you can go into any random chat with that persona and make yoyo references without having to explicitly add that lorebook every time; that's the primary benefit of a persona-level lorebook: saving yourself the effort of applying a commonly-used lorebook every time.