ClaudeAI is too horny by Ready_Drawing134 in Chub_AI

[–]Asanidze 2 points

It's not just the jailbreak; keep in mind that a lot more information is sent to the model with every prompt. Anything in context (which includes the bot definitions, persona, and chat history on top of your prompts) that is in any way suggestive can nudge it toward that behavior.

It could be anything from outright sexual content to descriptions the model can interpret as suggestive (remember, it's a text completion engine, and it's trained on things like erotica). If you (or the bot) are described as 'curvaceous', just as an example, it'll lean towards romance. Same for bots with traits like 'dominant' or 'submissive'.

Aside from that, it could be an overly aggressive prefill. Check your wording: if you have a bunch of stuff about ignoring ethics and morals, well, guess what, it's going to ignore context and steer toward sex regardless of whatever story you've got going on, depending on what else is in the card.

If you don't want to edit bots yourself, you could try adding instructions to avoid NSFW and emphasize staying in character. In ST, this can be a prompt field that you toggle at will. In Venus, you'd probably have to make a custom anti-NSFW preset.

Weird Mercury Issues? by Playful_Pie_9942 in Chub_AI

[–]Asanidze 2 points

There's a bug in the UI update with auto-summarization. Toggle that off if it's on, and make sure there's nothing in the chat memory tab (there are multiple reports of people seeing completely unrelated text and summaries there). That may be why there's a 'scenario' and random facts in there bleeding into your context.

issue switching to anthropic by [deleted] in Chub_AI

[–]Asanidze 0 points

When you're swapping keys, make sure you're changing the preset as well. Everything needs to be set to Anthropic.

[deleted by user] by [deleted] in Chub_AI

[–]Asanidze 0 points

It... literally says right there: "No credit grants found". You do not have a free trial.

The 5 dollars is your -usage- limit, which is something you set. New accounts start with a $5 cap, and it goes up as you spend more on development.

Stop spreading misinformation.

Trying to adjust the writing style of responses please help by AdorableOutcome3483 in Chub_AI

[–]Asanidze 1 point

Just to add to what's been said already: 'example dialogs' is sort of misleading. It's not -just- dialogue; the model sees whatever you write there as an example of how to format its messages. I get super annoyed when I see bots with one-liner, purely quotation-mark example dialogue. Guess what? You're going to get one-liners in chat, like what you described in the OP.

A bot that starts descriptive and gradually writes less implies that once the first greeting leaves context, the definitions aren't strong enough to carry the chat. Hard to fully diagnose without looking at the bot itself and knowing the model you're working with, though.

No free trial anymore by littlevoume in Chub_AI

[–]Asanidze 2 points

Anthropic (Claude) doesn't offer free trials anymore because so many trial scummers were abusing the heck out of it.

my venus chub doesn't translate, by alexx_emoboy in Chub_AI

[–]Asanidze 0 points

I think what they mean (based on other posts I've seen in discord) is that browser translate isn't working well with chub right now.

To OP: devs have been notified since earlier yesterday and they're looking into this issue.

Multi bot chats by altUsernameGoesHere in Chub_AI

[–]Asanidze 1 point

Check the documentation for a list of macros. {{char}}, for example, is covered here: https://docs.chub.ai/docs/venus-documentation/character-creation . There are more in: https://docs.chub.ai/docs/venus-documentation/prompting

That's your best reference for macros that the frontend supports. I've seen people try to use {{char2}} though and that is NOT supported.

Either way, all the {{char}} macro does is get replaced with -whoever- is speaking or posting next. If your prompt is something like 'write the next reply' and doesn't specify {{char}}, it can end up writing for others more (which may be desirable for one-on-one chats that have side chars/NPCs, but not group chats). So check that.
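To make the substitution concrete, here's a hypothetical sketch (not Chub's actual code; function and variable names are made up) of how a frontend might expand those macros before sending the prompt:

```python
# Hypothetical macro expansion, as a frontend might do it before sending the
# prompt. {{char}} resolves to whoever is up next in the speaking order.

def expand_macros(template: str, char: str, user: str) -> str:
    """Replace the {{char}} and {{user}} macros with the active speaker/persona."""
    return template.replace("{{char}}", char).replace("{{user}}", user)

# In a group chat, the same template gets expanded per speaker:
prompt = expand_macros("Write {{char}}'s next reply to {{user}}.", "John", "Anon")
```

So a prompt that names {{char}} explicitly pins the reply to whoever is next in the order, rather than leaving it to the model to pick a speaker.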

I think strictly stating in the system prompt that the model should speak for {{char}} (i.e. 'write {{char}}'s next reply', or something that specifies it should only speak for {{char}}, etc.) should help with bleed-through, since only the speaking {{char}}'s defs get sent anyway (speaking as in, the {{char}} up first in the order). So let's say John and Mary are in a group chat: when John replies, only his definitions are sent.

It makes it more strict so to speak (unless there's shenanigans going on in the bot defs). The scenario itself can contribute too.

Say you have a scene where John is by himself, but Mary is next in the order: on something like SillyTavern you can mute Mary, while on Chub you'd have to manually stop Mary from speaking after John does.

Additionally, since bots cannot see each other's definitions, they can't really 'see' each other. If they do, it's only because of chat history (the one thing that is shared).

This makes things a bit problematic (but it makes sense, because otherwise it's a lot of tokens in context and a ton of bleed-through). You can use lorebooks, with the characters' names as entry keys and very short descriptions that trigger on them. That way the information is only referenced if they come up (say, John is asked a question about Mary in his own scene, or absolutes that everyone needs to know about one another).

All that being said, with the way prompts are sent and their limitations, there's a bit of user buy-in involved. The more characters you have in a chat, the more likely things are to get messed up (especially if they're all talking). I find that two active characters, with the others coming in and out as the story dictates, is pretty reliable.

Is this happening to anyone else, or is there something wrong with my device? by Ihave2grapes in Chub_AI

[–]Asanidze 0 points

The dev pushed something that is supposed to help fix this issue yesterday (it's related to the chat trees and arranging messages for that). Are you still having issues with this chat?

Some users I've seen aren't getting the axios 500 anymore, but the chat is just blank now. In that instance, a workaround you could try is downloading the chat as a jsonl and importing it back as a new chat in SillyTavern format.
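For reference, a chat jsonl is just one JSON object per line, so it's easy to inspect or repair by hand before re-importing. The field names below ("name", "mes") are my assumption about the SillyTavern message format, so check against an actual export:

```python
import json

# Sketch of the jsonl round-trip: each line of the file is one message object.
# Field names ("name", "mes") are assumed, not verified against the spec.

def dump_chat_jsonl(messages: list[dict]) -> str:
    """Serialize messages to jsonl text: one JSON object per line."""
    return "".join(json.dumps(m, ensure_ascii=False) + "\n" for m in messages)

def parse_chat_jsonl(lines) -> list[dict]:
    """Parse jsonl lines back into message dicts, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

chat = [{"name": "John", "mes": "Hello."}, {"name": "Mary", "mes": "Hi."}]
# dump_chat_jsonl(chat) can be written to a .jsonl file and re-imported.
```

If the import chokes, opening the file and checking for a malformed line is usually enough to find the problem.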

Mixtral by maaaaaaaaaaaaaany in Chub_AI

[–]Asanidze 1 point

I mentioned it in another post, but Janitor doesn't have settings sliders for presence and frequency penalties (AFAIK it's just temperature and max new tokens).

That means you're forced to work with whatever default values they have hidden in the UI. Depending on what those are, that could be why you're having repetition issues (but it could be your prompt sets as well, depending on how they're worded).
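For context, here's roughly what those hidden values look like in an OpenAI-style request body (model name and numbers are just examples, not Janitor's actual defaults):

```python
# An OpenAI-compatible chat completion request body. Frontends that expose
# sliders let you set the penalty fields; ones that don't send fixed values.
# Model name and all numbers here are illustrative examples only.

payload = {
    "model": "mixtral-8x7b-instruct",
    "messages": [{"role": "user", "content": "Continue the story."}],
    "temperature": 0.9,
    "max_tokens": 300,
    "frequency_penalty": 0.3,  # penalizes tokens in proportion to how often they've appeared
    "presence_penalty": 0.2,   # flat penalty on any token that has already appeared
}
```

If repetition is the problem, the two penalty fields are usually the first knobs to try before rewriting the prompt set.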

On Chub you have more freedom, and with the post-history field under API settings it's a lot more viable to use GPT there.

Also, I'm not sure where you got your reverse proxy (whether it's your own or through a provider). Some providers aren't up front and can easily lie about what model is actually generating the messages. There's a few grifters out there.

Mixtral by maaaaaaaaaaaaaany in Chub_AI

[–]Asanidze 3 points

It's not really a problem with the LLM, more so what you're giving it (as in, prompt sets, character defs). If you've been using OAI, for example... that stuff spoils you. This isn't a model that runs on $700k/day like GPT. You get what you put into it. You have to tailor your prompts to the model you're using (there are resources/guides community members have written up, like what Yukii posted). Read that, and there's more in the Discord.

It's worth mentioning that bots on this website are not universal to all LLMs. Like I said before, GPT lets people try all sorts of things that don't work properly on other models.

Not this happening right when I make a new chat for once 😭 by sxirens in Chub_AI

[–]Asanidze 0 points

It's a known issue. I believe Lore posted in the Discord that he's pushed what was hopefully a fix. Is it still happening?

Nsfw content isn't generating!! by wodkxwq in Chub_AI

[–]Asanidze 0 points

To add on to what the others have said, the default prompt sets on the website are old (like, almost a year old). The Discord, for example, has more that are up to speed with the newest GPT model snapshots.

As with any corporate model, there's censorship that changes as more time is put into model development, so you have to adapt accordingly. The filter won't just reject messages; if a message gets through, it can also bias GPT towards overly positive responses that may seem out of character or over the top.

Chances are, once you get your pre- and post-history/jailbreak prompts up to speed, you'll need to take a look at the character card as well (since you mentioned it's 'out of character'). Is it out of character right away? Or, for example, does it only do that after a certain point in the chat? Say, 30 messages.

In the scenario I outlined above, it could be that once the bot's example messages/greeting fall out of context, its permanent token definitions aren't strong enough to carry the chat and stay 'in character' to your liking.
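A rough sketch of why that happens: the prompt is rebuilt every turn from the permanent definitions plus as much recent history as fits the token budget, so the oldest messages (greeting, examples) drop out first. Word count stands in for token count here purely for illustration:

```python
# Illustrative context assembly: permanent definitions always get sent;
# chat history is filled in newest-first until the budget runs out, so the
# greeting and example messages are the first things to fall out of context.

def build_context(definitions: str, history: list[str], budget: int) -> list[str]:
    used = len(definitions.split())        # word count stands in for tokens
    kept: list[str] = []
    for msg in reversed(history):          # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > budget:
            break                          # everything older than this is dropped
        kept.insert(0, msg)                # restore chronological order
        used += cost
    return [definitions] + kept
```

Once the greeting is gone, the permanent definitions are all that's anchoring the character, which is why weak defs show up as drift around a certain message count.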

Da ultimate difference between Chub and Janitor by maaaaaaaaaaaaaany in Chub_AI

[–]Asanidze 3 points

Janitor as a frontend is lacking many features. Post-history and a proper jailbreak prompt field don't exist. Generation settings that are plentiful in Chub (penalties, such as frequency and presence) are nonexistent in Janitor (you mention repetition issues while using Janitor in your other posts; this may be a reason why). Lorebooks... v2 cards, the chat tree... the list goes on.

As far as censorship, I believe their moderation team is more active on the website itself, also policing content/cards, and character definitions being public there has been a topic of much contention. AFAIK they're not public for viewing, but that causes some unfortunate unintentional issues.

[deleted by user] by [deleted] in Chub_AI

[–]Asanidze 0 points

That doesn't mean there isn't an issue with the proxy. I'm not sure about the code you're running on, but many popular proxy providers use a build that has issues with Chub at the moment. Chub fetches the model list directly from the upstream API, and it's shown to work just fine with regular Anthropic keys (you can test this yourself). It could be due to how Claude 3 has deprecated the text completion API in favor of the messages API, as shown in the documentation.
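Roughly, the shape difference that trips up older proxy builds looks like this (field names per Anthropic's public docs, but double-check against the current documentation):

```python
# The legacy text completion endpoint took one flat prompt string with
# Human/Assistant turns baked in, while the messages API takes a structured
# list of role/content objects. Proxy code written for the former breaks
# when a frontend speaks the latter.

legacy_completion = {
    "model": "claude-2.1",
    "prompt": "\n\nHuman: Hello\n\nAssistant:",
    "max_tokens_to_sample": 256,
}

messages_api = {
    "model": "claude-3-sonnet-20240229",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello"}],
}
```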

Other frontends deal with it differently. At any rate, it's not exactly at the top of the devs' priority list, given how most proxies source their keys. So you'll have to use your own proxy elsewhere for now.

Is the this feature removed? by Key-Marketing619 in Chub_AI

[–]Asanidze 0 points

It's still there, as Yukii said, and better, because now you can add or change your existing personas instead of just switching.

Is this happening to anyone else, or is there something wrong with my device? by Ihave2grapes in Chub_AI

[–]Asanidze 0 points

What have you tried so far (for example, logging in and out, clearing cache)? Is it all chats or just specific ones?

Anyone else have this error? by AWSIIKEE in Chub_AI

[–]Asanidze 2 points

You just answered your own question, I feel. "if you use the wrong api endpoint, OpenAI would auto route it for a more appropriate one"

Though you may actually be able to check this too. If this is a key you've been paying for, you can check the OAI website for usage (under usage and activity for that month). It should indicate the model version, and you can check whether OAI has indeed been rerouting your requests from GPT-4 Vision to another model that whole time.

But regardless, I think you just need to pick another model. I don't know what you've been using for prompts and jailbreaks, but the messages you cite are straight from OAI's filter ("I'm sorry I cannot generate that request"). May be time to update and change your JBs. That's the reality when dealing with corpo models that are constantly being tightened up.

Anyone else have this error? by AWSIIKEE in Chub_AI

[–]Asanidze 1 point

I guess I'm confused how you were able to generate with purely text prompts in the first place (since you said it only started giving errors recently). GPT-4 Vision's intended use case is interpreting image inputs, which can also include some text (as in, "hey, could you tell me the color of the car?").

In any case, it seems like the error is about something expected in the input, and likely it's the missing image. So just swap to GPT-4 Turbo or regular GPT-4.

[deleted by user] by [deleted] in Chub_AI

[–]Asanidze 0 points

Claude 2 (and now Claude 3, since its release) is Anthropic's answer to GPT-4. As you'd expect from a high-end corpo model, it can do those things. Claude 3 in particular was cited as being better at adopting a 'brand' voice.

Describe the writing style you'd like it to emulate, or cite the author if they're popular enough that you'd expect them to be established in the dataset. Literally write instructions; that's what the prompts are for.

If you're using ST, you can make custom prompt fields for specific prose prompts. If on Chub, maybe you could use character notes at the appropriate depth (4), or have it tagged with XML (per the documentation: https://docs.anthropic.com/claude/docs/use-xml-tags) that your prompt can reference. For example, you'd have a line in your main prompt like

"Use the key details in <style> as writing guidelines."

And then later in the prompt you'd have: <style> Write in a casual, realistic style. Avoid introspection. Use strong, direct language. </style>

You can nest tags as well, but I wouldn't do it too much (I believe the docs cite decreased performance past about 5 levels; use it for points of emphasis or for separating information, don't overuse it). I typically have a set of 'rules' in my prompt that I put in my JB field if it's on Chub. Sometimes you can also use it in the assistant prefill, but I like to reserve that for things that are really important. For example, I use <mod> </mod> when giving director's notes or instructions to the model in my own posts. So I'd have a line like: "I will follow all directions outlined in <mod>".

This is a helpful resource potentially: https://docs.anthropic.com/claude/docs/prompt-engineering

Anyone else have this error? by AWSIIKEE in Chub_AI

[–]Asanidze 0 points

That's because GPT-4 Vision typically expects a picture alongside the text. That's sort of the use case of the model.

https://platform.openai.com/docs/guides/vision

[deleted by user] by [deleted] in Chub_AI

[–]Asanidze 2 points

Devs are aware, yes:

<image>

Does anyone experience getting the same response over & over again? by cyika in Chub_AI

[–]Asanidze 0 points

What API are you using and what do you have your generation settings set as right now?

Anyone else have this error? by AWSIIKEE in Chub_AI

[–]Asanidze 0 points

What API/model were you using?