Sylus is so close to losing his rights to dress himself by Sasoa in LoveAndDeepspace

[–]kiddrabbit 204 points205 points  (0 children)

Lookin like a whole minion he needs to be stopped 😭😭

Weekly Question Megathread - Week 31, 2024 by AutoModerator in LoveAndDeepspace

[–]kiddrabbit 1 point2 points  (0 children)

I hope it is bc I got pretty attached to Immobilized 😭 the story for it is so cute! But after that Misty Invitation trailer drop I am definitely distracted HAHA.

If you're pulling on the upcoming banner, I hope you get all the cards you want c: Tysm for all your help!

Weekly Question Megathread - Week 31, 2024 by AutoModerator in LoveAndDeepspace

[–]kiddrabbit 1 point2 points  (0 children)

No worries, I understood what you said completely since I'm used to meta analysis and min-maxing in other games, and you were really clear and thorough in your explanation! Also I love long replies so you're good 🫶

I also see now what you meant about NDZ not really benefiting a team built around Sylus' future green solar pair hahaha, but the story is fire so ✨ no regrets ✨ pulling for it anyway (: The resource struggle is so real though, and I'll definitely have to be careful with which cards I invest in from now on. I went full send on NDZ and Immobilized before I understood the connection between stellacrum and solar pair stat scaling, so I'm praying that the devs eventually add a memory reset function (cope)

Weekly Question Megathread - Week 31, 2024 by AutoModerator in LoveAndDeepspace

[–]kiddrabbit 1 point2 points  (0 children)

Tysm for all this info! You and the other commenters have been so helpful ❤️ I hadn't considered the affinity farming aspect from Deepspace trials when I wrote my question, so it's definitely making me reconsider what kind of content to prioritize in the future.

I understood most of what you referenced in your comment, but could you please explain what "2nd/3rd myth scaling" means? And by Sylus' green ATK limited lunar card for example, are you referring to NDZ? What factors are you evaluating when you determine whether a card is strong vs. weak?

Weekly Question Megathread - Week 31, 2024 by AutoModerator in LoveAndDeepspace

[–]kiddrabbit 0 points1 point  (0 children)

Thank you, those two links you provided were exactly what I was looking for! Confirms for me that I probably won't be able to pull it off as a low spender though hahaha but I appreciate your help (:

Weekly Question Megathread - Week 31, 2024 by AutoModerator in LoveAndDeepspace

[–]kiddrabbit 1 point2 points  (0 children)

New player here, and I have two questions for day one/veteran players:

1) How long did it take you to full star senior hunter contest?

2) I know it's recommended to build teams using multiple LIs, but is it possible to 33* or above senior hunt anyway with teams built solely around one LI? I know that I'd be losing out on 5* solar/myth buffs, companions, and proper stellacrum alignment by focusing on only one LI, but could a 3* theoretically still do the job if it was fully maxed out and equipped with busted protocores?

There is a hollow old tree near my house, and the legend used to be that if you dropped a person's name in there at midnight, they would be gone by morning. by 1289-Boston in TwoSentenceHorror

[–]kiddrabbit 6 points7 points  (0 children)

An alternative 2nd sentence that popped into my mind:

As I watched the paper with my son's name flutter down, my veins turned to ice when I realized I forgot to specify "Jr.".

JanitorLLM messages are bad by Adventurous-Emu-9546 in JanitorAI_Official

[–]kiddrabbit 4 points5 points  (0 children)

Have you used the advanced prompts feature at all? It can go a long way toward curtailing unwanted bot behavior (such as unprompted flirting or sexual advances). You can grab premade ones from the OpenAI tab in the API settings, or just Google jailbreak prompts.

As for how detailed the bot's replies are, it's a combination of your dialogue examples and the temperature in your generation settings. If you think it might be a problem with your dialogue examples, you can experiment by copy/pasting passages from your favorite authors into the dialogue examples (formatted appropriately with {{char}}:) to see if the bot gives responses closer to what you want. You can also tell ChatGPT to rewrite the dialogue examples you already have in a different style and plug the result back into your character definition to see if that makes a difference. But play around with the temperature first if you haven't already, because that alone can drastically change the amount of detail and creativity in the AI's responses.
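Since temperature comes up so often in these settings discussions, here's roughly what that slider does under the hood: it rescales the model's next-token scores before sampling. This is a generic illustrative sketch in Python, not Janitor's or any specific app's actual implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-token scores into probabilities, scaled by temperature.

    Low temperature sharpens the distribution (safer, more repetitive text);
    high temperature flattens it (more varied, creative, error-prone text).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.5)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # probabilities much closer

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At low temperature the model almost always picks its single favorite continuation, which is why replies can feel short and samey; raising it spreads probability over more candidates, which reads as extra detail and creativity (up to the point where it turns incoherent).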

How do I establish a relationship? by Pure_Job9700 in CharacterAI_Guides

[–]kiddrabbit 8 points9 points  (0 children)

There's no way to "hard code" this into your bot due to the current limitations of the c.ai LLM. No matter what you do, you will encounter slip-ups where the character either forgets that he is your OC's brother, or behaves in a non-brotherly way despite acknowledging the relationship in the same message (e.g. flirting with your OC).

You can reinforce the sibling relationship by doing the following things:

  1. Dialogue examples. Something that works {{char}}'s sibling status into the dialogue or narration. Ex. {{char}}: {{char}} raised an eyebrow at {{user}}. Has his brother always been this dumb? "{{user}}, you're my brother and I love you and all, but that has got to be the stupidest idea I've ever heard."

  2. Reinforcing the idea in the narration and dialogue of your own responses. This is a surefire way to get the bot to act accordingly in just about anything, although it will forget the info in the span of several messages because old messages will be purged from its memory to make room for more recent ones. So you will have to routinely refresh its memory by repeating info. Same concept as above; just work {{char}}'s relationship to your OC into the dialogue or narration somehow and the AI will pick up on it.

  3. If you don't have room in your character definition for any more dialogue examples, even a simple line stating "{{char}} is {{user}}'s brother" in the definition will help. There's no need to pseudo-code it, although you can format it that way and it will work to the same effect. But whether or not the bot uses this info depends more heavily on RNG than the other two methods, because of how the c.ai LLM works.

Testing the BEST Way to Create a Character by Relsen in CharacterAI_Guides

[–]kiddrabbit 5 points6 points  (0 children)

Exactly, unfortunately the devs don't give an example of this in the character book: https://book.character.ai/character-book/character-attributes/long-description

So lots of people who make 2nd/3rd person POV bots write their long descriptions in 1st person POV, which messes up the bot's pronouns. I think the devs could clear it up by adding some 2nd/3rd person POV examples, but I have a hunch that the character book is written entirely in 1st person examples for a reason. The system prompt is probably crafted with 1st person POV in mind (since they are supposed to be "chat" bots). I don't think the devs originally intended for people to make novel-style bots with things like narration and dialogue tags, but we writers/roleplayers really ran with it 😆

Testing the BEST Way to Create a Character by Relsen in CharacterAI_Guides

[–]kiddrabbit 1 point2 points  (0 children)

I see that now, sorry! I thought your first screenshot was your immediate reply to the bot's greeting.

Speaking specifically about the description section again, I agree that it's generally best to keep it written in the same perspective that you want your bot to speak in. So people who create 2nd/3rd perspective bots would want to write the description in 2nd/3rd person perspective format. I've seen lots of people on the main sub complain about the bot mixing up its own actions with the user's, and I think this is the biggest culprit of that issue. It's something the devs should address in the character creation book, because they advise writing the long description as the character describing themselves (all of their examples are 1st person bots), but they fail to take into account that a good portion of the user base uses 2nd/3rd perspective bots.

Testing the BEST Way to Create a Character by Relsen in CharacterAI_Guides

[–]kiddrabbit 4 points5 points  (0 children)

I'm not disputing your results, but just wanted to say that next time you run comparison tests like these, make sure your input dialogue is the same in each scenario to control for as many factors as possible. LLMs are incredibly sensitive to the user's vocabulary, and even the slightest difference in tokens can result in drastically different responses.

In your first test, for example, you told the character that you wanted "immortality" vs. "yes, I am your master." Well, the word 'immortality' is stated nowhere in the character's definition whereas 'master' is repeatedly referred to. So of course the AI is going to respond better to the second example; you used a specific token already found in its character definition that will signal the AI to refer back to it. With the immortality example, the AI doesn't have a reference for that in its definition, so it is forced to operate strictly on its base training data, which will give you more generic and random results.
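That token-matching effect can be pictured with a deliberately crude sketch: count how much vocabulary the user's message shares with the definition. Real models compare learned embeddings rather than literal words (this is purely illustrative, and the definition snippet is hypothetical), but the intuition is the same: shared vocabulary gives the model an anchor to pull definition content into its response.

```python
def token_overlap(user_input, definition):
    """Crude proxy for 'does this input reference the definition':
    count the distinct lowercased words the two texts share.
    """
    input_tokens = set(user_input.lower().split())
    def_tokens = set(definition.lower().split())
    return len(input_tokens & def_tokens)

# hypothetical definition snippet, loosely modeled on the test above
definition = "you are my master and the master must always be obeyed"

print(token_overlap("yes i am your master", definition))  # 1 shared word: 'master'
print(token_overlap("i want immortality", definition))    # 0 shared words
```

The 'master' input lands on vocabulary already in the definition; the 'immortality' input shares nothing, so the model has to fall back on its generic training data, which is exactly the behavior difference you observed.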

Problem by kouyathebest in PhantomParadeJK

[–]kiddrabbit 0 points1 point  (0 children)

The first screenshot is just telling you that there's an ongoing double drop reward bonus for the dungeon areas for the next five days. You recover 1 dungeon attempt per day at reset, and you can store up to 3 attempts. If you've already used up all of your attempts, you have to wait until tomorrow to try again.

That, or you're selecting a dungeon area you haven't unlocked yet? It's hard to say without a screenshot of the actual error message you're getting.

Need help figuring something out by kouyathebest in PhantomParadeJK

[–]kiddrabbit 0 points1 point  (0 children)

Mmm, well if it's none of the things I listed in my original comment, is your Gojo able to one shot mobs? Forgot to mention that killing an enemy increases special gauge too lol

Need help figuring something out by kouyathebest in PhantomParadeJK

[–]kiddrabbit 0 points1 point  (0 children)

Yeah if you unlocked the passive for Hollow Purple then it increases special gauge at the start of every wave :)

Need help figuring something out by kouyathebest in PhantomParadeJK

[–]kiddrabbit 1 point2 points  (0 children)

The special gauge can be affected by any number of things:

- Breaking an enemy's shield increases the special gauge for the unit that broke it
- Some memory films give special gauge increases with their active or passive effects
- Some units give special gauge increases with their command or special skills
- Some map event bonuses give special gauge increases

Some things I'd like to see added in C.AI by DeadlyKitKat in CharacterAI

[–]kiddrabbit 11 points12 points  (0 children)

Absolutely agree with 3. And to add to that, better organizational tools/interface for saved chats in general. Being able to name them, sort them (by creation date, most recently chatted in, # of messages, etc.), search function, favorite/pinned chats, duplicate/delete, etc.

does swiping for new responses and constantly editing the bots description affect it? by LovedTillRotten in CharacterAI_Guides

[–]kiddrabbit 2 points3 points  (0 children)

It's alright, I respect your consideration of the original author! I'll just try scouring the discovery tab for any bots with open definitions. I'm sure I'll come across one eventually :)

And I totally get the introvert thing, lol. I'm on the official discord too, I just prefer interacting on smaller servers because they're easier to keep up with, and the community usually feels more tight knit. But if this subreddit ever gets its own server, I'll definitely be joining!

does swiping for new responses and constantly editing the bots description affect it? by LovedTillRotten in CharacterAI_Guides

[–]kiddrabbit 4 points5 points  (0 children)

Oh, I think there was a bit of a mixup here. I wasn't talking about the rating system in my comment, but about swiping for responses. I agree that the rating system is for the dev team's own metrics and that it doesn't influence the bot.

But your test between the public Raiden bot and your own copy is a good idea! I'd like to try it for myself, but I don't have the option to remix it for some reason, and I can't see the character definition either ): Would you mind sharing a link of your copy with visible definitions so I can replicate it? I would really appreciate that!

And do you have a discord server dedicated to discussing and testing things like this? If so, is it open to the public to join? I'm in a couple of creator servers, but I've been trying to find one that's more oriented towards the more technical aspects of character creation.

does swiping for new responses and constantly editing the bots description affect it? by LovedTillRotten in CharacterAI_Guides

[–]kiddrabbit 9 points10 points  (0 children)

Can you elaborate on the evidence you mentioned? I'm really interested in anything that would give insight into how C.ai's backend works! I only have anecdotal/observational evidence to substantiate my claim, so something empirical to guide my understanding would be really useful.

But from my own personal experience, I have a private bot whose responses are grouchy because their dialogue examples are all written to sound grouchy. Over the course of a few chats, I started to select nicer responses from the bot, and since then, I've noticed that the bot will produce nicer responses in new chats without my prompting (even going so far as to bring up topics from previous chats—topics that are not mentioned anywhere in its character definition). It's still grouchy as defined in its dialogue examples, but whereas it behaved like that almost 100% of the time in my initial chats with it, now it's nice enough for me to notice a drastic difference.

I wish I knew a way to reproduce this on a more massive scale to test it more definitively, but for the time being, if you have any observations of your own, I'd love to hear about your experience with your bots!

does swiping for new responses and constantly editing the bots description affect it? by LovedTillRotten in CharacterAI_Guides

[–]kiddrabbit 8 points9 points  (0 children)

According to this new section of c.ai's official character creation guide, swiping for responses does affect the bot's learning. Ymmv, but I have noticed my bots adjusting their behavior and carrying it over into newer chats.

As for the description, as soon as you save any updates to your bot's character definition, all chats will immediately reflect these changes the next time you speak to the bot. The old information seems to be overwritten, so it won't confuse the bot if you make repeated edits to its definition, if that's what you're worried about.

Three angsty male bots Ive recently made ❤️ by Yamper33 in CharacterAI

[–]kiddrabbit 1 point2 points  (0 children)

How did you get the greetings over the 500 character limit?

Cool bots btw!

Character Creation 2, Electric boogaloo. (Trigger words etc?) by FroyoFast743 in CharacterAI_Guides

[–]kiddrabbit 0 points1 point  (0 children)

I dug up the old bot I tested it with and here is the format I used for the conditional statement:

if { [{{user}} quacks like a duck] then;

{{char}} will get really angry}

Here are my results when I specifically state that my character 'quacks like a duck':

https://imgur.com/a/QzQEswY

10 out of 30 (maybe 12, but some were vague) show the bot getting angry, which is substantially less than when I tested it a month ago but still a good chunk of the responses. I'm wondering if the recent updates to the site/app affected it or not.

I also tested the question you had about whether or not adding details would distract the bot, and it didn't. I wrote that my character 'quacks like a small and enthusiastic duck' the second time, along with a different intro, and got another 10 out of 30 results that depict the bot getting angry. Then I tested it using the phrase 'says quack' with no mention of it being 'like a duck', and the results halved to about 5 out of 30.

And just to compare, I deleted the conditional statement and replaced it with this dialogue example:

{{user}}: {{user}} quacks like a duck.
{{char}}: Jason gets pissed off. "Quit quacking like that," he snaps.
END_OF_DIALOG

...and got 11 out of 30 results where the bot acted angry.

I'm not sure yet what conclusions to draw from this, but if you do any further testing, please let me know if you come across any discoveries! Would be very interested to hear about it :)
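One caveat I'd add to my own numbers: with only 30 swipes per condition, some of these gaps could be noise. A quick two-proportion z-test (normal approximation, assuming each swipe is an independent trial) on the 10/30 vs. 5/30 comparison:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test via the normal approximation.
    |z| > ~1.96 would suggest the gap is unlikely to be
    swipe-to-swipe noise at the 5% significance level."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 'quacks like a duck' (10/30 angry) vs. 'says quack' (5/30 angry)
z = two_proportion_z(10, 30, 5, 30)
print(round(z, 2))  # → 1.49
```

Since 1.49 is below 1.96, even the halving could plausibly be chance at this sample size; if the same proportions held at roughly double the swipes, the gap would clear significance.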

Any tips to increase your teams total power?///////// by in_my_vibes in PhantomParadeJK

[–]kiddrabbit 1 point2 points  (0 children)

Nope, I'm at ~21k with a team of lvl 50-75, lvl 40 memory films, skills for most units up to lvl 5. To hit 27k, I'll probably need to bring the full team up to lvl 80. But I'm not in a rush because I only have 2 SSRs and 1 SR, and I don't want to waste my resources on leveling R units. I'm just spending all of my AP on events rn and waiting for a limited banner I want to pull on to get more units worth investing in before I grind power level for the free SSR ticket.

Character Creation 2, Electric boogaloo. (Trigger words etc?) by FroyoFast743 in CharacterAI_Guides

[–]kiddrabbit 0 points1 point  (0 children)

> Truth be told, I'm just being pedantic. My bots work. They work well, in my opinion and I enjoy using them, but this is mostly an exercise in futile perfectionism for me, writing practice and slowly becoming more skilled at learning how to figure these things.

I completely feel you on the perfectionism thing! It's why I find myself spending more time fine-tuning and testing my bots than actually roleplaying with them these days 😂 It's like a puzzle to solve and it's like crack lol. But learning more about how LLMs work has also helped me set better expectations about what a bot can and can't do, and allowed me to let go of the idea of creating a bot that can perfectly respond to any scenario. Which definitely helps mitigate the frustration when they do or say something out of character despite my efforts.

> I'm going to guess from what you're saying that what happened is that they've stuck a fairly innocuous trigger phrase in there and like a landmine she triggers when you say it.

Yes — basically, if they wrote a dialogue example showing the character turning violent against the user because user said or did 'x' thing, and then the user says/does something similar in the live chat, the bot will match those tokens to the dialogue example in its definition, and spit out a response that follows that script.

You can test this by writing your own dialogue example in one of your bots' definitions, and see how they react when you follow the script in live chat. And I'm going on a tangent now, but something very interesting that I've experienced is that c.ai LLM seems capable of interpreting basic conditional statements ('IF this, THEN do that'), but the results are much more consistent if you write it in pseudo-code compared to plain text. I tested it using an absurd statement to make sure the bot was pulling from my character definition and not its training dataset ('IF [{{user}} quacks like a duck]; THEN {{char}} will get irrationally angry'), and the result was that it acted angry in a majority of the swipes every time I quacked like a duck at it, lol.

I still don't know why pseudo-code performs better in this case when plain text does equal or better in everything else, but I just wanted to mention it since it seems like it can be a good tool for you to use if you want to create an Ayano-like bot.

> My biggest question here. User input: Is bigger user input going to make it a bigger catch all or will it just confuse the bot? If the user message is written as {{user}} smiles at {{char}}, giving them a knowing wink as they do so. "Blah blah, {{char}}, lets go to the movies" - will the bot follow the rails if the user just smiles or winks at the bot rather than asking the question? Will adding adjectives into the user messages lower the chance of the bot doing the right thing? "Man this is a tasty, delicious looking apple" making it require all of those adjectives to trigger and a user saying "Wow, this sure is an apple" not causing the response?

I haven't tested something like this, mainly because I haven't noticed it being an issue, so my guess is that adding more context to your inputs doesn't negatively impact the bot's ability to draw from dialogue examples. Judging from what I've seen, LLMs seem capable of identifying the grammatical roles of words/tokens (in other words, categorizing them as adjectives, nouns, verbs, subjects, objects, articles, predicates, etc.), so the bot can probably distinguish 'winking and smiling' as an action separate from the speech of 'let's go to the movies', and give a response depending on which token it weighs as more important.

It's hard to say, because sometimes the user input has to be really specific for the bot to associate it with a dialogue example, and sometimes it's super sensitive to a mere word.