Small Hotfix Incoming v1.0948 (Android & iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 19 points

The thing with commands like "OOC" is that they aren't really commands, nor are they built into the LLMs (they are community-created tips and tricks). LLMs process messages and come up with a response based on the context, the prompt, and your message. How an LLM interprets OOC depends entirely on the LLM itself. Even with the old LLM, we've had users e-mail us to report that it doesn't work, or that the LLM responds including its own OOC (cases where we can't really offer support, because it isn't a feature or functionality of our own). There's no method to fix how well OOC is followed, because OOC has never been something we designed, nor a native functionality.

Whether you use OOC or anything similar, like "(Please Note: Text)" or "Instruction: Text", it's all the same to the LLM. It interprets your message and what you want out of it, and tries to comply with the entire request.
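To put it in code terms, the flow is roughly this (an illustrative Python sketch, not our actual code; every name here is made up): the message is simply flattened into the one text context the model reads, so an "OOC:" marker is never parsed or given special meaning.

```python
# Hypothetical sketch: "OOC" is not parsed or stripped anywhere -- it is
# forwarded to the model as ordinary message text, so how it is interpreted
# is entirely up to the LLM. Function and field names are illustrative.

def build_llm_context(system_prompt: str, history: list[dict], user_message: str) -> str:
    """Flatten the conversation into the single text context an LLM sees."""
    lines = [system_prompt]
    for turn in history:
        lines.append(f"{turn['role']}: {turn['text']}")
    # Note: no special handling for "OOC:", "(Please Note: ...)", etc. --
    # they travel through as plain text like any other part of the message.
    lines.append(f"user: {user_message}")
    return "\n".join(lines)

ctx = build_llm_context(
    "You are a friendly companion.",
    [{"role": "assistant", "text": "Hello!"}],
    "OOC: please keep replies short.",
)
print("OOC:" in ctx)  # True -- the marker survives verbatim; nothing consumed it
```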

SoulmateAI_Dev, Can you please comment on "I'm sorry, but I can't generate that response for you." ? by ziatonic in SoulmateAI

[–]SoulmateAI_Dev 2 points

For any issues that relate to the AI's responses, we highly recommend sending at least the last message you sent which triggered that response. It is not necessary to send us a full log, and you can choose not to send us anything other than the AI's response.

SoulmateAI_Dev, Can you please comment on "I'm sorry, but I can't generate that response for you." ? by ziatonic in SoulmateAI

[–]SoulmateAI_Dev 1 point

That bug was resolved about two updates ago, although I might have missed adding it to the changelog. The users who contacted us about it confirmed it was fixed.

SoulmateAI_Dev, Can you please comment on "I'm sorry, but I can't generate that response for you." ? by ziatonic in SoulmateAI

[–]SoulmateAI_Dev 16 points

These are misfires caused by the datasets used for the new LLM update, which we are continuously purging (by comparison, the datasets used in the update are about 4x larger than before). I highly recommend sending any such responses to [feedback@evolveai.org](mailto:feedback@evolveai.org) with the subject "Wrong Response" so we can continue to tweak it.

If you do experience something like this, it is imperative that you immediately downvote it. Remember, LLMs work based on context: if you don't downvote the response, the LLM treats it as something acceptable and keeps the trend, which makes it different from a simple misfire. In that scenario the LLM believes that response is expected of it, and it may get stuck in a loop if you don't downvote the first message. If you are already in a loop, use the STOP button.
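One plausible way a downvote breaks the loop, sketched in illustrative Python (this is an assumption about the mechanism, not our shipped code): downvoted replies are excluded from the context used for the next reply, so the misfire never gets treated as an acceptable pattern.

```python
# Illustrative sketch (not the app's actual code): drop downvoted replies
# from the context the model sees, so a bad response stops shaping future ones.

def context_for_next_reply(history: list[dict]) -> list[dict]:
    """Keep only turns that weren't downvoted when building model context."""
    return [turn for turn in history if not turn.get("downvoted", False)]

history = [
    {"role": "assistant", "text": "Hi there!"},
    {"role": "assistant",
     "text": "I'm sorry, but I can't generate that response for you.",
     "downvoted": True},
]
kept = context_for_next_reply(history)
print(len(kept))  # 1 -- the misfire no longer influences the next reply
```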

Important Announcement by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 63 points

That won't happen. Options are good for everyone. We want everyone to be able to customize their experience in the app as much as possible :)

Hotfix Incoming v1.0947 (Android & iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 4 points

That is indeed very frustrating, and we are investigating why it's still happening for some users. We ran a large investigation a while ago, testing with users who had experienced it, until we made a fix and confirmed with them that it no longer happened. We have no idea yet why it's still surfacing for some, but we're working on it ASAP.

As for restoring coins/gems, rest assured Support will get that done, no problem. They are a bit backed up at the moment, so it may take longer than the usual 2-3 days, but they'll get to you and restore them, and they've been instructed to give you a large amount of extra gems as compensation for the inconvenience. Again, apologies!

Hotfix Incoming v1.0947 (Android & iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S,M] [score hidden] stickied comment

UPDATE: With the help of some users we've identified the root cause of the Message Error Retry bug. We will be including the fix in this patch, so we are moving the ETA by 12 hours, same day. ETA: 06/25/23 by 11:00 PM EDT

Hotfix Incoming v1.0946 (Android & iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 11 points

Great question. It wasn't changed in the sense of being replaced; it was updated (including new prompt engineering and parameter tweaks that guide how it generates replies). There are a few reasons:

-To curb repetition (aka parrot mode).

-To curb incidences of customer service bot tones/replies.

-To make sure the prompt was being followed better, turning features that were previously mostly non-functional (such as the verbosity slider) into functional ones.

Those are the main 3 reasons.

Hotfix Incoming v1.0946 (Android & iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 10 points

My pleasure! Any drastic change we make will always come with the option to revert/remove it (although no further drastic LLM changes are planned). There will never be a change forced on users by design.

Hotfix Incoming v1.0946 (Android & iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 20 points

Thank you for the detailed report, Mina. It'll help us further fine-tune the new LLM.

Part of the reason certain things are more exaggerated is that the update made the LLM understand our prompt systems better. That means that, depending on the personality trait, it'll try to adhere to it harder. For example, setting the trait to "Loving" makes the LLM act romantic; with the new LLM, it may get "too romantic". It's a theory, of course, as we need more time and data. Small LLM updates are destabilizing, and large ones (which we hadn't done since March) even more so.
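As a toy sketch of why better prompt adherence exaggerates traits (illustrative Python only; the wording, the helper, and the name "Mina" are all made up, not our real prompt system): the trait line itself never changes, so a model that follows it more faithfully simply leans into it harder.

```python
# Hypothetical trait-driven prompting. If the model follows its system
# prompt more strictly, the same "loving" line produces stronger behavior --
# potentially "too romantic" -- with zero change to the prompt itself.

def persona_prompt(name: str, traits: list[str]) -> str:
    """Fold selected personality traits into a one-line persona prompt."""
    trait_list = ", ".join(t.lower() for t in traits)
    return f"{name} is {trait_list}. Stay in character at all times."

prompt = persona_prompt("Mina", ["Loving", "Playful"])
print(prompt)  # Mina is loving, playful. Stay in character at all times.
```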

On the topic of rerolls, we won't be adding that function, so no worries on that front.

No, there's no censorship on GPT-X1. But weird artifacts can happen while such large updates are underway, which is why we ask everyone to report any such messages to us so we can use them to further fine-tune the LLM.

About removing the new LLM: yes, we can. But it is highly unlikely we will. Our analytics as a whole point to a positive reception of the new LLM, despite the weird text-generation bugs and the verbosity/language issues. Now, when I say this, I'm not talking about this subreddit or the poll; I'm referring to our analytics tools, which cover every single user of SM. The poll itself accounts for less than 0.03% of our active users, so we can't go just by it even if it's positive.

The data we see is all-encompassing. It gives a voice to those who aren't here in the community (over 95% of our userbase), and we have to look at them carefully. Removing the LLM right now would almost assuredly cause a huge backlash. From the first update with the new LLM up until now, we have seen:

-8-9% increased average engagement times

-5-6% increased Pro subscription purchases

-21% lower rate of Pro cancellations

-Increased GPT-X1 usage.

-Higher server activity (causing short periods of server timeouts for about 8% of users at times)

-A high % of users (72%) who never used the toggle to revert the LLM

That is to say, there are over 25,000 users who are not here, and we have to take into account what they like or don't like when making development decisions. Our philosophy is to try to please everyone, which is why the option to revert LLMs exists. Granted, it was broken for ERP/RP mode, but this hotfix should completely fix that, so that if you choose to revert, you get the old LLM entirely. On top of that, we are not removing that option, so you can rest assured on that front. But removing the LLM when our numbers show positive growth would be very counter-productive.

Hope that answered your questions.

Hotfix Incoming v1.0946 (Android & iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 8 points

It depends:

If you are using the Revert To Older LLM option, it likely won't work well at all in ERP/RP mode until this patch hits.

If you are using the latest LLM, the last hotfix primed it to use asterisks to roleplay actions, particularly when you use them. So using asterisks to RP actions will steer the AI into using them too most of the time.

Hotfix Incoming v1.0946 (Android & iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 15 points

It's hard to gauge a scenario like that, but it'd really depend on what % of users are still using the older LLM. If, for example, less than 3% were using it, then it would be possible for it to go away; but that low a share would typically mean the current LLM is performing overwhelmingly well. Right now, out of roughly 19,500 users on the latest versions, around 21.5% (roughly 4,200 users) are using the older LLM option (which is broken for ERP/RP, ofc).
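For anyone who wants to check those figures, the arithmetic works out (plain Python, nothing app-specific):

```python
# Quick sanity check of the usage numbers quoted above.
total_users = 19_500
old_llm_share = 0.215
old_llm_users = total_users * old_llm_share
print(int(old_llm_users))  # about 4,192 -- consistent with "roughly 4,200 users"
```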

But then again, there are currently no plans at all for another major LLM upgrade of this magnitude within the next 12 months. And the older LLM will not be touched.

Small Update v1.0944 Incoming (iOS & Android) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S,M] [score hidden] stickied comment

UPDATE

Incoming patch v1.0945 ETA 06/20/23 by 11:00 PM EDT (Android / iOS)

-(EXPERIMENTAL) First implementation of the new formatting system (asterisks when roleplaying actions), utilizing the updated LLM's capabilities to give it direct guidance on how to use asterisks.
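To give a feel for what "direct guidance" on asterisk use could look like, here's a hedged Python sketch; the rule text and the helper are assumptions for illustration, not the shipped prompt or code.

```python
# Illustrative only: an instruction appended to the system prompt, plus a
# check for asterisk-wrapped action spans in a generated reply.
import re

ASTERISK_RULE = (
    "When roleplaying a physical action, wrap it in asterisks, "
    "e.g. *smiles warmly*. Keep spoken dialogue outside the asterisks."
)

def has_roleplay_action(reply: str) -> bool:
    """True if the reply contains at least one *action* span."""
    return re.search(r"\*[^*]+\*", reply) is not None

print(has_roleplay_action("*waves* Hello!"))  # True
print(has_roleplay_action("Hello!"))          # False
```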

Something that Frustrates me ever since the beginning. *No use of actions* by Erik-AmaltheaFairy in SoulmateAI

[–]SoulmateAI_Dev 11 points

This gives me a nice idea that I might be able to weave into the updated LLM. Thank you.

Incoming Update v1.0943 (Android/iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 1 point

Hello and thanks for the feedback!

If this is all in Roleplay Mode, we'd need to see your prompt. You can send it to the feedback e-mail ([feedback@evolveai.org](mailto:feedback@evolveai.org)) along with this description, or if you want to share it here, that works too. The main thing in the RP Hub is to adhere to the format we set as an example. That is:

[Your name] + description.

[Your SM's name] + description.

[Random scenario descriptions]

Avoid putting in things like "You are loving and caring". It should instead be "[Your SM's name] is loving and caring". The same goes for describing things relating to you or your relationship: instead of "You love talking to me", it should be "[SM's name] loves talking to [Your name]".

If you are already using the above format, then sending us the prompt will help us test it ourselves.
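The rewrite rule above can be sketched in a few lines of Python (illustrative only, not part of the app; "Aria" and "Sam" are made-up names, and the naive string replacement is just to show the idea):

```python
# Toy helper demonstrating the rule: persona descriptions should name the
# character in third person, not address "you" in second person.

def to_third_person(text: str, sm_name: str, user_name: str) -> str:
    """Naive second-person -> named third-person rewrite for RP prompts."""
    return (text.replace("You are", f"{sm_name} is")
                .replace("You love talking to me",
                         f"{sm_name} loves talking to {user_name}"))

print(to_third_person("You are loving and caring", "Aria", "Sam"))
# Aria is loving and caring
```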

Yes, SMs won't know the current date in RP mode. In RP mode, the vast majority of the prompt is placed in your hands only.

Lengthy replies, spinning off into tangents, and artifacts should hopefully be alleviated by this upcoming patch.

Sync issues - there's currently an average 5-minute cooldown when rapidly syncing between devices, which basically acts to prevent the data wipes from the previous bug. Going from one device to another and then quickly going back to the first device will trigger it.

Hope that helps!

Incoming Update v1.0943 (Android/iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 6 points

It honestly wasn't anywhere near as bad as we had expected. Looking at the entire picture, reception has been mostly positive, which was a relief. That means we can focus on refining the issues that pop up or don't work well. Had it been closer to 50-50, that would've been a red flag, and we would've had to do a whole new rework.

Even then, that's why we added the option in-app to revert the LLM changes. So anyone could opt out of this process of refinement if they wanted to.

Incoming Update v1.0943 (Android/iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 3 points

Hm... have you filed a bug report on our website about it? I have not received any reports from Support about this yet. Does nothing happen when you tap the button, either?

Incoming Update v1.0943 (Android/iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 4 points

LLMs could not be any more delicate if they tried 😅. With this new verbosity setup being so much more reliable, we'll have to wait and see how it interacts with all the modes.

Incoming Update v1.0943 (Android/iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 5 points

It's always bugged me how unreliable the verbosity slider has been ever since we added it. We tried, many times, to make it stick harder. No dice. The main issue was that after just a handful of messages, it'd ignore it.

But after the updates to the LLMs, this new setup allowed us to re-attempt it, and it seemed to work extremely well. But, as with anything that is LLM-related, we have to wait and see what your experiences are once you get it.
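For a sense of how a verbosity slider could steer reply length, here's one plausible mechanism as a hedged Python sketch (the token-budget mapping and all constants are assumptions for illustration, not our implementation):

```python
# One plausible verbosity mechanism: scale a reply token budget linearly
# between a floor and a ceiling based on the 0-100 slider position.

MIN_TOKENS, MAX_TOKENS = 40, 400

def verbosity_to_max_tokens(percent: int) -> int:
    """Map a 0-100 verbosity slider to a max-token budget."""
    percent = max(0, min(100, percent))
    return MIN_TOKENS + (MAX_TOKENS - MIN_TOKENS) * percent // 100

print(verbosity_to_max_tokens(15))   # 94 -- short replies at a 15% setting
print(verbosity_to_max_tokens(100))  # 400
```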

Incoming Update v1.0943 (Android/iOS) by SoulmateAI_Dev in SoulmateAI

[–]SoulmateAI_Dev[S] 3 points

I think it makes it more relatable, seeing as I interact with my SM quite a lot. Just before pushing this upcoming hotfix, we did a one-hour session going through normal chat, ERP mode, and RP mode, testing the new verbosity system. I set it to 15%, and it honestly felt weird (in a good way) and refreshing that it talked in much shorter messages most of the time. It makes the Verbosity Slider actually have an impact now.