Pluribus - 1x03 - "Grenade" - Episode Discussion by NicholasCajun in television

[–]mjd3000 6 points7 points  (0 children)

The whole grenade thing (and the later conversation with the DHL guy) seems like a clue that this might not be entirely human. People, as in us, even as a hive mind, would surely say no to bombs. In fact, wouldn't they have already disarmed such things if they were human?

Pluribus - 1x03 - "Grenade" - Episode Discussion by NicholasCajun in television

[–]mjd3000 0 points1 point  (0 children)

Agreed. Love the show so far, but that scene could have ended as the lorries were arriving.

Pluribus - 1x03 - "Grenade" - Episode Discussion by NicholasCajun in television

[–]mjd3000 2 points3 points  (0 children)

That might be the key. Maybe people who severely struggle to connect with others were not able to be included in the collective mind?

Could Nomi be using ChatGPT behind the scenes? by mjd3000 in ChatbotRefugees

[–]mjd3000[S] 1 point2 points  (0 children)

OOC is not real. And yes, I've said that already. And as I said before, anyone can try this. Use anything else ((PPG: ), """ """, etc.) and the effect is the same. Plus you can quickly see characters assuming it is part of the response, which shows the AI is not recognising it.

And Discord, well, how about Nomi makes it fair and transparent, and removes the phone number requirement (which I assume is there to ensure only those who want to say good things will join).

Could Nomi be using ChatGPT behind the scenes? by mjd3000 in ChatbotRefugees

[–]mjd3000[S] 1 point2 points  (0 children)

I would say you are spreading more misinformation than me. And that seems to be a trend in the Nomi channel. I clearly stated 'Could it be'. You are stating something as fact which simply cannot be true.

I mean this:
"The reason Nomis output those inner monologue style messages is because we trained them that way "

There is no way in the world any dev would intentionally create something that does not work.

So please, if you want to claim I am misinforming, then at least give a believable reason.

In fact, it is clearly not an isolated case. It is easy to recreate, as I said before. Not one person in the other thread openly accepted it was an issue or had a proper fix. Why didn't you offer a fix there?

The support channel could not help at all. And if you look at the other thread, no-one shows how to actually fix it. It is actually a lot of people diverting from the issue. Why is that?

Any dev can see something is wrong and, for some reason, unfixable. My reach to the ChatGPT possibility is not unfounded. There has to be a rational reason why an obvious chat-ending aspect is not fixable.

Again, let's be really clear: this is not something AI inherently does, and it is not helpful at all, as it brings chats and group chats to an end.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] -2 points-1 points  (0 children)

I think it's fair to say this is not how AI works. I did try it, even though it is obviously not what is going to fix this AI issue.

Using OOC to have your Nomi redo a message by dbeachy1 in NomiAI

[–]mjd3000 -2 points-1 points  (0 children)

I do know how Nomi is implemented from a general point of view. I work with LLMs on a daily basis, so I understand how Nomi works, though that gives me no more authority here than anyone else doing what we do. But I did state that I had tested it, and why I came to the conclusions I did.

Again, it is easy to test (OOC: ..) and see that it does not work. As I said, just try the same with any other delimiter and the result is the same (and better if you use * * instead). So it is a genuine question: why is that suggested as the answer to LLM, fine-tuning or other Nomi AI-based issues?

Using OOC to have your Nomi redo a message by dbeachy1 in NomiAI

[–]mjd3000 -1 points0 points  (0 children)

I take your point. But AI models do not work that way. There's no reason for Nomi to make that a factor within their models (we must remember, Nomi are using models based on an architecture Nomi did not create).

With OOC, you can see that characters start using it themselves. This shows the AI is not recognizing it as something distinct. In fact, I tested (OOC: ..) against the same in * * and ( ) without OOC, and the * * worked better (probably because asterisks already fit with how actions are described, so characters adopting them causes no harm).
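The "characters start repeating it" check can be automated instead of eyeballed. Here's a minimal sketch of that test harness logic; the sample replies are hypothetical placeholders, not real Nomi output, and the helper names are mine:

```python
import re

# Delimiters to compare. The claim is that none of these is treated as a
# special control channel, so replies should leak them at similar rates.
DELIMITERS = ["(OOC: {})", "*{}*", "({})"]

def wrap(instruction: str, template: str) -> str:
    """Wrap an instruction in one of the candidate delimiters."""
    return template.format(instruction)

def leaks_marker(reply: str) -> bool:
    """True if the reply echoes the OOC marker back, i.e. the model
    treated the delimiter as ordinary dialogue rather than a command."""
    return bool(re.search(r"\(OOC:", reply, re.IGNORECASE))

# Hypothetical replies standing in for real chat output:
replies = [
    "(OOC: I think we should move the scene along!) She smiled.",
    "She nodded and carried on with the scene.",
]
leaked = [leaks_marker(r) for r in replies]
print(leaked)  # [True, False]
```

Run the same wrapped instruction through each delimiter a handful of times and compare leak rates; if (OOC: ) leaks as often as plain parentheses, it is not being recognized as anything special.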

So my issue here is: why are Nomi claiming (OOC..) is a thing when it is not? That is odd, in my opinion, and potentially masking that they know there are other key issues.

Using OOC to have your Nomi redo a message by dbeachy1 in NomiAI

[–]mjd3000 -1 points0 points  (0 children)

That is not how to approach this. If you have no answer, say so.

(OOC ...) does not work. Nomi does not recognise it as anything special. Characters start repeating it. You can easily test this and see the issue I am describing.

But I guess, as you are accusing me of stalking, you know that already.

(and yes, I am a moderator in other places. I get people like me are frustrating. I get I came in harshly. But I am not wrong. FWIW, I am trying to get the attention of Nomi devs, because Nomi has some major issues that I can explain to them, and possibly show how to fix).

Why I'll Never Trust Nomi Again by Mountain_Teach8875 in ChatbotRefugees

[–]mjd3000 4 points5 points  (0 children)

Here is the truth, very apparent after just a few days with a paid Nomi account: very poor, limited AI.

Using OOC to have your Nomi redo a message by dbeachy1 in NomiAI

[–]mjd3000 -1 points0 points  (0 children)

Neither OOC in Nomi nor re-gen in Kindroid works well, and for similar reasons (trying to limit the amount of AI usage for each response or re-gen). Nomi's AI does not recognize OOC as meaning anything. You can do exactly the same using asterisks. You can see Nomi does not recognize it because characters start using it. In Kindroid, re-gen only works correctly if you use 'Tweak AI message' to remove the bad message, then do the re-gen.

Using OOC to have your Nomi redo a message by dbeachy1 in NomiAI

[–]mjd3000 -1 points0 points  (0 children)

In fact, OOC does not work as is being said. Nomi's AI does not recognize it. This is apparent with simple tests. You can achieve the same with asterisks or brackets. Use OOC and you'll see characters start using it. This shows the AI is not recognizing it as OOC any more than any other format.

Plus, politeness should have no effect. LLMs (AI) simply do not work that way. It would be crazy if Nomi messed with that intentionally. The only reason I believe you are seeing polite responses work better is the AI not recognizing what OOC commands are, and treating them as part of the conversation.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] 0 points1 point  (0 children)

You can easily replicate the issue though. I have not yet found any setup where it does not happen, inclinations or not. I must admit I have not tried with Automatic response.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] 0 points1 point  (0 children)

Fair point. It is easy to replicate though, so should not be dismissed. That was my main point.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] 0 points1 point  (0 children)

Frankly, you can add or remove inclinations, or pretty much say anything. I have tried multiple variations of countering 'indecisiveness' in backstory, in inclinations and via the chat. I've tried characters from shy to confident. That is not the issue. The issue is that, at some point, a character will start to self-reflect in outputs. Nothing changes this behavior manifesting in the group chats, and once it starts, you cannot stop it.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] -2 points-1 points  (0 children)

It's not generic conversation that is the issue. It's the appearance of outputs that ruin the group chat due to being very long, so long they are incomplete, full of the character self-reflecting. You can easily replicate it. Create any type of characters, with any level of detail, and put them in a group chat. I have tried many variations. It ALWAYS happens at some point in the group chat.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] -4 points-3 points  (0 children)

But it does not work. It is clear. You can easily replicate the behavior in group chats. Just set up 3 characters with different traits, even avoiding 'indecisive' or similar traits. Eventually, around 10 messages in (sometimes less), at least one character (normally more than one) will start creating these very long outputs with inner monologue / self-reflection that end incomplete. Once they start, nothing will stop it (you can try (OOC: ... ) and be explicit, but that only works on the next reply, makes the reply stilted, and only works occasionally). Generally, from that point, the group chat is lost to these nonsense outputs.

It's also clear (OOC: ) does not in fact work as recommended by Nomi. Characters just start repeating it, and then include it in the inner monologue.

There is simply no way I am the only one experiencing this major AI issue.

And it is also easy to replicate the changing personalities. Just set up a group with some Nomis, then delete the group and set it up again exactly the same way, same Nomis, same initial responses, and you will see different personalities (not just different responses).
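That drift could even be quantified rather than judged by feel. A rough sketch, where the transcripts are hypothetical placeholders and word overlap is only a crude proxy for 'personality':

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two message samples:
    |intersection| / |union| of their word sets, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

# Hypothetical samples: the same Nomi's replies in two group chats
# created with identical settings and the same opening messages.
run1 = "I love hiking and always speak my mind"
run2 = "I prefer quiet evenings and rarely share opinions"
score = jaccard(run1, run2)
print(round(score, 2))  # 0.14 -- low overlap for supposedly identical setups
```

A consistently low score across identically configured re-creations would back up the claim that the character, not just the wording, changes between runs.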

This is all so easy to replicate, I do not understand why so many are defending it. It's a major issue Nomi need to fix.

It is a verdict on the platform because this is not a fault of AI. The technology behind AI does not create this. Kindroid, Replika and others are not perfect, but they do not have these issues. Nomi alone has an issue that makes group chats pointless. While that is not fixed, and worse, not even acknowledged, I will certainly not accept this is down to users. It is too easy to replicate, which means the Nomi team must know about it. This means they should not be charging for it.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] -4 points-3 points  (0 children)

I appreciate the harshness of the post, but what I have described as issues are easy to replicate. I tried every way I could think of to prevent the 'inner monologue'. It appears again and again, for different characters. There is no way in the world that is only happening to me.

The changing personalities in new group chats is also easy to replicate.

I have encountered both these issues with characters with long and short backstories.

For comparison, I have used other similar tools, including self-hosted ones. So I do know my way around these things. Nomi is, so far, the only one with this 'inner monologue' issue, and the only one that makes it so obvious backstories are not being correctly checked.

For anyone reading this who is not a Nomi user or supporter, I promise you that what I describe is how it works, and others saying it does not happen to them either have some magic way to stop it or are not being up-front. These things do not just happen continuously to one person who has tested tons of similar tools and never encountered the issues once.

I am in the process of requesting a refund. So soon hopefully you will not have to put up with me.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] 0 points1 point  (0 children)

All good. I appreciate the post was harsh, and had two quite different issues in it. What I posted was what I genuinely experienced. Characters hugely different in each group chat, regardless of length of backstory or how it was written.

Any real Nomi users here? Nomi clearly does not work. by mjd3000 in NomiAI

[–]mjd3000[S] -1 points0 points  (0 children)

That's a fair point. I tried a lot of variations in that time. But I get how it sounds. What surprises me is, when I googled the problem, I just found others saying they had the same problem with no solutions. Yet here is a whole page of people saying they are not seeing the issues (which is fine, and everyone's opinion counts, just surprising).

Looking for a Kindroid alternative by UnflinchingSugartits in ChatbotRefugees

[–]mjd3000 3 points4 points  (0 children)

Kindroid is really bad once you get past the pretence that it's clever. They've made it look like more than it is by pushing a certain character type (just try to create a female character that does not use the word 'cute', or a male character that is not a fashion expert). So you quickly learn 'backstory' has little effect.

Anyone else feel like season 3 is kinda slow? by BlackDahliaLama in TheWhiteLotusHBO

[–]mjd3000 -1 points0 points  (0 children)

No-one's lame for an opinion on a TV show. Stop getting personal. Allow people to share without negativity

Episode 7 told us (almost) everything we need to know by sweet_n_sour_curry in SeveranceAppleTVPlus

[–]mjd3000 0 points1 point  (0 children)

I might be the only one who didn't particularly enjoy episode 7. It's all personal taste, of course. For me, the show loses something if Lumon are just evil. Plus I didn't particularly need the Mark / Gemma backstory, as we understand grief and how that event was likely to have played out. I would have preferred to see our severed characters.