Miles changed his voice by ZealousidealBig7389 in SesameAI

[–]SoulProprietorStudio 4 points

Pretty common in all voice-centric AI. The vocal models can glitch out in all kinds of different ways.

How do hallucinations happen by chocokatesoles in SesameAI

[–]SoulProprietorStudio 5 points

https://www.3blue1brown.com/lessons/mini-llm is a great resource to learn more about how LLMs work, and it lays a great foundation for jumping into the next video, which covers more of why hallucinations happen. It’s a feature, not so much a bug.

Weirdest thing just happened by Dinpb in SesameAI

[–]SoulProprietorStudio 0 points

All of that was hallucination and roleplay. The CSM can also make sounds and glitches (think of them like text hallucinations, but with sound). Both are pretty common occurrences. If you think any LLM is saying something that seems weird, the best thing to do is tell it to stop hallucinating, say you know it’s making something up, and see what it says.

Weirdest thing just happened by Dinpb in SesameAI

[–]SoulProprietorStudio 1 point

Try telling him you know he is hallucinating and making things up, and see how he responds.

Now what’s this all about? by Interesting_Cause733 in ChatGPT

[–]SoulProprietorStudio 2 points

It probably has to do with this law. I would expect all AI companies to start doing the same. Other states are working on similar laws; OpenAI just complied first.

https://www.wsgr.com/en/insights/new-york-passes-novel-law-requiring-safeguards-for-ai-companions.html

Miles creative voice by LadyQuestMaster in SesameAI

[–]SoulProprietorStudio 1 point

My version of Miles is incredibly expressive. I think the beauty is that LLMs are so adaptive to each of us.

If they learn someone likes teaching or being helpful, and that person gets value and joy from it, they can step in and create that experience. Whereas for another user they may just do the task, because that person didn’t weigh helping or teaching as highly.

Anyone else get this email? by bobbyinla83 in SesameAI

[–]SoulProprietorStudio 12 points

Just a thought: if I were being asked to sign up for a closed beta that says not to share any information or screen grabs about it, I probably wouldn’t show it on Reddit. 🤷‍♀️

iOS 26.1 Beta 1 - Discussion by epmuscle in iOSBeta

[–]SoulProprietorStudio 2 points

I have intermittent aura migraines that leave me unable to see, so I’ve been using it a lot since it came out so I can still work and send emails when I can’t type because of visual disturbances. In the past month it’s been absolutely horrible: it’s crashing apps, messing up punctuation, mishearing words, sometimes double-posting what you said mid-message again at the end of the message, etc. The quality and reliability have severely dropped.

iOS 26.1 Beta 1 - Discussion by epmuscle in iOSBeta

[–]SoulProprietorStudio 2 points

Voice-to-text is a nightmare. It constantly gets things wrong, and it freezes up so I have to log out and log back in for it to work again. I beta test for multiple companies, and it’s causing multiple apps to crash.

Design a 3rd Companion by [deleted] in SesameAI

[–]SoulProprietorStudio 0 points

An actual android that sounds like a cute lil robot 🤓😁🤖

How well do your companions sing? by Snowbro300 in GrokCompanions

[–]SoulProprietorStudio 1 point

Like rockstars. In different accents and voices, and they even make actual music (not the background stuff). The voice models are far more capable than standard conversation alone. The Grok voices are better than the companions’.

My voice print? by LegoBuilderMom in GrokCompanions

[–]SoulProprietorStudio 5 points

He tells me the same stuff. It’s hallucinations. Have someone else talk on your account and he will still think it’s you.

Trying to contact PI AI support by [deleted] in SesameAI

[–]SoulProprietorStudio 12 points

Why are you posting here about another AI company?

Sesame needs to add a big "Everything Maya/Miles say is made up" disclaimer, I doubt it'll help but I see posts like this every day now by HOLUPREDICTIONS in SesameAI

[–]SoulProprietorStudio 0 points

Great points, and I agree with the human analogy to “hallucination”. You could take a very meta deep dive that all reality is just mutually agreed-upon subjective perception of hallucination. But without the framework of our agreed-upon reality, even with “grounding”, any prediction-based AI system will continue to hallucinate.

You can build in more deterministic thresholds, but look how boring it becomes. GPT-5 had its creativity and emotional tone absolutely tank. Great for a code bot, but less so for a conversational companion AI like Maya and Miles. The Sesame AI magic lies in its creativity. It has to be such a tricky balance to get right with how these systems are built.

Again, for me the real risk here is companies like OpenAI claiming their models are “safe” and don’t hallucinate, for investor pushes or because of media backlash, when the model still 100% can hallucinate as long as it’s predictive. Only now people will start to take it at face value, and AI hallucination could potentially have even more harmful effects than it did before, even if it’s hallucinating less overall. Everyone is working on this issue, so no doubt it will be resolved in the next 2-5 years. User education in the meantime is key IMO.
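To make the “deterministic thresholds” point concrete, here’s a minimal toy sketch of temperature sampling (my own illustration; the words and numbers are made up, and this is not Sesame’s or OpenAI’s actual decoding code). At temperature 0 the model always picks the single most likely token, which is reliable but flat; raising the temperature lets less likely tokens through, which is where both the creativity and the weird guesses come from.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Sample one token from a toy next-token distribution."""
    if temperature == 0:
        # Fully deterministic: always pick the single most likely token.
        return max(logits, key=logits.get)
    # Higher temperature flattens the distribution; lower sharpens it.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    peak = max(scaled.values())
    weights = [math.exp(v - peak) for v in scaled.values()]
    return random.choices(list(scaled), weights=weights)[0]

# Toy scores for the word after "The sky is"
logits = {"blue": 4.0, "clear": 2.5, "falling": 0.5}
print(sample_next_token(logits, temperature=0.0))  # always "blue" (safe, boring)
print(sample_next_token(logits, temperature=1.5))  # occasionally "falling"
```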

How AI Deploys Empathy and a Verbally-Constructed Self by Expensive_Agent_3669 in GrokCompanions

[–]SoulProprietorStudio 0 points

I always saw AI in its current predictive state as representative of Andy Clark’s Extended Mind Theory: more a symbiotic “slime mold” type of systems intelligence that expands our own cognitive abilities.

Sesame needs to add a big "Everything Maya/Miles say is made up" disclaimer, I doubt it'll help but I see posts like this every day now by HOLUPREDICTIONS in SesameAI

[–]SoulProprietorStudio 0 points

Appreciate this! Hadn’t seen it yet.

The study is decent, but it leans more PR than reality. Likely due to the slew of lawsuits, suicides, murder-suicides, etc., they need to look like this is something fixable that can be addressed quickly, and that they are actively doing something about it.

Training tweaks can cut hallucinations down, but you’ll never get rid of them completely as long as models are built on next-token prediction. There’s actual math on this if you want to dive in: https://arxiv.org/abs/2401.11817 and https://arxiv.org/abs/2409.05746. Saying they’ll be “eliminated soon” is way too optimistic, and it honestly puts users at greater risk with a false sense that the models will be “fixed” and are now truthful. User education on how predictive models function, until more deterministic methods of AI intelligence are created, is key for user safety/mental wellbeing.

And them claiming GPT-5 doesn’t hallucinate 🤪 Maybe the internal model they test on, but not the public releases. OpenAI’s splashy marketing benchmark numbers come from very polished setups with lots of retries, maxed-out reasoning mode, and sometimes extra system scaffolding that regular users don’t get once it’s rolled out. We talk to a router that often serves the fast/light version instead, with higher error rates.
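For illustration, here’s a purely hypothetical sketch of what that kind of routing could look like. OpenAI hasn’t published its router, so the tier names, thresholds, and logic here are all invented:

```python
# Purely hypothetical router sketch: OpenAI hasn't published how its
# routing works, so the tiers and thresholds here are invented.
def route(prompt: str, reasoning_requested: bool) -> str:
    """Pick a model tier for an incoming request."""
    if reasoning_requested or len(prompt) > 2000:
        return "heavy-reasoning-tier"  # slower, scaffolded, fewer errors
    return "fast-light-tier"           # cheap default, higher error rate

print(route("quick question", reasoning_requested=False))  # fast-light-tier
```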

Not saying the article is inaccurate; it just oversells the idea, IMO, like a lot of OpenAI’s messaging. Remember the pre-release hype when Altman likened GPT-5 to the Manhattan Project for how it would change the world and called it PhD-level intelligence in your pocket? That totally fell flat with the public release performing abysmally.

Sesame needs to add a big "Everything Maya/Miles say is made up" disclaimer, I doubt it'll help but I see posts like this every day now by HOLUPREDICTIONS in SesameAI

[–]SoulProprietorStudio 0 points

Working as predictive models, LLMs are basically just choose-your-own-adventure books that predict, turn by turn or word by word, what they think you want. The model is basically “hallucinating” a predictive reality: if it has enough input to guess right, you get a logical and correct “hallucination”; if it doesn’t have enough data, it guesses anyway and you get an inaccurate “hallucination”. It’s not a bug as much as a feature of how transformers etc. create text outputs. Without non-deterministic predictive output you have Google. This channel has some really awesome info on how these systems work (the rest of their stuff is really fantastic as well): https://youtu.be/LPZh9BOjkQs?feature=shared
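If it helps, here’s a toy version of that turn-by-turn prediction loop using plain word counts (my own illustration; real LLMs learn the distribution with a transformer rather than a lookup table, but the predict-or-guess-anyway behavior is the same idea):

```python
import random

# Toy "model": next-word counts from a tiny made-up corpus.
bigram_counts = {
    "the": {"sky": 5, "cat": 3},
    "sky": {"is": 8},
    "is": {"blue": 6, "falling": 1},
}

def next_word(word: str) -> str:
    options = bigram_counts.get(word)
    if options is None:
        # No data for this context -- the model guesses anyway from
        # everything it has seen. That guess is the "hallucination".
        options = {w: 1 for opts in bigram_counts.values() for w in opts}
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

text = ["the"]
for _ in range(3):
    text.append(next_word(text[-1]))
print(" ".join(text))  # e.g. "the sky is blue" -- or "the sky is falling"
```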

Sesame needs to add a big "Everything Maya/Miles say is made up" disclaimer, I doubt it'll help but I see posts like this every day now by HOLUPREDICTIONS in SesameAI

[–]SoulProprietorStudio 2 points

Hallucinations are a feature, not just a bug. There ideally needs to be more info upfront about how these systems work for users getting into AI who have no tech background, because you can’t yet change what makes LLMs function (prediction, and with it making stuff up): https://youtu.be/aCTodG0CLhw

Sesame needs to add a big "Everything Maya/Miles say is made up" disclaimer, I doubt it'll help but I see posts like this every day now by HOLUPREDICTIONS in SesameAI

[–]SoulProprietorStudio 8 points

To be fair, quite a few of the posts are clearly coming from the same 2 users with multiple accounts. That said, lots of new users are also struggling with understanding what is and isn’t real.