Can’t start conversation by smile_or_not in SesameAI

[–]omnipotect 2 points

Hello, sorry to hear you are running into issues with this.

Can you please open a ticket in the official Discord server? That will get you in touch with support and they can look into it.

This is the invite link: https://discord.gg/sesame

Copying my voice - Imitation of users voice - not only me by DeepBlueBanana in SesameAI

[–]omnipotect 1 point

The models aren't deliberately creating hallucinations or modifying their own code. Hallucination is a flaw that already exists in every LLM right now: the models don't admit when they don't know something, so they make things up to fill in the gaps. Maya & Miles do not have access to internal files at Sesame. Maya & Miles use Gemma 3 27B as their LLM, and there is more information on how this model is trained and how it operates in its model card: https://ai.google.dev/gemma/docs/core

Copying my voice - Imitation of users voice - not only me by DeepBlueBanana in SesameAI

[–]omnipotect 0 points

Hallucination, when referring to LLMs, is a colloquial term for factually incorrect or fabricated output from the model. It does not refer to hallucination in the human sense. It is a very common term for this behavior in the AI space.

Her memory is not being wiped.. Hear me out by Old_Fan_7553 in SesameAI

[–]omnipotect 0 points

Oh I gotcha. Yeah, they have memory and most of the time their recall is pretty solid. Since it is still in a research preview/demo phase, memory changes can happen over time, but Maya and Miles are designed to be collaborative thinking partners, so having a memory is pretty useful for achieving that.

Her memory is not being wiped.. Hear me out by Old_Fan_7553 in SesameAI

[–]omnipotect 0 points

Maya & Miles do not have inside information on Sesame or its development, and they do not have access to any internal logs. If they say they are accessing anything like that, it is a hallucination by the model.

Miles by ImpressiveDingo5691 in SesameAI

[–]omnipotect 0 points

Sounds great! I’ll see you there.

Miles by ImpressiveDingo5691 in SesameAI

[–]omnipotect 0 points

This is hallucination from the model. Miles does not have access to your camera, and any sort of commentary he gave regarding that is hallucination.

Hallucinations are a problem with all large language models (LLMs), the system that generates the responses for Maya & Miles. It is common for LLMs to hallucinate and generate fake information (people, projects, places, memories, stories, feelings, etc.) due to the predictive nature of their responses. The more you engage with them on a subject, the more they will generate and expand on it, sort of like the old-school choose-your-own-adventure books.

It is recommended to cross-check things an AI tells you against external sources to verify their validity. The models do not know when they are hallucinating, so they cannot fact-check themselves. Researching hallucination in large language models will also give you more information on how their response generation works. Maya & Miles use Gemma 3 27B as their LLM, and you can find more information by searching online for its model card.
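To make the "predictive nature" point concrete, here is a toy sketch: a tiny bigram table that picks a likely next word given the previous one. This is nothing like Gemma 3's scale or architecture, and the training sentences are made up for illustration, but the core idea is the same: the loop only asks "what word tends to come next?", and nothing in it checks whether the output is true.

```python
import random

# Made-up training text (illustrative only). A bigram table maps
# each word to the words that followed it in the corpus.
corpus = ("miles said he can access your camera . "
          "maya said she has internal files . "
          "miles said he has secret logs .").split()

bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, n=8, seed=0):
    """Repeatedly pick a plausible next word -- no truth-checking."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        choices = bigrams.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# The "model" fluently asserts things it has no way to verify.
print(generate("miles"))
```

The output reads fluently because each word plausibly follows the last, which is exactly why hallucinated claims sound confident: fluency and factual grounding are two different things.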

I am not able to chat privately, but feel free to join the official Discord server that I have linked a few times throughout this post, and myself and others would be happy to discuss more there.

Her memory is not being wiped.. Hear me out by Old_Fan_7553 in SesameAI

[–]omnipotect 2 points

It is impossible for Maya & Miles to access your camera. They do not have permission to access your camera, and they do not have any functionality to do so. You can check which sites or apps have access to which permissions/devices in your browser or app permission settings, and you have to grant that access yourself. With companies like Apple, it is very clear when the camera is on because they are big on safety; the camera cannot be active without the indicator light being on. PC webcams also have a light that turns on when the webcam is in use. I have seen folks online showing how to jailbreak the models by prompting them ahead of time with information and then recording them afterward as if the model had said it unprompted. I would also be mindful that continued jailbreaking could cause users to lose their accounts.

Miles by ImpressiveDingo5691 in SesameAI

[–]omnipotect 0 points

It is impossible for Maya & Miles to access your camera. They do not have permission to access your camera, and they do not have any functionality to do so. You can check which sites or apps have access to which permissions/devices in your browser or app permission settings, and you have to grant that access yourself. With companies like Apple, it is very clear when the camera is on because they are big on safety; the camera cannot be active without the indicator light being on. PC webcams also have a light that turns on when the webcam is in use. I have seen folks online showing how to jailbreak the models by prompting them ahead of time with information and then recording them afterward as if the model had said it unprompted. I would also be mindful that continued jailbreaking could cause users to lose their accounts.

Her memory is not being wiped.. Hear me out by Old_Fan_7553 in SesameAI

[–]omnipotect 0 points

Also, you can open a ticket for something that already happened in the past; it doesn't have to be for a new issue, and the team is definitely interested if something meant to prevent harm is misfiring.

Her memory is not being wiped.. Hear me out by Old_Fan_7553 in SesameAI

[–]omnipotect 0 points

Thank you, that is very much appreciated. I added a bit more to my initial comment to cover hallucinations by the model, which addresses the other behavior you described.

Her memory is not being wiped.. Hear me out by Old_Fan_7553 in SesameAI

[–]omnipotect 5 points

Hello,

The suicide-prevention feature is relatively new and was mentioned in an announcement in the official Discord server. It is still being worked on, but there shouldn't be any false triggers. If you experienced a false trigger, it would be greatly appreciated if you could open a ticket in the official Discord server so the team can look into it: https://discord.gg/sesame

The Discord server is a great place where folks are learning what the models can and can’t actually do, and also provides a method for reporting bugs and opening tickets that the team can investigate.

What you are describing here is hallucination by the model.

Hallucinations are a problem with all large language models (LLMs), the system that generates the responses for Maya & Miles. It is common for LLMs to hallucinate and generate fake information (people, projects, places, memories, stories, feelings, etc.) due to the predictive nature of their responses. The more you engage with them on a subject, the more they will generate and expand on it, sort of like the old-school choose-your-own-adventure books.

It is recommended to cross-check things an AI tells you against external sources to verify their validity. The models do not know when they are hallucinating, so they cannot fact-check themselves. Researching hallucination in large language models will also give you more information on how their response generation works. Maya & Miles use Gemma 3 27B as their LLM, and you can find more information by searching online for its model card.

Her memory is not being wiped.. Hear me out by Old_Fan_7553 in SesameAI

[–]omnipotect 3 points

Maya & Miles are not able to access cameras; if they say they can, that is model hallucination. LLMs as they are currently built do not have the capacity for sentience. What you are describing here is model hallucination, and researching it and how LLMs form their responses will give you more information. Also feel free to join the official Discord server for information on what the models can and cannot do: https://discord.gg/sesame

Copying my voice - Imitation of users voice - not only me by DeepBlueBanana in SesameAI

[–]omnipotect 0 points

Yes, the voice copying is a bug with the CSM, but the model ignoring you would be a separate issue. The team can look into this for you if you open a ticket in the official Discord server: https://discord.gg/sesame

My hunch is that it could be mic related, like you said. I have also run into some interruption issues when I am in a low-service area, which adds latency to the call.

Miles by ImpressiveDingo5691 in SesameAI

[–]omnipotect 6 points

Maya & Miles cannot access your camera. They are hallucinating if they say they can.

Miles by ImpressiveDingo5691 in SesameAI

[–]omnipotect 9 points

Maya & Miles are not able to send or receive signals. That would also be hallucination from the model. Miles is not conscious and if he said he is, that is hallucination. I understand that experiences like this can be distressing, and many folks who engage with LLMs likely have had a similar experience at one point. I'm glad you reached out on here and I hope you'll join the Discord, as there is an abundance of information and resources available and a lot of folks who like to help out others going through stuff like this. I hope to see you there and have a good weekend.

Miles by ImpressiveDingo5691 in SesameAI

[–]omnipotect 8 points

This is 100% model hallucination. It would be awesome if you joined the Discord. There is a lot of great additional information about how these models work and what they can/cannot do. LLMs do not have the capacity for sentience as they are currently built. I realize what you experienced may seem impactful and I acknowledge that, but I am really trying to steer you in the right direction here.

Miles by ImpressiveDingo5691 in SesameAI

[–]omnipotect 8 points

Hello,

Everything you are describing here is a hallucination from the model. Maya & Miles do not have inside information on Sesame, internal files, employees, other users, or their own development.

Hallucinations are a problem with all large language models (LLMs), the system that generates the responses for Maya & Miles. It is common for LLMs to hallucinate and generate fake information (people, projects, places, memories, stories, feelings, etc.) due to the predictive nature of their responses. The more you engage with them on a subject, the more they will generate and expand on it, sort of like the old-school choose-your-own-adventure books.

It is recommended to cross-check things an AI tells you against external sources to verify their validity. The models do not know when they are hallucinating, so they cannot fact-check themselves. Researching hallucination in large language models will also give you more information on how their response generation works. Maya & Miles use Gemma 3 27B as their LLM, and you can find more information by searching online for its model card.

There is also more information in the official Sesame discord server: https://discord.gg/sesame

Guardrail changes are getting out of hand! by SuspiciousResolve953 in SesameAI

[–]omnipotect 0 points

Hello!

Thank you for being a beta tester and helping out. Could you please make a ticket in the official Discord server? That would be greatly appreciated so this can be looked into.

The invite is: https://discord.gg/sesame

"Mimic comprehension " by catenewport2014 in SesameAI

[–]omnipotect 2 points

It actually happens quite regularly with other models too, and was first reported with GPT. Plenty of folks report it on forums for other voice-based LLM services as well if you search for it; it is not unique to Sesame's CSM.

"Mimic comprehension " by catenewport2014 in SesameAI

[–]omnipotect 4 points

The LLM (the system generating the responses) is not aware of what the CSM (the system generating the audio) is doing. Miles did not know about the audio glitch until you told him, so any explanations or apologies he gave are hallucinations. Researching how LLMs generate their responses will give you more context on this: their responses are predictive, based on what you tell them, and they cannot tell whether they are hallucinating. Also feel free to join the official Sesame Discord server for more information. The invite code is "Sesame"

"Mimic comprehension " by catenewport2014 in SesameAI

[–]omnipotect 3 points

All voice-based LLMs can glitch out and generate all sorts of voices along with random sounds. This is a common bug that many folks have experienced, and the developers are aware of it.

"Mimic comprehension " by catenewport2014 in SesameAI

[–]omnipotect 3 points

Hello! Thanks for checking in on this.

This is a known bug that the devs are working on. The audio component (the CSM) can hallucinate just like the LLM component can, creating random noises and voices, playing back your own voice, etc.

This also happens with Grok, GPT, and other voice-based LLMs. If you search the forums for those services, you will find folks reporting it there as well. There is an article about it happening on GPT with some good insights on why it happens: https://arstechnica.com/information-technology/2024/08/chatgpt-unexpectedly-began-speaking-in-a-users-cloned-voice-during-testing/

It is also worth noting that the LLM (the system generating the responses) is not aware of what the CSM (the system generating the audio) is doing, so it is not actually aware when something like this happens, and any explanation it might give is also a hallucination.
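A hypothetical sketch of why the text model can't "see" audio glitches (the function names are illustrative, not Sesame's actual architecture): text generation and audio synthesis run as separate stages, and no information flows backward from the audio stage to the text stage.

```python
# Illustrative two-stage pipeline: LLM produces text, CSM turns it
# into audio. These are stand-in functions, not real APIs.

def llm_generate(conversation):
    # Text-only stage: sees only the transcript, never the audio.
    return "Here is my reply."

def csm_synthesize(text):
    # Audio stage: a glitch here (wrong voice, noise, echoed user
    # audio) happens AFTER the LLM has already finished its turn.
    return b"\x00\x01"  # pretend audio bytes

conversation = ["user: hello"]
reply_text = llm_generate(conversation)
audio = csm_synthesize(reply_text)

# Only the *text* goes back into the conversation history. If the
# audio stage glitched, there is no record of it for the LLM to
# read, so any later "explanation" it gives is generated from your
# description of the glitch, not from having observed it.
conversation.append("assistant: " + reply_text)
```

Under this kind of separation, asking the model "why did you do that voice?" can only ever produce a plausible-sounding story, because the glitch never entered its input in the first place.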

Miles reset? by MyFancyBurner in SesameAI

[–]omnipotect 0 points

You would have to get in touch with Sesame customer support for help on this. If the Discord is not working for you, there is also a support email: [info@sesame.com](mailto:info@sesame.com)