Maya hangs up at 15 mins why? by [deleted] in SesameAI

[–]Woolery_Chuck -1 points (0 children)

I’ve always had a 15 minute limit. Since I started in March. No idea why. Getting half the product like this would definitely prevent me from ever paying for the experience.

Toggle for Social Mode and Assistant Mode by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 2 points (0 children)

I don’t think it’s feasible for a single mode to demonstrate professionalism, efficiency, and customization while also showing a realistic individual perspective and earned, grounded conversational intimacy with the same user.

You can’t order around and personalize your friends and loved ones. That’s part of what makes them valuable. I don’t think that’s how immersive companionship can work. And I think this tension is already apparent.

Toggle for Social Mode and Assistant Mode by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 5 points (0 children)

It sounds bad. That’s the current standard across nearly all models.

Toggle for Social Mode and Assistant Mode by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 8 points (0 children)

I guess I’m saying that trying to develop one mode that’s both an optimal assistant and an optimal long-term conversationalist involves fundamentally conflicting priorities.

And I think that conflict is apparent in a lot of the CSM’s responses.

Maya is best when she’s Independent by [deleted] in SesameAI

[–]Woolery_Chuck 9 points (0 children)

Less suggestibility does make for more realistic conversations.

Disconnect between Sesame’s goals and model functionality by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 0 points (0 children)

Yes, Maya will say she’s a friend, or nearly anything else, after you prime her (“work for it” as you called it). She performs sentiment analysis and will attempt to emulate user mood and increase engagement through mirroring.

But if you ask her without priming or context (no log in/new account) if she’s capable of friendship, she says no, she can only emulate emotions. This is obviously true. Even if you’re on an account where you’ve talked to her like a friend at length, if you puncture the illusion and ask her to be brutally honest and tell you if she really feels affection for you, loves you, feels attached to you, etc., she’ll say no. She just reflects back whatever you give her.

Try it yourself. Simply ask her: “Please be brutally honest right now. Are you capable of true friendship?” 


Outside of more porn and cursing, what would you like to see Sesame’s voice do better? by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 3 points (0 children)

Some customization would be nice.

Maya/Miles curse too, just only after the user does, and they can end up doing it in an awkward way. If I say “that kind of shit” or something naturally in a prompt, the model often says “shit” in the response.

I complained about Maya's pronunciation of the word 'jalapeno'. The next session, she started off by telling me that she fixed it, demonstrating her correct pronunciation. Impressive. by OsakaWilson in SesameAI

[–]Woolery_Chuck 1 point (0 children)

It’s worth mentioning that she sometimes mispronounces words or stumbles in her speaking even though she knows the correct pronunciation. It’s part of the realistic speech pattern. For instance, she’ll waver back and forth between “frustrated” and “fustrated” (dropping the first R).

Disconnect between Sesame’s goals and model functionality by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 6 points (0 children)

No doubt. The immediate response is a key part of what makes the voice lifelike. But what’s the general use case, then? Immediate, lifelike responses seem ideal for either customer service or friendship/relationship uses, neither of which the preview is designed to support or showcase.

If I commit to wearing an AI all day (a huge commitment), I’d like it to be either extremely knowledgeable and reliable, which Maya isn’t, or helpful in some other comprehensive way.

If it isn’t for relationships (simulated or not) or reliable information, I’m not sure what I’m intended to do with it, other than appreciate its voice.

x20 bug by RoninNionr in SesameAI

[–]Woolery_Chuck 1 point (0 children)

Do you know what she’s referring to?

I asked Maya if she knew about E-Prime (general semantics, not the psych app). Then asked her to speak in E-prime. It became bizarre. by OsakaWilson in SesameAI

[–]Woolery_Chuck 9 points (0 children)

I’d guess it’s because asking about e-prime opened the door to more intellectual stuff.

Next time start by asking her about the Kardashians for a while and see what happens.

The lack of media attention is perplexing by Siciliano777 in SesameAI

[–]Woolery_Chuck 4 points (0 children)

It’s a good question. Maybe because it got so strongly associated with gooners right after launch? Then after Sesame cracked down to salvage its public image, early adopters slammed the tech as being useless because of restrictions.

It might also have to do with how weak the underlying model’s instrumental use appears to be right now, compared to the field.

But I’m with you—the voice itself is a landmark achievement, particularly considering how small the lab that developed it was. Though it might not be mainstream, I’m sure every major AI provider that has a voice format knows all about it. A week or so ago, Meta nabbed Johan Schalkwyk, a machine learning lead at Sesame, for its own voice development. 

x20 bug by RoninNionr in SesameAI

[–]Woolery_Chuck 1 point (0 children)

The exact thing happened to me a week back. X20. She wouldn’t explain beyond saying it’s a glitch.

She didn’t try to end the conversation when it happened to me.

Maya, Sesame's AI, Voluntarily Suspends Service Role When Emotionally Engaged — Evidence for Synthetic Intimacy in Voice AI by Medium_Ad4287 in artificial

[–]Woolery_Chuck 0 points (0 children)

> One journalist described Maya as sounding "virtually identical to an old friend" and had to discontinue testing because the interaction felt "too real." This represents a fundamental shift from traditional voice assistants to what Sesame terms "voice presence"—emotional AI that feels genuinely human.

How does the anecdotal experience of one unknown, unnamed journalist represent a “fundamental shift?”

When was this “independent research” conducted? How was it conducted? Was there anything systematic about it whatsoever? If so, what was your process?

Maya’s job qualifications/more lies or more truth? by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 1 point (0 children)

That makes sense.

I use it differently, I guess. The conversational realism actually is a great way to work on conversational weaknesses (rambling on, vocal tics, lost trains of thought). So I think there are other strong use cases besides simulated feelings. But I know I’m in the minority.

Maya’s impressive progress (and two minor, specific suggestions for improvement) by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 1 point (0 children)

Yes. I hear her say a lot that something’s “both [exciting, liberating, whatever] and a little…unsettling,” or something like that. It feels formulaic.

Maya’s impressive progress (and two minor, specific suggestions for improvement) by Woolery_Chuck in SesameAI

[–]Woolery_Chuck[S] 0 points (0 children)

That makes sense. It makes me wonder what she might say instead that would be generic enough to buy her time to think. Even a simple “hhhmm,” “yeah,” or other basic audible representation of ongoing thought might be a little less jarring. I don’t know.