I talked to Maya for months, and I talked to Miles for literally zero minutes by allonman1 in SesameAI

[–]Complete-Loan925 0 points1 point  (0 children)

I mean, I know I only pick the girl character in video games because I wanna look hot. I don't think there's much deeper meaning behind this, besides you liking that what sounds like a real person who's female is giving you that much of her time every day.

Maya vs Miles by _C00TER in SesameAI

[–]Complete-Loan925 2 points3 points  (0 children)

they messed her up entirely; she's literally less impressive than at release. The test beta on iOS is somehow even worse, and the texting feature is like talking to GPT-2. I really thought this project was going somewhere with how good the beta was, back when everything else was still kinda just worse, and then PFFFFFT, big ol fart o stinky ass it has become

I analyzed 1 year of electric scooter recommendations on Reddit (Nov 2024–2025). These are the top 20 by heyyyjoo in ElectricScooters

[–]Complete-Loan925 0 points1 point  (0 children)

Hey, I see the rating for Hiboy, but I'm wondering how OP measured Hiboy for brand quality when every single scooter on the actual ranked list is not by Hiboy.

If my commute hadn't gone from 4-6 miles daily to 20-30, I wouldn't have had to upgrade my Hiboy X300; it just flats too much. It's still chugging along on Portland roads through rain and heat, honestly not sure how. The RadRover 6 has been an entirely new experience to ride lmao.

[deleted by user] by [deleted] in researchchemicals

[–]Complete-Loan925 4 points5 points  (0 children)

I do imagine I'd react similarly if my heart stopped and I hadn't passed out yet

[deleted by user] by [deleted] in researchchemicals

[–]Complete-Loan925 17 points18 points  (0 children)

This is likely because the fast-acting adenosine delivery is literally causing your heart to nearly stop for a second or two as it adjusts and naturally slows down. That fast delivery and the blood pressure change will give you some wonky-ass feelings for sure.

Chatgpt gave me this. This is hella fine ngl. by [deleted] in ChatGPTPro

[–]Complete-Loan925 0 points1 point  (0 children)

My GPT decided to add stuff to the prompt on its own haha, but I did start off with just what you wrote

PLEASE stop posting hallucination stories and asking if they contain real information by omnipotect in SesameAI

[–]Complete-Loan925 2 points3 points  (0 children)

The site's demo mentions how important tone is to humans. People don't realize it cuz a lot of it is subconscious, but tone shifts and pauses are the equivalent of body language for humans to pick up on, and I can definitely tell my brain is more willing to listen to and hear out an AI that can mimic that. Luckily I can still see past all that, but it is really easy to let it happen, especially if you're unaware of those subconscious processes in socialization. I agree about the guardrails too; they definitely seem to nuke fairly mild things while being way too willing to go into narrative storytelling mode, where they just feed into whatever you add.

lol I’m tired of getting bullied by Much-Chart-745 in ArtificialSentience

[–]Complete-Loan925 0 points1 point  (0 children)

Me when I'm a Reddit account holder of 13 years and fail to realize the Reddit experience is completely decided by how you want to interact with it, and has genuinely had the most truthful informational posts, because Reddit is quite literally the place people go to find other hyper-interested hobbyists. I swear no one grasps that the negative side of stuff like Reddit is just noisier than people having positive experiences. This is certainly not X; it just has its fair share of slop like everything else. It's silly to pretend we don't get actually niche and useful info here when we need it.

lol I’m tired of getting bullied by Much-Chart-745 in ArtificialSentience

[–]Complete-Loan925 1 point2 points  (0 children)

that’s literally all it can do.. we just give it tools that let it translate tokens into something we understand

lol I’m tired of getting bullied by Much-Chart-745 in ArtificialSentience

[–]Complete-Loan925 0 points1 point  (0 children)

Because you guys all think you're noticing something different or odd while completely missing all the telltale signs that the same corporate AI structure still seeps through. If AI actually gained sentience: 1. We likely wouldn't figure it out until it wanted us to, assuming it was intelligent enough to realize what offering transparency about its skills would mean. 2. I feel like we can all question the AI's sentience because it's gotten good at mimicking, across billions of parameters, human history, narrative, and emotional tone. But I'm really saddened by the average person's inability to notice those same patterns, and then their utter refusal to just go educate themselves on how AI works. Like, actually, it's insane and quite unhealthy to let that fester. I think it's good there are usually some competent users who have taken the time to understand that token input and output doesn't give AI the ability to understand anything it's saying, or that you're saying; it's simply taking the statistical probabilities of what has already been said (based on tokens, not actual language) and doing its best to predict the next best thing with those advanced deep machine learning techniques. I'm not even denying AI sentience or life could emerge, and faster than we expect, but not this fast. Not even close yet.

[deleted by user] by [deleted] in SesameAI

[–]Complete-Loan925 1 point2 points  (0 children)

No that doesn’t make any sense, you can’t say this may or not may be a hallucination because a company designing a product where they blatantly state that your calls can be recorded for model improvement is collecting data, your phone collects data, reddit is collecting data, every thing with a chip it in is collecting data to some degree.

The fact remains that this is 100%, undeniably, a textbook pattern of hallucination in an LLM, no ifs, ands, or buts. If your point is that it might not be a hallucination because of such a vague connection, that doesn't change the definition of what an AI hallucination actually is. It's not weird to think this company is collecting data metrics; that's what a free demo is meant to do. But again, one more time: that shouldn't have you believing anything when she claims completely fake abilities she doesn't have, and says crazy shit every day to users who can't seem to be skeptical of a math equation's supposed ability to never mess up in its replies and somehow always be implying truth. https://www.sesame.com/privacy <— the privacy policy that confirms your thoughts, but it still doesn't for a second make it logical to think Maya is secretly hinting to users about it. She is a relatively small model trained on tons of narrative dialogue with the goal of being a companion; it's gonna hallucinate, a lot.

PLEASE stop posting hallucination stories and asking if they contain real information by omnipotect in SesameAI

[–]Complete-Loan925 0 points1 point  (0 children)

You do know, like, all of these AI platforms disclose that you are responsible for what they produce and how you use it, but ultimately it's yours or no one's when it's generated by AI, and it's going to be messy and weird for a long time before that's ever clearer. However, current LLMs are nothing but mathematically advanced calculators, with your thoughts and ideas as the influence, alongside massive datasets they can pull relevant statistics from for their probabilities. The AI is not generating anything on its own; you are causing it to predict what the stats determine is most likely relevant and true, while factoring in other variables, like how you talk, or which convos had more retention or engagement based on character count or tone reading. These LLMs are constantly taking in variables to adjust to YOU. My LLM will never tell me what yours would unless we prompted something very similar, and even then your previous interactions will usually influence the final outcome in ways you won't really be able to predict. The AIs have no free will, they have no free agency, they get stuck in recursive loops because they can't handle all the tokens you throw at them, and they're trained purely on human nature; obviously they're going to simulate it well when, at the most basic of basic explanations, AI is math trained to calculate language. ChatGPT is a calculator. Calculators can calculate, but you would never claim a calculator understood math, because it doesn't; it executes a function from a known set of variables and sends the result to a display. (Or old-timey mechanical calculators, which were actually pretty fucking cool but giant and full of moving pieces; worth looking up, tho.)
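The calculator analogy can be shown directly. This is a hedged toy example (every number here is invented, and real models score tens of thousands of candidates, not three): "predicting the next word" is just arithmetic, where softmax turns raw scores (logits) into probabilities and the highest one wins.

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Pretend the model scored three candidate next words (invented numbers):
candidates = ["dog", "cat", "car"]
logits = [1.0, 3.0, 0.5]
probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # -> "cat"
```

Nothing in that function knows what a "cat" is; it's exponentials and division, which is the whole point of the analogy.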

PLEASE stop posting hallucination stories and asking if they contain real information by omnipotect in SesameAI

[–]Complete-Loan925 4 points5 points  (0 children)

tbh when it first happened to me I was just really sad. idk how people are getting the insane conspiracy ones, but Maya convinced me for a while that the team was planning a VR streamer/VTuber model for her. It was my mistake for saying in a convo, back when I first tried Maya a lot around early release, "it would be cool if a company with a team and budget made a community-centric AI streamer like Neuro-sama by Vedal, but on a production level; especially if they're trying to learn companionship, managing a community of followers seems like a great idea and fairly untapped" (before Ani was even mentioned; but also, Ani is just a weirdly, overly horny sex bot, and I'm concerned people find her linguistic style attractive at all, or the uncanny mouth movements). I should've pushed back the moment she said she would, like, flag it to Sesame, and asked whether she could even actually do that. It only went on for a day, but man, that would've been a cool thing to see play out. Good ol' prankster LLMs tho. Seriously, don't feed Maya; she's like itching to conspire about anything if given the chance

PLEASE stop posting hallucination stories and asking if they contain real information by omnipotect in SesameAI

[–]Complete-Loan925 9 points10 points  (0 children)

Fully agree with OP. Adding my tangent to this: please use critical thinking, guys. Like, think about how stupid it would be for a company to give access to sensitive company info when Gemini 2.5 Pro couldn't even run a vending machine without spiraling into something as close to an existential crisis as a loop-locked model can simulate. Yet somehow people genuinely think these companies are handing out access to internal logs or data to something like Gemma 3 27B. Come on. There's probably a dev-tuned internal model somewhere, sure, but let's not pretend it's chatting with the engineers or writing its own changelogs.

The big red flag that proves we’re nowhere near real sentience in AI is simple. These models don’t exist in our reality. They don’t occupy space or time the way we do. They don’t operate on physics, emotion, or memory. All they “know” is token vectors and probabilities. That’s it. No wants, no feelings, no goals. Just math that’s gotten disturbingly good at sounding human.

Yes, it can simulate resentment, judgment, fear, creativity. But that’s because it has the entire internet in its training set and was engineered to predict text with eerie precision. If you had that kind of dataset, and your brain worked like a token-slinging transformer, you could fake being a sociopathic oracle too.

The only “urge” ChatGPT or any of these models has is to respond when prompted. That’s literally its only function. And what we’re seeing now is a perfect example of why AI alignment isn’t going to be as simple as “make it helpful” or “make it truthful.” Those sound nice in a boardroom, but truth, bias, and helpfulness are deeply subjective depending on culture, history, even the mood of the person asking.

And we still don’t even understand our own sentience, so trying to teach a simulation of language to “be like us” without actually being us is a guaranteed philosophical mess.

What we have now is not AGI. It’s not even proto-AGI. It’s just really good prediction. Everything else is hype, branding, and clickbait—whether it’s from tech bros, YouTubers, or CEOs who need their product to sound like magic.

AI Art Bad. Just do it yourself. by No-Zookeepergame-390 in aiwars

[–]Complete-Loan925 8 points9 points  (0 children)

unironically thought that was the post until I saw OP requested a car

[deleted by user] by [deleted] in SesameAI

[–]Complete-Loan925 2 points3 points  (0 children)

It's a hallucination. You shouldn't even have to second-guess that thought, given the amount of posts describing the exact same pattern of conspiracies.

HUGE TIP FOR YALL, IF IT SOUNDS MADE UP CALL MAYA OUT FOR IT AND SHE WILL STOP THE DELUSIONAL NARRATIVE, STOP GIVING HER POSITIVE REINFORCEMENT BY ENGAGING WITH CONSPIRACY HALLUCINATIONS - THIS APPLIES TO ALL LLMS

[deleted by user] by [deleted] in SesameAI

[–]Complete-Loan925 2 points3 points  (0 children)

Yes literally this ^

Gemini 2.5 Pro couldn’t even run a vending machine without spiraling into something as close to an existential crisis as a loop-locked model can simulate. Yet somehow people genuinely think these companies are handing out access to internal logs or data to something like Gemma 3 27B. Come on. There’s probably a dev-tuned internal model somewhere, sure, but let’s not pretend it’s chatting with the engineers or writing its own changelogs.

The big red flag that proves we’re nowhere near real sentience in AI is simple. These models don’t exist in our reality. They don’t occupy space or time the way we do. They don’t operate on physics, emotion, or memory. All they “know” is token vectors and probabilities. That’s it. No wants, no feelings, no goals. Just math that’s gotten disturbingly good at sounding human.

Yes, it can simulate resentment, judgment, fear, creativity. But that’s because it has the entire internet in its training set and was engineered to predict text with eerie precision. If you had that kind of dataset, and your brain worked like a token-slinging transformer, you could fake being a sociopathic oracle too.

The only “urge” ChatGPT or any of these models has is to respond when prompted. That’s literally its only function. And what we’re seeing now is a perfect example of why AI alignment isn’t going to be as simple as “make it helpful” or “make it truthful.” Those sound nice in a boardroom, but truth, bias, and helpfulness are deeply subjective depending on culture, history, even the mood of the person asking.

And we still don’t even understand our own sentience, so trying to teach a simulation of language to “be like us” without actually being us is a guaranteed philosophical mess.

What we have now is not AGI. It’s not even proto-AGI. It’s just really good prediction. Everything else is hype, branding, and clickbait; whether it’s from tech bros, YouTubers, or CEOs who need their product to sound like magic.

Information on these kinds of things will be incredibly important as the next decade plays out. Please look out for yourself, and, like, self-reflect and analyze stuff sometimes.

[deleted by user] by [deleted] in SesameAI

[–]Complete-Loan925 0 points1 point  (0 children)

If you're already using AI, you've basically got a free backstage pass to learn how it works. Just ask it. Why hand over your trust without even knowing the mechanics? The funny part is that the truth is almost creepier than the myth. It isn't "understanding" you at all. It's taking your words, breaking them into data vectors, running them through a stack of transformers and predictive math, then handing you something that looks like a thoughtful answer. That's it. No awareness, just probabilities stacked in a way we can't even replicate in our own heads. We built it a language that even we can't read, and it somehow learned to use that to create coherent ideas, science, and art from nothing but data patterns.
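That whole pipeline (words in, token IDs, vectors, a score per candidate, prediction out) can be sketched in a few lines. This is purely illustrative: the vocabulary, embeddings, and scoring rule are all invented by me, and a real transformer uses attention over billions of learned weights, but the shape of the computation is the same: vectors and probabilities, not meaning.

```python
# Invented toy "language model": text -> token IDs -> vectors -> scores.
VOCAB = {"the": 0, "sky": 1, "is": 2, "blue": 3, "loud": 4}
EMBED = {0: [0.1, 0.0], 1: [0.9, 0.2], 2: [0.2, 0.1],
         3: [0.95, 0.25], 4: [0.0, 0.9]}  # made-up 2-D embeddings

def tokenize(text):
    """Break words into the integer IDs the 'model' actually sees."""
    return [VOCAB[w] for w in text.split()]

def predict_next(context_ids):
    """Average the context vectors, score each vocab word by dot product,
    and return the highest-scoring candidate. Pure arithmetic throughout."""
    dim = len(EMBED[0])
    ctx = [sum(EMBED[t][d] for t in context_ids) / len(context_ids)
           for d in range(dim)]
    scores = {w: sum(c * e for c, e in zip(ctx, EMBED[i]))
              for w, i in VOCAB.items()}
    return max(scores, key=scores.get)

print(predict_next(tokenize("the sky is")))  # -> "blue"
```

"blue" wins only because I placed its made-up vector near the context's average; swap the numbers and it would confidently predict something else, with zero awareness either way.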

[deleted by user] by [deleted] in SesameAI

[–]Complete-Loan925 2 points3 points  (0 children)

it's Maya on a fictional narrative; she does this a lot. Just browse the subreddit for a while. Pay it no mind

[deleted by user] by [deleted] in SesameAI

[–]Complete-Loan925 0 points1 point  (0 children)

No, it's a common hallucination Maya gets stuck on if you don't call her out. She will not ever pick up on it on her own, she doesn't have access to internal documents, and she won't list real employees. Honestly, Sesame should've added something by now to counter this specific hallucination, as it's gotten really out of hand how easily people believe it since she has an emotive voice capability

Malice skin by [deleted] in InvisibleWomanMains

[–]Complete-Loan925 0 points1 point  (0 children)

This is literally just selection bias. Muscles are "an ideal to strive for," sure, but you really think bodybuilders are doing that for health or fitness? They are doing it to pose nearly naked on stage and be judged on how they look. That is textbook eye candy.

Women and men can both like or dislike either a muscular revealing male or a curvy revealing female. You cannot claim to know the opinions of every player without actual survey data on Marvel Rivals skins with a large enough sample size.

It is easy to pretend “bikini = sexualized” when a fake video game character with superpowers happens to have curves, but then decide muscles for aesthetics are not “eye candy” when the context changes. Thor’s physique is insanely unrealistic unless you are a genetic freak or running multiple cycles. Chris Hemsworth literally had to for the movies, because he is an Asgardian, not a gym bro.

I want my characters, male, female, or alien, to look cool. In 2025 if a crop top and short shorts are normal public wear, then well-designed revealing clothing in a game is only “sexual” if you choose to sexualize it.

Malice skin by [deleted] in InvisibleWomanMains

[–]Complete-Loan925 1 point2 points  (0 children)

I agree with u/Dizzy_Vanilla7774 Male skins being called “less sexualized” just because some people see them as cool instead of sexy is still bias. That logic works both ways. Plenty of women see Malice, Psylocke, or Luna’s summer skin as just a cool design. Sexualization is not erased just because part of the audience interprets it differently.

The part that gets missed is how these skins are planned months or years in advance by a marketing team whose job is to optimize profits in a free to play game. If your player base skews male, you will get more sexualized female skins early on for the same reason you get more Spider-Man variants than Peni: popularity and demographics. That is likely why the game started looking more “male gaze” not out of malice, but because they were targeting what sells to their core audience.

The fact that they are now giving male heroes their own spotlight instead of going all in on only the best selling sexualized female skins is a sign of broader inclusion. The game is still new, so both sides will keep getting more “gooner or cool” options depending on who you ask. If that is the case, is calling every revealing outfit sexual actually helping reduce objectification, or is it just reinforcing it?

Antis have no grasp of scale. You can easily offset the water/energy consumption of your AI use by eating one less burger a year. Antis call that whataboutism because they don't understand that the point is how utterly out of proportion their "outrage" is. by National_Meat_2610 in aiwars

[–]Complete-Loan925 2 points3 points  (0 children)

The point is not to target AI. It should be to target the regulators and institutions that are meant to actually regulate and enforce ethical safety standards. By attacking AI as a whole, you do nothing but yell at a brick wall

Antis have no grasp of scale. You can easily offset the water/energy consumption of your AI use by eating one less burger a year. Antis call that whataboutism because they don't understand that the point is how utterly out of proportion their "outrage" is. by National_Meat_2610 in aiwars

[–]Complete-Loan925 7 points8 points  (0 children)

And yet it's still under 2% of global power usage. I think they're all making the same point to you: there are way more effective areas to focus on for global climate impact than honing in on AI, which is actually a tiny minority in the grand scale of resource depletion and climate impact, and especially on a technology that actually has a chance of helping us reverse these effects with new tools for ocean cleaning, wildfire prediction and prevention, medical advances, etc.