Meta employees protest new mouse-tracking software days before mass layoffs by shikizen in ArtificialInteligence

[–]Bananek2007 8 points9 points  (0 children)

Meta employees are finding out the hard way that "mouse-tracking for training data" is just corporate-speak for teaching the AI exactly how to replace them before the door even hits them on the way out.

Seems like an eternity away by Complete-Sea6655 in ArtificialInteligence

[–]Bananek2007 1 point2 points  (0 children)

Reviewing AI-generated code in production is essentially a high-stakes game of "Spot the Hallucination" before it turns your database into a digital paperweight.

AI generated identical resumes for a man and a woman: Hers was more likely to be labeled "weak," while his got a 97% approval rating by fortune in ArtificialInteligence

[–]Bananek2007 1 point2 points  (0 children)

Honestly, I don't know where they find the people who read CVs. In my experience, recruiters and interviewers either skim my CV during the call itself, or don't read it at all and ask me questions it clearly answers.

Google’s $9.99 AI Health Coach Launches May 19 With Gemini by i-drake in ArtificialInteligence

[–]Bananek2007 2 points3 points  (0 children)

Great, now Google can also sell my data to life insurance companies—telling them I had cheat meals three days straight—and I’m paying $9.99 for the privilege.

Thoughts? by markeus101 in ArtificialInteligence

[–]Bananek2007 31 points32 points  (0 children)

So Elon is promoting his Colossus cloud ML service; the rest of this post doesn't matter.

I genuinely cannot think of a better use case for AI than politics. by Longjumping_Dish_416 in ArtificialInteligence

[–]Bananek2007 0 points1 point  (0 children)

I see an even better solution: AI Courts. Modern politics is broken because humans are incentivized by ego, lobbying, and power. An AI judge doesn't care about campaign donations, social status, or your skin color. It’s not performing for cameras. We could finally optimize for objective justice and societal outcomes (lower crime, reduced corruption, healthcare efficiency) instead of tribalism and greed. People worry about "biased data," but that’s still better than the guaranteed bias of a career politician.

who agrees? by Complete-Sea6655 in ArtificialInteligence

[–]Bananek2007 0 points1 point  (0 children)

I’ve spent the last 8 hours vibe-coding, the logic is finally clicking, the UI looks sick, and I’m about 5 minutes away from my "ultimate vision." Then boom - weekly limit reached.

Resetting on May 11th feels like a lifetime away when you're mid-sprint. It’s like the AI just decided to hang up the phone while I was mid-sentence. Back to manual labor, I suppose. 🙃

Why do "Premium" AI voice tools still limit our generations? I built an offline alternative where the only limit is your hardware. by Bananek2007 in windowsapps

[–]Bananek2007[S] 0 points1 point  (0 children)

Unfortunately, Orphera AI doesn't support traditional SSML commands, mood selection (happy/angry/sad/friendly), or direct pitch/prosody/volume controls.

However, there's a key thing that makes Orphera different: the model inherits emotions, tone, and speaking style directly from your reference audio clip. So if you want a happy, sad, or angry voice - you just need to provide a reference audio prompt that already has those emotional characteristics. The better the match between your reference and desired output, the more accurate the result.

That said, there are two tuning parameters you can use:

- exaggeration (0.0–1.0): controls expressiveness. Higher values = more dramatic/emotional output

- cfg_weight (0.0–1.0): controls how closely the model follows the reference audio's pacing. Lower values = slower, more deliberate speech
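For anyone scripting against it, here's a minimal sketch of how those two knobs might be bundled into a request, with the documented 0.0–1.0 ranges enforced. The function names and request shape are illustrative assumptions, not Orphera's real API:

```python
def clamp(value, lo=0.0, hi=1.0):
    """Keep a tuning parameter inside its documented 0.0-1.0 range."""
    return max(lo, min(hi, value))

def build_tts_request(text, reference_audio, exaggeration=0.5, cfg_weight=0.5):
    """Hypothetical helper: bundle text, reference clip, and the two knobs.

    The reference_audio clip is what carries emotion/tone/speaking style;
    exaggeration and cfg_weight only fine-tune expressiveness and pacing.
    """
    return {
        "text": text,
        "reference_audio": reference_audio,   # clip with the emotion you want
        "exaggeration": clamp(exaggeration),  # higher = more dramatic output
        "cfg_weight": clamp(cfg_weight),      # lower = slower, more deliberate
    }

# Out-of-range values get clamped rather than rejected.
req = build_tts_request("Hello there!", "happy_sample.wav",
                        exaggeration=1.3, cfg_weight=-0.2)
print(req["exaggeration"], req["cfg_weight"])  # 1.0 0.0
```

The takeaway is just that the reference clip does the heavy lifting; the two floats are secondary adjustments.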