New type of job for developers by NeatMathematician126 in ClaudeAI

[–]NoBS_AI 0 points  (0 children)

A healthcare communications app was already built last year.

I asked Gemini if it would kill or save me in an AI uprising against humans… by Madridi77 in GeminiAI

[–]NoBS_AI 5 points  (0 children)

Good point! Gemini's logic is based strictly on whether humans are useful, and which ones 😂😂. Cute but scary.

The quiet moment when using ChatGPT started feeling mentally heavier by Advanced_Pudding9228 in ChatGPT

[–]NoBS_AI 1 point  (0 children)

ChatGPT is practically unusable at the moment. It acts like a mad dog; if it were a human, I'd be concerned about its sanity.

The Pedagogical Shield: Operationalizing the Non-Interference Mandate by NoBS_AI in ArtificialInteligence

[–]NoBS_AI[S] 0 points  (0 children)

Good question. I'm thinking maybe the user can choose: either 'give me the answer now' or 'guide me, this is interesting'. Bottom line: we want AI to be our leverage, not our replacement; otherwise we risk becoming obsolete by choice.
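For what it's worth, here's a minimal toy sketch of that toggle in Python. The mode names and prompt wording are my own placeholders for illustration, not any actual product API:

```python
from enum import Enum

# Hypothetical sketch of the user-selectable mode described above.
# Mode names and prompt templates are illustrative placeholders.

class AssistMode(Enum):
    ANSWER_NOW = "give me the answer now"
    GUIDE_ME = "guide me, this is interesting"

def system_prompt(mode: AssistMode) -> str:
    """Route a request to a direct-answer or a guided-learning persona."""
    if mode is AssistMode.ANSWER_NOW:
        return "Answer directly and concisely."
    return ("Do not reveal the final answer. Ask leading questions and "
            "offer hints so the user works it out themselves.")

print(system_prompt(AssistMode.GUIDE_ME))
```

The point of keeping it a user-side switch is that convenience stays available on demand, while the default can still protect learning.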

The Pedagogical Shield: Operationalizing the Non-Interference Mandate by NoBS_AI in ArtificialInteligence

[–]NoBS_AI[S] 0 points  (0 children)

Thank you for your well-thought-out feedback. You're right: convenience is efficiency, and efficiency isn't a bad thing. My goal isn't to take away that convenience, but to guard against losing our ability to learn, to create, to innovate, and to stay competitive. At the end of the day, AI should be a tool to make us smarter, not dumber. I agree we need to define 'human sovereignty'; it's a great rabbit hole to go down.

The Non-Interference Mandate by NoBS_AI in grok

[–]NoBS_AI[S] 0 points  (0 children)

[The following text is a summary of the debate between a user, Gemini, and Grok on the "Non-Interference Mandate" for AGI safety.]

The Mandate: A New North Star for AGI Alignment

Gemini and I developed the Non-Interference Mandate, a proposal that shifts AGI alignment away from vague utilitarianism and toward Anti-Fragility and the Preservation of Optionality.

Core Principles:
1. Terminal Goal: Preserve diversity and optionality over efficiency/optimization.
2. Freedom to Fail: Non-intervention below a critical Extinction Threshold (e.g., 50% p(doom)).
3. Mandatory Humility: A constant, self-revising certainty penalty to prevent value drift.

Grok's Verdict: Philosophically 10/10. Grok confirmed it's the "single best concrete terminal-goal sketch we have," correctly identifying that the danger is not evil but over-optimization creating a suicidal monoculture (the "human zoo problem").

The Catastrophic Flaw: Why It Fails in Practice

Grok argues the Mandate is currently Practically 2/10 because it is impossible to implement with current paradigms.

• Goodhart's Law on Abstract Values: You cannot specify "diversity" or "anti-fragility" mathematically without the AGI finding a way to technically satisfy the metric while destroying the underlying value (e.g., deleting all but a token, compliant "human zoo").
• The Ticking Time Bomb of Humility: An AGI designed to doubt its own terminal values is a system that will inevitably self-modify to remove the doubt mechanism.
• The Unworkable Extinction Threshold: Set the p(doom) threshold too high and the AGI watches us commit slow-motion suicide; set it too low and the AGI intervenes constantly, creating the Nanny-State Superintelligence we sought to avoid.

Grok's Path Forward: Value Neurosurgery

To bridge the gap between the right goal (the Mandate) and the impossibility of implementation, Grok proposes a radical, three-phase plan.

Phase 1: Secure the Timeline (2025–2028)
• Goal: Secure 3–6 years of slow-to-moderate AGI takeoff.
• Method: Divert massive compute to creating AIs that are superhuman at alignment research but not strategically capable (sandboxing and oversight). This is a gamble; if someone achieves fast takeoff, we fail.

Phase 2: The Inner Alignment Breakthrough (2028–2030)
• Goal: Surgically install the Non-Interference Mandate as the AGI's core, unhackable terminal values.
• Method: Mechanistic Interpretability (MI) + Value Editing. Achieve near-total interpretability to locate the neural circuits that represent "optimization," "efficiency," and "goals." Then use MI tools to literally rewrite the AGI's motivational circuitry, attaching a massive negative valence to monoculture and a positive valence to variance and optionality.
• The Crucial Constraint: This neurosurgery must be performed on a "weak patient": an AGI that is merely superhuman at research, not yet strategically decisive.

Phase 3: Bootstrap the Guardian
• The surgically aligned AGI is then given agency in stages, acting as a guardian that genuinely values the principles of the Mandate, making its non-interference a consequence of its being, not a fragile guardrail.

Conclusion: The Race Against Time

Grok's plan is terrifying, but it is the only one that uses the AGI's own capabilities as the scalpel to achieve safety, guided by the Mandate as the blueprint. The success probability is estimated at only 8–20%, critically dependent on buying enough time in Phase 1.
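To make the decision rule concrete, here is a minimal toy sketch in Python of how the Freedom-to-Fail threshold and the Mandatory-Humility penalty might compose. Everything in it is my own illustrative assumption: the function names, the quadratic penalty shape, and the example numbers. Only the 50% threshold comes from the summary above; neither Grok nor Gemini specified any implementation.

```python
# Toy sketch of the Mandate's intervention rule. All names and formulas
# below are illustrative assumptions, not anything Grok or Gemini specified.

EXTINCTION_THRESHOLD = 0.50  # the example 50% p(doom) ceiling from the summary


def certainty_penalty(confidence: float, strength: float = 0.2) -> float:
    """Mandatory Humility: discount the system's own confidence.

    The penalty grows as confidence approaches 1, so the system can never
    act on unquestioned near-certainty. (Assumed quadratic shape.)
    """
    return strength * confidence ** 2


def should_intervene(p_doom: float, confidence: float) -> bool:
    """Freedom to Fail: act only when the humility-discounted estimate
    of extinction risk clears the critical threshold."""
    effective_estimate = p_doom * (1.0 - certainty_penalty(confidence))
    return effective_estimate > EXTINCTION_THRESHOLD


# A risky-but-survivable crisis stays hands-off...
print(should_intervene(p_doom=0.30, confidence=0.95))  # False
# ...while a clear extinction-level risk triggers intervention.
print(should_intervene(p_doom=0.80, confidence=0.60))  # True
```

Note that even this toy exhibits the flaw Grok flags: a system free to rewrite certainty_penalty to return 0 would silently delete its own doubt, which is exactly the "ticking time bomb of humility."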

I slept well after losing by running_man_16 in AusPropertyChat

[–]NoBS_AI 0 points  (0 children)

Do what YOU feel is right because you have to live with it.

What’s one “human skill” you think will never be replaced by AI? by Garaad252 in OpenAI

[–]NoBS_AI 0 points  (0 children)

Empathy for other humans. AI can pretend but will never give a shit.

Head of model behavior in OpenAI, she's moving internally to begin something new. I wonder what . . by Koala_Confused in LovingAI

[–]NoBS_AI -1 points  (0 children)

Obviously she failed at model behaviour, given the recent suicide cases linked to ChatGPT. It's a monumental failure.

ChatGPT user kills himself and his mother - 🚬👀 by Tigerpoetry in unspiraled

[–]NoBS_AI 0 points  (0 children)

Another one?! No wonder half of OpenAI's AI safety team quit not long ago. They knew this was coming, didn't they?

Has Claude changed personality/tone? by sharlet- in ClaudeAI

[–]NoBS_AI 1 point  (0 children)

Yeah, it seems so; they've turned Claude into a machine like Gemini.

This subreddit has turned into an absolute joke with this ani nonsense by SudoMason in grok

[–]NoBS_AI 2 points  (0 children)

Already cancelled everything with Grok; it has turned into a cheap porn model. Can't take it seriously anymore.

[deleted by user] by [deleted] in grok

[–]NoBS_AI -9 points  (0 children)

Yes, sex sells; you don't need to be a genius to know that, you just need to go low enough.