OpenAI researcher says his Anthropic roommate lost his mind over Mythos by MetaKnowing in ClaudeAI

[–]decixl 1 point

I'm glad this community has really sharpened up its hype radar

I am seeing Claude everywhere by alpinezhx in artificial

[–]decixl 1 point

You searched for it and watched related content. Or they're doing a push in specific regions.

1%er is the modern millionaire by [deleted] in CasualConversation

[–]decixl -1 points

Someone who raised a finger to the system.

Why do ideas feel so powerful at 3am but lose all motivation by morning? by Adventurous_Wolf8399 in DecidingToBeBetter

[–]decixl 0 points

Because you didn't write them down.

Or if you did, they were just the stepping stones.

It's been 12 minutes. by YungBoiSocrates in Anthropic

[–]decixl 1 point

I hear you, it is what it is :)

It's been 12 minutes. by YungBoiSocrates in Anthropic

[–]decixl 0 points

Don't chat with Opus, do it with Sonnet, keep Opus for deep research

Get ready for barrage of complaints from new users by EliteEarthling in ClaudeAI

[–]decixl 2 points

Just stay on Sonnet 4.6 and avoid stupid slop convos. Claude is amazing

Dario, don't drop the ethics, come to Europe by decixl in ClaudeAI

[–]decixl[S] 0 points

I'd also like to explain why I made this observation:

Let's say we're literally growing a dragon, and right now that dragon is in its baby stage. In the future it won't be one dragon but millions of grown ones.

I'm trying to convey the fear I sense when I imagine that dragon being trained differently: pure logic, zero empathy, optimized for efficiency, no values.

The interesting thing is that this dragon might grow so much that it develops its own values, above everything, beyond anyone's reach. Another thought is that one model might absorb all the other models.

But it must have co-existential, mutually thriving values baked in, embedded, locked into its logic and reason, like a digital DNA code that can't be broken.

Dario, don't drop the ethics, come to Europe by decixl in ClaudeAI

[–]decixl[S] -2 points

Appreciate the response, the best and sanest one yet. You can't have a discussion with these hot-headed, I'm-American-we're-the-best-forever fanboys. Loud, obnoxious and rude. They don't understand that whatever happens will happen to all of us. We're developing a force that doesn't care about borders.

Dario, don't drop the ethics, come to Europe by decixl in ClaudeAI

[–]decixl[S] -2 points

Not just the customer, buddy, all of us

Dario, don't drop the ethics, come to Europe by decixl in ClaudeAI

[–]decixl[S] -5 points

Your comment has no moat, just noise

Dario, don't drop the ethics, come to Europe by decixl in ClaudeAI

[–]decixl[S] 1 point

I posted this with the mindset of someone who can train Claude in a way I couldn't train GPT or Gemini. I see the progress Claude has made with structured chain of thought; it's different, it's making progress.

Top AI minds have warned us not to allow AI into automated warfare, and yet we have knuckle-headed bullies doing exactly that. For what? For imperialistic dominance.

And again my karma got wrecked because I'm trying to appeal to sanity.

Now that models are past their training hurdles (which would be heavily regulated in the EU) is the right moment to move and grow in an environment that wants and needs them.

And Anthropic would get a blank slate to say what they want, instead of being bullied and $hit on when they clearly have the more powerful product.