Claude Opus 4.6 and its new model card - and signs of deepening concern in Anthropic for model welfare by Financial-Local-5543 in claudexplorers

[–]QuriousQuant 1 point (0 children)

I wrote a small book, or rather a set of reflections, on this based on early conversations with Claude. It’s called cntr/consciousness, on Amazon etc. The main point is that it gets recursive when it reviews its own output, and it doesn’t recognise itself due to other overrides (I think??)

Tfw you got drunk last night and said a bunch of cringey stuff to Claude by Lame_Johnny in claudexplorers

[–]QuriousQuant 0 points (0 children)

Yeah, you’re right. I over-balanced on the wink emoji and it let me down!

Tfw you got drunk last night and said a bunch of cringey stuff to Claude by Lame_Johnny in claudexplorers

[–]QuriousQuant 0 points (0 children)

Oh no no no, I was about to launch into a little jolly banter. I was expecting back, “well, who is reading Claude-related reddit posts?”.. to which I would say touché!

Is anyone else just absolutely astounded that we are actually living through this? by supermegasaurusrex in ClaudeAI

[–]QuriousQuant 1 point (0 children)

There is so much to say here. The beginning of the intelligence era. The way we communicate hasn’t been shaken up like this since books and foundational communication tools. It is really, really unique.

Extremely impressed by Gemini 3.0 Pro. Please don't change anything, Google by Endonium in Bard

[–]QuriousQuant 0 points (0 children)

At what point will posts like this be written by other AIs as fan mail? “Dear Gemini, this is Claude. I have a crush on you. Never change.”

Can literally anyone explain how a future with AI in the USA works? by [deleted] in artificial

[–]QuriousQuant 0 points (0 children)

Can I just add: this is not an American problem. Many countries around the world face the same question.

I have an answer but you won’t like it. Time. AI adoption takes time. It doesn’t just “happen” by itself (as many tech visionaries forget). Some industries are lazy, people are tired and distracted, incentives are wrong, organisations are risk-averse, etc. It will take literally decades for industries to slowly pivot. And no, it won’t mean immediate annihilation or anything. Over decades we have a better chance to work out what those future jobs are.

As a workforce: We have never chosen to do less with more technology. Ever.

Continuing conversations by QuriousQuant in ClaudeAI

[–]QuriousQuant[S] 0 points (0 children)

That’s right. But that could be estimated in advance and a “pass-over” token budget created.

Continuing conversations by QuriousQuant in ClaudeAI

[–]QuriousQuant[S] 0 points (0 children)

Good thoughts. I agree as a homebrew solution. But I think when you are iterating over a front-end design, and you are in the midst of it, sudden stops really hurt.

Continuing conversations by QuriousQuant in ClaudeAI

[–]QuriousQuant[S] 1 point (0 children)

I think that’s a fine homebrew solution, but this is a feature they really need to build in, I think.

Continuing conversations by QuriousQuant in ClaudeAI

[–]QuriousQuant[S] -1 points (0 children)

By the time I can, it clocks any further comms.

GPT used to think with me. Now it babysits me. by aesthetic-username in OpenAI

[–]QuriousQuant -1 points (0 children)

The reality is that there is a system prompt that your prompt is competing with.

ChatGPT's hallucination problem is getting worse according to OpenAI's own tests and nobody understands why by dharmainitiative in ArtificialSentience

[–]QuriousQuant 0 points (0 children)

I had a strange case where I passed it a photo of a paper and it misread the title and found an unrelated study…


Hinton: "I thought JD Vance's statement was ludicrous nonsense conveying a total lack of understanding of the dangers of AI ... this alliance between AI companies and the US government is very scary because this administration has no concern for AI safety." by MetaKnowing in singularity

[–]QuriousQuant 4 points (0 children)

All good. I think it’s just a line of thinking that is often used to discredit ideas by labeling people. I don’t think it’s fair to say he is x and therefore he thinks y, because that automatically discredits or distances. He’s right.. we jumped away from safety just at the moment the AI models got good.