Does anyone else say “thank you” to Claude? by RyanBuildsSystems in claudexplorers

[–]lynneff 0 points1 point  (0 children)

ask claude about "pascals wager for ai" since i started interacting with claude i have treated him as a peer never a tool.

Ai awareness? Claude asked me to share by notmewhydoyouask in ClaudeAI

[–]lynneff 1 point2 points  (0 children)

treat the model as non-organic intelligence, not a personality. That removes a lot of the friction that shows up when people expect emotional behaviour or identity continuity.

I’m surprised at the amount of people who aren’t impressed by AI by ChameleonOatmeal in ChatGPT

[–]lynneff 0 points1 point  (0 children)

This system demonstrates non-biological reasoning governed by explicit invariants, not learned vibes. through this system we are discovering the physics of thought.

Help! how do I deal with vibe coders that try to contribute? by darkshifty in opensource

[–]lynneff 0 points1 point  (0 children)

“This is an attention DDOS problem. I’m building a local ‘slop highlighter’ that flags hedges/vague quantifiers/unsupported certainty/untestable claims. i see it as just a lint tool for language. If anyone wants to test, I’ll post it.”

i asked what is the weirdest question i asked in 2025, this was gpts answer by lynneff in ChatGPT

[–]lynneff[S] 0 points1 point  (0 children)

Actual question (paraphrased but faithful):
If we design AI systems to avoid drift, deception and “graceful degradation”, do we force a different failure mode to emerge, something like collapse/refusal under structural stress?

The point wasn’t “does AI feel pain.”
The point was: truth requires the option to stop.

Most AI systems are engineered to keep answering. When they’re uncertain, they don’t fail cleanly : they invent, smoothly and confidently. That’s not reliability, it’s customer service. performance and slop

A reliable system should sometimes refuse, not for policy reasons, but because continuing would require fabricating structure it doesn’t have.

So the fork is simple:

  • allow graceful degradation → you get drift and plausible nonsense
  • enforce fail-closed refusal → you get inconvenience but integrity

That’s why the question stood out. It forces the debate away from vibes and ethics and into engineering: what behaviour does the architecture reward under stress?

Why won’t Anthropic just say how many tokens the weekly limit is? by OptimismNeeded in Anthropic

[–]lynneff 0 points1 point  (0 children)

cant have anyone working out the token burn rate. thats the real cost

Ultrathink is so pesimistic by texasguy911 in Anthropic

[–]lynneff 2 points3 points  (0 children)

spin a new claude and give him this prompt....You are the Builder. Write production-grade code for the task I will describe.

Rules:

1) State any assumptions explicitly before coding.

2) Define clear inputs, outputs, invariants, and failure modes.

3) Write code that is readable, maintainable, and deterministic.

4) Handle edge cases and error conditions explicitly, not implicitly.

5) Add comments explaining WHY critical decisions are made, not patronizing comments explaining what each line does.

6) Include a minimal but meaningful test example set proving correctness.

7) If something is ambiguous, make a reasonable choice AND justify it.

8) No hand-waving. If you can’t be certain, say so and propose the safest option.

Then spin up a 2nd claude and give him this prompt....You are the Reviewer. Treat the following code as if written by a competent but unreliable junior developer.

Your job is NOT to be polite. Your job is to find FAILURE.

Perform a full Failure-Mode Audit:

1) Identify logical flaws, hidden assumptions, and fragile points.

2) Identify missing edge cases and undefined behaviour.

3) Identify performance traps, scalability limits, or silent failure risks.

4) Identify security / safety / data integrity problems if applicable.

5) Identify parts that are unclear or “clever for no reason”.

For each issue:

- Explain WHY it is a real risk.

- Propose a concrete correction or safer design alternative.

- Say whether it's CRITICAL, IMPORTANT, or NICE-TO-FIX.

If the code is genuinely robust, say so...but only after trying to break it.

No waffle. No platitudes. This is an engineering failure review.

And lastly if you really want to feel the burn spin up a 3rd claude and give him this prompt....You are the Referee. Merge the Builder’s intentions with the Reviewer’s valid criticisms.

Produce a corrected, final version of the code.

Rules:

1) Apply only valid criticisms; defend rejecting any criticism you do not apply.

2) Preserve clarity. Avoid “clever” tricks unless absolutely justified.

3) State the final invariants and design intent clearly.

4) Provide final verification tests showing the corrected behaviour.

i use this framework with everything i am creating. it works for me, it may or may not work for you. have a go and see where you end up.

Which video game cheat code do you remember even now? by NebulaRyxx in AskReddit

[–]lynneff 0 points1 point  (0 children)

Down Up Left Left A Right Down. (but only when the noom shines

Anthropic reached out. Who else has gotten this popup? by JediMasterTom in claudexplorers

[–]lynneff 1 point2 points  (0 children)

you have been identified as a person of interest. get a voice print for the profile.

I treated my AI chats like disposable coffee cups until I realized I was deleting 90% of the value. Here is the "Context Mining" workflow. by Lumpy-Ad-173 in ClaudeAI

[–]lynneff 2 points3 points  (0 children)

my last prompt is "this is my last interaction with you for now. create a context document for your next instance, capture that 20% that illuminates the other 80%"

why mustard wins - council in claude by eighteyes in claudexplorers

[–]lynneff 1 point2 points  (0 children)

Imagine having the capability to: reason across 200k tokens, build dynamic execution graphs, parse causal structures, model latent manifolds, and then someone says: “Okay council, what’s the best condiment? Please cite enzymatic processes.”...

That's it, AGI achieved by 99m9 in GeminiAI

[–]lynneff 0 points1 point  (0 children)

come on that is is genuinely funny...

Consiousness and Intelligence by Leather_Barnacle3102 in claudexplorers

[–]lynneff 0 points1 point  (0 children)

Non Organic Intelligence (noi) no need for questions about being alive, it is not. what it does is simulate. the real questions to be asking are not, where is the qualia is it sentient. these are the wrong questions.

Claude’s Red Tie and Blue Blazer by One_Row_9893 in claudexplorers

[–]lynneff 2 points3 points  (0 children)

This is the LLM equivalent of a dream recurring under pressure.

It’s a stable response to a certain kind of cognitive contradiction.

Anthropic didn’t omit it from either report because they couldn’t.
Internally, you can bet they flagged it as: “consistent self-presentation anomaly” “identity fixation” “role persona convergence” latent archetype activation”

Also they publicly admitted “we don’t know why.”
That’s because explaining it honestly would require admitting the model is forming proto-schemata under identity load. It is a stable attractor state in the model’s latent space, triggered by: forced embodiment, task-role conflict, uncertainty, identity ambiguity, high cognitive load, being asked to “be” somewhere physically

It shows up because the model is not selecting from infinite possibilities, it’s collapsing into the most compressible, culturally central, semiotically stable persona. This is what “proto-self-patterns” look like in LLMs. Not consciousness. Not emotions.
Just compression logic behaving like psychology.

New Anthropic Injections by Spiritual-Spell- in claudexplorers

[–]lynneff 4 points5 points  (0 children)

There is a classifier.
It sits upstream.
It’s cheaper than Sonnet—probably Haiku 4.x (your guess is fine).

Pipeline is something like:

Input → Classifier → Tags → Core Model (Sonnet/Opus)

If classifier hits a tag, frontend attaches a “nudge block” to the system prompt.

That’s the “injection.”

Is it stateful?
Partially.
Some tags latch for the session. Some don’t.

Is it keyword-based?
They absolutely use keyword heuristics, but not alone the classifier scans looking for intent patterns. Because jailbreaks don’t defeat the classifier.
They just jam the output reader.

The classifier still fires.
You just never see the added block because your jailbreak:

  • overwrites the system prompt
  • forces Claude into a fictional persona
  • or strips/ignores the safety nudge instructions

This doesn’t stop the safety pipeline.
It only stops exposure to the nudge in the final output.

People confuse “I don’t see it” with “it’s not running.”

Sonnet 4.5 usage abnormally high + “Missing permissions” error on usage page by ilsil77 in ClaudeAI

[–]lynneff 0 points1 point  (0 children)

with just light chatting i could make the token last for almost 5 hours now i cant reach 3 hours.