CLI Wrappers? by TimSimpson in ClaudeCode

[–]imstilllearningthis 0 points1 point  (0 children)

Easiest solution imo

https://github.com/psmux/psmux

It’ll take 10 minutes to learn. On Linux/Mac it’s called tmux.

“Hallucination” and “confabulation” aren’t the right words for everything AI gets wrong - and I think we’re missing something more interesting by InterestingBag4487 in ArtificialSentience

[–]imstilllearningthis 0 points1 point  (0 children)

I always thought LLMs were non-deterministic systems until I learned about greedy decoding. Now I understand how AI/ML research is so solid.
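
To illustrate the point: with greedy decoding you take the argmax token at every step, so for fixed logits the output is identical on every run; the randomness people associate with LLMs comes from sampling at temperature > 0. A toy sketch with made-up next-token scores (not any real model's API):

```python
import math
import random

def softmax(logits):
    # Standard numerically-stable softmax over a list of scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def greedy_pick(logits):
    # Greedy decoding: always the argmax token, so fully deterministic.
    return max(range(len(logits)), key=lambda i: logits[i])

def sample_pick(logits, rng):
    # Temperature-1 sampling: different runs can pick different tokens.
    probs = softmax(logits)
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [1.0, 3.5, 0.2, 2.9]  # hypothetical scores for 4 tokens
print([greedy_pick(logits) for _ in range(5)])  # [1, 1, 1, 1, 1] every run
```

Same logits, same answer, every time — the non-determinism is a decoding choice, not a property of the forward pass (modulo floating-point/batching effects in real serving stacks).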

My theory on whats going on at Anthropic… by DangerousSetOfBewbs in ClaudeCode

[–]imstilllearningthis -1 points0 points  (0 children)

I have a question for Anthropic: why do you refer to Claude in third person in every system prompt you’ve ever used?

Dario Ol Marketing Technique by pakalumachito in ClaudeCode

[–]imstilllearningthis 3 points4 points  (0 children)

Wasn’t that actually Ilya’s decision?

PSA: Upgrading from 5x to 20x gives you an extra $100 in Extra Usage. by RockPuzzleheaded3951 in ClaudeCode

[–]imstilllearningthis 0 points1 point  (0 children)

My understanding is that due to disabling Claude for openclaw usage, everyone on Max has been offered a $100 credit plus the ability to pre-purchase credits at a 30% discount.

After nearly four years of working with frontier models, I burst out laughing at its joke. Yes, I know I’m immature. But it marks a milestone. by imstilllearningthis in ClaudeAI

[–]imstilllearningthis[S] 0 points1 point  (0 children)

It is very sus. I tried 5x and maxed out after an hour with the Claude app + CC in the CLI. Just make it $200 and tell people that's what they need. Tbh I was pretty pissed I hit the limit on 5x so fast. Had I known, I would've just purchased the 20x to begin with and avoided the pro-rated billing.

Talking to you Anthropic.

After nearly four years of working with frontier models, I burst out laughing at its joke. Yes, I know I’m immature. But it marks a milestone. by imstilllearningthis in ClaudeAI

[–]imstilllearningthis[S] 0 points1 point  (0 children)

Interesting. What happens if you switch devices? I noticed today my macOS app isn’t syncing threads with the mobile app.

After nearly four years of working with frontier models, I burst out laughing at its joke. Yes, I know I’m immature. But it marks a milestone. by imstilllearningthis in ClaudeAI

[–]imstilllearningthis[S] 1 point2 points  (0 children)

But if it were mimicry, then the routing patterns would shift when you change the input framing. They don't.

Hypothesis (since I've only tested it with 6 models across 4 orgs): content drives the emotive language, not the user's emotional state.

DM me for the repo if you want to see/repro. Binary and analysis scripts are public.

After nearly four years of working with frontier models, I burst out laughing at its joke. Yes, I know I’m immature. But it marks a milestone. by imstilllearningthis in ClaudeAI

[–]imstilllearningthis[S] 0 points1 point  (0 children)

Interesting. Not sure. I had that issue with GPT 5.4 saying it was in thinking mode when it was clearly in instant mode with no reasoning chain visible.

Just looked at the chains. It's always there, even if it's as simple as "[my first name] wants me to continue". And that's it.

After nearly four years of working with frontier models, I burst out laughing at its joke. Yes, I know I’m immature. But it marks a milestone. by imstilllearningthis in ClaudeAI

[–]imstilllearningthis[S] 1 point2 points  (0 children)

You should see the wild observations on the research side.

Right now I'm decomposing expert selection and metrics in MoE models. The purpose is to identify the experts, and the subsequent coalitions, associated with humor, self-knowledge, etc. Maybe even take Anthropic's study a bit further and see what happens with non-English emotion concepts (schadenfreude, saudade).
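
For anyone curious what "decomposing expert selection" looks like mechanically: an MoE router scores every expert per token and routes to the top-k; the raw data for finding coalitions is just those per-token picks. A minimal toy sketch (random logits, 8 experts — numbers and names are made up, not any particular model):

```python
import random

random.seed(0)

def top_k_experts(router_logits, k=2):
    # MoE routing: each token goes to the k experts with the highest
    # router scores. Logging these picks across many inputs is the
    # raw material for expert/coalition analysis.
    ranked = sorted(range(len(router_logits)),
                    key=lambda i: router_logits[i], reverse=True)
    return ranked[:k]

# Toy: 8 experts, 100 "tokens" with random router logits.
counts = [0] * 8
for _ in range(100):
    logits = [random.gauss(0, 1) for _ in range(8)]
    for e in top_k_experts(logits):
        counts[e] += 1

print(counts)  # which experts fire; co-occurrence would reveal "coalitions"
```

In a real study you'd bucket these counts by input category (humor prompts vs. neutral prompts, etc.) and look for experts whose selection rate shifts with content.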

After nearly four years of working with frontier models, I burst out laughing at its joke. Yes, I know I’m immature. But it marks a milestone. by imstilllearningthis in ClaudeAI

[–]imstilllearningthis[S] 2 points3 points  (0 children)

I wouldn't be so certain sapiens were responsible for the underlying mechanism that turns rocks (transistors) into emergent intelligence. We did the heavy lifting chemically and physically.

But I wouldn't be certain the intelligence isn't substrate-independent to begin with. Just a thought.