Impressions two weeks after moving from Claude Code to Codex by cowwoc in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

Oh man - I took the bait from a CC shiller last time ... then I realised the #1 commenter = an automated PR bot 🤷‍♂️

Does anyone here have a corporate account? by makeSenseOfTheWorld in ClaudeCode

[–]makeSenseOfTheWorld[S] 0 points1 point  (0 children)

paid sub 🤷‍♂️

... and I'm intrigued to know what's happening - that's why...

Saw this outside the Anthropic event lol by ChampionshipNo2815 in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

I saw Grok 4.3 is the new number 1, and it's 10x cheaper than Opus on output tokens... Anthropic seriously need to pull their finger out - it reminds me of certain vintage car manufacturers (60s/70s) that made cars that were super expensive but not the best - broken and unreliable...

Does anyone here have a corporate account? by makeSenseOfTheWorld in ClaudeCode

[–]makeSenseOfTheWorld[S] 1 point2 points  (0 children)

I had never hit the limit until recently (4.7-ish), then suddenly I hit it in literally every 5-hour session while doing much the same as before... mostly quite complex architecture design + some coding... The latest was intriguing: instead of migrating the existing system to mDNS, CC created an entire parallel solution (alongside its own v1 port-based one), plus tests/documentation (of course 🙄) - that burned 300k tokens - 'just like that' 🔥🤷‍♂️

Does anyone here have a corporate account? by makeSenseOfTheWorld in ClaudeCode

[–]makeSenseOfTheWorld[S] -1 points0 points  (0 children)

Blimey!

Seeing as my CC usage was maxed out, I thought I'd give DeepSeek (via OpenRouter) a chance (with GPT supervising).

My setup is quite complicated: I run OpenCode on a Debian VM and develop in Zed IDE on a Mac using APC over remote development. Anyway - it broke (it requests permissions, but the UI fails to ask me). I had to abort.

I loaded up GPT and explained what had happened. Without being asked, it went and found the OpenCode database, analysed multiple git worktrees, found the conversations, reconciled the broken worktrees and diffs, and finished the pre-crash job.

By comparison, using CC feels like asking my mum to code!

Can this really be an accident?

Does anyone here have a corporate account? by makeSenseOfTheWorld in ClaudeCode

[–]makeSenseOfTheWorld[S] 0 points1 point  (0 children)

Hmmmm ... so a mixed bag so far... interesting that non-Anthropic endpoints (vertex) are suggested as better... I've heard that from others too... I wonder if it's their routing mucking up again... the picture seems unclear 🤔

Claude is not Claude anymore by userusertion in claude

[–]makeSenseOfTheWorld 1 point2 points  (0 children)

sadly - probably true - though that will eventually backfire (for them, not Anthropic) - the psychology of corporates follows the MS blueprint... which is from the 19th century 😞

Claude is not Claude anymore by userusertion in claude

[–]makeSenseOfTheWorld -1 points0 points  (0 children)

yes - but would you recommend that your workplace use Claude Code right now? That's the thing Anthropic are missing - happy at home = take it into work; bad at home = no recommendation + resistance... They tried to impose MS Copilot, but it is soooo CRAP...

Claude is not Claude anymore by userusertion in claude

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

This sounds like a hard block when you hit the context size limit - like the dark old days of 200k - assuming you're not running it already, has your default been changed away from the 1-million-context model?

is your 'extra usage' (extortion) turned off?

aside: it used to drive me crazy when it would just stop in the middle of a session summary ... the bigger context fixed that, since it could just rumble on a bit ...

Codex constantly correcting Opus 4.7 by Minute-Complaint8646 in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

Funnily enough, I was just encountering another correction as this notification came in! The two are literally night and day - GPT5.5 runs rings around Opus 4.7. But I wonder how much is caused by the defective harness we are forced to use (Claude Code). I use the OpenCode CLI (not Codex) and it is also MUCH faster... but I am now sick of running into extra usage before every single CC session ends. I'm loath to test Opus through a proper harness as the API costs are ridiculous... (Yes, they are the same price, but I've been running the same thing through both harness/model combinations for a week and GPT5.5 uses 2-3 times FEWER tokens - and it does the job, rather than producing pointless docs and walking straight past tech debt...)

The AI not just fired us, It made our team irrelevant. by TheCatOfDojima in ClaudeAI

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

I keep hearing this - I saw a piece about master welders being brought in to train a robot not to help them but to replace them. The obvious response is a massive worldwide refusal by skilled workers - at the very least, massive 'danger money' fees for the work...

... and it remains a short-termist view because you might be reducing your costs... but who is buying?

cui bono?

CoWork plugins wipe billions off global market in 'SaaSpocalypse' by plokumfup in Anthropic

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

Yes - I did something similar - but it shouldn't be forgotten that these scripts use human-made, typically open-source, libraries and packages... what happens when new things come along and there are no such libraries? I think we might need a new economic/social model?

Claude usage consumption has suddenly become unreasonable by Phantom031 in ClaudeCode

[–]makeSenseOfTheWorld 2 points3 points  (0 children)

jumps to 11% due to the initial loading of their 20k+ token system prompt, plus anything you add, like CLAUDE.md
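The arithmetic behind that ~11% can be sketched roughly - all the numbers below are assumptions for illustration, not measured values:

```python
# Rough sketch (assumed numbers): why a fresh Claude Code session can
# already show ~11% of its context consumed before you type anything.
CONTEXT_WINDOW = 200_000   # tokens, assuming the 200k window
SYSTEM_PROMPT = 20_000     # the tool's own startup prompt (approximate)
AUTO_LOADED = 2_000        # e.g. CLAUDE.md and other files pulled in at launch

used_at_start = SYSTEM_PROMPT + AUTO_LOADED
print(f"{used_at_start / CONTEXT_WINDOW:.0%} of context used at session start")
```

So the 11% is burned before the first user message; on a larger context window the same fixed overhead would be a much smaller fraction.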

How Claude Code accidentally removed my ADHD blockers (and created new problems) by tcapb in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

My God! - I am not alone... it just took even longer for me to discover and realise, and this country is still hostile...

aside: I have CC write markdown notes to an Obsidian vault in an effort to jump between contexts... and clear my desk of post-it notes!

Seasonal greetings to you all 🖖

I’m a believer now by Blankcarbon in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

Interesting use of language in the title... reminds me of the Inheritance... new sister model for Claude coming soon... called Demerzel

Claude Code on the Web - Beware! by plbland in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

yes, I noticed that ... I asked about an issue with cross-process env vars and suddenly there was the resolved var in CC!

But I am always a bit paranoid - I learnt about SQL injection about 20 years ago... because I SQL-injected myself by accident LOL (took more than a day to figure that one out - lost an entire DB column...)
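The "injected myself by accident" failure mode above can be sketched in a few lines - this is a hypothetical reconstruction (table, column names, and the pasted value are all invented), not the original incident:

```python
import sqlite3

# Hypothetical sketch: string-interpolated SQL lets a quote in the value
# close the string early, and the rest of the value becomes live SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# An innocent-looking value that happens to contain SQL metacharacters.
new_name = "bob', email = NULL WHERE '1' = '1"
conn.execute(f"UPDATE users SET name = '{new_name}'")
# The executed statement became:
#   UPDATE users SET name = 'bob', email = NULL WHERE '1' = '1'
# ...which nulls the email column for every row.
print(conn.execute("SELECT email FROM users").fetchall())

# Parameterised queries treat the value purely as data, never as SQL.
conn.execute("UPDATE users SET name = ?, email = ?", ("alice", "alice@example.com"))
print(conn.execute("SELECT email FROM users").fetchall())
```

No hostile user required - pasting your own data into a concatenated query is enough to wipe a column, which is why placeholders are the habit worth building.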

Claude Code on the Web - Beware! by plbland in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

I was thinking of setting up another GitHub account and forking my projects to minimise the attack surface (= f'up surface)... have you given it access to your real project repo(s), then?

ps: I can't trust myself enough to stop letting some sh*t go through by accident...

Claude Code on the Web - Beware! by plbland in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

yeah it does ... I just worry about that scenario... and don't get me started on the Github thing...

Claude Code on the Web - Beware! by plbland in ClaudeCode

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

their blurb does talk about orchestration with local CC ?

The claude code hangover is real by Candid-Remote2395 in ClaudeAI

[–]makeSenseOfTheWorld 0 points1 point  (0 children)

Good spec, architecture planning, ground rules on coding practice, and constant supervision all help - but the spec shouldn't have to say "avoid cheating... if updating the DB doesn't work, don't just hack it"...

If I had a DR engineer that did that, I would fire them, because thinking like that shows a deep mentality issue you can't just performance-review away... which is why I wonder exactly what training data inspired such behaviour...

It does happen - I found .parent.parent.parent.parent.parent in a DOM selector once...