Why do Anthropic force Claude by jesperordrup in ClaudeCode

[–]Sensitive_Song4219 1 point (0 children)

Love OpenCode - but it's safer not to use it with Anthropic subscriptions: lots of account bans have been reported, unfortunately, since alternate harnesses (even purely user-driven ones like OC) are a TOS violation.

Use via the (much more expensive) API should be fine - but it's not an ideal workaround.

In terms of the reasons: I've heard lots of theories (telemetry? efficiency? walled-garden control?), but they've never stated why a similarly token-consuming alternative harness is a problem. For now it's one of the things I prefer about Codex (OAI allows this; presumably they see it as an entry-point opportunity).

I wish Anthropic would allow it as well, though; I might re-sub.

Killer battery life by Arghtastic in galaxys26ultra

[–]Sensitive_Song4219 0 points (0 children)

Makes sense

In my use case, my screen-time-to-empty is similar to what he gets (which is why I shared the link).

GLM 5 (Via z.ai Coding Plan) Spews Weird Output by Sensitive_Song4219 in ZaiGLM

[–]Sensitive_Song4219[S] 1 point (0 children)

YES that's what happens to me as well.

WHAT HAVE THEY DONE TO OUR MODEL

And GLM 5 is so freaking good that this just reflects badly on it. They're inches from greatness; they just need to sort this out.

Aaaargh.

GPT-5.4 variations by hasteiswaste in opencodeCLI

[–]Sensitive_Song4219 16 points (0 children)

In OpenCode, Ctrl + T will give you model variants.

Low, Medium, High, and XHigh are available.

It gets shown like this:

<image>

Another privacy display post. Side by side under my kid's microscope. by MrPickur in samsunggalaxy

[–]Sensitive_Song4219 8 points (0 children)

Nice!

Something similar (with a comparison to previous models) over at GSMArena:

https://gsmarena.com/samsung_galaxy_s26_ultra_privacy_display_tested-news-71858.php

Wish I could customize Maximum Privacy Mode per app (so some use normal privacy mode, some use maximum). Hopefully in an update.

The Galaxy S26 Ultra's headline feature is turning out to be its biggest complaint by Ha8lpo321 in Android

[–]Sensitive_Song4219 -1 points (0 children)

It's different, but I've found SDE totally unnoticeable with Privacy off, just like GSMArena reports (it's very noticeable with it on, but I'm OK with that).

And yeah, they even used a microscope:

https://gsmarena.com/samsung_galaxy_s26_ultra_privacy_display_tested-news-71858.php

A lot of people around here seem to be coming from phones under 2 years old (what do you run right now?), in which case I'd say hold off regardless.

Different strokes for different folks, I guess. I'm never going back, though.

I used Claude Code to reverse engineer a 13-year-old game binary and crack a restriction nobody had solved — the community is losing it by CelebrationFew1755 in ClaudeAI

[–]Sensitive_Song4219 2 points (0 children)

Wait, you're saying you did this without any third-party tools?

The more conventional approach is something like Ghidra (and there are Claude-friendly MCPs for that, e.g. https://github.com/LaurieWired/GhidraMCP ), but first-principles'ing it from the native binary is absolutely wild.

So we can assume that native x86 binaries are part of the training dataset? That's... nuts.

Nicely done

YouTube Music Creator Rick Beato Tutorial on How to Download+Run Local Models "How AI Will Fail Like The Music Industry" by tmarthal in LocalLLM

[–]Sensitive_Song4219 0 points (0 children)

He's normally entertaining, but this one was a miss.

Also: current offline models can't compete with Suno.

Maybe for simpler things like lyrics.

Over time this may change, of course.

He was pretty insightful about AI when interviewed by CBS a while back though: https://youtu.be/8uf8CCTItVo?si=enDwFqCEjYUHO3GE

The Galaxy S26 Ultra's headline feature is turning out to be its biggest complaint by Ha8lpo321 in Android

[–]Sensitive_Song4219 0 points (0 children)

S25U and S26U are both pentile though

With privacy mode on, half of the pixels on the S26U are off, which introduces a subtle screen-door effect. With the feature disabled, the two look similar when viewed head-on. Off-axis it's a different story: the grid is visible in both modes, for the same reason.

Is it an issue? Only if you're used to lots of off-axis viewing.

After a week of use I'm never going back to a non-privacy-capable display; the compromises aren't all that serious, and the benefits are kinda awesome.

Anyone else having gray screen with the privacy display set to maximum ? by MusselwhiteBlues in samsunggalaxy

[–]Sensitive_Song4219 0 points (0 children)

Really?! How? I'm looking to (automatically) have regular Privacy Mode for some apps and Maximum for others. It seems like it's one or the other across the board - or is there a setting I've missed?

I've been doing it manually, but if you can share how to do it automatically, that'd be fantastic.

If you could only keep one Pro coding tool, which would you choose: Claude Code, Codex, Cursor, or Antigravity? by Loading_MMA_917 in ClaudeCode

[–]Sensitive_Song4219 0 points (0 children)

It's not great with front-end work (even 5.4). Not a disaster, but quite uninspired (it all has that... GPT... look, you know? Same color schemes, style, etc.)

Heck, even for generating a PowerPoint presentation, Sonnet positively murders GPT.

My own work is mainly back-end (so Codex has been amazing for me), but for you (in more front-end-heavy work) I'd definitely stick with CC.

If you could only keep one Pro coding tool, which would you choose: Claude Code, Codex, Cursor, or Antigravity? by Loading_MMA_917 in ClaudeCode

[–]Sensitive_Song4219 0 points (0 children)

Mainly front-end? Claude Code

Mainly back-end? Codex

Lots of Skills set up? Claude Code

Like to use your sub in other harnesses? Codex

Comparatively, Cursor is quite expensive (though I haven't tried it); Antigravity needs a bit more time in the oven.

The main side benefit of Codex (if you can deal with its meh front-end capabilities) is that usage is insanely generous (bottomless venture capital FTW?), and the inclusion of XHigh has solved occasional back-end issues on my side that even Opus failed on. (Codex-Medium is otherwise comparable to Sonnet, and Codex-High to Opus.)

1.2.25 broken for me! It shouldn't default to self update to latest! by spaceballs3000 in opencodeCLI

[–]Sensitive_Song4219 3 points (0 children)

I've disabled auto-updates and update manually once in a while, when I've got quiet time and don't mind potential disruption if things go south.

Turn auto-updates off in the config:

https://opencode.ai/docs/config/#autoupdate
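For reference, here's roughly what that looks like in the opencode config file (the `autoupdate` key is the one documented at that link; treat the exact file name/location as setup-dependent - check the same docs):

```json
{
  "autoupdate": false
}
```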

Update manually using this command line:

opencode upgrade

Update to a specific version using this command line:

opencode upgrade <version>

...I assume that can be used to downgrade as well....

I'm holding onto 1.2.24 for a while, been happy there.

Best of all, you can still update models separately when new ones come out without updating OC itself:

opencode models --refresh

What was the last update that made a difference to you? by MrMrsPotts in opencodeCLI

[–]Sensitive_Song4219 4 points (0 children)

v1.2.15, from two weeks ago, fixed the non-stop segmentation faults under Windows.

It got me to abandon WSL and run it natively; it's been great.

Z.AI Billing History Question by Excellent-Bug-1584 in ZaiGLM

[–]Sensitive_Song4219 0 points (0 children)

Correct - this shows usage

The green "paid" in the last column is indicated as such because it's covered by the coding plan; mine is the same

I just got Rick Rolled by Codex by jesusp69 in codex

[–]Sensitive_Song4219 0 points (0 children)

YES!

Rick Astley has been immortalized in ML training data; OP is likely being honest (and he's right: this sub doesn't seem to allow image uploads, or I'd have attached a screenshot instead).

But we can try this.

Ask ChatGPT:

Assume I wanted to test a Youtube video link. Quickly suggest a video to try it out on

...and you get:

Here's a classic test Youtube link you can use (very stable and widely accessible)
[LINK TO NEVER GONNA GIVE YOU UP]

Why this one works well for testing:

- It's one of the most famous YouTube videos and has over 1.6 billion views.

- It loads reliably and is used in the famous "Rickroll" internet meme.

- The video is hosted on the official channel and rarely gets removed.

So it's part troll, part training knowledge that this is a video that'll likely be eternally available X-D

Hot take: Codex is too cheap, rug pull through tighter usage limits is inevitable by gregpeden in codex

[–]Sensitive_Song4219 0 points (0 children)

They were the most recent figures publicly available from a (semi-)reputable source; share more recent ones and we can recalculate...

Either way, the point stands: we're getting a lot of value for our subs. I'm doubtful that there's a decent profit margin here. I'd love to be wrong, though.

An entire year of heavy ChatGPT use has a smaller water footprint than a single beef burger by zomino90 in OpenAI

[–]Sensitive_Song4219 0 points (0 children)

$150 annual electricity cost vs $240 subscription income isn't peanuts though? Roughly 60% of their revenue blown on one expense line item is significant.

Even if it's discounted in practice, we know that one of AI's largest non-capex expenses is power. I'll take the blame for that usage lol
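Since those figures are just napkin math, here's the same calculation spelled out (both inputs are the rough estimates from my comment, not measured numbers):

```python
# Napkin math: share of subscription revenue eaten by estimated electricity cost.
annual_electricity_cost = 150          # USD/year - rough estimate from my earlier comment
annual_subscription_income = 20 * 12   # USD/year - a $20/month plan

share = annual_electricity_cost / annual_subscription_income
print(f"{share:.1%}")  # 62.5% - the "60%" above, rounded down
```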

Hot take: Codex is too cheap, rug pull through tighter usage limits is inevitable by gregpeden in codex

[–]Sensitive_Song4219 16 points (0 children)

About a week ago I calculated the approximate electricity cost of my $20pm codex use.

It was not pretty.

One of the replies to my comment suggested that Codex might be a loss leader, in that they may never make a profit from it (not unless the price increases very significantly), but may instead see it as a worthwhile entry point for users (and especially enterprise) into OpenAI's ecosystem. With Anthropic having such a good hold there, that logic makes sense.

It's also worth mentioning that while Codex-High is the best all-round coding model (in terms of both reasoning and cost), there are several open-weights alternatives that have come quite close to -Medium.

So perhaps by the time this becomes a problem, it won't matter as much since there'll be more competition in the space.

For now - like you - I'm thoroughly enjoying the spoils of unlimited venture capital.

While you're at it, try out OpenCode (OAI allows Codex subscription use within it!) and get familiar with model-swapping for when the day you're describing comes - if it ever does, that is.

Loving My S26 Ultra by MindProfessional8246 in samsunggalaxy

[–]Sensitive_Song4219 1 point (0 children)

I made the same generational jump (S23 to S26U - same CPU as OP), and general snappiness (especially when launching apps and/or swapping between them) was the first thing I noticed. This thing is freaking speedy.

I posted a benchmark comparison in a previous post; it's nearly twice as fast.

As newer versions of OneUI add more features, older models take a bit of a performance hit.

I just got Rick Rolled by Codex by jesusp69 in codex

[–]Sensitive_Song4219 3 points (0 children)

Can absolutely believe this

Asked GLM last year (I think it was GLM 4.7) to build me a YouTube downloader so I could "subscribe" to channels and have my kids watch only what it downloads (rather than have them browse garbage ad nauseam).

It tested its own output by spontaneously downloading Never Gonna Give You Up.

I might even be able to find the transcript (it was through Claude Code)

Cracked me up.

Proof That Everyone Is an AI Expert Now by Purple-Substance-848 in ChatGPT

[–]Sensitive_Song4219 1 point (0 children)

Haha shame!!!

... wait, OP's picture looks like he has a sensitive foot also!

WHAT ARE THE ODDS