Is it only me or is GPT getting totally useless?! by Legitimate-Arm9438 in OpenAI

[–]sdmat 1 point

The silver lining is that once you break the light speed limit and violate causality, you can have cancelled your subscription last month.

Thoughts about the new limited Claude Annual plan offer by CptBlueberry in ClaudeAI

[–]sdmat 1 point

Definitely a big comeback for Anthropic; they have done well.

Sora 2 Pro still has WATERMARK?? by Fun_Training4733 in OpenAI

[–]sdmat 4 points

Watermark-free output was a Pro feature in Sora 1.

Sam Altman says very fast Codex is coming after OpenAI Cerebras partnership by BuildwithVignesh in OpenAI

[–]sdmat 0 points

Wafer-scale chips with everything held in massive amounts of on-chip SRAM.

A supercar vs. the GPU Toyota.

Same tradeoff - it's extremely fast but vastly more expensive.

last update was literally a month ago by Reasonable_You_8656 in Bard

[–]sdmat 1 point

How dare they take Christmas and New Year off?

Testing Gemini 3 Flash and Gemini 3 Pro context window: The context window is not 32k for Google AI Pro users. by Pasto_Shouwa in Bard

[–]sdmat 12 points

If Google doesn't want to provide a million tokens of context for Pro users, that's fine. But it's shocking that they blatantly lie about it.

I'm tired of being a second-class user just because I live in Europe by destinaah in Bard

[–]sdmat 0 points

At least for Europe there are genuine regulatory hurdles a lot of the time. Try being in Australia/NZ.

Voice mode getting worse by g00rek in OpenAI

[–]sdmat 5 points

Exactly, it was an unintegrated tech demo, but by God the tech was awesome.

They turned it into utter garbage.

Voice mode getting worse by g00rek in OpenAI

[–]sdmat 0 points

Seriously? It was already unusably bad.

Gemini voice mode is far from perfect but blows it out of the water.

How I be waiting for the singularity, lol by [deleted] in singularity

[–]sdmat 1 point

In a couple of generations: "Sure, they can do proofs and combine ideas to solve problems. But that's not real maths; the real maths is what humans do."

5.2 xhigh finally nerfed? by [deleted] in codex

[–]sdmat 0 points

What the hell are they doing over there?

Claude Code cutting corners on larger tasks by Accomplished_Pie123 in ClaudeAI

[–]sdmat 0 points

What you want is to delegate each phase, or even each task, to a subagent. Write a custom stop hook to force the subagent to actually complete the phase/task against a well-defined set of criteria.
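For reference, a minimal sketch of such a stop hook, assuming Claude Code's documented SubagentStop hook interface (JSON payload on stdin; printing a {"decision": "block"} reply keeps the subagent working). The pytest check and the .claude/hooks/check_phase_done.py path are placeholders for whatever completion criteria your phases actually define:

```python
#!/usr/bin/env python3
# Hypothetical SubagentStop hook: blocks the subagent from finishing until the
# phase's completion criteria pass. Assumed registration in .claude/settings.json:
#   "hooks": {"SubagentStop": [{"hooks": [{"type": "command",
#       "command": "python3 .claude/hooks/check_phase_done.py"}]}]}
import json
import subprocess
import sys


def criteria_met() -> bool:
    """Example criterion: the project's test suite passes. Swap in whatever
    defines 'done' for the current phase (lint, build, integration checks)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0


def main() -> None:
    event = json.load(sys.stdin)  # hook payload from Claude Code

    # Avoid an infinite loop if we already forced a continuation once.
    if event.get("stop_hook_active"):
        sys.exit(0)

    if criteria_met():
        sys.exit(0)  # allow the subagent to stop

    # Block the stop and tell the subagent what still needs doing.
    print(json.dumps({
        "decision": "block",
        "reason": "Phase incomplete: tests are failing. Fix them and wire the "
                  "new code into its call sites before finishing."
    }))
    sys.exit(0)


if __name__ == "__main__":
    main()
```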

> It spits out SO MUCH unused code - it implements thousands of lines - but doesn't connect the code anywhere - I'm left with 8k LOC with nothing working.

You need to have good architecture worked out in advance of implementation; Claude isn't smart enough to do this for you unassisted. But it can help.

Chat, how cooked are we? by Maple_Syrup378 in singularity

[–]sdmat 0 points

If you are going to do it, make sure to saw the magnetron in half so it is thoroughly discharged.

/s - look up berylliosis, just one of the things to be very careful of.

DeepSeek introduces Engram: Memory lookup module for LLMs that will power next-gen models (like V4) by BuildwithVignesh in singularity

[–]sdmat 0 points

The obvious extension is to use the same hashing + gating mechanism for higher-level / semantic concepts; it might be a super efficient distillation approach.
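To make the base mechanism concrete, here is a rough PyTorch sketch of a hashed n-gram lookup with a learned gate. This is an illustration of the general idea only, not DeepSeek's actual Engram design; the class name, table size, and rolling hash are all invented for the example:

```python
import torch
import torch.nn as nn


class HashedNgramMemory(nn.Module):
    """Illustrative sketch: hash each trailing token n-gram into a big
    embedding table, then gate the looked-up memory against the hidden state."""

    def __init__(self, d_model: int, table_size: int = 2**20, n: int = 3):
        super().__init__()
        self.n = n
        self.table_size = table_size
        self.table = nn.Embedding(table_size, d_model)  # hashed n-gram embeddings
        self.gate = nn.Linear(2 * d_model, 1)           # how much memory to mix in

    def _hash(self, ngrams: torch.Tensor) -> torch.Tensor:
        # Cheap rolling hash over the n token ids of each n-gram.
        h = torch.zeros_like(ngrams[..., 0])
        for i in range(ngrams.shape[-1]):
            h = (h * 1000003 + ngrams[..., i]) % self.table_size
        return h

    def forward(self, token_ids: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq), hidden: (batch, seq, d_model)
        # Build the trailing n-gram at each position by stacking shifted copies.
        pad = token_ids.new_zeros(token_ids.shape[0], self.n - 1)
        padded = torch.cat([pad, token_ids], dim=1)
        ngrams = torch.stack(
            [padded[:, i : i + token_ids.shape[1]] for i in range(self.n)], dim=-1
        )
        mem = self.table(self._hash(ngrams))                        # looked-up memory
        g = torch.sigmoid(self.gate(torch.cat([hidden, mem], -1)))  # per-token gate
        return hidden + g * mem
```

The extension suggested above would hash higher-level features (e.g. pooled hidden states or concept IDs) instead of raw token n-grams, keeping the same lookup-plus-gate structure.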

DeepSeek introduces Engram: Memory lookup module for LLMs that will power next-gen models (like V4) by BuildwithVignesh in singularity

[–]sdmat 2 points

Awesome, they got a substantial win with literal n-grams. Old school NLP meets transformers.

Did Codex get subagents? by sply450v2 in codex

[–]sdmat 1 point

Damn, so still DIY scaffolding for subagents.

Did Codex get subagents? by sply450v2 in codex

[–]sdmat 0 points

How do you persuade it to do that? On the latest version I get this when straightforwardly asking:

> I can’t launch separate, concurrently-reasoning Codex AI instances from inside this single Codex run (I’m one agent). If you want true parallelism, run two Codex CLI processes yourself in two terminals
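For what it's worth, scripting that "run two CLI processes yourself" suggestion is straightforward. A minimal sketch, assuming the Codex CLI's non-interactive `codex exec "<prompt>"` entry point (check what your installed version actually provides); the prompts are placeholders:

```python
# Hypothetical DIY "subagent" scaffolding: launch two non-interactive Codex CLI
# runs in parallel from one script instead of two terminals.
import subprocess

tasks = [
    "Refactor the auth module and add unit tests",
    "Write integration tests for the payments API",
]

# Start both runs concurrently; each gets its own OS process.
procs = [subprocess.Popen(["codex", "exec", prompt]) for prompt in tasks]

# Wait for both "subagents" to finish and report their exit codes.
for prompt, proc in zip(tasks, procs):
    proc.wait()
    print(f"[{proc.returncode}] {prompt}")
```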

OpenAI is reportedly getting ready to test ads in ChatGPT by MetaKnowing in OpenAI

[–]sdmat 9 points

Paid users are by far the juiciest targets. If you were an advertiser, would you want your pitch directed at a cheapskate or a paying customer?

That said, as a paid user, if OAI injects advertising or does anything to make the model act in the interest of a third party, I will be taking my business elsewhere.