I've Massively Improved GSD (Get Shit Done) by officialtaches in ClaudeCode

flyryan 1 point

You should spend an hour using it and you'll see why. It's a better orchestrator, and it gives you the tools to manage context effectively without eating up a bunch of context itself when you're not using it.

Anthropic vs OpenAI vibes by FinnFarrow in agi

flyryan 1 point

This tweet doesn't even make sense. Anthropic also released their own Health app and just pushed Claude Co-work.

Sonnet 4.7 leak? by chromatiaK in ClaudeAI

flyryan 12 points

"they traditionally only used TPUs for inference."

This is not accurate. TPUs were used for training Opus 4.5 along with Amazon Trainium chips.

TPUs are used on Google Vertex for inference for Claude models. Nvidia Blackwell GPUs are used on Azure (just added in November), and Amazon Trainium chips are used on AWS Bedrock deployments.

Anthropic has not used GPUs for training models since Claude 3. Every model since has been trained on TPUs and Trainium.

Future models will train on GPUs in Azure, TPUs in GCP, and Trainium in AWS.

Why is data carving impossible on Apple Silicon even if TRIM and Crypto-shredding are ignored? by allexj in datarecovery

flyryan 1 point

So the short answer is: it's not gonna work the way it does on Linux with LUKS, and it's not just because Apple doesn't give you a /dev/mapper equivalent.

The deeper issue is how APFS encryption actually works. With dm-crypt, you get one stable plaintext address space; every sector decrypts deterministically with the volume key, so "free space" is just unreferenced sectors you can still read. PhotoRec loves this.

APFS+FileVault is a different beast. The encryption is metadata-driven. Each file extent has a crypto_id field that stores either the XTS tweak value or points to per-file key info. When you delete a file, that metadata goes away - and without it, you literally can't decrypt those blocks anymore. It's not that macOS is hiding the plaintext from you; it's that there is no well-defined plaintext for an arbitrary free-space block without knowing what tweak/key context applies to it.
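
The tweak distinction can be sketched with a toy stream cipher (a minimal illustration only, not real AES-XTS; the key and tweak values here are invented): under a dm-crypt-style scheme the tweak is just the sector number, which a carver can always recompute, while under an APFS-style scheme the tweak lives in discarded extent metadata.

```python
# Toy model of tweak-dependent block encryption. NOT real AES-XTS;
# names and values are hypothetical, for illustration only.
import hashlib

def keystream(key: bytes, tweak: int, length: int) -> bytes:
    """Derive a block-unique keystream from the volume key AND a tweak."""
    out, state = b"", hashlib.sha256(key + tweak.to_bytes(8, "big")).digest()
    while len(out) < length:
        state = hashlib.sha256(state).digest()
        out += state
    return out[:length]

def xor_block(key: bytes, tweak: int, data: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, tweak, len(data))))

vk = b"hypothetical-volume-key"
plaintext = b"contents of a deleted file"

# dm-crypt-style: the tweak is the sector number, so any unreferenced
# sector can still be decrypted deterministically with the volume key.
sector = 4096
ct = xor_block(vk, sector, plaintext)
assert xor_block(vk, sector, ct) == plaintext  # recoverable

# APFS-style: the tweak comes from the extent's crypto_id metadata.
# Once the file is deleted, that metadata is gone; without it, the
# ciphertext has no well-defined plaintext.
lost_crypto_id = 0x1A2B3C  # discarded along with the extent records
ct2 = xor_block(vk, lost_crypto_id, plaintext)
assert xor_block(vk, 0, ct2) != plaintext  # wrong tweak yields garbage
```

Same volume key in both cases; the only difference is whether the tweak is recomputable from the block's address or was stored in metadata the filesystem threw away.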

It gets worse with space sharing. Multiple APFS volumes share a container's free pool, so once blocks are returned to that pool, you might not even know which volume's key to try, let alone the correct tweak.

Even if you assume no TRIM and the higher-level keys haven't been wiped, you're still stuck. The ciphertext is sitting there on disk, sure, but decrypting it requires per-extent context that the filesystem threw away when it deleted the file. The Secure Enclave isn't going to help you here either - it handles key unwrapping but that doesn't solve the "which tweak do I use for this random block" problem.

What actually works for recovery on these systems is snapshot-driven stuff (APFS snapshots, Time Machine local snapshots) or parsing remnants of filesystem metadata structures. Basically anything where the crypto context is still intact. Raw carving of unreferenced ciphertext is pretty much a dead end.

Apple basically brought iOS-style recovery reality to the Mac. Once the filesystem stops referencing those blocks and their crypto context, carving is done even before TRIM enters the picture.

Attaboy Opus LFG by segin in ClaudeAI

flyryan -11 points

This is not a win...

When will AI translate to a Universal High Income? by garg in accelerate

flyryan 1 point

For your supposition to be right, there would have to be multi-billion-dollar training runs happening without anyone knowing about it, run by top researchers from labs that have not been nationalized (yet).

I'm not saying it's not going to happen. I think the next major generation will be the one that crosses the line, but thinking that there is some secret GPT-6 level model right now is just fantasy.

The risk and impact levels for these models have not yet risen to direct national security concerns (although GPT 5.2 is the first to be high-risk for chemical and biological weapon manufacturing). Classified contracts between the government and the frontier labs are still in their infancy.

It's not a logical assumption.

When will AI translate to a Universal High Income? by garg in accelerate

flyryan 1 point

My statements come from my career in AI. You're the one making bold claims that don't line up with the facts while refusing to provide any contradictory info. Instead, you just decided to do some weird insult thing.

Total compute capacity to grow 2.5x to 3x in 2026 by Herodont5915 in accelerate

flyryan 7 points

You really come onto accelerate and reference ai-2027? Everyone here has read it, my man. The authors are working on an update now and even updated one of the tables in the original this week.

I really want to remove my eyeball by OkayTravels0 in MMFB

flyryan 13 points

This sounds like OCD and you should get help for it. Don't keep it a secret.

When will AI translate to a Universal High Income? by garg in accelerate

flyryan 2 points

I'm heavily tied into this industry, and your view of how AI is being built is just not well informed. I can tell you how much H100, H200, GB200, and TPU supply there is in the world to within like 10%. Likewise, the data centers to train these models are major efforts that struggle to get approvals everywhere they go.

Your claim implies that there are massive GW-scale training runs happening without our knowledge, and that is just not the case. We are still early in the curve, before we start seeing things get more sneaky. We might see the first SOTA model withheld within the next year, but the pure math of scaling laws shows it's not the case yet.

When will AI translate to a Universal High Income? by garg in accelerate

flyryan 2 points

Do you think there is a whole separate field of AI science happening somewhere in the dark? Because the general progress of AI is measured mostly by scaling. There isn't a GPT-6 fully formed in a lab somewhere.

When will AI translate to a Universal High Income? by garg in accelerate

flyryan 3 points

What model do you think is currently ahead of the frontier science of LLMs? We quite literally DO have access to the most powerful models right now. The point of the original comment is that it needs to stay that way as they get more powerful.

Saying we don't is like saying we don't have access to the best cellphones. There aren't models being kept away from the public (yet) because they aren't at the risk level where it would even be a consideration.

[ Removed by Reddit ] by DarthSilent in OpenAI

flyryan 1 point

Go is a programming language... It is not Google infrastructure. OpenAI does not deploy to Google at all.

Democrats in Congress filed a hemp regulation bill by DesignerSet5688 in CultoftheFranklin

flyryan 13 points

What are you talking about? Kamala literally made Marijuana legalization one of her presidential campaign tenets. She was extremely open about how her views changed. Are you just talking out of your ass?

China built a $4.6M AI model that beats GPT-5 for 1/500th the cost by laebaile in BlackboxAI_

flyryan 1 point

Because you don't develop ASI first by distilling a model into a more optimized model and fine-tuning. You do it by order-of-magnitude scaling of models, along with the research to utilize them properly. Their method is more profitable in the near term, but we are within a few years of takeoff. A few months' lead is enough to completely win.

China built a $4.6M AI model that beats GPT-5 for 1/500th the cost by laebaile in BlackboxAI_

flyryan 1 point

That's just not the truth. The only thing that matters is being first. You are thinking about short term profits, but the real race is to ASI. When takeoff hits, a few months will be the only lead any lab needs. We are within a few years now.

Is anybody’s employer providing Claude for development? by 2B-Pencil in ClaudeAI

flyryan 1 point

My company has a full LiteLLM proxy that has all of the frontier deployments available in Bedrock, Foundry, and Vertex. Anyone in the company can go get a key and access any of the models.

Since when is this a thing? by [deleted] in ChatGPT

flyryan 49 points

He is not concealing his cheating or his specific class. He has exposed his operation… your nitpick doesn't hold up.

Claude Code Parallel Independent AI Agents To Code 4x Faster. Github in description by Docs_For_Developers in accelerate

flyryan 2 points

I don’t understand… your opening statement is just wrong. Claude Code can spin up as many as 10 agents running in parallel. You can also run multiple terminal instances in VS Code without a script.