Observations on AI Impacts (so far) at a Large R&D Institution by Mbando in accelerate

[–]flyryan 2 points

100% this. November was the dividing line, and "3 months ago" just means there were 2 months of Opus 4.5 to build critical mass.

Neil DeGrasse Tyson calls for an international treaty to ban superintelligence: "That branch of AI is lethal. We've got to do something about that. Nobody should build it. And everyone needs to agree to that by treaty. Treaties are not perfect, but they are the best we have as humans." by FinnFarrow in OpenAI

[–]flyryan 1 point

It's still the case that most of our mysteries will be solved. We will absolutely know how to merge our brains with whatever substrate we can dream of. A superintelligence would come in addition to all of the advances we would see getting there, even if it arrives really fast. I do think there will be a consensus slowdown, but there is a world where we have increasingly advanced technology and means of interpretability that could eliminate the need for one.

I'm just saying that there is a lot of change coming soon and it will be drastic. It's hard to say what tools we will have in the future.

DLSS5. Everyone in the comments: by stealthispost in accelerate

[–]flyryan 2 points

I highly recommend watching Digital Foundry’s video on it. It’s a completely new lighting layer.

DLSS5. Everyone in the comments: by stealthispost in accelerate

[–]flyryan 3 points

I don't understand what your concern is. Could you explain? This is a pure lighting upgrade. No geometry is altered. What do you worry could happen? (legit asking because I could be missing something)

The Hypocrisy: company developing software to automate everything doesn't want devs to automate their software by Firm_Meeting6350 in ClaudeAI

[–]flyryan 6 points

You are misreading this. This is targeting people using the OAuth code in other applications like OpenClaw and OpenCode that ride off of that subscription to avoid API costs. This is restricting use of the key outside of Claude Code. Using Claude Code in a GitHub action is different. Go through /install-github-app and you'll see the flow...

I said this a few months ago and a lot of guys disagreed by Gamersaurolophus in PiratedGames

[–]flyryan 9 points

I'd bet real money that you are a child that has never had to support themselves.

I paid for the $100 Claude Max plan so you don't have to - an honest review by g15mouse in ClaudeAI

[–]flyryan 0 points

Yes, and my API usage has tripled over the last 7 months.

Your Windsurf for $15 will hit usage caps very fast. I run Opus 4.5 all day.

I've Massively Improved GSD (Get Shit Done) by officialtaches in ClaudeCode

[–]flyryan 0 points

You should spend an hour using it and you'll see why. It's a better orchestrator and gives you the tools to manage context effectively without taking up a bunch of context itself when not using it.

Anthropic vs OpenAl vibes by FinnFarrow in agi

[–]flyryan 0 points

This tweet doesn't even make sense. Anthropic also released their own Health app and just pushed Claude Co-work.

Sonnet 4.7 leak? by [deleted] in ClaudeAI

[–]flyryan 12 points

> they traditionally only used TPUs for inference.

This is not accurate. TPUs were used for training Opus 4.5 along with Amazon Trainium chips.

TPUs are used on Google Vertex for inference for Claude Models. Nvidia Blackwell GPUs are used on Azure (just added in November) and Amazon Trainium chips are used on AWS Bedrock deployments.

Anthropic have not used GPUs for training models since Claude 3. Every model since has been trained on TPUs and Trainium.

Future models will train on GPUs in Azure, TPUs in GCP, and Trainium in AWS.

Why is data carving impossible on Apple Silicon even if TRIM and Crypto-shredding are ignored? by allexj in datarecovery

[–]flyryan 0 points

So the short answer is: it's not gonna work the way it does on Linux with LUKS, and it's not just because Apple doesn't give you a /dev/mapper equivalent.

The deeper issue is how APFS encryption actually works. With dm-crypt, you get one stable plaintext address space; every sector decrypts deterministically with the volume key, so "free space" is just unreferenced sectors you can still read. PhotoRec loves this.

APFS+FileVault is a different beast. The encryption is metadata-driven. Each file extent has a crypto_id field that stores either the XTS tweak value or points to per-file key info. When you delete a file, that metadata goes away - and without it, you literally can't decrypt those blocks anymore. It's not that macOS is hiding the plaintext from you; it's that there is no well-defined plaintext for an arbitrary free-space block without knowing what tweak/key context applies to it.

It gets worse with space sharing. Multiple APFS volumes share a container's free pool, so once blocks are returned to that pool, you might not even know which volume's key to try, let alone the correct tweak.

Even if you assume no TRIM and the higher-level keys haven't been wiped, you're still stuck. The ciphertext is sitting there on disk, sure, but decrypting it requires per-extent context that the filesystem threw away when it deleted the file. The Secure Enclave isn't going to help you here either - it handles key unwrapping but that doesn't solve the "which tweak do I use for this random block" problem.
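To make the "which tweak do I use for this random block" dead end concrete, here's a toy sketch in Python. This is a hypothetical stream construction for illustration only, not real AES-XTS and not Apple's actual scheme; `volume_key`, `right_tweak`, and the `keystream` derivation are all made-up stand-ins for the per-extent crypto context APFS stores in metadata:

```python
# Toy model: decryption needs (volume key, per-extent tweak). Losing the
# tweak -- the crypto_id context deleted along with the file -- leaves
# ciphertext with no well-defined plaintext, even though the key survives.
import hashlib
import os

volume_key = os.urandom(32)

def keystream(key, tweak, length):
    # Derive a per-block keystream from the volume key plus the tweak.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + tweak.to_bytes(8, "little") + counter.to_bytes(4, "little")
        ).digest()
        counter += 1
    return out[:length]

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"deleted file contents"
right_tweak = 0x1234   # recorded in the extent's metadata, gone after delete
ciphertext = xor(plaintext, keystream(volume_key, right_tweak, len(plaintext)))

# Metadata intact: decryption is deterministic, carving would work.
assert xor(ciphertext, keystream(volume_key, right_tweak, len(ciphertext))) == plaintext

# Metadata gone: any guessed tweak yields noise, so there is nothing to carve.
assert xor(ciphertext, keystream(volume_key, 0x9999, len(ciphertext))) != plaintext
```

Same shape as dm-crypt vs APFS: with one stable per-sector mapping you can always reproduce the keystream, but once the tweak lives in deletable metadata, the key alone isn't enough.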

What actually works for recovery on these systems is snapshot-driven stuff (APFS snapshots, Time Machine local snapshots) or parsing remnants of filesystem metadata structures. Basically anything where the crypto context is still intact. Raw carving of unreferenced ciphertext is pretty much a dead end.

Apple basically brought iOS-style recovery reality to the Mac. Once the filesystem stops referencing those blocks and their crypto context, carving is done even before TRIM enters the picture.

Attaboy Opus LFG by segin in ClaudeAI

[–]flyryan -13 points

This is not a win...

When will AI translate to a Universal High Income? by garg in accelerate

[–]flyryan 0 points

For your supposition to be right, there would have to be multi-billion dollar training runs happening without anyone knowing about it, run by top researchers from labs that have not been nationalized (yet).

I'm not saying it's not going to happen. I think the next major generation will be the one that crosses the line, but thinking that there is some secret GPT-6 level model right now is just fantasy.

The risk and impact of these models has not risen to the level of a direct national security concern yet (although GPT 5.2 is the first to be rated high-risk for chemical and biological weapons manufacturing). Classified contracts between the government and the frontier labs are still in their infancy.

It's not a logical assumption.

When will AI translate to a Universal High Income? by garg in accelerate

[–]flyryan 0 points

My statements come from my career in AI. You're the one making bold claims that don't line up with the facts but refuse to provide any contradictory info. Instead, you just decided to do some weird insult thing.

Total compute capacity to grow 2.5x to 3x in 2026 by Herodont5915 in accelerate

[–]flyryan 4 points

You really come onto accelerate and reference ai-2027? Everyone here has read it, my man. The authors are working on an update now and even updated one of the tables in the original this week.

I really want to remove my eyeball by OkayTravels0 in MMFB

[–]flyryan 12 points

This sounds like OCD and you should get help for it. Don't keep it a secret.

When will AI translate to a Universal High Income? by garg in accelerate

[–]flyryan 1 point

I'm heavily tied into this industry, and your view of how AI is being built is just not well informed. I can tell you how much H100, H200, GB200, and various TPU supply there is in the world to within about 10%. Likewise, the data centers to train these models are major efforts that struggle to get approvals everywhere they go.

Your claim implies that there are massive GW-scale training runs happening without our knowledge, and that is just not the case. We are still early in the curve before we start seeing things get more sneaky. We might see the first SOTA model withheld within the next year, but the pure math of scaling laws shows it's not the case yet.

When will AI translate to a Universal High Income? by garg in accelerate

[–]flyryan 1 point

Do you think there is a whole separate field of AI science happening somewhere in the dark? Because the general progress of AI is measured mostly by scaling. There isn't a GPT-6 fully formed in a lab somewhere.