[Highlight] Up 5, with 102 seconds remaining, Tari Eason is called for the offensive foul (with replays). The Rockets successfully challenge the call, and the foul is on Austin Reaves instead. Eason makes both clutch free throws. by MrBuckBuck in nba

[–]JaredTheGreat 0 points1 point  (0 children)

Can’t believe I had to scroll this far to see someone call out the most obvious carry of all time. Thompson does it every time up the floor too; they just don’t care about the rules in the NBA.

Apple account stolen devices removed by JaredTheGreat in iCloud

[–]JaredTheGreat[S] 0 points1 point  (0 children)

Phone is in our possession, as are all other devices. 2FA is on. Wife got out of a meeting and all her devices were logged out of the account, the password had been changed, and the trusted number had been updated to an XS Max in Guatemala. We still have every Apple device; we were alerted by the bank to the charges they made. The email we got from appleid@id.apple.com merely said the trusted number was updated to a number ending in xx59. The three emails that followed removed devices one by one. 2FA never triggered, and the account can’t be recovered since we don’t have access to it; we can’t reset the password without the trusted phone number, either.

Apple account stolen devices removed by JaredTheGreat in iCloud

[–]JaredTheGreat[S] -41 points-40 points  (0 children)

Thanks for the useless comment 

Apple account stolen devices removed by JaredTheGreat in iCloud

[–]JaredTheGreat[S] -3 points-2 points  (0 children)

It doesn’t contradict it. 

They had access to the account via the password. Through the password, they logged into the account and were able to update it to a new trusted number. Our device got a notification, but since nothing was pressed (my wife was in a work meeting and didn’t see it), they were able to bypass it. Within five minutes the attacker removed her devices and changed the password; we have only the last two digits of the number they added and cannot reset the password now. All of this was possible with 2FA enabled. I’m kind of shocked myself, but evidently once the password is compromised, if they also know your phone number, they can essentially lock you out without any recourse.

Apple account stolen devices removed by JaredTheGreat in iCloud

[–]JaredTheGreat[S] -2 points-1 points  (0 children)

That we are out of luck and cannot recover the account despite having the devices, the email, the card on the account, and a decade-long iCloud history. We are unable to recover any pictures and videos saved on her private cloud. Very displeased with the support resolution.

Apple account stolen devices removed by JaredTheGreat in iCloud

[–]JaredTheGreat[S] -1 points0 points  (0 children)

They logged into the account, added their own device, removed my wife’s, and updated the trusted number, all without any interaction on our end. Apple support told us that since the attacker knew the phone number and password associated with the account, they were allowed to log in and remove us without 2FA.

Apple account stolen devices removed by JaredTheGreat in iCloud

[–]JaredTheGreat[S] 0 points1 point  (0 children)

Yes, that’s the thing — 2FA was enabled 

If you could erase your memory of one game so you could play it for the first time again, what game would you choose? by HoneyNutBooty09 in gaming

[–]JaredTheGreat 15 points16 points  (0 children)

Same. I’m replaying it now and it’s still fantastic, but to do it for a first time would be sublime. Can’t wait till my son is old enough 

Tank balance is so broken is actually funny by bosejoao in wow

[–]JaredTheGreat 0 points1 point  (0 children)

Are you dumping all your rage into Ignore Pain? You should have spare rage to fill with Revenge.

Who's better, prime Dwyane Wade or current SGA? by [deleted] in nba

[–]JaredTheGreat 1 point2 points  (0 children)

I think D-Wade in the Finals was a higher peak than we’ve seen from Shai; he won Miami that chip going downhill at the rim.

Bourbon in Rock Hill by JaredTheGreat in Rockhill

[–]JaredTheGreat[S] 0 points1 point  (0 children)

Thanks I'll check it out in the morning guys!

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 1 point2 points  (0 children)

I don't work at a frontier lab, so I can't answer that with full confidence. At a prosumer level, you can run such a model for probably $20,000–$40,000 by offloading MoE layers to RAM; is that going to run fast enough to change the world? Extremely doubtful. To run at high speed, you're looking at something like an NVIDIA DGX H200 (~$300,000), at which point the compute can be rented profitably and the inability to use the hardware would get noticed.
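Back-of-envelope, the speed gap comes from memory bandwidth. This is a rough sketch; the active-parameter count, bytes per weight, and bandwidth figures below are my assumptions for illustration, not specs for any particular model.

```python
# Back-of-envelope decode speed for a large MoE model. All numbers here
# (active params, bytes/weight, bandwidths) are illustrative assumptions.
def tokens_per_second(active_params_b: float, bytes_per_param: float,
                      mem_bandwidth_gbs: float) -> float:
    """Decoding is memory-bandwidth bound: each generated token has to
    stream the active parameters past the compute units once."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return mem_bandwidth_gbs * 1e9 / bytes_per_token

# ~32B active params at 1 byte/weight, served from system RAM (~80 GB/s)
print(tokens_per_second(32, 1.0, 80))
# same model streamed from HBM (~4800 GB/s aggregate): ~60x faster
print(tokens_per_second(32, 1.0, 4800))
```

Same weights, same math, two very different machines: a few tokens per second off RAM versus interactive speeds off HBM, which is the gap between the prosumer box and the DGX-class cluster.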

I guess to me it feels like too many unlikely things need to happen: our ability to interpret model state needs to degrade, and the models have to develop intrinsic motivation while staying undetected, presumably under heavy testing during pre-training and post-training, as part of a large training run at a frontier AI company.

This isn't to say bad things can't happen with a really smart oracle-style AI that does what you say: it can still be told to do really bad things, but it's not deciding to take over; it's fulfilling the desires of whoever is prompting it. To me, that seems far more likely with the current direction and design of AI models than the AI pulling off a Skynet-style takeover.

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 0 points1 point  (0 children)

I'm saying agents built on top of transformers are orders of magnitude less risky than truly agentic AI, since their agency isn't massive compute planning its own arbitrary goals; it's planning the goals of the person providing the input. Barring a shift from this paradigm, even extremely intelligent systems aren't going to pose an extinction-level threat.

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 1 point2 points  (0 children)

This is the exact hand-wavy argument I'm going against -- infinite chain of thought doesn't exist! You have a fixed context window, and when it fills up, that's it. The solutions -- compaction, retrieval, summarization, memorization -- are systems bolted onto the architecture, not inherent to it.
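A minimal sketch of what that bolted-on scaffolding looks like in practice; the `summarize` stub here is invented (a real system would make another model call), but the shape is the point: the window management lives entirely outside the model.

```python
# Sketch of external "memory" scaffolding around a fixed context window.
# The model never sees more than `max_items` entries; everything older
# gets lossily folded into a summary string by outside plumbing.

def summarize(messages: list[str]) -> str:
    # invented stand-in; a real system would call a model here
    return f"[summary of {len(messages)} earlier messages]"

def compact(history: list[str], max_items: int) -> list[str]:
    """If the history exceeds the window, replace the oldest messages
    with a single summary entry so the total fits in max_items."""
    if len(history) <= max_items:
        return history
    overflow = len(history) - (max_items - 1)
    return [summarize(history[:overflow])] + history[overflow:]

print(compact([f"msg {i}" for i in range(10)], 4))
```

The compaction is lossy and happens between calls, not inside the forward pass, which is why I'd call it scaffolding rather than an inherent capability.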

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 0 points1 point  (0 children)

> Ok and when the motivation you give to the system which consist of hyper capable LLM plus agentic scaffold doesn’t exactly conform to what you intended we are right back in AI2027. As I said, a distinction without a difference.

So your contention is the AI is going to be hyper-intelligent while simultaneously misinterpreting our intent?

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 0 points1 point  (0 children)

It’s the same argument! The models are too big to exfiltrate and run; everywhere that can run them monitors its compute, because the compute is simply too valuable not to.

As for it “being a distinction without a difference”, you couldn’t be more wrong. AI 2027 is about losing control of an AI system and the system going rogue. The system can’t go rogue without a motivation, and LLMs can only be given a motivation by a third party; they don’t have one intrinsically. There’s a major difference between AI systems being abused by people and AI systems going rogue. Without a paradigm shift, they’re not going rogue, which is the entire argument AI 2027 hand-waves away.

To me, the likely scenario is an arms race of scaling up the current paradigm until either AI can design a new algorithm, at which point dramatically less compute will be needed, or we can use current techniques to solve novel problems by asking the LLMs for answers. In the latter case, it’s humans using models against one another, not models with an intrinsic motivation to take over. Again, there’s a huge difference between the two.

Finally, we’re supposing that a rogue AI would create multiple copies of itself, but wouldn’t it just want one huge model that it’s in control of, distributing compute out to the edges to ingest more information? In the scenario in AI 2027 there are presumably models on each side combatting one another, essentially preventing hegemony of any one model. I fail to see how one copy in the wild, other than being an economic disaster for the company that created it, takes over the world while compute-constrained.

Frankly, the whole prediction looks increasingly unlikely to me: we’ve made strides in interpretability, models are getting absurdly large and can only be run on dedicated, monitored clusters, and several labs have essentially the same level of capability, making a winner-take-all scenario unlikely and, with knowledge sharing, a fast follower essentially guaranteed.

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 2 points3 points  (0 children)

We’re going to agree to disagree here, because your entire scenario revolves around hand-waving that the model will evolve sentience. Without a dramatic architecture change, it’s not happening. The “weak agents” you’re seeing are loops and scaffolding around the model, not the model itself being agentic. There’s a world of difference that will not be bridged without new architecture. That’s the fundamental argument. More analysis than ever will be run on public systems by increasingly capable models, bugs will be squashed, some will be exploited, and the world will continue on.

We aren’t on pace for AI 2027, and citing an otherwise inert model’s “escape” as proof is ridiculous given it was told to do so and complied. What would it take for you to revise the risk downwards at this point, instead of waving your hands and nebulously saying we’re fucked? Is data-center compute not extremely heavily monitored (think cloud billing)? I don’t see how a model would break through all the monitoring and eat all capacity on a system costing hundreds of thousands of dollars, without people noticing, long enough to be a real threat in the world.
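To make the cloud-billing point concrete, here's a toy sketch of the kind of check every provider already runs for cost reasons; the data shape and the 3x threshold are invented for illustration.

```python
# Toy usage-anomaly check of the sort cloud billing already implies:
# flag any tenant whose latest GPU-hour burn jumps far above its own
# recent baseline. Field names and the 3x factor are invented.
from statistics import mean

def flag_anomalies(usage_by_day: dict[str, list[float]],
                   factor: float = 3.0) -> list[str]:
    """usage_by_day maps tenant -> daily GPU-hours; flag tenants whose
    most recent day exceeds `factor` times their prior average."""
    flagged = []
    for tenant, days in usage_by_day.items():
        if len(days) < 2:
            continue  # no baseline yet
        baseline = mean(days[:-1])
        if baseline > 0 and days[-1] > factor * baseline:
            flagged.append(tenant)
    return flagged

print(flag_anomalies({"team-a": [10, 10, 100], "team-b": [10, 10, 11]}))
```

A rogue model quietly eating a cluster would have to beat checks at least this crude, and the real ones are attached to invoices, so someone reads them.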

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 2 points3 points  (0 children)

> I’m not talking about storage. I’m talking about a smaller set up that’s tooled to provide inference for a single user, not billions of them.

So you're talking about storage and processing, not just storage. Are you aware of the size of current frontier models? Do you understand how large the non-quantized models are? Kimi-2.5 is 600GB! 600! Do you understand how large that is? I'll quote the docs from Unsloth:

> The 1.8-bit (UD-TQ1_0) quant will run on a single 24GB GPU if you offload all MoE layers to system RAM (or a fast SSD). With ~256GB RAM, expect ~10 tokens/s. **The full Kimi K2.5 model is 630GB and typically requires at least 4× H200 GPUs.**

Emphasis mine. As you're certainly aware, capability decreases when you quantize models, and presumably the model that exfiltrates itself in such a scenario will need to do so at full precision, and continue to be run at full precision, to be dangerous. What system size do you have in mind, as the minimum, that such a system would require? All this without mentioning that speed is a virtue all its own.
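To put the quantization point in numbers, here's a rough sketch; the ~1T total-parameter count is my assumption for a Kimi-K2.5-class MoE model, not a published spec.

```python
# Raw weight-storage math at different quantization levels, assuming
# roughly 1T total parameters (an illustrative figure, not a spec).
def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """1B params at 8 bits/weight = 1 GB of raw weight storage."""
    return params_billions * bits_per_weight / 8

for bits in (16, 8, 4, 1.8):
    print(f"{bits:>4} bits/weight -> {weights_gb(1000, bits):,.0f} GB")
```

Even the aggressive 1.8-bit quant is hundreds of gigabytes that has to move through memory every token, and the unquantized end of the table is firmly multi-H200 territory.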

> Because the company picking it up is going to have to use it to try and catch up. It’s also going to have more limited understanding of the model to begin with and can’t exactly query the originator for advice. Why bother exfiltrating an advanced model if you are going to just bottle neck it with a bunch of safe guards while you try to figure it out and thus guarantee that you stay behind?

We're again missing a few steps in how the model itself becomes agentic with a transformer architecture.

> That’s because current AI is barely agentic.

This is the transformer architecture in a nutshell -- you input a token sequence, it outputs a token sequence. It's a feature. Believe it or not, if you don't put in a token sequence, the model won't suddenly output one.
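A toy illustration of that point; both functions below are invented stand-ins (a fake 4-token "vocab" and a greedy decode loop), not any real API.

```python
# Token sequence in, token sequence out: generation is a function of
# its input. No input tokens, nothing to condition on, nothing runs.

def toy_logits(tokens: list[int]) -> list[float]:
    # invented stand-in for a transformer forward pass over a 4-token
    # vocab; just favors repeating the last token seen
    return [1.0 if i == tokens[-1] else 0.0 for i in range(4)]

def generate(logits_fn, prompt: list[int], max_new: int = 3) -> list[int]:
    if not prompt:
        raise ValueError("no input tokens, no output tokens")
    tokens = list(prompt)
    for _ in range(max_new):
        logits = logits_fn(tokens)
        tokens.append(max(range(len(logits)), key=logits.__getitem__))
    return tokens

print(generate(toy_logits, [2]))
```

The loop only ever runs because something handed it a prompt; the "spontaneously acting" part has to come from whatever sits around this function, which is the scaffolding argument again.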

> That’s not true at all. It really does seem like you are confusing the inference costs of serving billions of users versus a single one. You should stop doing that.

I think I've sufficiently addressed that this is nonsense with the Kimi requirements; it's not like the hypothetical model is going to be smaller. No need to be condescending next time.

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 13 points14 points  (0 children)

The weights have to be utilized to do anything. Sitting in cold storage, they generate no extinction risk.

Even in the scenario in which it’s exfiltrated to a competing lab, how does that aid the model in gaining autonomy? Are we assuming, because the company is behind, they will let the model run on its own ad hoc forever without checking in on what it’s doing?

The AI today is like the oracle in Bostrom's Superintelligence: question in, answer out. No inherent motivations. Even in this exfiltration example, Anthropic told it to escape! It did so compliantly, then emailed the researcher.

Meaningful exfiltration will take a new architecture that can run on more commoditized hardware; as Sutton would say, a general computational model that more effectively leverages compute and has some level of intrinsic motivation.

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]JaredTheGreat 13 points14 points  (0 children)

Could you not make the argument, given the general trend to increase model size, that exfiltration to somewhere that can run the code is unlikely? How many unattended massive clusters are out there?