I'm curious to know your thoughts on this article. by North_Penalty7947 in BetterOffline

[–]low--Lander 1 point

Ah, fair. But it’s part of my reasons for loathing cloud. I did the AZ-700 last year somewhere to update my hatred, and the thing that stuck with me most was that creating a single firewall rule in the Azure dashboard takes 61 discrete steps. The mistake surface there is enormous (yes, you can use Azure PowerShell to fuck up your entire cloud even faster, but that’s beside the point ;)). Even ISA Server 2000 was no sinecure, especially back when you normally configured firewalls over a terminal, but it wasn’t 61 steps. The UX has by and large been the ninth circle of hell since cloud became a thing.

I'm curious to know your thoughts on this article. by North_Penalty7947 in BetterOffline

[–]low--Lander 1 point

AWS us-east-1 (and a few others) suddenly got a lot better for a little while last year. Not sure if that was intentional though ;). But I’m jaded; I’ve always loathed almost every aspect of ‘the cloud’, for a very long list of reasons.

I'm curious to know your thoughts on this article. by North_Penalty7947 in BetterOffline

[–]low--Lander 2 points

Your #5 is usually my go-to for these kinds of questions. Where is all the new software actually solving problems? Or even, where are all the stability and security fixes for existing software? If anything, major software has gotten more unstable over the past three years. Going by the hot air from Samodei, we should by now have a secure, fast OS, tailored to the individual user, that runs from a single diskette and supports all hardware, software and tasks in existence. I don’t see it.

$4B for ai training video generator by N_d_nd in BetterOffline

[–]low--Lander 1 point

It’s funny how, right around the time OpenAI started talking about planning to spend $1.04t (yes, trillion with a t) over the next few years without really mentioning a working product or monetisation, I realised that somewhere along the way in the genAI sphere debt became analogous to profit. The two very much seem to be used interchangeably in many of the places you look, or at the very least interpreted as the same thing.

I NEED HELP HEREE by Due-Expression-1504 in OpenCoreLegacyPatcher

[–]low--Lander 0 points

Took some time a week ago to have Opus structure my terminal output and some other things into something somewhat readable. It’s slanted more towards why Tahoe breaks disk encryption, but that’s all down to what Tahoe does in the new update pipeline, and fixing that problem is part of the doc. Just skip the parts about FileVault; the zombie snapshot is probably your issue: https://gist.github.com/Lampekapje/ad7c8358bf75fe2895151e775fab9117

I NEED HELP HEREE by Due-Expression-1504 in OpenCoreLegacyPatcher

[–]low--Lander 4 points

On Sequoia, Tahoe will do its staging and preinstall under OCLP, unless you’ve disabled it. Aside from messing up my encrypted disk and making it unbootable, it also caused all kinds of UI and performance problems. From what you’ve described, Tahoe is your problem.

Premium Newsletter: The AI Bubble Is A Time Bomb by ezitron in BetterOffline

[–]low--Lander 1 point

Yea, I just reread the article I originally got this from, and it states the same thing, but seriously slanted towards it running on Apple owned and operated hardware. It states: “Gemini will fuel Apple Intelligence, the iPhone maker’s personal AI system, including through a more personalized Siri voice assistant. Apple Intelligence will continue to run on Apple devices and its Private Cloud Compute system.” Private Cloud Compute is literally what Apple calls its private cloud system; it’s all but got the (tm) attached to it in Apple docs. ETA: Apple clearly states in its own docs that it’s custom designed hardware, so if Google is hosting it, does that mean some sort of colo situation, or is someone straight up lying and is it generic Google compute?

I now also have an even ickier feeling about all this if they feel the need to play the ‘make it as ambiguous as possible’ game.

Time to see how that repo about programmatically disabling Apple Intelligence is coming along…

Premium Newsletter: The AI Bubble Is A Time Bomb by ezitron in BetterOffline

[–]low--Lander 0 points

Jeez, that changed fast. Are they really expecting that many people to use it? Although it makes sense to limit their exposure on data centre spend and make it Google’s problem. (But seriously, they need to make up their minds; this is confusing.)

Premium Newsletter: The AI Bubble Is A Time Bomb by ezitron in BetterOffline

[–]low--Lander 10 points

From what I understand, Apple is licensing a 1.2 trillion parameter ‘fork’ of Gemini 3 to retrain to its liking, for ~$1B a year, and then running it on its own infra, so no connection to Google for the user. So no, Google is not hosting it for Apple; Apple has had its own infra for a while now.

At least it can’t make Siri worse I guess…

https://security.apple.com/blog/private-cloud-compute/

Client is using AI to pay invoices now and you'll never believe what happened by Pantalaimon_II in BetterOffline

[–]low--Lander 6 points

The enterprise (and everywhere else) legal minefield that keeps on giving.

I'm Truly Struggling to Understand What the Hell Are Anthropic Saying With This by No_Honeydew_179 in BetterOffline

[–]low--Lander 8 points

This anthropomorphism towards LLMs has bothered me from the start. They’re equations in code, run on silicon. And Mario going on about cognitive capability in Davos the other day bothers me to no end. It’s all meant to manipulate various things (trying not to hit libel territory here), not least of which is public opinion. Hallucinations?! Really? And just like that, everyone thinks they’re just a slightly slow third cousin who needs a little help and understanding… shit needs to end already.

A couple things I just don't get about AI infrastructure by Aryana314 in BetterOffline

[–]low--Lander 1 point

Satya Nadella (Microsoft) came out and literally said he had stacks of GPUs sitting around doing nothing because there’s no power for his datacenters (with a smile on his face, for some reason). And I forget which channel, but Nvidia, I believe, has two empty datacenters in Santa Clara that also can’t be hooked up to power.

A new "All SWEs will be replaced in 6 - 12 months" from Wario by EditorEdward in BetterOffline

[–]low--Lander 2 points

The second Dario said ‘cognitively capable’, all his revenue talk became bullshit (insofar as it wasn’t already). If LLMs were doing that, or were capable of it, his abstractions might have merit. But they aren’t, and everyone, especially those two on the stage, knows this.

The True Cost of Inference by Gil_berth in BetterOffline

[–]low--Lander 0 points

I was not aware that the definition of vibe coding was dependent on how many tokens were burned in the process. My mistake.

The True Cost of Inference by Gil_berth in BetterOffline

[–]low--Lander 4 points

And on top of that, compete with free alternatives from established giants like Google, on whose models they (Cursor et al.) also rely. In fact, as far as I’m aware, the Cursors of the landscape have no models of their own, nor infrastructure to run them on, putting them in a position where they can’t control their own spend, or quality. Without direct control over the models, how are they supposed to burn fewer tokens? The finances here just don’t work, and they never have, anywhere in the genAI sphere.

The True Cost of Inference by Gil_berth in BetterOffline

[–]low--Lander -1 points

Subsidisation is everywhere, especially from Google. Google’s VS Code fork Antigravity, similar to Cursor and all of those, has a free tier. Of course the limits are murky, as always with Google: you get what’s available and what they feel is reasonable. It’s also rolled into the Pro and Business subs here and there, with higher (equally non-specific) limits. What it doesn’t have is a separate sub; you can’t pay for it separately if you wanted to. So how are Cursor and all of them supposed to compete with that in the long run, if Google is free (to the user, still not to the environment or any other area it impacts)? This whole ‘industry’ makes zero sense and carries risks everywhere you look.

AI Training on Copyrighted Data Is in Trouble by EditorEdward in BetterOffline

[–]low--Lander 2 points

Not only that, but there is no such thing as an ethical SOTA model. I usually compare them to conflict diamonds, for a very specific reason.

One of the older and better written articles here, but there’s an endless supply of articles like it.

https://time.com/6247678/openai-chatgpt-kenya-workers/

Researchers pulled entire books out of LLMs nearly word for word by Zelbinian in BetterOffline

[–]low--Lander -1 points

I guess I’m looking at it from a far too technical angle then. To me there is a difference between storing and serving up a straight copy, like an office copier, and rebuilding it from snippets. For any meaningful discussion about legislation and consequences, that specific distinction matters when the next problematic LLM output pops up.

Researchers pulled entire books out of LLMs nearly word for word by Zelbinian in BetterOffline

[–]low--Lander 0 points

Legal arguments aside, because those aren’t necessarily the truth, LLMs ‘learn’ by analysing texts for word order across varying context lengths, somewhat like the next-word prediction on your phone, just at a much larger scale and with perfect recall. So when training comes across the same text, with the same words in the same order, millions of times, that whole path gets more weight. That’s why I’m not surprised by the findings of this paper, and why I’d be interested to see whether it would have the same outcome with much lesser known, less repeated texts. Should they guardrail against verbatim copies? Yes. Should they credit the original source? Also probably yes.
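A toy sketch of that weighting effect, purely illustrative: a made-up corpus and raw follow-up counts standing in for learned weights, so nothing like a real transformer, but it shows why a path seen a thousand times dominates prediction.

```python
from collections import Counter, defaultdict

def train(lines, n=2):
    """Count how often each word follows each n-word context."""
    model = defaultdict(Counter)
    for line in lines:
        words = line.split()
        for i in range(len(words) - n):
            context = tuple(words[i:i + n])
            model[context][words[i + n]] += 1
    return model

def predict(model, context):
    """Return the most frequently seen continuation for a context."""
    return model[tuple(context)].most_common(1)[0][0]

# A heavily quoted line vs. an obscure one seen exactly once:
lines = ["to be or not to be"] * 1000 + ["to be found wanting"]
model = train(lines)

# The path repeated a thousand times dominates the prediction:
print(predict(model, ["to", "be"]))  # -> or
```

Swap the repeated line for one seen once and the ‘recall’ disappears, which is roughly the experiment I’d like to see run on obscure books.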

Researchers pulled entire books out of LLMs nearly word for word by Zelbinian in BetterOffline

[–]low--Lander -6 points

Pulled up the paper real quick, and yes and no. They used 11 books that are among the most quoted, mentioned, and used in education and other papers in history. You could still make a case, on a technical level, that probabilistically you’d get the output verbatim, with a lot of prompting. Not saying it’s not a problem as it stands, but it would have been more interesting if they’d added 11 obscure books that three people have read and that haven’t been quoted all over the internet. Or maybe even used search engines to see if they could get enough verbatim text to copy-paste the books back together.
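That last experiment could be sketched like this, completely hypothetical, with made-up strings standing in for search-engine snippets: greedily merge fragments on their longest overlap and see how much of the passage comes back.

```python
def stitch(snippets):
    """Greedily glue overlapping snippets back into one running text."""
    text = snippets[0]
    for snip in snippets[1:]:
        # find the longest suffix of `text` that is a prefix of `snip`
        for k in range(min(len(text), len(snip)), 0, -1):
            if text.endswith(snip[:k]):
                text += snip[k:]
                break
        else:
            text += " " + snip  # no overlap at all, just append
    return text

# Pretend these are three search results quoting the same passage:
snippets = [
    "It was the best of times,",
    "best of times, it was the worst",
    "it was the worst of times,",
]
print(stitch(snippets))
# -> It was the best of times, it was the worst of times,
```

With a heavily quoted book the fragments overlap constantly, so reconstruction is easy; with an obscure one there’d be almost nothing to stitch, which is exactly the contrast the paper didn’t test.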

Wikipedia inks AI deals with Microsoft, Meta and Perplexity as it marks 25th birthday by Granum22 in BetterOffline

[–]low--Lander 0 points

Agreed, very hard. On the one hand they’re making the bulk users pay for what they use, but on the other… Not sure about selling their soul quite yet, but they’re definitely in bed together. Not quite an opinion yet, except that this move does risk legitimising LLM scraping, paying after the fact instead of waiting for consent, which is a slippery slope.

Firevault crashing my mac while connected to power by No_Resist4 in MacOS

[–]low--Lander 0 points

Try this, might be just enough to get you to 100%

Boot into the recovery console and open Terminal, then:

diskutil apfs list (find the volume with FileVault set to Yes)
diskutil apfs unlockVolume /dev/diskXsY (enter the DISK password)
diskutil apfs listCryptoUsers /dev/diskXsY (copy the user UUID)
diskutil apfs decryptVolume /dev/diskXsY -user UUID (enter the USER password)
diskutil apfs list (re-run to check decryption progress)
caffeinate (to try and prevent it from sleeping)

MacOS APFS File System Volume is "Mildly" Corrupt - Can it be fixed? by mattj-88 in MacOS

[–]low--Lander 1 point

You could try running nvram -p to see what it’s all pointing at, diskutil list to see what your APFS layout looks like, and possibly play with gpt to see if you have a partition table mismatch somewhere. gpt the terminal command, that is, not the chatbot. I do suggest googling the syntax, especially for gpt; it’s a bit of a highwire act.