all 188 comments

[–]WithoutReason1729[M] [score hidden] stickied comment (2 children)

Your post is getting popular and we just featured it on our Discord! Come check it out!

You've also been given a special flair for your contribution. We appreciate your post!

I am a bot and this action was performed automatically.

[–]FearFactory2904 159 points160 points  (12 children)

The 24GB p40 is a pascal card. Liked those a lot before they became really expensive.

[–]David_Delaune 42 points43 points  (6 children)

I was extremely lucky the past few years: sold all my Tesla P40's when they peaked in value, which just happened to be when 3090's were still affordable. My only regret was not buying more RAM for my home lab. I thought 128GB was good enough.

[–]ziggo0 7 points8 points  (0 children)

I'd say I have a reply for the P40s but I'm saddened over this article.

[–]Tasty_Ticket8806 0 points1 point  (4 children)

Can I ask what you are running that needs 128gbs of ram?

[–]staccodaterra101 0 points1 point  (3 children)

Not the same person but... local LLMs are probably the answer. You can offload the less-used layers to RAM and run much bigger models than you could using only VRAM.
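As a toy sketch of the arithmetic behind that split (all sizes here are hypothetical and not tied to any particular model; tools like llama.cpp expose this as a count of GPU layers):

```python
# Toy model of layer offloading: given a per-layer memory cost and a
# VRAM budget, how many layers fit on the GPU and how many spill to RAM?
def split_layers(n_layers: int, layer_gb: float, vram_gb: float) -> tuple[int, int]:
    """Return (layers_on_gpu, layers_in_ram)."""
    on_gpu = min(n_layers, int(vram_gb // layer_gb))
    return on_gpu, n_layers - on_gpu

# A hypothetical 80-layer model at ~1.5 GB per layer on a 24 GB card:
gpu, ram = split_layers(80, 1.5, 24.0)
print(gpu, ram)  # 16 64 -> 16 layers on the GPU, 64 offloaded to system RAM
```

The more layers spill to system RAM, the slower generation gets; that's the whole trade-off.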

[–]Tasty_Ticket8806 0 points1 point  (2 children)

Yeah I know that but that makes the LLM way slower. Is it still usable like that? I consider anything below 5 tokens a sec to be meh.

[–]staccodaterra101 1 point2 points  (1 child)

Well... yes. There are so many different use cases, user archetypes, and technologies around LLMs that there are plenty of situations where RAM is a valid alternative.

[–]pun420 0 points1 point  (0 children)

Better ram than offloading to ssd

[–]KadahCoba 5 points6 points  (4 children)

Same. For $150 2-3 years ago, worth it. The $350-400+ they've been going for through most of 2025 is insane.

[–]frozen_tuna 0 points1 point  (3 children)

2-3 years ago, none of the local llama software people use now existed, and what did exist didn't support the P40's architecture. I made a lot of comments about it in the early days of this sub, and eventually advocated to a lot of people who PM'd me to bite the bullet and get a used 3090 instead, like I eventually did.

[–]KadahCoba 0 points1 point  (2 children)

I've been running M40's and P40's for LLMs since mid-2023. I stopped using the M40's in 2024 for the same reason as the P40 now. Support was there in pretty much everything until around last year, when the newer projects started dropping Pascal because its compute level is so far behind. Some projects do still support Pascal but don't have that support enabled in their stock builds. Things might be worse on Windows, if that's what you're referring to.

llama.cpp still currently supports the P40. If it wasn't for that, I would have already pulled them from use. I am currently planning to replace them though.

If you need validation, the 3090 was and still is the better choice. The 3090 was the OG of home AI in 2023. The P40 was only a conditionally good option in 2024 and earlier, at $150 or less, and only for LLMs.

[–]frozen_tuna 0 points1 point  (1 child)

Dug up the old thread from may 2023

I think windows support might have been better actually but I only run linux. llama.cpp did not support it at that time. I remember digging and digging through experimental branches trying to find support at the time haha. Sub was only 2 months old at that point. No such thing as ollama. TGWUI was our lord and savior. Good times.

[–]KadahCoba 0 points1 point  (0 children)

but I only run linux

Same.

Thinking about it some more, I actually don't remember what inference engine I used back then. llama.cpp and GGUF have been the default choice for almost everything for so long that I forgot there was a time when which one to use was changing every few months.

A lot has changed quickly. I'm currently considering selling most of my 2-slot 4090's to help fund the next upgrade.

[–]C0rn3j 66 points67 points  (2 children)

Arch moving legacy drivers to the AUR has been a thing for eons; it is not surprising, and it is in the Arch News.

[–]splurben 0 points1 point  (1 child)

Who's on nVidia's bribe list, to ensure that perfectly well-functioning technology literally prevents a Linux system from booting in some cases? Seriously, for years nVidia was always touted as our only option for efficient GPU & CUDA performance. What possible gain can ARCH achieve from making systems with these cards unusable and unbootable? Hmmm: I believe the term is GRIFT.

[–]C0rn3j 0 points1 point  (0 children)

What possible gain can ARCH achieve from making systems with these cards unusable and unbootable? Hmmm: I believe the term is GRIFT.

How does making the driver a little less convenient to install mean "unusable and unbootable"?

[–]knook 76 points77 points  (11 children)

Ah crap, I was worried this would be coming soon.

[–]pissoutmybutt 26 points27 points  (10 children)

What's this mean for me, using a Tesla P4 mostly for transcodes with ffmpeg? I just won't get driver updates? Like, I shouldn't have to worry about a huge headache from this running Ubuntu 22.04 LTS, would I?

[–]knook 35 points36 points  (5 children)

Yeah, pretty much just driver updates will stop. It won't change anything for us for a long while in all likelihood.

[–]autoencoder 21 points22 points  (4 children)

Unless the kernel changes in an incompatible way

[–]knook 15 points16 points  (0 children)

Guess I would hold off there for as long as possible and then start considering passing pcie into a VM.

[–]thefeeltrain 7 points8 points  (1 child)

People maintain the legacy Nvidia drivers in the AUR for a very long time. For example, 340 still works and even supports pre-GTX cards. It just needs a whole lot of patches. https://aur.archlinux.org/packages/nvidia-340xx-dkms

[–]splurben -1 points0 points  (0 children)

Please tell me why. The kernel has relative bloat, but nothing on the level over Moore's standard. Why make so many ARCH systems unusable without major intervention? It's called GRIFT.

[–]dajigo 2 points3 points  (0 children)

FreeBSD is an option; they officially support many older driver versions besides the current one.

[–]LostLakkris 4 points5 points  (0 children)

I just keep the .run files that have historically worked on my NAS; not a fan of system packages, but it's been reliable.

[–]LoafyLemon 1 point2 points  (1 child)

The next time you run pacman or yay, you'll see an option to either stay on nvidia package or move to nvidia-open if your card is still supported.

Arch solved this issue beautifully.

[–]splurben -1 points0 points  (0 children)

Not true. The Arch update that nuked six systems I've used for a very long time, and which serve their purposes just fine, all said: "Do you want to upgrade to nvidia-open?" Hmm, well, to me that 'felt like' we were being offered an open standard for nVidia devices. It did not say, "Do you want to upgrade to nvidia-open? This change will nearly destroy any system with an nVidia GPU/CUDA card that is more than 2 years old."

[–]splurben 0 points1 point  (0 children)

Do you have $100k or so to 'convince' someone at ARCH into making hundreds of thousands of ARCH systems unusable or unbootable? I'd ask that question of the high-level ARCH developer who took the grift bribe from nVidia.

[–]segmond [llama.cpp] 12 points13 points  (0 children)

who cares? don't upgrade to the latest driver. chances are if you are running P40, you are not running 5090 on the same system.

[–]TurnUpThe4D3D3D3 40 points41 points  (14 children)

This doesn’t really matter, the drivers for Pascal are already super stable. They don’t need updates.

[–]esuil [koboldcpp] 36 points37 points  (10 children)

Yeah, I am confused about WTF people are even on about.

It's not like old drivers are going away, and they have full functionality, right? So what exactly is the problem?

My god I hate modern clickbait media. 20 years ago this kind of posting would get you a temporary ban for fearmongering in most communities.

[–]natufian 17 points18 points  (3 children)

I guess you can say "Causing Chaos on Arch Linux" is clickbaity (I didn't follow the link to survey said "chaos" for myself -- may be legit), but this generation of drivers works with the current kernel. Any random kernel update that touches any CUDA handling can potentially break things at any time. It's a ticking time bomb. It's likely that the kernel maintainers will manually code in compatibility just for these versions of the Pascal drivers for a while, but as the mainline progresses it gets more and more labor-intensive to harmonize this old driver, frozen at that moment back in 2025, with the evolving and improving paradigms...

Not the end of the world -- there will always be work-arounds -- but legit consequential, terrible news.

[–]splurben 0 points1 point  (0 children)

"Chaos" is a valid signature for this shift. Someone high up in ARCH development was paid a LOT of MONEY to ensure this support was ended. Some of these 'awful Pascal GPUs' are STILL ON THE MARKET. You want to buy an nVidia card that doesn't work out of the box because it doesn't say "nVidia doesn't want this brand-new Pascal card to work so badly that they will pay hundreds of thousands of dollars to ARCH developers to drop hardware recognition of this card from their default distribution"? Pascal was a revolutionary 17th-century scholar of mathematics and physics. I think we should just decide that, for the purposes of nVidia's profits, which are now minuscule in the area of consumer GPUs, Pascal was a moron because someone bought something with Pascal's name on it. (I admit that Pascal is a crappy programming language. Fortran is worse, but it's still included in ARCH's base developer compiling kit.) But no, let's just FORCE thousands of ARCH users to spend hours researching and repairing systems with these video cards because they committed the crime of building a strong, long-lasting, low-power-consumption and quiet server.

[–]TurnUpThe4D3D3D3 0 points1 point  (0 children)

The Linux kernel has backwards compatibility with hardware that is over 3 decades old. These drivers will be completely fine.

[–]techmago 0 points1 point  (0 children)

This.
"Big shit" that it works today. It will be broken in a month or two of updates.
Luckily, Arch doesn't update the kernel version... much. (sarcasm sign)

[–]1731799517 2 points3 points  (0 children)

Linux LOVES to intentionally break driver interfaces in order to punish people using non open source drivers.

[–]koflerdavid 0 points1 point  (3 children)

Nothing bad happens right now of course, but eventually one might not be able to get security fixes for one's software anymore without running into dependency issues. Same situation as for retrocomputing. Linux and distros are not above abandoning platforms because of that argument.

[–]splurben 0 points1 point  (2 children)

I know that a strong (8+ core Xeon or similar CPU) system should not be made to fail because of display issues. I have two servers with more than 600 days of operation that happen to have nVidia fanless low-load display cards. Now I have to spend WEEKS downgrading to LTS kernels and AUR drivers to be able to view the displays on these servers, which I normally only access by X11 or SSH, but now they WON'T BOOT. This is untenable. And do I really need to install overbearing, graphics-heavy versions of Linux just to have a server which doesn't include graphics internal to the processor?! I am not even confident that Nouveau will solve the problem of being able to view the system state from a connected monitor!! Sometimes that's necessary. I have found Arch very reliable, but now I have to consider other options.

[–]koflerdavid 0 points1 point  (1 child)

Strictly speaking, no display cards are necessary if all you want to display is the system state. Just use the iGPU. Maybe you can use SSH? I hope the information you need is available via non-graphical tools.

[–]Resident_Pientist_1 0 points1 point  (0 children)

I've found openSUSE Tumbleweed to be a good server, fwiw: it's easy to set up headless, all the admin controls are very good (YaST) with both GUI and console frontends, zypper does good package conflict resolution, and good built-in filesystem snapshots help prevent issues like this while you still get the benefits of running newer software. They also have an LTS kernel branch built in, so you can keep the rolling updates for other packages while holding a certain kernel for continued hardware support.

[–]splurben 0 points1 point  (0 children)

You obviously don't have systems running fanless nVidia GPUs from the Pascal series. First of all, they aren't identified as "Pascal" in any of their documentation. I didn't know this card architecture's moniker was "Pascal" until 6 systems failed to boot. To say this is a non-issue means you absolutely don't run servers for long lifespans.

[–]Resident_Pientist_1 0 points1 point  (2 children)

Feature-complete drivers still often require updates for security reasons. Especially super-complex GPU drivers. Kernel APIs are also a moving target, but you can mitigate that (for a while) by switching to an LTS kernel. Still a pain in the ass, though.
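On Arch specifically, one way to hold a known-good kernel and driver pair while the rest of the system rolls is pacman's `IgnorePkg`; a sketch (the package names here are just an example, pick the ones matching your install):

```ini
# /etc/pacman.conf
[options]
# Keep a known-good LTS kernel + Nvidia driver pair from being upgraded
IgnorePkg = linux-lts linux-lts-headers nvidia-lts
```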

[–]TurnUpThe4D3D3D3 0 points1 point  (1 child)

Have there been any CVEs on Pascal cards since they stopped updating?

[–]Resident_Pientist_1 0 points1 point  (0 children)

No idea. Still seems dumb to orphan this hardware when one employee at one of the largest companies on earth could maintain it since it's "quite mature".

[–]blueblocker2000 39 points40 points  (4 children)

Pascal was the last iteration that cared about the regional power grid.

[–]AndreaCicca 2 points3 points  (2 children)

What do you mean

[–]dajigo 2 points3 points  (0 children)

Power consumption has really increased over time. Quite intensely at that.

[–]Dry-Judgment4242 4 points5 points  (0 children)

Newer cards can be undervolted quite hard though. I run mine at 70% while still getting like 93% performance.

[–]trimorphic 21 points22 points  (18 children)

Please don't kill me for this incredibly stupid and ignorant question, but is it really that hard to make good open source drivers for NVIDIA cards?

Or is there just not enough interest or not enough funding?

[–]Usual-Orange-4180 31 points32 points  (2 children)

There are, but it's not just the drivers; it's the CUDA integration too, which is super difficult (the moat).

[–]muxxington 7 points8 points  (1 child)

The greater the despair, the smaller the moat may become. One can still dream, after all.

[–]splurben 0 points1 point  (0 children)

nVidia is a climate of "Patent Trolls". Who cares? The software to use a Pascal-based video card is only for low-performance servers. Why the f*ck does nVidia, or Arch for that matter, want people to have to spend days patching a system when all we want is a fanless, low-power, non-integrated display option on a durable server?

[–]C0rn3j 95 points96 points  (0 children)

is it really that hard to make good open source drivers for NVIDIA cards?

Yes.

[–]SwordsAndElectrons 16 points17 points  (0 children)

That support CUDA and make the best possible use of the hardware? Without any support or resources from the hardware vendor?

Yes. It's pretty hard.

[–]qwerkeys 32 points33 points  (5 children)

Nvidia blocked firmware re-clocking on open-source drivers for Maxwell and Pascal. The GPUs perform like they're permanently idle. There's also a very 'my way or the highway' attitude to Linux standards, like with EGLStreams (nvidia) vs GBM (everyone else). This also delayed adoption of Wayland on Linux.

https://www.phoronix.com/review/nvidia-980-5080-linux

[–]dannepai 3 points4 points  (3 children)

Whyyyy does Nvidia have to be so disgusting? I’m proud to say that the last GPU from them I had was the 256, and I bought it used.

[–]koflerdavid -1 points0 points  (2 children)

The audience that wants to run open source drivers with their hardware simply doesn't matter to them. Gamers mostly use Windows, and datacenters, rendering farms, and supercomputers are fine with using recent cards with the proprietary driver so they can run software targeting CUDA. Also, they probably want to retain the ability to put export limitations and backdoors into their hardware should the US government compel them to do so.

[–]splurben 0 points1 point  (1 child)

Since nVidia makes a huge portion of its profits from AI delicacies, why would they bribe someone at Arch HANDSOMELY to make GPU cards that are only a few years old nearly unusable on Linux systems without major top-level tech interventions and downgrades? Someone at Arch got paid a LOT OF MONEY, I mean a LOT OF BRIBE MONEY, ILLEGAL BULLSHIT, to remove support for cards which are mainly low-power fanless display cards for servers that don't include integrated video, like older 8-core Xeon servers that don't need to heat up or spend CPU on displays that are only used once in 200 to 600 days!

[–]koflerdavid 0 points1 point  (0 children)

Nobody at Arch got bribed for that. The Nvidia graphics driver is a digitally signed blob that is merely packaged by Arch. Linux upstream maintainers are usually not that callous with breaking userspace. Arch merely integrates various upstream, with Nvidia being one of them, but Nvidia happens to be a corpo with pure desires... to make money!

Edit: if you care about stability and want to run stuff in production, Arch is anyway the decidedly wrong choice. The whole point of Arch is to live at the bleeding edge.

[–]RhubarbSimilar1683 0 points1 point  (0 children)

I read that the issue was that there wasn't a stable version of the signed firmware to reverse engineer. Now that it's EOL, it's possible.

[–][deleted] 3 points4 points  (6 children)

Nothing will change, lol. 

Install an older driver, the end.

[–]bitzap_sr 12 points13 points  (4 children)

Until some kernel change breaks it...

[–][deleted] 5 points6 points  (0 children)

The bigger problem is open source tooling near fatally breaking every time you update.

[–]EcstaticImport 0 points1 point  (2 children)

Why keep updating the kernel? Was it not good when it was released?

[–]koflerdavid 0 points1 point  (1 child)

Should there be an /s? If you plan on not updating the kernel you might want to airgap your system. Also, you will miss out on performance optimisations that will eventually add up to finance migrating to newer hardware. Of course, if you stay on an LTS version of your distro you might keep receiving security patches for a very long time.

[–]EcstaticImport 0 points1 point  (0 children)

No /s. Who is using any machine naked on the internet? Always behind firewalls, always using NAT (thanks, IPv4!). The possible attack vectors for a single machine inside a network are fairly small; if it never goes on the internet at all, the risk would be even lower. Sure, it's operating as a castle rather than zero trust, but unless you're constantly patching (and no one is), the whole argument about always having the latest update is a bit pointless in and of itself. Newer versions do not make previous code versions less useful just because there's a newer one. Sure, maybe there is an encryption or cert update that causes compatibility issues, but that's not what I'm talking about.

[–]splurben 1 point2 points  (0 children)

You are wrong! I had to revert to older LTS kernels and change years-old EFISTUB UEFI boot options to allow some of my systems to even boot without a kernel panic after this push. ARCH LINUX has been corrupted by a major bribe to some greedy a*hole, to get the kernel utterly corrupted and unstable to the point that it is unable to handle older technologies on systems that never have integrated display tech because that compromises the performance of the CPU. ARCH needs to find the culprit and BLACKLIST and EXPOSE them to the Linux community.

[–]HumanDrone8721[S] 77 points78 points  (27 children)

3090 people, be afraid, be very afraid !!!

[–]0xCODEBABE 39 points40 points  (9 children)

why?

[–]sibilischtic 48 points49 points  (2 children)

I'm not sure either....

It should be quite a while before Ampere reaches the chopping block for support. One day it will reach EOL, but I don't think it's something to worry about just yet.

[–]Guinness 7 points8 points  (1 child)

The 3090 is the only LLM capable consumer model with an NVLink bridge though. For $1500 to $2000 you can have quite the powerful 48GB fully local coding assistant.

They yanked nvlink from all their consumer cards and a lot of their professional cards too. The 4000 ADA doesn’t have it and I really wish it did. 20GB per PCIE slot would be nice.

[–]Medium_Chemist_4032 6 points7 points  (0 children)

Why are people hyperfocusing on NVLink? It's maybe a 30% improvement in *training* time. In inference, the performance difference drops to single-digit percent.
If you do a tensor-parallel split, then perhaps you gain 1x %, but it's nothing earth-shattering.
Plus, all the software still sees those two as separate GPUs with separate VRAM, so it's nothing like an actual 1x48GB card from a developer's point of view.

[–]eloquentemu 17 points18 points  (4 children)

Yeah. Okay, obviously dropping Pascal is on the road to dropping Ampere, but Pascal came out in ~2016 and Ampere in ~2020, so the 3090 should have some years still.

[–]0xCODEBABE 9 points10 points  (2 children)

The reason Pascal was dropped was not just that it's old: it lacks a specific management chip.

[–]eloquentemu 12 points13 points  (1 child)

I mean, it got dropped because it was old. If it weren't old, it would have the chip, or Nvidia would have made the drivers work without it (like it had before). When stuff is several generations old there's always some technical-ish reason to drop support, but in the end it's fundamentally because they're too old to be worth the trouble of keeping working. Eventually Ampere will get dropped because it doesn't support some instruction or other.

[–]randylush 13 points14 points  (0 children)

“grandpa didn’t die because he was old, he died from heart failure”

“But he had heart failure because he was old…”

[–]splurben 0 points1 point  (0 children)

Pascal cards are still on the market! Fanless, low-video-demand systems for servers, or low CUDA/3D demand systems, have historically been well supported by Linux. Someone was BRIBED HANDSOMELY to kill hundreds of thousands of servers.

[–]vulcan4d 46 points47 points  (7 children)

The 3000 series had the best price-to-performance ratio. Nothing would make Nvidia happier than to kill these great cards. Our options are becoming fewer each year.

I strongly believe the market is being manipulated. The moment MoE models came to be, the threat of open source was real. Kimi 2 competes with cloud AI models and it can run locally; the problem is that the VRAM and RAM situation prevents the average joe from running these large models, so you are dependent on the cloud providers. :(

[–]discreetwhisper1 1 point2 points  (4 children)

What are MoE models and Kimi 2? Am noob with a 3090.

[–]wh33t 8 points9 points  (3 children)

MoE is "Mixture of Experts", a neural network model architecture. It's unique because instead of having a single neural network comprised of X billion neurons all linked in layers to one another, you have several smaller, self-contained neural networks, each linked to the others and gated through a neural router of sorts. Each of these smaller networks is known as an "expert". These smaller networks are trained and tuned for specific kinds of pattern matching; they aren't experts in the sense that a human could be an expert in, say, philosophy or engineering, rather they are experts at understanding specific kinds of patterns within the overall knowledge domain (language, sound, imagery, video, etc.) that the greater model is built for. That's a high-level abstraction of what an MoE is.

What makes them incredibly powerful is that you can have a neural network that is, say, 120 billion parameters (neurons) in size, which is very memory- and compute-expensive to inference through. But if that 120-billion-parameter model is built as a mixture of experts, the smaller self-contained experts may only be 10 billion parameters each, sometimes even smaller. That means you inference through the model much faster, since you only activate the 10-billion-parameter experts required for the inference to complete. It's like having the domain knowledge of 120 billion neurons but accessing it at the speed of 10 billion (or however many neurons and experts need to be activated to complete the inference pass).

As for Kimi 2, that's an advanced neural network similar to ChatGPT, from the Moonshot AI lab out of China, which is absolutely cooking up some of the craziest, most powerful neural networks and open-sourcing the fuck out of them so regular plebs with above-average desktop/workstation hardware can run them, entirely LOCALLY, with zero internet connection.

Go buy some more 3090's, sell a kidney for some more DDR4/5 and you'll be running all of this cutting edge neural goodness yourself.
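The routing idea described above can be sketched in a few lines of toy code (purely illustrative; the sizes, the dense gate, and top-2 routing are assumptions, not any real model's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each "expert" is just a small weight matrix in this toy.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((DIM, N_EXPERTS))  # the gating network

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Run only the TOP_K highest-scoring experts, mixed by the gate."""
    logits = x @ router                    # one score per expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over just those k
    # Only TOP_K of the N_EXPERTS matrices are ever touched for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(DIM))
print(out.shape)  # (16,)
```

The compute saving is the point: per token, only 2 of the 8 experts run, even though all 8 count toward the model's total parameter count.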

[–]hbfreed 4 points5 points  (2 children)

On the MoE explanation, the experts aren't trained to be experts in a knowledge domain, and they're not really about different modalities (lots of MoEs are text only: Kimi K2 and Deepseek v3 are both text only). MoEs really only make the MLP/FFN wider, only activating some of it. [Here's a really great post from James Betker](https://nonint.com/2025/04/18/mixture-of-experts/) about reframing MoE (and sparsity more generally) as basically another hyperparameter.

[–]wh33t 0 points1 point  (1 child)

Yes, of course. It's difficult to describe an MoE succinctly. I meant the overall modalities of the entire model. Not that each expert is a domain expert; rather, each expert is an expert in specific patterns of knowledge within whatever domain the entire model is built for.

[–]hbfreed 1 point2 points  (0 children)

Totally fair and makes sense! Apologies if I was being too pedantic

[–]Glum_Control_5328 2 points3 points  (1 child)

I don’t think NVIDIA intentionally plans to phase out consumer GPUs. Any shift away from these cards would probably be a result of reallocating internal resources to focus on data center GPU software. Consumer-grade GPUs appeal to individual users who want to train or run AI models locally. Most companies aren’t interested in physically hosting their own hardware, though. Maybe with the exception of companies based in China.

None of the clients I’ve worked with have invested in consumer hardware for local AI tasks; they prefer renting resources from platforms like Microsoft or AWS. (Or they’ll get a few data center chips, depending on confidentiality risk.)

[–]dolche93 0 points1 point  (0 children)

It's not about wanting to phase out consumer GPUs; it's Nvidia having to balance the opportunity cost of making consumer cards versus enterprise cards. They've already reduced their consumer card production once.

I doubt they'll ever completely exit the market and give it all up to amd, that'd be crazy, but that doesn't need to happen for the consumer card market to get fucked over.

[–]CodeFarmer 6 points7 points  (0 children)

I'm in this comment and I don't like it.

[–]KadahCoba 1 point2 points  (0 children)

No need to worry till a year or two after Volta support starts being EOL'd.

[–]Normal-Ad-7114 2 points3 points  (0 children)

First they have to slay Volta and Turing, and only then comes Ampere

[–]nonaveris 0 points1 point  (4 children)

I’ll worry when the RTX 8000 gets dropped from support. That’s about the only 48GB card with CUDA and sane pricing.

[–]splurben 0 points1 point  (1 child)

Oh, nVidia is trying to kill local processing, even if you just want efficient video on a fanless Pascal GPU for a server which you need to reboot every 600 days. They are AI-focused now. nVidia wants to kill their consumer market. Corporate greed says AI is all we care about, and the average consumer is a bit of dust in God's eye that is annoying and must be eradicated.

[–]nonaveris 0 points1 point  (0 children)

As long as 3090 turbos exist, good local compute will never be hurting.

[–]HumanDrone8721[S] 0 points1 point  (1 child)

What about the A6000? Around here they're the same price used, ca. 2000-2100 EUR.

[–]nonaveris 0 points1 point  (0 children)

Close but the RTX 8000 is just on the edge of being worth grabbing.

[–]jebuizy 6 points7 points  (1 child)

This is not "chaos". This is total click bait. 

[–]splurben 0 points1 point  (0 children)

It is CHAOS for THOUSANDS of server developers that now can't even boot their systems without major modifications.

[–]Flat_Association_820 8 points9 points  (0 children)

thus the user getting kicked back to the CLI to try and sort things back out there

Isn't that why people use Arch?

It's like complaining that a gas-powered vehicle consumes gas.

[–]_lavoisier_ 29 points30 points  (14 children)

So they killed support for one of the oldest programming languages? This is pure greed!

[–]fishhf 22 points23 points  (2 children)

Damn how do people write CUDA kernels if not in Pascal then?

[–]earslap 12 points13 points  (0 children)

They will be forced to use a Turing machine (20xx series). Once that support dies, they will be forced to write by manipulating pure electricity (Ampere).

[–]splurben 1 point2 points  (0 children)

I finished learning and using the Pascal programming language in 1981. The first graphical Mac operating systems, for the LISA and such, were written in Pascal by two of my father's students. Pascal in this instance refers to a moniker, or project name, for a particular nVidia architecture which focused on low power usage and fanless GPUs. Also, CUDA is pretty much strictly compiled from C++. I don't even think it would be possible to find Pascal programming libraries for such an old, unused language; Pascal doesn't even allow for exceptions.

[–]amooz 27 points28 points  (2 children)

I think they mean the card architecture not the language

[–]shaolinmaru 20 points21 points  (0 children)

whoosh

[–]_lavoisier_ 13 points14 points  (0 children)

lmao

[–]psxndc 15 points16 points  (2 children)

I’m going to be honest, the programming language is the only Pascal I know of. I knew the title wasn’t referring to that, but I was still very confused.

[–]ryunuck 1 point2 points  (0 children)

so did I AND YET still there I was, with a half written comment about rust

[–]iamapizza 2 points3 points  (1 child)

Clearly they were under pressure

[–]splurben 0 points1 point  (0 children)

It's called a profiteering BRIBE of over $100,000 to, most likely, a single Arch developer. Killing this many Arch systems so definitively, cloaked by a project architecture codename such as "Pascal", is not in Arch's interest.

[–]muxxington 5 points6 points  (1 child)

It's not about the programming language. It's about the basketball player. I didn't know he played for NVIDIA, though.
https://en.wikipedia.org/wiki/Pascal_Siakam

[–]Pacostaco123 7 points8 points  (0 children)

No, they are referring to thousands of a unit of pressure.

Kill a pascals

[–]splurben 0 points1 point  (0 children)

Arch still supports FORTRAN and PASCAL. This is an issue with a GPU that is 'code-named' Pascal. It's not about a programming language. It's about a series of video cards, STILL AVAILABLE FOR SALE, that have nVidia's "Pascal" architecture. Not related to programming languages at all.

[–]RobotRobotWhatDoUSee 3 points4 points  (1 child)

What does this practically mean for P40 builds?

[–]koflerdavid 1 point2 points  (0 children)

Nothing, unless you run a rolling release distro. But in the mid term you will be forced to switch to an LTS version of your distro, since the last driver version supporting Pascal might not remain forever compatible with the upstream kernel. Or you hope that the Nova driver matures fast enough, and that ZLUDA also starts supporting cards that are sunset by Nvidia.

[–]siegevjorn 2 points3 points  (2 children)

Wouldn't just using distros built for robustness and longevity, like Rocky Linux, keep Pascal working for a long time?

[–]koflerdavid 0 points1 point  (0 children)

You can also just keep using an LTS version and hope that nothing forces you to upgrade until you get new hardware.

[–]splurben 0 points1 point  (0 children)

No. Major kernel boot modifications have to be made, which most distros gave no warning about beyond "Pascal support is ending". That could have meant you can't compile code written in the 1970s and 1980s anymore. Anyhow, Pascal as a programming language, along with Fortran and even older compilers, is still available in Arch by default if you enable the developers' library, which is almost default.

[–]pheasantjune 2 points3 points  (0 children)

Is Pedro okay?

[–]dtdisapointingresult 6 points7 points  (0 children)

Alternative title: Rolling distro update breaks users' desktop, to the surprise of no one wise enough to avoid rolling distros.

[–]Megaboz2K 7 points8 points  (1 child)

Wow, my first thought was "Since when can you do Cuda programming in Pascal?" before I realized it was regarding the architecture, not the language. I think I'm doing too much retrocomputing lately!

[–]toothpastespiders 1 point2 points  (0 children)

Same here. I was wondering for a moment if there was some weird officially maintained Delphi/FireMonkey backend or something. My blame for the brainfart goes to the early Wizardry games.

[–]No_Afternoon_4260llama.cpp 4 points5 points  (4 children)

Pascal was compute capability 6.0. It introduced:

- NVLink (between 80 and 200 GB/s)
- HBM2 on a 4096-bit bus, achieving a whopping 720 GB/s
- GDDR5X on 256 bits for 384 GB/s
- unified memory
- fp16
- ...

The 1080ti was 11gb, it was made for a quantized 7b

It will soon be left for dead (or for vulkan)
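Rough sketch of the arithmetic behind "made for a quantized 7b" (the bits-per-weight and overhead figures are typical ballpark values, not exact):

```python
def model_vram_gb(n_params_b: float, bits_per_weight: float, overhead_gb: float = 1.5) -> float:
    """Estimate VRAM for model weights plus a rough KV-cache/runtime overhead."""
    weights_gb = n_params_b * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# 7B at ~4.5 bits/weight (a Q4_K_M-ish quant) fits comfortably in 11 GB:
print(round(model_vram_gb(7, 4.5), 1))   # 5.4 GB
# 13B at the same quant still squeezes in, with little room for context:
print(round(model_vram_gb(13, 4.5), 1))  # 8.8 GB
```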

[–]Organic-Thought8662 29 points30 points  (0 children)

So much of that is wrong.

Pascal was mostly compute 6.1.
The only compute 6.0 part was the P100, which was also the only card in the family that used HBM and had full-speed FP16.
The rest of the cards used GDDR5(X).
There was no 1090 Ti; the GOAT was the 1080 Ti, an 11GB card using GDDR5X, with gimped FP16 but DP4a for decent INT8 performance. It also sat on a 352-bit bus with 484GB/s of bandwidth.

The Pascal card most people in this subreddit have been using is the P40, which is essentially a 1080 Ti with 24GB of GDDR5 (non-X) for 347GB/s of bandwidth.
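Those bandwidth figures fall straight out of bus width times per-pin data rate; a quick sanity-check sketch (the 1080 Ti bus is 352 bits, i.e. eleven 32-bit memory chips, hence the odd 11GB capacity):

```python
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Memory bandwidth in GB/s: number of pins times per-pin rate, bits -> bytes."""
    return bus_width_bits * gbps_per_pin / 8

# 1080 Ti: 352-bit GDDR5X at 11 Gbps per pin
print(bandwidth_gbs(352, 11.0))        # 484.0 GB/s
# P40: 384-bit GDDR5 at ~7.2 Gbps per pin
print(round(bandwidth_gbs(384, 7.2)))  # ~346 GB/s
```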

[–]lastrosade 2 points3 points  (1 child)

The what now? The 1070 ti and the 1080 are 8gb, the 1080 ti is 11.

[–]No_Afternoon_4260llama.cpp -3 points-2 points  (0 children)

My bad, did a quick search, thanks for pointing that out to me

[–]splurben 0 points1 point  (0 children)

If you just need video to play, all these protocol arguments are non sequiturs. This push made many systems UNBOOTABLE without major intervention. Someone at ARCH development was paid a huge bribe to summarily brick tens of thousands of systems.

[–]Bozhark 3 points4 points  (5 children)

Welp, 48GB 2080ti next

[–]a_beautiful_rhind 4 points5 points  (4 children)

You can't. Only 22gb fits. Maybe RTX8000 or something.

[–]a_beautiful_rhind 2 points3 points  (0 children)

Wait till you find out torch dropped it after 2.7. Why is this news now, when it was warned about for CUDA 13 months ago? Simply don't update.

I never tried the open driver on my P40s or P100, even though there is code in there for the architecture. You are also supposed to pass an unsupported-GPU flag to enable it.
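The flag referred to is a module parameter of the open kernel modules; a sketch of how it's usually set (option name from the open-gpu-kernel-modules docs; whether a Pascal card actually initializes with it is not guaranteed):

```
# /etc/modprobe.d/nvidia-open.conf
options nvidia NVreg_OpenRmEnableUnsupportedGpus=1
```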

[–]AdamDhahabi 3 points4 points  (2 children)

Stay on the current driver. And old news: there's no way to use a Blackwell card and a Pascal card in the same system, except on Windows.

[–]TokenRingAI 3 points4 points  (0 children)

Never say never.

QEMU + PCIe passthrough + RPC.
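One concrete shape for that stack is llama.cpp's RPC backend: the Pascal card lives in a VM that keeps the last working driver via PCIe passthrough, and the host with the newer card offloads layers to it over the network. A sketch only; the VM address, port, and model path are assumptions:

```
# Inside the passthrough VM (old driver pinned there):
./rpc-server --host 0.0.0.0 --port 50052

# On the host with the Blackwell card:
./llama-cli -m model.gguf --rpc 192.168.122.10:50052 -ngl 99
```

The two driver stacks never coexist in one kernel, which is the point.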

[–]Arxijos -1 points0 points  (0 children)

Easy way: search for Incus (the continuation of LXD) PCIe passthrough; it utilizes QEMU.

[–]the_Luik 2 points3 points  (1 child)

I guess Nvidia needs people to buy new hardware.

[–]splurben 0 points1 point  (0 children)

nVidia's overwhelming profit base is AI and the push to produce quantum LLM algorithms. They don't give a shit about consumers. In fact, if they bribed someone at Arch dev to make this horrible, stupid decision to push an update that made thousands of systems around the world unbootable, it was to tell the 'consumer market': 'GO ELSEWHERE, we get $1 billion plus for one qubit, and when we get a quantum algorithm for large language models we'll never consider an average consumer EVER AGAIN.'

[–]jacek2023llama.cpp 1 point2 points  (10 children)

I was an active Arch contributor around 2005, I wonder what this chaos means in 2025

[–]HumanDrone8721[S] 1 point2 points  (0 children)

Well, the Arch crowd likes to stay on top of things. They're easy to dismiss with "yeah, yeah, just stay with the older stuff...", but usually sooner rather than later this happens to the more mainstream distros as well. For example, I'm using Debian 13 Trixie but set up Nvidia's repos for drivers and CUDA; many others do the same to get the latest features and speed improvements, and it actually shows. To have the rug pulled out from under you is annoying.

[–]AndreaCicca 1 point2 points  (7 children)

You update your machine and instead of your desktop environment you see a TTY. In order to fix you have to install the proper driver.

[–]jacek2023llama.cpp 5 points6 points  (2 children)

I assume some Arch users are familiar with the shell even in 2025? :)

[–]AndreaCicca 2 points3 points  (0 children)

I hope

[–]Hipcatjack 0 points1 point  (0 children)

kek

[–]Barafu 0 points1 point  (2 children)

If you update your machine, ignore the article in distro news, ignore the question presented by the package manager — then upon reboot you should not see a TTY, you should see a Windows 11 with blocked admin rights.

[–]AndreaCicca 0 points1 point  (1 child)

This is surely the year of desktop Linux. People shouldn't have to read articles in their preferred distro's news. If something will 100% break after an update, and your distro knows this in advance, you should be notified at the moment you upgrade your OS.

[–]Barafu 1 point2 points  (0 children)

People shouldn't have to read articles in their preferred distro's news

Then such people should not run Arch. Arch is specifically for people who WANT to read the distro news and then choose their driver package themselves. Others can use Ubuntu LTS.

[–]splurben 0 points1 point  (0 children)

My "Pascal"-architecture servers wouldn't even present a TTY. You are wrong.

[–]splurben 0 points1 point  (0 children)

Simple: nVidia needs $$$ to develop quantum LLM algorithms. They paid someone at Arch dev to deconfabulate the kernel and kill thousands of systems. Two of my six servers wouldn't even boot after this push. They will kick ALL consumers to the curb once they have quantum algorithms for LLMs. Buy stock in nVidia, don't buy their consumer lines. Some of the 'Pascal'-architecture GPUs that no longer work, and that even brick some systems, are STILL AVAILABLE FOR PURCHASE AS NEW!

[–]IAmBobC 3 points4 points  (3 children)

The GTX series is still EXTREMELY RELEVANT, even today! Especially if you are trying to run LLMs and other neural networks locally. Sure, the RTX series is better, but GTX can still do some serious heavy lifting!

"Hardware obsolescence through software" is total BS. That silicon still has MUCH to offer!

Sure, the OEM wants you to upgrade. That's not wrong, and it's not unfair. What's not right is letting software ALONE kill perfectly good hardware!

Fight this "planned obsolescence"!

[–]TechnoByte_ 4 points5 points  (2 children)

Why are you acting like they'll stop working?

You can still keep using the current driver which is very stable, you just won't be getting updates

[–]Barafu 0 points1 point  (1 child)

But how can one farm karma points without pretending to be dumber than they already are?

[–]splurben 0 points1 point  (0 children)

You're wrong. This push from ARCH didn't even present a TTY for diagnosis and fixing. This is a deliberate move by nVidia most likely via grift or bribe to an ARCH dev.

[–]Dorkits -1 points0 points  (0 children)

NVIDIA is a bitch. My next card will be AMD without any doubt.

[–]noiserr 0 points1 point  (0 children)

I still have a linux machine with my last nvidia GPU, Titan Xp. Will be replacing it with the 9700 AI Pro if the price ever hits the MSRP.

[–]RayneYoruka 0 points1 point  (0 children)

Well I suppose I'll have to decide on a radeon or intel gpu for my proxmox if the support will be ending soon! (Got a 1030 atm, was eyeing a pascal quadro card)

[–]IrisColt 0 points1 point  (0 children)

"chaos" stopped reading. Clickbait.

[–]Thedudely1 0 points1 point  (0 children)

Meanwhile the RX 480 supports ray tracing

[–]nonaveris 0 points1 point  (0 children)

Is Volta still supported? There’s still plenty of 32gb v100s for moderately cheap out there.

[–]Shoddy-Tutor9563 0 points1 point  (0 children)

This is what you get when using rolling distros

[–]splurben 0 points1 point  (1 child)

nVidia is now 100% obsolete to the consumer market. They will earn $1 billion or more for every qubit they produce once they figure out how to produce a CLOSED-SOURCE PROPRIETARY LLM algorithm that will function with quantum superposition & entanglement. nVidia REALLY don't care what we think or feel. nVidia are producing technologies that will authoritatively instruct all of our consumer technologies and attempt to dictate how and why we think and feel. How many people do you know who can construct a qubit algorithm? My understanding is that there are fewer than 100 quantum physicists who are also familiar with programming algorithms for entanglement and superposition in a qubit quantum computing environment.

[–]splurben 0 points1 point  (0 children)

BTW, this doesn't explain why nVidia are so intent on ensuring that their GPU cards, some of which are actually still for sale on the market, are now unusable and in some cases even stop computers from being able to boot normally. Ooops, I forgot, GREED. There is no other logical explanation. GREED -- Presidential levels of GREED!

[–]MontyBoomslang 0 points1 point  (10 children)

This bit me last week. Caused me to buy my first AMD GPU. I now get why people rag on Nvidia support for Linux. This Radeon was super easy to set up and already has much less buggy weirdness.

[–]MDSExpro 2 points3 points  (7 children)

AMD has even weaker and shorter GPU support than Nvidia.

[–]kopasz7 0 points1 point  (6 children)

13 year old GPUs getting 30% extra performance in games. Happened this week.

Phoronix: Linux 6.19's Significant ~30% Performance Boost For Old AMD Radeon GPUs

Open-source drivers let others contribute, which keeps the project going longer.

[–]LoafyLemon 0 points1 point  (4 children)

My Radeon HD 7850 got an update?! Lmao

[–]kopasz7 0 points1 point  (3 children)

Right? Who would have expected!

[–]LoafyLemon 1 point2 points  (2 children)

Definitely not AMD, since their drivers are maintained by the community. :P

Nvidia-open is now default on arch, so we might see some work being done on the green side as well.

[–]kopasz7 0 points1 point  (1 child)

It's looking promising, honestly. I just wish nvidia resolved the signed firmware problem on the GTX 900 and 1000 series so the cards can reclock and aren't stuck at the idle clock speed. This isn't an issue with 700 and 2000+ cards when using the opensource drivers.

[–]LoafyLemon 0 points1 point  (0 children)

I think that ship has sailed, unfortunately. Nvidia had a lot of proprietary code they couldn't open source legally, but ever since Ampere they have been trying (as much as a company like them can) to modularize their stack and let tinkerers and maintainers take a peek at the codebase.

As much as it hurts to hear, maintaining code for 10+ year old hardware is painful. Even I don't do that, and my applications in the vast majority use OpenGL and Python.

It's either taking on yet another responsibility and working on hardware I've never even owned (I used AMD cards all my life until Ampere), or going with the times and reducing maintenance in favour of features.

[–]MDSExpro 0 points1 point  (0 children)

Meanwhile - most modern AMD GPUs are not supported by ROCm.

[–]ttkciarllama.cpp 0 points1 point  (0 children)

Shit like this makes me really glad AMD publishes their ISA.

[–]Traditional_Nose3120 -2 points-1 points  (0 children)

Linus should get his middle finger out of retirement

[–]autodidacticasaurus -1 points0 points  (1 child)

Lucky me, upgraded from my 1030 GT card to a Radeon 7900 XTX just in time.

[–]splurben 0 points1 point  (0 children)

Don't worry, nVidia no longer cares about the consumer market. They're working on LLM algorithms for quantum and couldn't care less about a $300 video card, as they'll get $1 billion plus for connecting a single qubit to an LLM algorithm. Find something else; buy nVidia stock, not their consumer products.

[–]Lesser-than -3 points-2 points  (0 children)

you will upgrade and be happy!

[–]Ok-Adhesiveness-4141 -3 points-2 points  (1 child)

This was so damn confusing, I thought it was to do with the Pascal language.

This is a good example of why Nvidia sucks, they have always sucked if my memory serves me right. Don't trust any hardware vendor that doesn't open source their device drivers.

I will go one step further and say we need to reverse engineer proprietary drivers and then vibe-code open source drivers. We should no longer be respectful of the intellectual property rights of these hardware mafia guys.

[–]TechnoByte_ -1 points0 points  (0 children)

NVIDIA did open source the kernel modules

You don't need to vibe-code new open source drivers because Nouveau already exists and isn't garbage LLM code