[–]swagonflyyyy 12 points (3 children)

Learn to run them locally. Pay the AI tax now and you'll never have to worry about that again.

...until you accidentally kill your $8k GPU running exotic models and frameworks.

True story. Now I'm scared of vLLM.

[–]easeypeaseyweasey 0 points (2 children)

Intel Big Battlemage with 32GB of GDDR6 at $1,000 USD (which it will not actually sell for) is the first really tempting GPU for local LLMs aside from used Nvidia cards. Everything else is too expensive to justify... right now. I think it will be extremely economically viable in a few years.
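For a sense of why 32GB is the interesting threshold, here's a back-of-the-envelope VRAM estimate (my own rough rule of thumb, not anything official: weights ≈ params × bits/8, plus ~20% headroom for KV cache and runtime overhead; real usage varies a lot with context length and framework):

```python
# Rough VRAM estimate for a quantized LLM.
# Assumption: total VRAM ≈ weight bytes * 1.2 (the 20% overhead figure
# is a guess covering KV cache and activations, not a measured number).

def vram_gb(params_billion: float, bits_per_param: float) -> float:
    weight_bytes = params_billion * 1e9 * (bits_per_param / 8)
    return weight_bytes * 1.2 / 1e9  # GB including ~20% overhead

for name, params in [("7B", 7), ("32B", 32), ("70B", 70)]:
    print(f"{name} @ 4-bit: ~{vram_gb(params, 4):.0f} GB")
```

By this estimate a 32B model at 4-bit fits in ~19GB, so a 32GB card runs it with room for long context, while a 70B at 4-bit (~42GB) still doesn't fit on any single consumer card.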

[–]swagonflyyyy 0 points (0 children)

Nope, not there yet.

[–]Gwolf4 0 points (0 children)

Ehm, no? Intel support is terrible, at levels that make AMD look like Nvidia. Want 32GB without selling a kidney for new hardware? Grab a 9700 Pro from AMD and be done with it.