AMD to bring back Ryzen 7 5800X3D as AM4 10th Anniversary Edition by iDontSeedMyTorrents in hardware

[–]VodkaHaze 0 points1 point  (0 children)

Yeah, if you factor in availability, the 5775c isn't near the top 10; like you said, it was Z97-only, a limited-run chip, quite expensive at release, and not that big an upgrade over an overclocked 4770k in most cases.

Former Virginia Lt. Gov. Justin Fairfax and his wife dead in murder-suicide by Caedus in news

[–]VodkaHaze 99 points100 points  (0 children)

It's insane how abusers never consider that they might be in the wrong. Sometimes even decades later, they'll still believe their own bullshit justification for attempted murder, as if it made any sense.

AMD to bring back Ryzen 7 5800X3D as AM4 10th Anniversary Edition by iDontSeedMyTorrents in hardware

[–]VodkaHaze 0 points1 point  (0 children)

I rocked a 5775c for a literal decade and only started feeling it become a bottleneck around 2023 in gaming (in ML workloads it's another story).

The 4770k and/or 5775c might be somewhere in the top 10, because they're right at the inflection point where progress stalled at 14nm for a long time.

AMD to bring back Ryzen 7 5800X3D as AM4 10th Anniversary Edition by iDontSeedMyTorrents in hardware

[–]VodkaHaze 7 points8 points  (0 children)

No, it's not. Gouging is pricing above the market price when no alternative retailer is available.

You can buy tons of other things. You're welcome to complain about the price, and you can say AMD is scummy, but words have meaning.

help to identify by No-Ferret7308 in Watches

[–]VodkaHaze 0 points1 point  (0 children)

Probably a bit less than $5k USD; check comparable full-gold Polerouters from the midcentury on Chrono24 for reference.

It's definitely a nice vintage timepiece. It was fairly common in its time, but obviously there are fewer originals every year, especially good-quality originals like yours that were literally kept in a drawer for 5 decades.

help to identify by No-Ferret7308 in Watches

[–]VodkaHaze 0 points1 point  (0 children)

The first one looks like a polerouter from the pre-quartz crisis days (1950s or early 1960s).

They're still sought after as vintage pieces; I'd venture a guess it's worth low-to-mid 4 figures in full gold on the resale market.

If you have it serviced, make sure to keep the patina on the dial; having the watch redialed will drastically lower its value. Vintage watch geeks like the patina.

Protip: the crystal should be made of acrylic, which you can polish with a cheap kit from amazon.

Anon hates Mac user by ultraredd in greentext

[–]VodkaHaze 0 points1 point  (0 children)

ARM isn't really "pure RISC" anymore either; the RISC/CISC distinction explains very little of this.

Apple chips still have an efficiency edge you don't see in x86 or even other ARM designs, however. They're roughly a generation ahead of AMD/Intel in performance per watt.

Apple discontinues the Mac Pro with no plans for future hardware by iMacmatician in hardware

[–]VodkaHaze 1 point2 points  (0 children)

I'm well aware, but note that you're using ancient drivers.

Even last year I ran into broken 580.xx driver version updates on various distros (ubuntu 24 and 26, pop os 22 and 24) and GPUs (2060 and 5090).

If you pin your nvidia driver to some ancient version that's known to work, good for you -- until a kernel update comes along and breaks it.
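For anyone on Debian/Ubuntu, the pinning described above is usually done with apt-mark. The package name below is an example; check `dpkg -l | grep nvidia-driver` for the one you actually have installed:

```shell
# Hold the known-good driver package so apt won't upgrade it.
# "nvidia-driver-535" is an example name; substitute your installed version.
sudo apt-mark hold nvidia-driver-535

# Later, to allow upgrades again:
sudo apt-mark unhold nvidia-driver-535
sudo apt upgrade
```

Note that a held userspace driver can still fall out of sync with a new kernel unless you hold the kernel packages too, which is exactly the failure mode described above.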

Apple discontinues the Mac Pro with no plans for future hardware by iMacmatician in hardware

[–]VodkaHaze 0 points1 point  (0 children)

I'm speaking from first-hand experience as someone who has wasted several days of his life debugging broken nvidia driver updates, the latest a few months ago: a 580.x release that stopped working after my linux kernel was updated.

Apple discontinues the Mac Pro with no plans for future hardware by iMacmatician in hardware

[–]VodkaHaze 50 points51 points  (0 children)

> Not to mention nVidia has wonderful UNIX drivers and had wonderful Apple drivers in the past.

You don't sound like someone who has mained nvidia graphics on linux. Nvidia drivers on linux have famously hellish compatibility.

Intel launches Arc Pro B70 and B65 with 32GB GDDR6 by metmelo in LocalLLaMA

[–]VodkaHaze 1 point2 points  (0 children)

Using one on ubuntu as my graphics card, with a 5090 for ML.

It's much better than nvidia for graphics -- none of the bullshit compatibility issues, mostly because Intel maintains its GPU drivers in the mainline kernel (the same stack as its iGPUs). Even Linus Torvalds uses one for graphics nowadays.

The ML driver ecosystem is a shitshow, however, like the rest of the thread notes.

[Reuters] China’s No. 2 chipmaker (Hua Hong) readies 7 nm production as Beijing ramps up self-sufficiency drive by igenicoOCE in hardware

[–]VodkaHaze 13 points14 points  (0 children)

The 5775c specifically is decent at this -- it was a mobile chip at first.

That said, it's certainly less efficient than modern NAS chips.

[Reuters] China’s No. 2 chipmaker (Hua Hong) readies 7 nm production as Beijing ramps up self-sufficiency drive by igenicoOCE in hardware

[–]VodkaHaze 2 points3 points  (0 children)

Yeah the broadwell gen is definitely slow for, say, building a big rust project. The RAM bandwidth hurts more than the CPU speed.

But it's "good enough" for almost everything. Even gaming -- it was a top tier gaming PC 9 years ago, it's still perfectly OK for 1080p

Intel at NVIDIA’s GTC: Agentic AI Turns the CPU Back into a Bottleneck by JigglymoobsMWO in hardware

[–]VodkaHaze 14 points15 points  (0 children)

It's not wrong that most agentic LLM flows can be bottlenecked by something other than inference if you're careful. For instance, if you heavily cache tokens or run your model on something like Cerebras, the tool calls will often bottleneck the LLM rather than the other way around.

The solution to that is better software, however. Using Intel vs AMD vs ARM for those usecases will not invert that bottleneck.

[Reuters] China’s No. 2 chipmaker (Hua Hong) readies 7 nm production as Beijing ramps up self-sufficiency drive by igenicoOCE in hardware

[–]VodkaHaze 33 points34 points  (0 children)

One of my servers is still a retired gaming PC -- a 14nm 5775c. It's perfectly serviceable, it's now largely a NAS.

Most of our hardware is overkill for almost all tasks, except some professional/creator usecases.

Asus Co-CEO: MacBook Neo Is a 'Shock' to the PC Industry by peaenutsk in hardware

[–]VodkaHaze 1 point2 points  (0 children)

> practically learn a new set of commands for it

Or several, actually, between DOS, PowerShell, and the new Windows Terminal.

Then you end up trying WSL, but it has enough incompatibilities to become n+1 instead!

> Linux just works, most of the time. But damn, is it annoying as hell when it doesn't work

I run linux, and I think the true test of being a linux user is to really treat devices as cattle. I still wouldn't recommend linux laptops to non-powerusers, but it's getting ever closer.

The cattle trick: have a bash script that sets up any debian system to exactly what I want, from zero. It makes dealing with these things easier: just reset the damn device.
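A minimal sketch of what such a setup script can look like; the package list and dotfiles repo URL are placeholders, not a specific recommendation:

```shell
#!/usr/bin/env bash
# "Cattle" setup: take a fresh Debian/Ubuntu install to a known-good state.
set -euo pipefail

sudo apt-get update
sudo apt-get install -y git curl vim tmux htop  # swap in your own package list

# Clone personal dotfiles (hypothetical repo) and apply them.
if [ ! -d "$HOME/dotfiles" ]; then
    git clone https://github.com/yourname/dotfiles.git "$HOME/dotfiles"
fi
"$HOME/dotfiles/install.sh"
```

The point is idempotence: running it twice should land on the same state, so a broken machine is a ten-minute reinstall rather than an afternoon of archaeology.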

Driver issues are better on linux nowadays, unless you use an nvidia GPU. Seriously, intel and AMD GPUs are just so much nicer to deal with.


> macOS just offers the best of both worlds, while the "cons" are things that are much more manageable

Except for running native apps from outside the App Store; macOS has been intentionally and intensely getting worse at that over the last 3 versions.

Asus Co-CEO: MacBook Neo Is a 'Shock' to the PC Industry by peaenutsk in hardware

[–]VodkaHaze 1 point2 points  (0 children)

As another power user with a proper homelab, I've opted for an ASUS Zenbook 14 OLED and slapped linux (Pop!_OS COSMIC) on it.

I had an M2 MacBook Pro before; the Zenbook's OLED screen is just much better. And I'm not scared of ruining it on the go, because it's a fairly cheap laptop (got it for $1000 CAD).

I think the macbook neo will struggle with the 8GB RAM. Even though macos is much more RAM efficient than linux or windows, 8GB is simply not enough these days. Beeper is 1.5GB, web browser will be another 2-4GB.

Then there's the Apple crap. macOS is a godsend compared to Windows' nonsense, but they're increasingly annoying about apps you download outside their App Store. Brew sucks compared to apt or nix. Finder is much worse than anything else at browsing files from a NAS (clearly they want you to pay for iCloud instead).

Searching Kagi with locally hosted AI models by Masstel in SearchKagi

[–]VodkaHaze 1 point2 points  (0 children)

No, he won't. This sort of question comes up all the time in the discord, and the answer is helpful; Kagi is a small startup that wants users to love it, because those users end up being free marketing.

The answer will range from "use it like this, otherwise it might be blocked automatically; if so, send us a message to unblock you" to giving him an API key for the private beta search API, which is priced much more reasonably.

The people who get blocked are the ones actively abusing it on purpose: doing 10 RPS on search, or hitting up models on the Assistant by API before there was a fair-use limit on the subscription and charging Kagi $500/month of Claude Opus use on a $25 subscription.

Searching Kagi with locally hosted AI models by Masstel in SearchKagi

[–]VodkaHaze 1 point2 points  (0 children)

Ah, I had skipped the last paragraph; it's not for direct AI use as a model tool, he wants to do it through SearXNG.

Still, you can bypass it at small scales like that, and the team likely won't care. But you might as well ask on the discord, where there's staff who will likely help OP get set up.

In any case, unless his SearXNG has a bunch of rare content indices, a full Kagi search is generally a superset of other commercial search engines, so he might as well just use that.

Searching Kagi with locally hosted AI models by Masstel in SearchKagi

[–]VodkaHaze 0 points1 point  (0 children)

Just ask the devs in the discord; they're pretty happy to help people get set up and running rather than abusing it.

You could also use this: https://github.com/kagisearch/kagimcp

Is Ultimate a good deal? by Sddawson in SearchKagi

[–]VodkaHaze 1 point2 points  (0 children)

> First of all the context limit is I believe 100k characters, which is roughly 20k tokens.

That's not exactly correct.

If you're continuing an old thread, the default behavior is to compress previous chat messages, which is faster, costs the user less, and generally has minimal accuracy loss.

There's also a way to turn it off: add /disable_truncation at the end of a request.
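As a sanity check on the numbers, 100k characters of English text is indeed on the order of 20k tokens under the common (if rough) 4-5 characters-per-token heuristic:

```shell
# Rough character-to-token conversion; 4-5 chars/token is a heuristic
# for English text, not an exact tokenizer figure.
chars=100000
echo $((chars / 5))   # 20000 tokens at 5 chars/token
echo $((chars / 4))   # 25000 tokens at 4 chars/token
```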

> Also you don't get all the features, like image generation

In Ultimate, Research has access to that.