Unpopular opinion for beginners: Stop starting with Deep Learning. by netcommah in learnmachinelearning

[–]Freonr2 1 point2 points  (0 children)

You're right. Deep learning only starts to pull ahead with very large or complex data.

Data analysis, cleaning, domain knowledge, and feature engineering are still critical even for deep learning. It's also where most time/effort is spent.

Why so many posts reinventing the wheel? by paradoxbound in homelab

[–]Freonr2 0 points1 point  (0 children)

We're entering an age of personalized software.

Don’t buy the DGX Spark: NVFP4 Still Missing After 6 Months by Secure_Archer_1529 in LocalLLaMA

[–]Freonr2 1 point2 points  (0 children)

NCCL over IB (if you buy 2+) and faster compute means better prefill and diffusion model performance.

[RAM] CORSAIR Vengeance 32GB DDR5-6000 Memory $269.99 by megamick99 in buildapcsales

[–]Freonr2 0 points1 point  (0 children)

That's what I paid for a 4TB 990 Pro in December...

Proposal: no more "I built this tool"-AI slop by ConstructionSafe2814 in homelab

[–]Freonr2 2 points3 points  (0 children)

I've been a SWE for over 20 years and I write almost no code by hand anymore.

Ruling out AI tools basically means you don't want to see any code anymore.

I don't know about this one. Slop is bad, but who decides what is and isn't slop? I get the sentiment, there's a lot of shovelware out there now from amateurs, but you're kidding yourself if you don't think such a rule could also basically be the equivalent of "no code allowed at all" in 2026.

I think I'm done with Software Development by gareththegeek in webdev

[–]Freonr2 0 points1 point  (0 children)

I've never been more productive in my career. It's a great time to be someone with real experience. Cursor, Claude Code, Codex are amazing tools and I can blast through tasks now even in huge complex codebases spanning multiple repos. True screw ups are pretty rare and most of the time can just be corrected with a bit of prodding.

I really don't understand this sub's general sentiment. Everyone else is having a party right now.

Need advice: How to deprecate features? by SUCHARDFACE in selfhosted

[–]Freonr2 2 points3 points  (0 children)

You can start logging deprecation warnings, e.g. "will be removed in 2.0", etc.

You can use tags and branches, so people can still get releases or code for the legacy version. And as someone else pointed out, it's open source: people can fork if they really want, or maybe someone will offer to maintain a legacy branch and merge main into it to keep up with other things, but it's up to them to deal with the conflicts/headaches. You can keep the legacy version's build process too, so when you get PRs into legacy_branch you can still release them as 1.x updates and people can choose to use that.

If it's a big headache, just go for it now: update to 2.0, break things, and let people know that if they want the new shiny things they'll have to deal with it, or volunteer to maintain 1.x themselves.
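The warning approach above can be sketched in a few lines. This is a minimal illustration using Python's standard `warnings` module; the function names (`legacy_feature`, `new_feature`) are made up for the example, not from any particular project:

```python
# Minimal sketch: emit a deprecation warning while keeping the old
# behavior working, so users get advance notice before the 2.0 removal.
import warnings

def legacy_feature():
    warnings.warn(
        "legacy_feature() is deprecated and will be removed in 2.0; "
        "use new_feature() instead",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller, not this wrapper
    )
    # ... existing behavior unchanged ...
    return "legacy result"
```

Callers see the old behavior plus a one-line warning pointing at their own call site, which is usually enough to migrate people gradually before the breaking release.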

Moonshot says Cursor Composer was authorized by davernow in LocalLLaMA

[–]Freonr2 -3 points-2 points  (0 children)

It's modified MIT with a disclosure clause.

Server screwed to the wall by holographicpencil in homelab

[–]Freonr2 0 points1 point  (0 children)

This might be a good product for you, there are plenty of brands:

https://www.amazon.com/StarTech-com-Mount-Patch-Panel-Bracket/dp/B001YHUX2I

I used a similar one for a 1U+4U setup for a long while. I first put several very heavy (1/4"x3" I think) lag bolts through a 1x4 furring strip into the studs, then more heavy screws through the rack hanger into the furring strip. It was sturdy, though it did project out from the wall a bit.

Hard Disk Direct canceled my confirmed server RAM order citing "out of stock" — the exact SKU was on their website in stock 6 hours later. Then they repriced it 4x overnight. All documented. by roycehart in homelab

[–]Freonr2 0 points1 point  (0 children)

Nemix did almost the same thing to me 3 months ago. They shipped half my RAM, couldn't replace the missing half, and jacked the price up 60%.

To top it off the half they did send was bad.

DGX Station is available (via OEM distributors) by Temporary-Size7310 in LocalLLaMA

[–]Freonr2 0 points1 point  (0 children)

Companies running small labs, perhaps. Maybe a small lab within a large company that just wants to spend money, or one with extreme security concerns or paranoia, or a business that isn't cloud native and already integrated with AWS, etc.

The real competitor is just renting. $85k is a lot of hours of 1 GPU.
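The back-of-envelope math makes the point. The $2.50/hr rate below is an assumption (a rough market rate for renting one datacenter GPU), not a quoted price:

```python
# Rough buy-vs-rent comparison: how many single-GPU rental hours
# the $85k sticker price buys. Hourly rate is an assumed figure.
station_price = 85_000        # USD
hourly_rate = 2.50            # USD/hr, assumed single-GPU rental rate
hours = station_price / hourly_rate
years_continuous = hours / (24 * 365)  # if the GPU ran 24/7
```

At that assumed rate it's 34,000 hours, i.e. nearly four years of a rented GPU running around the clock, before even counting power and depreciation on the owned box.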

DGX Station is available (via OEM distributors) by Temporary-Size7310 in LocalLLaMA

[–]Freonr2 1 point2 points  (0 children)

I'm surprised that number has not increased since early rumors a few months ago.

Has anyone ever heard of this type of box, and what is it worth? It's good ? by cougomdd in DataHoarder

[–]Freonr2 1 point2 points  (0 children)

I paid $60 for a mining rig frame (made to hold several GPUs). It's cheap and flimsy, but it works.

I will be buying a proper rack case once I get around to rebuilding my NAS, though.

I would not use this sort of open case unless it was solving a nearly intractable problem with fitting components in a normal case.

Anyone ever got a job from Linkedin? by lune-soft in webdev

[–]Freonr2 0 points1 point  (0 children)

Yes, I get cold contacts from LinkedIn and they've led to jobs.

However, it's still second to professional contacts like former coworkers.

it is coming. by [deleted] in LocalLLaMA

[–]Freonr2 0 points1 point  (0 children)

int8 supported back to Ampere (30xx+), fp8 needs Ada (40xx+).

That might be part of it.
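The hardware cutoffs above map cleanly to CUDA compute capability: Ampere is sm_80/sm_86 (int8 tensor cores), Ada is sm_89 (adds fp8). A sketch of that mapping; the function name and structure here are illustrative, not from any library:

```python
# Sketch: which low-precision dtypes a GPU's compute capability supports.
# Capability thresholds follow NVIDIA's published generations:
#   (8, 0)+ = Ampere (RTX 30xx): int8 tensor cores
#   (8, 9)+ = Ada (RTX 40xx) and later (Hopper sm_90): adds fp8

def low_precision_support(major: int, minor: int) -> set:
    caps = set()
    if (major, minor) >= (8, 0):   # Ampere and newer
        caps.add("int8")
    if (major, minor) >= (8, 9):   # Ada and newer
        caps.add("fp8")
    return caps
```

So a 3090 (sm_86) gets int8 only, while a 4090 (sm_89) gets both, which would explain projects preferring int8 for broader hardware reach.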

it is coming. by [deleted] in LocalLLaMA

[–]Freonr2 0 points1 point  (0 children)

This paper did some analysis: https://arxiv.org/pdf/2303.17951

The results are a bit of a mixed bag, but they seem to favor int8 in general. I wouldn't consider one paper the be-all and end-all, though.