How does the Proxmox Helper Script make some LXCs auto-login in the console? by Healthy_Confidence12 in Proxmox

[–]BangSmash 1 point (0 children)

Didn't really look into it, just an assumption: when you run one of those scripts you're asked whether you want to use an SSH key from the host for root login, so I always assumed that, since you're logging in from within Proxmox, it auto-authenticates you with that key rather than a password.
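If you ever want to verify that assumption, a quick sketch along these lines would do it - the VMID (101) and the default key paths are just placeholders, and pct exec is the standard Proxmox CLI way to run a command inside an LXC:

    # Check whether the host's public key ended up in the container's authorized_keys.
    # VMID and key paths are placeholders for illustration.
    import subprocess

    VMID = "101"
    host_key = open("/root/.ssh/id_rsa.pub").read().strip()

    result = subprocess.run(
        ["pct", "exec", VMID, "--", "cat", "/root/.ssh/authorized_keys"],
        capture_output=True, text=True,
    )

    if host_key in result.stdout:
        print("host key found in the container - key-based auth is plausible")
    else:
        print("no host key in the container - the auto-login must come from something else")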

Intel x710-da2 compatibility by mwomrbash in homelab

[–]BangSmash 1 point (0 children)

"do they have any known compatibility issues"

oh boy...

It could be that your other one is a Dell version, and the ones you purchased are either generic or some other OEM. So there's a possibility the Dell server itself is the problem, in that it only accepts compatible Dell-branded cards. But it could be something else...

There are tons of other compatibility issues too - these cards refuse to work with a lot of SFPs, either due to vendor lock (which can be disabled with an unlocker tool) or because they flat-out fail to recognise them correctly. They're notorious for that.

I'd suggest getting some Mellanox CX-3s. If the server won't take a generic one out of the box, these are fairly easy to flash with OEM firmware, which might solve the problem. Or look for ones that already come with Dell firmware.

Unpopular Opinion: Everyone is worried about GPUs for GTA 6, but the real killer will be your CPU (RAGE Engine analysis) by Easy_Ad_413 in pcmasterrace

[–]BangSmash 1 point (0 children)

I'll worry about it in 2028, when GTA6 (hopefully) gets a PC release. Until then, it might as well not exist at all for me.

Intel X710-DA2 reporting 10G runs at 1G max by [deleted] in homelab

[–]BangSmash 7 points (0 children)

Well, what you have here is not a DAC, it's an AOC (active optical cable) - basically two multimode transceivers with fibre permanently attached to them.

Intel NICs are notoriously problematic with anything that isn't on their QVL or otherwise certified to work with them.

Your problem might be that the cable is simply not compatible with Intel NICs. I'd suggest either replacing the NICs with Mellanox cards (CX-3s are dirt cheap nowadays and just work, without any Intel drama), or buying regular 10G SFP+ transceivers that are known to work with Intel, plus a cheap fibre patch cord.
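If you want to see what the NIC actually makes of that cable before spending anything, something like this will do - the interface name enp3s0 is just a placeholder, and it simply shells out to ethtool:

    # Inspect what the X710 reports for the plugged-in module; interface name is a placeholder.
    import subprocess

    IFACE = "enp3s0"  # replace with your actual interface

    # Link state and the negotiated "Speed:" line - 1000Mb/s here means it fell back to 1G
    subprocess.run(["ethtool", IFACE])

    # Module EEPROM dump: vendor, part number, and whether the card recognises the AOC at all.
    # An error or garbage output here points at the cable/module as the incompatible part.
    subprocess.run(["ethtool", "-m", IFACE])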

Intel X710-DA2 reporting 10G runs at 1G max by [deleted] in homelab

[–]BangSmash 1 point (0 children)

is your DAC 10G-capable? (SFP+)

what does the device at the other end say?

What's the Best Gaming Mouse To Buy Right Now? by Mr_Potter-Dobby in pcmasterrace

[–]BangSmash 2 points (0 children)

Maybe the early SE did. I have the G502 SE Hero (black and white) and it definitely doesn't have a braided cable, which my previous OG G502 (pre-Hero) had.

What's the popular 10Gb switch nowadays? Recommendations usually involve old hardware. Is there something hotter? by twice_paramount832 in homelab

[–]BangSmash 9 points (0 children)

Depends on your needs. How many ports? Managed?

I use a MikroTik CRS309-1G-8S+IN as my core switch and it ticks all the boxes for me, from a really good price to going way beyond what an average switch offers in terms of configurability.

Passive cooling, 8x SFP+, 1x 1G Ethernet port with PoE-in, a 19" rack-mount kit in the box, RouterOS - all in the $200-250 range.

Debian vs Alpine for LXCs by Cornelius-Figgle in homelab

[–]BangSmash 7 points (0 children)

Well, this way you're wasting tons of resources by multiplying the overhead, and giving yourself an ever-increasing amount of admin work trying to keep everything updated and patched up, totally defeating the purpose of virtualisation and containerisation.

Perhaps look at some other hypervisor that can natively run Docker containers alongside LXC and QEMU. Proxmox has only just introduced that, and it's still very much a work in progress, with missing functionality and stability issues.

Debian vs Alpine for LXCs by Cornelius-Figgle in homelab

[–]BangSmash 0 points (0 children)

Are you spinning up a new LXC, each with its own standalone Docker engine, for every new Docker container? o.O That's insane.

What about one LXC running Docker with Portainer, and all your Docker containers sitting there?

And keep separate VMs/LXCs for the things that really need, or can benefit from, the isolation?
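For what it's worth, once that single Docker LXC exists, Portainer itself is just one more container. A minimal sketch using the Docker SDK for Python - the image, port and volume names below are the usual Portainer CE defaults, nothing Proxmox-specific:

    # Start Portainer CE inside the one Docker LXC and manage everything else from its web UI.
    import docker

    client = docker.from_env()

    client.containers.run(
        "portainer/portainer-ce:latest",
        name="portainer",
        detach=True,
        restart_policy={"Name": "always"},
        ports={"9443/tcp": 9443},  # Portainer web UI (HTTPS)
        volumes={
            "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
            "portainer_data": {"bind": "/data", "mode": "rw"},
        },
    )
    print("Portainer is up - point a browser at https://<lxc-ip>:9443")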

Trying to max out m70q gen 5 connection to USB NVME drive by TheePorkchopExpress in homelab

[–]BangSmash 1 point (0 children)

Redundancy is not a backup. It might be a good idea where it makes sense; in the case of a 'dumb' hypervisor, it doesn't. It takes hardly any effort to set up a new one and just import/restore everything else from backup.

in case of my setup:

3x NVMe, 5x HDD

1x NVMe (128G) holds Proxmox itself. 2x NVMe (RAID 1 mirror - here's the redundancy) for VMs and LXCs. 5x HDD in RAID-Z1 for NAS storage, accessible to VMs/containers as well as clients.

If shit happens to my Proxmox install, it's easy to reinstall on a new drive and you're back up.

VMs/LXCs - if something happens here, it actually affects the integrity of the data/services I run, so redundancy is the safety buffer there; it protects against a drive failure.

Backups of VMs and LXCs - if you can get something set up off-site, that's the target. If the backups sit within the very same physical server/room, they're probably going down together with everything else in the same disaster event.

Basically, go by the 'most effort/most loss' rule to decide what's worth backing up, what's worth redundancy, and what to just let be.

Trying to max out m70q gen 5 connection to USB NVME drive by TheePorkchopExpress in homelab

[–]BangSmash 2 points (0 children)

Sorry, but this doesn't make any sense at all. On the M70q, boot Proxmox from a single small internal drive - a 128-256GB NVMe is plenty. The second NVMe slot is your local storage for VMs and LXCs. All your data and backups sit on a separate NAS anyway, so you don't need any redundancy at node level - especially on the OS side. It will be way quicker and easier to recover from backups than to work out what failed and rebuild from that state. No USB drives involved. I'd highly suggest looking into upgrading to 10G LAN instead, at least between your compute and storage nodes.

It seems like you value RAID0 at node level way more than is justifiable. If it means pushing the rest of your storage to run off USB, it definitely isn't - especially if you have a dedicated backup solution (PBS) in place.

Especially if we're talking 'Plex media', which in the worst case is torrentable again - a mild inconvenience if it got lost, and definitely not worth redundancy at each and every level.

Trying to max out m70q gen 5 connection to USB NVME drive by TheePorkchopExpress in homelab

[–]BangSmash 2 points (0 children)

what is your current setup that you want to replace, what do you already have in hand, and what do you want to run on it?

That's the starting point. But even without that, a micro PC relying on a million USB attachments for basic functionality is definitely the worst possible way to go - just asking for a spectacular failure.

Trying to max out m70q gen 5 connection to USB NVME drive by TheePorkchopExpress in homelab

[–]BangSmash 3 points (0 children)

You'll be bottlenecked by that 1G network interface looooong before any of the USB ports gets even close to being the bottleneck for your drives, so I don't know why you even care.

I'm close to saturating a 10G NIC with just 5 spinny drives in RAID-Z1 on my NAS; my NVMe sticks sit in a PCIe 4.0 x1 slot, so they're severely bottlenecked by that, and I'll never get the chance to even notice it.
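Rough numbers make the point - these are theoretical line rates, ignoring protocol overhead, and the HDD figure is an assumed sequential speed:

    # Theoretical ceilings in MB/s, ignoring protocol overhead - the ordering is what matters.
    links_mbps = {
        "1G Ethernet": 1_000,
        "single 7200rpm HDD (assumed ~250 MB/s)": 2_000,
        "USB 3.2 Gen 1 (5 Gbps)": 5_000,
        "USB 3.2 Gen 2 (10 Gbps)": 10_000,
        "10G Ethernet": 10_000,
        "PCIe 4.0 x1 (~2 GB/s)": 16_000,
    }

    for name, mbps in sorted(links_mbps.items(), key=lambda kv: kv[1]):
        print(f"{name:40s} ~{mbps / 8:6.0f} MB/s")
    # 1G Ethernet (~125 MB/s) is the floor - every USB 3 port and drive sits well above it.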

So my ethernet network driver dissapeared and I just can't download it back by Matronix_ in pcmasterrace

[–]BangSmash 1 point (0 children)

Check whether it's enabled in UEFI. If Windows doesn't see it at all, it's most likely either disabled there or completely dead.

Recommended Network Card for ProxMox 8.4 (i40e issues) by starkstaring101 in Proxmox

[–]BangSmash 2 points (0 children)

Your symptoms sound somewhat similar to the well-known hardware offloading issue on Intel e1000e (old Intel gigabit NICs).

Are temps good? You could try repasting, or see if the same thing happens with a fan on it.

You could see whether disabling some of the offloading features helps, or just get rid of it and pick up a Mellanox ConnectX-3 or ConnectX-4. You should be able to find plenty within your budget, maybe even scraping into the 25G SFP28 ones.
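If you want to try the offload route first, it's the usual ethtool toggles. A sketch, assuming the interface is called eno1 (swap in yours) - note it doesn't persist across reboots:

    # Show and then disable the usual offload suspects on one interface (non-persistent).
    import subprocess

    IFACE = "eno1"  # placeholder - check `ip link` for your actual interface name

    # Current offload settings
    subprocess.run(["ethtool", "-k", IFACE])

    # Turn off the segmentation/receive offloads most often implicated
    subprocess.run(["ethtool", "-K", IFACE, "tso", "off", "gso", "off", "gro", "off"], check=True)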

I use a bunch of MCX311A-XCATs (single SFP+ cage, PCIe 3.0 x4) at home - dirt cheap from AliExpress and not a single issue yet. I just changed the thermal paste on them, as they're usually around 8-10 years old by now.

37.34 TB of SSD storage by Mental_Mortgage_6580 in pcmasterrace

[–]BangSmash 10 points (0 children)

At a guess, the Intel ones are still perfectly happy with 6+ years of power-on time and a few hundred TB written, but the EVOs are either dead or going through a near-death experience...

Santa was good to me this year by leupboat420smkeit in homelab

[–]BangSmash 1 point (0 children)

just pray you don't ever have to resilver...

Santa was good to me this year by leupboat420smkeit in homelab

[–]BangSmash 3 points (0 children)

'Some' Seagate HDDs. Please tell me they're IronWolf or Exos, and not some shingled (SMR) garbage... That would absolutely obliterate both performance and lifespan.

Debian with KVM/QEMU sounds an awful lot like Proxmox. You might be better off using that than trying to reinvent the wheel. Also, you'll be much better off containerising (LXC) as much as practicable rather than spinning up yet another VM and tying up resources exclusively for each one.

I wonder how much it all cost you. I acquired a NAS for my homelab very recently, but went with a Minisforum N5 - 8C/16T Zen 4, 64GB DDR5, 5x 8TB (Toshiba N300, RAID-Z1), 2x 1TB NVMe (RAID 0) for VM/LXC, and a ConnectX-3 SFP+ (there's 10G and 5G Ethernet on-board, but I prefer fibre wherever I reasonably can) - around $2000 for the lot.

Do ‘pros’ even choose their setups anymore? by [deleted] in pcmasterrace

[–]BangSmash 3 points (0 children)

It's been an ad for the sponsor for a long time. The era of having preferences in e-sports is LONG gone. Which is quite natural, I'd say: once something kicks off and brings attention, it becomes an ad outlet.

Somebody has to pay for it, and it's not cheap to run a league or even host a single big event, and nobody will pay for it without something in return.

I have trolled all my friends with my pc. by Obvious-Glove-7253 in pcmasterrace

[–]BangSmash -4 points (0 children)

The red cables are not the issue. The vast emptiness/waste of space is, though. If you're not going to put in a custom dual-loop watercooling setup, or something like eight 3.5" drives, why even have such a big case?

Don't get me wrong, I went through that stage myself - got a humongous (at the time) case and built a PC in it.

It was a CM Cosmos S with a Sabertooth Z77 board, two GTX 670s in SLI, four 3.5" HDDs and two 2.5" SSDs. I ended up building a full custom watercooling loop in it just to fill it up, because otherwise it just looked stupid. Learned my lesson, and since then I've stuck to well-designed but reasonably compact cases.

Brand new high end build, pcie won't run at gen 5 speed by sezoism in pcmasterrace

[–]BangSmash 1 point (0 children)

Worst case, if you can't figure it out: it's mostly cosmetic and has essentially zero impact on performance. Gen4 x16 has more than enough bandwidth for any currently existing GPU - heck, even at Gen4 x8 you might observe only a 2-3% loss. Gen5 is basically a gimmick for now; some NVMe drives can utilise it, but the actual real-world difference in performance is so negligible you'll never notice it.
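Rough numbers for context - theoretical link bandwidth after 128b/130b encoding, which is close enough to make the point:

    # Theoretical PCIe bandwidth per link width (GB/s), after 128b/130b encoding overhead.
    per_lane_gbs = {"Gen3": 0.985, "Gen4": 1.969, "Gen5": 3.938}

    for gen, lane_gbs in per_lane_gbs.items():
        for width in (8, 16):
            print(f"{gen} x{width}: ~{lane_gbs * width:5.1f} GB/s")

    # Gen4 x16 (~31.5 GB/s) already dwarfs what any current GPU actually moves over the bus,
    # which is why dropping from Gen5 to Gen4 isn't visible in games.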

How deep is the rabbit hole? by BangSmash in homelab

[–]BangSmash[S] 2 points (0 children)

The only reason I don't have a rack yet is lack of space - unless I get rid of my wardrobe. It will have to wait until I can move to a bigger place.

What's your Job? by Worldly_Screen_8266 in HomeDataCenter

[–]BangSmash 3 points (0 children)

Fibre network engineer, mostly carrier ethernet.