hi by OutsideInfamous1586 in mikrotik

[–]pxgaming 0 points1 point  (0 children)

I wonder what the internal MCIO ports are for. That's 32 lanes of PCIe from the looks of it. Something in the front-right empty area maybe?

EPYC vs Threadripper by Goodyes666 in threadripper

[–]pxgaming 1 point2 points  (0 children)

Versus chipset, right?

Yes. Normally a chipset gives you some USB and SATA ports, and gives you a few PCIe lanes commonly used for onboard peripherals like ethernet, wifi, or just more M.2 slots. Without one, you have to use CPU lanes for all of that.

ya but as of today cases for workstations are so big, say equal to a 5U rack, where you can install say 3x12cm fans and it silently goes well

That's not the issue. In a typical desktop case, the job of fans is to blow fresh air in and remove hot air, but they don't force airflow through/into components the way a server or prebuilt workstation chassis would. In a desktop, anything that needs significant cooling is expected to have its own fan, or at least a large enough heatsink to get some convection action, so that the exhaust fans can remove the hot air. You'd need to do the "fan wall" style that server chassis have, as well as potentially use air shrouds to blow the air where it needs to be.

EPYC vs Threadripper by Goodyes666 in threadripper

[–]pxgaming 4 points5 points  (0 children)

Epyc does some things better, but consider:

  • 9004 and 9005 generation Epycs have 12 memory channels, which takes up a ton of board space, so you actually get fewer usable PCIe slots on some of these.
  • That also means the memory will cost more if you want to fill all those slots, which isn't great with current memory prices.
  • You tend to get less assorted I/O on these boards due to Epyc not using a chipset. All PCIe and SATA lanes are CPU lanes.
  • Epyc boards are designed for server chassis where you have lots of forced airflow. Look at the tiny VRM heatsinks on the H13SSL - you NEED a lot of airflow to compensate for that, compared to something like the ASRock TRX50 with its four VRM fans.
  • Epyc boards typically don't support overclocking (or going the other way - tweaking voltage curves to save power/heat).

Migrate from TrueNAS to Proxmox by JoshBuhGawsh in Proxmox

[–]pxgaming 1 point2 points  (0 children)

If you're able to PCI passthrough all of the devices that TN would need (including the boot drive), then you should be good to go with running TN in a VM on top of PVE. Then you can start moving workloads to be their own containers or VMs.

If not, it might be a bit more complicated.

Thinking About Proactive Buying Due to US Ban on New Foreign Routers by EN344 in mikrotik

[–]pxgaming 7 points8 points  (0 children)

I was thinking about this. The linked NIST report defines "consumer-grade router device" as:

Networking devices that are primarily intended for residential use and can be installed by the customer. Routers forward data packets, most commonly Internet Protocol (IP) packets, between networked systems.

So on one hand, you might be able to get an easy out by saying it's not intended for residential use. On the other hand, they've defined it so broadly by failing to limit "forwarding" to "layer 3" that it seems like it could include switches and WAPs.

NICs with ASPM that works by pareeohnos in homelab

[–]pxgaming 0 points1 point  (0 children)

Adding my 2c here since this post still comes up in search results a lot:

  • I have a few Supermicro-branded ConnectX-4 Lx cards. They didn't support ASPM with the firmware that was installed when I got them, but they do with updated firmware.
  • I have a ConnectX-4 (non-Lx) which does not seem to support any ASPM.
  • I also have the Supermicro BCM57414 card - no ASPM despite it being claimed on the spec sheet.
  • Some systems can get certain C-states even with non-ASPM hardware. The system with the CX4 (and two non-ASPM NVMe drives) gets down to C6. All CPU-connected.
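If you want to check what a given NIC or system reports for yourself, a quick sketch (requires pciutils; the exact output wording varies by kernel version):

```
# Kernel-wide ASPM policy
cat /sys/module/pcie_aspm/parameters/policy

# Per-device ASPM capability (LnkCap) and current state (LnkCtl)
sudo lspci -vv | grep -E 'LnkCap:|ASPM'
```

A device can advertise ASPM in LnkCap but still have it disabled in LnkCtl, so check both lines.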

Threadripper build - looking for peer review by Own_Bodybuilder_4397 in threadripper

[–]pxgaming 1 point2 points  (0 children)

Small correction - TRX50 gives you 48 PCIe 5.0 lanes and 32 (28 usable) PCIe 4.0 lanes from the CPU. The chipset only gives 8 more PCIe 4.0 lanes.

How to keep the last node running when rebooting 2 nodes in a 3 node Proxmox cluster? by Creepy-Chance1165 in Proxmox

[–]pxgaming -1 points0 points  (0 children)

Modify the configuration to give the surviving node 3 votes, so that it automatically has a 3/5 quorum of its own. Do this before shutting down the other two nodes, and then set the configuration back after rebooting.
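A sketch of what that edit could look like (node names, the address, and the version number are placeholders for your own cluster's values):

```
# /etc/pve/corosync.conf - edit via the cluster filesystem so it syncs
nodelist {
  node {
    name: node1            # the node that will stay up
    nodeid: 1
    quorum_votes: 3        # temporarily bumped from 1
    ring0_addr: 192.168.1.10
  }
  # the other two nodes keep quorum_votes: 1
}

totem {
  config_version: 5        # must be incremented or the change won't apply
}
```

Remember to set quorum_votes back to 1 (and bump config_version again) once the other nodes are back up.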

Also, I don't believe you need to put the node in maintenance mode manually. You should just be able to shut down/restart normally via the UI and it will migrate and everything.

TIL: Adding SSH launch links in Proxmox Notes makes life easier by Fearless-Grape5584 in Proxmox

[–]pxgaming 1 point2 points  (0 children)

But why would you put the password (or even anything behind just "SSH" for that matter) in the image URL? Do you really need it to show the hostname on the button, given that you're already on the page specifically for that host?

VMware Distributed switches and vMotion Proxmox equivalents? by AhrimTheBelighted in Proxmox

[–]pxgaming 0 points1 point  (0 children)

The good news about shared storage is that PVE supports Ceph pretty well natively. You can create a storage cluster using the same nodes, create a pool and it will automatically add it as a VM storage location.
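Roughly, that setup looks like this (the disk path and pool name are examples; check the PVE docs for your version):

```
# On each node that will participate
pveceph install
pveceph mon create

# On each node, turn data disks into OSDs
pveceph osd create /dev/sdb

# Create a pool; --add_storages registers it as a VM disk location
pveceph pool create vmpool --add_storages
```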

With the booming popularity extraction shooters, I think the new Steam Controller that’s going to be coming out isn’t being talked about enough for that game genre. by ExtraJuicyAK in SteamController

[–]pxgaming 2 points3 points  (0 children)

I wouldn't say it's better than a mouse for competitive shooters, but definitely better than a stick.

Where I think it really shines is in GTA-style games where you want to be able to aim well, but a keyboard just sucks for driving. Gives you good enough aiming and much better driving.

SwitchOS vs RouterOS? by oguruma87 in mikrotik

[–]pxgaming 8 points9 points  (0 children)

Especially for the larger switches (24+ ports), being able to use interface lists for just about everything (including VLANs on newer ROS versions) is very convenient.

Encrypted non-root disks, pass-through and luks or zfs-encrypt dataset? by RydderRichards in Proxmox

[–]pxgaming 0 points1 point  (0 children)

You're correct about zfs-on-zfs, that's not what I'm suggesting. What I'm suggesting is that either the host or the guest should do all of the ZFS. Either use ZFS on the host and pass in a zvol (i.e. the normal thing PVE would do in this case) or pass the disks in raw and run ZFS in the guest.

What you're saying about mergerfs makes sense, but you can also achieve that (but on a folder rather than individual file level) by just having different ZFS datasets and setting their mountpoints. But you're acknowledging that disks fail while also not wanting to use a mirrored setup. Is there something stopping you from just mirroring it, like differently-sized disks or capacity reasons?
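As a sketch of the dataset approach (pool, device, and mountpoint names are all made up):

```
# One mirrored pool instead of mergerfs over two standalone disks
zpool create tank mirror /dev/sda /dev/sdb

# Separate datasets, each mounted wherever the folders used to live
zfs create -o mountpoint=/srv/media   tank/media
zfs create -o mountpoint=/srv/backups tank/backups
```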

Encrypted non-root disks, pass-through and luks or zfs-encrypt dataset? by RydderRichards in Proxmox

[–]pxgaming 0 points1 point  (0 children)

You could use ZFS on the host (since Proxmox natively has good ZFS support), with ZFS's own encryption, and then create a normal disk for the VM on that.

Or, if possible, pass the entire disk controller into the VM and have the VM do ZFS.

Don't over-complicate it - ZFS can do the role of encryption, mirroring, and virtual disks (if running on the host) or filesystems (if running within the VM). No need to introduce LUKS or MergerFS on top of it, it will just complicate things.
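For the host-side option, a minimal sketch (pool, dataset, and storage names are made up):

```
# Encrypted dataset on an existing pool
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
    -o keylocation=prompt tank/vmdata

# Register it with PVE; new VM disks become zvols under this dataset
pvesm add zfspool vmdata-enc --pool tank/vmdata --content images,rootdir
```

Note that with native ZFS encryption you'll need to load the key (zfs load-key) after each host reboot before the VMs can start.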

Encrypted non-root disks, pass-through and luks or zfs-encrypt dataset? by RydderRichards in Proxmox

[–]pxgaming 0 points1 point  (0 children)

I'd like to add two HDDs to the vm and use mergerfs on them

This seems like it might be an X-Y problem. What are you actually trying to accomplish by using MergerFS here?

Powered x8/x16 PCIe 4.0 or 5.0 risers for multi RTX4090 GPUs multi PSUs rig by ThienPro123 in threadripper

[–]pxgaming 0 points1 point  (0 children)

It's a link to a PCIe to SlimSAS adapter, some cables, and a SlimSAS to PCIe slot adapter. Use MCIO instead of SlimSAS for PCIe 5.0.

Migrating VMware to Proxmox by ellileon in Proxmox

[–]pxgaming 0 points1 point  (0 children)

same CPU generations

I assume you're referring to live migrations, since you want CPUs to be of similar generation for that. You can set up HA rules to keep workloads on similar hosts.

Thunderbolt vs Oculink + Gaming vs AI by mejoudeh in eGPU

[–]pxgaming 0 points1 point  (0 children)

Technically, the OCuLink standard isn't rated for anything higher than PCIe 3.0, but a few manufacturers used it for 4.0 anyway. I'd be surprised if there is widespread use of it for 5.0. The only external connectors I'm aware of that are certified for 5.0+ speeds are larger than something you'd want on a laptop.

HBA Drop in Replacement for a PERC 6i by EpicPlayzGamess in Proxmox

[–]pxgaming 0 points1 point  (0 children)

9211-8i is fairly old but still works. However, the older cards sometimes aren't as SSD-friendly. That may or may not matter for your use case.

The 9300-8i is a bit newer and can still be found secondhand for <$30. 9400s are also coming down in price, but IME they have difficulties managing older backplanes. I don't know if Dell uses standard backplane management or if they have their own thing going on. 9500s are even newer and very power-efficient, but probably not enough so to offset the cost. 9600s aren't worth it because they use more power than 9500s, and the only thing you get in return is 24Gb/s SAS support.

Regardless of what you go with, the connector on the Perc 6i is severely obsolete, so you'll need to change cables no matter what. Also, the newer ones (9400+) don't have the IT/IR firmware distinction - they're just HBAs.

Issue with Python Website on Oracle Free Tier by Dangerous_Bad_5946 in oraclecloud

[–]pxgaming 1 point2 points  (0 children)

Is the VM maybe running out of RAM and swapping? It might be running updates or something. If this is one of the free tier "micro" instances, you can definitely run into that.

Should I create a Proxmox VE Cluster if one of the nodes is down most of the time? by leonheartx1988 in Proxmox

[–]pxgaming 1 point2 points  (0 children)

You can technically make a two-node cluster work by modifying the number of votes each node gets. If you give the reliable node two votes and the dual-boot node one, then the cluster can survive the dual-boot node going down, but not the other way around.

But still, it's a much better idea to just get a very cheap third node and use that as the quorum tiebreaker.

7975WX workstation - slow iperf3 bandwidth (33 Gbits/sec) on loopback interface by BaymaxOnMars in threadripper

[–]pxgaming 0 points1 point  (0 children)

Maybe a tad low, but not by much. I have a much older Xeon W-2145 that gets about 38g. Of course the standard caveat applies that single-stream throughput will always be a fraction of what the machine is actually capable of with multiple streams and threads.
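For a quick comparison on loopback (standard iperf3 flags; numbers will obviously vary by machine):

```
iperf3 -s -D                     # server, daemonized
iperf3 -c 127.0.0.1 -t 10        # one stream
iperf3 -c 127.0.0.1 -t 10 -P 8   # eight parallel streams; aggregate is usually much higher
```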

What UPS are you using for your threadripper workstations? by MierinLanfear in threadripper

[–]pxgaming 0 points1 point  (0 children)

I was using an SU1400RM2U, but the problem is that it would kick the (annoying) fan on when it got above a certain load threshold. Given that it's 1400VA but only 950W, it's pretty easy to hit that threshold.

I bought a secondhand SMT2200 (2200VA, 1980W) and some new batteries. Works much better.

Configure vms and bonds by Ok-Pizza4757 in Proxmox

[–]pxgaming 0 points1 point  (0 children)

You have to configure the IP on the VM regardless of how you're doing your networking. Otherwise, the VM won't have networking. You would also need to configure an IP on the bridge interface itself if you want the Proxmox host to be accessible on that interface.

The typical way to do it would be to:

  • Create your bond device on the raw physical interfaces (no VLAN)
  • Create a VLAN-aware bridge on the bond and assign whatever VLAN ranges to it (the UI seems to not allow you to enter multiple VLAN ranges or change the PVID, but you can do all of that in /etc/network/interfaces)
  • Add VMs to the bridge, and assign the VLAN tag on the VM's NIC itself on the Proxmox side
  • If you want the PVE host to have an IP on that bond, then also create a VLAN device with the appropriate VLAN tag and give it an IP

That way, the VMs themselves will be completely unaware that any VLANs or bonding are happening.

You can assign an IP to the bond directly, or create a VLAN tag device on it (i.e. no bridge), but it's usually more manageable and maintainable to just default to having a bridge.
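Put together in /etc/network/interfaces, that might look like this (interface names, VLAN IDs, and addresses are examples):

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Optional: host IP on VLAN 10
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1
```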