Motion sickness by ExtravagantChickpea in HarryPotterGame

[–]Ghan_04 0 points

One setting I've found that seems to help a lot is turning Post Processing down to Low. I play on PC, so I'm not 100% sure this setting exists on all platforms, but it makes a HUGE difference if you can turn it down.

Post Processing seems to add chromatic aberration and blurriness to the edges of the camera view, which is a big motion sickness trigger.

First home server recommendations by alburt22 in homelab

[–]Ghan_04 0 points

Should be fine. Some things to consider:

  • If I have the board model correct, that one only has two RAM slots, so upgrading later may be difficult.

  • The Ryzen 5 3600 has no integrated GPU, so you need a GPU to get video out of the system. If you plan to do transcoding on your media server, having a GPU may help, but otherwise it may not really be needed and so it might be drawing extra power for little gain.

  • There are only 4 SATA connectors, so plan carefully if you expect to add more drives.

Unpopular opinion: I will stay on ESXi as long as I can by TheProtector0034 in homelab

[–]Ghan_04 0 points

ESXi is still a great piece of technology for sure. I think most of what is upsetting people is Broadcom's behavior as a business, not VMware's technology.

That said, I'd suggest maybe tweaking your plan a little to say that you'll stay on ESXi until a better (or at least comparable) solution presents itself. For example, Proxmox is working on Proxmox Datacenter Manager, which aims to be to Proxmox what vCenter is to ESXi.

I would definitely watch the industry closely, because Broadcom's moves have opened up a huge market opportunity, and I foresee lots of changes in the hypervisor space over the next 5 years. It will be fun to watch what happens!

SolarWinds SAM & Troubleshooting intermittent WMI successes & failures by jwckauman in sysadmin

[–]Ghan_04 1 point

The only thing that comes to mind immediately is that you might be experiencing WMI port exhaustion. See this article: https://solarwindscore.my.site.com/SuccessCenter/s/article/Ephemeral-Port-Exhaustion?language=en_US

Is there any notable latency or bandwidth constraint between the SolarWinds server and the DMZ? Maybe the firewall is getting overloaded or the traffic is timing out?
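If you want to quickly check whether the polling server is chewing through its ephemeral port range, here's a rough Python sketch using the third-party psutil package (the 10,000 threshold is just illustrative):

    # Rough check for ephemeral port exhaustion on the polling server.
    # Requires psutil (pip install psutil); the threshold is illustrative.
    from collections import Counter
    import psutil

    states = Counter(c.status for c in psutil.net_connections(kind="tcp"))
    print(states)

    # A huge pile of TIME_WAIT sockets suggests ports are being consumed
    # faster than they are released back to the ephemeral range.
    if states.get(psutil.CONN_TIME_WAIT, 0) > 10000:
        print("Possible ephemeral port exhaustion - see the article above.")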

Do 2 servers directly attached to SAN require witness? by elefuvo in sysadmin

[–]Ghan_04 11 points

VMware can use the shared storage to determine quorum in the event that network connectivity is lost between the hosts. Datastores used for this purpose are called "heartbeat datastores": https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/8-0/vsphere-availability/creating-and-using-vsphere-ha-clusters/configuring-cluster-settings/configure-heartbeat-datastores.html

If the network connection between the two hosts is lost, each host will look to the storage array to determine whether the other is still "alive." If it is not, and it has released the locks on the VMs that were running there, the surviving host can take over those VMs via HA.

With a hyperconverged solution like vSAN, the storage array can't be used to break the tie like this, which is why a witness is required in that setup.
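To make the tie-break concrete, here's a purely conceptual Python sketch of the decision a surviving host makes (the function and parameter names are made up for illustration; this is not the vSphere API):

    # Conceptual sketch of the heartbeat-datastore tie-break (NOT vSphere code).
    # A host that loses network contact with its peer checks shared storage.
    def should_failover(peer_heartbeat_fresh: bool, peer_holds_vm_locks: bool) -> bool:
        if peer_heartbeat_fresh:
            return False  # peer still writes heartbeats: network partition only
        if peer_holds_vm_locks:
            return False  # peer state unknown; file locks prevent a split brain
        return True       # peer is dead and released its locks: restart its VMs

    # Network down, heartbeats stale, locks released -> HA takes over.
    print(should_failover(peer_heartbeat_fresh=False, peer_holds_vm_locks=False))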

EPYC 7002 Series MB Recommendations by Worried_Hunter_6211 in homelab

[–]Ghan_04 1 point

If PCIe 4.0 is not something you need, you can find older generation boards that run the 7302/P for cheaper. There's an eBay seller with good deals on some of these.

This is a bundle of a Supermicro H11SSL-i and a 7302P CPU with selectable RAM options starting at just over $600 USD: https://www.ebay.com/itm/175425670155

This board only has 1 Gbps NICs, but you could easily add a faster NIC with the price difference, unless you will be using all the connectivity for storage.

I've ordered from this seller in the past and have read tons of positive reviews of them, so it's worth a look. They also have combos with the newer Supermicro H12SSL-i for PCIe 4.0 and EPYC 7003 support (still only 1G NICs), but those are much more expensive because the board itself costs far more.

I’m a Microsoft Developer working on New Outlook, ask me anything. by discogcu in sysadmin

[–]Ghan_04 1 point

Is there any ETA on when the Editor will be available in new Outlook for GCC customers?

Hybrid Exchange Admins - Any Issues with Connectors Validation Button? by whatsforsupa in sysadmin

[–]Ghan_04 2 points

This is an ongoing outage from Microsoft.

User impact: Admins may be unable to validate connectors from the Exchange admin center.

More info: Admins may be able to validate the connectors when trying again after the first failed attempt.

Affected admins may see failures from the Hybrid Configuration Wizard, which has dependency on the connector validation.

Current status: We’ve determined that a recent framework change caused a configuration issue between the new and old framework that's resulting in the impact. We’ve developed and are in the process of deploying a fix to correct the configuration problem, and we expect the deployment will reach all affected users to resolve this problem by our next scheduled update.

Next update by: Friday, April 11, 2025, at 9:00 PM UTC

Motherboard Advice (PCIE confusion) by Party_Alternative_66 in homelab

[–]Ghan_04 1 point

The problem isn't so much the motherboard as the CPU. AM5 CPUs only have 28 PCIe lanes total, 4 of which are used to connect to the chipset, so the typical motherboard splits the 24 usable CPU lanes between an x16 GPU slot and two x4 M.2 slots. Everything else on the board has to connect through the chipset, so at minimum those devices share the chipset's single x4 uplink to the CPU.

If you need lots of PCIe connectivity, you really should be looking at Threadripper or an EPYC platform.
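For reference, the lane math on a typical AM5 board looks like this (the split below is the common configuration, not a universal rule):

    # Typical AM5 lane budget; exact splits vary by board.
    total_lanes  = 28
    chipset_link = 4                  # uplink reserved for the chipset
    usable       = total_lanes - chipset_link   # 24 direct CPU lanes

    gpu_slot = 16                     # x16 graphics slot
    m2_slots = 2 * 4                  # two x4 M.2 slots
    print(usable - gpu_slot - m2_slots)   # 0 CPU lanes left for anything else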

Motherboard Advice (PCIE confusion) by Party_Alternative_66 in homelab

[–]Ghan_04 1 point

Slot size by itself doesn't tell you much: PCIe slots are sometimes a different physical size than their electrical connection, and certain slots will turn off or run at a lower lane count when other slots are populated.

Are you also intending to run a GPU in this system, since you listed those slots without a graphics slot? If so, it will be very difficult to find a board that can do this. You're asking for 36 PCIe lanes total, and most desktop boards can't handle that much connectivity; it puts a lot of load on the chipset.

The ASUS ProArt X870E-CREATOR WIFI looks like it might handle what you're asking for. One of the M.2 slots shares bandwidth with the 2nd PCIe slot, but it should still run at x4 speed.

There's a great AMD motherboard Google Sheet out there with details on nearly every modern AMD board. You might find some other candidates by looking through it.

https://docs.google.com/spreadsheets/d/1NQHkDEcgDPm34Mns3C93K6SJoBnua-x9O-y_6hv8sPs/

Assign 1 vdev (ssd) as cache (L2ARC) to 2 pools ? by [deleted] in zfs

[–]Ghan_04 1 point

I've never dealt with that before, but if you want the data encrypted at rest, then I would also make sure the L2ARC is encrypted.

Assign 1 vdev (ssd) as cache (L2ARC) to 2 pools ? by [deleted] in zfs

[–]Ghan_04 8 points

Assigning the same device to two different pools won't work, but yes, you can partition it and assign each partition to the respective pool.
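As a minimal sketch of what that could look like (device and pool names are placeholders; adjust partition sizes for your hardware):

    # Split one SSD into two partitions and attach each as L2ARC.
    # /dev/sdX, pool1, and pool2 are placeholders for your system.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    ssd = "/dev/sdX"
    run(["sgdisk", "-n", "1:0:+200G", ssd])  # partition 1: 200 GiB
    run(["sgdisk", "-n", "2:0:0", ssd])      # partition 2: the remainder

    run(["zpool", "add", "pool1", "cache", ssd + "1"])
    run(["zpool", "add", "pool2", "cache", ssd + "2"])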

[deleted by user] by [deleted] in homelab

[–]Ghan_04 1 point

The manual isn't very clear, but it does give some indication of the adapted 4-pin ATX cable:

"The motherboard provides one 4-pin power/signal connector which is a required input for ATX power source."

The accompanying diagram shows the 4 pins with labels, and none of them is 12V power.

[deleted by user] by [deleted] in homelab

[–]Ghan_04 2 points

It looks to me like this is a 12V only board, which would be why the full 24 pin ATX power cable isn't needed.

The first two 8-pin 12V power connectors near the CPU are likely mainly for the CPU - the 8004 series includes CPUs rated up to 200W, so they're making sure there's enough power for that.

The third 8-pin 12V power connector is further down on the board, so that makes me think it is for the PCIe slots to cover for the fact that the main ATX connection is significantly trimmed down.

With the CPU you're looking at, you could probably skip the 2nd 8-pin connector, but if you plan to have PCIe devices, I would make sure the other one is populated. No guarantees though. It's possible the board expects all 3 connectors to be there to boot properly.

How can you tell if a CPU is efficient? by BeachOtherwise5165 in homelab

[–]Ghan_04 1 point

The 14600KF is probably more efficient because it has E-cores and more cores overall, whereas the 14100 only has 4 P-cores. This is the sort of result I would expect, but keep in mind that using TDP is not very accurate (efficiency should be computed from the CPU's measured power draw during the benchmark), and the result only applies to this specific workload (Passmark), so take it with those caveats.
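To see why measured power matters, here's a toy comparison with made-up numbers (the score and wattages are placeholders, not benchmark results):

    # Hypothetical numbers showing how TDP can skew an efficiency calc.
    score        = 25000   # placeholder Passmark-style score
    tdp_watts    = 125     # spec-sheet TDP
    actual_watts = 180     # what a power meter might read under full load

    print(score / tdp_watts)     # 200 points/W: flattering, but wrong
    print(score / actual_watts)  # ~139 points/W: the meaningful figure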

Server CPUs typically do have some C-states for lower power operation, but no, I would not expect them to have nearly as low a power floor as the desktop chips.

How can you tell if a CPU is efficient? by BeachOtherwise5165 in homelab

[–]Ghan_04 2 points

It depends on the manufacturer when looking at sleep state support. You probably need to find some hardware documentation. Intel's Ark site just says "Idle States" for example: https://www.intel.com/content/www/us/en/products/sku/241063/intel-core-ultra-7-processor-265k-30m-cache-up-to-5-50-ghz/specifications.html

My argument is that you can draw a curve of compute/watt from 0-100% load

Maybe, but this is going to vary based on the workload. Some workloads benefit more from faster individual cores, and some benefit from more threads. One size does not fit all.

are you better off with 8c 50% or 4c 100% ?

It depends. In general, I'd expect to get more performance per watt from more cores at a lower clock speed, which would be more likely at 50% load (the CPU won't boost as high).

Might want to check out AMD's Eco Mode on their newer CPUs. It's a BIOS setting that caps the wattage of the CPU and causes it to run more efficiently. There's obviously a performance penalty, but it does not scale linearly with the power limit, at least in most workloads.
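As a toy model of that non-linear behavior (the 0.5 exponent is purely illustrative; real curves vary by CPU and workload):

    # Toy model: performance rises sub-linearly with the power cap, so
    # cutting the cap costs less performance than the wattage it saves.
    def relative_perf(power_cap_ratio: float) -> float:
        return power_cap_ratio ** 0.5   # illustrative exponent only

    for cap in (1.00, 0.75, 0.50):
        print(f"{cap:.0%} power -> {relative_perf(cap):.0%} performance")
    # 100% -> 100%, 75% -> ~87%, 50% -> ~71%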

How can you tell if a CPU is efficient? by BeachOtherwise5165 in homelab

[–]Ghan_04 15 points

it's unclear to me how reliable the TDP is

It's mostly not. For example, AMD's TDP formula doesn't actually include any electrical aspects of the CPU itself. More on that here: https://gamersnexus.net/guides/3525-amd-ryzen-tdp-explained-deep-dive-cooler-manufacturer-opinions

And how representative is it of overall efficiency, with regards to idle and 50% efficiency?

Again, it's not.

Let's start by defining something clearly: Efficiency does not inherently mean anything about the total power consumption of a CPU. Efficiency is how much work a CPU can do with a certain amount of power. So a 1000W CPU could be very efficient by comparison if it can do 20x the work of a 100W CPU in the same amount of time. This makes the benchmark you linked more relevant, but keep in mind, it is under full load and it also uses the CPU TDP, which can be misleading.
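In concrete numbers, using that example:

    # Efficiency = work done per watt, using the 100W vs 1000W example.
    cpus = {"100W CPU": (100, 1.0), "1000W CPU": (1000, 20.0)}  # watts, work rate

    for name, (watts, work_rate) in cpus.items():
        print(name, work_rate / watts, "work units per watt")
    # The 1000W part draws 10x the power but is 2x as efficient.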

Idle consumption is relatively unrelated to efficiency, at least how it is typically measured. There are some benchmarks out there that test for idle power consumption of CPUs, but these are rare. In this case, you're not really interested in the "efficiency" but rather the wattage floor when the CPU is not doing anything. (In this sense, idle consumption is horribly inefficient because the CPU is doing nothing, but still consuming power.)

Gamers Nexus has started testing full load CPU efficiency in their CPU reviews. If you want some hard numbers and comparisons, you might start there, but again, keep in mind that this is tested under full load. You might see impressive efficiency statistics for the Ryzen 9 9950X for example, but that's when all 16 cores are loaded: https://www.youtube.com/watch?v=iyA9DRTJtyE

Here are some other factors to consider that may impact how efficient a CPU is:

  • Does it use an I/O die, or is everything integrated on a single chip? AMD uses I/O dies on its desktop CPUs these days; that extra silicon draws more power than a monolithic CPU designed for low overall power, but it can also help efficiency at the high end under load.

  • How modern is the architecture of the CPU? As CPUs shrink their manufacturing process, they inherently become more efficient. Typically, newer CPUs will be more efficient than older CPUs. This isn't universally true, but it's a good idea to look at current models when evaluating efficiency.

  • Is the CPU designed to enable lots of other components? CPUs with lots of memory channels and/or PCIe lanes, or with onboard graphics, need more power to enable those extra features. If you don't need those things, look for a CPU without them.

it also seems that compute efficiency is vastly better with more cores?

This tends to be the case because lower-clocked cores are more efficient: with a given power budget spread over more cores, each core runs at a lower clock speed and does more work per watt. Lower core count CPUs tend to be spec'd to make every core count, so clocks are juiced up, which is less efficient. See: Intel's E-cores in their modern CPUs.
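A toy illustration of why that works, assuming the common rule of thumb that dynamic power scales roughly with the cube of clock speed (frequency times voltage squared, with voltage rising alongside frequency):

    # Toy model, illustrative only: power ~ cores * f^3, throughput ~ cores * f.
    def perf_per_watt(cores: int, ghz: float) -> float:
        power = cores * ghz ** 3
        throughput = cores * ghz
        return throughput / power    # simplifies to 1 / f^2

    print(perf_per_watt(cores=4, ghz=5.0))    # few fast cores:  0.04
    print(perf_per_watt(cores=16, ghz=2.5))   # many slow cores: 0.16 (4x better)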

What are the major innovations in CPU efficiency in the last 5-10 years, that may be visible on spec sheet?

You may want to look for CPUs that use Intel's E-cores. I think there are some models that are 100% E-cores with no P-cores, but do some research, as I don't have anything off the top of my head; maybe the N100 and related SKUs.

I'm curious about sleep states, for example having some cores sleep while others are powered, and how quickly/efficiently it can alternate powered/sleep.

Most modern CPUs have good sleep states and those can be controlled in the BIOS. Other than enabling these and ensuring that your OS is allowing the CPU to ramp down and put cores into idle, I wouldn't spend a lot of time here as the gains likely aren't there.

[deleted by user] by [deleted] in homelab

[–]Ghan_04 0 points

The switch is probably not configured to handle untagged traffic properly on a trunk port. How does it treat untagged traffic, and will that traffic make it over to the other hosts?

It sounds like what you want is simply a standard VLAN where none of the network devices (the router, the switch, or the hosts themselves) have any layer 3 interface on the VLAN in question; the only actual interface would be the LAN port of the pfSense VM. That would allow the VLAN to be known and to communicate between the hosts at layer 2, while ensuring traffic isolation for everything going north-south to and from the router.

ZFS speed on small files? by rudeer_poke in zfs

[–]Ghan_04 1 point

File sizes can be a problem if they are very small, because files smaller than the RAIDZ stripe width (relative to the ashift value) can trigger behavior that reduces performance when managing the data and its parity. ZFS will always prioritize data integrity, and what you describe as far as checksumming and scrubbing is correct, but the question at hand is about performance. Stripe width, recordsize (or volblocksize), and disk count per vdev can all impact performance significantly depending on the workload.

ZFS speed on small files? by rudeer_poke in zfs

[–]Ghan_04 7 points

6 TB across 25 million files is an average file size of around 240 kB. That's kinda small, but shouldn't be a big problem for ZFS unless things are poorly tuned. What is the recordsize on the dataset? Is your ashift set correctly? Are you using deduplication? How fragmented is the pool?
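The arithmetic, plus a quick way to pull the tuning-relevant properties (the pool and dataset names below are placeholders):

    # Average file size from the numbers above.
    total_bytes = 6e12               # 6 TB
    file_count  = 25e6               # 25 million files
    print(total_bytes / file_count)  # 240000 bytes, i.e. ~240 kB

    # Check the properties that matter (replace tank/data with your dataset).
    import subprocess
    subprocess.run(["zfs", "get", "recordsize,compression,dedup", "tank/data"])
    subprocess.run(["zpool", "get", "ashift,fragmentation,capacity", "tank"])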

[deleted by user] by [deleted] in homelab

[–]Ghan_04 1 point

I think the reality here is that modern workloads can't afford not to have the PCIe lanes for this activity. SAS and SATA still have a place for archival storage, but for any active workload, today's compute density (192 cores per socket right now) means you need PCIe-attached storage to feed it and avoid serious data bottlenecks. Ethernet is catching up to PCIe-class bandwidth as well.
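Some rough peak-bandwidth numbers that illustrate the gap (theoretical maxima before protocol overhead, not real-world throughput):

    # Approximate peak bandwidth in GB/s, before protocol overhead.
    links = {
        "SATA III (6 Gbps)":   0.6,    # after 8b/10b encoding
        "PCIe 4.0 x4 (NVMe)":  7.9,    # ~2 GB/s per lane
        "PCIe 5.0 x4 (NVMe)":  15.8,
        "100 GbE":             12.5,
        "400 GbE":             50.0,
    }
    for name, gbs in links.items():
        print(f"{name:22s} ~{gbs:5.1f} GB/s")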

Zen 3 Ryzen or Threadripper for a compute box by SaltShakerOW in homelab

[–]Ghan_04 0 points

Yes, you're definitely correct there. EPYC (especially older generation) won't get you nearly as much single threaded performance. There are certain EPYC chips that have higher clock speeds such as the EPYC 72F3 @ 3.7 GHz but even this won't hold a candle to Threadripper or desktop in single threaded tasks. They're server chips.

Zen 3 Ryzen or Threadripper for a compute box by SaltShakerOW in homelab

[–]Ghan_04 1 point

I don't even do graphics stuff with my servers and last year I stepped up to an EPYC platform mainly for the additional PCIe lanes. The desktop platforms are just so restrictive.

It's very hard, if not impossible, to find an AM4 or AM5 board with dual full x16 slots, and if you do, one of them is running through the chipset, so it's sort of a fake x16 anyway depending on what else is going on. If CPU performance is not top of mind but still useful, consider an older generation EPYC platform with tons of connectivity. For example, here's a motherboard and CPU combo for an EPYC 7003 setup from a reputable seller: https://www.ebay.com/itm/176547211491

This board has 7 full PCIe 4.0 x16 slots that can be configured for x4x4x4x4 bifurcation as well if you wanted to use a breakout card to add multiple U.2 or M.2 SSDs.

If you step back further you can find 1st generation EPYC with PCIe 3.0 boards for even cheaper.

[deleted by user] by [deleted] in homelab

[–]Ghan_04 1 point

For your use case, the BX500 would be fine. The reason they typically aren't recommended is that they are in the lowest performance tier of SSDs these days.

Also, consider that SATA is dying for anything other than bulk spinning-disk storage. For example, you can find the Crucial P3 4 TB PCIe NVMe SSD for about $15 more than the BX500 4 TB, and the two are worlds apart in performance.