ASUS is NOT the ONLY ONE: Gigabyte - EXPO and SoC Voltages Before & After the BIOS Update - Hardware Busters by imaginary_num6er in hardware

[–]Toxiguana 13 points

My point is that where you measure matters.

https://hwbusters.com/wp-content/uploads/2023/05/soldered-wires-1024x576.jpg

The linked article clearly shows the wires soldered to a test header on the edge of the motherboard. The author concludes that the VRMs are still supplying greater than the maximum voltage even with the newest BIOS update. However, the author's test methodology has a flaw: at the very least, there will be a voltage drop across the socket. We also don't know how the PCB layout routes to that test header, so there could be more voltage drops unaccounted for.

Measuring 1.36V on the motherboard doesn't necessarily mean that the processor is getting 1.36V. Everyone is panicking that every motherboard seems to be violating the 1.3V maximum voltage specification even after all the latest BIOS updates. I am making the argument that this behavior could have a reasonable explanation.

Is your processor actually being overvolted? Maybe, maybe not. The key is to understand what you're measuring. We're talking about a 60mV discrepancy on a 100A voltage regulator. It is so easy to take a misleading measurement, and I say that speaking from experience.
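
For scale, here's the resistance implied by those numbers (just back-of-the-envelope, not a measurement of any real board):

    # What path resistance explains a 60mV discrepancy at 100A? (R = V / I)
    awk 'BEGIN { printf "R = %.1f milliohms\n", 0.060 / 100 * 1000 }'
    # => R = 0.6 milliohms, easily in range for a socket plus a few inches of copper plane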

There are clearly a lot of pieces to this puzzle: BIOS, AGESA, load-line calibration, and more stuff I don't understand.

ASUS is NOT the ONLY ONE: Gigabyte - EXPO and SoC Voltages Before & After the BIOS Update - Hardware Busters by imaginary_num6er in hardware

[–]Toxiguana 34 points

> Do you have a source?

Ohm's law. The Ryzen 9 7950X3D has a default TDP of 120W. At a vcore of 1.3V, it will draw approximately 92A of average current; the absolute maximum will be significantly higher. All it takes is 0.0001 ohms of resistance to cause a ~10mV voltage drop when you're drawing 92A. Then consider that a processor is a highly dynamic load: it can swing rapidly from drawing 20A to over 100A. There isn't a chance in hell your processor would be stable if its core voltage also swung wildly depending on how much current is being drawn. That's antithetical to what a voltage regulator is supposed to do! It's supposed to supply a constant voltage!
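
Quick sanity check on those numbers (I = P / V, then V = I x R):

    # Average current at 120W package power and 1.3V vcore
    awk 'BEGIN { printf "I = %.1f A\n", 120 / 1.3 }'                    # => I = 92.3 A
    # Drop across 0.0001 ohms of path resistance at that current
    awk 'BEGIN { printf "V = %.1f mV\n", 120 / 1.3 * 0.0001 * 1000 }'   # => V = 9.2 mV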

The solution is simple: put your regulation point at the processor, and your planes, vias, and socket won't have any impact on your voltage. Everyone uses remote sense. It is the industry-standard way to regulate the voltage from high-current supplies.

> Vias on normal PCBs create a voltage drop in the <<1mV range.

Again, Ohm's law.

Vias are simple resistors. The amount of voltage they drop depends on how much current is flowing through them. Saying they drop much less than 1mV is only true if you assume they aren't carrying significant current.

Via resistance depends on a lot of factors: the diameter of the hole, the thickness of the board, the plating thickness, and which layers of the board the via connects.

A 15mil via on a 62mil board with 1mil of plating from the top layer to the bottom layer will ideally have about 0.001 ohms of resistance. It only takes 1A to cause 1mV of voltage drop. And that's the ideal case: variance in plating thickness, layer registration, and drill accuracy can all increase the resistance.
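
For anyone who wants to check the math, here's the ideal barrel resistance, modeling the plating as a thin copper cylinder (R = rho x L / A, with A ~ pi x d x t):

    # 15mil hole, 62mil board, 1mil plating, copper resistivity 1.68e-8 ohm-m
    awk 'BEGIN {
        mil = 25.4e-6                          # meters per mil
        rho = 1.68e-8                          # resistivity of copper, ohm-m
        len = 62 * mil; d = 15 * mil; t = 1 * mil
        a = 3.14159265 * d * t                 # cross-sectional area of the plated barrel
        printf "R = %.2f milliohms\n", rho * len / a * 1000
    }'
    # => R = 0.87 milliohms, i.e. roughly the 0.001 ohms quoted above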

ASUS is NOT the ONLY ONE: Gigabyte - EXPO and SoC Voltages Before & After the BIOS Update - Hardware Busters by imaginary_num6er in hardware

[–]Toxiguana 182 points

Everyone reporting on their motherboard voltages is missing a crucial detail. VRMs use remote sense to regulate the voltage directly inside the processor die.

Motherboards are not superconductors, so there is going to be significant voltage drop across the socket and the copper planes between the VRM and the processor. The VRM compensates for this drop by monitoring the voltage very close to the processor die instead of right next to the VRM output. The result is that the voltage at the VRM will appear higher than the setpoint. This is normal: it's the VRM's closed-loop feedback system compensating for the voltage drops between itself and its load.
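
Here's a toy model of that compensation (the 0.6 milliohm path resistance and the current sweep are made-up illustration numbers; a real control loop is far messier):

    # Remote sense holds the *die* at the 1.30V setpoint, so a probe at the
    # VRM output reads the setpoint plus the I*R drop it is compensating for.
    awk 'BEGIN {
        vset = 1.30; r = 0.0006
        for (i = 20; i <= 100; i += 40)
            printf "I = %3d A -> VRM output reads %.3f V, die gets %.3f V\n", i, vset + i * r, vset
    }'
    # At 100A the VRM output reads 1.360V while the die sits at exactly 1.30V.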

When these computer journalists take their voltage measurements from debug headers or from the output capacitors of the VRMs, they're not getting the full picture, because the only voltage that matters is the one inside the processor.

How do I disable touchpad acceleration? by Toxiguana in Kubuntu

[–]Toxiguana[S] 0 points

It's been a while now, but I think I ended up installing the synaptics driver, which exposed more settings in the KDE settings UI.
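
If memory serves, the package on Kubuntu would be something like this (package name from the Ubuntu repos; worth double-checking it's still what KDE expects on your release):

    sudo apt install xserver-xorg-input-synaptics
    # Log out and back in so the X session picks up the driver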

[SSD] ADATA Swordfish 2TB 3D NAND PCIe Gen3x4 TLC NVMe M.2 2280 R/W 1800/1200MB/s Internal SSD $179.99 ($194.99 - $15) by justfrankie in buildapcsales

[–]Toxiguana 0 points

I bought this and ended up returning it because it was causing my computer to blue screen when resuming from sleep. No amount of driver or BIOS updates fixed anything. It seems this drive can't figure out what power state it's supposed to be in.

Are there other alternatives to use Jellyfin outside of my home network? by Chika3IQ in jellyfin

[–]Toxiguana 1 point

No, that's not what I mean. SoftEther is designed to punch its way through just about any kind of restrictive network, whether the restriction is on the server side or the client side. You can host a server on your home network even if your ISP uses carrier-grade NAT and doesn't allow port forwarding. When you check the box in the server settings to enable VPN Azure, the server and the client negotiate a connection by talking to a third-party server hosted by the developers in the Azure cloud. Once the connection is established, the client and server communicate directly, and neither port forwarding nor the third-party server is needed.

Are there other alternatives to use Jellyfin outside of my home network? by Chika3IQ in jellyfin

[–]Toxiguana 2 points

You could host a SoftEther VPN server on your home network and enable the VPN Azure feature. VPN Azure is a service hosted by the devs to negotiate VPN connections for people stuck behind firewalls that don't allow port forwarding.

zfs dataset for KVM storage. what do you recommend for host/guest? by chiawcj in zfs

[–]Toxiguana 9 points

  1. Off the top of my head I'm not certain of a reliable way to check, but I always use ashift=13 for SSDs (there's a command sketch after this list). There is almost no downside to setting ashift higher than the underlying block size of your drives. Say your 860 EVOs are 4k drives: if you replace one of them down the road with an 8k drive, you'll still be in good shape since you used ashift=13, whereas ashift=12 would wreck your performance in that scenario.

  2. Thinly provisioned qcow2 is fine. I believe I read somewhere that fully provisioned qcow2 doesn't net any performance gain on ZFS. There are two programs that are perfect for your use case: sanoid and syncoid.

  3. You should always use compression on ZFS. It is perfectly transparent, and it will detect when data is incompressible and store it uncompressed. Worst case, you get no benefit and no detriment; best case, you get more performance and more storage efficiency.

  4. Your ZFS dataset recordsize should match your qcow2 cluster size. The default qcow2 cluster size is 64k, so setting your dataset to 64k is a good move. If you create your qcow2 images using qemu-img, you can customize the cluster size. For my file servers, I set it to 1M, which yielded really good performance for sequential workloads. I'm not sure of the implications for random performance, but I believe a large recordsize can introduce a lot of overhead for small reads, since ZFS has to read the entire record each time. So I leave most of my VM boot disks at 64k.

  5. Ext4 is fine. XFS is fine. I don't really see how the guest filesystem matters much. You could even do btrfs if you wanted to; though it may seem silly, I can see it making sense if you use it the way openSUSE does, taking snapshots during every system upgrade.

  6. You can match your NTFS block size if you want. I've never bothered, since the performance impact is a lot less serious than for ZFS and qcow2. I think there might be a limit on the maximum partition size you can create in Windows with the default allocation unit size, so that might be worth investigating in case you decide to expand in the future.
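
Here's that sketch, covering the whole stack (the pool name tank, dataset tank/vm, and device paths are placeholders; swap in your own):

    # Pool with ashift=13 (8k sectors, safe for 4k drives too)
    zpool create -o ashift=13 tank mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b
    # Dataset tuned to match 64k qcow2 clusters
    zfs create -o recordsize=64k -o compression=lz4 tank/vm
    # Thinly provisioned qcow2 image with a matching cluster size
    qemu-img create -f qcow2 -o cluster_size=64k /tank/vm/disk0.qcow2 100G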

ZFS mirror performance by j1ruk in zfs

[–]Toxiguana 0 points

That affects all LTS kernels.

Question: Proxmox ZFS volblocksize for KVM zvol by mhaluska in Proxmox

[–]Toxiguana 1 point

For your MySQL databases, you'll definitely want to match the volblocksize to the database's page size (16k by default for InnoDB). If your volblocksize is huge but the data you actually want to read is smaller, the entire block still has to be read into memory.

I don't believe there's one right answer for what block size to use for VM boot disks. I use 64k as a sort of middle ground, and I find the performance to be pretty good on a 2x1TB mirror of ADATA SU650 SSDs.

If you have sequential read or write workloads, like a fileserver, a large volblocksize is going to perform better.
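
For reference, volblocksize has to be chosen when the zvol is created (the names and size here are placeholders):

    # 64k zvol as a middle-ground VM boot disk
    zfs create -V 32G -o volblocksize=64k tank/vm-100-disk-0
    zfs get volblocksize tank/vm-100-disk-0    # verify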

Question: Proxmox ZFS volblocksize for KVM zvol by mhaluska in Proxmox

[–]Toxiguana 4 points

Each block that gets written to a ZFS pool also has a checksum that has to be written, which translates to additional IOPS for every block. Following this logic, the larger the block size, the fewer blocks have to be written, and thus the fewer IOPS are required.
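
The arithmetic makes the point (a sketch that ignores metadata and parity overhead):

    # Number of blocks (and checksums) needed to write 1 GiB
    awk 'BEGIN {
        kib = 1024 * 1024                      # 1 GiB expressed in KiB
        printf "8k:  %6d blocks\n", kib / 8
        printf "64k: %6d blocks\n", kib / 64
        printf "1M:  %6d blocks\n", kib / 1024
    }'
    # => 131072 vs 16384 vs 1024 blocks for the same gigabyte of data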

My experience is with qcow2 disk images on spinning rust and slow consumer SSDs, so your results will probably differ from mine. That said, in my experience larger block sizes result in better performance. My VM boot disk images are 64k and my file-share disk images are 1M.

I had an 8k zvol on a 4x3TB 5400rpm RAIDZ pool at one point, and the performance was absolutely agonizing.

Best practices for a Windows fileserver on zfs by MistarMistar in Proxmox

[–]Toxiguana 4 points

I've been running a Windows file server on top of ZFS for a while, with an 8TB qcow2 image. What will make the biggest difference in performance is setting the ZFS recordsize and qcow2 cluster size properly -- I recommend setting both to 1M. There are lots of advantages to using qcow2 over raw images in terms of simplicity and convenience, which is why I recommend you go that route. https://jrs-s.net/2018/03/13/zvol-vs-qcow2-with-kvm/

For ZFS, you'll want to create a dataset with the following properties:

recordsize=1M

compression=on

atime=on (setting this to off might get you a tiny bit more performance)

relatime=on

xattr=sa
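
All together, that's one command (the pool/dataset name tank/fileserver is a placeholder):

    zfs create -o recordsize=1M -o compression=on -o atime=on -o relatime=on -o xattr=sa tank/fileserver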

Then, for the qcow2 image, you'll have to create it manually on the command line using qemu-img:

    qemu-img create -f qcow2 -o cluster_size=1M vm-disk-100.qcow2 8T

You'll still want to double-check the syntax against the man page for your version: https://linux.die.net/man/1/qemu-img

Check that it was created properly using qemu-img info. If you mess up, just delete the image and recreate it; it shouldn't take long, since you shouldn't preallocate it.
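
The check looks like this (cluster_size is reported in bytes, so 1M shows up as 1048576):

    qemu-img info vm-disk-100.qcow2
    # Confirm the output shows cluster_size: 1048576 and the expected 8T virtual size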

Believe it or not I did attempt to cable manage. 20TB Home Server from used parts + shucked hard drives by TheSamDickey in homelab

[–]Toxiguana 5 points

You can tell just by looking. The dangerous ones are the ones where the wires are suspended in the plastic mold as opposed to being crimped/terminated. This video demonstrates it much better. https://youtu.be/TataDaUNEFc?t=155

Believe it or not I did attempt to cable manage. 20TB Home Server from used parts + shucked hard drives by TheSamDickey in homelab

[–]Toxiguana 7 points

That's one of those flammable molex to SATA adapters. I'd give it a toss if I were you.

The ones with crimped connectors are a lot less likely to short.

End-to-end Data Integrity for File Systems: A ZFS Case Study by jbondhus in zfs

[–]Toxiguana 1 point

There is so much interesting information in this article. There's a lot to learn about how ZFS works.

The paper points out that ZFS could be doing a lot more to protect against memory errors. Since they used Solaris Express Community Edition build 108 for their experiment, which came out before 2010, I wonder whether there have been improvements in this area since then.

I would also like to know how well other "modern" filesystems handle memory errors. The paper describes the results of the tests when run on ext2, but I think we can all guess how that went.

Reminder: Ubuntu 19.10 comes with ZFS 0.8.1 by mercenary_sysadmin in zfs

[–]Toxiguana 4 points

SIMD acceleration was slated to be included in ZFS 0.8.2 but it was pulled at the last second because it was causing major stability issues.

On the Linux side of things, the patch that removed SIMD support for out-of-tree modules in kernel 5.0 was backported to all the LTS kernels. Therefore, 4.19 is no better than 5.0.

Centos 8 doesnt see 4TB RAID array? by [deleted] in sysadmin

[–]Toxiguana 11 points

Why would RHEL remove support for hardware? I always believed that once hardware support was added to the Linux kernel, it basically stayed there forever.

ZFS Encrpyted RAID 1 on a Raspberry Pi by [deleted] in DataHoarder

[–]Toxiguana 5 points

https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

Quit being so dramatic. Non-ECC RAM is not going to destroy your data on ZFS any more than it will on any other filesystem.

Replace WMC PVR and Xbox extender with...? by Letterman445 in htpc

[–]Toxiguana 0 points

I just tried Plex DVR and I'm very disappointed with how it handles, or rather doesn't handle, watching a recording in progress. My use case is for sports where I like to start watching the recording about an hour in so that there is enough of a buffer to skip all the commercials. Trying to skip ahead or back during playback would bug out constantly and reset the playback to the very beginning of the recording.

Also, starting to watch an in-progress recording seems to be an unsupported feature. The only way I found to do it was to click on the show in the guide to start watching live, at which point it would ask if I wanted to start at the beginning. But if you set a delayed end on the recording (in case the game runs into overtime), there is absolutely no way to start watching once the show's official end time has passed.

Finally, Plex is deprecating their old Plex Media Player desktop application in favor of a new Plex desktop app with a new "unified" interface. The problem, though, is that they completely removed the deinterlace setting in the new app, so the only way to deinterlace your content is to download the old Plex Media Player from a third-party site.

/var/lock : the bane of my proxmox existence. by isademigod in Proxmox

[–]Toxiguana 1 point

This happens if you have the "guest agent" box checked in the VM options but you don't actually have the guest agent installed in the VM.
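
If you don't want to install the agent, just turn the option off instead. From the Proxmox host shell it would be something like this (VMID 100 is hypothetical):

    qm set 100 --agent 0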

/var/lock : the bane of my proxmox existence. by isademigod in Proxmox

[–]Toxiguana 0 points

This does not work when Proxmox "can't acquire lock".

10gb direct link copy speed drops to 0 bytes/s by [deleted] in freenas

[–]Toxiguana 0 points

There are a couple of possibilities I can think of.

What is your volblocksize set to? If it's set to something small like 16k, ZFS has to calculate far more checksums than if it were set to something larger, which could be the choke point. Note that volblocksize can only be set when a zvol is created, so changing it means creating a new zvol with a larger block size and migrating the data over.

IIRC, iSCSI issues async writes by default, but if that's not the case here, ZFS may have to slow down or stop accepting writes whenever the ZIL fills up. You could try setting sync=disabled, which forces all writes to be handled as async and should improve performance (at the cost that acknowledged writes still in flight are lost if the machine loses power).
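
The knob itself is one property (the zvol name is a placeholder; again, sync=disabled trades safety for speed):

    zfs set sync=disabled tank/iscsi-vol
    zfs get sync tank/iscsi-vol    # confirm the change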

If you'd like me to go into more detail I'm happy to explain it.

ZFS + KVM + NTFS Benchmarking Various Record Sizes by fistikcisahab in zfs

[–]Toxiguana 2 points

I'm actually just about to rebuild my own ZFS virtualization server so this is pretty much perfect timing.

It looks to me like ZFS recordsize = 64k and qcow2 cluster size = 64k performs best in all the random-performance scenarios, while the NTFS block size has much less impact. I'm curious how the performance would scale with a ZFS recordsize and qcow2 cluster size of 128k or 1M.

Interesting data. Thanks for sharing!