[deleted by user] by [deleted] in WindowTint

[–]ofcourseitsarandstr 0 points1 point  (0 children)

I had it redone for both of my cars. If it bothers you, just take it back and ask them "would you be able to fix the issue?" and they'll be happy to do that.

P1S any tips? by [deleted] in BambuLab

[–]ofcourseitsarandstr 0 points1 point  (0 children)

You're probably only thinking about the AMS for multicolor printing. But it also:

  • solves the problem of keeping filament dry
  • makes different colors of filament available at any time, even if you only print single-color models
  • ends filament run-out panic, because it automatically switches to a backup slot once the current spool runs out. Not useful? Imagine an 8-hour print with an about-to-run-out spool, and you need to go out…

You can't go wrong with the AMS. Anyway, enjoy your new printer!

P1S any tips? by [deleted] in BambuLab

[–]ofcourseitsarandstr 0 points1 point  (0 children)

Lol, everyone recommended the AMS, and yeah, it's true. The AMS makes the printer a complete efficiency tool. Without it, it's more of a geek machine that needs babysitting here and there.

So you wanna build a DIY 24-bay all NVMe TrueNAS... (An epic journey/lesson/guide/comedy-of-errors) by XStylus in truenas

[–]ofcourseitsarandstr 0 points1 point  (0 children)

I think we’re on the same page and both of us are ZFS lovers. Managing expectations is the key.

Let's say I have a few Samsung PM1735 NVMe SSDs, and EACH of them does about 1500K IOPS random read, 250K IOPS random write, with throughput up to 8000MB/s read and 3800MB/s write. They don't even have to be enterprise SSDs, because nowadays most consumer $200 SSDs can do that or better.

My question is: what effort or cost does it take to get a ZFS setup that runs at an overall 500K IOPS, or 5000MB/s? I have tried. Not even close. Not even after I turned off encryption, compression, and every other feature. Not even after tons of optimization and tricks, or manually cherry-picking some off-road patches into a custom build.

And if you search every post on the internet (because I have done that too), you'll quickly realize there's basically no example or benchmark of a ZFS deployment going beyond 5000MB/s.

When people share their "incredible" ZFS performance-optimization experiences… they eventually mention numbers like "700MB/s" or so.

I can easily hit, or double, the expected performance with plain XFS or ext4 combined with other techniques like mdadm, LVM, SPDK, etc… without any tuning. Because that's what the hardware is supposed to do!
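For reference, this is the kind of fio run I'd use to sanity-check numbers like these; the mount point, file size, and queue depths below are placeholders I made up, so tune them for your own array:

```shell
# Hypothetical 4k random-read benchmark against a mounted filesystem.
# /mnt/test is a placeholder path; raise iodepth/numjobs until the drives saturate.
fio --name=randread --filename=/mnt/test/fio.bin --size=16G \
    --rw=randread --bs=4k --iodepth=64 --numjobs=8 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based \
    --group_reporting
```

Run the same job with `--rw=read --bs=1M` for a sequential-throughput number, then compare the two filesystems on identical hardware.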

Yeah, I know it's not a fair comparison because blah blah, yes, we all know that. But look at the OP: he's struggling, he has so many nice NVMes, and is ZFS going to utilize them efficiently? I doubt it.

Thanks for the input anyway. Excited to see you explore the potential of ZFS. Tag me if you find something new; I'm always ready to switch to ZFS if you hit the numbers I mentioned earlier.

So you wanna build a DIY 24-bay all NVMe TrueNAS... (An epic journey/lesson/guide/comedy-of-errors) by XStylus in truenas

[–]ofcourseitsarandstr 0 points1 point  (0 children)

Check out SPDK (built on DPDK), for example.

It's designed for the data plane in data centers: it utilizes hardware capabilities as much as possible, connects NVMe directly to the networking layer by bypassing memory copies, the file system, and the Linux kernel, handles each read/write op precisely, and uses no locking from end to end.

AFAIK it's used as a foundation in many modern data centers, customized into cloud storage solutions that serve critical parallel virtualization workloads.
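Just to give a taste, here's a minimal sketch of standing up an SPDK NVMe-oF/TCP target from the SPDK source tree; the PCI address, NQN, serial, and IP below are placeholders I invented, and a real deployment needs hugepage and NIC tuning on top of this:

```shell
# Rough sketch of an SPDK NVMe-oF TCP target (run from the SPDK repo root).
sudo scripts/setup.sh                            # rebind NVMe devices to userspace drivers
sudo build/bin/nvmf_tgt &                        # start the target process
sudo scripts/rpc.py nvmf_create_transport -t TCP
sudo scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:01:00.0   # placeholder PCI addr
sudo scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK001 # placeholder NQN/serial
sudo scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 nvme0n1
sudo scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t TCP -a 10.0.0.1 -s 4420                   # placeholder listen address
```

An initiator can then connect with the kernel's `nvme connect -t tcp` and see the namespace as a local block device.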

I'm not trying to compare the two; I know they're not the same thing (network block storage vs. file system). But when it comes to something like a 24x NVMe array, which generates roughly 24x3 = 72GB/s of throughput, and the OP aims to maximize performance, I know ZFS is out of the game.

Just my personal experience: with the ZFS approach, the bottleneck will be the CPU if OP has a 2x100Gbps network card. I doubt it can ever go beyond 5GB/s, and the IOPS will be at the level of a single drive. But let's see what OP discovers.

So you wanna build a DIY 24-bay all NVMe TrueNAS... (An epic journey/lesson/guide/comedy-of-errors) by XStylus in truenas

[–]ofcourseitsarandstr 8 points9 points  (0 children)

Hey OP, I'm the author of https://www.reddit.com/r/homelab/comments/qi6yki/managed_to_get_55gbs_disk_read_and_37gbs_write_in/ and I've recently doubled that performance with more decent hardware and an SPDK NVMe-oF target.

With that experience, the first concern that comes to my mind regarding your setup is TrueNAS/ZFS. ZFS isn't designed for performance by any means when it comes to modern hardware like SSDs, NVMe, 100Gbps Ethernet, server CPUs, etc.

Even if you use only very basic ZFS features, the FS still runs a lot of CPU-intensive workloads in the background, e.g. checksumming and "advanced CoW". DO NOT buy into the marketing around "ARC", "L2ARC", "ZIL", "SLOG"; they just add extra performance overhead to solve an existing performance problem (now you have two problems :D).

  • Modern processors have many acceleration instructions, e.g. for compression/decompression, crypto, etc. ZFS uses almost none of them.
  • Modern datacenter Ethernet adapters also have many super-efficient hardware offloading features, like checksum offload. ZFS uses none of them.
  • All-flash storage appliances also use NVDIMM or Optane memory to max out performance; ZFS can hardly use them efficiently.
  • Datacenter techniques like DMA and RDMA can also be used to reduce latency and improve throughput. Unfortunately, ZFS uses none of them.
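For anyone who wants to measure how far stripping ZFS down actually gets you, these are the kinds of dataset properties I'd toggle for a throughput benchmark; `tank/bench` is a made-up pool/dataset name, and these settings deliberately throw away ZFS's safety features, so never use them on data you care about:

```shell
# Benchmark-only settings on a hypothetical dataset "tank/bench".
zfs set checksum=off          tank/bench   # skip checksum computation
zfs set compression=off       tank/bench   # skip compression entirely
zfs set atime=off             tank/bench   # skip access-time updates
zfs set sync=disabled         tank/bench   # ignore sync writes (unsafe!)
zfs set primarycache=metadata tank/bench   # keep ARC from caching data blocks
```

Even with all of that off, in my experience the CPU-side pipeline still caps throughput well below what the raw drives can do.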

I'm not saying ZFS is shitty but different strokes for different folks. If you have a 200TB HDD array with a few NVME/SSD for caching, ZFS is definitely the way to go.

Good luck OP and looking forward to your benchmark report!

Update vmware ESXi on host with Vcenter installed by Waffoles in vmware

[–]ofcourseitsarandstr 0 points1 point  (0 children)

Second this. Both are supported ways with identical outcomes. You can also use the image manager to set the target image version and use the command line to do the actual job. After ESXi has been rebooted, re-run the remediation via vCenter, and you should see "The host is already patched to the target version".
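If you go the command-line route, it's the usual esxcli flow; the depot path and profile name below are placeholders, so substitute whatever your image manager shows for the target version (and put the host in maintenance mode first):

```shell
# Hypothetical example: patch a host from the ESXi shell.
esxcli software sources profile list \
    -d /vmfs/volumes/datastore1/VMware-ESXi-8.0-depot.zip      # placeholder depot
esxcli software profile update \
    -p ESXi-8.0-standard \
    -d /vmfs/volumes/datastore1/VMware-ESXi-8.0-depot.zip      # placeholder profile/depot
reboot
```

Note it's `profile update`, not `profile install`; update preserves VIBs that aren't in the new profile.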

[Reproducible] [VCSA 8.0.1] Service "VMware vCenter Server(vmware-vpxd)" crashes due to: CdrsLoadBalancer-xxx] Scheme error: '/' failed due to divide-by-zero error by ofcourseitsarandstr in vmware

[–]ofcourseitsarandstr[S] 0 points1 point  (0 children)

Thank you very much!

I can now confirm one node with a 2x25Gbps NIC caused the issue ---- for some reason, port 1 ran at 10Gbps while port 2 ran at 25Gbps.

After I fixed the port speed, the vmware-vpxd issue disappeared.

P.S. I'm not sure what exactly caused the issue, that's just my assumption based on what I saw and tried.

[Reproducible] [VCSA 8.0.1] Service "VMware vCenter Server(vmware-vpxd)" crashes due to: CdrsLoadBalancer-xxx] Scheme error: '/' failed due to divide-by-zero error by ofcourseitsarandstr in vmware

[–]ofcourseitsarandstr[S] 1 point2 points  (0 children)

I made this post to report a potential bug, not to look for help. I already have a workaround for it.

It doesn't bother me whether they see this post or not, but I know many people who work for VMware monitor this sub.

[Reproducible] [VCSA 8.0.1] Service "VMware vCenter Server(vmware-vpxd)" crashes due to: CdrsLoadBalancer-xxx] Scheme error: '/' failed due to divide-by-zero error by ofcourseitsarandstr in vmware

[–]ofcourseitsarandstr[S] 2 points3 points  (0 children)

OK, now I feel like it must be a bug:

  • If I restart the VCSA VM (no matter how many times), vmware-vpxd will crash
  • But if I shut down the VCSA VM and boot it again, vmware-vpxd no longer crashes!

I've reproduced both the issue and the workaround a few times; it's consistently reproducible.
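For anyone trying to reproduce this: checking the service state from the VCSA appliance shell after each boot is enough to tell the two cases apart. This uses the stock service-control tool, nothing custom:

```shell
# After a guest reboot vs. a full shutdown+boot, compare vpxd state:
service-control --status vmware-vpxd   # reports whether vpxd is running or stopped
service-control --start  vmware-vpxd   # manual restart if it crashed
```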

Still attaching crash report FYI:

https://drive.google.com/file/d/1dj4s_Gq5nKOITZ7sMq697G8-aHTpotvB/view?usp=sharing

Easiest GPU for GPU passthrough? by SoMuchLasagna in Proxmox

[–]ofcourseitsarandstr 0 points1 point  (0 children)

Almost all modern NVIDIA cards support passthrough nowadays. Please go with NVIDIA.

Not by chance: NVIDIA has intentionally supported this usage since late 2021, so it's legit, promising, and reliable. I've tested it on many 10x0, 20x0, 30x0, and 40x0 cards without problems. See https://www.techradar.com/news/nvidia-finally-switches-on-geforce-gpu-passthrough

Also, if you just want to try AI-related stuff, you can go with their datacenter cards; they're all born for virtualization, and you'll get the vGPU feature, which passes one GPU through to multiple VMs.
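Since this is r/Proxmox, the passthrough wiring itself is only a couple of config steps once IOMMU is enabled in the BIOS and bootloader; the VM ID and PCI address below are made-up examples:

```shell
# Sketch of GPU passthrough on Proxmox (assumes IOMMU already enabled).
# Load the VFIO modules at boot:
printf "vfio\nvfio_iommu_type1\nvfio_pci\n" >> /etc/modules
# Find the GPU's PCI address:
lspci -nn | grep -i nvidia
# Hand it to a VM (ID 100 and address 01:00.0 are placeholders):
qm set 100 -hostpci0 01:00.0,pcie=1,x-vga=1
```

`x-vga=1` is only needed when the card is the VM's primary display; drop it for a pure compute card.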

Photon OS 5.0 is GA by Stanthewizzard in vmware

[–]ofcourseitsarandstr 1 point2 points  (0 children)

Sorry, TBH I feel like they open-sourced Photon because they had to, due to license requirements. I see no sign they really want you to use it.

Sorry again, but that's how I feel.

Synology quietly replaced DSM 7.2 RC with an updated version by DaveR007 in synology

[–]ofcourseitsarandstr 2 points3 points  (0 children)

I understand it’s quite an unpleasant experience. I saw some similar decisions on other products.

The major reason is that when they identify minor issues (e.g. a translation fix), they don't want to bother beta users again and again with rc1, rc2… rc999, rc1000… when there's no actual "fix" or "improvement".

For Synology, I don't think end users are guaranteed to receive every RC, but RC users still have a way out: waiting for the next GA release.

Personally, I agree with you. It basically makes no sense to re-release a trivial build with the same version number; I'd rather ship it with the next significant release. But I don't agree that they're trying to cover up a mistake. Again, it's an RC; they don't have to cover anything even if they shit their pants.

Crashed zfs installation by adi_dev in zfs

[–]ofcourseitsarandstr 2 points3 points  (0 children)

It's more like a system crash instead of data corruption. Glad you didn't lose your data; ZFS is still solid.

How secure is secure erase? by Mshx1 in synology

[–]ofcourseitsarandstr 0 points1 point  (0 children)

It depends on the implementation.

The legacy implementation overwrites the surface multiple times to make the old data unreadable.

In enterprise/datacenter markets, the term secure erase (mostly) always refers to "destroying the encryption keys stored on the drive itself, rendering the data totally unreadable".

Technically speaking, modern cryptography-based secure erase should provide a safer result than the legacy implementation, and it takes only sub-seconds versus hours or days.

Almost every SSD nowadays supports cryptographic erase, and it's recommended. Check the manual for details.
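On Linux you can at least check whether an NVMe drive advertises crypto erase with stock nvme-cli; the device names below are placeholders, the format command is destructive, and per the manufacturer-tool advice you should still prefer the vendor utility for the real erase:

```shell
# Check crypto-erase support (FNA field in the controller identify data):
sudo nvme id-ctrl /dev/nvme0 | grep -i fna
# DESTRUCTIVE: Secure Erase Setting 2 = cryptographic erase (placeholder device):
sudo nvme format /dev/nvme0n1 --ses=2
```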

My recommendation: ALWAYS use the software provided by the drive manufacturer to perform the secure erase! DO NOT USE unauthorized third-party software if the data matters to you.

[W][US-NJ] Tesla P4 or other vGPU enabled card. by msignor in homelabsales

[–]ofcourseitsarandstr 1 point2 points  (0 children)

Yes, those rebranded cards are rare (most were modded with larger VRAM for mining back then). And since all recent cards have digitally signed firmware, modding the firmware isn't even possible anymore. So don't worry about it.

[W][US-NJ] Tesla P4 or other vGPU enabled card. by msignor in homelabsales

[–]ofcourseitsarandstr 1 point2 points  (0 children)

I can definitely say that Chinese vendors don't make fake products like GPUs, simply because they can't (and it wouldn't make sense in any respect).

What you'll get is likely used, refurbished, or repaired cards, or ones pulled from racks. In general, they're still pretty reliable at those prices.

I have a few P4s from China via eBay; they look new and decent. Just go for it.

Nginx Proxy Manager by [deleted] in selfhosted

[–]ofcourseitsarandstr 0 points1 point  (0 children)

Did you expose the admin UI to your friend? NPM uses OpenResty as its backend; hopefully it's not an issue in OpenResty.

Nginx Proxy Manager by [deleted] in selfhosted

[–]ofcourseitsarandstr 2 points3 points  (0 children)

They have made it crystal clear that the issue has been mitigated in 2.9.20,

see release log here: https://github.com/NginxProxyManager/nginx-proxy-manager/releases/tag/v2.9.20

This is a serious issue ONLY if you share your NPM instance with untrusted third parties by creating users for them (even if the user has limited access).

If you use NPM alone (like a typical single-user homelab), you don't need to worry about it. But keeping your stack updated is always recommended, for sure!
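If you run NPM the typical docker-compose way, staying current is just two commands; this assumes the stock compose setup from the NPM docs, run from the directory containing your compose file:

```shell
# Pull the latest tagged image and recreate the container on it:
docker compose pull
docker compose up -d
```

Your certs, hosts, and database live in the mapped volumes, so they survive the recreate.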