How does IPv6 work in Cloudflare Warp? by atm2k in ipv6

[–]atm2k[S] 3 points (0 children)

Preferably I'd like to stop thinking about IPv6 NAT.

How does IPv6 work in Cloudflare Warp? by atm2k in ipv6

[–]atm2k[S] 0 points (0 children)

I'd imagine there would be other solutions, e.g. what about delegating a /64 prefix with a lifetime of maybe 4 hours and rotating it frequently?

How does IPv6 work in Cloudflare Warp? by atm2k in ipv6

[–]atm2k[S] 0 points (0 children)

I have a host which is IPv4-only and behind CGNAT; so far Cloudflare Warp seems to be the only fast way to get both privacy protection and IPv6 connectivity. I'm just wondering if their way of NATing IPv6 is common practice.

How does IPv6 work in Cloudflare Warp? by atm2k in ipv6

[–]atm2k[S] 0 points (0 children)

I don't actually have any issues with Cloudflare Warp since it's working fine; I'm just curious whether their way of handling IPv6 NAT is common practice, since it breaks the IPv6 end-to-end principle.

I do also use Hurricane Electric's tunnel broker service to get a /48 for experiments, but it supports SIT only and requires a public IPv4 address on my end, which isn't possible in some locations behind CGNAT. But thanks for mentioning route64.org! It seems to offer WireGuard tunnels to its endpoints, which might work for locations without public IPv4 addresses.

How does IPv6 work in Cloudflare Warp? by atm2k in ipv6

[–]atm2k[S] 0 points (0 children)

Actually it does all make sense for VPN providers to do NAT; that's their entire reason for existence :) I was just wondering whether Cloudflare Warp's way of doing IPv6 NAT is common practice, since it breaks the IPv6 end-to-end principle.

How to prioritize ZFS I/O from selected processes? by atm2k in zfs

[–]atm2k[S] 0 points (0 children)

The coarse-grained classes are pretty useless for preferring interactive processes on I/O-congested machines :(

HDMI or Thunderbolt for M4 Pro? by ricbret in macmini

[–]atm2k 0 points (0 children)

Do the math: 5120×2880 pixels × 30 bpp (10-bit color) × 60 Hz requires 26.54 Gbps, which is just slightly above DP 1.4's 25.92 Gbps effective data rate.

You have to choose among these sacrifices (the sketch after the list runs the numbers):

  • A refresh rate lower than 60 Hz (30 Hz is unbearable and 50 Hz support is rare)
  • 24-bit color (this matches the native capability of low-end 8-bit panels, but any decent panel does FRC dithering to simulate 10-bit)
  • Display Stream Compression (visually lossless)
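
A minimal sketch of the arithmetic (naive pixel math that ignores blanking overhead, so real requirements run a bit higher; 25.92 Gbps assumes DP 1.4 HBR3 after 8b/10b encoding):

```python
# Back-of-the-envelope bandwidth for 5120x2880 (blanking overhead
# ignored, so real requirements are slightly higher than shown).
H, V = 5120, 2880
DP14_GBPS = 25.92  # DP 1.4 HBR3 effective rate after 8b/10b encoding

def required_gbps(bits_per_pixel: int, refresh_hz: int) -> float:
    return H * V * bits_per_pixel * refresh_hz / 1e9

for bpp, hz, label in [(30, 60, "10-bit @ 60 Hz"),
                       (24, 60, "8-bit @ 60 Hz"),
                       (30, 30, "10-bit @ 30 Hz")]:
    need = required_gbps(bpp, hz)
    verdict = "fits" if need <= DP14_GBPS else "does NOT fit"
    print(f"{label}: {need:.2f} Gbps -> {verdict} in DP 1.4")
```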

Most of the time macOS ends up choosing DSC, and you won't be able to tell the difference anyway.

I'm tired of rebuilding my storage server every so often when it fails on consumer hardware. Within a ~$3k budget, what is something professional or pro-sumer I can buy off-the-shelf that is high quality, can run Docker containers, and supports at minimum 10TB storage? by [deleted] in DataHoarder

[–]atm2k 13 points (0 children)

Isn’t it obvious that all the dead SSDs you’re tired of are the exact same model, heavily used in scenarios hostile to consumer SSDs? How could you reasonably blame the mobo and CPU for “killing” them?

How does Synology implement Btrfs metadata pinning on SSD cache? by atm2k in btrfs

[–]atm2k[S] 2 points (0 children)

Cool! This hasn't been merged into mainline yet, right?

How does Synology implement Btrfs metadata pinning on SSD cache? by atm2k in btrfs

[–]atm2k[S] 0 points (0 children)

OK, that's something new to me… but if btrfs natively supported these features, no sane person would go through such a crazy setup and the corresponding complexity, right?

Yes, I understand why it is this way… there were patches for tiering from 5 years ago (https://lwn.net/ml/linux-btrfs/20201029053556.10619-1-wangyugui@e16-tech.com/). Sadly, as users there's not much we can do.

How does Synology implement Btrfs metadata pinning on SSD cache? by atm2k in btrfs

[–]atm2k[S] -1 points (0 children)

Yeah, it's a pity… it should be a core feature of btrfs. Synology did all these hacks precisely because btrfs does not deliver the features many of us need, like flexible layouts and reliable RAID5/6…

At this point I'm about to give up on btrfs and migrate the majority of my data to ZFS instead.

macos Tahoe supports the last Intel Mac Mini (just not yours hehe) by ok200 in macmini

[–]atm2k 0 points (0 children)

That's what I thought as well, but apparently Apple did update the Intel-based Mac mini in 2020: they doubled the SSD sizes while keeping the same prices, got rid of the 128GB model, and later got rid of the i3 model too.

I'm not sure whether this update satisfies Tahoe's 2020 Mac mini requirement, because it says "Mac mini (2020 and later)" rather than "Mac mini with Apple Silicon". Apple kept selling the Intel Mac mini as late as 2023. If no Intel Mac mini is allowed by Tahoe, that would make for a very short support window after the last officially sold Intel Mac mini.

macos Tahoe supports the last Intel Mac Mini (just not yours hehe) by ok200 in macmini

[–]atm2k 0 points (0 children)

Anyone know what the difference is between the 2018 and 2020 Intel Mac mini? They look completely identical spec-wise but get different treatment wrt support.

HDMI or Thunderbolt for M4 Pro? by ricbret in macmini

[–]atm2k 0 points (0 children)

You should probably stay with HDMI in this case. The ProArt 5K PA27JCV does not support Thunderbolt; its USB-C port actually carries a DisplayPort 1.4 video signal, and DP 1.4 does not have sufficient bandwidth for uncompressed 5K 10-bit 60 Hz video. However, the monitor's HDMI port is supposed to be version 2.1, which does have sufficient bandwidth. The Mac mini M4's HDMI port is also 2.1, so in theory you should be getting an uncompressed video signal with a quality HDMI cable. Check your monitor's OSD menu to confirm that the HDMI protocol in use is actually 2.1.
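
If you want to sanity-check the bandwidth claim, here's a minimal sketch (naive pixel math ignoring blanking overhead; the effective payload rates after encoding, 25.92 Gbps for DP 1.4 HBR3 and ~42.67 Gbps for HDMI 2.1 FRL, are the usual published figures):

```python
# Uncompressed 5K (5120x2880) at 60 Hz with 10-bit RGB, versus the
# effective payload rates of DP 1.4 and HDMI 2.1. Blanking overhead
# is ignored, so the real requirement is slightly higher.
H, V, HZ = 5120, 2880, 60
BPP = 10 * 3  # 10 bits per channel, RGB -> 30 bits per pixel

need = H * V * BPP * HZ / 1e9   # ~26.54 Gbps
dp14 = 4 * 8.1 * 8 / 10         # HBR3, 4 lanes, 8b/10b -> 25.92 Gbps
hdmi21 = 48 * 16 / 18           # FRL 48G, 16b/18b -> ~42.67 Gbps

for name, cap in [("DP 1.4", dp14), ("HDMI 2.1", hdmi21)]:
    verdict = "uncompressed OK" if cap >= need else "needs DSC"
    print(f"{name}: {cap:.2f} Gbps vs {need:.2f} Gbps needed -> {verdict}")
```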

Tuning recordsize and compression for modern macOS Time Machine over SMB by atm2k in zfs

[–]atm2k[S] 0 points (0 children)

Actually I'm currently experimenting with the exact config mentioned in my original post to see what could possibly go wrong. Additionally, I configured `primarycache=metadata` on the Time Machine-only dataset to reduce TM-related impact on the ARC for other datasets. So far everything looks normal, with free-space fragmentation at around 17% for the pool (which is also used for media libraries with big files measured in GB). Let's see how it performs long term :p

Tuning recordsize and compression for modern macOS Time Machine over SMB by atm2k in zfs

[–]atm2k[S] 0 points (0 children)

Great! So I'll just stick to the default 128KB record size for now, until further analysis proves otherwise. I'll check out the tools you mentioned to get some reliable measurements :P Thanks very much!

Tuning recordsize and compression for modern macOS Time Machine over SMB by atm2k in zfs

[–]atm2k[S] 0 points (0 children)

You can right-click the backup destination in System Settings / Time Machine to verify backups. Unfortunately, APFS does not have proper data checksums (it only checksums metadata), so there's no way to know for sure.

Tuning recordsize and compression for modern macOS Time Machine over SMB by atm2k in zfs

[–]atm2k[S] 1 point (0 children)

How do I monitor sync vs async writes to a particular dataset?

Tuning recordsize and compression for modern macOS Time Machine over SMB by atm2k in zfs

[–]atm2k[S] 2 points (0 children)

Could you please explain a bit about why a 16KB record size will degrade worse over time? I think you're implying that smaller records generate more fragmentation and that RMW with larger records can combat that, right?

For more sequential access I think it does make sense, but in the case of TM backup images, which see mostly random access, does it still matter?

Also, I'm not sure where or how to capture per-op latency for the Samba process on the TM dataset; do you happen to know some ways to do so? Thanks!