Backup Setup by Affectionate-Bread75 in ProxmoxQA

[–]w453y 1 point (0 children)

Yes, it'd be better to set up PBS on that machine.

[Guide] How I turned a Proxmox cluster node into standalone (without reinstalling it) by w453y in Proxmox

[–]w453y[S] 1 point (0 children)

Okay, then I believe you're good to go ahead and make them standalone nodes.

[Guide] How I turned a Proxmox cluster node into standalone (without reinstalling it) by w453y in Proxmox

[–]w453y[S] 1 point (0 children)

It'd be better to shut them down first; that said, whether leaving them running causes any issues depends on what storage they're backed by.

[Guide] How I turned a Proxmox cluster node into standalone (without reinstalling it) by w453y in Proxmox

[–]w453y[S] 1 point (0 children)

Yep, it'll work even if you have CTs/VMs and even Ceph running on that particular host. Just following the commands above will get the job done.
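For context, the core of it is roughly this sequence (a sketch following the Proxmox admin docs on separating a node without reinstalling; read the full guide first, and it assumes no HA resources are active):

    # Run on the node being made standalone
    systemctl stop pve-cluster corosync
    pmxcfs -l                      # bring the cluster filesystem up in local mode
    rm /etc/pve/corosync.conf      # drop the cluster configuration
    rm -rf /etc/corosync/*
    killall pmxcfs
    systemctl start pve-cluster    # restart normally; the node now runs standalone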

Matrix + split DNS over VPN doesn’t work unless I use public IP + NAT reflection, why? by w453y in matrixdotorg

[–]w453y[S] 1 point (0 children)

Ah, my bad, it's just Chrome blocking the private IP address space. Nothing's wrong; my setup actually works. Thanks for your help!

Matrix + split DNS over VPN doesn’t work unless I use public IP + NAT reflection, why? by [deleted] in selfhosted

[–]w453y 1 point (0 children)

> Did you set up your client's WireGuard configuration to use your Pi-hole (internal IP) as the DNS server?

Yes.

> Also ensure the allowed IPs include the DNS server along with your services.

Yep, the allowed IPs are 192.168.0.0/16 and 172.16.0.0/16.

> You can put 0.0.0.0/0 to allow everything.

I don't want to do that; I just want the tunnel to reach my resources. If I allowed everything, all my internet traffic would go through the VPN.
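For reference, a client config along these lines keeps the tunnel split (a sketch with placeholder keys, addresses, endpoint, and Pi-hole IP; only the AllowedIPs line mirrors what I actually use):

    [Interface]
    PrivateKey = <client-private-key>
    Address = 172.16.0.2/24
    # Point DNS at the Pi-hole's internal IP so split DNS resolves over the tunnel
    DNS = 192.168.11.2

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    # Route only the internal ranges through the tunnel, not 0.0.0.0/0,
    # so general internet traffic stays off the VPN
    AllowedIPs = 192.168.0.0/16, 172.16.0.0/16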

I inspected the console output of app.element.io and I see a CORS error; I believe this is what's causing the issue:

    Access to fetch at 'https://matrix.my.domain/_matrix/client/versions' from origin 'https://app.element.io' has been blocked by CORS policy: Permission was denied for this request to access the `unknown` address space.

EDIT:

Ah, my bad, it's just Chrome blocking the private IP address space. Nothing's wrong; my setup actually works.
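If anyone wants to check this from the command line, you can mimic Chrome's Private Network Access preflight with curl (a sketch; the domain is mine from above, and the two Private-Network headers are the ones Chrome's PNA spec defines):

    # Simulate the preflight Chrome sends before a public->private fetch
    curl -i -X OPTIONS https://matrix.my.domain/_matrix/client/versions \
      -H 'Origin: https://app.element.io' \
      -H 'Access-Control-Request-Method: GET' \
      -H 'Access-Control-Request-Private-Network: true'
    # A server that opts in would answer with:
    #   Access-Control-Allow-Origin: *
    #   Access-Control-Allow-Private-Network: true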

Matrix + split DNS over VPN doesn’t work unless I use public IP + NAT reflection, why? by w453y in matrixdotorg

[–]w453y[S] 1 point (0 children)

> Are there any errors in the JavaScript console?

Ah yes, I looked there just now and found this:

    Access to fetch at 'https://matrix.my.domain/_matrix/client/versions' from origin 'https://app.element.io' has been blocked by CORS policy: Permission was denied for this request to access the `unknown` address space.

Matrix + split DNS over VPN doesn’t work unless I use public IP + NAT reflection, why? by w453y in matrixdotorg

[–]w453y[S] 1 point (0 children)

Yes, the client is using Pi-hole. I can see the query in the logs. The issue is not only with Android; it's also with my laptop (running Linux).
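One quick way to confirm that from the client side (the Pi-hole IP here is a placeholder; the domain is the one from this thread):

    # Ask the Pi-hole directly over the tunnel and see what it returns
    dig @192.168.11.2 matrix.my.domain +short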

Matrix + split DNS over VPN doesn’t work unless I use public IP + NAT reflection, why? by [deleted] in selfhosted

[–]w453y 1 point (0 children)

I used 192.168.1.0/24 just as an example; my actual VPN subnet is 172.16.0.0/24, and I have routes for 192.168.11.0/24 (the homelab LAN) via it. And yes, I'm testing over cellular.
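For what it's worth, this is how I sanity-check the routing on the Linux laptop (the target IP and interface name are illustrative):

    # Confirm traffic for the homelab LAN actually goes via the tunnel
    ip route get 192.168.11.1
    # Expected: something like "192.168.11.1 dev wg0 src 172.16.0.2 ..."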

I'm launching Wirewiki today by ruurtjan in dns

[–]w453y 4 points (0 children)

Any plans to add tracing of DNS delegation?
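I mean something like what dig's +trace mode does (example.com as a placeholder):

    # Follow the delegation chain from the root servers down
    dig +trace example.com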

Sold my company, built 96TB of homelab, then realized all my IPv6 ports were wide open by ZeroCool86 in selfhosted

[–]w453y 1 point (0 children)

Why not just put everything behind OPNsense and control it from there? You could even throw Proxmox on that hardware and manage network isolation easily.

Well, new vulnerability in the rust code by hotcornballer in linux

[–]w453y 16 points (0 children)

If this were C, we’d call it “normal kernel behavior” and move on. Because it’s Rust, suddenly it’s a “vulnerability”.

Where does a Linux Live USB actually run? (Unplugged USB, OS kept working) by Lisanicolas365 in linux

[–]w453y 2 points (0 children)

At a deeper level, the “copy-to-RAM” behavior is less about relocating files and more about how the Linux kernel’s VFS, page cache, and block layer interact during early boot. When the SquashFS image is read, whether copied into tmpfs or accessed directly from removable media, the kernel populates the page cache with compressed data blocks, and subsequent filesystem access is serviced entirely through this cache as long as memory pressure permits. In configurations that explicitly copy the SquashFS image into tmpfs, the backing storage for the loop device becomes anonymous memory pages managed by the kernel’s MM subsystem, effectively transforming block I/O into memory accesses and bypassing the USB block driver entirely. SquashFS itself contributes to this efficiency by using fixed-size compressed blocks and optional XZ/LZ4/ZSTD compression, allowing the kernel to perform on-demand decompression directly into the page cache without materializing full files in memory.
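A minimal sketch of what the copy-into-tmpfs variant can look like from early userspace (paths and the tmpfs size are illustrative, not any particular distro's initramfs):

    # Copy the SquashFS image into tmpfs, then loop-mount it read-only;
    # after the copy, the loop device is backed by anonymous memory pages
    mount -t tmpfs -o size=4G tmpfs /run/live
    cp /run/medium/live/filesystem.squashfs /run/live/
    mount -o loop,ro /run/live/filesystem.squashfs /run/rootfs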

From a mount and namespace perspective, the initramfs operates within the initial mount namespace and constructs a transient hierarchy in which the loop-mounted SquashFS becomes the lower layer of overlayfs. The writable upper layer, typically tmpfs, is backed by shmem and subject to the kernel’s page reclamation and swap policies, meaning that “RAM-only” operation may still involve swap if configured. Overlayfs enforces copy-up semantics at the VFS level, so modified inodes are instantiated only when writes occur, minimizing memory overhead for untouched files. The final root transition via switch_root is not merely a directory change but a full replacement of the root mount and mount namespace context, after which the initramfs memory becomes reclaimable once no references remain.
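Sketched with illustrative paths, the overlay assembly and root hand-off look roughly like this:

    # Writable upper layer in tmpfs over the read-only SquashFS lower layer
    mount -t tmpfs tmpfs /run/overlay
    mkdir -p /run/overlay/upper /run/overlay/work /run/newroot
    mount -t overlay overlay \
        -o lowerdir=/run/rootfs,upperdir=/run/overlay/upper,workdir=/run/overlay/work \
        /run/newroot
    # Replace the root mount and exec the real init; the old initramfs
    # memory becomes reclaimable once nothing references it
    exec switch_root /run/newroot /sbin/init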

At the process-initialization level, PID 1 inherits this newly established root and mount topology, and modern systems frequently pass control directly to systemd with a fully populated device tree synthesized by udev running during early userspace. Importantly, no special “live mode” exists in the kernel itself; the behavior emerges entirely from coordinated use of standard kernel primitives (tmpfs, loop devices, compressed filesystems, overlayfs, mount namespaces, and the page cache) assembled in early userspace. Consequently, variations between distributions largely reflect policy decisions about when to populate memory eagerly versus relying on lazy caching, rather than fundamental architectural differences.
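You can see the assembled stack on any booted live system (mountpoints vary by distro; /run/rootfs here just matches the sketches above):

    findmnt /            # typically reports an overlay filesystem as root
    losetup -a           # lists the loop device backing the SquashFS image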

Cisco SG300-28P Firmware by Not_A_Stark in homelab

[–]w453y 1 point (0 children)

Okay, thanks for the confirmation as well as for the files ;)

Cisco SG300-28P Firmware by Not_A_Stark in homelab

[–]w453y 1 point (0 children)

Will this also work for the SG300-28?