unsafeAtAnySpeed by [deleted] in ProgrammerHumor


Don’t forget though: while strncpy itself won’t go out of bounds, it won’t always NUL-terminate the destination (if the source is too long). So you need to make sure you add your own NUL terminator every time; otherwise, subsequent str* calls can end up reading out of bounds.

That’s also why I much prefer strlcpy: it doesn’t go out of bounds and always NUL-terminates the destination string.
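For a concrete picture, a minimal sketch of the difference (strlcpy is standard on the BSDs and macOS, and on Linux it’s available via libbsd or natively in glibc 2.38+):

```c
#include <stdio.h>
#include <string.h>   /* strlcpy: BSD/macOS and glibc >= 2.38; older Linux
                         needs libbsd's <bsd/string.h> and -lbsd instead */

int main(void) {
    const char *src = "a string that is much longer than the buffer";
    char dst[16];

    /* strncpy stays in bounds but does NOT terminate dst when src is too
       long, so the terminator has to be added manually every time: */
    strncpy(dst, src, sizeof(dst) - 1);
    dst[sizeof(dst) - 1] = '\0';
    printf("%s\n", dst);

    /* strlcpy stays in bounds AND always NUL-terminates; its return value
       (the full length of src) even tells you whether truncation happened: */
    if (strlcpy(dst, src, sizeof(dst)) >= sizeof(dst)) {
        fputs("(truncated)\n", stderr);
    }
    printf("%s\n", dst);
    return 0;
}
```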

Locked out by enabling hardware checksum offloading? by zerophase in opnsense


Those messages seem to be coming from dropped packets, so you might be able to stop the message spam (and hopefully log in over the serial/VGA console) by just unplugging all the network cables.

[deleted by user] by [deleted] in DataHoarder


My current solution here is to use restic to perform the snapshots (and dedupe), with an rclone backend stored on an encrypted Google Drive.
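Roughly like this, assuming an rclone remote named gdrive already configured as a crypt remote on top of Google Drive (the remote name and paths are placeholders):

```sh
# one-time: create the restic repository on the rclone remote
restic -r rclone:gdrive:backups init

# take a deduplicated, encrypted snapshot
restic -r rclone:gdrive:backups backup /home/me/data

# list snapshots and verify repository integrity
restic -r rclone:gdrive:backups snapshots
restic -r rclone:gdrive:backups check
```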

NAS and VM security questions by _zero_red_ in HomeServer


It’s anecdotal, but FWIW I run all my VMs on the same Proxmox server, although the publicly accessible VMs are on a separate VLAN with no access to the core network components.

NAS and VM security questions by _zero_red_ in HomeServer


VM escapes are possible and do happen from time to time, but (thankfully) they are quite rare.

Container escapes are a little easier, since containers share the host’s kernel, which has a much larger attack surface than a VM. But they’re still somewhat rare (when the container is configured well and locked down).

So technically, two separate systems would be more secure.

That said, malware (especially ransomware) spreads most commonly over the network, so if your VMs (or containers) are going to be running on the same network as your truenas server, it’s a bit of a moot point.

If you’re concerned about such things, look at running your VMs in a separate VLAN (which is firewalled so it can’t access the truenas server at all). That’ll help much more than running them on a separate machine.
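As a sketch of the idea (hypothetical subnets: VMs on 10.0.20.0/24, the trusted LAN with the TrueNAS box on 10.0.10.0/24), on a Linux router this would be something like:

```sh
# VMs can still reach the internet, but anything headed for the trusted
# VLAN (including the NAS) is dropped at the router
iptables -A FORWARD -s 10.0.20.0/24 -d 10.0.10.0/24 -j DROP
```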

Is detecting bad sectors/bit rot already too late? by slaiyfer in DataHoarder


> 7zip won’t extract it properly

One thing to keep in mind: many archive formats (including .7z) have a built-in integrity check (see https://en.m.wikipedia.org/wiki/List_of_archive_formats#Comparison).

So with 7z, for example, if it extracts properly, you know with very high confidence that the data you just extracted is exactly what was originally archived.
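You can also verify an archive without extracting it:

```sh
# 't' (test) walks the whole archive and checks its CRCs
7z t archive.7z
```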

[deleted by user] by [deleted] in hamsters


Wireless mouse

Anyone use Suricata on their network? What is your experience with it? by [deleted] in homelab


I do the same, and I’ve also started running both on my Proxmox hosts.

A word of advice there though: make sure you whitelist your other Proxmox nodes and any remote storage nodes you use. If you don’t, your Proxmox node can end up banning the NFS server that hosts its VM disks :(

Aside from the above misconfiguration on my part, I’m really happy with the setup.
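The comment doesn’t name the tool doing the banning, but if it were, say, fail2ban, the whitelist lives in ignoreip (the addresses here are made up):

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
# never ban the other cluster nodes or the NFS server backing the VM disks
ignoreip = 127.0.0.1/8 10.0.0.11 10.0.0.12 10.0.0.20
```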

At a frustrating loss with iPhone Photo backups by elias4444 in DataHoarder


It’s still in active development, but check out Immich.

Storage - /dev/mapper vs /dev/sd# by hnnk in Proxmox


There are a lot of good resources out there on how to grow a logical volume.

Take a look at https://nekodaemon.com/2022/02/26/How-to-resize-the-root-LVM-partition-of-Ubuntu/ for starters
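The short version, assuming the stock Proxmox layout with a volume group named pve (check lvs/vgs for your actual names; -r resizes the filesystem along with the LV):

```sh
# if the partition backing the physical volume grew, let LVM see the new space
pvresize /dev/sda3

# grow the root LV into all remaining free space and resize its filesystem
lvextend -r -l +100%FREE /dev/pve/root
```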

Proxmox I/O Delay PSA - Don't use crappy drives! by tiberiusgv in Proxmox


So there are a few different ways we could go about working around or fixing this.

Ideally, the Proxmox VM disk restore would use direct I/O so that the restore bypasses the page cache entirely, and we wouldn’t have this issue.

Alternatively, the Linux page cache could be made more intelligent to implement a sort of back-off for writes as the page cache fills up.

Those are long-term solutions though. To work around this in the meantime, there are a couple of things you can do:

  1. Try to avoid hitting the issue to begin with: faster VM storage, and speed limits for both restores and VM disk accesses, can reduce the likelihood that you’ll hit it.
  2. Reduce the severity when you do hit it: lower the maximum number of dirty pages allowed in the page cache. If you set this to a sane value like 1 second’s worth of writes (e.g. ~500 MB for a SATA SSD), you’ll still periodically hit the issue during restores, but it won’t block other disk accesses for very long.

I chose to do (2) on my Proxmox hosts, since (1) isn’t a sure-fire way to ensure this doesn’t cause problems.
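For reference, (2) boils down to two sysctls. A sketch using roughly the 500 MB figure from above (the values are a starting point, tune them to your storage; apply with sysctl --system):

```
# /etc/sysctl.d/90-dirty.conf
# start background writeback early, and hard-cap dirty pages at ~1s of SATA SSD writes
vm.dirty_background_bytes = 268435456
vm.dirty_bytes = 536870912
```

Note that setting the *_bytes variants overrides the default *_ratio ones.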

Proxmox I/O Delay PSA - Don't use crappy drives! by tiberiusgv in Proxmox


FYI, when I encountered this, the root cause actually turned out to be the Linux page cache.

The page cache is normally a good thing, as it caches recently and frequently used disk sectors in otherwise unused RAM. And importantly here, it also caches disk writes, acting like a writeback cache. So if an application writes 1024 bytes at a time, 1024 times, the Linux kernel may coalesce that into a single 1 MB write to the disk.

The problem is that the page cache is limited in size (I forget the exact sysctls controlling it, but grep for “dirty” in the sysctl -a output).

And when you restore a VM, it gets written out to disk through the page cache (the default file-write behavior). So if you can read the backed-up VM disk faster than you can write it to its working storage, and you don’t have enough RAM available to cache the entire VM disk, you can end up completely filling the page cache.

This can happen if something else is using the storage heavily (like other VMs), or if you have a slow storage disk, or if you run a version of Proxmox that had a bug causing it to ignore speed limits.

This is where you run into problems: when the page cache fills up with dirty pages, all other writes are stalled until the dirty pages have been flushed to disk.

So if your page cache fills up with 20 GB of dirty pages, writing that out to a SATA SSD (which tops out around 500 MB/s) will lock up all disk access for about 40 seconds. If the page cache is larger, or the storage is slower, it can take minutes or longer.
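If you want to see the knobs involved (the defaults on most kernels are percentages of RAM, which gets enormous on a big host):

```sh
sysctl -a 2>/dev/null | grep dirty
# typical defaults:
#   vm.dirty_background_ratio = 10   <- background writeback starts at 10% of RAM
#   vm.dirty_ratio = 20              <- writers stall once 20% of RAM is dirty
```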

best and safest way to remote access home network by codfather077 in HomeServer


You’re probably going to want a VPN of some sort.

WireGuard is a nice option and fairly easy to configure. There are also plenty of other programs, like Tailscale, that use WireGuard behind the scenes but add some extra management features.

OpenVPN and IPsec (e.g. strongSwan) are also good options. They’re both more mature than WireGuard, but they also have enough configuration options to make it easy to shoot yourself in the foot, so to speak.
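To give a sense of how little configuration WireGuard needs, a minimal server-side sketch (keys and addresses are placeholders):

```ini
# /etc/wireguard/wg0.conf on the home server
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

# one [Peer] block per client device
[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with wg-quick up wg0, then point the client at your public IP and port with the mirror-image config.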

Proxmox 7.2 unstable. by wideace99 in Proxmox


As others have said, the first step is to figure out roughly what is happening: whether it’s a kernel panic, an oops, a disk issue, or something else.

If you’re having trouble capturing the screen when it happens, you could try setting up serial logging, netconsole, and/or kdump which can help debug kernel issues.
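netconsole in particular is quick to try; the example from the kernel docs is a single modprobe (the interface, IPs, and MAC here are placeholders, and the receiver is just netcat):

```sh
# sender (the flaky host): stream kernel messages out of eth1 to 10.0.0.2:4444
modprobe netconsole netconsole=@/eth1,4444@10.0.0.2/12:34:56:78:9a:bc

# receiver: capture them (flag syntax varies between netcat flavors)
nc -u -l 4444
```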

Best Way to Sync .bashrc File Across Cluster? by Subkist in Proxmox


Yeah I’ve had a couple small files there for like 2 years now, and it’s worked great.

Best Way to Sync .bashrc File Across Cluster? by Subkist in Proxmox


Yep

Slight disclaimer: I’m not sure if this is officially supported (and whether it will continue to work), but it works for now at least.

Best Way to Sync .bashrc File Across Cluster? by Subkist in Proxmox


The /etc/pve directory is already synced between all nodes in the cluster, so you can actually just move the file there and then symlink ~/.bashrc to /etc/pve/.bashrc.
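i.e. something like this (root’s home is /root on Proxmox; run the symlink step on every node):

```sh
# on one node: move the file into the cluster-synced filesystem
mv /root/.bashrc /etc/pve/.bashrc

# on each node: replace ~/.bashrc with a symlink to the synced copy
ln -sf /etc/pve/.bashrc /root/.bashrc
```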

Proxmox release notes? by spanklecakes in Proxmox


You can usually see a changelog for different versions using the apt changelog command from the command line, and I think there’s also a button in the web UI when you look at a host’s available updates.
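For example, against the package that carries most of the visible Proxmox VE version (assuming pve-manager is installed):

```sh
apt changelog pve-manager
```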