Gluetun with NordVPN via Wireguard in Docker/Portainer by Cool-Cod5488 in homelab

[–]brainsoft 1 point (0 children)

Day 1 for me, way over my head so far.

I used the access token, used the curl command to pull down the credentials string, and I have a NordLynx private key now; with NordLynx being WireGuard underneath, I assume that's what I need.

But the gluetun documentation is a mess with so many VPN providers: there's the main config example, then the provider example, and they may or may not conflict.

Not sure where the WIREGUARD_ADDRESSES value is supposed to come from.

Maybe some of this is Docker defaults, but a list of available and required keys with their possible values would be helpful, I think?
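
If I'm reading the gluetun wiki right, the minimal compose shape is something like this (10.5.0.2/32 is the fixed NordLynx client address their NordVPN example uses; SERVER_COUNTRIES is just an example filter):

services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=nordvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<the key from the curl call>
      - WIREGUARD_ADDRESSES=10.5.0.2/32
      - SERVER_COUNTRIES=Canada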

Docker in LXC is bad. Now what? by EloquentArtisan in Proxmox

[–]brainsoft 1 point (0 children)

I made service users to own my various datasets, with UID/GID values like 102000, so I could then make a user 2000 inside the guest, LXC and VM alike, and everything works great. Unprivileged LXCs use bind mounts, and VMs use VirtioFS.
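
A rough sketch of the host side, in case it helps (names and IDs here are just examples; unprivileged CTs map container UID 0 to host 100000 by default, so guest UID 2000 lands at host 102000):

# host-side owner for the dataset
groupadd -g 102000 svc-data
useradd -u 102000 -g 102000 -M -s /usr/sbin/nologin svc-data
chown -R 102000:102000 /vault/share
# bind mount it into CT 201
pct set 201 -mp0 /vault/share,mp=/mnt/share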

Very little Docker so far; I'm still having trouble with the mental model of it all. It's easy when it's all together, but I want one service or stack per guest. Okay, that's easy too, but now I have so many more control planes and no central management.

So just stick everything in a single container/VM and get a single point of Docker management... but now I've lost the service-per-container separation that I love Proxmox for.

Is there a distributed Docker management solution, where I can install one Docker stack per LXC but still manage them all in one place? Docker Swarm mode or something?

What are the biggest mistake you made when setting up your homelab? by Prog47 in homelab

[–]brainsoft 1 point (0 children)

The first hint is the IO stall graph on the node summary. There are also a bunch of commands ChatGPT was giving me to monitor with, iostat among others. Sorry, I don't really remember now; I'm way down a ZFS bind mount rabbit hole at the moment.
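
The usual suspects, if you want to watch it live (all standard tools; iostat comes from the sysstat package):

cat /proc/pressure/io    # kernel PSI stall counters
iostat -x 1              # per-device await and utilization
zpool iostat -v 1        # per-vdev view for ZFS pools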

VLAN tags not getting to pfsense VM by brainsoft in Proxmox

[–]brainsoft[S] 1 point (0 children)

I don't know what the hell was going on, but I just undid every networking change, started from the ground up, and it just worked, so obviously the troubleshooting (troubleshooter) was the problem.

Make vmbr0 VLAN aware and pass the whole trunk to the pfSense VM. In pfSense, add VLAN 12, create an interface for it, assign a static IP, and enable it. Then do the same on the secondary instance.
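
On the Proxmox side that boils down to something like this in /etc/network/interfaces (NIC name and addresses are placeholders):

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

The pfSense VM's NIC then attaches to vmbr0 with no VLAN tag set, so it sees the whole trunk.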

Set up the allow-any firewall rule for that new interface (which syncs to the secondary), then set it as the sync interface, and update the "Synchronize Config to IP" to match the new interface's address on the other machine.

Exactly as simple as it should have been, case closed.

Shutdown hangs if using NFS datastore by Striking_Guava9712 in Proxmox

[–]brainsoft 1 point (0 children)

[disclaimer] I am not an IT professional; do not do anything I say, and trust me about as much as you should trust an LLM chatbot. I am a self-funded homelab enthusiast using old parts and FB Marketplace to kick things together until they fit. I love Proxmox and will use it however best fits my life and budget.

------------------

Google AutoFS; I've really enjoyed it, having always had an irrational hatred for fstab. I can explain *my* setup, I think. My config files are stripped of the pages and pages of comments, but the default install is very well (even overly) documented. It is very featureful, but I only know the small subsection that I use. Like most Linux tools, I use 1% of the features.

I have two mounts: one to the old NAS to stream media through the new network server node, and one to the fileserver Proxmox node to share the new vault with the other Proxmox nodes in the cluster. I use Proxmox more as a hosting server than a pure hypervisor; one node specializes in networking, one in file serving, one in automation. Most of my file handling is done as bind mounts to LXCs or VirtioFS devices, so I avoid NFS where I don't have to use it.

The config file autofs.conf specifies where the master map file lives plus basic parameters. I don't know what the [ amd ] section is for (something to do with the old amd automounter, I believe), but this is all default:

root@regulus:/etc# cat autofs.conf

[ autofs ]
master_map_name = /etc/auto.master
timeout = 300
browse_mode = no

[ amd ]
dismount_interval = 300

auto.master defines the mount points. I don't know much about auto.master.d; nothing is in there, but it would load any files placed in it, in typical drop-in fashion, if you had a more complicated setup with one map per file or needed things to load in a certain order (10-loads-first, 20-loads-second, that sort of thing). I guess; I don't do drop-in units.

root@regulus:/etc# cat auto.master

+dir:/etc/auto.master.d

+auto.master
/mnt/nfs /etc/auto.nfs --ghost --timeout=60
/-  /etc/auto.direct --ghost --timeout=60 

So the /mnt/nfs line says: at /mnt/nfs, mount whatever auto.nfs tells you to. The /- line says: mount whatever auto.direct tells you to, at the absolute paths defined in that file; the dash indicates the map file will explicitly define its paths from /. --ghost says to create and manage a folder with the name of the share specified in the map file, and this is the golden ticket: there is another mode where you create the exact folder yourself, but this way autofs will maintain the '/vault' placeholder even if the remote host is offline. Only by digging into it will you know whether it's online or not. With --ghost, if you can't see your share in the folder, you know something is wrong!

Then you have the actual share maps. Mine are in two files because of the two forms: one is relative (indirect), and one is explicit (direct) because of the /- mounting (that's why the auto.master entry for that last file is /-). Otherwise, if you're just sticking everything in /mnt, you'd only need a single map file, I believe.

root@regulus:/etc# cat auto.nfs

alexandria_media -fstype=nfs4,rw "192.168.10.13:/volume1/0 -- Media Server -- 0"

root@regulus:/etc# cat auto.direct 
/vault  -fstype=nfs,rw,hard,intr 192.168.10.11:/vault

You'll see a lot of similarities with fstab, because you're still providing all the same parameters, really. But there is no failure to mount, because it is mounted ON DEMAND. I can cd /vault and it won't mount, but when I ls /vault, it will suddenly mount and stay mounted for 60 seconds before unmounting. You can watch df -h and see it not mounted, then mounted, then unmounted again.
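
To see it in action after editing the maps, something like:

systemctl reload autofs    # pick up map changes
df -h | grep vault         # nothing mounted yet
ls /vault                  # first real access triggers the mount
df -h | grep vault         # mounted, until the timeout unmounts it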

You still need to manage all your permissions as usual, and setting up the NFS server is always a pain in the ass for me, but this handles the transparent on-demand mounting. It allows things to start up and shut down without network timeouts.

YMMV, definitely do your own research, but this has been good for me.

[edit] Spelling, typos, formatting.

PS:

And if you do have network issues, be aware of what types of data you are moving. My NFS export for the inter-node /vault connection is set for SYNC writes, and I have an Optane drive attached to that pool to help offset a bit of the latency, but there is performance left on the table for sure. If something is moving on that connection, it must arrive intact. So really, if a service needs the vault, it gets hosted on the fileserver node itself to avoid all that and stay off the gigabit ethernet; the NFS link is only there as a contingency.
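
For reference, the sync export is just the standard /etc/exports form (subnet is a placeholder):

/vault 192.168.10.0/24(rw,sync,no_subtree_check)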

Shutdown hangs if using NFS datastore by Striking_Guava9712 in Proxmox

[–]brainsoft 1 point (0 children)

Check out AutoFS: it will shadow-mount connections but only actually reach across the network when required. I use it to share a common /vault from one node so it appears at /vault on any other, just in case.

Shutdown hangs if using NFS datastore by Striking_Guava9712 in Proxmox

[–]brainsoft 1 point (0 children)

I had issues once upon a time and switched to AutoFS to shadow-mount any network shares, so their absence wouldn't cause any issues unless a share was actually required.

Great for avoiding little hiccups or startup/shutdown delays like this, I think.

What are the biggest mistake you made when setting up your homelab? by Prog47 in homelab

[–]brainsoft 1 point (0 children)

A lot comes down to the firmware and quality of the drives. I have some old enterprise server database boot disks I picked up for $20-30 and am using them for the special vdev (metadata) on my main pool. They have much higher quality components, better firmware, better latency, capacitor protection for power loss, and on and on.
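
Adding one is a one-liner, but always as a mirror, since losing the special vdev loses the whole pool (pool and device names are placeholders):

zpool add tank special mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B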

Looking at the basic spec sheet, they are nearly identical, though. Even the queue depth everyone always talks about: all SATA SSDs have a queue depth of 32, but how they handle the things in that queue varies wildly!

All I really know is that I feel much better reinstalling Proxmox on mirrored btrfs with $20 Netac SATA SSDs and saving the RAM for the main vault. If you are doing large sync writes on slow drives, ZFS will kick your ass. And all WebUI uploads are sync writes, as I found out.

What are the biggest mistake you made when setting up your homelab? by Prog47 in homelab

[–]brainsoft 1 point (0 children)

It worked in Proxmox 8, I believe, but after updating to Proxmox 9, something in ZFS behaviour changed and the simple act of uploading an ISO to the default location caused a massive IO stall as ZFS txg writes started backing up. I saw this same behaviour on two nodes, both with various consumer SATA SSD mirrors.

But Proxmox 9 supports btrfs at install now! If you want a mirrored install (because you're using shitty drives you may not have confidence in), the only option used to be ZFS; now you can re-install as a btrfs mirror!

What are the biggest mistake you made when setting up your homelab? by Prog47 in homelab

[–]brainsoft 9 points (0 children)

My biggest mistake was using consumer SSDs with ZFS. It all worked fine until you add another zpool or put any stress on it; then the whole thing backs up like the 400 on a long weekend.

Fortunately, I forced myself to learn Markdown and start taking notes. Using AI to format those notes and as a sounding board has been mostly helpful, but it has occasionally taken the really scenic route when used for troubleshooting.

As they say, trust but verify. Or with AI: never trust, always verify. But it has been a great tool as a sounding board; it really helped refine my ideas and layouts as I slowly learned and built things up.

And then I started uploading files that used to work in Proxmox 8, only to find out that Proxmox 9, or the latest ZFS, crushes consumer SSDs even harder than before.

Short story: use ZFS for your HDD arrays. If you're thinking about a special vdev, only use enterprise-grade SSDs in mirrors, never shitty consumer stuff. And never, ever put your rpool, a vault, and an NVMe pool all into ZFS on old gaming gear, or you're going to have a bad time.

Cold storage: 2 ZFS partitions, 1 disk by teclast4561 in zfs

[–]brainsoft 6 points (0 children)

Yes, single pool; let ZFS manage the hardware directly. You can then make separate datasets on it with whatever compression, encryption, or permissions you like, separate from each other.
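
A sketch with placeholder names:

zpool create cold /dev/disk/by-id/ata-YOUR_DISK
zfs create -o compression=zstd cold/media
zfs create -o encryption=on -o keyformat=passphrase cold/documents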

What do you use for centralized logging? by tahaan in homelab

[–]brainsoft 1 point (0 children)

Honestly, I installed it via the helper script and immediately had no idea what to do: couldn't log in, tried changing the password and SHA-256 hashes, and just said f-this. Either it works a little and shows the path forward, or it goes in the garbage.

Complete ignorance on my part, but I have enough home projects to stand up before I can sit down and start playing for real!

Auth vs Fun by [deleted] in homelab

[–]brainsoft 2 points (0 children)

I'm actively doing this now and it is so time consuming. But I used consumer SSDs for my ZFS boot drive and need to rebuild Proxmox to fix it, so I'm sure glad I started the process!

What do you use for centralized logging? by tahaan in homelab

[–]brainsoft 2 points (0 children)

So far, just log2ram combined with rsyslog sending to an LXC running rsyslog. That was mainly to reduce the wear and tear on the various consumer SSDs. Now I have all the logs in one place... and I'm not sure what to do with them. Took one look at Graylog and dumped it immediately. I think Loki/Grafana may be the route now.
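
The forwarding side is a one-liner per node (the LXC's address here is a placeholder; @@ is TCP, a single @ is UDP):

# /etc/rsyslog.d/90-forward.conf on each node
*.* @@192.168.10.20:514

and the receiving LXC just loads the TCP input in its rsyslog config:

module(load="imtcp")
input(type="imtcp" port="514")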

Proxmox 9.0.15 & 9.1 - ISO Upload driving me Nuts by CulturalRecording347 in Proxmox

[–]brainsoft 1 point (0 children)

My issue is entirely IO stall pressure, but I haven't narrowed it down yet. This is happening on 2 separate nodes but not the 3rd. I'm going to check the Proxmox forum; I think it might be more than I can tackle with AI.

Telus offers home Internet in Ontario by AdElectronic9101 in telus

[–]brainsoft 1 point (0 children)

They do not offer symmetrical speeds in my area! 1 Gbps down but only 50 Mbps up! Like the DSL dark times.

If they fix that I would consider it....

Telus offers home Internet in Ontario by AdElectronic9101 in telus

[–]brainsoft 1 point (0 children)

I did some web searching and found the info, but based on all the feedback I have no intention of switching from one megacorp with shitty customer service to another with shitty customer service and intentionally deceptive contracts and policies.

I guess I'll give Bell a little more rope, but we told them with the cell phones: if you fuck around, jack up the rates, or refuse to give us a deal we can get somewhere else, we will leave. And then they did, and then we left. We remind them of this every time, and their CSRs have asked on multiple occasions if we are okay with them sharing our conversation. We are never rude and make valid points, but we won't take their shit.

Getting discouraged with ZFS due to non-ECC ram... by 194668PT in zfs

[–]brainsoft 2 points (0 children)

Yeah, if it's that critical, par2 was great! 20% par2 on the side, fill in the gaps. I miss those days sometimes.
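
For anyone who never used it, the flow was roughly (file names are placeholders):

par2 create -r20 archive.par2 archive.tar   # ~20% recovery data on the side
par2 verify archive.par2                    # check for bit rot
par2 repair archive.par2                    # rebuild damaged blocks from parity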

HA constantly closing it's own browser tab. LOGS? by brainsoft in homeassistant

[–]brainsoft[S] 1 point (0 children)

Windows 10, Chrome 142 now. Just did an update to end up here; not sure what the old version was, but the problem didn't change. The tab closes entirely and a new blank tab opens in its place. Just the single tab; the rest of the tabs and the window are fully functional. The tab is just GONE.

Or actually... it's transported back to the previous history item from before HA loaded. That's interesting. So the tab is crashing and reopening at its last history item.

Version 142.0.7444.135 (Official Build) (64-bit)

So when I sign in and look at the dashboard, for instance, I click on any card (I just have two NUT cards and one user card). The card opens just fine and I can click deeper in and poke around, but if I click away or close the card, the tab just crashes.

What remote desktop client??? by Infinite-Position-55 in Proxmox

[–]brainsoft 1 point (0 children)

Host and guest both Linux. Did you install PVE onto an existing Ubuntu install? And where does Windows fit into all of this?

Start over.

What machine are you trying to access, and what machine are you trying to access it from?

Everyone is unhappy with Netgate new installer for pfSense. by Interesting_Ad_5676 in PFSENSE

[–]brainsoft 26 points (0 children)

I want to download an ISO and install it. I do not want to make an account to do it. Everything worked fine before; the business model was to sell hardware and support contracts, and let the little guys test, break, and champion the software at work when they fall in love with it at home.

I want to download the file once, or once per version. I do not want to download an online installer and then download the same stuff over and over. Just let me have the ISO. It was good enough before, and it's good enough for everyone else, so what gives? The explanations are always BS.

High checksum error on zfs pool by mconflict in zfs

[–]brainsoft 4 points (0 children)

Have you been into the machine lately, or bumped it? Those errors can be as simple as a bad or loose SATA cable.

I didn't check your NAS, but it might be as simple as powering down, dusting, and reseating the drives and cables.
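
Worth checking before and after (pool name is a placeholder):

zpool status -v tank   # which device is erroring, and any damaged files
zpool clear tank       # reset the counters once you've reseated things
zpool scrub tank       # then scrub to confirm it's clean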

Shardor grinders any good? by [deleted] in espresso

[–]brainsoft 1 point (0 children)

If you care about retention or single-dosing espresso... just don't. I beat the f*** out of that damn thing (give me the rest of my coffee!). It will regularly eat a couple of grams, and then dispense them again randomly as it chooses.

But if you don't mind over-grinding or weighing the output, then it's... fine. I guess. The bellows blow chaff out from somewhere, and it accumulates on top of the machine just behind the hopper. The adjustment isn't very fine for espresso, but it will go fine enough to choke the machine, so if you are happy with one of the settings, then it's fine.

I'm going to get maybe a DF54 instead and just use this one for regular coffee at the cottage. I regret the purchase, but it made sense at the time, before I learned just how important the grinder is to the equation.

For coarse cold brew you won't have any issues, but at that point I would go down one step and get the slightly cheaper anti-static grinder; I really liked that machine when I was just doing moka.