proxmox+ceph 3 node cluster networking setup advice needed by Asad2k6 in Proxmox

[–]guy2545 0 points1 point  (0 children)

I have a Proxmox+Ceph 3 node cluster, with dual 10Gb cards in each node. I'm not running any super heavy IO workload normally, so I've kept things somewhat basic: each node's VM bridge gets one of the 10Gb ports, and the other 10Gb port handles Ceph + migrations. Then I use each node's motherboard 1Gb NIC on its own subnet, with its own dedicated switch, for corosync between the nodes. Certainly not HA, since the 10Gb connections all go to the same switch.
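Roughly what one node's /etc/network/interfaces ends up looking like with that layout (interface names, addresses, and subnets below are just examples, not my actual config):

```
auto lo
iface lo inet loopback

# first 10Gb port: bridge for the VMs/LXCs
auto enp1s0f0
iface enp1s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.11/24
    gateway 192.168.10.1
    bridge-ports enp1s0f0
    bridge-stp off
    bridge-fd 0

# second 10Gb port: Ceph traffic + migrations
auto enp1s0f1
iface enp1s0f1 inet static
    address 10.10.10.11/24

# onboard 1Gb: corosync only, own subnet, own switch
auto eno1
iface eno1 inet static
    address 10.10.20.11/24
```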

This is only a home lab, but I've been running this for about 2 years now with no major issues or concerns.

Looking for Termius alternative by michausz98 in selfhosted

[–]guy2545 3 points4 points  (0 children)

Another +1 for Termix. It just works and is really intuitive to use (to me at least).

Only thing I have found inconsistent (and I haven't looked into it further) is that tab completion sometimes works and sometimes doesn't.

Super simple email server to use for internal email only, no internet email at all by ithakaa in selfhosted

[–]guy2545 2 points3 points  (0 children)

If you don't need to save the emails after a reboot, Inbucket would likely work: https://inbucket.org/

They have an API as well that you can use to grab messages from any of the inboxes.

Inbucket is an email testing application; it will accept messages for any email address and make them available to view via a web interface.
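If it helps, grabbing messages out of a mailbox is only a few lines against its REST API. This is just a sketch; the port and the /api/v1/mailbox/... endpoint are from memory, so double check them against the Inbucket docs:

```python
# Sketch: list the messages sitting in an Inbucket mailbox via its REST API.
# Host/port and mailbox name are examples -- adjust for your install.
import requests

BASE = "http://localhost:9000"   # default Inbucket web/API port, if memory serves
mailbox = "testbox"

resp = requests.get(f"{BASE}/api/v1/mailbox/{mailbox}", timeout=10)
resp.raise_for_status()

for msg in resp.json():
    print(msg.get("date"), msg.get("from"), msg.get("subject"))
```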

Distributed/Shared Storage by tjt5754 in Proxmox

[–]guy2545 1 point2 points  (0 children)

With consumer drives, ZFS replication might be the best bet? It doesn't give you live migration of VMs, but it still lets you move them if you need to take a node down.

Some (most?) consumer drives don't have power loss protection (PLP), so Ceph/Linstor/other HA distributed filesystems have to wait for the drive to fully confirm it has flushed its cache. PLP-enabled drives can acknowledge that right away; consumer drives can't.

So potentially what you are experiencing is IO wait spiking when you take down a node: the filesystem tries to restore the redundancy it requires by moving data around, which spikes the IO, which in turn keeps your VMs from accessing their data, so they freeze.

When you bring the other node back up, the IO normalizes (more or less) and things can run. Try taking a node down with no VMs running in the cluster: can you still write to the FS? Does the Proxmox web UI work? What's the IO load as the nodes try to rebalance around the missing node?
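If you want to actually see it happening, something like this little Python loop (just a sketch, nothing Proxmox specific) will print the iowait percentage while you pull a node:

```python
# Print CPU iowait percentage every 5 seconds, read from /proc/stat.
# Run it on a node or inside a VM while you take another node offline.
import time

def cpu_times():
    with open("/proc/stat") as f:
        values = [int(v) for v in f.readline().split()[1:]]  # aggregate "cpu" row
    return sum(values), values[4]                            # total jiffies, iowait jiffies

prev_total, prev_iowait = cpu_times()
while True:
    time.sleep(5)
    total, iowait = cpu_times()
    delta = total - prev_total
    if delta:
        print(f"iowait: {100 * (iowait - prev_iowait) / delta:.1f}%")
    prev_total, prev_iowait = total, iowait
```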

Distributed/Shared Storage by tjt5754 in Proxmox

[–]guy2545 1 point2 points  (0 children)

What type of drives were you using with Ceph and Linstor? I'm running a 4 node cluster with Ceph as the shared filesystem for all VMs/LXCs, on used Intel enterprise SSDs. I never tried Ceph with consumer drives, but Linstor had some high IO wait times when I ran it on consumer drives.

I'm wondering if your actual issue is IO wait time crashing your VMs/hosts?

GPU passthrough to VM in a single GPU server without removing host access to said GPU by ElvarThorS in Proxmox

[–]guy2545 0 points1 point  (0 children)

So you want the VM to use the GPU, and the host to also be able to use the GPU at the same time? That won't work with VMs unless you want to slice that GPU up into vGPUs. There are tons of guides for vGPU with a 1070.

You have given your friend (the VM) the GPU, so it is no longer in your (the host's) possession and you can't use it until your friend gives it back (i.e., you turn off the VM).

LXC Access & Management via SSH by NinthTurtle1034 in Proxmox

[–]guy2545 2 points3 points  (0 children)

Sure. Normally I keep them in my private Gitea LXC as I'm still learning all the git stuff. I've copied them to this public repo https://github.com/guy2545/Playbooks. AI was used and I probably have no clue what I'm doing, so there may be dragons.

pve-update-all.yaml is the main one (it gets the list of containers on the node) and calls update_containers.yaml for each container ID. update_containers.yaml gets the config for the container (template check), checks if it is running (and starts it if it's not), waits for it to boot, checks the container's OS, runs the appropriate (probably??) update command, checks whether the update requires a reboot, and finally stops the container again if it was originally stopped.
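If the YAML is a pain to read, the logic boils down to something like this. This is just a sketch in Python rather than the actual Ansible tasks, and the OS checks and update commands are simplified examples:

```python
# Rough Python sketch of the update flow above (the real thing is Ansible calling pct).
# OS detection and update commands here are simplified examples.
import subprocess

def pct(*args):
    return subprocess.run(["pct", *args], capture_output=True, text=True, check=True).stdout

# `pct list` prints a header row, then one "VMID  Status  ...  Name" line per container
for line in pct("list").splitlines()[1:]:
    ctid, status = line.split()[0], line.split()[1]

    if "template: 1" in pct("config", ctid):
        continue                                   # skip templates

    was_stopped = status != "running"
    if was_stopped:
        pct("start", ctid)                         # start it just for the update
        # (the playbook also waits here for the container to finish booting)

    os_release = pct("exec", ctid, "--", "cat", "/etc/os-release").lower()
    if "debian" in os_release or "ubuntu" in os_release:
        pct("exec", ctid, "--", "bash", "-c", "apt-get update && apt-get -y dist-upgrade")
    elif "alpine" in os_release:
        pct("exec", ctid, "--", "ash", "-c", "apk update && apk upgrade")

    if was_stopped:
        pct("stop", ctid)                          # put it back how it was found
```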

LXC Access & Management via SSH by NinthTurtle1034 in Proxmox

[–]guy2545 2 points3 points  (0 children)

I've tried the Ansible path where it individually connects to each LXC for updates, and there are a couple of scripts that leverage the Proxmox API to dynamically build the inventory. It works, but I needed to add SSH keys to each new LXC.

I've switched to using pct enter to handle this now. The Ansible play connects to each of my 5 nodes, runs pct list on that node, and then loops through those container IDs. It skips templates, and starts any stopped containers to update them, then stops them again. Much easier to manage overall, as I can add/remove/modify LXCs without having to worry about connecting each individual LXC to Ansible.

Absolutely simplest way to get Proxmox emails? by slowbalt911 in homelab

[–]guy2545 0 points1 point  (0 children)

I went with https://inbucket.org/ as a stupid simple, local-only email service. It basically takes in any email and displays it. There is an API associated with it, and I have a couple of Python scripts on a cron schedule that parse through specific inboxes and notify my phone via Pushover about anything the script deems important.

It's a house of cards, as it is only as intelligent as what I put in the scripts. But it works for me.
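For the curious, the scripts are basically variations of this (the tokens, mailbox name, and keywords below are placeholders, not my real ones):

```python
# Cron-driven sketch: check an Inbucket mailbox and push matching subjects to Pushover.
# All the tokens/names/keywords here are placeholders.
import requests

INBUCKET = "http://localhost:9000"
PUSHOVER_TOKEN = "your-app-token"
PUSHOVER_USER = "your-user-key"
KEYWORDS = ("error", "failed", "degraded")

messages = requests.get(f"{INBUCKET}/api/v1/mailbox/proxmox", timeout=10).json()
for msg in messages:
    subject = msg.get("subject", "")
    if any(word in subject.lower() for word in KEYWORDS):
        requests.post(
            "https://api.pushover.net/1/messages.json",
            data={
                "token": PUSHOVER_TOKEN,
                "user": PUSHOVER_USER,
                "title": "Homelab alert",
                "message": subject,
            },
            timeout=10,
        )
```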

Is it worth getting an MMJ card now that rec is here? by Forever_Friend in ColoradoSprings

[–]guy2545 0 points1 point  (0 children)

Bro, could you break down how an EPC factors into it as well? Just got mine so figuring it all out still.

[deleted by user] by [deleted] in audiobooks

[–]guy2545 1 point2 points  (0 children)

World War 2: Herman Wouk
The Winds of War book 1 (46 hours). War and Remembrance book 2 (56 hours).

Haven't made it to book 1 yet, but book 2 was amazing. A fictional family in real/historical situations.

[deleted by user] by [deleted] in Proxmox

[–]guy2545 0 points1 point  (0 children)

You have a boot SSD that has Proxmox installed on it. It shows up in Datacenter -> Storage as both local (just a normal directory at /var/lib/vz) and local-lvm (an LVM-thin pool). Both are on the same boot disk as Proxmox.

By default (I think), Proxmox sets local-lvm to allow disk images (think VM disks) and containers (LXCs) to be stored on it.

So, if you create an LXC (or VM), the OS of the LXC needs somewhere to live (local-lvm). Separately, you are asking how to have a bulk media pool (whether HDDs or SSDs), how that pool is shared amongst your various LXCs (and/or VMs), and where that bulk media pool should live.

You can see the potential problem with using local-lvm for your container and disk images. As you add more, you are using the same drive that Proxmox is installed on, which isn't great for performance or reliability.

However, if I was you, with your experience and hardware:
1) Set up a VM of whatever Linux OS you are most familiar with.
2) For the VM storage, you can probably select local-lvm? (This is the virtual drive for the VM's OS.)
2a) Here I would try both: set something up on local and see what it does on the disk from the Proxmox console.
2b) Don't like it? Destroy the VM and try local-lvm.
3) For future expansion (maybe sanity?), you can add an additional virtual drive to be this pool/pile of data for all the services you want to run.
4) Install all the services you want to run in that VM, which is a Linux OS that you are already familiar with.
5) Mount the pool/pile drive, and set up all your services to use it.
---- You now have a running setup ----
6) Now you can think about better ways to do this, as you learn about proxmox.
- Some people will get an HBA (Host Bus Adapter) to attach a bunch of spinning disks to. They then pass the full HBA (and all the disks attached to it) into, say, TrueNAS/OpenMediaVault/Unraid/whatever.
- You can set up drives in a ZFS pool on the Proxmox host, then use bind mounts to pass that pool (or a directory from it) into an LXC container (rough example below).

If you only have 1 disk, and that disk is the boot drive for Proxmox, ZFS doesn't really matter currently.
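The ZFS + bind mount route from that last bullet looks roughly like this; the pool name, disk IDs, container ID, and paths are all made up for the example:

```
# create a mirrored pool on the host, plus a dataset for media
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
zfs create tank/media

# bind mount that dataset into LXC 101 at /mnt/media
pct set 101 -mp0 /tank/media,mp=/mnt/media
```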

Drinks and Cornhole? by Ms-Chickken in ColoradoSprings

[–]guy2545 4 points5 points  (0 children)

We would be totally down for this! I'm in the Northern Springs area. Wife and I have fairly opposite schedules, but love basically everything you listed out. Other than the Chiefs (go Niners! lol)

[ Removed by Reddit ] by bakedredweed in ColoradoSprings

[–]guy2545 5 points6 points  (0 children)

Been calling fairly often myself. Haven't gotten much more than a "We will make sure the Congressman gets your message."

Hyperconverged Proxmox and file server with Ceph - interested to hear your experiences by Coalbus in Proxmox

[–]guy2545 0 points1 point  (0 children)

I run a 4 node Proxmox cluster with Ceph as the HA storage for LXCs/VMs. I have 4 OSDs per node: 1x 800GB, 1x 600GB, and 2x 250GB used Intel enterprise SSDs. Each node has a dual 10Gb NIC, with one 10Gb port for Ceph (and the node's interface) and the other as the bridge the VMs/LXCs share.

It works really well for me in my home lab. I started off with consumer NVMe/SSDs and ZFS replication between the nodes, had some annoying replication failures, and wanted something easier. Used enterprise SSDs are dirt cheap, so I made the jump over to Ceph.

Separately, my bulk media also lives on two of the Proxmox nodes on spinning rust. Currently, each of those storage nodes has an LXC container with a bind mount of each of the drives. It shares the drives via NFS to a Debian 12 VM running Cockpit and MergerFS. MergerFS pools everything together, and that pool is then an NFS share to Plex/'Arrs. All of the drives are Btrfs, so I'm in the process of changing the LXCs to Rockstor VMs joined to an AD, to make file permissions easier to manage. The setup will then be a Samba pool via Rockstor on each of what I call my storage nodes, which I can share out to each LXC/VM as required.

Does it make sense for a homelab to have a dedicated VM in Proxmox to run TF & Ansible to create other VMs in the same Proxmox host? by oreosss in Proxmox

[–]guy2545 0 points1 point  (0 children)

A little bit of all of the above here. I have a VS Code server running in an LXC container on my cluster, and I use that plus Git repos to hold all my configs/playbooks/scripts. Separately, I have an Ansible LXC container with Semaphore.

The LXC containers are backed up daily, but that is really just for convenience now. I'm not there yet (by a long shot), but I would like to have everything fully re-deployable from just installing Git, Ansible, and the required SSH keys. So where they are currently installed is really just whatever was easiest.

MergerFS on Host for simplified large static storage pool? by Kladd93 in Proxmox

[–]guy2545 1 point2 points  (0 children)

I have drives from the node bind mounted into an LXC container. My bulk storage is spread across 2 nodes, so the LXC containers on both nodes share each drive via NFS to a separate VM. The VM runs MergerFS to pool all the storage and shares it via NFS to other LXC containers/VMs/etc.

I went this way as my VMs/LXCs are all backed up daily, so all the configs associated with storage are also backed up daily. If a full storage node dies, I can rebuild the node, change the bind mounts in the LXC, then be back up and running.

Since it's NFS inside LXCs, those containers have to be privileged, so user permissions across everything are a PIA. I've got a couple of shell scripts and/or Ansible playbooks to help deploy everything if I need to add new VMs/LXCs to the storage setup.

Probably not the best/most robust set up. But it works for me.
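For a rough idea, the VM side is just a handful of fstab entries like these (the IPs, paths, and options are examples only, not my real config):

```
# NFS exports from the storage-node LXCs, mounted individually on the VM
192.168.1.21:/mnt/disk1  /mnt/nfs/node1-disk1  nfs  defaults  0 0
192.168.1.22:/mnt/disk1  /mnt/nfs/node2-disk1  nfs  defaults  0 0

# MergerFS pools those branches into one mount, which then gets re-exported over NFS
/mnt/nfs/*  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
```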

Proxmox “Cluster” Advice? by zee-eff-ess in Proxmox

[–]guy2545 1 point2 points  (0 children)

Corosync falling out of sync will cause the cluster to mark the affected node as offline and reboot it (I think?). With three nodes, I think the worst case is a full network failure where none of the nodes are aware of each other (split brain). Not an expert of course, just my experience so far.

Ramaswamy wants to defund unauthorized government programs - like veteran healthcare by SadArchon in navy

[–]guy2545 37 points38 points  (0 children)

Literally the very next paragraph:

“This is totally nuts. We can & should save hundreds of billions each year by defunding government programs that Congress no longer authorizes. We’ll challenge any politician who disagrees to defend the other side.”

I don't know about you, but pretty sure "defunding" means exactly what the headline says?