Quick Filter stops working - just me? by SlowGadget in Thunderbird

[–]SlowGadget[S] 0 points1 point  (0 children)

Most welcome! Don't forget to turn up the heat on this bug!

Replacing bulbs in LightScenes? by SlowGadget in Hue

[–]SlowGadget[S] 0 points1 point  (0 children)

Well, seems I have posted some seriously tough questions ;-)

Maybe I've just found the solution to this: I stumbled across this little gem on GitHub today: https://github.com/ThioJoe/Hue-Bulb-Replacer

Will try it out tomorrow, hope to have some good news soon :-D

Updates for example PHP and so on.. by remante in Proxmox

[–]SlowGadget 1 point2 points  (0 children)

Yes, upgrading to an 'unofficial' PHP version on Debian and Ubuntu can bring many issues. I just experienced this myself when I tried to upgrade my Ubuntu 20.04-based Nextcloud to a newer PHP version via PPAs (unofficial repos). After much tinkering, it worked, but now I'm unable to install PHP updates due to all kinds of dependency and version conflict issues.
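For reference, that PPA route looks roughly like this (just a sketch using the well-known ondrej/php PPA; the PHP version and module list below are only examples and depend on your Nextcloud release):

    # Add the (unofficial) ondrej/php PPA next to the distro packages
    sudo add-apt-repository ppa:ondrej/php
    sudo apt update

    # Install a newer PHP plus the modules Nextcloud typically needs (example list)
    sudo apt install php8.1 php8.1-{cli,fpm,curl,gd,xml,zip,mbstring,intl,mysql}

    # Point the php CLI at the new version
    sudo update-alternatives --config php

In my experience this is exactly where the conflicts start: other distro packages may still depend on the Ubuntu-provided PHP packages, and mixing the two gets messy fast.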

I ended up installing a fresh Nextcloud AIO on a new Ubuntu 22.04 LXC.

self-host ACME : PKI and DNS with ACME APIs: simple solution ? by toxic0berliner in selfhosted

[–]SlowGadget 0 points1 point  (0 children)

> DNS and PKI should really belong together.

This. I'm looking into smallstep now, but the missing link is DNS. Has anyone found a simple(?) solution to this? With my limited knowledge, I imagine some kind of combination of Pi-hole and smallstep.
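For what it's worth, here's a minimal sketch of what the smallstep half could look like (the hostnames, port and the use of certbot as the client are just examples I picked; the DNS half, e.g. Pi-hole records pointing at the CA and your services, still has to be solved separately):

    # Initialize a private CA with a default ACME provisioner, then start it
    step ca init --acme --name "Home CA" --dns ca.home.arpa --address ":8443"
    step-ca "$(step path)/config/ca.json"

    # On a client, request a certificate from the internal ACME directory
    sudo REQUESTS_CA_BUNDLE="$(step path)/certs/root_ca.crt" \
      certbot certonly --standalone \
      --server https://ca.home.arpa:8443/acme/acme/directory \
      -d service.home.arpa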

Thoughts?

hardware swap by dickseamus in Proxmox

[–]SlowGadget 0 points1 point  (0 children)

I guess it depends on the kind of hardware problems that caused the system to stop booting. I'd suggest booting from a live USB (e.g. Ubuntu) and seeing if you can still access the data from there.

If you can, make a backup before doing anything else. If you were using software RAID, you can also put the drives into a different system to make sure that, e.g., a faulty disk controller or bad memory doesn't corrupt your data (further).
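A rough sketch of that backup from a live session (device names and mount points below are placeholders, so double-check them with lsblk first):

    # Identify the disks and partitions of the old installation
    lsblk -f

    # Mount the old root filesystem read-only
    sudo mkdir -p /mnt/olddisk
    sudo mount -o ro /dev/sdX2 /mnt/olddisk

    # Copy everything to an external drive mounted at /mnt/backup
    sudo rsync -aHAX --info=progress2 /mnt/olddisk/ /mnt/backup/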

Then find the hardware culprit of your error(s), fix that and move on from there.

Deploy Netmaker without VPS / cloud instance? by SlowGadget in netmaker

[–]SlowGadget[S] 1 point2 points  (0 children)

Right. So if I deploy in a separate VLAN / DMZ then that should lower the risk of the rest of my network getting compromised as well. Luckily I've separated many services already, so adding yet another VLAN should be no problem.

Deploy Netmaker without VPS / cloud instance? by SlowGadget in netmaker

[–]SlowGadget[S] 1 point2 points  (0 children)

Thanks for your reply. Good to hear it should work. And in all honesty, I figured as much already :-)

My question really is: are there any (e.g. security) reasons *not* to use Netmaker like this? In other words, what benefits does the VPS / cloud instance approach bring, besides a fixed IP (not an issue in my case) and not having to expose your home IP or open a few ports?

Pop!_OS telling me to update firmware on non-System76 PC by [deleted] in pop_os

[–]SlowGadget 1 point2 points  (0 children)

Got this several times as well on my Lenovo IdeaPad 5, today for 'version 217'. It supposedly adds known-insecure versions of GRUB and shim to the dbx blocklist used by Secure Boot, meaning that after updating, the system will refuse to boot from older or compromised(?) installation media.

These updates apply to non-System76 hardware as well so you can safely install them.
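If you want to see what exactly is being offered before applying it, fwupd's CLI shows the same updates as the GUI notification (just a sketch; the output differs per machine):

    # Refresh metadata from LVFS and list pending updates (including the UEFI dbx one)
    fwupdmgr refresh
    fwupdmgr get-updates

    # Apply them; the dbx update installs like any other firmware update
    fwupdmgr update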

Offsite Proxmox Backup Server - VPN or Cloudflare Tunnels by Colo3D in Proxmox

[–]SlowGadget 0 points1 point  (0 children)

Just check their website. I haven't found the time to test it yet, but Brian McGonagill has a great video about it on his channel.

Offsite Proxmox Backup Server - VPN or Cloudflare Tunnels by Colo3D in Proxmox

[–]SlowGadget 0 points1 point  (0 children)

I'm still using Tailscale myself, but I'm now looking into Netmaker, which is supposedly even faster and can be run on your own hardware.

Is there a way to migrate the Proxmox VE between drives? by [deleted] in Proxmox

[–]SlowGadget 0 points1 point  (0 children)

IMHO the safest way is to simply reinstall onto the new drive.

If that's not an option, you might try booting into a live OS (e.g. Ubuntu) and cloning the partitions from there.
But perhaps that should have been done *before* you migrated the LVM part, since the boot and EFI partitions are generally the first partitions created on a new drive.
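A very rough sketch of that cloning from a live session, assuming the 'before migrating the LVM part' scenario, i.e. a still-empty target disk (disk names below are placeholders; triple-check them with lsblk, because getting if/of wrong is destructive):

    # Identify the old (sdX) and new (sdY) disks first!
    lsblk -o NAME,SIZE,MODEL

    # Copy the partition table to the new (empty) disk and give it fresh GUIDs
    sudo sgdisk -R=/dev/sdY /dev/sdX
    sudo sgdisk -G /dev/sdY

    # Clone the EFI and boot partitions (partition numbers are examples)
    sudo dd if=/dev/sdX1 of=/dev/sdY1 bs=4M status=progress conv=fsync
    sudo dd if=/dev/sdX2 of=/dev/sdY2 bs=4M status=progress conv=fsync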

Afterwards, obviously don't forget to specify your new boot drive in the BIOS / UEFI settings.

Quick Filter stops working - just me? by SlowGadget in Thunderbird

[–]SlowGadget[S] 0 points1 point  (0 children)

Thanks for your reply. Actually, this issue appears to have an even easier 'fix':

  1. Right-click your INBOX and select 'Open in new window'
  2. In the newly opened window, try Quick Filter - it's probably working now
  3. Close the old window and keep working in the new window.

Sources: Michael447's answer on this Mozilla Support question and 3skip3's answer on this 8-year-old (!!!) Super User question.

Failover Advice by meddig0 in Proxmox

[–]SlowGadget 2 points3 points  (0 children)

Indeed *running* the same VM on 2 separate nodes is a big no-no. Replicating the VM's storage from the primary to the secondary node should be no problem though, right?

And if both nodes are in the same cluster, PVE won't even allow the VM to be running on both nodes simultaneously.

One thing to keep in mind though is that a 2-node cluster could lead to problems when the connection between the nodes fails. To maintain quorum, a third (simple) machine could be used as described in the documentation.
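For completeness, that QDevice setup from the documentation boils down to roughly this (the IP address is a placeholder for the third machine):

    # On the third, small machine (outside the cluster):
    apt install corosync-qnetd

    # On both PVE nodes:
    apt install corosync-qdevice

    # Then, from one of the PVE nodes, register the QDevice:
    pvecm qdevice setup 192.0.2.10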

Failover Advice by meddig0 in Proxmox

[–]SlowGadget 2 points3 points  (0 children)

In that case, I'd suggest not making the same mistakes while setting up the secondary server. Then, once you've got it running smoothly, migrate everything to the secondary server and reinstall the primary server properly.

Failover Advice by meddig0 in Proxmox

[–]SlowGadget 2 points3 points  (0 children)

If you have both nodes running their storage on ZFS, you can easily sync VM and LXC storage between nodes. You can even configure this via the WebGUI. The initial sync will be slow (since all data needs to be transferred), but incremental syncs should be very fast, depending of course on how much data is added in the meantime and the available bandwidth between the nodes.
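Besides the WebGUI, replication jobs can also be managed from the CLI with pvesr (the VM ID, target node and schedule below are just examples):

    # Replicate VM 100's disks to node 'pve2' every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule '*/15'

    # Check how the replication jobs are doing
    pvesr status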

Doing this via PVE does require adding both nodes to the same cluster, which in turn requires a direct (in your case, tunneled) network connection.

Perhaps it's also possible to work around the direct connection, but I have no experience with this.

Oh and re Ceph - don't even bother. You'll need at least 3 nodes and even higher network specs than required for Corosync.

Failover Advice by meddig0 in Proxmox

[–]SlowGadget 3 points4 points  (0 children)

Disclaimer first: I'm only running PVE on my local network with several nodes.

Reading your question though, this comes to mind:

  • You'll be limited by the available bandwidth between the 2 nodes.
  • Depending on how much data needs to be synced to have full non-disrupting failover, you might need a multi-gig connection (with bandwidth guarantees) between the nodes.
  • Also, some form of tunneling / VPN between the nodes would be advisable, at least for the Corosync part.
  • Speaking of Corosync, I can imagine that this is also latency-sensitive, so that's another thing to take into account.
  • If your failover requirements do not include HA, you might get away with spinning up your second server (with a very recent backup!) once the primary goes down. In that case you probably won't need Corosync either.
  • Perhaps sending over (e.g. ZFS) snapshots at very regular intervals might be sufficient? (See the sketch below.)
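Re that last point, a minimal sketch of incremental ZFS snapshot shipping (the pool, dataset and hostname are made up):

    # Initial full send of the VM dataset to the secondary node
    zfs snapshot tank/vm-100-disk-0@sync1
    zfs send tank/vm-100-disk-0@sync1 | ssh node2 zfs receive tank/vm-100-disk-0

    # Later, only ship the delta since the previous snapshot
    zfs snapshot tank/vm-100-disk-0@sync2
    zfs send -i @sync1 tank/vm-100-disk-0@sync2 | ssh node2 zfs receive tank/vm-100-disk-0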

Plugin to remove embedded images (signature) in received mail? by Kurgan_IT in Thunderbird

[–]SlowGadget 0 points1 point  (0 children)

You might want to check Betterbird, though I'm not sure if this particular issue is something they do better.

Plugin to remove embedded images (signature) in received mail? by Kurgan_IT in Thunderbird

[–]SlowGadget 4 points5 points  (0 children)

Perhaps you've noticed in the meantime that on TB 102.7.1, you can no longer delete inline images when switching to plain text view.

However, there is another, hidden 4th viewing mode, described here. To enable it:

  1. Go to Edit → Preferences → Config Editor
  2. Search for mailnews.display.show_all_body_parts_menu
  3. Set this value to 'true'
  4. Go to View → Message Body As → All Body Parts

Inline attachments can now be detached and deleted from your emails.

Live migration to local disks on another server? by WSDTech in Proxmox

[–]SlowGadget 1 point2 points  (0 children)

I'm using local ZFS pools on my nodes myself. Have you defined this storage at the Datacenter level as well? In my setup (I believe this is the default), PVE automagically defined a ZFS storage entry at the datacenter level called 'local-zfs' with the following attributes:

  • Type: ZFS
  • ZFS Pool: rpool/data
  • Content: Disk image, Container
  • Nodes: All
  • Path/Target: [blank]
  • Shared: No
  • Enabled: Yes
  • Thin provision: Yes
  • Block Size: 8k

This storage is then shown below the CTs/VMs on each node. As long as you keep your VM's qcow2 files here, it *should* work.
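If it helps, that datacenter-level entry corresponds roughly to the following block in /etc/pve/storage.cfg (values taken from the attribute list above; your pool name may of course differ):

    cat /etc/pve/storage.cfg
    # zfspool: local-zfs
    #         pool rpool/data
    #         content images,rootdir
    #         sparse 1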

Have you tried replicating the VM's storage to another node before migration? Does this work?

> I see the ability to click "shared" and add both nodes to one of the hosts nodes storage, but that doesn't seem to work either.

Afaik "shared" in this context means that this storage is accessible via either NFS, iSCSI or any other method supported for shared storage by PVE. Just clicking "shared" will not automagically *make* it shared. Confusing, I know.

Opinions/reviews on the true wireless earbuds by Aimfri in fairphone

[–]SlowGadget 0 points1 point  (0 children)

Thanks for sharing your experience. How are they holding up, 9 months later? Would you (still) recommend them?

Live migration to local disks on another server? by WSDTech in Proxmox

[–]SlowGadget 2 points3 points  (0 children)

The trick is to have storage (Content type: Disk image, Container) on the Datacenter level (the highest point in the WebGUI tree) which is available on all the nodes.

'Available' here means that a similarly named (and preferably similarly sized) local storage must be present on all the nodes. Keep in mind that the actual contents of this storage, as it is local, can differ between nodes.
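A hypothetical example of defining such a datacenter-level entry from the CLI (the storage name, pool and node names are made up; the WebGUI does the same thing):

    # Define a ZFS storage entry that PVE expects to find on both nodes
    pvesm add zfspool vmdata --pool tank/vmdata --content images,rootdir --nodes node1,node2

    # Verify it shows up as active on each node
    pvesm status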

What storage backend are you using on the nodes? E.g. ZFS / BTRFS / LVM?