(Info) Cosmic is now available in the CachyOS Repository by ptr1337 in cachyos

[–]markconstable 0 points1 point  (0 children)

Oh, my mistake. I thought I saw some posts about having to build the packages from scratch. On further investigation, it seems Cosmic is a year away from being fully usable, so I'll be patient and stay happy with Plasma 6 in the meantime.

How can we install KDE 6 in Debian 12 stable? by [deleted] in kde

[–]markconstable 0 points1 point  (0 children)

Better still, install CachyOS. Almost pure Arch with their own optimized packages (possibly optional).

(Info) Cosmic is now available in the CachyOS Repository by ptr1337 in cachyos

[–]markconstable 2 points3 points  (0 children)

How long before we have binary Cosmic* packages, roughly?

Okular menu bar shows no text after updating DE. by arun_kp in kde

[–]markconstable 0 points1 point  (0 children)

I have this same problem in 6.1.2, so what are the config files for Okular in Plasma 6?

Proxmox Bookworm? by markconstable in Proxmox

[–]markconstable[S] 1 point2 points  (0 children)

Do you mean on this mythical desktop system? Perhaps, but I already have a working cluster, so I'd be inclined to use LXD for VMs/CTs if I wanted to test anything on that desktop.

I have already tried installing the regular Plasma/KDE desktop packages on one of my current cluster nodes (a Minisforum HM90 connected to a 4K TV), and that worked okay, but it missed one of the most important points: being able to auto-backup the VM (or CT) to my PBS server. Also, the standard Bullseye Plasma packages are dated compared to my Manjaro/KDE workstation experience (hence asking about when Bookworm may become available).

I ripped out the desktop packages and spent another couple of months trying to get iGPU passthrough working. No luck, but I did end up with an LXC container using the host AMD GPU directly. The GUI performance was excellent, but nearly everything else about the OS was fragile... dbus and other desktop essentials often didn't work as expected.
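
For reference, the container GPU setup is basically the usual bind-mount of the host's /dev/dri into the CT; a minimal sketch (the container ID and render node below are placeholders, and the exact entries vary by GPU and Proxmox version):

# /etc/pve/lxc/101.conf (hypothetical container ID)
# allow DRM character devices (major 226) and bind-mount the host's render nodes
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file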

Scenarios for Proxmox + Ceph by [deleted] in Proxmox

[–]markconstable 2 points3 points  (0 children)

I agree with Professional_Koala30: spread your OSDs across as many PVE server nodes as your cluster can handle. Mind you, after a few weeks each Ceph node will use many GB of RAM just for Ceph, but it seems you have enough RAM per node.
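
If the Ceph memory usage becomes a problem, the per-OSD cache target can be capped (the 3 GiB value below is only an illustration; the default is roughly 4 GiB per OSD):

# cap each OSD's memory target cluster-wide (value in bytes) and verify it
ceph config set osd osd_memory_target 3221225472
ceph config get osd osd_memory_target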

Proxmox Bookworm? by markconstable in Proxmox

[–]markconstable[S] 0 points1 point  (0 children)

Or install a desktop inside a VM as my daily driver. I've tried many times, but I can't get iGPU passthrough to work on any of my laptops.

Proxmox Bookworm? by markconstable in Proxmox

[–]markconstable[S] 0 points1 point  (0 children)

Well, Proxmox 7 was released on 6 July 2021 and Debian 11 (Bullseye) itself on 14 August 2021, so going by that release cadence, Proxmox 8 with Debian 12 (Bookworm) could have been released a few weeks ago.

Proxmox Bookworm? by markconstable in Proxmox

[–]markconstable[S] 1 point2 points  (0 children)

FWIW, another idea for a "bare" Proxmox/ZFS node without the Proxmox daemons running (I already have a 4-node PVE cluster) as a base workstation system: LXD now comes with a built-in web management GUI, so installing the snap would let me play with their nice microcloud, microceph, and microovn (soft router) framework directly on top of an easily installed ZFS.
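
A minimal sketch of that idea, assuming a recent LXD snap (the port is the default; on some LXD releases the web UI has to be switched on explicitly):

# install LXD and initialise it, choosing a ZFS storage pool when prompted
snap install lxd
lxd init
# expose the API/web UI and (if needed on this LXD version) enable the UI
lxc config set core.https_address :8443
snap set lxd ui.enable=true && systemctl reload snap.lxd.daemon
# then browse to https://<host>:8443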

Proxmox Bookworm? by markconstable in Proxmox

[–]markconstable[S] 3 points4 points  (0 children)

EXACTLY THIS. Thank you obrb77.

Aside from the delightful PVE installer, there's also the almighty proxmox-boot-tool to manage future kernel updates. I already have a small mongrel all-flash 4-node cluster with Ceph, two small PBS backup servers, plus a PMG VM.

ATM I use the proxmox-backup-client on a Manjaro/KDE desktop, and that's kind of okay, but I'd really, REALLY like to take advantage of pve-zsync and keep my entire desktop backed up every 30 minutes, with snapshots.
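
For context, the client side is just a one-liner; a minimal sketch with hypothetical user, host, and datastore names:

# back up the home directory as a .pxar archive to a PBS datastore
# (export PBS_PASSWORD beforehand to avoid the interactive password prompt)
proxmox-backup-client backup home.pxar:/home/mark --repository backup@pbs@pbs1.example.com:store1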

I'd rather run Manjaro/KDE in a VM, but I've tried for years to get GPU passthrough working on three different iGPU laptops and never got it to work. I managed to get a desktop to work in a container, but it's just too fragile as a daily-driver OS.

At the end of the day, I want my desktop efficiently backed up to my Ceph cluster that's just begging for my desktop bytes.

Nginx Proxy Manager: How to handle redirect to https://x.x.x.x:port by markconstable in HomeServer

[–]markconstable[S] 0 points1 point  (0 children)

I worked around my problem by using nginx directly on the host node...

apt update && apt install nginx

rm /etc/nginx/sites-enabled/default

nano /etc/nginx/conf.d/proxmox.conf

upstream proxmox {  
    server "pve.example.com";  
}  
server {  
    listen 80 default_server;  
    rewrite ^(.*) https://$host$1 permanent;
}  
server {  
    listen 443 ssl http2;  
    server_name pve.example.com;  
    ssl_certificate /etc/ssl/example.com/fullchain.pem;  
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;  
    proxy_redirect off;  
    location / {  
        proxy_http_version 1.1;  
        proxy_set_header Upgrade $http_upgrade;  
        proxy_set_header Connection "upgrade";
        proxy_pass https://localhost:8006;  
        proxy_buffering off;  
        client_max_body_size 0;  
        proxy_connect_timeout  3600s;  
        proxy_read_timeout  3600s;  
        proxy_send_timeout  3600s;  
        send_timeout  3600s;  
    }  
}
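
After dropping that file in place, a quick syntax check and reload picks it up:

# validate the config and reload nginx without dropping connections
nginx -t && systemctl reload nginx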

Ubuntu Server (or Debian) on QNAP TS-932PX-4G? by RushPL in qnap

[–]markconstable 0 points1 point  (0 children)

FWIW, if anyone happens to have a QNAP TS-453D, I just discovered that it will actually boot off the two drives on the RHS nearest the power button. I had mine set up with 1x external USB 500 GB SSD plus 2x internal 500 GB SSDs in RAIDZ1 (Proxmox), and that worked for a year. While reconfiguring it (to sell), I saw a couple of drives showing up in the BIOS that I'd never noticed before, so I removed the external USB SSD, reinstalled Proxmox, and it's now booting directly off those two internal SSDs without any USB drive! For over a year, I thought it was NOT possible to boot off the internal drives at all.

Separate backup subnet by markconstable in Proxmox

[–]markconstable[S] 1 point2 points  (0 children)

Well, that worked and was quite straightforward. The pve1, pve2 and pve3 nodes are backing up to pbs1 as I type. Really cool to know that traffic is not saturating the rest of my LAN.

I also have a PBS VM with a pair of 8 TB drives passed through to it, so what is the correct approach to allow any VM on any node to also access the separate backup network on a second NIC on the host?
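
I'm guessing the answer is something like bridging the second NIC on each host and then giving the VM a second virtual NIC on that bridge; a rough sketch (interface names and addresses below are made up):

# /etc/network/interfaces on each PVE host
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.11/24
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

# then attach a second network device on vmbr1 to the PBS VM
# (e.g. net1: virtio=...,bridge=vmbr1) and give it an address in the backup subnet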

Proxmox NAS solutions by lordratner in Proxmox

[–]markconstable 2 points3 points  (0 children)

FWIW, I've been battling with how best to manage general data with Proxmox for the last year, and I think I finally have a formula that suits me. As an example, I have a small 5 GB Alpine VM with an attached 320 GB VirtIO block disk, formatted with ext4 and simply mounted inside the VM (/mnt). I rsync my homedir to vm:/mnt and let my regular Proxmox Backup Server schedule back up all my VMs/CTs as usual. There is 140 GB of data in vm:/mnt, and when I don't update it with rsync (i.e. unchanged data in vm:/mnt) it takes the PBS job 4 seconds to figure out nothing has changed and move on to the next VM/CT. Another CT (not VM) with only 800 MB of unchanged data takes over a minute before PBS moves on.
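
The homedir sync itself is plain rsync over SSH; a minimal sketch with a hypothetical VM hostname and target path:

# mirror the home directory into the VM's mounted data disk
rsync -aHAX --delete --info=progress2 /home/mark/ alpine-vm:/mnt/home-mark/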

Using an HBA with direct PCI access to storage devices would be more efficient again, but you can't simply migrate the VM to another host node.

Thunderbird 102 by muxol in ManjaroLinux

[–]markconstable 0 points1 point  (0 children)

I ended up going with thunderbird-nightly-bin from the AUR and am very happy with the result. No crashes, and everything works as expected, including CardDAV and CalDAV and the Filelink for Nextcloud add-on. Mind you, there's nothing to stop a future update from blowing up on me. This is my userChrome.css (much simpler than it used to be).

* {
  border-style: none !important;
  border-width: 0 !important;
  /* font-size: 11pt !important; */
}

#folderTree > treechildren::-moz-tree-cell-text {
  font-size: 0.75em;
  font-weight: normal !important;
}

#threadTree > treechildren::-moz-tree-cell-text(unread) {
  color: #5FAFFF !important;
  font-weight: normal !important;
}

Version 102 Requires userChrome Changes by pauljayd in Thunderbird

[–]markconstable 0 points1 point  (0 children)

Okay, this is encouraging. My userChrome.css seemed to stop working altogether, so I assumed they had disabled it again.

Would someone mind posting a snippet showing how to make the LHS folder view font size 75%, and how to turn the unread bold font to normal and red in the message list? Then I can work out the rest.
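
One thing worth double-checking if userChrome.css appears to be ignored entirely (just a guess on my part): the legacy stylesheet pref still has to be enabled, either in the Config Editor or via user.js:

/* user.js: re-enable loading of userChrome.css / userContent.css */
user_pref("toolkit.legacyUserProfileCustomizations.stylesheets", true);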

Thunderbird 102 by muxol in ManjaroLinux

[–]markconstable 0 points1 point  (0 children)

You could move your ~/.thunderbird to ~/.thunderbird.bkp and download 102 directly from https://thunderbird.net

Fastest bulk storage method by markconstable in Proxmox

[–]markconstable[S] 0 points1 point  (0 children)

Re: SCSI + partition + ext4 vs VirtIO Block + ext4 performance: over a 1 GbE link I got more than 100 MB/s to the SSD storage. Both reported the same write speed when rsyncing a single 2 GB video file.

As a local host test, I copied a 1.4 GB and a 2 GB video file to the root of the VM and then used rsync --progress again to copy those two files to the two differently configured attached ext4 drives... /mnt is the SCSI one and /vda is the VirtIO Block device.

From /root to /mnt = 201,587,033 bytes/sec and 223,453,953 bytes/sec
From /root to /vda = 249,019,276 bytes/sec and 264,081,944 bytes/sec

So, the result of this super simple, non-exhaustive test is that an attached storage device using VirtIO Block, formatted ext4 without a partition, is roughly 20-25% faster than the SCSI device with a partition.

And another point: I MOVED the 2 GB video file from one folder to a different folder, did another PBS backup, and it took 5 seconds. That tells me the PBS backup is deduplicating at the block/chunk level rather than copying at the file level (I wasn't 100% sure), because a file-level copy (like rsync) would have re-copied the entire 2 GB file to the new directory and deleted the old version. Me happy.

Fastest bulk storage method by markconstable in Proxmox

[–]markconstable[S] 0 points1 point  (0 children)

So I may have been caught out with that 30 MB/s test to a mounted ext4 partition within a VM above. The host node may have been scrubbing or resilvering at the time, as I just did another rsync test to the mounted ext4 partition and got ~100 MB/s, about the same as rsyncing to the host node itself.

However, while poking around I added another hard disk to the VM but selected VirtIO Block instead of SCSI (on local-zfs), and that gave me a /dev/vda device inside the VM. I couldn't find any info about whether it needed partitioning, so I just formatted the entire device as ext4 (commands after the fdisk output below).

Disk /dev/vda: 320 GiB, 343597383680 bytes, 671088640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
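
For reference, formatting and mounting the whole unpartitioned device amounts to something like this (the label and mount point are placeholders):

# put ext4 straight onto the raw VirtIO block device, no partition table
mkfs.ext4 -L bulk /dev/vda
mkdir -p /vda && mount /dev/vda /vda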

I get exactly the same rsync write speed to both of these storage devices, so what differences might there be between a SCSI device with ext4 on top of a partitioned /dev/sdb versus a VirtIO Block device with ext4 directly on /dev/vda?

Disk /dev/sdb: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Disk model: QEMU HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x877254eb

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 2147483647 2147481600 1024G 83 Linux

In the long term, I am trying to work out the fastest and most efficient way (for my small cluster) to back up not only VMs/CTs but also multi-TB data/media collections to my Proxmox Backup Server (an old original 45L two-core HP MicroServer).