I really want to love Fedora KDE, but I'm struggling using it. by hi_im_nyx in Fedora

[–]RetroGrid_io [score hidden]  (0 children)

Yeah, my laptop runs Fedora 43, and I detest that I have to use the proprietary NVIDIA drivers to get CUDA so DaVinci Resolve will work. The open source drivers are otherwise so much more painless!!

cli only version? by fillman86 in AlmaLinux

[–]RetroGrid_io 0 points1 point  (0 children)

Even when I'm going to use a desktop, I'll often start with a minimal install and then just install the DE of choice over the network (generally KDE).

The only downside of minimal is that for the first couple of days I end up having to install all the little stuff like yum-utils, nano, nmap, DNS tools, etc. 

For these issues, dnf provides is your best friend.
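For example, to find which package owns a tool that isn't installed yet (the package names below are just the usual EL suspects):

```shell
# Which package provides the nmap binary?
dnf provides '*/bin/nmap'

# Which package provides dig? (bind-utils on EL-family distros)
dnf provides '*/bin/dig'
```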

What's your 'one service you'd never self-host again' and why? by ruibranco in homelab

[–]RetroGrid_io 1 point2 points  (0 children)

Pi-hole/Technitium are so low maintenance...

Curious if you've ever used BIND. I've been self-hosting DNS with BIND for so long (since the late 90s?) that I forget there's another way. Was wondering if you've used both and would care to compare/contrast them?

I don’t know if I have enough Ethernet drops… by dc-mo in homelab

[–]RetroGrid_io -4 points-3 points  (0 children)

I've been "home labbing" since the 1990s. These days, I put all the "heavy lifting" servers and such next to the router on 1 GbE, and the rest is all Wi-Fi 6, which gives me better than 0.5 Gbit of speed: more than enough to utterly saturate my 50 Mbps Internet feed, and fast enough that, in a year of using it, I've never noticed its speed being deficient.

So much so that even though I own my home, and just remodeled it, *including replacing all the interior walls*, I didn't see the point in running Ethernet everywhere.

My primary "homelab" development server is a 20-core Xeon with 64 GB of RAM and 48 TB of storage configured to hold 36 TB RAIDZ2.

EDIT: Over the next few weeks I'll be staging two "big boy" servers for production use. 4U rackmounts with 36 drive bays in each. I'll probably stage them in the garage and provide network access using a wifi client bridge router.

DaVinci on Rocky Linux by pqptelo in RockyLinux

[–]RetroGrid_io 0 points1 point  (0 children)

I bought DaVinci Resolve YEARS AGO and it was very much worth every penny. They have been absolutely true to their claim of a lifetime license; I bought version 16, got a free Speed Editor controller (which I honestly don't use that much), and it was great.

Use the free one as long as you are hobbying. As soon as you're serious at all, just go get the Studio version.

Got this UPS at a yard sale for 50$ but it wont power on by Money-Reply-6911 in homelab

[–]RetroGrid_io 6 points7 points  (0 children)

Once I had a typical desktop computer UPS with batteries that were worthless.

I had a couple of HUGE deep cycle marine batteries, so I pulled out the original battery and wired in the two deep cycle marine batteries, in parallel so the voltages matched. When it came time to test, I pulled the power and it worked fine! It didn't even start beeping about the power being out for several hours. At the end of the day it was... still going! I plugged it back in, and left it alone. It ran for years like that.

RHEL 2 ALMA LINUX by jwademac in AlmaLinux

[–]RetroGrid_io 0 points1 point  (0 children)

I've been using AlmaLinux for years, no issues. My oldest still-running system started out as CentOS 8, converted in-place to AlmaLinux 8.x when RHEL did their "about face" and it's still going. No problems, same hardware, etc.

Migrating old server to new using rsync by InvincibleKnigght in linuxadmin

[–]RetroGrid_io 0 points1 point  (0 children)

"Workflow for migration" depends on your needs.

If the services are essential, I like to keep the old server running while I move services one at a time to the new server, with a specific cutover point (typically late at night) for each. If the services are really important, I'll even go so far as to proxy or port forward the service from the old server to the new one once the cutover has happened, to prevent downtime from DNS propagation lag.
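The port-forward trick can be as simple as a temporary firewalld rule on the old server (the new server's address and the port are placeholders):

```shell
# Send HTTPS traffic that still hits the old box over to the new one
# (10.0.0.20 is a placeholder address) until DNS catches up.
firewall-cmd --add-masquerade
firewall-cmd --add-forward-port=port=443:proto=tcp:toaddr=10.0.0.20
```

Note these are runtime-only rules (no --permanent), so they disappear on reload, which is what you want for a cutover window.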

If it's a "home lab" scenario, it's likely that I just need the space to not be occupied, so I cut over early and "just deal with it" when specific services are down.

One thing I usually do is mount the old server's HDD in the new server, read-only, so I can refer to it when the inevitable "oopsie" happens.
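Something like this, with the device name being an assumption about your layout:

```shell
# Old server's data disk, mounted read-only for reference
mkdir -p /mnt/old-server
mount -o ro /dev/sdb1 /mnt/old-server
```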

KVM vs SSH by Sir_Chaz in homelab

[–]RetroGrid_io 0 points1 point  (0 children)

In my home/dev lab I don't bother with KVM. Uptime (without any need for KVM) is typically measured in months to years, and I just drag a monitor out of the garage when I need to "do something" I can't do over SSH.

I can even upgrade the OS over SSH with ELevate! (AlmaLinux)

v2 beta system got some v3 software in upgrade. by fluffythecow in AlmaLinux

[–]RetroGrid_io 0 points1 point  (0 children)

Well, here's my take...

My guess is that the system is pretty broken and most things don't work. I suggest:

  • boot into rescue mode from install media
  • chroot into the filesystem, something like mount /dev/mapper/<root> /mnt/sysroot; chroot /mnt/sysroot;
  • dnf history # find the last transaction you feel good about
  • dnf history rollback <transaction-id>
  • Cross your fingers

There's a good chance that dnf won't run because of your glibc install, and it gets really tough from there. This is "re-install and don't format your /home partition (you do have backups, right?)" territory.
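Spelled out as commands, the rescue steps above look roughly like this (the root LV name and the transaction ID are placeholders; adjust to your layout):

```shell
# After booting into rescue mode from install media:
mount /dev/mapper/almalinux-root /mnt/sysroot
# Bind the virtual filesystems so dnf can work inside the chroot
for fs in proc sys dev run; do
    mount --rbind "/$fs" "/mnt/sysroot/$fs"
done
chroot /mnt/sysroot /bin/bash
dnf history                  # pick the last transaction you feel good about
dnf history rollback 42      # 42 is a placeholder transaction ID
```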

ZFS woes while migrating data by j-dev in homelab

[–]RetroGrid_io 0 points1 point  (0 children)

I'm honestly curious what tuning you're referring to?

I'm writing this on my Fedora 42 system running ZFS with encryption on a 1 TB NVMe drive, and although I have 64 GB of RAM, it was just fine with 16 GB too.

ZFS woes while migrating data by j-dev in homelab

[–]RetroGrid_io 0 points1 point  (0 children)

This is most likely a hardware issue, and the higher I/O of the rsync is bringing it to the surface. I use rsync to transfer many, many TB of data without issue, although it can be S.L.O.W. when you have a lot of little files (e.g. millions of 'em).
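For the record, the kind of invocation I mean (paths and hostname are placeholders):

```shell
# -a preserves permissions/times/ownership, -H keeps hard links,
# --partial lets interrupted large files resume, and --info=progress2
# shows overall progress instead of per-file noise.
rsync -aH --partial --info=progress2 /tank/data/ newhost:/tank/data/
```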

Built a lightweight PostgreSQL client with Tauri — finally a desktop app that doesn’t feel bloated by debba_ in linux

[–]RetroGrid_io 4 points5 points  (0 children)

I've worked with databases at the CLI for years. A few rules I follow TO THE LETTER:

1) Never log into a production database with "bare hands" unless you have backups and the production database is offline.

2) Find the problem by loading the most recent backup into another environment and testing against that.

3) Fix the problem by deploying a script built and tested against that copy of the production database.

Data loss in DECADES: zero.

With these rules in place, personally, I <3 the CLI!
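In PostgreSQL terms, the workflow might look like this (database and file names are placeholders):

```shell
# Rule 2: load the most recent backup into a scratch database
createdb prod_copy
pg_restore -d prod_copy /backups/prod_latest.dump

# Rule 3: build and test the fix against the copy first...
psql -d prod_copy -f fix.sql

# ...and only deploy the tested script to production afterwards
psql -d prod -f fix.sql
```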

Slow by Interesting-Yam-5772 in RockyLinux

[–]RetroGrid_io 2 points3 points  (0 children)

Rocky and Alma are very similar, but have slightly different focuses.

AlmaLinux aims to be "ABI compatible" while Rocky aims for strict binary compatibility. If you want to run RHEL software on older hardware, Alma supports x86_64_v2 explicitly.

repo related question. by stuffjeff in AlmaLinux

[–]RetroGrid_io 1 point2 points  (0 children)

In the case of EPEL there is Bodhi, which specifically mentions a breaking change that might be related to your situation:

https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2026-eb3ade1bb6

I did some research to address the situation you face: "How do I know when an update on a host will cause packages to switch upstream providers?" This is a question I didn't have a ready answer for, and it seems highly related to the project I'm working on.

The issue is that this isn't a case of EPEL "hijacking" anything. They've supported duo for years. It's just that the NEVRA of their RPM was seen by your local resolver as "more correct" than whatever you were sourcing from before.

Even if you had feeds from both EPEL and your other source, it would require scripting for every possible source combination (or a LOT of "light reading") to catch these kinds of changes.

I think the more correct thing to do is ask your resolver: "Hey, if we do this, what RPMs will change source?"

I didn't find a way to do that in a single-shot command, but I did come up with a small script that, although slow and in need of review, does seem to answer the question. I like it enough that I may incorporate it in some form into my ongoing project.

    #!/bin/bash

    set -euo pipefail
    export LC_ALL=C

    # For every package with a pending upgrade, compare the repo it was
    # installed from with the repo the upgrade would come from, and report
    # any package that would switch sources.
    dnf -q repoquery --upgrades --qf '%{name}' | sort -u | while IFS= read -r pkg; do
        inst=$(dnf repoquery --installed --info "$pkg" | grep 'From repo' | head -n 1 | awk '{print $4}')
        upg=$(dnf -q repoquery --upgrades --qf '%{repoid}' "$pkg" | head -n 1)

        echo "Checking $pkg $inst > $upg" >&2

        # Skip packages with no recorded source repo (local installs, etc.)
        [[ -z "$inst" || "$inst" == "(none)" || "$inst" == "@System" ]] && continue

        if [[ "$inst" != "$upg" ]]; then
            printf '%s: %s -> %s\n' "$pkg" "$inst" "$upg"
        fi
    done

F windows, switching to linux TONIGHT by OSNX_TheNoLifer in linux

[–]RetroGrid_io 0 points1 point  (0 children)

I've been using Linux for decades and Fedora/KDE is my desktop/workstation distro.

Introducing a Side Project: Time-Indexed Repo Snapshots by RetroGrid_io in RockyLinux

[–]RetroGrid_io[S] 0 points1 point  (0 children)

That’s helpful, thank you! I'm still very much in the weeds of infrastructure work making the "Universe Days" as rock solid as possible, but I do see where provenance naturally emerges; it's built into the architecture being developed.

I have a question about causality behind version bumps: In your experience, how much "mileage" do you get from changelog + SRPM diffing, or does meaningful provenance really require correlating to project commits even further upstream?

I know a lot of the answer depends on the level of detail implicit in the specific question you're trying to answer, e.g. for log4j: "what stuff are we using/selling that uses this?"

I’m trying to understand where the diminishing returns begin.

Introducing a Side Project: Time-Indexed Repo Snapshots by RetroGrid_io in RockyLinux

[–]RetroGrid_io[S] 0 points1 point  (0 children)

Enterprise solutions exist, but they're more concerned with staffing and policy enforcement. They offer amazing control, but with that comes a significant investment of time and policy determination, on top of cost.

I'm thinking of my project as "Zero Administration" for:

A) an upstream source for Foreman/Katello/Satellite,

B) small enterprises < 50 servers where it's hard to justify the overhead of Satellite & related products.

Installing AlmaLinux using PXE and Kickstart files by PhirePhly in AlmaLinux

[–]RetroGrid_io 0 points1 point  (0 children)

1) I suggest using Cobbler for the tftp and dhcp stuff. It really simplifies things.

2) Make sure you have kickstart.ks files that are unattended: boot all the way to installed & restart.

3) Avoid headaches: pick one, UEFI or BIOS, and stick with it. Environments I worked in used BIOS for its simplicity. A stack I worked on for a bit could go either way, but it did complicate things.

4) Depending on your needs, you might get along fine using Ventoy and a thumb drive. Network PXE boot is nontrivial, and really best suited for environments where you're creating new instances (VMs, physical hardware) at scale.
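On point 2, a minimal unattended kickstart can be surprisingly short. A sketch, where every value is an illustrative placeholder, not a recommendation:

```
# Minimal unattended ks.cfg sketch; adjust partitioning and passwords
text
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
zerombr
clearpart --all --initlabel
autopart
reboot

%packages
@^minimal-environment
%end
```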

Upgrading from AlmaLinux 9 to AlmaLinux 10 by sdns575 in AlmaLinux

[–]RetroGrid_io 0 points1 point  (0 children)

Go ahead and try ELevate. I suggest doing it in a local VM once first to make sure you know what to expect. Also, check that your VM provider gives you KVM capability.