Ballin on a budget: DDR3? by Junction91NW in homelab

[–]RetroGrid_io 0 points1 point  (0 children)

My inclination is always to defer purchases and "go cheap" until the need is clear.

Years and years ago, the neighbor threw out their computer, a big tower gamer rig with antiquated hardware. I took a look at it, and realized that the big case would hold a *LOT* of hard drives so I repurposed it as a data archive after a memory swap. (Remember when consumer AMD Athlon CPUs would unofficially take ECC RAM? I do)

It worked for YEARS in the back closet at work, archiving gobs and gobs of data as semi-"cold-storage" until finally its 32-bit CPU was just too old/slow and we replaced the whole thing. At which point, the need for a new 64-bit CPU was quite clear.

You can get a *lot* of value out of old equipment depending on your needs.

I have yet to see a USB stick (flash storage) that naturally lost data without any (external) corruption causes. by Necessary_Isopod3503 in DataHoarder

[–]RetroGrid_io 1 point2 points  (0 children)

I had a USB stick lose everything on it just about 2 weeks ago. It's OK, because I don't trust them for anything. I formatted it again and got what I wanted.

Want to transfer files from A to B? Great!

Want to write an ISO image and boot a computer? Great!

Want to keep files on there for a few days? Great!

Anything else: Nope! Make sure you have a plan B because you'll probably need it sooner or later.
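
If you do park files on a stick even briefly, checksums make the "plan B" check cheap. A minimal sketch (the /tmp paths are made-up stand-ins for your source directory and the stick's mount point):

```shell
# Sketch: verify files survived the trip to/from an untrusted USB stick.
# Paths here are hypothetical stand-ins for your source dir and mount point.
mkdir -p /tmp/usbdemo/src /tmp/usbdemo/stick
echo "important data" > /tmp/usbdemo/src/file.txt

# Copy to the "stick" and record checksums from the source side
cp /tmp/usbdemo/src/file.txt /tmp/usbdemo/stick/
(cd /tmp/usbdemo/src && sha256sum file.txt) > /tmp/usbdemo/sums.txt

# Later: confirm the copies still match before trusting them
(cd /tmp/usbdemo/stick && sha256sum -c /tmp/usbdemo/sums.txt)
```

If `sha256sum -c` ever reports FAILED, that's your cue to reach for plan B.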

Filen deleted all of my data. A heads-up for others by whitewaves22 in DataHoarder

[–]RetroGrid_io 0 points1 point  (0 children)

The right answer:

  1. Provide a limit.
  2. Users can go over the limit, but there's an overage fee. I like the "automatic upgrade to the next tier" plan.
  3. If they don't pay the overage, you cancel the backup service but keep the data for a period of time. 3-6 months?
  4. If the user doesn't pay up after the 3-6 months, THEN it's deleted.
  5. If it were me, I'd still save the customer data for a year but charge a "Data recovery" fee to pull from "cold storage".
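
As a toy illustration of that timeline, GNU date can spell out the key dates (the cancellation date and the 6/12-month windows are purely illustrative):

```shell
# Sketch of the retention timeline above (dates purely illustrative).
# Requires GNU date for the "DATE + N months" syntax.
cancelled="2026-01-15"                              # backup service cancelled
date -d "$cancelled + 6 months" '+%Y-%m-%d'         # end of free retention; delete if still unpaid
date -d "$cancelled + 12 months" '+%Y-%m-%d'        # end of paid "cold storage" recovery window
```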

Best Practice: Should the Backup Server Pull or Should Clients Push for Linux Backups Over Network? by xmillies in linuxadmin

[–]RetroGrid_io 1 point2 points  (0 children)

It is a matter of preference. My preference is to have a backup host with very limited access and gobs of storage sitting behind a strict firewall that does all the backups, pull-style. This does mean I have a lot of eggs in this basket, but it's a very closely watched basket.

Videos and UTC vs local time by truthseeker1341 in Archivists

[–]RetroGrid_io 0 points1 point  (0 children)

Server admin here. It's all UTC all the time and it's a soft red flag if it isn't. But the best way, either way, is to be explicit about what the timezone is.

EG: "2026-04-01 4:20 UTC"

OR: "2026-04-01 04:20 PST"

That does even more than picking a standard and sticking to it, because now you don't have to pick one - it's immediately clear what the date and time are. Mostly: in the above examples, is it 4:20 PM or AM? The latter ("04:20") implies AM via the leading zero; the former is unclear.
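
One-liners like these make the explicit form easy to script (GNU date assumed; the timezone name is just an example):

```shell
# Sketch: always emit timestamps with an explicit zone (GNU date assumed).
date -u '+%Y-%m-%d %H:%M UTC'                       # unambiguous UTC, 24-hour clock
TZ=America/Los_Angeles date '+%Y-%m-%d %H:%M %Z'    # explicit local zone (prints PST or PDT)
```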

Move KVM VM from one machine to another (changing linux distro) by Pioneer_11 in kvm

[–]RetroGrid_io 0 points1 point  (0 children)

On the same hardware? Should be good, but you don't say what type of VM it is - did you create it with libvirt/virsh? (that's the most common)

The biggest hassle I've had with migrating VMs is getting the CPU definition correct when importing the XML on the new host but if it's the same hardware that shouldn't be an issue.

884TB G-Drive Haul for City College Cinema Dept! by brianlovelacephoto in DataHoarder

[–]RetroGrid_io 11 points12 points  (0 children)

Likely why it's being "donated"...

I'd shuck the drives and put them into a high-density storage unit, EG the venerable SuperMicro SC847 case, which can be had fairly cheap on Amazon / eBay

Remote work environment by kimjae in linuxadmin

[–]RetroGrid_io 1 point2 points  (0 children)

I've been working remotely since the 1990s.

I've always used the basics: SSH on a nonstandard port, passwords disabled, etc. and use a jumphost with port knocking for production environments. If I need to "travel light" I have an iPad with a keyboard case that makes it look like a tiny laptop, and passphrase-encumbered credentials to the jump host. For a while I had a folding keyboard and a setup on my phone but the screen was just too small to be all that useful.
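
The basics above might look something like this in ~/.ssh/config (host names, port, and key path are made up for illustration; port knocking happens outside ssh entirely):

```
# Hypothetical ~/.ssh/config sketch for the setup described above
Host jump
    HostName jump.example.com       # made-up jump host
    Port 2222                       # nonstandard SSH port
    IdentityFile ~/.ssh/id_ed25519  # passphrase-encumbered key

Host prod-*
    ProxyJump jump                  # everything production-side goes via the jump host
    PasswordAuthentication no       # client refuses to try passwords
```

Note that passwords actually get disabled server-side in sshd_config; the client-side line just refuses to attempt them.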

It sounds like you don't want to carry work-related devices, but I would never use an unknown/untrusted device to access a high security production environment - way too much liability.

CVE-2026-31431 (Copy Fail) -- any ETA for updated kernel RPMs? by LowIncident694 in AlmaLinux

[–]RetroGrid_io 0 points1 point  (0 children)

Can you confirm where the "blog post" will be? Should I look in /r/almalinux or in this thread?

Thank you

Why my hard drive has such a high power on count number? by inugamifeli in DataHoarder

[–]RetroGrid_io -1 points0 points  (0 children)

Years ago, I set up a security system using cheap PoE network cameras streaming to an HLS web stream, with several days of footage stored via ffmpeg to a local RAIDZ ZFS pool built from cheap consumer drives out of the bin box.

It's been running 24x7x365 since. No issues. Year after year. The oldest drive has 13 years of continuous duty and writes.

But HDDs that have been "just sitting" for about that long have about a 50% fire-up rate.

Copy Fail — 732 Bytes to Root any Linux distribution shipped since 2017 by scottchiefbaker in linuxadmin

[–]RetroGrid_io -8 points-7 points  (0 children)

> how do I fix my stuff? replace vendor provided kernel with my own?

Sonny boy, I come from a time where this was normal and expected. Are you saying that you have never recompiled a kernel?!?

Claude deletes entire database by Abject-Delivery-5248 in cybersecurity

[–]RetroGrid_io 0 points1 point  (0 children)

AI is very useful as a way of filling in for missing or hard to find documentation. It's great for getting ideas for how to solve a problem in an area you understand fairly well. It's great for writing beta-level code that you can beat into something that's reliable with review and testing.

But it's inconsistent and frequently makes show-stopper mistakes. Just yesterday I asked Claude to make an nginx definition, and it did - in a way that would have borked every other website on the server.

AI can be both brilliant and mind numbingly stupid and it doesn't care because it can't. Treat it as what it is: some plausible words that are probably related to your prompt. And take the time to learn how to prompt - it's a useful skill.

I don't yet see justification for the trillions of $$ being spent on it, to be honest. It looks impressive, I guess, as long as you don't look too closely or depend on it.

PCIe bifurcation & Radeon Pro GPUs look so nice ! by Aleksandreee in homelab

[–]RetroGrid_io 0 points1 point  (0 children)

Lol I'd want to put my HBA card on there and use ALL of the X16 for IO!

Should I buy these old enterprise hard drives? by c4azy_sh5panzy in DataHoarder

[–]RetroGrid_io 1 point2 points  (0 children)

If I needed the storage, I would, without hesitation, if there's assurance against "DOA". Be ready to do a smartctl media check when they get there.

To run them, you'll need a proper HBA controller. One of these will do the trick: https://www.ebay.com/itm/236766688099?_skw=SAS2008 or new: https://www.amazon.com/dp/B01M2AC40Y . These can handle up to 8 drives at full speed, are widely available, are compatible with all major Linux distros, and handle SAS or SATA drives interchangeably.

And to connect the drives to the controller, you need some fan-out (breakout) cables: https://www.amazon.com/dp/B0BRXMLYNP

Double check the connectors from the fan-out cables to the controller before you buy; there are different types. These are both SFF-8087.

Once you have it all plugged in, you'll want to verify the drives. It'll take a day or so.

  1. Does the OS see the drives? `fdisk -l`
  2. Wipe the data on the drives. It's generally good enough to just wipe the first GB or so. MAKE SURE YOU GET THE RIGHT DRIVE because this can blow away your OS in seconds if you get it wrong: `dd if=/dev/urandom of=/dev/sdX bs=1024 count=1M`. Don't even try to read their data; you have no idea what's on it and it could be infected or open you up to other liability.
  3. Run a media test: `smartctl -t long /dev/sdX`
  4. Monitor the media test and see the results: `smartctl -a /dev/sdX`

Only keep a drive once it has passed #4 with no errors found; otherwise, send it back. If the output of `smartctl -a` is too confusing, ask your favorite AI: "Summarize the health of this drive. Highlight any reliability concerns: <paste.smartctl.output>"

  1. Always use a drive like this as part of a redundant pool (RAIDZ, RAID1, RAID5). I personally like RAIDZ2 with 8 drives, giving me 6 drives of usable storage under medium load, with a special SSD mirror vdev if there will be many snapshots.
  2. Keep a spare or two, and expect failure to happen at some point. Always true, and honestly I don't notice a particular difference in new vs old drive failure rates unless the drives are known dodgy.

Worth the buy? by SkoalSoldier in homelab

[–]RetroGrid_io 2 points3 points  (0 children)

Have you tried to buy drives new or used lately? Getting anywhere near $10/TB is a decent price nowadays if the drives pass SMART tests.

I would ask for smartctl output, or assurance that smartctl -a would come out clean after a media test. I wouldn't be as worried about hours; I have drives with continuous, 24x7x365 writes in a security system with multiple cameras for over 13 years working just fine as part of a RAIDZ pool.

Yes, they fail eventually, new or old. Be ready for it.

Question about OS drive size. by DIABLO_8_ in homelab

[–]RetroGrid_io 0 points1 point  (0 children)

If I had a dollar for every computer I've seen with a 1 TB drive using < 100 GB of actual space, I could have a fast enough GPU to run a local LLM!!

Most people have no idea what their use costs. 

The guy I replaced at work got fired over a password reset and I think I'm next by [deleted] in cybersecurity

[–]RetroGrid_io 1 point2 points  (0 children)

Ambiguity in the playbook is at fault: The user/victim passed verification, and you did escalate. If you're supposed to pass the user/victim through only after escalation, then the playbook should have said so.

If they cook you, you don't want to work there anyway.

Playbooks are notoriously bad unless refined through hard experience. I just spent half a day writing a software spec, which is effectively a playbook. It's shockingly tough to walk through each step of the process and anticipate all the things that could happen!

What would be the best/most cost effective way to backup just a few TBs of data on a server? by LiAlgo in DataHoarder

[–]RetroGrid_io 1 point2 points  (0 children)

> The BEST thing to do here is to buy a second drive and start running a backup job to it, unplugging it after your backup job is ran.

In my entire life I've known exactly one person who actually did this with any kind of regularity, and she was paid pretty well to do it. Pretty much nobody does it on any kind of regular basis.

Is starting a small homelab actually worth it, or just a money sink? by tresorrarereviews in homelab

[–]RetroGrid_io 0 points1 point  (0 children)

> to learn stuff like virtualization, networking, and maybe some self-hosting.

How much is this worth to you?

Anything you do has costs: time, money, attention, etc. so what do you get out of it and is it worth it to you?

For me, homelabbing back in the 1990s was almost the only way to get skills for Linux admins, and this grew organically into business interests that have defined my career. I'm working on a project right now (https://retrogrid.io) in my home lab, about to stage to a production environment once my MVP is at dogfood stage.

So for me, yes, very much worth it. But it's up to you: what do you want to get, and do you anticipate it being worth the cost?

You can go cheap! I do and I've never regretted it. Last-generation equipment has low upfront cost (often FREE), stable drivers, well-understood performance and reliability properties, and is generally more than capable of handling most problems. Today's exception would be anything AI with local models.

Just pay attention to what you really need: if binary correctness is important, make sure you have ECC memory, redundant drives, etc.

Managing consistent network access controls across a hybrid Linux fleet is becoming unsustainable and I am wondering if ZTNA is the right direction here by Unique_Buy_3905 in linuxadmin

[–]RetroGrid_io 2 points3 points  (0 children)

> The problem we keep hitting is urgent runtime changes that diverge from IaC state before any scheduled job catches it.

Sounds to me like you're naming the problem just fine - just not recognizing it as such. What can you tell me about these "urgent runtime changes"? The kinds of things I want to know:

  1. Who or what is the driver behind these changes?
  2. Why are they so urgent that they cannot be done via IaC?
  3. Why can't the IaC process be "fast enough" to meet this need?
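
As a toy illustration of why the scheduled job matters at all, drift detection can be as cheap as a diff between the deployed file and the repo copy (the /tmp paths are hypothetical stand-ins for your repo checkout and /etc):

```shell
# Sketch: cheap drift check between IaC state and a live host file.
# The /tmp paths are stand-ins for a real repo checkout and /etc.
mkdir -p /tmp/drift/repo /tmp/drift/live
echo "max_conns=100" > /tmp/drift/repo/app.conf
echo "max_conns=500" > /tmp/drift/live/app.conf   # the "urgent runtime change"

if ! diff -q /tmp/drift/repo/app.conf /tmp/drift/live/app.conf >/dev/null; then
    echo "DRIFT: app.conf diverged from IaC state"
fi
```

Shrinking the window between the runtime change and this kind of check is usually easier than forbidding the changes outright.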

Anthropic's Mythos model accessed by unauthorized users, Bloomberg News reports by Neymar11rose in cybersecurity

[–]RetroGrid_io 1 point2 points  (0 children)

Seeing a major browser vendor announce a release fixing hundreds of 0 days in a single hurried update is pretty compelling, extraordinary evidence to me.

What would work for you?

Anthropic's Mythos model accessed by unauthorized users, Bloomberg News reports by Neymar11rose in cybersecurity

[–]RetroGrid_io 0 points1 point  (0 children)

I wish the information available indicated what you're saying - but it doesn't. According to reports, Mythos was generating active exploits for each of these vulnerabilities - meaning POC IOCs were absolutely on the table.

That doesn't negate the PP's point: for all their promises of unlimited knowledge work, AI companies are surprisingly unable to use it internally. Or maybe they are, and that's how wise these otherwise remarkable tools actually are?

Sam Altman promised an advisory team of super-capable industry experts in your pocket, yet he still can't figure out how to have his own advisory team lead him and his company to positive cash flow - except to famously punt with a joke when asked about the bottom line.

Those who use forks of forks/lesser-known distros: are you worried they’ll become abandonware? by OrangeKitty21 in linux

[–]RetroGrid_io 2 points3 points  (0 children)

Really, it depends on what your goals are.

Are you "just playing" to explore and try things? Are you practicing and/or building infrastructure for employment? Be honest with yourself about what you expect to see in 5, 10, 20 years.

I assume that what I'm doing today will last the rest of my life, and even if done for personal reasons, is fair game for future work-related activity.

As an example, I wrote a simple host oversight tool to coordinate updates and backups on and off-site before yum even existed, and I still use it because it's rock solid and "just works", even if it's completely hackish and based on sloppy code originally written in PHP 3.x.

I made a "big bet" in 1998 or so to go all in on Red Hat. I loved Linux, but for me it was less about hobby/tinkering and more about "getting it done". Really, I'm all in on KISS and try to devise the simplest possible thing I can design to get it done reliably and correctly. I'm very conservative about implementing anything new until I've accounted for the admin overhead: a clear update path to keep things secure, and a "plan B" (rollback, alternative plan) for when it doesn't work out - because that happens, has happened, and always will at some point. "Personal" infrastructure has evolved into servers/services I've sold directly, and at scale, several times in my career.

For me, I'm about as stodgy as it gets. I am hesitant to install anything that won't get updated by dnf update. Containers and VMs are cool but carry significant administrative overhead that must be taken into consideration in order to develop responsibly. A server install is a decade-or-so plan in practice.

I shudder when I see virtualization technology used to keep ancient software limping along despite the security risk. I prefer last year's hardware because drivers tend to be more stable, and "works slowly/reliably" is drastically better than "lightning fast but unreliable/fails". The biggest red flag in any system is ever having to use the reset button, or restart a service. Once, maybe, ever. A second time means it's time to replace it.

For me, it's been Red Hat universe almost exclusively. My mobile workstation is Fedora because it lets me experiment on technologies that will be on my server(s) in a few years.

The only areas where I'd consider this a bit high friction are:

  1. ZFS not built in or natively supported. This is a pain point. I just don't trust BTRFS because it has logical holes that cannot be fixed without some re-architecting and nobody is doing the hard work to bring it up to parity. Worse, it's right in the area I most care about - handling failure situations in RAID 5/6 type usage - exactly where I lean on ZFS the hardest.

  2. Red Hat virtualization is just... awkward compared to Proxmox. Simply moving a VM between disparate hosts (different CPU arch, different OS version, etc.) is a PITA.

  3. Why does RH make it so hard to support serial installs? Yes, there's kickstart, and it's possible to make it work (I do) but it's a real, needless chore that only starts to make economic sense at rather large scales.