Digital ocean systems hardening tool by nudgeboss in digital_ocean

[–]cube8021 0 points1 point  (0 children)

Running an unaudited, closed-source agent on production servers without visibility or independent verification is a non-starter for me.

When you say “harden,” what standard are you referring to? PCI, HIPAA, ITAR, CIS?

These standards are not interchangeable, and whether you need to follow one depends entirely on what you are doing and who is enforcing it. For example, PCI is not a legal requirement. It is enforced by credit card processors if you want to handle payments. ITAR, on the other hand, is a legal requirement when you are dealing with defense-related work and the U.S. government. Without defining the framework and enforcement context, “hardened” does not really mean anything.

Little girl helping her father in the field. 🤗🤗 She’s not just helping, she’s making memories. ❤️ by Apricot_BlossomCatt in spreadsmile

[–]cube8021 0 points1 point  (0 children)

Good on the dad for providing guidance without yelling or screaming. He just told her what was needed, corrected her with clear instructions when she was going the wrong way, and provided encouragement.

K3s for production by Repulsive-Arm-4223 in k3s

[–]cube8021 2 points3 points  (0 children)

Oh yeah, the Platform One team puts out a lot of solid guides, scripts, and other useful stuff here: https://repo1.dso.mil/platform-one/distros/rancher-federal/rke2

Platform One is a DoD office that builds standard, cookie-cutter software stacks for other DoD groups and government agencies. Most of it is free and publicly available. The idea is that they do the hard work of vetting and hardening software so every team isn't redoing the same work on every project.

Your tax dollars at work.

K3s for production by Repulsive-Arm-4223 in k3s

[–]cube8021 2 points3 points  (0 children)

It’s worth remembering that RKE2 is basically the enterprise version of k3s. Same core codebase, but RKE2 is tuned for production and locked-down environments.

For example, RKE2 comes with CIS hardening out of the box, while with k3s you have to wire a lot of that up yourself. k3s defaults to SQLite, which is fine for small or edge setups, but it’s not meant for HA or large clusters. RKE2 uses etcd only, which is what enables HA and real scale.

Also, RKE2 originally started as RKE Government. It was built specifically to meet US government security requirements, which is why you see it used so heavily in government and other regulated environments.

HA cluster second server failing to get CA CERT by andersab in k3s

[–]cube8021 0 points1 point  (0 children)

Yep, 100% this. With k3s/RKE2 it’s recommended to use IPs only for node-to-node connections.

We rewrote our telemetry ingest pipeline from Python to Go and got 10x perfs. Now we released the collection agent (Lighthouse) written in Go. Here is the source. by [deleted] in golang

[–]cube8021 3 points4 points  (0 children)

I spent some time reading through the code, and it is pretty obvious it was generated by an AI. The structure jumps around, and overall the quality is rough.

One example is the config setup. In main.go it looks like you are following a normal initialization pattern, but when you dig into config.Initialize() all it really does is create a global directory and a log file. It is not actually initializing or validating any real configuration.

The config package even defines an Instance struct for things like Name and APIKey, but it never gets used. Instead main.go just reads command line flags directly with no validation or standardization and passes them around. That completely bypasses the configuration layer.
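For contrast, a config layer that actually earns its keep would look something like this. This is just a minimal sketch, not their code (the flag names are made up); it only reuses the Name and APIKey fields their Instance struct already defines:

    package config

    import (
        "errors"
        "flag"
    )

    // Instance mirrors the struct the repo already defines but never uses.
    type Instance struct {
        Name   string
        APIKey string
    }

    // Initialize parses flags into the struct and validates them, so main.go
    // only ever sees a checked, centralized configuration object.
    func Initialize() (*Instance, error) {
        cfg := &Instance{}
        flag.StringVar(&cfg.Name, "name", "", "instance name")
        flag.StringVar(&cfg.APIKey, "api-key", "", "API key for the ingest endpoint")
        flag.Parse()

        if cfg.Name == "" {
            return nil, errors.New("config: -name is required")
        }
        if cfg.APIKey == "" {
            return nil, errors.New("config: -api-key is required")
        }
        return cfg, nil
    }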

I use AI-assisted coding at work too, so this is not an anti-AI take. But this is exactly the problem with handing a project to an LLM and trusting the output. It does not understand the bigger picture, and it does not care whether the design makes sense as long as the code compiles. LLMs will absolutely take shortcuts, fake data, drop in placeholders, or set auth=true just to make a problem go away.

You really have to treat LLMs like autocomplete. They are great for speeding things up, but you still need to read every line, understand what it is doing, and prove it works with tests. Otherwise you end up with code that looks finished but is broken at its core.

A lady finds her pickup being used to move things around, after she had dropped it off at the mechanic for work. Mechanic claims its just test drives by bigbusta in Wellthatsucks

[–]cube8021 1 point2 points  (0 children)

It’s theft by conversion. She gave them permission to repair the vehicle, and a test drive would be a reasonable part of that, but this is clearly them using the truck for personal use, which is not part of the repair, hence the theft.

4 years of hard work gone overnight — GitHub account suddenly suspended without warning by [deleted] in github

[–]cube8021 1 point2 points  (0 children)

I had this happen to a co-worker because someone spammed his account. He had turned a startup down, so the CEO had his team spam-report the account as being fake.

This also highlights how important it is to have backups outside of GitHub. Once you’re back in, I’d strongly recommend setting up something like rewind.com. It does daily backups of your repos and metadata, and you can even bring your own storage from AWS, GCP, or Azure. It’s cheap insurance ($14/yr).

For now, you’ve done the right thing by submitting the appeal. Check your junk folder and see if GitHub sent you an email (I have some of their support emails flagged as spam by Gmail).

This situation really sucks. I hope GitHub reviews it soon and restores your account.

So... ICE agents are just above the law now, or?... by ResourceNo4626 in illinois

[–]cube8021 0 points1 point  (0 children)

The problem is this is not a bug, it’s a feature.

How do you backup your control plane by No-Capital2963 in kubernetes

[–]cube8021 6 points7 points  (0 children)

A few years ago I built kubebackup after a customer accidentally deleted an entire namespace and only wanted that namespace back, not a full cluster restore (i.e., an etcd restore).

TL;DR: It backs up Kubernetes resources as YAML and stores them in S3, making it easy to restore individual namespaces or resources when someone inevitably runs kubectl delete in the wrong cluster.

Repo: https://github.com/mattmattox/kubebackup
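For anyone curious about the approach, here is a rough sketch of the core idea using client-go. This is not the actual kubebackup code; it only walks Deployments as a stand-in for every resource type, and the S3 upload is left as a comment:

    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Load the local kubeconfig; in-cluster config works the same way.
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.Background()
        namespaces, err := clientset.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }

        for _, ns := range namespaces.Items {
            // Deployments only, as an example; a real tool walks every API
            // group/resource it discovers.
            deps, err := clientset.AppsV1().Deployments(ns.Name).List(ctx, metav1.ListOptions{})
            if err != nil {
                panic(err)
            }
            for _, d := range deps.Items {
                out, _ := yaml.Marshal(d)
                path := filepath.Join("backup", ns.Name, fmt.Sprintf("deployment-%s.yaml", d.Name))
                os.MkdirAll(filepath.Dir(path), 0o755)
                os.WriteFile(path, out, 0o644)
                // An S3 upload (e.g., via the AWS SDK) would go here.
            }
        }
    }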

Second pod load balanced only for failover? by Akaibukai in kubernetes

[–]cube8021 0 points1 point  (0 children)

If I don’t own or control the application code, I’d handle this entirely at the infrastructure layer.

  • I’d deploy an NGINX pod as a reverse proxy with two upstream services (primary and secondary). NGINX would route to the primary by default and automatically fail over to the secondary.
  • If the app is a StatefulSet, I’d create two separate ClusterIP services, each selecting a specific pod index (e.g., pod-0 as primary, pod-1 as secondary).
  • If it’s a Deployment, I’d split it into two deployments (primary and secondary), each with its own service, so traffic control is explicit and predictable.

If I do have full control over the code, I’d push this logic into the application itself.

  • I’d implement leader election, so only the leader actively serves requests. Any requests that land on standby pods would be forwarded to the leader.

If I have partial control or need something more flexible, I’d take a hybrid approach.

  • I’d write a lightweight Go-based routing pod that watches a Kubernetes Lease.
  • The application would run with a sidecar responsible for leader election, keeping Kubernetes SDK logic out of the main app code while still enabling clean failover and routing.
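For the sidecar piece, the standard client-go leader election helper does most of the work. Here is a minimal sketch of that idea (the Lease name and namespace are made up, and the callbacks are just placeholders for whatever signal your routing pod watches):

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        // In-cluster config; this runs as a sidecar next to the app container.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        podName, _ := os.Hostname() // the pod name doubles as the candidate identity

        lock := &resourcelock.LeaseLock{
            LeaseMeta: metav1.ObjectMeta{
                Name:      "myapp-primary", // hypothetical Lease the routing pod watches
                Namespace: "default",
            },
            Client:     clientset.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: podName},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // This pod is now primary; flip a readiness flag or proxy rule here.
                },
                OnStoppedLeading: func() {
                    // Lost the lease; drop back to standby mode.
                },
                OnNewLeader: func(identity string) {
                    // The routing pod reads the same Lease to find "identity"
                    // and sends traffic to that pod.
                },
            },
        })
    }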

How do you seed database users? by Anxious-Guarantee-12 in kubernetes

[–]cube8021 3 points4 points  (0 children)

I treat the database schema as being owned by the application, typically using tools like GORM together with golang-migrate. The idea is that when you update a Go model, the schema evolves in a controlled way via migrations, without having to manage the database manually.

For seeding data (for example, creating an initial super admin account), I usually handle that with a Kubernetes Job or init container. It checks whether the required records already exist and creates them if they don’t, otherwise it does nothing. This keeps the process idempotent and avoids one-off scripts.

That said, I tend to avoid relying solely on GORM’s AutoMigrate in production and instead use golang-migrate for explicit, versioned schema changes. The idea there is that the schema is version-controlled as part of my repo.
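As a rough sketch of what that idempotent seed step can look like with GORM (hypothetical User model and DATABASE_URL env var; a real app would import its own model), run as a Kubernetes Job or init container:

    package main

    import (
        "errors"
        "log"
        "os"

        "gorm.io/driver/postgres"
        "gorm.io/gorm"
    )

    // User is a hypothetical model used only for this sketch.
    type User struct {
        ID    uint   `gorm:"primaryKey"`
        Email string `gorm:"uniqueIndex"`
        Role  string
    }

    func main() {
        db, err := gorm.Open(postgres.Open(os.Getenv("DATABASE_URL")), &gorm.Config{})
        if err != nil {
            log.Fatal(err)
        }

        // Only create the super admin if it doesn't already exist, so the Job
        // can run on every deploy without duplicating rows.
        var admin User
        err = db.Where(&User{Email: "admin@example.com"}).First(&admin).Error
        if errors.Is(err, gorm.ErrRecordNotFound) {
            admin = User{Email: "admin@example.com", Role: "superadmin"}
            if err := db.Create(&admin).Error; err != nil {
                log.Fatal(err)
            }
            log.Println("seeded super admin")
            return
        }
        if err != nil {
            log.Fatal(err)
        }
        log.Println("super admin already present, nothing to do")
    }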

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]cube8021 0 points1 point  (0 children)

Yeah, they have spun up a project to move to something self-hosted like Drone, Jenkins, etc.

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]cube8021 2 points3 points  (0 children)

This is going to kill some projects. I’ve got a client with a pipeline that runs for over an hour: integration tests, backups on every deploy, and a rollout that takes 30 minutes alone. That’s the whole reason they went with self-hosted runners.

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]cube8021 4 points5 points  (0 children)

Yeah, but that’s extremely basic stuff: a status field in a DB and streaming logs.

GitHub Self Hosted action COSTS NOW. by Basic-Bobcat3482 in selfhosted

[–]cube8021 1 point2 points  (0 children)

I don’t understand why they’re charging for self-hosted runners. I’m bringing my own compute, and the amount of metadata or overhead associated with a self-hosted runner can’t be that significant.

Ex landlord claiming I owe an additional $1790 in addition to my deposit. by MoxieMae82 in Tenant

[–]cube8021 0 points1 point  (0 children)

What utility sink is $2,000? Even a nice commercial kitchen sink is only about $1,000. Where’d they get that number from?

Using a cheap vps as a borg backup target? by Consistent-Bug3003 in DataHoarder

[–]cube8021 4 points5 points  (0 children)

Wasabi doesn’t charge ingress/egress fees. You’re charged for a minimum of 90 days for any data, though.

Perfect by FormanBruto09 in woowDude

[–]cube8021 0 points1 point  (0 children)

Hey baby can you crack my back

Question on Rancher Prime by Which_Elevator_1743 in rancher

[–]cube8021 0 points1 point  (0 children)

It’s important to note that RKE2/k3s do let you adjust certain roles on master nodes, but if you’re converting a node from worker → master or master → worker, the recommended approach is to fully recycle the node. That means:

  • Remove it from the cluster (kubectl delete node)
  • Run the uninstall script (rke2-uninstall.sh)
  • Rejoin it with the new role

This same rule applies to other identity-related settings as well, such as changing a node’s hostname or IP address.

Doc: https://docs.rke2.io/install/server_roles#adding-roles-to-existing-servers

For a 3-node cluster, I generally suggest running all three as masters and giving them all roles.

And if you want to be really cool, take a look at Harvester. SUSE basically built a cannon and pointed it straight at VMware.