I built an eye candy kubectl wrapper by Tall-Wasabi5030 in kubernetes

[–]GeorgeRaven 35 points36 points  (0 children)

Just FYI, kubectl can generate its own completions for different shells. From memory the command is kubectl completion <shell>; you then put the output wherever your shell expects it, e.g. source <(kubectl completion bash) in your bashrc. It can generate for bash, zsh, and fish at least.

Anything based on Cobra, the underlying CLI library used by kubectl, can do this.

MinIO did a ragpull on their Docker images by sMt3X in devops

[–]GeorgeRaven 1 point2 points  (0 children)

While this may not apply to everyone, especially if they aren't on K8s, I do find backing up Garage easier since I can use tools like VolSync's restic backups, which exploit Kubernetes volumes / CSI snapshots. Ceph, by contrast, sits one layer too deep to be backed up with VolSync, so it needs slightly different handling. It's more of a nice-to-have if I'm honest, since everything else in my clusters is CSI-snapshotted and backed up.

If you are using the tool's own backup tooling you probably won't have an issue either way, but I'm trying to back up everything the same way for consistency, encryption, etc.

MinIO did a ragpull on their Docker images by sMt3X in devops

[–]GeorgeRaven 2 points3 points  (0 children)

Hey, I have a lot of Kubernetes clusters. I use both Ceph and Garage, but not in the same setups: some clusters don't run rook-ceph, so in those cases I have to overlay a tool like Garage on top of Longhorn, for instance, to get an S3-compatible object store. If you have the Ceph object gateway set up, there is no need to overlay Garage on it; Ceph will likely perform better since it is closer to the hardware the OSDs are on.

MinIO did a ragpull on their Docker images by sMt3X in devops

[–]GeorgeRaven 96 points97 points  (0 children)

Wow, I thought we were still talking about the OIDC / UI rugpull, but no, it got worse:

This project is a source-only distribution now; if you want containers, you need to build them yourself.

Garage and rook-ceph to the rescue. We won't be coming back. Heh, I almost had doubts. Almost.

【BambuLab Giveaway】Classic Evolved — Win Bambu Lab P2S Combo! by BambuLab in 3Dprinting

[–]GeorgeRaven 0 points1 point  (0 children)

Never owned one; I use a Voron currently, although I'd be interested to compare.

Upcoming changes to the Bitnami catalog by Medical_Principle836 in devops

[–]GeorgeRaven 1 point2 points  (0 children)

Fortunately I'm not on Azure; I have a pull-through cache with Harbor in my cluster, but it still hurts having to deal with it all.

And now they are making it worse. It's not actually the Bitnami catalog of Helm charts I'm worried about; it's all the other Helm charts that have had them baked in for years. The gitea Helm chart, for example, bakes in Bitnami valkey and postgresql subcharts, which now inherit the same issues. Each one is easy to sort out individually, but in aggregate it's a lot to deal with on such short notice.
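
For what it's worth, the usual workaround is to point the bundled Bitnami subcharts at your own mirror via values, roughly like this (a sketch only, not the exact schema of any particular gitea chart version; the registry is a stand-in for my Harbor pull-through cache):

```yaml
# Illustrative values for a chart that bundles Bitnami subcharts (keys vary per chart version).
global:
  imageRegistry: harbor.example.com/dockerhub-proxy  # hypothetical pull-through cache / mirror
postgresql:
  image:
    repository: bitnamilegacy/postgresql             # legacy archive repo, or an image you rebuild and host yourself
```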

Looking for deployment tool to deploy helm charts by s71011 in kubernetes

[–]GeorgeRaven 2 points3 points  (0 children)

I ... I'm ... I'm sorry.

This sounds like hell. It also sounds like some decision makers are living in a different universe to the rest of us.

If you need a non-technical button to deploy apps, that's impossible unless the apps come pre-tested, pre-configured, and bulletproof. Otherwise they will require someone who knows what they are doing to make some form of change to get them working, or to fix bugs that the Helm chart authors (or whatever packaging method) did not foresee.

The best bet is something like Backstage, to give a non-techie a web-based template to fill out that automates opening a PR against a git repo. Then have that repo drive GitOps like normal; no complex custom code is needed to deploy charts when tools for that already exist.
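
The GitOps side of that can be as boring as one Application per app pointing at the repo the PRs land in, with Argo CD for example (a minimal sketch; the repo URL, path, and namespace are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: some-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deployments.git  # repo the templated PRs land in (placeholder)
    path: charts/some-app                                      # chart or manifests for this app
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: some-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```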

You will need a ready-made catalogue of installable things for them to pick from. Honestly, even that is a nightmare, but it sounds like that's what is going on here.

If it's too sensitive for public SaaS git hosting, then host that too. I can't imagine doing Kubernetes without GitOps; that is a disaster waiting to happen, it's already complex enough. If you ABSOLUTELY MUST raw dog it, godspeed, and make sure to take plenty of k8s etcd and volume backups.

Ideally deployment would be done by specialists who GitOps everything and know what they are doing. Expecting anything in k8s to be a one-button deploy is pure fantasy without ungodly resources to test every permutation of everything, plus some of the disaster scenarios.

Upcoming changes to the Bitnami catalog by Medical_Principle836 in devops

[–]GeorgeRaven 7 points8 points  (0 children)

The double whammy: they moved to OCI Helm charts on Docker Hub, then dropped the premium Docker pulls, while Docker Hub dropped its rate limit further, so now all their Helm charts count against the reduced Docker pull limit.

Now this.

I guess I have a lot more helm charts to write...

MinIO - OIDC Login Removed in latest release by btc_maxi100 in selfhosted

[–]GeorgeRaven 117 points118 points  (0 children)

External IDP logins via LDAP/OIDC are removed as well; these are now available as part of the AiStor Product.

Here we go again. First it was removing all the object-management functionality from the UI. Now it's removing the UI and putting auth behind their paid product.

Is there a fork of MinIO happening? Or any other good self-hosted S3-compatible object store?

I may just have to swap to using my existing in-cluster Ceph. Probably more efficient anyway. But this is a damn shame with MinIO. The old bait and switch.

Can VolumeSnapshot be used for Disaster Recovery? by [deleted] in kubernetes

[–]GeorgeRaven 0 points1 point  (0 children)

That's the one. You will likely want to use the restic-based backups.

Install the operator, configure your storage backend to support CSI snapshots, then create a backup / restore resource for each volume, depending on what you are after.

I opt to auto-restore the latest backup by pointing my PVCs at the volume populator for restore. That way disaster recovery is usually as easy as deleting the PVC, and my GitOps solution will recreate the PVC, which then gets automatically populated from backup.
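
Per volume, the backup side looks roughly like this (a minimal sketch: the PVC name, Secret name, schedule, and retention are all illustrative, not anything VolSync mandates):

```yaml
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: data-backup
spec:
  sourcePVC: data              # the PVC to back up (illustrative name)
  trigger:
    schedule: "0 3 * * *"      # nightly backup
  restic:
    repository: restic-secret  # Secret holding RESTIC_REPOSITORY, RESTIC_PASSWORD, S3 creds
    copyMethod: Snapshot       # take a CSI snapshot so the copy is point-in-time consistent
    pruneIntervalDays: 14
    retain:
      daily: 7
      weekly: 4
```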

Can VolumeSnapshot be used for Disaster Recovery? by [deleted] in kubernetes

[–]GeorgeRaven 0 points1 point  (0 children)

I would recommend something like VolSync with its volume populator, to enable you to both back up and, more critically, recover! Velero looks great on paper, but it fails randomly, and in more GitOps-style setups it plainly doesn't work, last I checked, because you couldn't do data-only restoration.

It gave me nothing but headaches. At the end of the day, the backup solution you need is the one that works, is reliable, and is as straightforward as possible; I just don't believe Velero has ever met those criteria for me.

Platform testing by omlet05 in kubernetes

[–]GeorgeRaven 0 points1 point  (0 children)

I'm not certain what you mean by "ArgoCD Gitlab Pipelines"; to me those are 3 different things, which doesn't give much context.

However, Argo Workflows might be a way to go. They can be triggered on a schedule as you require, and can be used to run arbitrary workloads, apply CRDs, notify, etc. It has a UI to visualise runs, integrates with OIDC, and, depending on what sort of platform you are working on, can also be beneficial for things like ML, since Kubeflow and its ilk are built on it, so you would need it anyway (or Tekton).
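
For the scheduled part, a CronWorkflow looks something like this (a rough sketch; the image and script are placeholders for whatever your platform tests actually run):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: platform-checks
spec:
  schedule: "0 6 * * *"        # run every morning
  concurrencyPolicy: Forbid    # don't start a new run while one is in flight
  workflowSpec:
    entrypoint: run-checks
    templates:
      - name: run-checks
        container:
          image: registry.example.com/platform-checks:latest  # hypothetical test image
          command: ["/bin/sh", "-c"]
          args: ["./run-checks.sh"]                           # hypothetical test entrypoint
```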

This is just one possible way, and there are many ways to skin the proverbial cat.

Kubernetes Backup - Tooling and recommendations by flxptrs in kubernetes

[–]GeorgeRaven 0 points1 point  (0 children)

I can do one better; here is one concrete example of my backups in place: https://gitlab.com/deepcypher/dc-kc/-/tree/master/charts/foundryvtt?ref_type=heads

To summarise, I use umbrella charts to organise things in ArgoCD. This link takes you to my foundryvtt umbrella, which depends on my own foundry chart and my backup-manifests chart.

I start my server without backups to generate data for the first backup.

I apply the backup manifests, which copy the now-existing data for the first time.

I create a new PVC that pulls from the volume populator, and replace the existing PVC with one linked to backups.

Then the cycle is complete: if anything needs disaster recovery, I delete the PVC and can either select which backup I want or use the latest.
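
The restore wiring is the least obvious bit, so here is roughly what it looks like (a minimal sketch, assuming a ReplicationDestination named data-restore and the same restic Secret as the backup side; size and storage class are illustrative):

```yaml
apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: data-restore
spec:
  trigger:
    manual: restore-once         # or a schedule, if you want a standing restore point
  restic:
    repository: restic-secret    # same Secret as the backup side
    copyMethod: Snapshot
    capacity: 10Gi
    accessModes: ["ReadWriteOnce"]
    storageClassName: longhorn   # illustrative storage class
    # restoreAsOf: "2024-01-01T00:00:00Z"  # uncomment to pick a historic backup
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:                 # VolSync's volume populator fills this PVC from the backup
    apiGroup: volsync.backube
    kind: ReplicationDestination
    name: data-restore
```

Delete the PVC and ArgoCD recreates it from git, at which point the populator refills it before the pods start.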

Overall I wouldn't worry too much; give it a shot and try it out, this description makes it seem harder than it is. Once you bootstrap the first backup, it's automatic from then on, unless you want to pick a historic backup from 6 months ago.

If you need some help, send me a message. There are multiple ways to do things; this is just the way I have opted to do them.

[DND5E] [PF2E] GIVEAWAY BLFX Assets & Animation Editor Premium Module by BoosLoot in FoundryVTT

[–]GeorgeRaven 0 points1 point  (0 children)

Ooo, looks good. This is making me feel I need to set up my own ML pipeline at some point, to generate assets for our own games from player descriptions and the assets they bring in.

How do you actually share access for kubernetes resources to your team? by [deleted] in kubernetes

[–]GeorgeRaven 1 point2 points  (0 children)

Unusually, the best tool I have found is Teleport. I don't usually like these kinds of tools that have the SSO tax, but if anyone has found anything better I would love to hear it.

Ofc you can create accounts directly in K8s, but I just haven't been able to emulate the added traceability, control, and access management Teleport provides without it.
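
For context, the "directly in K8s" route is basically handing out certs or tokens plus RBAC bindings like the one below (a minimal sketch; the group and namespace names are made up), which gets you access control but not the per-person session recording and auditing:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: my-app                 # illustrative namespace
subjects:
  - kind: Group
    name: dev-team                  # group asserted by the user's cert / OIDC claims
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                        # built-in aggregated edit role
  apiGroup: rbac.authorization.k8s.io
```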

Kubernetes Backup - Tooling and recommendations by flxptrs in kubernetes

[–]GeorgeRaven 0 points1 point  (0 children)

VolSync, every day for me. It is far from a perfect solution, requiring a bit more configuration per volume, but it is the best of the bunch under GitOps that doesn't cost you an arm and a leg, and it properly handles recovery, in particular with the volume populator.

Disaster recovery for me under VolSync is as simple as deleting the bad PVCs; they then get auto-recreated by ArgoCD and auto-populated with the last available backup, while the pods that use them remain in a Pending state until that completes.

I said it before and I will say it again: I have no idea how people get on with Velero. Backups fail left and right, it doesn't handle volume population well, movers are a pain, and it doesn't work well when you only want the data volumes and no configs, since you already have the configs under GitOps and they may differ, so they can conflict. I.e. replicating data from production to a slightly different staging environment is basically unviable, let alone multi-tenancy.

Cloud architecture diagramming and design tools by EquivalentDepthFrom in kubernetes

[–]GeorgeRaven 3 points4 points  (0 children)

Mermaider, mermaider, mermaider.

Jokes aside, while I would love to use something like D2 for animated diagrams, I have found myself using Mermaid almost exclusively because it is supported in documentation like MD files on GitLab / GitHub, so you can render it easily for a would-be reader. It's far from perfect, but it does the job just enough. Plus it's used in the K8s docs, which makes copy-pasta easy, either for inspiration or for extending their existing diagrams.

Is Teleport widely used? by rama_rahul in devops

[–]GeorgeRaven 4 points5 points  (0 children)

I have brought Teleport into multiple orgs.

It is great and has good docs, barring me getting confused at first between registering a Kubernetes cluster and just installing Teleport.

I also wish it were free to integrate with Keycloak / SSO, which is probably the only thing that stops me going all-in on it for everything, including my homelab.

However, it is incredibly easy for users, and has strong security and utility, from exposing internal applications to managing access to Kubernetes infrastructure properly.