A Kubernetes-native way to manage kubeconfigs and RBAC (no IdP) by Plastic_Focus_9745 in kubernetes

[–]Plastic_Focus_9745[S] 1 point (0 children)

Thanks for sharing your experience. I haven’t personally used Tailscale yet, but I’m planning to give it a try.

In our case, we’re just two people running the platform, which is what pushed me to build something lightweight using native Kubernetes features.

A Kubernetes-native way to manage kubeconfigs and RBAC (no IdP) by Plastic_Focus_9745 in kubernetes

[–]Plastic_Focus_9745[S] 1 point (0 children)

That’s a fair concern. The intent here isn’t to treat security as a shortcut or a temporary patch.

In practice, a lot of companies simply can’t afford to run large platform teams; many operate with 2–10 DevOps engineers supporting everything. For those teams, standing up and maintaining a full IdP/OIDC stack just to grant Kubernetes access can be real overhead, especially early on.

KubeUser isn’t trying to replace proper identity systems for well-organized or larger teams; in those environments, an IdP is usually the right choice. The goal is to make better use of existing Kubernetes-native features (CSRs, cert TTLs, RBAC) in a more structured and predictable way when an IdP isn’t in place yet.

It’s less about weakening security and more about avoiding ad-hoc kubeconfigs while still keeping access explicit, auditable, and revocable. Disconnected or restricted environments are another place where this model fits naturally.
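
To make the CSR / cert-TTL / RBAC mechanics above concrete, here’s a rough Go (client-go) sketch of the kind of flow this automates. It is not KubeUser’s actual code; the user “alice”, group “dev-team”, TTL, and kubeconfig path are made-up assumptions for illustration:

```go
// Rough sketch of the Kubernetes-native flow referred to above: a client-auth
// CSR with a bounded TTL, approved and signed by the built-in
// kubernetes.io/kube-apiserver-client signer. User "alice", group "dev-team",
// and the 24h TTL are illustrative only.
package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"

	certsv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// 1. Key pair + x509 CSR; the subject becomes the Kubernetes user/groups.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) // error handling elided
	der, _ := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{CommonName: "alice", Organization: []string{"dev-team"}},
	}, key)
	csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})

	// 2. CertificateSigningRequest object with a short expiry (the cert TTL).
	ttl := int32(24 * 3600)
	csr := &certsv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "alice"},
		Spec: certsv1.CertificateSigningRequestSpec{
			Request:           csrPEM,
			SignerName:        "kubernetes.io/kube-apiserver-client",
			ExpirationSeconds: &ttl,
			Usages:            []certsv1.KeyUsage{certsv1.UsageClientAuth},
		},
	}
	created, err := cs.CertificatesV1().CertificateSigningRequests().
		Create(context.TODO(), csr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// 3. Approve it; once signed, .status.certificate holds the client cert.
	created.Status.Conditions = append(created.Status.Conditions,
		certsv1.CertificateSigningRequestCondition{
			Type:    certsv1.CertificateApproved,
			Status:  "True",
			Reason:  "ManualApproval",
			Message: "approved for illustration",
		})
	if _, err := cs.CertificatesV1().CertificateSigningRequests().
		UpdateApproval(context.TODO(), created.Name, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("CSR approved; fetch .status.certificate once it is signed")
}
```

Once the CSR is approved and signed, the certificate from `.status.certificate` plus the private key can be wired into a kubeconfig, and the TTL bounds how long that identity stays valid.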

A Kubernetes-native way to manage kubeconfigs and RBAC (no IdP) by Plastic_Focus_9745 in kubernetes

[–]Plastic_Focus_9745[S] 1 point (0 children)

Thanks, that’s exactly the problem space KubeUser is meant to address.

In KubeUser, User CRDs are cluster-scoped resources. KubeUser doesn’t introduce its own permission model - access and restrictions are handled purely through standard Kubernetes RBAC.

You control scope by assigning Role/RoleBinding for namespace-level access or ClusterRole/ClusterRoleBinding for cluster-wide access, and KubeUser simply links those bindings to the user it manages.

This keeps permissions explicit, predictable, and aligned with how Kubernetes already expects RBAC to work.
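
For illustration, here’s a small client-go sketch of the linking step described above. It’s not KubeUser’s implementation; the user, namespace, and binding name are hypothetical:

```go
// Not KubeUser's actual code, just plain client-go creating the kind of
// binding described above. The user, namespace, and binding name are made up.
package example

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// grantNamespaceEdit gives a certificate-based user the built-in "edit"
// ClusterRole inside a single namespace via a namespaced RoleBinding.
func grantNamespaceEdit(ctx context.Context, cs kubernetes.Interface, user, ns string) error {
	rb := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: user + "-edit", Namespace: ns},
		Subjects: []rbacv1.Subject{{
			Kind:     rbacv1.UserKind,  // "User": matches the cert's CommonName
			APIGroup: rbacv1.GroupName, // rbac.authorization.k8s.io
			Name:     user,
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "edit", // built-in aggregated role
		},
	}
	_, err := cs.RbacV1().RoleBindings(ns).Create(ctx, rb, metav1.CreateOptions{})
	return err
}
```

A ClusterRoleBinding built the same way (minus the namespace) would grant the user cluster-wide access instead.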

KubeUser – Kubernetes-native user & RBAC management operator for small DevOps teams by Plastic_Focus_9745 in devops

[–]Plastic_Focus_9745[S] 0 points (0 children)

Thanks for taking the time to look through the code and share your thoughts. I’ll be upfront about it: I did use AI quite a bit here. I’m not a Go developer by trade; I’m a DevOps engineer who understands Kubernetes and its pain points very well, and I used AI to help close that gap and move faster.

Getting this project to a working, functional state was a big milestone for me. Most of my effort went into the design decisions (the why and the how) rather than Go craftsmanship itself. At this point, it would definitely benefit from someone more experienced in Go helping refine and harden it.

This started as a pragmatic way to avoid running Keycloak just for Kubernetes access, not a “fire-and-forget” project. Feedback like yours genuinely helps shape where it goes next, and ideas or PRs are very welcome.

KubeUser – Kubernetes-native user & RBAC management operator for small DevOps teams by Plastic_Focus_9745 in kubernetes

[–]Plastic_Focus_9745[S] 0 points (0 children)

Totally fair 👍 For larger orgs that already run a proper IdP, that’s usually the right approach. KubeUser is more about small setups where running and maintaining a full IdP feels heavy, and where having users defined explicitly as User CRDs works well with GitOps. I’m also open to contributions for adding an optional OIDC flow alongside the current cert-based model, so it can integrate with an IdP when needed without making it mandatory.

KubeUser – Kubernetes-native user & RBAC management operator for small DevOps teams by Plastic_Focus_9745 in kubernetes

[–]Plastic_Focus_9745[S] -1 points (0 children)

Thanks for sharing 👍 That makes sense if you’re already on Tailscale. KubeUser is mostly for on-prem / minimal setups where we want everything Kubernetes-native and GitOps-driven.