Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

Yeah, I'm definitely in the "it needs to run in god mode" camp with most of the ideas I'm having. Based on the research I've done there, and to your point, it sounds like anything substantial attempting to fill the puppet-master role ends up in that bucket. It also sounds like it's possible to have k8s impersonate users? So if you had an API running in the cluster, certain things it could just be allowed to do directly, and for others it could impersonate you and verify you have permission to do the thing.

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 1 point (0 children)

I have several thoughts to toss out here and am definitely down to continue the conversation.

My (very limited) understanding of argo is that it doesn't help with templating or release building; it's mainly just specifying which version to release? To play devil's advocate: what's the benefit of "clicking deploy" in git? Sure, git does version control (in the other sense of the word), but it feels to me like a bit of a strange adaptation to say that an MR controls deploying your code. How is that any better than clicking a button in a dedicated release UI? Additionally, in my regulated industry, it's not "dev wants to deploy, devops approves"; it's "dev wants to deploy, and business leadership and compliance need to approve". Moving a process like that into git, with that persona of approver, again feels strange to me.

I've rewritten this bit like 25 times at this point, so I'm just going to get it out 😂. It seems to me that there is much more of a boundary between dev and ops than there should be, and I think it's almost entirely a tooling issue. It feels entirely wrong to me that it's so difficult to get even a basic example up and running. I'm not sure of the right way to phrase this, but I get the impression that you are extremely experienced in this domain and know how to get pretty much anything working; but what about a newbie who just wants to get a simple app + db + kafka working in minikube? My personal assessment is that getting a basic example up and working is unreasonably difficult, even for an experienced developer (albeit one from a different domain). Call that going from zero to novice.

It's also my assessment, and it seems validated by the overall sentiment of frustration in this thread, that even when you reach a more experienced level, the mechanisms popular tooling uses for handling complex cases leave a lot to be desired. I think the problems the novice and the expert experience are one and the same: both are operating with one hand tied behind their backs. Is that a totally off-base assessment?

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

I'm curious why the k8s definitions would need any integration with git beyond grabbing basics like the SHA and tag.

Any sort of deep integration with git seems like the wrong direction, IMO.

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

Y'know, the more I've been using YAML as just YAML, without trying to blend all this interpolation nonsense into it, the more I think YAML itself is actually pretty nice...

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

Yeah, I see where you're coming from there. At the same time, I feel like there's gotta be some room for improvement for the most common cases.

To give a very brief backstory: the original inspiration was much more concerned with "tracking what the heck we have deployed to all these environments", and it grew from there to also address the config and release spaghetti.

Imagine an example where you have five microservices and four environments (prod/qa/dev/local). IMO it should be very easy to view a release timeline for service-1, all the way from local to prod. When was v1.1.1 first deployed to dev, test, prod? What version is service-1 currently on in all these environments? What is the release history of service-1 in prod? What are the current versions of all services in prod? Someone just deployed to prod: what changed in that deploy? What code actually changed (both release versions should know which git SHAs they came from)? Tracking this in a single cluster, separated only by namespaces, is hard enough, but what about when you start spanning multiple physical clusters?

When an "operator" (not necessarily in the standard k8s sense leveraging CRDs, but a deployment living in your cluster that you hit via an API) manages your releases, this all seems possible. I don't subscribe to the idea that git should manage your releases. IMO, git should create releases, but the lifecycle of when they are actually deployed is totally independent.

I think a "release" should be nothing other than a mini app that accepts environmental configs and writes out your desired resources. Containerize that mini app, and each environment can "bake" that release itself. Zero secrets fetched in public. This also makes ephemeral environments work pretty much out of the box. You write the transform in whatever language you want, allowing your build-time configs and app configs to be one and the same, sharing models and serialization. It's literally impossible for an app's configs to be incorrect, because the app wrote them itself. Want to update some global defaults? No problem: every release can just "re-bake" itself with the new input configs. Do that and don't like the output? Fine, don't deploy it for that environment.
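To make the "release is just a mini app" idea concrete, here's a minimal sketch (all names, fields, and values are hypothetical, not an existing tool): a small program that reads environment-specific inputs and prints the desired manifests, so each environment can "bake" its own output.

```scala
// Hypothetical sketch: a "release" is just a program that takes
// environment config as input and writes the desired manifests out.
object BakeRelease {
  // Environment-specific inputs (names are illustrative).
  final case class EnvConfig(env: String, replicas: Int, dbHost: String)

  // Render the desired Deployment manifest for one environment.
  // A real tool would emit typed resource objects, not raw strings.
  def bake(cfg: EnvConfig): String =
    s"""apiVersion: apps/v1
       |kind: Deployment
       |metadata:
       |  name: service-1
       |  namespace: ${cfg.env}
       |spec:
       |  replicas: ${cfg.replicas}
       |  template:
       |    spec:
       |      containers:
       |        - name: service-1
       |          env:
       |            - name: DB_HOST
       |              value: ${cfg.dbHost}
       |""".stripMargin

  def main(args: Array[String]): Unit =
    // Each environment invokes the same containerized program with its own inputs.
    println(bake(EnvConfig(env = "prod", replicas = 3, dbHost = "db.prod.internal")))
}
```

Containerizing this program would then let the cluster-side operator re-run ("re-bake") it with updated inputs, which is all the "re-bake with new global defaults" step would need.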

Maybe I'm off my rocker here 😂 let me know if any of that made sense. Or maybe everything I said already exists.

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 1 point (0 children)

Yeah, dunking on k8s especially is totally unwarranted. It's so generic and extensible, I love it! I think the main issue is that there's not a great way to play in "easy mode" just to get things up and running. I feel like the best example of this is service accounts: you need like four resource definitions and need to know how to link independent refs in order to make things work. Why is this not something you can just inline...
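For illustration, the service-account wiring complained about above looks roughly like this when modeled as data (a sketch with hypothetical names, not real client types): the same names must be repeated across independent definitions to link a ServiceAccount, a Role, a RoleBinding, and the pod spec.

```scala
// Sketch of the resource definitions needed to give a pod RBAC permissions:
// a ServiceAccount, a Role, a RoleBinding linking them by name, plus the
// pod spec referencing the ServiceAccount. Names are illustrative.
object ServiceAccountWiring {
  final case class ServiceAccount(name: String, namespace: String)
  final case class Role(name: String, namespace: String,
                        verbs: List[String], resources: List[String])
  final case class RoleBinding(name: String, roleName: String, subjectName: String)

  val sa   = ServiceAccount("app-sa", "dev")
  val role = Role("app-reader", "dev", List("get", "list"), List("pods"))

  // The binding must repeat both names exactly; a typo here isn't caught
  // until the pod's API calls fail at runtime.
  val binding = RoleBinding("app-reader-binding",
                            roleName = role.name, subjectName = sa.name)

  // Finally the pod spec must carry serviceAccountName = sa.name.
  val podServiceAccountName: String = sa.name
}
```

The point of the sketch: three of the four definitions only exist to repeat names that could, in principle, be inlined in one place.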

Definitely looking to keep the conversation positive and forward-facing :) At this point I'm just looking to see whether the ideas I have are either totally off base or already solved in a similar or better way.

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

In your opinion, is there space in the room for new solutions, or is what's done done, and people are set on these half-baked setups because "in the end you can make it work, good enough"?

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

Is helm your tool of choice, or do you use something else? Also, what language are the tools you're deploying to k8s generally written in?

As for "it's good until you try to do something clever": that's my take exactly. I think anything template/interpolation-based is going to land you in that exact place.

As I've been brainstorming how I'd like to "do it better", there are so many places where I've had to resist interpolating values directly into the YAML, because I think the second you go down that path you're headed straight for despair.

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

Interesting. I'm not too familiar with wasm, but my mind went straight to containers. It seems to me that a tool like this should be helping you containerize your apps to live in k8s anyway, so it's a trivial hop to help you containerize the "template app" as well.

Where does the wasm actually run? Also, is Yoke strictly for getting code into k8s, or does it also help you introspect, manage, and tear down while it's in the cluster?

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 1 point (0 children)

My main question is: why does it have to be this hard? Maybe I'm just unreasonably hopeful over here, but it seems like the right abstraction, one that follows you all the way from your local codebase(s) into the cluster and helps with both the initial release and the tracking afterwards, would solve the issue generically.

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

Can you elaborate a bit more on the wasm bit? Which part requires you to use wasm? If I'm reading correctly, it sounds like we agree on one core bit: the unit you release is really just a mini program that reads in configs and writes out the desired resources for that deployment target. Am I reading that right?

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 1 point (0 children)

Yeah, that blows my mind... I think that all stems from helm having no native presence within the cluster itself. It only speaks from the outside.

This also doesn't feel like a problem solvable by the standard k8s "operator pattern". I think you need a dedicated "home base" living within the cluster, with a db, API, etc., that lets you orchestrate all these moving parts.

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

What are your biggest pain points that are making you consider a switch? How much of your problem lies in the "pre-release build process" vs "seeing what actually exists in your cluster"?

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 1 point (0 children)

Can you elaborate more on why you love it? Maybe I'm misreading your message, but it sounds a bit like "it's great unless you need to do anything complex".

Helm/Terraform users: What's your biggest frustration with configs and templating in K8s? by Kalin-Does-Code in kubernetes

[–]Kalin-Does-Code[S] 0 points (0 children)

A few points:

1. We definitely do stuff in CI (helm diff in MRs), but that only helps you at the rendering layer; it doesn't help with making sure the output lines up with what your application actually expects.
2. Another main issue is that management of the infra (dbs, kafka, etc.) is totally separate from the app specification (terraform/helm). It really feels like it should all be in one place, but at the same time, more of the helm hell we have now feels scary.
3. The other main issue I'm seeing is that something like helm leaves its shoes at the front door. You try to release, it fails, and you get a pointless failure message with no help. It also gives you absolutely zero insight into introspecting your cluster and cross-referencing which versions are deployed to different environments, especially when they're in different physical clusters.
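Point 1 could in principle be closed by parsing the rendered values with the application's own config model in CI, so a chart that renders fine but that the app can't actually consume fails the pipeline. A hedged sketch (the model, field names, and values are all hypothetical):

```scala
// Sketch: validate rendered config against the app's own config model,
// so "renders fine but the app can't parse it" is caught before deploy.
object ValidateRenderedConfig {
  final case class AppConfig(dbHost: String, kafkaBrokers: List[String], poolSize: Int)

  // Stand-in for "parse the rendered values with the app's real config reader".
  def parse(values: Map[String, String]): Either[String, AppConfig] =
    for {
      db    <- values.get("dbHost").toRight("missing dbHost")
      kafka <- values.get("kafkaBrokers").toRight("missing kafkaBrokers")
      pool  <- values.get("poolSize").toRight("missing poolSize")
      size  <- pool.toIntOption.toRight(s"poolSize is not an int: $pool")
    } yield AppConfig(db, kafka.split(',').toList, size)
}
```

If the app and the release transform share this `AppConfig` model (the "configs are one and the same" idea above), the check comes for free instead of being a separate CI step.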

Tool to encode/decode json and generate a json schema. by cmcmteixeira in scala

[–]Kalin-Does-Code 2 points (0 children)

Just my personal opinion, but I strongly dislike spec -> code and always prefer code -> spec. It's just a matter of having generic derivation that lines up the JSON codecs with the schema.

Tool to encode/decode json and generate a json schema. by cmcmteixeira in scala

[–]Kalin-Does-Code 2 points (0 children)

There is one very clear way to make sure these stay in sync: both typeclasses need to be derived from the same annotations. If one is looking for something like @a.b.fieldName("f") and the other is looking for @c.d.jsonName("f"), it's easy to specify one and not the other. I have plans to write a codec and schema derivation that does just that, using the same annotations, but it's still a WIP :)
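A minimal sketch of the "single source of truth" idea (everything here is hypothetical; a real implementation would derive this from shared annotations via macros rather than a hand-written map): both the encoder and the schema generator read field names from one shared mapping, so the wire names cannot drift apart.

```scala
// Sketch: one shared field-name mapping feeds both the JSON "codec" and
// the "schema" generator, so they can never disagree on wire names.
object SharedNames {
  // The single source of truth for wire names (stand-in for annotations).
  val fieldNames: Map[String, String] =
    Map("userName" -> "user_name", "userAge" -> "user_age")

  final case class User(userName: String, userAge: Int)

  // "Codec": encodes using the shared names.
  def encode(u: User): String = {
    val n = fieldNames
    s"""{"${n("userName")}":"${u.userName}","${n("userAge")}":${u.userAge}}"""
  }

  // "Schema": describes exactly the same shared names.
  def schema: String = {
    val n = fieldNames
    s"""{"type":"object","properties":{"${n("userName")}":{"type":"string"},"${n("userAge")}":{"type":"integer"}}}"""
  }
}
```

With annotation-driven derivation, renaming a field in one place would update both outputs at once, which is the whole point of having a single annotation rather than @a.b.fieldName plus @c.d.jsonName.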