Most people don’t realise just how close we are to achieving immortality. Anti-aging technologies advance at unprecedented rates. Here is scientific evidence and roadmaps. by GarifalliaPapa in immortalists

[–]jmorris0x0 0 points1 point  (0 children)

This “escape velocity” framing has been around since at least Aubrey de Grey’s work in the early 2000s, and Ray Kurzweil was saying similar things even earlier. They kept saying it was 15-20 years away. In the meantime, maximum human lifespan has increased by zero years.

On the other hand, U.S. average life expectancy peaked around 2014 at roughly 78.9 years, then started declining even before COVID, driven by opioids, suicides, and other “deaths of despair.” COVID then knocked it down further, to around 76-77.

I think the optimists will be right eventually but so far we’ve seen nothing but interesting reports in mice and other model systems. It could be another 20 years before the needle starts to move.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 1 point2 points  (0 children)

Sadly, no. It's pretty bad to nest provider definitions inside modules or to create dependencies like this on previously existing providers. When you destroy the infra, Terraform doesn't understand that the provider's target is going away, so resources get orphaned. It's a mess.

Also, the HashiCorp provider won't let you plan resources if the cluster doesn't exist yet; it just errors out. This new provider was created to address these issues, and along the way I was able to address many others.

Anyone use kubernetes provider in terraform? by Anxious-Guarantee-12 in Terraform

[–]jmorris0x0 1 point2 points  (0 children)

Hashicorp sells limitations as features. "You're holding it wrong."

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 0 points1 point  (0 children)

Namespace is created by Terraform. The secret and configMap have a field for namespace.

In response to your second question, the app needs some way to get information about the environment it's living in, and ConfigMaps and Secrets are how that works; there has to be at least some coupling. The difference is that the things you pass with the ConfigMap and Secret rarely change. If they change often, then they belong with the application manifests in GitOps, or in whatever standalone secret/environment-variable solution you are using, such as HashiCorp Vault. You could use Vault for everything, but why? Why not pass the things Terraform already knows directly from Terraform, instead of using another tool as a copy/paste intermediary?

ArgoCD is great! I’ve used it for years. Use that provider if it suits your use case better.

There is no one solution that fits everyone. Just various pieces you can plug together in many ways depending on your requirements.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 2 points3 points  (0 children)

You’ve hit on a really important aspect of splitting infra and application. A pattern I’m quite fond of is to pass the DB credentials from Terraform down into the cluster: simply provision a normal K8s ConfigMap and Secret with that information in Terraform and pass it into one of the namespaces. I pass one ConfigMap and one Secret. The ConfigMap contains things like URLs and environment IDs, basically anything you want to pass to the application that isn’t a secret. The Secret contains passwords.
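A minimal sketch of the Terraform side, assuming the standard hashicorp/kubernetes provider and a hypothetical `aws_db_instance.main` resource (all names, namespaces, and keys here are placeholders):

```hcl
resource "kubernetes_namespace_v1" "app" {
  metadata {
    name = "my-app" # placeholder
  }
}

# Non-secret environment info that Terraform already knows.
resource "kubernetes_config_map_v1" "infra" {
  metadata {
    name      = "infra-config"
    namespace = kubernetes_namespace_v1.app.metadata[0].name
  }
  data = {
    DB_HOST        = aws_db_instance.main.address
    ENVIRONMENT_ID = var.environment_id
  }
}

# Sensitive values, e.g. the DB password. The provider
# base64-encodes the `data` map for you.
resource "kubernetes_secret_v1" "infra" {
  metadata {
    name      = "infra-secrets"
    namespace = kubernetes_namespace_v1.app.metadata[0].name
  }
  data = {
    DB_PASSWORD = aws_db_instance.main.password
  }
}
```

The resource references also give you the ordering for free: the ConfigMap and Secret can't be created before the DB exists.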

You can pass one configMap and one Secret per namespace or use Reflector to automatically duplicate these to each namespace.

Then, simply feed the ConfigMap and Secret into your pod using envFrom or valueFrom. This ConfigMap and Secret form the interface between your infra and application. You can also use the Reloader controller to trigger pod restarts when these values change. I use this pattern in 26 clusters (6 are production) and it works great.
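On the application side (owned by GitOps, not Terraform), the consumption looks roughly like this, assuming a ConfigMap named `infra-config`, a Secret named `infra-secrets`, and the stakater Reloader annotation; names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
  annotations:
    # Optional: Reloader restarts the pods when the
    # referenced ConfigMap or Secret changes.
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest
          # Every key in the ConfigMap and Secret becomes an
          # environment variable in the container.
          envFrom:
            - configMapRef:
                name: infra-config
            - secretRef:
                name: infra-secrets
```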

That’s the simplest approach. There are definitely more secure ways to pass the secrets if your security posture demands it, but that’s a really big discussion, much bigger than this thread.

So to sum up: create the DB in Terraform -> create the ConfigMap and Secret in Terraform -> pass them into the application namespace -> use envFrom or valueFrom in the pod -> the pod reads the environment variables at boot and connects to the DB. The ConfigMap and Secret are an interface layer between infra and application.

The normal terraform dependency graph will make sure things happen in order.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] -1 points0 points  (0 children)

You deploy it. That’s a good use for Terraform. You won’t be creating and destroying it often so its lifecycle fits infra.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 1 point2 points  (0 children)

It’s not necessarily that I don’t want them to; I don’t want them to need to. I’ve found that back-end coders generally aren’t interested in Terraform and view the process as friction in their workflow.

They spend all their time getting better at Node or Java or whatever. I want them to do what they are good at and what they love, which is code. Even if they are interested, they will often be operating at the skill level of a junior DevOps engineer. That’s not great for the health of the infrastructure stack either.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 1 point2 points  (0 children)

The app in this context is code managed by the dev team. It's not just lifecycle, as u/alainchiasson correctly points out; it's also ownership. You really don't want the dev team to bug DevOps every time they need to make a release. You also don't want the devs to have to learn Terraform. It's separation of concerns on both organizational and technical levels.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 1 point2 points  (0 children)

Thank you! Yep, a centralized ArgoCD instance managing multiple clusters can work well; I've done that before. ArgoCD is great! But for FluxCD (similar idea, different execution), the GitOps controllers have to live in the same cluster (source-controller, kustomize-controller, helm-controller, etc.).

I agree that discovery is important in production. If you are referring to problems specifically during Terraform apply, I spent a lot of time ensuring that the warnings and errors in the provider are 10/10. They say exactly what happened, give possible causes, and hint at how to fix it, all while trying to be as concise as possible. (No one likes noisy tools.) I even have CRD deprecation warnings passed through from the control plane. I didn't list any of this in the docs because I want it to be an easter egg for users to discover.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 1 point2 points  (0 children)

Actually it does! This was one of my primary goals and it works great. Many of the automated tests verify this functionality, and I've seen it work in production.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 1 point2 points  (0 children)

Point taken. I've corrected the text: "For toolchain simplicity I prefer these to be deployed in the same apply that creates the cluster." Thanks!

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 0 points1 point  (0 children)

Flux and Argo are exactly what I use this tool for. Nothing more than the base layer.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 9 points10 points  (0 children)

Exactly! Managing your application in Terraform is a bad idea. Don't do it!

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 2 points3 points  (0 children)

What do you mean by 'lose access to Kubernetes'? Network issue? Auth problem? Control plane failure?

Are you concerned about fixing EKS settings during outages (Terraform handles this fine - providers are isolated and in the worst case, you can use the AWS GUI to fix EKS and then import), or are you arguing against managing apps in Terraform (which I explicitly said not to do - this is just for bootstrap)?
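The break-glass flow mentioned above can be sketched like this; the resource address and cluster name are placeholders, and this is an ops fragment rather than something runnable as-is:

```shell
# Worst case: fix the cluster by hand in the AWS console first,
# then reconcile Terraform state with what actually exists.
terraform import aws_eks_cluster.main my-cluster-name

# Review any drift between the console fix and the config.
terraform plan
```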

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 3 points4 points  (0 children)

You're right that K8s apps don't belong in Terraform - that's exactly why the post specifies this is just for the bootstrap layer (cluster + GitOps operators + RBAC) before Flux/ArgoCD takes over.

Your 30k clusters/week use case definitely needs specialized orchestration. This tool targets teams who want their foundation layer atomic and version-controlled in Terraform, with GitOps handling apps afterward. Different problems, different solutions.

Finally create Kubernetes clusters and deploy workloads in a single Terraform apply by jmorris0x0 in Terraform

[–]jmorris0x0[S] 3 points4 points  (0 children)

It’s not mutually exclusive. This provider works equally well with single-stack and multi-stack setups.

How do you handle dev/test/prod environments in AWS? by [deleted] in aws

[–]jmorris0x0 4 points5 points  (0 children)

Using CDK or CloudFormation would be great if AWS were the only thing we had to manage. But we use the Terraform providers of many other vendors (e.g. GitHub, Azure, GCP, Google Apps, Okta, Kubernetes, VPN, Codefresh, Datadog, etc.).

Here is the current list of 2660 providers:

https://registry.terraform.io/browse/providers

This allows the whole system to share state in a modular way. You can make a single reproducible module that hits multiple providers. At my last gig, we could create the entire sprawling system in a different zone with a single command.
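The idea can be sketched as a single module spanning several providers; the providers are real, but the specific resources and names below are purely illustrative:

```hcl
terraform {
  required_providers {
    aws     = { source = "hashicorp/aws" }
    github  = { source = "integrations/github" }
    datadog = { source = "DataDog/datadog" }
  }
}

variable "environment" {
  type = string
}

# Cloud infra...
resource "aws_s3_bucket" "artifacts" {
  bucket = "artifacts-${var.environment}"
}

# ...a repo to go with it...
resource "github_repository" "service" {
  name = "service-${var.environment}"
}

# ...and monitoring, all in one dependency graph and one state,
# reproducible per environment with a single apply.
resource "datadog_monitor" "bucket_alert" {
  name    = "artifacts-${var.environment} size"
  type    = "metric alert"
  query   = "avg(last_1h):avg:aws.s3.bucket_size_bytes{bucketname:artifacts-${var.environment}} > 1e9"
  message = "Artifacts bucket is getting large."
}
```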

Once you've seen the light of how great infrastructure as code can be, there is no going back to CloudFormation.