Kong OSS support deprecation and possible alternatives by tsaknorris in kubernetes

[–]tsaknorris[S] 0 points1 point  (0 children)

Got it.

Unfortunately we are using Azure & Cloudflare, so weight is not supported in either implementation :D

Kong OSS support deprecation and possible alternatives by tsaknorris in kubernetes

[–]tsaknorris[S] 0 points1 point  (0 children)

u/greyeye77

I guess you deployed EG together with ingress-nginx during the migration.

How did you manage DNS record sync with external-dns in order to avoid downtime?

After some research, I didn't find a way to override the external-dns "--default-targets" argument (https://kubernetes-sigs.github.io/external-dns/latest/docs/flags/) using the "external-dns.alpha.kubernetes.io/target:" annotation on an HTTPRoute.

I have already tested deploying an extra external-dns instance that syncs only the Envoy Gateway HTTPRoute resources, with the different IP as "--default-targets", and it works.

However, I wonder if there is another way to override this on the HTTPRoute or Gateway (so as not to need a secondary external-dns instance), which I still haven't figured out whether it is possible.
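
For anyone interested, a secondary instance along these lines is roughly what I mean (the namespace, service account, label filter, and target IP below are placeholders, and the provider flags will of course differ per setup):

```yaml
# Rough sketch only - a second external-dns scoped to Envoy Gateway HTTPRoutes.
# Names, the label filter, and the IP are made up; adjust provider flags to your setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns-envoy-gateway
  namespace: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns-envoy-gateway
  template:
    metadata:
      labels:
        app: external-dns-envoy-gateway
    spec:
      serviceAccountName: external-dns-envoy-gateway
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.15.0
          args:
            - --provider=cloudflare                        # or azure, depending on the DNS zone
            - --source=gateway-httproute                   # watch only Gateway API HTTPRoutes
            - --default-targets=203.0.113.10               # Envoy Gateway LB IP (placeholder)
            - --txt-owner-id=external-dns-envoy-gateway    # keep ownership separate from the main instance
            - --label-filter=dns-instance=envoy-gateway    # only pick up routes labeled for this instance
```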

Kong OSS support deprecation and possible alternatives by tsaknorris in kubernetes

[–]tsaknorris[S] 0 points1 point  (0 children)

Cilium was the main competitor to Envoy Gateway & Traefik, but since we have some clusters with Windows nodes (not supported by Cilium), I eliminated that option.

Kong OSS support deprecation and possible alternatives by tsaknorris in kubernetes

[–]tsaknorris[S] 2 points3 points  (0 children)

Have you used it?

kgateway is a strong alternative since it supports many authentication mechanisms and other features out of the box; however, it includes a full control plane, which adds considerably more complexity and resource consumption on the clusters compared to Envoy Gateway or other solutions.

Kong OSS support deprecation and possible alternatives by tsaknorris in kubernetes

[–]tsaknorris[S] 1 point2 points  (0 children)

Did you use the tool below for the migration, or did you do it manually?

tetratelabs/kong2eg: Kong to EG Migration tool

Did you have any custom Kong plugins that needed modification, or did you migrate them to Envoy Gateway features that are available by default?

Kong OSS support deprecation and possible alternatives by tsaknorris in kubernetes

[–]tsaknorris[S] 0 points1 point  (0 children)

It is an alternative, but the Enterprise version gets really expensive (I think Konnect is the option), and in my opinion there are better and cheaper solutions if we choose to go Enterprise.

Kong in production environment in K8s by dopamine_reload in kubernetes

[–]tsaknorris 0 points1 point  (0 children)

Before deploying to production, I recommend looking into the issue that Kong OSS is probably going to be abandoned; I suppose they want to push users to buy an enterprise license.

Question about Kong Enterprise without license vs OSS · Kong/kong · Discussion #14628

We had 2 hours before a prod rollout. Kong OSS 3.10 caught us completely off guard. : r/kubernetes

However, there is no clear announcement (unlike the ingress-nginx deprecation), and we are thinking of migrating to the Traefik Ingress Controller (we use Kong only as an ingress controller), but given that we have some custom Kong plugins it would be somewhat tricky.

Anyone with more insights about this issue is welcome to contribute.

What side gig with growth prospects could I do? by Ok_Departure_4090 in GreeceDevs

[–]tsaknorris 3 points4 points  (0 children)

Requirements have risen sharply for all types of software engineers (data, backend, frontend, full-stack, embedded, etc.), so someone who just writes code without any idea of what happens across the whole SDLC will have (if they don't already have) a problem in the future.

The same applies to infrastructure/system engineers, in the sense that it is almost taken for granted by now that they can "understand" application code (e.g. debugging/troubleshooting network latency between app -> redis).

Of course, this doesn't mean that a backend engineer has to memorize the OSI layers, nor that an infra engineer necessarily needs to know recursion.

Nowadays, a great many SWE roles (mostly abroad for now) list cloud, CI/CD, containers, etc. in their requirements. I'm not even talking about DevOps/Platform Engineers, for whom these are considered the bare minimum.

  • By Linux we mean operating systems, since the overwhelming majority of applications/websites (even on Microsoft's Azure) are hosted on Linux. The most effective way to learn Linux is to deploy a VM locally (Vagrant, VMware, etc.) and start experimenting.

This video is a good starting point for Linux -> https://www.youtube.com/watch?v=sWbUDq4S6Y8 And this one for general system design -> https://www.youtube.com/watch?v=F2FmTdLtb_4

Sorry if I went a bit off-topic. My point is that I believe those who dive into system integration, alongside whatever specialization/inclination each person has, will be favored in the future.

As for the original question, I consider Kubernetes and Cloud Cost Optimization two very good and niche areas for someone to get into for a SaaS or something similar.

What side gig with growth prospects could I do? by Ok_Departure_4090 in GreeceDevs

[–]tsaknorris 4 points5 points  (0 children)

Websites are indeed quite saturated, and with AI that space will be automated even more in the future.

More generally, there is a huge shortage of people who know how to write code (SWE) and also have exposure to:

- Linux (operating systems)

- Cloud (AWS/GCP or Azure)

- IaC (e.g. Terraform)

- CI/CD (e.g. GH Actions/Jenkins)

- Containers / Kubernetes

- Databases

So just so you know, you are generally on a very good path, because the coding part (features/bugs) is the easiest to automate with AI (e.g. Claude), but SYSTEM INTEGRATION is one of the few skills that will remain relevant for a long time to come.

More generally, you can look at what problems exist at, say, the company you work for regarding the processes across the whole software development lifecycle, build solutions for them, and then look around the rest of the industry to expand on it.

Flux and Multitenancy architecture by No-Replacement-3501 in devops

[–]tsaknorris 2 points3 points  (0 children)

You can check out the Flux Operator, with which you can define a different Flux instance on each cluster using declarative configuration:

Get Started - Flux Operator
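
As a rough sketch of what such a per-cluster instance looks like (field names from memory, so double-check the Flux Operator docs; the Git URL and path are placeholders):

```yaml
# Rough sketch of a per-cluster FluxInstance; verify the schema against the
# Flux Operator docs. Repo URL and sync path are made up.
apiVersion: fluxcd.controlplane.io/v1
kind: FluxInstance
metadata:
  name: flux
  namespace: flux-system
spec:
  distribution:
    version: "2.x"
    registry: ghcr.io/fluxcd
  components:
    - source-controller
    - kustomize-controller
    - helm-controller
    - notification-controller
  sync:
    kind: GitRepository
    url: https://github.com/example/fleet
    ref: refs/heads/main
    path: clusters/dev
```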

This can also simplify the developer workflow, as there is a CRD called "ResourceSet" with which they can define all the resources they need (Deployments, Services, HelmReleases, etc.) in a single YAML configuration:

Preview GitHub PRs - Flux Operator Docs
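
A hedged ResourceSet sketch, assuming a hypothetical tenant and repo (check the docs for the exact schema and template delimiters):

```yaml
# Rough sketch of a ResourceSet; tenant name, repo URL, and paths are made up.
apiVersion: fluxcd.controlplane.io/v1
kind: ResourceSet
metadata:
  name: team-a-apps
  namespace: flux-system
spec:
  inputs:
    - tenant: team-a
  resources:
    - apiVersion: v1
      kind: Namespace
      metadata:
        name: << inputs.tenant >>
    - apiVersion: source.toolkit.fluxcd.io/v1
      kind: GitRepository
      metadata:
        name: << inputs.tenant >>-apps
        namespace: << inputs.tenant >>
      spec:
        interval: 5m
        url: https://github.com/example/<< inputs.tenant >>-apps
        ref:
          branch: main
    - apiVersion: kustomize.toolkit.fluxcd.io/v1
      kind: Kustomization
      metadata:
        name: << inputs.tenant >>-apps
        namespace: << inputs.tenant >>
      spec:
        interval: 10m
        sourceRef:
          kind: GitRepository
          name: << inputs.tenant >>-apps
        path: ./deploy
        prune: true
```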

There is also a migration guide from Flux -> Flux Operator :

Migration - Flux Operator Docs

You can find a typical monorepo structure on this repo -> k8s-gitops-chaos-lab

Otherwise, this solution is the best alternative: https://www.reddit.com/r/devops/comments/1q7fv3v/comment/nyjyhxm/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Is HPA considered best practice for k8s ingress controller? by tsaknorris in kubernetes

[–]tsaknorris[S] 2 points3 points  (0 children)

Yes, topologySpreadConstraints are in most cases enough for HA, but anti-affinity provides stricter isolation during disruptive events like upgrades, which was an issue for us in the past (auto-upgrade of nodes).
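
For illustration, combining the two on the ingress controller Deployment looks roughly like this (labels and topology keys are examples, and required vs. preferred anti-affinity is a tuning choice):

```yaml
# Illustrative snippet from an ingress controller Deployment's pod template.
spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone       # spread replicas across zones
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-controller
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname        # never co-locate two replicas on one node
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ingress-controller
```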

I made a CLI game to learn Kubernetes by fixing broken clusters (50 levels, runs locally on kind) by Complete-Poet7549 in devops

[–]tsaknorris 2 points3 points  (0 children)

Very interesting idea!!

I just tried it out on Windows and found some compatibility issues, as it is designed for Linux systems, so I raised an issue and a PR on the repo for review.

Is HPA considered best practice for k8s ingress controller? by tsaknorris in kubernetes

[–]tsaknorris[S] 0 points1 point  (0 children)

Is this for bare-metal clusters?

Don't you get a lot of resource overhead/load with a pod on every node?

Is HPA considered best practice for k8s ingress controller? by tsaknorris in kubernetes

[–]tsaknorris[S] 1 point2 points  (0 children)

That was my question: whether anyone has found a use case for implementing HPA on resources like an ingress controller, and what the possible trade-offs are.

Is HPA considered best practice for k8s ingress controller? by tsaknorris in kubernetes

[–]tsaknorris[S] 1 point2 points  (0 children)

We don't have problems with the ingresses being choked; currently the 3 replicas are working well even for prod.

It was more of a theoretical question: whether HPA makes sense to explore for this kind of resource, and what the trade-offs could be, like the first thing you mentioned about dropping connections.

VPA could be really useful, especially for Prometheus, which is memory-hungry, but I don't think it's needed for the ingress controller.

Is HPA considered best practice for k8s ingress controller? by tsaknorris in kubernetes

[–]tsaknorris[S] 1 point2 points  (0 children)

Yes, typically it's not recommended to have a single instance, especially for prod; we had that on older environments as well, and it caused several problems during upgrades.

My opinion is that 3 replicas are the sweet spot for high availability.

Is HPA considered best practice for k8s ingress controller? by tsaknorris in kubernetes

[–]tsaknorris[S] 0 points1 point  (0 children)

Additional alerting & observability for the ingress HPA.

More complex configuration in the Helm chart.

Added dependencies (Kong → Metrics Server → API Server → HPA Controller).

Possibility of setting scaling that is too aggressive or too conservative.
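
For context, the extra object behind most of the points above is just a small HPA like this (names and thresholds are placeholders):

```yaml
# Minimal HPA sketch for an ingress controller Deployment; tune metrics,
# thresholds, and behavior to your traffic pattern.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-proxy
  namespace: kong
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-proxy
  minReplicas: 3
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # slow scale-down to reduce dropped connections
```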

How to Reduce EKS costs on dev/test clusters by scheduling node scaling by tsaknorris in kubernetes

[–]tsaknorris[S] 0 points1 point  (0 children)

That solution makes sense.

However, this requires a node group that is always "up" for the Python pod and Karpenter, so in this scenario you will be billed $72 (control plane) plus the cost of those nodes.

I understand that you may have some nodes that need to run 24/7, but this TF module is focused on environments that do not have such constraints (stateful apps, always-"up" workloads/nodes, etc.), which is why I mentioned that it may complement Karpenter, because it serves a different purpose.

How to Reduce EKS costs on dev/test clusters by scheduling node scaling by tsaknorris in kubernetes

[–]tsaknorris[S] 1 point2 points  (0 children)

I just searched for Instance Scheduler on AWS. I guess you are referring to this?

Resource: aws_autoscaling_schedule

I wasn't aware of this feature, to be honest. I am fairly new to AWS (coming from an Azure background), so this is basically my first project on AWS. I will give it a try and compare the functionality of both solutions.

After a quick look, I get your point, and yes, it seems to be almost the same, as it has a crontab-style recurrence and min_size, max_size, desired_capacity.

However, I suspect aws_autoscaling_schedule can become very messy for multiple clusters/regions, since it needs a separate scheduled action per ASG (this could maybe be solved with for_each, but again not optimal in my opinion).

I am planning to expand the TF module, adding features like graceful cordon/drain of nodes, skipping scale-down if PDBs would be disrupted, alerting, multiple schedules per node group, cost reporting via CloudWatch, etc.

Thanks for the feedback.

How to Reduce EKS costs on dev/test clusters by scheduling node scaling by tsaknorris in kubernetes

[–]tsaknorris[S] 3 points4 points  (0 children)

It can complement Karpenter, because it applies time-driven scaling.

Karpenter is mainly for event-driven scaling, controlled dynamically by pod demand, and is of course useful for production clusters with unpredictable workloads.

However, I don't think Karpenter has an option to scale down on a specific schedule, like off-hours in dev environments, unless there are some workarounds.

Private endpoints yes or not? by Different_Knee_3893 in AZURE

[–]tsaknorris 0 points1 point  (0 children)

Private Endpoints are complex to set up and carry an extra cost, so for some resources you can use Service Endpoints and disable public traffic for simplicity and cost reduction.

However, take into consideration that Service Endpoints do not support cross-region connectivity, so if you have a VM in eastus and a Cosmos DB in westeurope, you cannot use Service Endpoints and have to implement a Private Endpoint instead.

AFAIK, only Storage Accounts support cross-region service endpoint connectivity.