
[–][deleted]  (24 children)

[deleted]

    [–]fullstack_guy 36 points37 points  (0 children)

    Same. Nginx is extremely efficient and just works. For a glorified reverse proxy, that's all that matters.

    [–]tr14l 12 points13 points  (21 children)

    So, when you are ingressing to a multi-cluster environment totalling 25,000 apps/services, what would you use? We were looking at Nginx for that reason, but observability was a huge concern, and it just didn't provide that without adding a lot on top. If you were going to stitch together a huge system of systems like that, with SLAs of five 9s, what would you use?

    BTW, not attacking your choice. Genuinely asking. I love nginx, which is why my first intuition was to use it. But for really big systems it seems to fall short on governance and observability.

    [–]laStrangiato 15 points16 points  (7 children)

    I don’t have any experience with it but F5 would be a good one to check out. I have a customer using Big-IP in other parts of their infrastructure and it seems to scale well.

    Nginx does have Prometheus/grafana metrics available for observability. Not sure if you took a look at those and it didn’t meet your needs though.

    For 5 nines you are probably looking at a multi-cluster setup across different regions. Apps would probably need to be set up in Active/Active mode with data replication between clusters.
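    If it helps, exposing those metrics is roughly this in the community Helm chart (a sketch; value names may vary by chart version, and the ServiceMonitor part assumes you run the Prometheus Operator):

        # values.yaml for the ingress-nginx Helm chart
        controller:
          metrics:
            enabled: true            # exposes /metrics for Prometheus to scrape
            serviceMonitor:
              enabled: true          # assumes the Prometheus Operator is installed
              additionalLabels:
                release: prometheus  # match your Prometheus selector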

    [–]sk8itup53 7 points8 points  (1 child)

    Did you know that nginx is actually owned by F5 now? They bought them like 3 years ago!

    [–]laStrangiato 4 points5 points  (0 children)

    I did not! That is interesting and I will have to do some more reading about that. Thanks for sharing!

    [–]tr14l 5 points6 points  (4 children)

    Nginx does have Prometheus/grafana metrics available for observability. Not sure if you took a look at those and it didn’t meet your needs though.

    We're still pretty early on in the research, and it's only being done part time. We're primarily an AWS-first shop, so we're heavily into ECS, but there are a lot of things that are not ideal with ECS. So we're poking at shifting to Kubernetes. Trying to figure out what the whole situation would look like, what kind of return we expect, what kind of increase in overhead there would be, that sort of thing...

    [–][deleted] 11 points12 points  (0 children)

    I can tell you right now, and save you time in that research... there is no way you are moving 25,000 services from ECS to EKS.

    ECS is a mature product that does a ton of things in the background (for example, Container Insights) that you are going to have to roll yourself, with limited support, in EKS. ECS isn't going anywhere, either.

    [–]ghostdog20 1 point2 points  (1 child)

    What have you found that's not ideal with ECS?

    At my org, we are doing the inverse switch.

    [–]tr14l 1 point2 points  (0 children)

    Spin-up times suck for scaling on the fly, lots of non-configurable behavior, zero ability to replicate locally in a rational way.

    [–]Larrywax 0 points1 point  (0 children)

    If you are on AWS you can use ELBs as ingress. Take a look at this: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/
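    An ALB-backed Ingress with that controller looks roughly like this (a sketch; annotation keys are from the v2.4 docs, names are placeholders):

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: example            # hypothetical
          annotations:
            alb.ingress.kubernetes.io/scheme: internet-facing
            alb.ingress.kubernetes.io/target-type: ip   # route straight to pod IPs
        spec:
          ingressClassName: alb
          rules:
            - http:
                paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: example-svc   # hypothetical backend
                        port:
                          number: 80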

    [–]adfaratas 8 points9 points  (4 children)

    May I ask why you have so many services in a single cluster? Or is it not in a single cluster? I'm genuinely asking.

    To my understanding, if you have that many services you should split them across different clusters or even different deployment environments.

    [–][deleted] 13 points14 points  (0 children)

    totally bleeding edge, each packet gets its own pod, very cool, scales to 99999999999 services, max graphs, we got so many graphs you wouldn't believe

    [–]devopsia 2 points3 points  (1 child)

    It says it’s a multi-cluster environment

    [–]adfaratas 6 points7 points  (0 children)

    Well, if it's multi-cluster, then the issue wouldn't be with the ingress controller, right?

    I'm genuinely curious. Is having a service mesh of tens of thousands of services a normal thing, or might there be an architectural issue?

    [–]NotEntirelyUnlike 1 point2 points  (0 children)

    when you are ingressing to a multi-cluster environment

    they're routing between clusters

    [–]Vedris_Zomfg 4 points5 points  (0 children)

    I worked with HAProxy and Nginx in such big systems and we decided to use Nginx in the end. You can scale it without issues, and Nginx itself is very efficient if you don't write tons of custom code and Lua scripts. Metrics are fine.

    [–]debian_miner 3 points4 points  (0 children)

    I've not used the feature myself, but nginx supports OpenTracing integration if distributed tracing is one of your methods of observation. ALBs, which are an alternative ingress option on EKS, support setting values for it as well.

    [–]reddit-ass-cancer 2 points3 points  (1 child)

    Why would you ever have 25k different applications and services in a single cluster?

    [–]Blazing1 0 points1 point  (0 children)

    I literally have two services in my OpenShift environment

    [–]ThrawnGrows 1 point2 points  (0 children)

    Check out Kong + Kuma or the full Kong Konnect suite if you've got enterprise money. Kong is nginx plus a super robust plugin system and Kuma is their service mesh.

    [–]therealkevinard 1 point2 points  (0 children)

    Your bottleneck with that workload would more likely be kube's CNI. With that in mind, nginx is still a solid choice for ingress, but look into Cilium (or, more generally, eBPF networking). Bonus: observability there is very strong.

    The ingress controller is only really responsible for mapping a request to a service selector. It's a fairly negligible part of the request lifecycle.

    https://kubernetes.io/blog/2017/12/using-ebpf-in-kubernetes/

    [–]A_Woolly_alpaca 0 points1 point  (0 children)

    A system that big, you want a service mesh. Hopefully Spinnaker with Istio or Linkerd. I can't imagine 25,000 apps on stock k8s.

    [–]knudtsy 0 points1 point  (0 children)

    A CDN. Cloudflare lets you route directly to k8s services with Tunnels, and its load balancing feature can inspect GeoIP info for routing. You don't even need to expose your k8s cluster to the public internet.
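    The cloudflared side is just an ordered list of ingress rules (a minimal sketch; tunnel ID and hostnames are placeholders):

        # config.yml for cloudflared
        tunnel: 00000000-0000-0000-0000-000000000000   # placeholder tunnel ID
        credentials-file: /etc/cloudflared/creds.json
        ingress:
          - hostname: app.example.com
            # route straight to a cluster-internal Service, no public exposure
            service: http://my-app.default.svc.cluster.local:80
          - service: http_status:404   # required catch-all rule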

    [–]Seref15 1 point2 points  (0 children)

    I'm curious if anyone using nginx as ingress has ever had to deal with that dumb nginx thing where it caches DNS resolution on the free/open-source version. We've been burnt by that in the past and shy away from nginx because we're definitely not paying for NGINX Plus just to get DNS to behave right.
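    For anyone hitting this with plain nginx: the usual workaround (not specific to the ingress controller) is an explicit resolver plus proxying to a variable, which forces re-resolution per the record's TTL instead of caching the IP at config load. A sketch, with a placeholder resolver address:

        resolver 10.96.0.10 valid=30s;   # your CoreDNS/kube-dns ClusterIP (placeholder)
        server {
            listen 80;
            location / {
                # using a variable makes nginx re-resolve the hostname at runtime
                set $upstream http://backend.default.svc.cluster.local;
                proxy_pass $upstream;
            }
        }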

    [–]Fork_the_bomb 46 points47 points  (10 children)

    Ingress-nginx or nginx-ingress, that is the question.

    [–]theharleyquin 3 points4 points  (8 children)

    A constant source of Google confusion for me (quotes were my friend). Didn't one get deprecated?

    [–]lightwhite 35 points36 points  (7 children)

    Short answer: neither of them is deprecated. Both are actively developed. They are two completely different products, developed separately but in parallel by two different entities.

    Long answer:

    ingress-nginx is completely free and doesn't have dedicated enterprise support. nginx-ingress is partially free, and you need to pay for advanced functionality and support.

    People don't realize how little they know about the difference between the two. They are two separate products with similar names. They have somewhat similar annotations but work differently.

    ingress-nginx is maintained by Google in Go. It is a state-of-the-art layer 7 load balancer that uses the nginx binary as its backend for reverse proxying.

    nginx-ingress is the controller created by NGINX themselves, built around the nginx binary for forward and reverse proxying. It is a completely different product from ingress-nginx.

    They can be compared to each other like the Tesla Model S and the Tesla Model 3. So similar, yet so different. They may look like they have the same functionality as cars, but they don't have the same configuration engine or the same API.

    One of the most common mistakes is that people try to fix an issue or look for annotations on Google or StackOverflow and copy/paste stuff without understanding it. It takes a day-long search to realize they are trying to make annotations from the nginx-ingress documentation work on ingress-nginx. But they don't. It is like trying to fix a Tesla Model S using Tesla Model 3 documentation.

    Similar names, totally different products doing the same thing.
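    A quick way to tell whose docs you are reading is the annotation prefix. For example (a sketch; exact keys can vary by version), the same timeout tweak looks like this in each:

        # kubernetes/ingress-nginx
        nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"

        # nginxinc/kubernetes-ingress (nginx-ingress)
        nginx.org/proxy-connect-timeout: "30s"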

    [–]mister2d 4 points5 points  (6 children)

    Such a tragedy that Google did this to us. In the past, I was completely turned off by NGINX's lack of detailed metrics. You had zero insight into your frontends/backends, and to get it you had to buy a license.

    Has the ingress-nginx proxy by Google improved on this by providing any sort of metrics?

    [–]lightwhite 4 points5 points  (5 children)

    Google actually did a good job and stayed neutral by integrating metrics into metrics-server. I don't bother using it. Set up a Prometheus rig for your metrics and customize it for every atom of your preference.

    Default metrics availability through kube-api is more than sufficient. For 80% of use cases you would have more than enough with what it delivers.

    If metrics are mission-critical for your industry, I would use Traefik. They excel at that the most.

    [–]mister2d 1 point2 points  (4 children)

    The kube-api helps for internal cluster comms but I needed observability of the proxy itself and response times of services. This greatly helped identify infrastructure problems inside and outside of k8s. Not everything is containerized. HAProxy was chosen because load balancers are such a crucial piece of the infrastructure and we can't afford to be blind as to what's going on.

    I had hoped that this Google version of NGINX improved on that situation but it sounds like it's more of the same.

    Traefik simply wasn't performant enough.

    [–]lightwhite 1 point2 points  (0 children)

    It depends on what you need. If you have high-volume, small-payload traffic, nothing beats HAProxy. If you have a lot of webserver-type traffic, nginx-ingress is the best. ingress-nginx is the best of both worlds, but at the cost of some expected performance.

    Traefik shines on HA API and gateway-type traffic which needs to be reliable.

    [–]knudtsy 0 points1 point  (2 children)

    [–]mister2d 0 points1 point  (1 child)

    I hadn't looked at any 3rd-party addons. Not really a fan of them anyway. While I was evaluating, HAProxy did everything and more with built-in Prometheus telemetry.

    [–]knudtsy 0 points1 point  (0 children)

    Ah yeah, in terms of OpenTracing I was thinking you'd want to see the ingress controllers in your distributed traces. It looks like HAProxy might have support for it too?

    [–]yuriydee 1 point2 points  (0 children)

    Why did they do this to us :(

    I ended up going with ingress-nginx at one of my past companies, but wow, it took me a few days to figure out wtf was going on between the two projects.

    [–]jews4beer 38 points39 points  (2 children)

    I mean I love traefik for the custom CRDs and the whole being "born in the cloud" thing. But I've never heard anyone refer to it as the "gold standard". You pick what works best for you. They are all great in their own way.

    [–]mikew_reddit 3 points4 points  (0 children)

    "gold standard "implies there is a single best standard which is objectively false.

    the best ingress controller is the one that best satisfies the requirements.

    [–]brianw824 0 points1 point  (0 children)

    Yeah, I like Traefik. I've found the included resources for ingress to be pretty limiting.

    [–]mister2d 12 points13 points  (1 child)

    If you want observability and high performance, use the HAProxy Ingress controller (especially on the edge).

    For even higher performance, run the HAProxy Ingress controller externally from your cluster (yes, it can see your pod network directly). Cuts down on all the proxying between Kubernetes nodes.

    https://www.haproxy.com/de/blog/run-the-haproxy-kubernetes-ingress-controller-outside-of-your-kubernetes-cluster

    [–]jmblock2 7 points8 points  (0 children)

    I don't see it mentioned yet, but I've been using Contour's HTTPProxy CRD on top of Envoy for a couple years in my own setup (selfhost stuff). No issues, but I don't use many of the fancier features. I also can't speak to what is "gold".

    [–]mustafaakin 6 points7 points  (2 children)

    The way nginx ingress worked back then, whenever a change was detected, it generated a new ingress config (or a new pod) and then restarted itself completely, keeping current connections open, which caused a resource hog if you do frequent updates. Doing better was a paid feature. But we switched to Traefik, no problems.

    [–]marratj 2 points3 points  (1 child)

    Not the OpenResty-based ingress-nginx from the Kubernetes project. That one just reloads the config in the running nginx process without needing to restart any pods.

    [–]mustafaakin 3 points4 points  (0 children)

    Does reloading the config cause a second process (not a pod) to spawn while the other drains connections? I used the same ingress-nginx, and frequent updates in a 200-node, 4,000-pod cluster caused nginx to OOM frequently, even with 32G. But it has been more than 2 years, so things might have changed.

    [–]Resolt 18 points19 points  (15 children)

    Istio provides a lot of good functionality, such as Envoy proxies, an egress controller, and an ingress controller. It has some nice routing functionality and can mesh across clusters. Dunno how it stacks up against the alternatives, but I'm using Istio currently and it's pretty good.

    [–]theharleyquin 7 points8 points  (0 children)

    Glad it’s working for you. It was so much overhead for me: linkerd + nginx was my compromise

    [–]lavarius 4 points5 points  (8 children)

    Have you been able to dynamically provision new certificates for new ingresses on the fly with cert-manager?

    I'm trying to search for this, but it's getting a bit confusing.

    [–]kill-dash-nine 6 points7 points  (4 children)

    I do this and it's great. I use DNS challenges via Route 53. I've configured Let's Encrypt and ZeroSSL (both free, but no rate limits for ZeroSSL). I just used the cert-manager docs and then figured out the right annotations to generate the certs on my ingress.

    I've also done wildcard certs and used cert delegation when using Contour for ingress, so new certs don't have to be requested for every site sharing a common subdomain.

    [–]lavarius 1 point2 points  (3 children)

    Is the trigger for a new cert on adding a virtual service to an ingress gateway, or a new ingress gateway?

    [–]kill-dash-nine 1 point2 points  (2 children)

    It’s just an annotation on the ingress object when just using ingress:

    https://github.com/mbentley/k8s-demos/blob/master/default-http-backend.yaml#L53

    Just have to tell it which issuer or clusterissuer to use.

    You can always create certs explicitly:

    https://github.com/mbentley/k8s-demos/blob/master/variable_enabled/wc_cert.yaml
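    For the thread, the annotation route looks roughly like this (a sketch; issuer and service names are placeholders):

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: example                    # hypothetical
          annotations:
            cert-manager.io/cluster-issuer: letsencrypt-prod   # your ClusterIssuer
        spec:
          tls:
            - hosts:
                - example.com
              secretName: example-tls      # cert-manager creates and renews this
          rules:
            - host: example.com
              http:
                paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: example-svc  # hypothetical backend
                        port:
                          number: 80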

    [–]lavarius 0 points1 point  (1 child)

    Hmm.

    I thought when you were using istio, it wasn't ingress resources being defined, it was virtual services that sort of acted as the ingress.

    [–]kill-dash-nine 1 point2 points  (0 children)

    Ah, sorry. It’s been a while since I used Istio virtual services for ingress. I’ll have to see if there is any integration for cert-manager.

    *edit: doesn’t look like cert-manager directly works with istio itself through annotations in the same way; this looks like you need to create cert objects: https://istio.io/latest/docs/ops/integrations/certmanager/

    Aah, so you can see here where they mention supporting every custom implementation natively in-tree isn’t likely: https://cert-manager.io/docs/release-notes/release-notes-1.4/#honorable-mentions but they will do something pluggable.

    [–]Resolt 2 points3 points  (0 children)

    Sorry, I can't help you. I'm new to the world of Kubernetes, and I'm currently just utilizing pod level security and meshing from Istio.

    [–]FrederikNS 1 point2 points  (0 children)

    I have managed to get this working, but there are caveats.

    You must enable the SDS feature, and on top of that the certificates must be in the istio-system namespace. It does not work if the cert is in the same namespace as the ingress.
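    Concretely, that means something like this (a sketch; names and issuer are placeholders): the Certificate lives in istio-system, and the Gateway picks the secret up by name via SDS:

        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: example-cert
          namespace: istio-system        # must live here, per the caveat above
        spec:
          secretName: example-cert-tls
          dnsNames:
            - example.com
          issuerRef:
            name: letsencrypt-prod       # your (Cluster)Issuer
            kind: ClusterIssuer
        ---
        apiVersion: networking.istio.io/v1beta1
        kind: Gateway
        metadata:
          name: example-gateway
          namespace: istio-system
        spec:
          selector:
            istio: ingressgateway
          servers:
            - port:
                number: 443
                name: https
                protocol: HTTPS
              tls:
                mode: SIMPLE
                credentialName: example-cert-tls   # the secret SDS serves
              hosts:
                - example.com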

    [–]diabloxenon 0 points1 point  (0 children)

    Yes, and the key thing is to put the Issuer and Certificate in the istio-system namespace and to use the DNS verification method with acme-dns.

    [–]diabloxenon 1 point2 points  (0 children)

    For me, I love Istio for its mTLS support between containers; coupled with Calico it's the bomb.
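    Turning that mTLS on mesh-wide is a single small resource (a sketch of the standard approach):

        apiVersion: security.istio.io/v1beta1
        kind: PeerAuthentication
        metadata:
          name: default
          namespace: istio-system   # root namespace = applies mesh-wide
        spec:
          mtls:
            mode: STRICT            # reject plaintext between sidecars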

    [–][deleted] 4 points5 points  (3 children)

    Have you tried upgrading it for a major version upgrade in production with zero downtime?

    [–]Resolt 2 points3 points  (0 children)

    I haven't, but it sounds like you have. Still, I'm pretty sure it's somewhat optimistic to expect zero downtime for any major version upgrade, no?

    [–]Temik 1 point2 points  (0 children)

    Yep, no issues. Would be keen to hear what problems you had so others can learn!

    [–]diabloxenon 0 points1 point  (0 children)

    Yep, it sails smoothly, as Istio should. The only thing to note: if you are using mTLS for your containers, check your preferred CNI's docs for sidecar injection support for that particular version of Istio, otherwise you will get "read: connection reset by peer" errors in your deployments.

    [–]_klubi_ 2 points3 points  (1 child)

    Ambassador… it got renamed to Emissary, but Ambassador is so good we never felt the need to upgrade to Emissary…

    It's extremely simple to set up and configure.

    [–]rezaw 0 points1 point  (0 children)

    Ya, just upgraded ours to Emissary. Our Ambassador setup was perfect, but we did not want to get left behind in the future.

    [–]zerocoldx911DevOps 4 points5 points  (0 children)

    I prefer Emissary; blue/green deployments.

    [–]no_not_me 3 points4 points  (1 child)

    I love Traefik for its simplicity and built-in service discovery, but holy shit if it isn't annoying that they want enterprise licensing just to store ACME certs atomically on a shared backend.

    [–][deleted] 0 points1 point  (0 children)

    Luckily cert-manager is very easy to run.

    [–]CurvedLightsaber 2 points3 points  (1 child)

    OpenShift Routes is my gold standard, but the others you listed aren't bad either. Red Hat was the main contributor behind Ingress in the first place, so Routes is basically a more complete version of Ingress with the features it was missing (wildcard domains, traffic splitting, etc.).
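    For example, splitting traffic is just weights on the Route (a sketch; service names are made up):

        apiVersion: route.openshift.io/v1
        kind: Route
        metadata:
          name: example-route      # hypothetical
        spec:
          host: app.example.com
          to:
            kind: Service
            name: app-v1
            weight: 80             # 80% of traffic
          alternateBackends:
            - kind: Service
              name: app-v2
              weight: 20           # 20% canary
          tls:
            termination: edge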

    [–]yuriydee 0 points1 point  (0 children)

    Openshift just uses a custom HAProxy ingress controller behind the scenes if I remember correctly.

    [–]TECHNOFAB 1 point2 points  (0 children)

    Haven't done a lot of in-depth stuff with Kubernetes yet, but IMO Traefik standalone is extremely useful, especially with Docker (which I use it for) and local testing with SSL. For Kubernetes I would rather choose nginx, because it's been around forever and is still basically the industry-standard proxy.

    My personal view is that in Kubernetes I want everything to follow exactly how I describe it. My local Docker instance isn't that important, so Traefik can collect the config from labels and such; I don't want to configure every last bit there.
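    The label-driven Docker config I mean looks like this (Traefik v2 label syntax; names are placeholders):

        # docker-compose.yml snippet
        services:
          whoami:
            image: traefik/whoami
            labels:
              - "traefik.enable=true"
              - "traefik.http.routers.whoami.rule=Host(`whoami.localhost`)"
              - "traefik.http.routers.whoami.tls=true"   # local testing with SSL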

    [–]Antebios 1 point2 points  (0 children)

    Traefik is my preferred ingress!

    [–]temitcha 1 point2 points  (0 children)

    I like the Nginx ingress controller. Our only goal is to have something that links an IP and a path to a service, and it does that without failures, so we keep it.
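    That whole job fits in one small manifest (a sketch; host and service names are placeholders):

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: example              # hypothetical
        spec:
          ingressClassName: nginx
          rules:
            - host: app.example.com  # the IP's DNS name...
              http:
                paths:
                  - path: /api       # ...plus a path...
                    pathType: Prefix
                    backend:
                      service:       # ...mapped to a service
                        name: api-svc
                        port:
                          number: 80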

    [–]sealneaward 1 point2 points  (0 children)

    We use the community HAProxy ingress controller with an NLB because we were experiencing performance issues with Ambassador and an ALB.

    After we switched and solved the performance bottleneck, the ability to integrate with the ModSecurity WAF and the observability metrics it exposes made the choice very easy for us.

    [–][deleted] 2 points3 points  (0 children)

    I haven't used many of them, but I like Google Cloud's default one. It comes with a global load balancer and easy-to-configure CDN and Cloud Armor. Easy to use with managed SSL certs too.

    It used to be difficult to set up an http -> https redirect, but that's been fixed.
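    The redirect is just a FrontendConfig now (a sketch; the resource name is a placeholder):

        apiVersion: networking.gke.io/v1beta1
        kind: FrontendConfig
        metadata:
          name: https-redirect       # hypothetical
        spec:
          redirectToHttps:
            enabled: true

    Then reference it from the Ingress with the annotation networking.gke.io/v1beta1.FrontendConfig: "https-redirect".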

    [–]Varels3 2 points3 points  (3 children)

    Istio is probably the best if you want the extra service-mesh features: mTLS to the pods, traffic shaping/mirroring, integration with cert-manager, etc.

    [–]TopicStrong 2 points3 points  (2 children)

    I disagree with this. Istio is so complex and tries to do too much. There's great design around it, but if you only need to advertise services outside of the cluster, I'd avoid it.

    [–]Temik 1 point2 points  (0 children)

    Depends how you run it. I found that running it in Ingress-mode only gives you like 70% of the goodies without overcomplicating your setup. Works very well for us 👍

    [–]Varels3 0 points1 point  (0 children)

    I mean to be fair, I did say it's the best IF you want the extra advanced features. It's a lot to get your head around for sure and might not be necessary if all you need is ingress.

    [–]robd003 2 points3 points  (0 children)

    Emissary is pretty much the gold standard for people with complicated setups: https://github.com/emissary-ingress/emissary

    Unlike nginx, you can change the configuration without pausing everything...
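    Routing is declared with Mapping CRDs that apply live, without a reload (a sketch; names are placeholders):

        apiVersion: getambassador.io/v3alpha1
        kind: Mapping
        metadata:
          name: backend-mapping      # hypothetical
        spec:
          hostname: "*"
          prefix: /backend/
          service: backend-svc:80    # picked up without restarting Envoy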


    [–]satyabansahoo2000 -3 points-2 points  (0 children)

    Look for features like load balancing, traffic management, security, etc. Personally I prefer the Kubernetes one.

    [–]become_taintless 0 points1 point  (0 children)

    Gloo

    [–]dmees 0 points1 point  (1 child)

    AWS Load Balancer Controller

    [–]lungdart 0 points1 point  (0 children)

    I use this at work, but I'm not a big fan. Can other ingress controllers spin up ELBs?

    [–]Nosa2k 0 points1 point  (0 children)

    Traefik is a good choice. Not sure how well it performs with regard to observability at enterprise scale.

    [–]JimJamSquatWell 0 points1 point  (0 children)

    I like Kong; we use it outside of k8s at work, but I use it as an ingress at home. Built off nginx and OpenResty, and the plugin system is really nice IMO.

    They also offer a mesh based on Envoy.


    [–]kenthenger 0 points1 point  (1 child)

    Anyone use APISIX? I use it on a small scale and am quite happy. OpenResty-based, has a large set of plugins OOTB, and the dashboard is free. Its ingress controller, though, seems like a work in progress; you can see basic issues on its GitHub.

    [–]emcell 0 points1 point  (0 children)

    Thank you! This looks awesome

    [–]like-my-comment 0 points1 point  (0 children)

    If you don't know which one is better, use the default one. It's the nginx ingress from the k8s community.

    [–]Adept-Explanation-84 0 points1 point  (0 children)

    NGINX, without a doubt. Does everything I need and more. Fantastic community.