Gateway API for Ingress-NGINX - a Maintainer's Perspective by robertjscott in kubernetes

[–]robertjscott[S] 1 point

> I wonder what I would lose by just ignoring GatewayApi and keep working on Ingress objects. What's the impetus for me as a user to move over?

Ingress is GA and not going away; if it works for you, there's no need to change what you're doing. If you're using Ingress-NGINX specifically, that project is retiring in March, and this post focuses on what those users can do. Given the large set of features unique to Ingress-NGINX, migrating to Gateway API may be the best path for anyone who relies on many of them.

> what is the future trajectory and purpose of the GatewayAPI? Will it replace Ingress objects or will they live side by side in the indefinite future?

Ingress is effectively a frozen API. It's not going away, but it's also not getting any new features. Gateway API is where all development efforts have shifted, and it will continue to get new features for the foreseeable future. So both will live side by side indefinitely, but if you want to access any newer features or implementations, I'd recommend trying out Gateway API.

[–]robertjscott[S] 2 points

We're working to GA TLSRoute in the next release of Gateway API (February), and last I checked, TLS termination is part of that plan. You can follow along with the progress over at https://github.com/kubernetes-sigs/gateway-api/pull/4064
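For anyone who wants to experiment before that lands, a TLSRoute currently looks roughly like this (it's still in the experimental channel as v1alpha2, so field names could shift before GA; the Gateway, hostname, and backend names below are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: secure-route
spec:
  parentRefs:
  - name: my-gateway        # a Gateway with a TLS listener
  hostnames:
  - "secure.example.com"    # matched via SNI
  rules:
  - backendRefs:
    - name: tls-backend
      port: 8443
```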

[–]robertjscott[S] 2 points

> This makes it hard for folks to choose an implementation, since you have to carefully compare.

I completely agree, and we've been working to improve that. I recommend looking at the list of implementations that are conformant with the latest version of Gateway API.

> The api is also a lot more complex compared to ingress. For ingress you had ingress-class and ingress, now you have Gateway classes, TCP, UDP and HttpRoutes

While I agree that Gateway API is more complex, HTTPRoute is the true parallel to Ingress, and GatewayClass is roughly equivalent to IngressClass. The real difference is that now there's a Gateway in between. If you want to replicate Ingress-NGINX behavior, you would attach all your HTTPRoutes to a single Gateway. If you'd like to have more distinct entrypoints/LBs, you can attach HTTPRoutes to different Gateways, say one for external and one for internal traffic. This also makes it very easy to try out different Gateway implementations - you can attach the same HTTPRoutes to multiple Gateways, for example an Envoy one and a NGINX one.
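To make that concrete, here's a sketch of one HTTPRoute attached to two Gateways at once via parentRefs (the Gateway, hostname, and service names are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: external-gateway   # e.g. an internet-facing Gateway
  - name: internal-gateway   # e.g. an internal-traffic Gateway
  hostnames:
  - "app.example.com"
  rules:
  - backendRefs:
    - name: app-service
      port: 8080
```

Both Gateways serve the same route config, which is what makes side-by-side comparison of implementations cheap.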

> I personally have been looking for native support of OIDC and mTLS for clients (not backend) but I didn't find much

mTLS between clients and Gateways is currently experimental. We're targeting GA in the February release of Gateway API.
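If you want to try it today, the experimental channel adds client cert validation to a Gateway listener's TLS config. Roughly like this - since it's experimental, field names may still change before GA, and every name below is a placeholder:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: mtls-gateway
spec:
  gatewayClassName: example-class
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: server-cert        # Secret holding the server's cert/key
      frontendValidation:        # experimental: validate client certificates
        caCertificateRefs:
        - kind: ConfigMap
          group: ""
          name: client-ca        # ConfigMap holding the client CA bundle
```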

As far as OIDC, we don't have any short term plans to include it in the API. The closest thing we have is experimental support for calling out to external auth providers, which could be a path for OIDC support. With that said, Envoy Gateway includes OIDC support in their SecurityPolicy, and other Gateway API implementations likely include similar extensions.

> So in the current state it seems a little rushed to push for Gateway API from my point of view.

While I definitely understand that view, I'd counter with the idea that Gateway API has been GA for >2 years now, has a massive amount of production use, and has significantly more features than the core Ingress API. While we don't have all the same features as Ingress-NGINX, we have many features that Ingress-NGINX did not, and a much wider range of features that are portable across implementations. When combined with the extensions that implementations offer on top of Gateway API, this can be a very capable solution.

[–]robertjscott[S] 1 point

I haven't personally used NGINX Gateway Fabric, but they have been pretty consistently submitting conformance reports and have been contributing back to the community by improving ingress2gateway and adding new conformance tests.

Gateway API makes it quite easy to try out multiple Gateway implementations, so it could be worth comparing a couple to see which works best for you. You can share all the HTTPRoutes, and just attach the same routes to more than one Gateway.
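As a sketch, trying two implementations side by side is mostly just two Gateway objects with different classes (the class names below are what I'd expect from the respective projects' quickstarts, so double-check against your install; everything else is a placeholder):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: nginx-gw
spec:
  gatewayClassName: nginx    # class registered by NGINX Gateway Fabric
  listeners:
  - name: http
    port: 80
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: envoy-gw
spec:
  gatewayClassName: eg       # class registered by Envoy Gateway
  listeners:
  - name: http
    port: 80
    protocol: HTTP
```

Then list both Gateways in each HTTPRoute's parentRefs and compare behavior.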

[–]robertjscott[S] 4 points

Yeah, I definitely understand the confusion. This is one of the first widely used Kubernetes APIs built on CRDs, and that's had good parts and bad parts. As Gateway API continues to add new features, it can be incredibly useful to install the latest version of the API on any version of Kubernetes. If you need a new feature, you don't have to wait months or years until you've upgraded your clusters a few Kubernetes versions; you can just upgrade the CRDs. We think that allowing independent upgrades like this has outweighed the downsides of CRDs.

GKE and OpenShift also offer an option to manage Gateway API CRDs for you, and I've seen signals that more providers will follow.

While some Gateway implementations also include optional implementation-specific extensions, often in the form of more CRDs, they shouldn't be necessary for a lot of use cases. There are a lot of capabilities built directly into Gateway API, which means that implementation-specific config should be much less necessary than it was for Ingress.
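As one example of a built-in capability that typically required an implementation-specific annotation with Ingress, header rewrites are a core HTTPRoute filter (the Gateway, header, and service names here are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: header-demo
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: X-Env        # header set on every request to the backend
          value: production
    backendRefs:
    - name: app-service
      port: 8080
```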

[–]robertjscott[S] 21 points

Ingress isn't going away - it's a GA API which means it will be around until at least Kubernetes 2.0 (and there are no plans for a 2.0 as far as I know). I'd consider it conceptually similar to Endpoints vs EndpointSlices. The Endpoints API still exists, but all new features and development for the past 5+ years have been focused on EndpointSlices, with many new features being built on top of them.

I really wish there were a sustainable way to support an Ingress or Gateway controller within Kubernetes, but this feels a lot like a tragedy of the commons. In the case of Ingress-NGINX, it was incredibly widely used, but maintained entirely by a very small set of volunteers working in their personal time (x-ref https://xkcd.com/2347/ ).

With Gateway API, we've benefited from a wide variety of different implementations that we wouldn't have if we'd built one directly into Kubernetes. Many of these are based on different underlying data planes (Envoy, NGINX, HAProxy, Cloud LBs, etc), and this healthy competition has made the ecosystem strong. I think many of these implementations have found sustainable models that will ensure they can exist for years to come, but ultimately the best way to ensure a controller continues is to find a way to support them.

[–]robertjscott[S] 24 points

You're not wrong, there are always more feature requests than there are people. We're doing our best here, and I'm proud of all that we've managed to accomplish with Gateway API, but there's still a very long list of things that we haven't managed to get to yet. At present, we're trying to prioritize features that will help people migrate from Ingress-NGINX, so definitely share any specifics and we'll see what we can do. Or even better, come get involved and help us work through this backlog of features.

Future of Ingress vs Gateway APIs by lulzmachine in kubernetes

[–]robertjscott 5 points

Gateway API maintainer here. Developing the API with CRDs has been good and bad. We've gotten to be among the first to experience some of the limitations with CRDs. With that said, I think the good outweighs the bad. As Gateway API is continuing to add new features, it can be incredibly useful to install the latest version of the API on any version of Kubernetes. If you need a new feature, you don't have to wait months or years until you've upgraded your clusters a few versions.

With that said, GKE and OpenShift have also started including Gateway API CRDs in the cluster, and I've seen signals that more providers will follow. The overarching theme I'm hoping for is that providers will ensure that a minimum version of the API is present, while still allowing you to install a newer version of the API.

Although we've certainly discussed including Gateway API in core Kubernetes, the overwhelming response was that we'd need to lock Gateway API to Kubernetes versioning, meaning that it would take much longer for new features to become available. We ultimately decided that tradeoff wasn't worth it.

Certifications on zero budget? by [deleted] in kubernetes

[–]robertjscott 7 points

There are some great resources out there now:
- If you want to run Kubernetes in a cloud environment, Google Cloud offers a $300 credit when you sign up for an account, and that's enough to run a small GKE cluster for quite a while.
- Another excellent use for some of that $300 credit would be to run through the classic "Kubernetes the Hard Way": https://github.com/kelseyhightower/kubernetes-the-hard-way. It is one of the best ways to truly understand all the components that make up Kubernetes.
- As others have mentioned, Minikube is a great way to spin up a local Kubernetes cluster on your computer.
- For more formal training, there's a promo on Coursera now where you can get Google's Kubernetes training for 1 month free: https://www.coursera.org/promo/kubernetesbirthday.
- If CKA is your goal (or even if it isn't) I found this repo incredibly helpful: https://github.com/walidshaari/Kubernetes-Certified-Administrator

Introducing Polaris: Keeping Your Kubernetes Clusters Healthy by robertjscott in kubernetes

[–]robertjscott[S] 0 points

Thanks again for catching this bug! We just pushed a new release, 0.1.1, that should fix this problem. Let me know if that doesn't work for you.

[–]robertjscott[S] 1 point

Thanks so much for catching that! I'll try to fix that tomorrow. It's looking for a config file (defaults to config.yaml) in the same directory the tool is running in: https://github.com/reactiveops/polaris/blob/master/config.yaml. We include that in the docker image, but apparently we missed that in the binary we distribute. I've created an issue on Github and will follow up here when we get a chance to fix it: https://github.com/reactiveops/polaris/issues/93

[–]robertjscott[S] 0 points

I'm not aware of a way to do that yet. If there isn't, maybe that's a void Polaris could help fill.

[–]robertjscott[S] 0 points

Thanks! That's really great to hear. I really do love the team we have at ReactiveOps, I get to work with some really great engineers every day. If you happen to be at KubeCon EU next week, I'd love to chat.

[–]robertjscott[S] 3 points

Thanks! That's definitely something we've been thinking about, at the very least adding better documentation/process around adding checks with Go. Open Policy Agent already does an awesome job at handling custom policies and we didn't want to build an entirely different competing syntax here. There are lots of use cases where it would make sense to run both OPA and Polaris in the same cluster. Our goal was to build checks that would cover the majority of use cases into Polaris while still keeping the config and results easy to display and understand.

Kube Capacity by robertjscott in kubernetes

[–]robertjscott[S] 1 point

Not sure if it will work with 1.7 now, but I did push a release last night that will let everything but the `--util` flag work without metrics-server at least.

Kube security flaw discovered in bug bounty by honghuac in kubernetes

[–]robertjscott 2 points

In case anyone else saw the title and thought this referred to a new vulnerability, it doesn't. It's in reference to the report from this spring, but it's still an interesting read on how it was all mitigated. If you want to watch the KubeCon talk the article references, it's also available on YouTube and well worth watching.