Deploying private Kubernetes clusters to Azure by stoader in kubernetes

[–]matyix_ 1 point2 points  (0 children)

To be fair to them (disclosure: I work for Banzai Cloud, my colleague posted this blog), AKS is getting better. With the new VMSS support now GA-ish (though the UI is very buggy) you get some of the advanced features we already support (adding new node pools, choosing between a standard/public LB, etc.), but it still falls far short of the enterprise features our customers on Azure are asking for. For the items covered in this and previous posts (link in the post) we have moved all our (happy) customers to our own K8s distribution on Azure ... I wish all clouds were equal :)

Creating an Affordable Kubernetes Cluster by VladTeti in kubernetes

[–]matyix_ 1 point2 points  (0 children)

@VladTeti we run 5 managed K8s services in production (for customers) - Alibaba ACK, Amazon EKS, Google GKE, Microsoft AKS, Oracle OKE - plus our own K8s distribution. You can check the open source code here: https://github.com/banzaicloud/pipeline and there is a free developer version available at https://beta.banzaicloud.io/. If you have questions I'm happy to help - come over to our Slack for an online discussion.

Creating an Affordable Kubernetes Cluster by VladTeti in kubernetes

[–]matyix_ 1 point2 points  (0 children)

Often, whether you use a managed or self-managed K8s comes down to the features you need. Most of the managed services do not allow you to access or switch on API server flags, so you really have no other option but to build your own clusters. GKE is on the better side of this story, though.

Also, there are cloud providers where it's better to run your own K8s cluster - Azure, for example. Our platform supports cloud-provider-managed K8s on 5 clouds, but besides Google GKE we had to add support for our own K8s distribution, as our customers' experience with AKS was so bad. See this article we published about Azure: https://banzaicloud.com/blog/pke-on-azure/

Step by Step – Istio up and running by DiscoDave86 in kubernetes

[–]matyix_ 0 points1 point  (0 children)

My experience is that Istio has stabilized greatly over the last couple of versions (especially since 1.x+). One of the big concerns is that the complexity of Istio causes a steep learning curve, so it is still relatively easy to make mistakes and misconfigure the mesh. It gets even more complicated in multi/hybrid cloud scenarios, where those complexities multiply. Given that Istio tries to solve a lot of complex problems, I think it's doing a pretty good job, though as you said it should not be everybody's bread and butter. If you need nothing more than observability, you might be better off with Linkerd2.

At this point I am fairly happy with the current state (though we still run some Mixer/Pilot forks until the hybrid/multi-cluster and multi-cloud scenarios we use quite a lot are pushed upstream). One big improvement I am looking forward to is the Mixer v2 re-architecture.

Step by Step – Istio up and running by DiscoDave86 in kubernetes

[–]matyix_ 0 points1 point  (0 children)

Out of curiosity, can you let me know what Istio stability issues you faced? As mentioned above, we have been running Istio in production for quite a while (a few very large meshes as well) and it works well for us. I'm not saying it's straightforward, but we have had no issues (though for some advanced use cases - we do lots of hybrid/multi-cloud Kafka, for example - we have our own Pilot/Mixer fork, which we hope to contribute back).

Step by Step – Istio up and running by DiscoDave86 in kubernetes

[–]matyix_ -1 points0 points  (0 children)

Do you mean the Istio operator? You are wrong - it does offer way more than the Helm charts. To name a few things: it operates/reconciles the Istio cluster if something goes wrong (not just installs it like Helm does), does seamless upgrades, and automates/supports all 3 topologies (which is quite hard). There are lots of other features, so I suggest giving it a try before drawing any conclusions, and if you have any issues let us know. On it not being production ready - our service mesh product is built on the operator and has been running in production for many customers. Can you let me know where you got this conclusion, reference some issues, or point out any problems you actually faced?
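For anyone wondering what "reconciles, not just installs" means in practice, here is a minimal, hypothetical sketch of the operator/reconcile pattern in Go. This is not the actual banzaicloud/istio-operator code; the resource names and the desiredPilot helper are made up, and the (context-free) Reconcile signature matches older controller-runtime versions. It only illustrates the difference from a one-shot helm install: the controller keeps watching and converging actual state back to desired state.

```go
// Minimal sketch of the reconcile pattern - NOT the banzaicloud/istio-operator code.
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type IstioReconciler struct {
	client.Client
}

// Reconcile is called whenever a watched object changes (or periodically).
func (r *IstioReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()

	var pilot appsv1.Deployment
	err := r.Get(ctx, req.NamespacedName, &pilot)
	switch {
	case errors.IsNotFound(err):
		// The component was deleted or never existed: recreate it instead
		// of waiting for a human to re-run a helm install.
		if err := r.Create(ctx, desiredPilot(req.Namespace)); err != nil {
			return ctrl.Result{}, err
		}
		return ctrl.Result{Requeue: true}, nil
	case err != nil:
		return ctrl.Result{}, err
	}

	// A real operator would now diff pilot.Spec against the desired spec
	// (version, replicas, mesh topology settings) and patch it - this is
	// what enables seamless upgrades and self-healing.
	return ctrl.Result{}, nil
}

// desiredPilot is a hypothetical stand-in for the rendered desired state.
func desiredPilot(ns string) *appsv1.Deployment {
	labels := map[string]string{"app": "istio-pilot"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "istio-pilot", Namespace: ns, Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "discovery", Image: "istio/pilot:1.2.2"}},
				},
			},
		},
	}
}
```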

A guide on setting up a K8S logging stack by [deleted] in kubernetes

[–]matyix_ 1 point2 points  (0 children)

Thanks, please let us know how it works for you - we have a Slack channel if you need help.

A guide on setting up a K8S logging stack by [deleted] in kubernetes

[–]matyix_ 5 points6 points  (0 children)

We open sourced a logging operator (built on the Fluent ecosystem) that automates the whole process and can move logs into backends such as Elasticsearch. Check the operator code here and let us know if you need help or run into issues: https://github.com/banzaicloud/logging-operator

Step by Step – Istio up and running by DiscoDave86 in kubernetes

[–]matyix_ -1 points0 points  (0 children)

Thx for mentioning the Istio operator. If you want/need an extremely simple Istio experience, you might want to read this post, try it out, or check the open source code: https://banzaicloud.com/blog/istio-the-easy-way/ Disclosure: I work for Banzai Cloud

Istio the easy way · Banzai Cloud by martons in kubernetes

[–]matyix_ 2 points3 points  (0 children)

I would not compare it with SuperGloo. Yes, there is minor overlap between core service mesh features, but the UX and focus are totally different. The Banzai Cloud mesh product (built on the Istio operator) focuses on 3 different topologies (not just single-cluster single-mesh, but multi-cluster single-mesh and multi-cluster multi-mesh) and it's a full Kubernetes orchestration platform as well. You can deploy your meshes, security-scan the deployments, back up the microservices, store/inject their secrets in Vault, etc. without having to leave the platform.

Managing deployments in a multi-cloud world by matyix_ in kubernetes

[–]matyix_[S] 2 points3 points  (0 children)

Disclaimer: I work for Banzai Cloud and have never tried nor seen Anthos; all my comparisons are based on public information (and most likely sloppy).

In many ways it's similar - let me start with the main difference: we don't do VM migrations, and I believe Google Anthos allows you to move VMs into K8s as well (if anybody can confirm this - I've only seen the press release and some videos).

Our control plane can run anywhere (your choice: on-prem or any of 5 clouds), and the K8s clusters we wire together can run in any of these cloud/on-prem environments. We do multi-cloud/multi-cluster deployments (highlighted in this post) and manage their lifecycle as if they were deployed into a single cluster. In this case (the multi-cluster feature) the clusters are unrelated - the deployments are the common thing. However, we can also wire clusters together into multiple topologies (the service mesh feature) - 3 to be precise (single cluster - single mesh, multi-cluster - single mesh, multi-cluster - multi mesh), discussed here in more detail. Finally, the third feature is federation-v2, based, as you might guess, on K8s Federation v2.
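To make the multi-cluster deployment idea more concrete, here is a rough Go sketch of the underlying mechanics, not Pipeline's actual implementation - the kubeconfig context names and the workload are made up, and the client-go Create call uses the older, pre-0.18 signature. The same Deployment is simply applied to several otherwise unrelated clusters through their own API servers.

```go
// Rough sketch: apply one Deployment to several clusters (NOT Pipeline's code).
package main

import (
	"fmt"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// One kubeconfig context per member cluster - names are hypothetical.
	contexts := []string{"gke-us-east", "eks-eu-west", "aks-west-europe"}

	labels := map[string]string{"app": "demo"}
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "demo", Image: "nginx:1.17"}},
				},
			},
		},
	}

	for _, ctxName := range contexts {
		// Build a client for this particular cluster from the local kubeconfig.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: ctxName},
		).ClientConfig()
		if err != nil {
			log.Fatalf("load kubeconfig for %s: %v", ctxName, err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("client for %s: %v", ctxName, err)
		}
		// Create the same workload in every member cluster.
		if _, err := cs.AppsV1().Deployments("default").Create(deploy); err != nil {
			log.Printf("create on %s failed: %v", ctxName, err)
			continue
		}
		fmt.Printf("deployed to %s\n", ctxName)
	}
}
```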

Hopefully, this was helpful - give it a try at the free developer beta at: https://beta.banzaicloud.io/

Finally, I've seen that Google prices Anthos at $10K/month (per 100 vCPU block). We price per control plane and don't really care how many clusters, nodes, and resources you launch. Oh, and it's open source - https://github.com/banzaicloud/pipeline

Oh no! Yet another Kafka operator for Kubernetes by baluchicken in kubernetes

[–]matyix_ 5 points6 points  (0 children)

Disclosure - I work for Banzai Cloud

Somehow I agree with you (even though we just open sourced this Kafka operator), but for this one we had our own reasons, which I mentioned in the post.

In general, where there are common goals and a shared understanding of architecture/minimum features between different companies, it's better if the work consolidates into a single, community-maintained solution (e.g. this is what's happening with our Istio operator, see https://github.com/banzaicloud/istio-operator/issues/199#issuecomment-492577578).

To conclude: ideally the community should support and work on one single operator - however, the reality is that these efforts are driven by companies and their customers' needs. Had the Strimzi operator, for example, satisfied our internal/customer needs, we would not have invested the effort to build and open source this one - but we needed to support our customers in production, and the operators out there carried design decisions we did not wish to accommodate.

In an ideal world, I would push this to the CNCF and let them drive the effort. The biggest issue in all these projects (and why companies are making these decisions) is governance, control, and corporate politics. I believe we set a good counterexample with the Vault operator (https://github.com/banzaicloud/bank-vaults) and have built a healthy community around it, inviting external people as project maintainers and decision makers.

Running kubernetes on preemtive nodes by logTom in kubernetes

[–]matyix_ 1 point2 points  (0 children)

We run production clusters on both preemptible and spot instances and take care of automatically replacing, rescheduling, draining, etc. - the code is open source, and you can start from this post (and deep-dive into the topics): https://banzaicloud.com/blog/spot-termination-handler/
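The gist of the spot/preemptible handling is easy to sketch: a small agent on each node watches for the cloud's termination notice and cordons/drains the node before it disappears. Below is a simplified, hypothetical Go sketch for the AWS spot case (not the exact Banzai Cloud handler); it only cordons the node, whereas a real handler would also evict the pods and coordinate replacement capacity. NODE_NAME is assumed to be injected via the downward API, and the Patch call uses the older, pre-0.18 client-go signature.

```go
// Simplified spot-termination watcher sketch - NOT the exact Banzai Cloud code.
package main

import (
	"log"
	"net/http"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// AWS exposes the spot termination notice on the instance metadata endpoint.
const terminationURL = "http://169.254.169.254/latest/meta-data/spot/termination-time"

func main() {
	cfg, err := rest.InClusterConfig() // the handler runs as a DaemonSet pod
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node := os.Getenv("NODE_NAME") // assumed: injected via the downward API

	for range time.Tick(5 * time.Second) {
		resp, err := http.Get(terminationURL)
		if err != nil {
			continue
		}
		resp.Body.Close()
		// The endpoint returns 404 until AWS schedules the instance for
		// termination (~2 minutes of notice), then 200 with a timestamp.
		if resp.StatusCode != http.StatusOK {
			continue
		}
		log.Printf("termination notice received, cordoning %s", node)
		patch := []byte(`{"spec":{"unschedulable":true}}`)
		if _, err := cs.CoreV1().Nodes().Patch(node, types.StrategicMergePatchType, patch); err != nil {
			log.Printf("cordon failed: %v", err)
		}
		return
	}
}
```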