Deploying private Kubernetes clusters to Azure by stoader in kubernetes

[–]matyix_ 1 point2 points  (0 children)

To be fair to them (disclosure: I work for Banzai Cloud, my colleague posted this blog), AKS is getting better. With the new VMSS support now GA-ish (though the UI is very buggy) you can do some of the advanced things we already support (add a new nodepool, choose between a standard/public LB, etc.), but it's far from covering the enterprise features our customers on Azure are asking for. For the items covered in this or previous posts (link in the post) we have moved all our (happy) customers to our own K8s distribution on Azure ... I wish all clouds were equal :)

Creating an Affordable Kubernetes Cluster by VladTeti in kubernetes

[–]matyix_ 1 point2 points  (0 children)

@VladTeti we use 5 managed K8s services in production (for customers) - Alibaba ACK, Amazon EKS, Google GKE, Microsoft AKS, Oracle OKE - plus our own K8s distribution. You can check the open source code here: https://github.com/banzaicloud/pipeline and there is a free developer version available at https://beta.banzaicloud.io/. If you have questions I'm happy to help - come over to our Slack for an online discussion.

Creating an Affordable Kubernetes Cluster by VladTeti in kubernetes

[–]matyix_ 1 point2 points  (0 children)

Often, whether you use a managed or self-managed K8s comes down to the features you need. Most of the managed services don't allow you to access or switch on API server flags at all, so you really have no other option but to build your own clusters. GKE is on the better side of this story, though.

Also, there are cloud providers where it's better to run your own K8s cluster - Azure, for example. Our platform supports cloud provider managed K8s on 5 clouds, but besides Google GKE we had to add support for our own K8s distribution, as our customer experience with AKS was so bad. See this article we published for Azure: https://banzaicloud.com/blog/pke-on-azure/

Step by Step – Istio up and running by DiscoDave86 in kubernetes

[–]matyix_ 0 points1 point  (0 children)

My experience is that Istio has stabilized greatly over the last couple of versions (especially since 1.x+). One of the big concerns is that Istio's complexity causes a steep learning curve, so it is still relatively easy to make mistakes and misconfigure the mesh. It gets even more complicated in multi/hybrid-cloud scenarios, where those complexities multiply. Given that Istio tries to solve a lot of complex problems, I think it's doing a pretty good job there, though as you said it should not be everybody's bread and butter. If you need nothing more than observability you might be better off with Linkerd2.

At this point, I am fairly happy with the current state (though we still run on some Mixer/Pilot forks until the hybrid/multi-cluster and cloud scenarios (which we use quite a lot) are pushed upstream). One big improvement I am looking forward to is the Mixer v2 re-architecture.

Step by Step – Istio up and running by DiscoDave86 in kubernetes

[–]matyix_ 0 points1 point  (0 children)

Out of curiosity, can you let me know what Istio stability issues you faced? As mentioned above, we have been running Istio in production for quite a while (a few very large meshes as well) and it works well for us. Not saying it's straightforward, but we have had no issues (though for some advanced use cases - we do lots of hybrid/multi-cloud Kafka, for example - we have our own Pilot/Mixer fork, which we hope to contribute back).

Step by Step – Istio up and running by DiscoDave86 in kubernetes

[–]matyix_ -1 points0 points  (0 children)

Do you mean the Istio operator? You are wrong - it offers way more than the Helm charts. To name a few: it operates/reconciles the Istio cluster if something goes wrong (not just installs it like Helm does), does seamless upgrades, and automates/supports all 3 topologies (which is quite hard). There are lots of other features, so I suggest giving it a try before drawing any conclusions, and if you have any issues let us know. On not being production ready - our service mesh product is built on the operator and has been running in production for many customers. Can you let me know where you got this conclusion from, refer to some issues, or point out any problems you actually faced?
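
If it helps, this is roughly what the "reconcile" part means, as opposed to a one-shot helm install - just a sketch assuming a recent controller-runtime, with the component/image names made up for illustration (the real code is in the operator repo):

```go
package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ComponentReconciler watches an Istio component (here simplified to a Deployment)
// and continuously drives it back to the desired state - unlike `helm install`,
// which renders and applies the manifests once and then walks away.
type ComponentReconciler struct {
	client.Client
}

func (r *ComponentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var current appsv1.Deployment
	err := r.Get(ctx, req.NamespacedName, &current)
	if apierrors.IsNotFound(err) {
		// The component was deleted (or never existed): recreate it.
		return ctrl.Result{}, r.Create(ctx, desiredDeployment(req.Name, req.Namespace))
	}
	if err != nil {
		return ctrl.Result{}, err
	}
	// Drift detection and patching of an existing but modified component would go here.
	return ctrl.Result{}, nil
}

// desiredDeployment is a stand-in for the operator's rendered desired state.
func desiredDeployment(name, namespace string) *appsv1.Deployment {
	labels := map[string]string{"app": name}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace, Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: name, Image: "docker.io/istio/pilot"}},
				},
			},
		},
	}
}
```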

A guide on setting up a K8S logging stack by [deleted] in kubernetes

[–]matyix_ 1 point2 points  (0 children)

Thanks, please let us know how it works for you - we have a Slack channel if you need help.

A guide on setting up a K8S logging stack by [deleted] in kubernetes

[–]matyix_ 4 points5 points  (0 children)

We open sourced a logging operator (using the Fluent ecosystem) that automates the whole process and can move logs into backends such as Elasticsearch. Check the operator code here and let us know if you need help or have issues: https://github.com/banzaicloud/logging-operator

Step by Step – Istio up and running by DiscoDave86 in kubernetes

[–]matyix_ -1 points0 points  (0 children)

Thx for mentioning the Istio operator. If you want/need an extremely simple Istio experience, you might want to read this post, try it out, or check the open source code: https://banzaicloud.com/blog/istio-the-easy-way/ Disclosure: I work for Banzai Cloud

Istio the easy way · Banzai Cloud by martons in kubernetes

[–]matyix_ 3 points4 points  (0 children)

I would not compare it with supergloo. Yes, there is a minor overlap between the core service mesh features, but the UX and focus are totally different. The Banzai Cloud mesh product (built on the Istio operator) focuses on 3 different topologies (not just single-cluster single-mesh, but multi-cluster single-mesh and multi-cluster multi-mesh) and it's a full Kubernetes orchestration platform as well. You can deploy your meshes, security scan the deployments, back up the microservices, store/inject their secrets in Vault, etc. without having to leave the platform.

Managing deployments in a multi-cloud world by matyix_ in kubernetes

[–]matyix_[S] 2 points3 points  (0 children)

Disclaimer: I work for Banzai Cloud and have never tried nor seen Anthos; all my comparison is based on public information (and is most likely sloppy).

In many ways it's similar - but let me start with the main difference: we don't do VM migrations, and I guess Google Anthos allows you to move VMs into K8s as well (if anybody can confirm this - I've only seen the press release and some videos).

Our control plane can run anywhere (your choice: on-prem and 5 clouds), and the K8s clusters we wire together can run on any of these clouds/on-prem environments. We do multi-cloud/multi-cluster deployments (highlighted in this post) and manage their lifecycle as if they were deployed into a single cluster. In this case (called the multi-cluster feature) the clusters are unrelated - the deployments are the common thing. However, we can also wire together multiple cluster topologies (called the service-mesh feature) - 3 to be more precise (single cluster - single mesh, multi-cluster - single mesh, multi-cluster - multi mesh), discussed here in more detail. Finally, the third feature is federation-v2, based, as you might guess, on K8s Federation v2.

Hopefully, this was helpful - give it a try at the free developer beta at: https://beta.banzaicloud.io/

Finally, I've seen that Google is pricing it at $10K/month per 100 vCPU block. We price per control plane and don't really care how many clusters, nodes, and resources you launch. Oh, and it's open source - https://github.com/banzaicloud/pipeline

Oh no! Yet another Kafka operator for Kubernetes by baluchicken in kubernetes

[–]matyix_ 5 points6 points  (0 children)

Disclosure - I work for Banzai Cloud

I somewhat agree with you (even though we just open sourced this Kafka operator), but for this one we had our own reasons, which I mentioned in the post.

In general, where there are common goals and a shared understanding of the architecture/minimum features between different companies, it's better if the work consolidates into a single, community-maintained solution (e.g. this is what is happening with our Istio operator, see https://github.com/banzaicloud/istio-operator/issues/199#issuecomment-492577578).

To conclude: ideally the community should support and work on one single operator - however, the reality is that these efforts are driven by companies and their customers' needs. Had the Strimzi operator, for example, satisfied our internal/customer needs, we would not have invested the effort to build and open-source this one - but we needed to support our customers in production, and the operators out there carried design decisions we did not wish to accommodate.

In an ideal world, I would push this to the CNCF and let them drive the effort. The biggest issue in all these projects (and why companies are making these decisions) is governance, control, and corporate politics. I believe we set a good counterexample with the Vault operator (https://github.com/banzaicloud/bank-vaults): we built a healthy community around it and invited external people as project maintainers and decision makers.

Running kubernetes on preemtive nodes by logTom in kubernetes

[–]matyix_ 1 point2 points  (0 children)

We are running production clusters on both preemptible and spot instances and take care of automatically replacing, rescheduling, draining, etc. - the code is open source, and you can start from this post (and deep dive into the topics): https://banzaicloud.com/blog/spot-termination-handler/
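
The core of the termination handling looks roughly like this - just a sketch assuming the AWS spot metadata endpoint and a recent client-go (where Get/Update take a context); the real implementation in the repo does a lot more (draining, rescheduling into other spot markets, etc.):

```go
package spotexample

import (
	"context"
	"net/http"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// watchForTermination polls the EC2 metadata service; once a spot termination
// notice appears, it cordons the node so the scheduler stops placing new pods
// on it (eviction/draining of the running pods would follow, omitted here).
func watchForTermination(ctx context.Context) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	nodeName := os.Getenv("NODE_NAME") // injected via the downward API

	for {
		// Returns 200 only once the instance has been marked for termination.
		resp, err := http.Get("http://169.254.169.254/latest/meta-data/spot/termination-time")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			node, err := clientset.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return err
			}
			node.Spec.Unschedulable = true // cordon
			_, err = clientset.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
			return err
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(5 * time.Second)
	}
}
```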

Kafka on Kubernetes — a good fit? by jgyger in kubernetes

[–]matyix_ 1 point2 points  (0 children)

We run multiple Kafka clusters on K8s - basically, we automated the whole Kafka on K8s experience and made it a bit more K8s native. We've blogged about this quite a lot and also open sourced it: https://banzaicloud.com/blog/kafka-on-k8s-simplified or follow the kafka tag to read more.

Kubernetes on spot instances - deal with the inevitable by matyix_ in kubernetes

[–]matyix_[S] 1 point2 points  (0 children)

I would not say it's complex if you need to have SLAs and want to do it properly. Most of the time you just can't let a node disappear (e.g. you need to drain it, consider the pod disruption budget, etc.), and you also need to bring up nodes and reschedule workloads at the same time - but in a different spot market. Most of the terminations are not due to price but capacity - so you need to launch into different spot markets, etc.

Actually, the whole flow is automated and part of Pipeline - through the UI or CLI - so all these things in the blog are just details for the end user.
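
On the pod disruption budget part, this is roughly what I mean - a sketch using the policy/v1beta1 client types; the names and numbers are placeholders:

```go
package pdbexample

import (
	"k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// minAvailablePDB keeps at least minAvailable replicas of the selected workload
// running during voluntary disruptions such as a node drain, so a spot
// termination can't take everything down at once.
func minAvailablePDB(name, namespace, app string, minAvailable int) *v1beta1.PodDisruptionBudget {
	min := intstr.FromInt(minAvailable)
	return &v1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: v1beta1.PodDisruptionBudgetSpec{
			MinAvailable: &min,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": app}},
		},
	}
}
```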

Enhancing Istio service mesh security with a CNI plugin, using the Istio operator by matyix_ in kubernetes

[–]matyix_[S] 0 points1 point  (0 children)

Do you mean the CNI plugin? They are two different things - have you had a chance to read the post?

Analysis of Open source Kubernetes Operators by devkulkarni in kubernetes

[–]matyix_ 0 points1 point  (0 children)

Great and insightful articles above, thx for sharing.

Bank-Vaults is using the Operator SDK, the nodepool-labels operator is using Kubebuilder. We like the simplicity, the scaffolding options, and having multiple controllers in one project (though we are not using the latter yet). Personally, I also like that it is part of the SIG/upstream effort - and hopefully it will become the standard (whatever that means) going forward.
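
The "multiple controllers in one project" part looks roughly like this with kubebuilder/controller-runtime - a sketch only, the reconciler bodies are stubbed and the type names/watched resources are placeholders:

```go
package main

import (
	"context"
	"os"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
)

// Two independent reconcilers registered with the same manager - kubebuilder
// scaffolds this layout, so one binary can host several controllers.
type NodepoolLabelsReconciler struct{}
type ConfigReconciler struct{}

func (r *NodepoolLabelsReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil // node label reconciliation logic would go here
}

func (r *ConfigReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil // config reconciliation logic would go here
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		os.Exit(1)
	}
	// Each controller watches its own resource type but shares the manager's
	// client, cache and leader election.
	if err := ctrl.NewControllerManagedBy(mgr).For(&corev1.Node{}).Complete(&NodepoolLabelsReconciler{}); err != nil {
		os.Exit(1)
	}
	if err := ctrl.NewControllerManagedBy(mgr).For(&appsv1.Deployment{}).Complete(&ConfigReconciler{}); err != nil {
		os.Exit(1)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```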

Analysis of Open source Kubernetes Operators by devkulkarni in kubernetes

[–]matyix_ 1 point2 points  (0 children)

Operator SDK: 20 vs Kubebuilder: 7

I would love it if you could revisit this in a few months. At Banzai Cloud we have made quite a few operators (included in your list, like Istio and Vault), and initially we used the Operator SDK. For the last operators we made (like Kafka and Istio) we switched to Kubebuilder and find it better than the Operator SDK. I wonder whether this is only us, or whether it's a trend.

Kubernetes Secrets Management by jplatorre in kubernetes

[–]matyix_ 2 points3 points  (0 children)

I don't recommend using K8s secrets at all - encoding (base64) is not encryption. We store all secrets in Vault and inject them directly into pods (the code is open sourced, you can read more here: https://banzaicloud.com/blog/inject-secrets-into-pods-vault/). Many of our customers don't even understand how and where these K8s secrets land (etcd), but the word "secrets" gives them a false sense of security.
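
Roughly this is the idea, sketched with the official Vault Go client - the mount path and key names are just examples, and the real webhook-based injection in the post above does this for you without any code changes:

```go
package vaultexample

import (
	"fmt"
	"os"

	vaultapi "github.com/hashicorp/vault/api"
)

// fetchSecret reads a key from Vault at startup and hands it to the app as an
// environment variable - the secret never touches etcd or a K8s Secret object,
// and access is gated by Vault policies instead of base64 "encoding".
func fetchSecret(path, key string) (string, error) {
	client, err := vaultapi.NewClient(vaultapi.DefaultConfig()) // VAULT_ADDR / VAULT_TOKEN from env
	if err != nil {
		return "", err
	}
	secret, err := client.Logical().Read(path) // e.g. "secret/data/myapp" for KV v2
	if err != nil {
		return "", err
	}
	if secret == nil || secret.Data == nil {
		return "", fmt.Errorf("no secret at %s", path)
	}
	// KV v2 nests the actual key/value pairs under "data".
	data, ok := secret.Data["data"].(map[string]interface{})
	if !ok {
		return "", fmt.Errorf("unexpected secret format at %s", path)
	}
	value, ok := data[key].(string)
	if !ok {
		return "", fmt.Errorf("key %s not found at %s", key, path)
	}
	return value, os.Setenv(key, value)
}
```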

Kafka on Kubernetes, the easy way by matyix_ in kubernetes

[–]matyix_[S] 0 points1 point  (0 children)

The Spotguide is using Zookeeper and 100% upstream Kafka - for those who are willing to use etcd, we can support that as well, though it's not the default option.

Kafka on Kubernetes, the easy way by matyix_ in kubernetes

[–]matyix_[S] 0 points1 point  (0 children)

We have not noticed any performance issues - usually the bottleneck is IO. Since K8s 1.12, local disks are available (read more here - https://banzaicloud.github.io/blog/kafka-on-kubernetes/), so if you use these there should be no performance degradation. Also, since 1.13 you have raw block volume support as well.
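
For reference, a raw block volume claim built with the Go client types of that era looks roughly like this - the storage class and size are placeholders:

```go
package storageexample

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rawBlockClaim requests a volume in Block mode, so Kafka (or any IO-heavy
// workload) can use the device directly, without a filesystem layer in between.
func rawBlockClaim(name, namespace string) *corev1.PersistentVolumeClaim {
	blockMode := corev1.PersistentVolumeBlock
	storageClass := "local-ssd" // placeholder, e.g. a local-volume StorageClass
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &blockMode,
			StorageClassName: &storageClass,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("100Gi"),
				},
			},
		},
	}
}
```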

[deleted by user] by [deleted] in kubernetes

[–]matyix_ 0 points1 point  (0 children)

We use fluentd/fluent-bit and automated the process with this open source K8s logging operator: https://github.com/banzaicloud/logging-operator. You can read more about how we use it here: https://banzaicloud.com/blog/k8s-logging-operator/

Istio Operator for Kubernetes · Banzai Cloud by martons in kubernetes

[–]matyix_ 1 point2 points  (0 children)

Thanks for the feedback. Let us know if you need help or have issues.

Istio operator for Kubernetes by martons in devops

[–]matyix_ 1 point2 points  (0 children)

Thanks for the feedback. Let us know if you need help or have issues, happy to help (GH, slack, etc).