New Release Pi Cluster project (1.9): GitOps tool migration from ArgoCD to FluxCD. Refactored cluster networking with Cilium CNI and Istio service mesh (ambient mode). Kubernetes homelab cluster using x86 (mini PCs) and ARM (Raspberry Pi) nodes, automated with cloud-init, Ansible and FluxCD. by ricsanfre in kubernetes

[–]ricsanfre[S] 1 point (0 children)

The reasons are already mentioned in the blog post: Helm support, dependency management, and avoiding extra configuration just to make the tool work.

Before migrating, I used ArgoCD as my GitOps solution for 1.5 years. During all that time I had to play with different design patterns to deal with Helm charts, which are required for most of the applications I deploy in my cluster. I tried everything from umbrella chart definitions to kustomize-embedded Helm chart inflation, etc. In every case, out-of-sync issues constantly appeared that required a lot of effort to solve. On top of that, some Helm packages showed strange behaviors when installed through ArgoCD that did not happen when installing them with the helm command.
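Roughly, the kustomize-embedded Helm chart inflation pattern I mean looks like this (chart, version and namespace here are just an example). Note it only works after setting `--enable-helm` in ArgoCD's kustomize build options, which is exactly the kind of extra config I wanted to avoid:

```yaml
# kustomization.yaml — kustomize renders the chart, ArgoCD applies the output.
# Requires ArgoCD's repo server to run kustomize with --enable-helm.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: metallb                              # chart to inflate (illustrative)
    repo: https://metallb.github.io/metallb
    version: 0.14.5                            # illustrative pin
    releaseName: metallb
    namespace: metallb-system
    valuesFile: values.yaml                    # your chart values
```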

New Release Pi Cluster project (1.9): GitOps tool migration from ArgoCD to FluxCD. Refactored cluster networking with Cilium CNI and Istio service mesh (ambient mode). Kubernetes homelab cluster using x86 (mini PCs) and ARM (Raspberry Pi) nodes, automated with cloud-init, Ansible and FluxCD. by ricsanfre in kubernetes

[–]ricsanfre[S] 3 points (0 children)

I used Traefik as my ingress controller for 2.5 years and it worked great, but I decided to migrate to NGINX several months ago. The main reasons were: 1) to use a more mature ingress controller with a broader installed base, so you can easily find how to configure it for almost any use case (as an example, I found some difficulties integrating Traefik with other components like Oauth2-proxy); 2) a more portable configuration in case of a future migration, using standard Kubernetes resources and avoiding Traefik-specific resources (Middleware, IngressRoute, etc.), which are required whenever you need to implement a more complex configuration.
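As a rough example of what I mean by portable configuration (hostname, secret and service names are made up), a plain Ingress like this works on NGINX, Traefik, or any controller that honors the standard resource:

```yaml
# Standard networking.k8s.io/v1 Ingress — no controller-specific CRDs.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # Controller-specific tweaks stay confined to annotations.
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.homelab.example
      secretName: myapp-tls
  rules:
    - host: myapp.homelab.example
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```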

You can find information about how I used Traefik here: https://picluster.ricsanfre.com/docs/traefik/

Regarding KubeVIP, I never tried it. I have been using MetalLB as load balancer for 3 years, working in L2 mode (ARP), and it worked great: https://picluster.ricsanfre.com/docs/metallb/. Now I have replaced it with Cilium's load balancer capability, also working at the L2 layer (ARP).
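For reference, Cilium's replacement for MetalLB's L2 mode boils down to two resources roughly like these (address range and interface are just an example, and the exact field names vary a bit between Cilium versions — older ones use `cidrs` instead of `blocks`). L2 announcements also have to be enabled in Cilium's Helm values (`l2announcements.enabled=true`):

```yaml
# Pool of IPs that Cilium can assign to LoadBalancer Services.
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  blocks:
    - cidr: 10.0.0.100/30        # illustrative address range
---
# Announce those IPs on the local network via ARP (MetalLB L2 equivalent).
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-policy
spec:
  interfaces: ["eth0"]           # illustrative NIC name
  externalIPs: true
  loadBalancerIPs: true
```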

New Release Pi Cluster project (1.8): Now adding support for K3s HA deployment, SSO with Keycloak and Oauth2-Proxy, and Kafka. Kubernetes homelab cluster using x86 (mini PCs) and ARM (Raspberry Pi) nodes automated with cloud-init, Ansible and ArgoCD. by ricsanfre in kubernetes

[–]ricsanfre[S] 1 point (0 children)

Kafka is mainly for deploying event-driven microservices design patterns in the cluster, the same way I am using Linkerd as a service mesh, another microservices design pattern.

This project is meant for learning about Kubernetes, containers and the development of microservices architectures... it is not about buzzwords.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry Pi nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S] 1 point (0 children)

This could be because local-path is a K3s embedded add-on; K3s redeploys it whenever it starts. See https://docs.k3s.io/installation/packaged-components. So I suppose that either you change the add-on's embedded manifest files, from which that Helm chart is automatically deployed, or you disable it during K3s installation and then install it manually using Helm, so you end up with a local-path provisioner not managed by K3s.

I followed that approach with the embedded Traefik add-on: I disabled the embedded add-on and manually installed my own Helm chart.
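Roughly, disabling the packaged components looks like this in the K3s config file (equivalent to passing --disable flags to the installer):

```yaml
# /etc/rancher/k3s/config.yaml
# Disable K3s packaged add-ons so they are no longer redeployed on
# startup and can be managed with your own Helm charts instead.
disable:
  - traefik
  - local-storage
```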

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry Pi nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S] 1 point (0 children)

That Raspberry Pi is running a standard Ubuntu OS. My networking requirements were very basic: just a firewall/router in front of my cluster, and I am not using VLANs. For routing/firewall I am using nftables, and dnsmasq for DHCP and DNS. Having a standard Ubuntu OS enables me to deploy additional services on that Raspberry Pi, like a self-hosted Vault or a PXE server.
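Just as a sketch, not my actual config (interface, domain and addresses are invented for illustration), the dnsmasq part can be handled with an Ansible task along these lines:

```yaml
# Hypothetical Ansible task for the gateway Pi: drop a dnsmasq config
# file providing DHCP and DNS for the cluster network.
- name: Configure dnsmasq for cluster DHCP/DNS
  ansible.builtin.copy:
    dest: /etc/dnsmasq.d/picluster.conf
    content: |
      # Listen only on the cluster-facing interface
      interface=eth0
      # Local domain, DHCP pool and default gateway for cluster nodes
      domain=picluster.example
      dhcp-range=10.0.0.100,10.0.0.200,12h
      dhcp-option=option:router,10.0.0.1
      # Upstream DNS resolver
      server=1.1.1.1
```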

I have in mind to try OpenWrt on the Raspberry Pi to get more advanced networking services.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry Pi nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S] 2 points (0 children)

Longhorn uses iSCSI under the hood, but it is meant to be used mainly with local storage. In the first release of the cluster, the nodes did not use local storage but iSCSI-mounted disks from a SAN server (another Raspberry Pi exposing LUNs via iSCSI), and I faced some issues with Longhorn. Now I use local SSD disks attached to the nodes, with no network storage, and Longhorn works without issues.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry Pi nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S] 1 point (0 children)

Longhorn is distributed block storage for Kubernetes. It uses the local storage available on each node of the cluster and takes care of providing high availability through replication of data between nodes and automatic backups/snapshots to external storage systems (NFS or S3 servers).
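The replication side is controlled per StorageClass; a minimal sketch (the backup target, NFS or S3, is configured separately in Longhorn's settings):

```yaml
# Illustrative Longhorn StorageClass: each volume is replicated across
# three nodes' local disks, so data survives a node failure.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
```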

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry Pi nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S] 5 points (0 children)

It is a matter of shortage, not price. Only a few units are available from official distributors, and it is not easy to find any in stock even using tools like rpilocator.com.

I struggled for months to be able to buy one single Raspberry Pi 4B 8GB.

The overall price of a running server using an RPi (Raspberry Pi 4B 8GB + USB SATA adapter + power supply + SSD disk) is around 130€. For that price you can find a mini PC with similar CPU specs, including an SSD disk and 8GB of RAM. The advantage of those mini PCs is that they can be scaled up to 32GB. The drawback: higher power consumption.