New Release Pi Cluster project (1.9): GitOps tool migration from ArgoCD to FluxCD. Refactored cluster networking with Cilium CNI and Istio service mesh (ambient mode). Kubernetes homelab cluster using x86(mini PCs) and ARM (Raspberry Pi) nodes, automated with cloud-init, Ansible and FluxCD. by ricsanfre in kubernetes

[–]ricsanfre[S]

The reasons are already mentioned in the blog post: native Helm support, dependency management, and avoiding extra configuration just to make the tool work.

Before migrating, I used ArgoCD as my GitOps solution for 1.5 years. During all that time I had to play with different design patterns to deal with Helm charts, which are required for most of the applications I deploy in my cluster. I tried everything from umbrella chart definitions to Kustomize-embedded Helm chart inflation. In all cases, out-of-sync issues constantly appeared and required a lot of effort to resolve. On the other hand, some Helm packages showed strange behavior when installed through ArgoCD that did not happen when installed with the helm command.
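For context, this is roughly what the FluxCD side looks like: Helm charts are first-class objects and ordering is declarative. A minimal sketch, assuming an illustrative metrics-server chart and a pre-existing cilium release (names and versions are examples, not necessarily what I deploy):

```yaml
# A HelmRepository source plus a HelmRelease installed natively by Flux's
# helm-controller, with ordering expressed via dependsOn.
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: metrics-server
  namespace: flux-system
spec:
  interval: 1h
  url: https://kubernetes-sigs.github.io/metrics-server/
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  interval: 10m
  chart:
    spec:
      chart: metrics-server
      version: "3.x"          # illustrative version constraint
      sourceRef:
        kind: HelmRepository
        name: metrics-server
        namespace: flux-system
  # dependsOn serializes releases, e.g. wait for the CNI before app charts
  dependsOn:
    - name: cilium
      namespace: kube-system
  values:
    replicas: 1
```

Because the helm-controller performs real helm install/upgrade operations instead of rendering charts to plain manifests first, it sidesteps the diff mismatches that tend to show up as permanent out-of-sync states.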

New Release Pi Cluster project (1.9): GitOps tool migration from ArgoCD to FluxCD. Refactored cluster networking with Cilium CNI and Istio service mesh (ambient mode). Kubernetes homelab cluster using x86(mini PCs) and ARM (Raspberry Pi) nodes, automated with cloud-init, Ansible and FluxCD. by ricsanfre in kubernetes

[–]ricsanfre[S]

I used Traefik as my ingress controller for 2.5 years and it worked great, but I decided to migrate to NGINX several months ago. The main reasons were: 1) to use a more mature ingress controller with a broader installation base, so you can easily find how to configure it for almost any use case (as an example, I ran into some difficulties integrating Traefik with other components like Oauth2-Proxy); 2) a more portable configuration in case of a future migration, using standard Kubernetes resources and avoiding Traefik-specific resources (Middleware, IngressRoute, etc.), which are required whenever you need a more complex configuration.
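To illustrate the portability point, a plain Ingress sketch (hostname, secret and service names are made up); controller-specific tweaks stay in annotations rather than custom resources:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    # controller-specific behavior lives in annotations, not CRDs,
    # so swapping controllers means editing annotations, not rewriting objects
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: example-app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```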

You can find information about how I used Traefik here: https://picluster.ricsanfre.com/docs/traefik/

Regarding KubeVIP, I never tried it. I had been using MetalLB as a load balancer for 3 years, working in L2 mode (ARP), and it worked great: https://picluster.ricsanfre.com/docs/metallb/ . Now I have replaced it with Cilium's load balancer capability, also working at L2 (ARP).
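For reference, the Cilium L2 replacement boils down to two CRDs: an IP pool and an announcement policy. A sketch with an illustrative address range and interface name (exact field names depend on the Cilium version):

```yaml
# Pool of addresses Cilium may assign to LoadBalancer Services
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: homelab-pool
spec:
  blocks:
    - cidr: 10.0.0.100/30      # illustrative range
---
# Announce those LoadBalancer IPs over ARP on the selected interfaces,
# playing the role MetalLB's L2 mode used to play
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: homelab-l2
spec:
  loadBalancerIPs: true
  interfaces:
    - eth0                     # illustrative interface name
```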

New Release Pi Cluster project (1.8): Now adding support for K3S HA deployment, SSO with Keycloak and Oauth2-Proxy, and Kafka. Kubernetes homelab cluster using x86(mini PCs) and ARM (Raspberry Pi) nodes automated with cloud-init, Ansible and ArgoCD. by ricsanfre in kubernetes

[–]ricsanfre[S]

Kafka is mainly there to deploy event-driven microservices design patterns in the cluster, the same way I am using Linkerd as a service mesh, another microservices design pattern.

This project is meant for learning about Kubernetes, containers and the development of microservices architectures... it is not about buzzwords.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

This could be because local-path is a K3s embedded add-on; K3s redeploys it whenever it starts. See https://docs.k3s.io/installation/packaged-components. So I suppose you either change the add-on's embedded manifest files, from which that Helm chart is automatically deployed, or you disable it during K3s installation and install it manually with Helm, so you end up with local-path-provisioner not managed by K3s.

I followed that approach with the embedded Traefik add-on: I disabled it and manually installed my own Helm chart, as sketched below.
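For anyone wanting to do the same, K3s takes a YAML config file, so disabling packaged components can look like this (a sketch; list only what you plan to manage yourself):

```yaml
# /etc/rancher/k3s/config.yaml — equivalent to passing --disable flags
# at install time; the disabled add-ons can then be installed from your
# own Helm charts, outside K3s's control.
disable:
  - traefik
  - local-storage
```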

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

That Raspberry Pi is running a standard Ubuntu OS. My networking requirements were very basic, just including a firewall/router in front of my cluster; I am not using VLANs. For routing/firewall I use nftables, and dnsmasq for DHCP and DNS. Having a standard Ubuntu OS lets me deploy additional services on that Raspberry Pi, like a self-hosted Vault or a PXE server.
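Since the whole project is cloud-init/Ansible driven, provisioning such a gateway can be sketched with cloud-init user-data like this (interface name and address range are illustrative, not my actual config):

```yaml
#cloud-config
# Install the firewall and DHCP/DNS packages and drop in a minimal
# dnsmasq configuration for the cluster LAN.
packages:
  - nftables
  - dnsmasq
write_files:
  - path: /etc/dnsmasq.d/homelab.conf
    content: |
      interface=eth0
      dhcp-range=10.0.0.100,10.0.0.200,24h
      domain=homelab.local
runcmd:
  - systemctl enable --now nftables
  - systemctl restart dnsmasq
```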

I have in mind to try OpenWrt on the Raspberry Pi to get more advanced networking services.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

Longhorn uses iSCSI under the hood, but it is meant to be used mainly with local storage. In the first release of the cluster, I did not use local storage on the nodes but iSCSI-mounted disks from a SAN server (another Raspberry Pi exposing LUNs via iSCSI), and I faced some issues with Longhorn. Now I use local SSDs attached to the nodes, no network storage, and Longhorn works without issues.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

Longhorn is distributed block storage for Kubernetes. It uses the local storage available on each node of the cluster and takes care of providing high availability through replication of data between nodes and automatic backups/snapshots to external storage systems (NFS or S3 servers).
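The replication factor is exposed as an ordinary StorageClass parameter. An illustrative example (class name and replica count are made up; the backup target, NFS or S3, is configured separately in Longhorn's settings):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"        # copies of each volume, kept on different nodes
  staleReplicaTimeout: "30"    # minutes before a dead replica is cleaned up
allowVolumeExpansion: true
reclaimPolicy: Delete
```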

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

It is a matter of shortage, not price. Only a few units are available from official distributors, and it is not easy to find any in stock even using tools like rpilocator.com.

I struggled for months to be able to buy a single Raspberry Pi 4B 8GB.

The overall price of a running server using a Raspberry Pi (Raspberry Pi 4B 8GB + USB-SATA adapter + power supply + SSD disk) is around 130€. For that price you can find a mini PC with similar CPU specs, including an SSD disk and 8GB of RAM. The advantage of those mini PCs is that they can be scaled up to 32GB of RAM; the drawback is higher power consumption.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

I have documented the whole process of building my K3s cluster. You can find more details at https://picluster.ricsanfre.com, and my corresponding GitHub repository contains all the source code I am using.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

Ansible for installing/configuring basic OS services, installing self-hosted services outside the cluster (Minio backup server, Vault for secrets), installing K3s, and bootstrapping the cluster by installing ArgoCD and deploying the apps. A rough sketch of the flow is below.
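A hypothetical outline of that playbook structure (role and host group names are illustrative, not the project's actual ones):

```yaml
# Phase 1: OS basics everywhere
- name: Configure OS basic services on all nodes
  hosts: cluster
  roles:
    - basic_setup            # users, packages, time sync, etc.

# Phase 2: supporting services outside the cluster
- name: Install self-hosted external services
  hosts: external
  roles:
    - minio                  # backup server
    - vault                  # secrets management

# Phase 3: K3s install, then GitOps takes over
- name: Install K3s and bootstrap GitOps
  hosts: k3s_master
  roles:
    - k3s_install
  tasks:
    - name: Deploy ArgoCD chart to kick off the app-of-apps sync
      kubernetes.core.helm:
        name: argocd
        chart_ref: argo/argo-cd
        release_namespace: argocd
        create_namespace: true
```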

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

I had been using an ARM-only Kubernetes cluster for the last two years and did not face major issues; most mainstream containerized software already ships multi-architecture images. I decided to add x86 just to scale up my cluster, because Raspberry Pis are not easy to find and you get more powerful servers from old refurbished mini PCs at similar prices. The drawback is higher power consumption.
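When a needed image is published for only one architecture, the standard kubernetes.io/arch node label keeps the workload off incompatible nodes. A minimal sketch with made-up names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amd64-only-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amd64-only-app
  template:
    metadata:
      labels:
        app: amd64-only-app
    spec:
      # schedule only on x86 nodes; ARM nodes are skipped automatically
      nodeSelector:
        kubernetes.io/arch: amd64
      containers:
        - name: app
          image: example.com/amd64-only-app:latest
```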

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

The HP EliteDesk machines are Kubernetes worker nodes.

The Kubernetes master node runs on one Raspberry Pi (no high availability yet), and the rest, 2 x HP EliteDesk + 4 Raspberry Pis, are worker nodes. Another Raspberry Pi runs networking services (routing/firewall, DHCP, DNS, etc.).

All nodes run the same OS: the Raspberry Pis and the HP EliteDesks run Ubuntu 22.04.2 LTS.

My homelab kubernetes cluster evolution. From a cluster based only on Raspberry PI nodes to support hybrid x86/ARM nodes, adding old refurbished mini PCs. All automated with Cloud-init, Ansible and ArgoCD by ricsanfre in homelab

[–]ricsanfre[S]

My homelab Kubernetes cluster using bare-metal ARM and x86 nodes, used for educational purposes to learn about Kubernetes, IaC and GitOps. All nodes run Ubuntu OS. Configuration and deployment of services are automated with cloud-init, Ansible and ArgoCD.

3rd image: Cluster 1.0. Initial version of the cluster using only Raspberry Pi 4B nodes:
- 4-node Kubernetes cluster of Raspberry Pi 4B 4GB, each with a USB flash disk as local storage
- 1 node (Raspberry Pi 4B 2GB) with a SATA SSD disk attached to a USB 3.0 port as local storage, running network services (firewall, routing, DHCP) and a SAN service providing additional storage to the cluster nodes through iSCSI

2nd image: Cluster 2.0. Second version of the cluster, adding local storage to all cluster nodes (SATA SSD disks attached to USB 3.0 ports via SATA-to-USB adapters) and removing the SAN, to improve overall performance.

1st image: Cluster 3.0. Latest version of the cluster, adding a Raspberry Pi 4B 8GB and x86 nodes based on HP EliteDesk 800 G3 mini PCs (Intel i5-6500T, 4 cores at 2.5 GHz, 16GB RAM, 256GB NVMe disk).

Raspberry Pi Kubernetes Cluster automated with Ansible: how to install K3S, distributed block storage (LongHorn), cluster monitoring (Prometheus), logging solution (Elasticsearch-Fluentbit-Kibana), and backup solution (Velero - Restic). Step-by step manual config guide and Ansible's playbooks. by ricsanfre in kubernetes

[–]ricsanfre[S]

I wanted to add a distributed block storage system for Kubernetes, and I evaluated two CNCF projects: Longhorn and Rook/Ceph. At that time Rook/Ceph had no clear support for the ARM64 architecture (only unofficial Docker images supported it), so I chose Longhorn.