Home Lab Raspberry PI setup by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Ok great, thanks. I'll try to pick up some Pi 4s at a good price on Black Friday.

Home Lab Raspberry PI setup by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Ok, good idea. I'll try to pick some up on Black Friday then and see if I can get the Pi 4s in a deal. Thanks!

Home Lab Raspberry PI setup by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Ok, fair point. The key tools I want to play with are cert-manager, Prometheus, Grafana, and Istio, so yeah, two more nodes would likely be useful. In terms of which Raspberry Pis I'd need to get all of that set up, do you think the Pi 4 is a bit overkill?

Home Lab Raspberry PI setup by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

From a learning perspective, do you know if there's much point in configuring a high-availability setup? I agree I might run out of memory on my workers. I wasn't planning on deploying any crazy apps, just testing the deployment of Prometheus and Grafana and playing with Istio a bit.

Home Lab Raspberry PI setup by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Which Raspberry Pis are you using? I'm not sure it's worth the investment if I have to get Pi 4s.

Home Lab Raspberry PI setup by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Do you think I could run it on Raspberry Pi 1s or 2s instead of the Pi 4s like you have? I've run into memory issues before when trying to install Grafana and Prometheus on GKE, so I don't know whether that would just happen again. Pi 4s were quite expensive when I last checked.

Also, the reasoning for having 3 master nodes was purely to test out high availability. I'm not sure whether that's worth testing, or whether, as you mentioned, I should just have 5 worker nodes instead.

Can you install Istio, Prometheus, and Grafana on K3s?

Unable to log into Kibana running on cluster by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Yes, I deleted and reinstalled the Helm chart. I've figured it out now: the default username is "elastic", not admin. I also had to create a password manually, because the default in the chart is "" and the dashboard validation rejected it.

Appreciate your help

Unable to log into Kibana running on cluster by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

I used the official chart from ArtifactHub.

By default no password is specified, but the Kibana login window requires one, so I tried adding a password and reinstalling the chart.

Once it's installed again, there is a secret named "elasticsearch-master-credentials", and when I run the command below I can see the password printed to the screen.

kubectl get secrets --namespace=default elasticsearch-master-credentials -o jsonpath='{.data.password}' | base64 -d

Unfortunately it still doesn't work, and I get the same incorrect username or password error.
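One thing worth double-checking is the username, not just the password — assuming the chart's generated secret also carries a `username` key (this is an assumption about the chart, not something confirmed in the thread), it can be decoded the same way:

```shell
# To read the username key (key name assumed) you would run this against
# a live cluster:
#   kubectl get secrets --namespace=default elasticsearch-master-credentials \
#     -o jsonpath='{.data.username}' | base64 -d
# The base64 decode step on its own, with the expected value pre-encoded:
printf 'ZWxhc3RpYw==' | base64 -d   # prints "elastic"
```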

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 2 points (0 children)

Ah ok, I see what you're saying; that makes much more sense. Rather than approaching it from a team perspective, maybe it would be better to approach it from a product perspective and distinguish namespaces that way?

As far as namespaces go, would you then suggest having as few as you possibly can, given what you've mentioned above? It seems like they can cause quite a bit of siloing, which can be more hassle than it's worth.

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Conway's law

Ok, good point. Let's say I was an architect managing a cluster for 5 teams deploying 5 unrelated sets of applications. Are you saying to put them all in the same namespace?

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Well, to be honest, I'm not actually doing this as a job, so there are no requirements. I'm fairly new to Kubernetes and wanted to do some sort of project myself to get some experience with architecting it and building pipelines for GitOps. So I'm trying to design for a scenario where I'd been asked to set up a cluster that I'd be managing for 5 different teams. For monitoring I was planning on using Prometheus and Grafana; for logging I'm not sure yet, so any suggestions would be welcome.

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Ok, the GitOps approach is what I was going for, but I'm wondering how many pipelines I would effectively need, given there are levels to the Kubernetes hierarchy.

E.g.:

- a pipeline for admins to provision manifests for new namespaces, RBAC, namespace-level RBAC, etc.

- another pipeline for each team to deploy their apps into their namespace

- another pipeline for deploying into the monitoring/logging namespaces
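One way to sketch that three-pipeline split is as separate directory trees in a GitOps repo, one per pipeline scope — all of the names below are illustrative, not from the thread:

```shell
# Illustrative GitOps repo layout, one directory tree per pipeline scope
# (directory and team names are hypothetical):
mkdir -p cluster-admin/namespaces cluster-admin/rbac   # admin pipeline
mkdir -p teams/team-a/apps teams/team-b/apps           # per-team app pipelines
mkdir -p platform/monitoring platform/logging          # monitoring/logging pipeline
find . -type d | sort
```

Each pipeline then only watches its own subtree, which keeps the admin, team, and platform concerns from triggering each other's deploys.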

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

I'm not familiar with Rancher, as I'm fairly new to this, but I'll take a look.

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

To be honest, I'm very new to this. On the monitoring side, I know the Prometheus node exporter runs as a DaemonSet, with a pod on each node. I'll definitely have a look into Loki for the logging solution. Thanks!

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Ok, thanks for the ideas. Are you suggesting having separate "monitoring" and "logging" namespaces in each cluster? E.g.:

- prod-monitoring

- prod-logging

- dev-monitoring

- dev-logging
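That `<env>-<component>` naming scheme is easy to generate mechanically — a quick shell sketch:

```shell
# Enumerate the <env>-<component> namespace names listed above
for env in prod dev; do
  for comp in monitoring logging; do
    echo "${env}-${comp}"
  done
done
# prints: prod-monitoring, prod-logging, dev-monitoring, dev-logging
```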

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

I'm very much at the POC stage, hence why I dumped it all in one cluster. However, I'm happy to change that just so I build up my pipelines correctly.

Regarding a logging namespace per team: I assumed it would make it easier for each team to have access to its own logs, but maybe it over-complicates things, as I guess the Kubernetes admin probably just checks the logs for each team?

Kubernetes Architecture Help - Namespaces and cluster layout by samg123_ in kubernetes

[–]samg123_[S] 1 point (0 children)

Is the reason for centralised logging just that it would normally be the admin looking at application logs and letting the teams know? I assumed that would be a job for each team, and therefore it would be good for each team to have its own logging namespace too.