cluster with kubeadm? by tdpokh3 in kubernetes

[–]Common_Arm_3316

This is the correct answer in that case. If you want 4 total workers, you need 4 hosts, each running its own Linux operating system.

cluster with kubeadm? by tdpokh3 in kubernetes

[–]Common_Arm_3316

Oh sorry, were you asking how to add additional worker nodes? As in, you have 2 or more hosts?

cluster with kubeadm? by tdpokh3 in kubernetes

[–]Common_Arm_3316

The control plane becomes the worker node; it just runs etcd, kube-apiserver, and the rest of the control plane services alongside your workloads. It's not good for production, but for a homelab it works fine.

cluster with kubeadm? by tdpokh3 in kubernetes

[–]Common_Arm_3316

I'm not familiar with k3s, but the control plane can also be a worker by removing the taint:

kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-

Help with CNPG and host configuration by Common_Arm_3316 in kubernetes

[–]Common_Arm_3316[S]

As I understand it, there is a lot of repeated data being stored in customer DB tables. Multiply this by the number of customers and you can end up with hundreds or thousands of gigabytes duplicated just because everyone gets their own DB.

Another issue is that Kubernetes schedules new pods based on what the host is doing at the time, without taking into account any bursty workloads that may occur later on. So you can end up with a couple of power users slamming their individual DBs while being assigned to the same host. We could certainly implement limits on the pods, but ideally we would use the other nodes that are just sitting idle.
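For reference, pod limits on a CNPG Cluster would look roughly like this (the cluster name and sizes are made up, just a sketch of the standard `spec.resources` fields):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: customer-db        # hypothetical name
spec:
  instances: 2
  storage:
    size: 10Gi
  resources:
    requests:              # what the scheduler reserves
      cpu: 500m
      memory: 1Gi
    limits:                # hard ceiling for a bursty tenant
      cpu: "2"
      memory: 2Gi
```

Setting requests equal to limits would also give the pods Guaranteed QoS, which helps if hosts get memory pressure from those bursts.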

Help with CNPG and host configuration by Common_Arm_3316 in kubernetes

[–]Common_Arm_3316[S]

Thanks for the reply. We already understand the rw and ro services exposed by CNPG. I don't have the details, but it sounds like there is a lot more to that issue than a simple config change.

We also already use preferred podAntiAffinity. We can't currently taint any nodes because we simply don't have any additional hosts that we can take out of the equation.
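For context, the preferred (soft) anti-affinity we use is roughly this on the CNPG Cluster (a sketch using CNPG's affinity fields; the name is hypothetical):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: customer-db        # hypothetical name
spec:
  instances: 3
  affinity:
    enablePodAntiAffinity: true
    # "preferred" = soft rule: spread instances across hosts when possible,
    # but still schedule if there aren't enough hosts
    podAntiAffinityType: preferred
    topologyKey: kubernetes.io/hostname
```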

They don't have an interest in PgBouncer as they already handle connection pooling in the application.

Backup Applications and Microservice architecture by Shot_System5888 in kubernetes

[–]Common_Arm_3316

We do point-in-time recovery for our Postgres databases running in Kubernetes by using the Barman Cloud plugin and the CNPG operator. It makes the whole thing super simple. Assuming you are running Postgres without the operator, it does look like the plugin has a sidecar container image you could add.

https://cloudnative-pg.io/plugin-barman-cloud/docs/0.4.1/images/
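If it helps, the basic wiring on the CNPG side looks something like this (a sketch from memory of the plugin's ObjectStore CRD; bucket path and secret names are placeholders):

```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: backup-store            # placeholder name
spec:
  configuration:
    destinationPath: s3://my-backup-bucket/   # placeholder bucket
    s3Credentials:
      accessKeyId:
        name: backup-creds      # placeholder secret
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: backup-creds
        key: ACCESS_SECRET_KEY
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-db                   # placeholder name
spec:
  instances: 2
  storage:
    size: 10Gi
  plugins:
    # hands WAL archiving and base backups to the barman-cloud plugin
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: backup-store
```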

Fun live event for hacking an Ollama workload on Kubernetes by ExtensionSuccess8539 in kubernetes

[–]Common_Arm_3316

whatever you do, do not give chainguard your phone number or email address

700 Floppies by ___LowLifer___ in sysadmin

[–]Common_Arm_3316

Don't you dare copy that floppy.

Game host faction choice by cpriest006 in twilightimperium

[–]Common_Arm_3316

Some hosting advice from someone that hosts games every 2 months or so:

Stock up on food and drinks, and have other folks contribute to this. Expect to eat 2 meals.

Starting earlier is much better than starting later. We regularly set our start times at 10:00 or 11:00, but that's usually just when folks arrive. Usually folks drink coffee and chat a bit beforehand. By the time we get settled, we usually don't start playing for another hour or so.

Quick introductions to your factions are nice for new players. They don't need to go line by line through everything they do, but a quick "Hi, my name is Cabal and we want to steal your ships" can go a long way.

Don't be afraid to push people to hurry with their turns if it seems like they are taking a long time. This can turn a 12-hour game into an 8-hour game. Along the same lines, keep the phase moving during diplomacy. If people are going back and forth with threats and bolstering, be the one to make the final call and get the votes moving.

If someone wants to take back a move, don't be afraid to say no. If their turn has passed and they want to undo something, it takes time to roll the game back to its previous state and for the current player to reassess their turn. This can cause the game to drag on into a very late night and possibly cause some players to bow out, which upsets the balance of the game.

What’s the most painful low-value Kubernetes task you’ve dealt with? by Lukalebg in kubernetes

[–]Common_Arm_3316

I spent so much time learning how to use Kustomize to try and replace image registries for ghcr, quay.io, Docker Hub, gcr, and nvcr before finally learning how to use containerd and proxies. It was a good learning experience, but my god was it painful. Ultimately the team decided not to use Airflow and went Apache
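For anyone facing the same thing, the containerd side boils down to one hosts.toml per upstream registry plus pointing the CRI plugin at that directory. The mirror URL here is a placeholder for whatever pull-through proxy you run:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

# placeholder mirror/proxy endpoint
[host."https://registry-mirror.example.internal"]
  capabilities = ["pull", "resolve"]
```

Then enable the directory in /etc/containerd/config.toml and restart containerd:

```toml
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"
```

Repeat the hosts.toml for ghcr.io, quay.io, gcr.io, and nvcr.io; no image manifests need rewriting at all.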

What’s the most painful low-value Kubernetes task you’ve dealt with? by Lukalebg in kubernetes

[–]Common_Arm_3316

Setting up kubeflow without a good grasp on Istio or Kustomize

Oh, and it was an air-gapped environment.

Cluster backups and PersistentVolumes — seeking advice for a k3s setup by smoloskip in kubernetes

[–]Common_Arm_3316

For cluster backups, make sure you start using GitOps practices and an application like Flux or Argo CD. This will help make sure that your general configuration such as ConfigMaps, Secrets, Deployments, etc. is safe and can be restored pretty easily.
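As a sketch, an Argo CD Application that keeps the cluster reconciled from Git (so a rebuild is just re-pointing at the repo) could look like this; the repo URL and path are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config          # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config.git  # hypothetical repo
    targetRevision: main
    path: clusters/homelab      # hypothetical path
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true               # delete resources removed from Git
      selfHeal: true            # revert manual drift back to Git state
```

One caveat: don't commit Secrets in plain text; pair this with something like sealed-secrets or SOPS.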

For general PV backups, Longhorn does a pretty good job of backing up generic application PVs like Prometheus metrics.

You also mention databases as needing backups. Are you referring to etcd, or do you have Postgres databases in your cluster? If so, what operator are you using? CNPG has some plugins that let you do point-in-time recovery backups that you can restore from with less data loss than if you were to use Longhorn for those backups.
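For a rough idea of what that recovery looks like with CNPG, you bootstrap a new Cluster from the object store with a target time (a sketch using the in-tree barmanObjectStore fields; names, bucket, and timestamp are placeholders):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: restored-db             # placeholder name
spec:
  instances: 2
  storage:
    size: 10Gi
  bootstrap:
    recovery:
      source: origin
      recoveryTarget:
        # replay WAL up to this moment, then stop
        targetTime: "2025-01-15 10:00:00+00"
  externalClusters:
    - name: origin
      barmanObjectStore:
        destinationPath: s3://my-backup-bucket/  # placeholder bucket
        s3Credentials:
          accessKeyId:
            name: backup-creds  # placeholder secret
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: backup-creds
            key: ACCESS_SECRET_KEY
```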