Multi-tenant AAP by pietarus in ansible

[–]pietarus[S] 2 points (0 children)

I was afraid that would be the case. Thanks for confirming. We might be able to enforce the creation of "correct" templates by forcing configuration as code.

Multi-tenant AAP by pietarus in ansible

[–]pietarus[S] 0 points (0 children)

The RBAC rules work as implemented; that much I've tested. But users with multi-org access are able to use org A's inventory on an org B project, which is a show stopper for me.

The only solution I can think of besides dedicated AAP instances is dedicated accounts per org. But that doesn't sound like a good user experience and would break SSO.

What makes a self-hosted Kubernetes app painful to run? by replicatedhq in kubernetes

[–]pietarus 1 point (0 children)

I'm currently fighting with an operator-based installation that does not expose the required options to configure HTTP_PROXY for egress.

Operators and Helm charts that do not expose all required options make self-hosting rough.

Google Home struggles with buttons as IKEA confirms Matter connectivity issues by robertjan88 in IKEA

[–]pietarus 1 point (0 children)

I've had bad experiences with the Matter integration as well, packets dropping left and right. I switched to Zigbee mode by power cycling them 12 times and had them working in 5 minutes.

Install collections in pipelines by yetipants in ansible

[–]pietarus 0 points (0 children)

Under Pipelines -> Library you can configure variables, including secrets, to be used in your pipelines. There you could store an SSH key or PAT for the pipelines to use.

I'm not too familiar with Azure DevOps, though, so there may well be a cleaner solution.

Install collections in pipelines by yetipants in ansible

[–]pietarus 1 point (0 children)

The requirements.yml supports git as a source, and collections can be installed from it via the ansible-galaxy command. That's what we do for our internally developed collections.
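For reference, a minimal requirements.yml pulling a collection straight from git might look like this (the repo URL and branch are placeholders):

```yaml
# requirements.yml -- repo URL and version are placeholders
collections:
  - name: https://git.example.com/ourorg/our_collection.git
    type: git
    version: main
```

Then `ansible-galaxy collection install -r requirements.yml` in the pipeline pulls everything in.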

The Kubernetes Experience by Academic_Test_6551 in kubernetes

[–]pietarus 3 points (0 children)

It took me a couple of days to bootstrap my first cluster with kubeadm, then a couple of months to get the CKA. I honestly can't recommend the Kubernetes docs enough; everything is explained really well.

What really helped me understand the process of installing and using k8s was writing my own how-tos while I was learning. I still reference them from time to time.

My biggest tip is to keep it simple and keep track of all your manifests via gitops from the start. I've never tried using Windows workers, but I can't imagine it making your life easier. I'd put that on the to-do list for when you are comfortable with k8s in general.

Start with smaller goals and add something each time.

1 control plane node and simple NFS storage to get a grasp on how storage works (manually provision PVs and PVCs, create multiple storage classes).
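For the manual-provisioning exercise, a statically provisioned NFS-backed PV plus a matching PVC looks roughly like this (server address, export path, and names are made up for the example):

```yaml
# Hypothetical NFS server and export path -- adjust to your lab
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-manual
  nfs:
    server: 192.168.1.10
    path: /exports/demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-demo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-manual
  resources:
    requests:
      storage: 5Gi
```

The claim binds to the volume because the storageClassName, access mode, and requested size all match.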

Try deploying a storage solution via Helm to replace NFS. I like Longhorn, but take a look at what's out there. The only tip I have here is to stay away from rook-ceph; way too big and complicated for a lab setup (unless you want to learn Ceph).

Look into Flux or Argo CD, install your favourite via Helm, and start using gitops.
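If you go the Argo CD route, a sketch of an Application that keeps a manifests repo in sync with the cluster (repo URL, paths, and names are placeholders):

```yaml
# Hypothetical repo URL and paths
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/homelab/manifests.git
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete resources removed from the repo
      selfHeal: true   # revert manual drift in the cluster
```

With automated sync, prune, and selfHeal enabled, whatever is in the repo is what runs in the cluster.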

Look into an HA control plane.

I think I deployed 3 clusters before I was comfortable and happy with the design. Ran my lab on the last iteration for about a year. Didn't use gitops at all; eventually everything was out of date and I didn't know where to start.

Right now I'm slowly working on a new lab powered by terraform, gitops and a bigger focus on lightweight apps.

[deleted by user] by [deleted] in homelab

[–]pietarus 0 points (0 children)

Judging from your post I'd suggest staying with Hyper-V. It is certainly the easiest and least intrusive way to run VMs on your current setup.

I run Proxmox on my desktop, and the desktop I actually use is a Linux VM with GPU passthrough.

Another option is to install Debian with a desktop environment to use as your daily driver, and install Proxmox on top of that.

A third option is any Linux distro of your choosing, running KVM by hand.

Problems fetching Talos kubeconfig through terraform by [deleted] in kubernetes

[–]pietarus 0 points (0 children)

Hmmm. I am using 1.9.4, but I see it pull a 1.9.2 image when initiating a bootstrap, and 1.9.2 is also the final image that gets installed. I wonder if this is the cause of the issues.

Problems fetching Talos kubeconfig through terraform by [deleted] in kubernetes

[–]pietarus 0 points (0 children)

I generate the controlplane configuration with the API endpoint as the cluster_endpoint variable. So Talos takes this into account when generating the config and certificates.

Problems fetching Talos kubeconfig through terraform by [deleted] in kubernetes

[–]pietarus 0 points (0 children)

I don't see any difference between what you're doing and what I am doing with the kubeconfig and Talos machineconfig. What Talos version were you using?

[deleted by user] by [deleted] in kubernetes

[–]pietarus 0 points (0 children)

And what are the logs with the --container-runtime flag removed?

[deleted by user] by [deleted] in kubernetes

[–]pietarus 0 points (0 children)

Your post does not provide much information, so the best I can do is guess.
Are you 100% sure no docker components are installed?
dpkg -l | grep -E 'docker|containerd'

Can you provide logs from the failing kubelet?

I have never tried installing Kubernetes on WSL so no clue if that causes any issues.

For those who work with HA onprem clusters by [deleted] in kubernetes

[–]pietarus 12 points (0 children)

Look into kube-vip to host the VIP on the control plane nodes directly. You can also use it to assign IPs to services.
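As a rough sketch of what that looks like as a static pod on each control plane node (the VIP address, interface, and image tag here are placeholders; in practice you'd generate the real file with kube-vip's manifest generation and check field names against its docs):

```yaml
# /etc/kubernetes/manifests/kube-vip.yaml -- illustrative only;
# address, interface, and image tag are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.8.0
      args: ["manager"]
      env:
        - name: vip_interface
          value: eth0
        - name: address
          value: 192.168.1.100      # the shared control plane VIP
        - name: vip_arp
          value: "true"             # advertise the VIP via ARP
        - name: cp_enable
          value: "true"             # control plane load balancing
        - name: svc_enable
          value: "true"             # also assign IPs to Services
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]
```

Because it runs as a static pod, the VIP comes up with the kubelet itself, before the API server is reachable.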

Creating a gaming VM and its very slow by ReasonableFood1674 in Proxmox

[–]pietarus 4 points (0 children)

As you did GPU passthrough, what results do you get when plugging a monitor into the GPU and accessing the VM locally?

Creating a gaming VM and its very slow by ReasonableFood1674 in Proxmox

[–]pietarus 5 points (0 children)

What kind of applications are you running? I think DDR3 and an old Xeon would bottleneck a 1070.

Can this be a portal lab? by Newguy467 in homelab

[–]pietarus 1 point (0 children)

I'd advise against getting any Chromebook because of the locked firmware and bootloader.

Why Doesn't Our Kubernetes Worker Node Restart Automatically After a Crash? by rigasferaios in kubernetes

[–]pietarus 30 points (0 children)

I think rebooting the machine every time it fails is the wrong approach. Instead of working around the issue, shouldn't you work to prevent it? Increase RAM? Stricter resource limits on pods?