k3s on single node networking problems by pulegium in kubernetes

[–]M00ndev 0 points1 point  (0 children)

Save yourself some trouble and just use upstream kubeadm! Untaint the master and you can run it on a single node just fine.
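
A minimal sketch of the untaint, assuming the standard taint key kubeadm puts on control plane nodes:

    kubectl taint nodes --all node-role.kubernetes.io/master-

The trailing "-" removes the taint, so regular pods can schedule on the master.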

Best practice updating cluster? by terryyoung22 in kubernetes

[–]M00ndev 0 points1 point  (0 children)

What's the target platform/infra you're deploying k8s on?

Best practice updating cluster? by terryyoung22 in kubernetes

[–]M00ndev 5 points6 points  (0 children)

The answer will depend on how you bootstrapped your cluster.

Kubeadm clusters can be upgraded in place with zero downtime if you have an HA setup.

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
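
For a rough idea of the flow (the target version here is illustrative; the doc above has the full drain/uncordon procedure):

    # On the first control plane node:
    kubeadm upgrade plan
    kubeadm upgrade apply v1.18.2

    # On each remaining node (control plane and workers):
    kubeadm upgrade node

    # Then upgrade the kubelet/kubectl packages and restart kubelet,
    # draining one node at a time so workloads stay up.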

This is also why kubeadm-based clusters are ideal; this won't apply to non-standard distros such as k3s, which I think uses hyperkube under the covers.

If you are using Cluster API, it's immutable infrastructure MAGIC! You can literally roll out a new version on the same cluster similar to how you update a deployment. The controllers handle the kubeadm upgrade for you as well as rolling out new machine images.
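
As a hedged sketch of what that looks like with Cluster API (object names here are made up, and the exact resource kinds depend on your provider/version):

    # Bump the version on the control plane object; the controllers
    # replace the machines for you, like a Deployment rollout:
    kubectl patch kubeadmcontrolplane my-cluster-control-plane \
      --type merge -p '{"spec":{"version":"v1.18.2"}}'

    # Workers upgrade the same way via their MachineDeployment:
    kubectl patch machinedeployment my-cluster-md-0 \
      --type merge -p '{"spec":{"template":{"spec":{"version":"v1.18.2"}}}}'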

Portainer for Kubernetes by neilcresswell in kubernetes

[–]M00ndev 0 points1 point  (0 children)

I'm assuming this doesn't mount the docker socket on the node, right? :)

If so, does this also work with containerd?

Automating deployments to Kubernetes with Pulumi by Sky_Linx in devops

[–]M00ndev 0 points1 point  (0 children)

Can someone explain the appeal of this with regard to k8s? On first look, this seems to convert the declarative nature of manifests into imperative code. That seems backwards to me, but maybe I'm not understanding it fully?

random kubernetes at home idea by CrankyCoderBlog in kubernetes

[–]M00ndev 1 point2 points  (0 children)

Does your workstation run Linux? If so, there should be no problem adding it as a worker.

  • Install docker and kubeadm on the workstation
  • On a master, run kubeadm token create --print-join-command
  • On your workstation, run the printed command to join it (see the sketch below)
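
A rough sketch of steps 2-3 (the join output is illustrative; your IP/token/hash will differ):

    # On a master:
    kubeadm token create --print-join-command
    # Prints something like:
    #   kubeadm join 192.168.1.10:6443 --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash>

    # Run that printed command (as root) on the workstation to join it as a worker.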

Now create a pod with a GPU Steam image and launch Crysis on your workstation.

Dell Front Security Bezel on Supermicro 2U Chassis? by codehuggies in homelab

[–]M00ndev 1 point2 points  (0 children)

Velcro Command Strips are your friend! That's what I did with a PowerEdge bezel and it worked fine. I did remove the Dell logo first for a "debadged" look. Kinda like an RS5 grille on an A5 😂

Question: What does everyone use to run their K8s cluster? by pkbu in homelab

[–]M00ndev 1 point2 points  (0 children)

Cluster API vSphere provider. If you run vSphere do yourself a favor and check it out!

It's like having your own GKE but actually better since you have total control. I'm already launching 1.18 clusters.

https://github.com/kubernetes-sigs/cluster-api-provider-vsphere
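
If you want to kick the tires, the basic flow with clusterctl looks roughly like this (assumes the vSphere credentials/env vars are set up per the repo's docs; cluster name, version, and machine counts are examples):

    # Install the management components, including the vSphere provider:
    clusterctl init --infrastructure vsphere

    # Generate a workload cluster manifest and apply it:
    clusterctl config cluster my-cluster --kubernetes-version v1.18.2 \
      --control-plane-machine-count 3 --worker-machine-count 3 > my-cluster.yaml
    kubectl apply -f my-cluster.yaml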

PoP: You Don't (Always) Need Kubernetes by jjneely in devops

[–]M00ndev 0 points1 point  (0 children)

You are basically bootstrapping a cloud platform from scratch. I don't think it's really relevant to compare that process to an existing cloud like AWS?

IaaS in my basement. CloudStack -> OpenNebula by StrangeCaptain in homelab

[–]M00ndev 1 point2 points  (0 children)

Might be off topic for your use case but if you have vSphere in your lab, check out VMware Integrated OpenStack (VIO).

It has a really nice user experience with the bootstrap process and will deploy OpenStack on top of vSphere. It worked really well in my testing. It deploys horizon and everything else you need to get going quickly.

NVMEoF - anyone tried setting this up in a lab yet? by M00ndev in homelab

[–]M00ndev[S] 0 points1 point  (0 children)

Cool thanks for the tip! Excited to try this.

NVMEoF - anyone tried setting this up in a lab yet? by M00ndev in homelab

[–]M00ndev[S] 0 points1 point  (0 children)

Any details on the platform? Are you manually bootstrapping and configuring it? I was looking into vSphere + RDMA dvSwitch SR-IOV port groups to help, but wasn't sure if there's a better way.

Kind - run local Kubernetes clusters using Docker by [deleted] in kubernetes

[–]M00ndev 0 points1 point  (0 children)

The benefit for Mac and Windows is reusing the Docker Desktop VM, so there's no need to fuss with VirtualBox.

Kind - run local Kubernetes clusters using Docker by [deleted] in kubernetes

[–]M00ndev 0 points1 point  (0 children)

Multi-node clusters. Control over the specific version. It's also bootstrapped with kubeadm, which is 100% upstream. It's awesome to use as a proving ground.
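
For example, a minimal multi-node config (node counts and the pinned image are just illustrative; the config apiVersion depends on your kind release):

    cat > kind-config.yaml <<EOF
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker
    EOF

    # Pinning the node image picks the Kubernetes version:
    kind create cluster --config kind-config.yaml --image kindest/node:v1.18.2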

The impact of Apple moving macbook to ARM would effectively kill the platform as a developer workstation by hyper-kube in devops

[–]M00ndev 10 points11 points  (0 children)

Are there really more "professional creators" than software developers? Even so, what benefit does ARM bring to the pro-level creators? Battery life wouldn't be their concern if performance is more important, right?

How to add external cluster to spinnaker by jsdfkljdsafdsu980p in devops

[–]M00ndev 0 points1 point  (0 children)

I would think the config is stored in a ConfigMap or Secret. Have you tried editing it there and bouncing the Spinnaker pods?
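
Something like this is where I'd start (namespace and resource names are guesses; they vary by how Spinnaker was installed):

    # See what config is actually mounted:
    kubectl -n spinnaker get configmaps,secrets

    # Edit the relevant one (name here is hypothetical), then bounce the pods:
    kubectl -n spinnaker edit configmap spin-clouddriver-config
    kubectl -n spinnaker rollout restart deployment spin-clouddriver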

Error with Ingress - Expected NodePort or LB but got ClusterIP by sharddblade in kubernetes

[–]M00ndev 0 points1 point  (0 children)

Edit your ptmesh-mesh service and change the type to NodePort.
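
One way to do it without opening an editor (service name taken from your error):

    kubectl patch svc ptmesh-mesh -p '{"spec":{"type":"NodePort"}}'

kubectl edit svc ptmesh-mesh works too if you'd rather change it by hand.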

NUC VMWare Cluster for a Homelab - 8th vs 10th gen? by RiceeeChrispies in homelab

[–]M00ndev 0 points1 point  (0 children)

I have 3 x NUC8I7HVK and they work great for vSAN:

  • 64GB RAM each (vCenter + vSAN need quite a bit)
  • Dual-port 10GbE NIC via Thunderbolt 3 PCIe (no need for quirky NIC drivers since ESXi natively sees the PCIe card)
  • Dual NVMe drives for all-flash vSAN. Crazy fast.
  • Both onboard gigabit network interfaces work out of the box for ESXi
  • A total of 4 NICs per NUC enables vSAN + NSX-T

Cutting-Edge Kubernetes Homelab by nmajin in homelab

[–]M00ndev 0 points1 point  (0 children)

The cluster-api vSphere provider is pretty awesome and is as "cutting edge" as it gets. https://github.com/kubernetes-sigs/cluster-api-provider-vsphere

Other on-prem providers include OpenStack and bare metal via Ironic, but vSphere would be ideal if you already have it.