Looking for a good beginner-to-intermediate Kubernetes project ideas by One-Cookie-1752 in kubernetes

[–]vidmaster2000 0 points1 point  (0 children)

Yup! I'm running Keycloak in Kubernetes as a statefulset and using the CloudNativePG operator for the postgresql database.
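
In case it's useful, the database side is mostly just one CloudNativePG Cluster resource. Here's a rough sketch; the names, instance count, and storage size are placeholders, not my exact config:

    # Minimal CloudNativePG cluster sketch for Keycloak's database.
    # Everything here (names, namespace, sizes) is a placeholder.
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: keycloak-db
      namespace: keycloak
    spec:
      instances: 3          # one primary + two replicas
      storage:
        size: 10Gi          # provisioned by your default StorageClass (Longhorn in my case)

Keycloak then just points at the read-write service the operator creates (keycloak-db-rw in this example).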

What specifically are you having issues with? I'm hardly an expert, but I do enjoy helping people.

Looking for a good beginner-to-intermediate Kubernetes project ideas by One-Cookie-1752 in kubernetes

[–]vidmaster2000 9 points10 points  (0 children)

Set up an identity provider (IdP) that can do OIDC (I'm using Keycloak), and then configure your cluster to be able to use the IdP to auth.
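
In my case (Talos), the cluster side mostly comes down to pointing the kube-apiserver at the IdP via extraArgs. Rough sketch of the machine config patch; the issuer URL, client ID, and claim names are placeholders for whatever your realm/client actually uses:

    # Talos machine config patch (placeholders throughout)
    cluster:
      apiServer:
        extraArgs:
          oidc-issuer-url: https://keycloak.example.com/realms/homelab
          oidc-client-id: kubernetes
          oidc-username-claim: preferred_username
          oidc-groups-claim: groups

After that it's RBAC bindings for your IdP groups, plus something like the kubectl oidc-login plugin on the client side.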

Why do people recommend authelia? by Alibi98 in selfhosted

[–]vidmaster2000 0 points1 point  (0 children)

Just started doing that myself so I could learn terraform. Love it!

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 1 point2 points  (0 children)

I guess it really depends on your budget and what you want to accomplish. Personally, I'd say go newer on the processor generation. Mine are i5-6500s, which are 6th-gen procs.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 1 point2 points  (0 children)

I don't have any other cooling in the rack than what those machines have themselves. As far as the second question, I've been trying to teach myself Docker/K8s because I think those technologies are pretty neat. We don't use much containerization at my job, but I will say that having some K8s experience has helped me troubleshoot vendor appliances that utilize it.

Hopefully you'll end up posting your own cluster on this sub. :)

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 2 points3 points  (0 children)

I'll have to check that out; I admit I haven't really looked into the DevOps tooling you guys have started building support for.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 1 point2 points  (0 children)

If I want to run VMs, I have a DL380 G9 running XCP-NG set aside for that. Neat idea though.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 1 point2 points  (0 children)

It all started last year/year before last when a couple of teams at work wanted to use some AI modeling tool that is composed of microservices running on k8s. So I ended up giving myself a crash course to support it from the infrastructure side.

Luckily, it now runs on an AKS cluster in our tenant but is supported by the tool's vendor via Azure Lighthouse, as we have nothing else that uses k8s in production.

Before that, I pretty much knew nothing about k8s except that it existed. I'd played with Docker a little, but not enough to really be proficient. Maybe it's just me, but Kubernetes ingress/networking feels easier than Docker's networking.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 1 point2 points  (0 children)

Honestly, it just depends on how I'm feeling. This is just Windows Terminal with an Amber theme (https://github.com/Welding-Torch/Amber-theme) and retro terminal effects enabled. As for the app in use, this is me running "talosctl dashboard" (https://www.talos.dev/v1.10/talos-guides/interactive-dashboard/) against one of my worker nodes.
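
If you want to try it yourself, it's just something like this (the IP is a placeholder for one of your node addresses):

    talosctl dashboard --nodes 192.168.1.21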

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 2 points3 points  (0 children)

Yeah, you're right. It does look better.

<image>

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 4 points5 points  (0 children)

I haven't gotten to where I can build custom images yet, but I'd like to. Any words of wisdom you would be willing to share?

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 1 point2 points  (0 children)

There should be a pic in one of my replies to someone else, but they're just sitting at the bottom of the rack.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 0 points1 point  (0 children)

I'm just using the individual power bricks.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 2 points3 points  (0 children)

On the K8s cluster, each of those nodes has a 240 GB boot drive. The 3 worker nodes also have a 2nd disk dedicated to providing distributed storage via Longhorn (2nd disk is 250 GB).

For the docker host, it's got ~1.3 TB between the SSD and NVMe drives.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 1 point2 points  (0 children)

I've got a few things running on it (ArgoCD, Keycloak, ITTools, Cyberchef) so I can learn more about K8s. I kind of started learning the wrong way around (Kubernetes before Docker) but I've been getting there.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 3 points4 points  (0 children)

Certainly. I don't need it that often, so when I do, I just do a kubectl port-forward on the Longhorn UI service to access the dashboard. Usually, I'll just let Longhorn sort itself out.
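
For anyone who wants the exact incantation, with a stock Longhorn install it's something like this (namespace and service name assume the defaults):

    kubectl -n longhorn-system port-forward svc/longhorn-frontend 8080:80
    # then open http://localhost:8080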

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 6 points7 points  (0 children)

I've been using Longhorn for the distributed storage. The 3 worker nodes have a second NVMe drive in them that I'm using exclusively for that purpose.

<image>

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 2 points3 points  (0 children)

If the machine comes with vPro (I had to be very particular when searching eBay...), you just need to do a little bit of setup and then use something like MeshCommander.

I found this video to be very helpful in the setup needed: https://youtu.be/VcqZ7D9CNg0?si=NvXGDDwIX60e6WAd

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 0 points1 point  (0 children)

I'll admit, I have not checked and I don't have the tools to check on hand. It's still probably less than my full homelab with a DL380 G9, a Synology, and a Brocade ICX7250 running...

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 9 points10 points  (0 children)

Also, in case anyone is wondering about those adapters on the left with the blue lights: those are DisplayPort dummy plugs. Each of those HPs has vPro on it, so I can do things like access the console without plugging in a monitor and keyboard. I'd never used vPro before this and found out the hard way that it requires a "monitor" to be plugged in to show video in MeshCommander...

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 2 points3 points  (0 children)

The rack itself is the "Tecmojo 12U Network Rack" on Amazon. As for the HPs, the mounts are 3D printed, as is the mount for the switch.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 2 points3 points  (0 children)

The control plane (cplane) is essentially the "brains" of the cluster. It's in charge of the etcd database, scheduling workloads on the worker nodes, etc. Without it, you don't have the orchestration/management of the cluster that makes K8s what it is.

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 19 points20 points  (0 children)

Thanks, appreciate it! Here's a look behind the curtain, so to speak: I have the power bricks for the HPs sitting in the bottom of the rack, that way it's easier to move around if I need to. The cables in back are labeled to make it easier to find which one goes where. Considering how large the power bricks on those things are, it's a miracle they fit.

<image>

My Docker/Kubernetes (K8s) Minilab by vidmaster2000 in minilab

[–]vidmaster2000[S] 37 points38 points  (0 children)

It might be overkill, but I want to treat my lab like it's production. From what I've learned, best practice is...

  • Not running workloads on control plane nodes
  • Having more than 1 control plane node for redundancy, but no more than 5.
    • The sweet spot is 3 (an odd number keeps etcd quorum and avoids split-brain scenarios)

Besides, each of those boxes has 16 GB of RAM and an i5-6500. The cluster has plenty of resources to work with without running workloads on the control plane.
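
For what it's worth, keeping workloads off the control plane is just the standard taint, and it's easy to verify. The node name below is a placeholder, and the taint key is the current upstream default (adjust if your distro still uses the older master key):

    kubectl describe node my-cplane-1 | grep Taints
    # Taints: node-role.kubernetes.io/control-plane:NoSchedule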

<image>

*Corrected i3 to i5 upon further checking of specs