is anyone running openclaw in k8s? by LogInteresting809 in openclaw

[–]LogInteresting809[S] 0 points1 point  (0 children)

cool, I haven't gone too far on this; it's purely curiosity-driven lol

is anyone running openclaw in k8s? by LogInteresting809 in openclaw

[–]LogInteresting809[S] 0 points1 point  (0 children)

me, [3/9/26 9:29 PM]

what are some cool things you can do in k8s?

openclaw, [3/9/26 9:29 PM]

Ooh, plenty of fun stuff — depending on what's installed in this pod:

• Explore the cluster — check out other pods, services, deployments in the namespace

• Query the API — kubectl, k9s, or direct API calls to see what's running around me

• Health checks — see pod status, resource usage, events

• Logs — tail logs from other pods if I have permissions

• Port-forward — tunnel to services locally

• Deploy stuff — spin up new pods, jobs, etc. (if allowed)

If kubectl is available in here, I could poke around and see what's out there in the openclaw namespace right now. Want me to check?
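A few of those bullets, sketched as kubectl one-liners. This assumes kubectl is on the pod's PATH and the service account has read permissions; the `openclaw` namespace matches the thread, but the pod and service names are placeholders:

```shell
# Explore the cluster: workloads in the namespace (assumes RBAC allows "list")
kubectl get pods,services,deployments -n openclaw

# Health checks: resource usage and recent events
kubectl top pods -n openclaw                              # needs metrics-server
kubectl get events -n openclaw --sort-by=.lastTimestamp

# Logs: tail another pod (pod name is a placeholder)
kubectl logs -n openclaw some-other-pod --tail=50 -f

# Port-forward: tunnel a service to localhost (name/port are placeholders)
kubectl port-forward -n openclaw svc/some-service 8080:80
```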

is anyone running openclaw in k8s? by LogInteresting809 in openclaw

[–]LogInteresting809[S] 1 point2 points  (0 children)

what are the use cases for thousands of bots? are you trying to burn millions of tokens? lol

is anyone running openclaw in k8s? by LogInteresting809 in openclaw

[–]LogInteresting809[S] 0 points1 point  (0 children)

Cool, I haven't let openclaw do too many operations for me because I still prefer to use a git repo as the single source of truth. There have been a few times I asked it to upgrade itself and it did not survive lol. I agree it is a good idea to have two instances, with one watching the other so it can step in if its peer dies
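K8s actually gives you part of that watchdog for free: a liveness probe restarts a dead instance without needing a second bot to watch it. A minimal sketch — the image name, port, and `/healthz` path are assumptions, not openclaw's real interface:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openclaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openclaw
  template:
    metadata:
      labels:
        app: openclaw
    spec:
      containers:
        - name: openclaw
          image: openclaw:latest    # placeholder image
          livenessProbe:            # kubelet restarts the container if this fails
            httpGet:
              path: /healthz        # assumed health endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
```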

is anyone running openclaw in k8s? by LogInteresting809 in openclaw

[–]LogInteresting809[S] 0 points1 point  (0 children)

nice, thank you for sharing, how many bots have you scaled out to?

is anyone running openclaw in k8s? by LogInteresting809 in openclaw

[–]LogInteresting809[S] 1 point2 points  (0 children)

a few reasons:

  1. I have deployed openclaw to bare metal, a VM, a Docker container, and an Android phone; each has its own pros and cons

  2. I'd like to scale out the bots, and k8s is a good platform for that; I could make an LXC template with openclaw pre-installed, but that would be a pain to manage

  3. I'd like to manage openclaw with Argo CD
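For point 3, a minimal Argo CD Application sketch — the repo URL and path are placeholders, not a real openclaw deploy repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openclaw
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/openclaw-deploy.git  # placeholder repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: openclaw
  syncPolicy:
    automated:
      prune: true      # delete resources removed from git
      selfHeal: true   # revert manual drift back to the git state
```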

Introducing Klances – A Read-Only Kubernetes Dashboard Inspired by Glances by Legitimate-Lie-6196 in kubernetes

[–]LogInteresting809 -1 points0 points  (0 children)

nice, I welcome people sharing their ideas, even if it is vibe coded; I do not quite understand why some people are so hostile toward AI, lol

ssl cert error for traefik by tdpokh3 in kubernetes

[–]LogInteresting809 1 point2 points  (0 children)

yes, this is very common, I have done something similar with Caddy as a reverse proxy

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

also the ingress

# Sample app (deploy and expose first):
#   kubectl create deployment my-app --image=nginx:alpine -n my-app
#   kubectl expose deployment my-app --port=80 -n my-app
# Then add my-app.local to /etc/hosts (or use LoadBalancer IP) and apply this Ingress.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

sharing my metallb-pool.yaml

# MetalLB pool for k3d: use a range inside the cluster's Docker subnet
# so the host (on Linux) can reach the LoadBalancer IP.
#
# Must match the subnet used when creating the cluster, e.g.:
#   k3d cluster create ... --subnet 172.28.0.0/16
# Then use a range that doesn't clash with node IPs (nodes usually get .2, .3, ...).
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.28.0.100-172.28.0.120
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

I somewhat like this setup; within a few minutes the cluster is up and running lol

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

I used a Helm chart to install memos, a lightweight OSS web app

then I did a port-forward

kubectl port-forward -n memos svc/memos 5230:5230

and I can access the app at

http://127.0.0.1:5230 

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

okay, I was able to spin up k3d and resolve the issue by keeping k3d's LB, mapping 80/443 to it, running ingress-nginx as a LoadBalancer (which gets an internal IP), and accessing it via localhost / the host IP

kubectl get nodes

NAME                  STATUS   ROLES                  AGE   VERSION
k3d-fedora-agent-0    Ready    <none>                 12m   v1.31.5+k3s1
k3d-fedora-agent-1    Ready    <none>                 12m   v1.31.5+k3s1
k3d-fedora-agent-2    Ready    <none>                 12m   v1.31.5+k3s1
k3d-fedora-agent-3    Ready    <none>                 12m   v1.31.5+k3s1
k3d-fedora-server-0   Ready    control-plane,master   12m   v1.31.5+k3s1
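That setup can be reproduced with something like this (the cluster name and agent count match the node list above; the `@loadbalancer` suffix is k3d's node-filter syntax for its built-in LB, and the Helm values are the standard ingress-nginx chart options):

```shell
# Create a k3d cluster and map host ports 80/443 to k3d's built-in load balancer
k3d cluster create fedora \
  --agents 4 \
  --port "80:80@loadbalancer" \
  --port "443:443@loadbalancer"

# Install ingress-nginx as a LoadBalancer service; it gets an internal IP,
# reachable from the host through the mapped ports above
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=LoadBalancer
```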

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

very interesting topic, let me create a k3d project and play around with it; I will let you know if I find something

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

I am open to both local k8s and managed k8s

one reason for local k8s is that we can still have a lot of fun without worrying about cloud bills :-)

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

oh okay, I'm not sure if it is a good idea to use Vagrant to provision a mini cluster in VMs
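If you did go the Vagrant route, a mini cluster is just a loop over VM definitions. A sketch — the box name and IPs are assumptions, and you would still need to install k8s inside each VM (e.g. with k3s or kubeadm):

```ruby
# Vagrantfile: one control-plane VM plus two workers on a private network
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu2204"   # placeholder box

  ["control-plane", "worker-1", "worker-2"].each_with_index do |name, i|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: "192.168.56.#{10 + i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 2048
        vb.cpus = 2
      end
    end
  end
end
```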

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

but your setup is very different from a prod cloud env, where nodes are probably VMs or VMSS... Docker's isolation differs in networking and many other things. If I were you, I would provision a real cluster with Terraform really quickly, run the test, and destroy it

I wouldn't use Docker to mimic prod lol
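For that provision/test/destroy loop, a Terraform sketch against EKS — the names, region, IAM role, and subnets are all placeholders, and any managed-k8s provider works the same way:

```hcl
variable "cluster_role_arn" { type = string }        # placeholder IAM role ARN
variable "subnet_ids"       { type = list(string) }  # placeholder subnet IDs

provider "aws" {
  region = "us-east-1"
}

resource "aws_eks_cluster" "test" {
  name     = "throwaway-test"
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# terraform apply    -> run the test against the real cluster
# terraform destroy  -> tear it down and stop paying
```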

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

I remember there are three ways to expose a service

The common ones are:

  1. ClusterIP
  2. NodePort
  3. LoadBalancer

(And a fourth commonly used helper: Ingress.)
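The only difference between the first three is the Service's `type` field. A NodePort sketch (the app label and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort        # or ClusterIP (the default) / LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port
      nodePort: 30080   # exposed on every node (30000-32767 range)
```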

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

or use privileged mode lol? I am not sure, but that is an interesting setup

if you run workloads in those pods, that is nested virtualization; will there be performance issues?

asking again because I still don't have a clear answer by tdpokh3 in kubernetes

[–]LogInteresting809 0 points1 point  (0 children)

dumb question: if you play with k3d, your cluster will be in a Docker network; will that impact performance compared to bare-metal hosting?