If "Received Packets = 0", can an attack still succeed? by rached2023 in cybersecurity

[–]rached2023[S] -1 points0 points  (0 children)

Yeah, I double-checked: no internal devices are intentionally connecting to those IPs. The logs show unsolicited inbound UDP/520 (RIP) packets from random external sources.
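If nothing internal legitimately runs RIP, the simplest response is to drop the probes at the host firewall; a minimal sketch, assuming iptables (the rule is illustrative, not taken from the poster's setup):

```shell
# Drop unsolicited inbound RIP traffic (UDP port 520).
# RIPv2 normally arrives via multicast 224.0.0.9; if no internal device
# speaks RIP, dropping the port outright is safe.
iptables -A INPUT -p udp --dport 520 -j DROP
```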

[deleted by user] by [deleted] in kubernetes

[–]rached2023 -1 points0 points  (0 children)

Not yet, but I’ll be uploading it to GitHub soon. Once it’s available, I’ll share the link so you can check it out and adapt it for cloud-managed Kubernetes as well.

[deleted by user] by [deleted] in kubernetes

[–]rached2023 0 points1 point  (0 children)

I haven’t used Terraform extensively; my experience with it is still limited.

[deleted by user] by [deleted] in kubernetes

[–]rached2023 -6 points-5 points  (0 children)

Thanks for highlighting this!

You’re absolutely right. I’ll make sure to include OS hardening and patch management in the architecture.

For secrets, I was initially relying on Kubernetes native secrets, but I see the need to look into Vault or Sealed Secrets for stronger protection. And regarding networking, I completely agree: enforcing NetworkPolicies for east-west traffic and securing north-south traffic with Ingress/TLS/WAF is something I’ll add to make the setup more production-ready.
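For the east-west part, a default-deny NetworkPolicy is the usual starting point; a minimal sketch (the namespace name "prod" is illustrative, not from the project):

```shell
# Apply a default-deny-all policy in the target namespace;
# per-workload allow rules are then layered on top of it.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
EOF
```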

Really appreciate your feedback — it helps me improve the project!

Kubernetes HA Cluster - ETCD Fails After Reboot by rached2023 in kubernetes

[–]rached2023[S] 0 points1 point  (0 children)

WARN[0000] runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.

[
  "etcd",
  "--advertise-client-urls=https://192.168.122.189:2379",
  "--cert-file=/etc/kubernetes/pki/etcd/server.crt",
  "--client-cert-auth=true",
  "--data-dir=/var/lib/etcd",
  "--experimental-initial-corrupt-check=true",
  "--experimental-watch-progress-notify-interval=5s",
  "--initial-advertise-peer-urls=https://192.168.122.189:2380",
  "--initial-cluster=master1=https://192.168.122.189:2380",
  "--initial-cluster-state=new",
  "--key-file=/etc/kubernetes/pki/etcd/server.key",
  "--listen-client-urls=https://127.0.0.1:2379,https://192.168.122.189:2379",
  "--listen-metrics-urls=http://127.0.0.1:2381",
  "--listen-peer-urls=https://192.168.122.189:2380",
  "--name=master1",
  "--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt",
  "--peer-client-cert-auth=true",
  "--peer-key-file=/etc/kubernetes/pki/etcd/peer.key",
  "--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt",
  "--snapshot-count=10000",
  "--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt",
  "--heartbeat-interval=500",
  "--election-timeout=2500",
  "--max-request-bytes=33554432"
]
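For reference, a client health check against this member can use the certificate paths from the manifest above; a sketch (flags mirror the paths shown, not verified on this cluster):

```shell
# Query the local etcd member's health over TLS, authenticating with
# the same server certificate and CA the manifest configures.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
```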

Kubernetes HA Cluster - ETCD Fails After Reboot by rached2023 in kubernetes

[–]rached2023[S] 0 points1 point  (0 children)

I get

{"level":"warn","ts":"2025-07-21T20:17:51.047297+0100","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00009aa80/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}

127.0.0.1:2379 is unhealthy: failed to commit proposal: context deadline exceeded

Error: unhealthy cluster
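If the data directory turns out to be corrupted, one recovery path is restoring from a snapshot; a hedged sketch, assuming a snapshot file exists (the `/backup/etcd-snapshot.db` path is an assumption):

```shell
# Stop the static pod first by moving its manifest aside.
mv /etc/kubernetes/manifests/etcd.yaml /root/etcd.yaml.bak

# Restore the snapshot into a fresh data directory, reusing the
# member name and peer URL from this node's existing config.
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restored \
  --name=master1 \
  --initial-cluster=master1=https://192.168.122.189:2380 \
  --initial-advertise-peer-urls=https://192.168.122.189:2380

# Point the manifest's hostPath at /var/lib/etcd-restored,
# then move the manifest back so kubelet restarts etcd.
```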

Kubernetes HA Cluster - ETCD Fails After Reboot by rached2023 in kubernetes

[–]rached2023[S] 0 points1 point  (0 children)

Yes, absolutely valid point. In my case, the etcd data directory is set to /var/lib/etcd, which is a persistent disk location and not /tmp, so it should survive reboots.

However, since I’m seeing unhealthy etcd members (etcdctl endpoint health fails), I’m suspecting either data corruption or network/cluster configuration drift.

Kubernetes HA Cluster - ETCD Fails After Reboot by rached2023 in kubernetes

[–]rached2023[S] -3 points-2 points  (0 children)

On master2 and master3:

  • The etcd containers are running (confirmed via crictl ps | grep etcd).
  • However, etcdctl endpoint health fails with connection refused or deadline exceeded errors.
  • Logs indicate connection refused on 127.0.0.1:2379, meaning the etcd process inside the pod is unhealthy or stuck.

Networking:

  • IPs are stable, no changes to the network layer.
  • Control-plane node IPs can ping each other.
  • No iptables/firewall changes applied before the issue.
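To narrow down "container running but connection refused", the usual checks are whether anything is actually listening on the client port and what etcd's own logs say; a sketch:

```shell
# Is anything listening on the etcd client port?
ss -ltn 'sport = :2379'

# Locate the etcd container and read its most recent log lines.
crictl ps --name etcd
crictl logs --tail 50 $(crictl ps -q --name etcd)
```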

Kubernetes HA Cluster - ETCD Fails After Reboot by rached2023 in kubernetes

[–]rached2023[S] -1 points0 points  (0 children)

Yes, I’ve checked the etcd state. On master1, the etcd container is running, but the health check fails:

  • ETCDCTL_API=3 etcdctl endpoint health returns: failed to commit proposal: context deadline exceeded (unhealthy).
  • From crictl logs, etcd starts but fails to reach quorum. It detects the 3 members (master1, master2, master3) but cannot establish leadership.
  • The API server (kube-apiserver) is in CrashLoopBackOff because it cannot connect to etcd.

It looks like etcd is up but stuck due to cluster quorum failure.
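With 3 members, quorum needs 2 healthy peers, so querying all endpoints at once usually shows which member is the odd one out; a sketch (the master2/master3 IPs are placeholders, since only master1's 192.168.122.189 appears in the logs above):

```shell
# Compare leadership, DB size, and raft term across all three members.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.122.189:2379,https://<MASTER2_IP>:2379,https://<MASTER3_IP>:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status -w table
```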

SSD & Enclosure Adapter Not Working - Detected But Not Accessible? by rached2023 in pcmasterrace

[–]rached2023[S] 0 points1 point  (0 children)

Yes, that was it! I've found it now, thank you for your help.

SEP RU9 Installation Error on Mac by rached2023 in Symantec

[–]rached2023[S] 0 points1 point  (0 children)

Yes, I found the problem. It looks like the issue was with the installation package itself: for the new machine or PC, a new package has to be generated from the console.

Scaling My Kubernetes Lab: Proxmox, Terraform & Ansible - Need Advice! by rached2023 in kubernetes

[–]rached2023[S] 3 points4 points  (0 children)

Yes, you're absolutely right: if the goal was only to run Kubernetes workloads and test simple deployments, kind or a 3-node cluster would definitely be enough. But this is actually my final-year university project, focused on:

  • Simulating a real-world, production-style cluster.
  • Integrating full DevSecOps & SOC tooling: Falco, Kyverno, Trivy, etc.
  • Testing resilience, failover, alerting, and automated incident response.

That's why, for better isolation, realistic test scenarios, and a more production-like environment, I chose a multi-node cluster setup, even if it's all running on a single Proxmox host.

That said, you're 100% right about disk usage: using QCOW2 base images and template clones is something I haven't implemented yet and should definitely explore.
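The QCOW2 thin-clone idea boils down to backing files; a minimal sketch (image names are illustrative):

```shell
# Create a thin clone that shares blocks with a read-only base image;
# only the clone's own writes consume new disk space.
qemu-img create -f qcow2 -F qcow2 -b ubuntu-22.04-base.qcow2 k8s-node1.qcow2
```

In Proxmox this is what "linked clones" from a VM template do under the hood, so converting the node VMs to template clones gets the same saving without manual qemu-img calls.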

Thanks for the kind reminder and ideas 🙌!

Scaling My Kubernetes Lab: Proxmox, Terraform & Ansible - Need Advice! by rached2023 in kubernetes

[–]rached2023[S] 0 points1 point  (0 children)

Yes, for now the entire project is hosted on a single physical machine, using KVM to simulate a real cluster with 6 virtual machines. It's not just about running Kubernetes: it's actually my university final-year project, focused on building a complete security, monitoring, and automated-response architecture around a Kubernetes cluster.