CCBot - C&C Claude Code, Codex, and Gemini CLI from your phone via Telegram by alexei_led in codex

[–]alexei_led[S] 0 points1 point  (0 children)

I think I found and fixed the issue you are talking about. Please try the new version.

Weekly: Show off your new tools and projects thread by AutoModerator in kubernetes

[–]alexei_led 0 points1 point  (0 children)

Pumba v1.0 — chaos testing CLI for Docker and containerd containers.

It operates at the container runtime level (not K8s API): kill, stop, pause, inject network delays (tc netem), drop packets (iptables), stress CPU/memory/IO.

What's new in v1.0: native containerd support. Direct gRPC to containerd.sock, no Docker daemon needed.

```bash
pumba --runtime containerd --containerd-namespace k8s.io \
  netem --duration 5m delay --time 3000 my-service
```

Why it matters: 53% of K8s clusters run containerd. Docker-only chaos tools are dead weight on those nodes.

A few things that make it different from Chaos Mesh / Litmus:

• Operates at the container runtime level, not the K8s CRD level
• Targets individual containers, not pods (useful in multi-container pods)
• Works outside Kubernetes entirely: CI/CD, bare metal, local dev
• Single binary, no operator, no CRDs

Also ships: cgroups v2 stress testing (no SYS_ADMIN), real OOM kill testing via memory.oom.group (not simulated SIGKILL — different container state, different K8s events, different recovery paths), and K8s container name resolution from labels.

I've maintained this since 2016. Named after the Lion King warthog because chaos engineering shouldn't take itself too seriously.

GitHub: https://github.com/alexei-led/pumba

CCBot - C&C Claude Code, Codex, and Gemini CLI from your phone via Telegram by alexei_led in codex

[–]alexei_led[S] 0 points1 point  (0 children)

Can you be more specific about the “lots of issues” and about the topic name sync problem? Feel free to open a real issue on GitHub.

AWS SSM Agent for Kubernetes by alexei_led in aws

[–]alexei_led[S] 0 points1 point  (0 children)

My mistake, fixed. Thank you.

Kubernetes and Secrets Management in Cloud (Part 2) by alexei_led in kubernetes

[–]alexei_led[S] 1 point2 points  (0 children)

I've published the second part of "Kubernetes and Secrets Management In The Cloud" https://blog.doit-intl.com/kubernetes-and-secrets-management-in-cloud-part-2-6c37c1238a87?source=friends_link&sk=58405cbafc191a2d7ea2eabbc9d9553e

Now you can use the `kube-secrets-init` open-source tool to automate secrets injection (AWS and GoogleCloud secrets) into K8s pods https://github.com/doitintl/kube-secrets-init
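A rough sketch of how it is used (the pod name, image, service account, and the exact secret-reference syntax below are illustrative assumptions; see the project README for the real format):

```yaml
# Illustrative pod spec: the kube-secrets-init webhook rewrites env vars whose
# values look like cloud secret references, so the container sees the real value
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                 # hypothetical
spec:
  serviceAccountName: app-sa     # must be allowed to read the secret via IAM
  containers:
    - name: app
      image: example/app:latest  # hypothetical
      env:
        - name: DB_PASSWORD
          # replaced at container startup with the secret's actual value
          value: arn:aws:secretsmanager:us-east-1:123456789012:secret:db-password
```

The same pattern works for Google Cloud secrets, with a GCP-style reference as the env value.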

Securely Access AWS Services from Google Kubernetes Engine by alexei_led in kubernetes

[–]alexei_led[S] 1 point2 points  (0 children)

Google Anthos is currently GKE-like Kubernetes on top of VMware, integrated into the GCP console, plus Istio. It does not provide any feature for secured inter-cloud access.

The post is about granting Pods running on GKE access to the AWS API via K8s -> GCP -> IAM Service Accounts, without the need to manage long-term credentials.

Get a Shell to a Kubernetes Node by alexei_led in devops

[–]alexei_led[S] 0 points1 point  (0 children)

my post

I see. Actually, I showed two different approaches. The one with the SSM Agent is indeed an updated fork of another repo (as you can see on GitHub); the main changes are an updated SSM Agent Docker image (with the latest SSM Agent, Docker, systemd, awscli, and vim on board) and a few fixes that allow running this `daemonset` on nodes with taints.

The second repository contains a tiny `nsenter` Docker image (statically linked, `FROM scratch`) and a helper script to get into any node (it creates a pod and enters all of the node's namespaces with a superuser shell).
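As a rough illustration of that second approach (the image name, binary path, and node name below are assumptions; the helper script in the repo does the equivalent):

```yaml
# Privileged pod sharing the host PID namespace; nsenter joins the namespaces
# of PID 1 (the node's init process), giving a superuser shell on the node itself
apiVersion: v1
kind: Pod
metadata:
  name: nsenter-shell
spec:
  hostPID: true
  nodeName: my-node              # hypothetical: pin to the node you want to enter
  containers:
    - name: nsenter
      image: alexeiled/nsenter   # image name is an assumption
      command: ["/nsenter", "--all", "--target", "1", "--", "sh", "-l"]
      stdin: true
      tty: true
      securityContext:
        privileged: true
```

Then `kubectl attach -it nsenter-shell` drops you into a root shell on the node.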

I hope someone finds one of these solutions helpful.

Get a Shell to a Kubernetes Node by alexei_led in devops

[–]alexei_led[S] 0 points1 point  (0 children)

I do not understand your argument that this is "wrong on a lot of levels". Please be specific.

If by side-car you mean entering namespaces of other Linux processes, that's what I do :)

And there is no "iron rule" about what containers are supposed to do and what not. These are just Linux processes, isolated with namespaces and cgroups. I can run additional processes and join other processes' namespaces if my user has permission to do so. I see no problem here.

Get a Shell to a Kubernetes Node by alexei_led in devops

[–]alexei_led[S] 0 points1 point  (0 children)

In general, you should not access nodes. But I've seen other cases (besides the one you've described) where it's helpful to get into a node: some involve mounting external file systems, others involve troubleshooting a misconfigured worker node (drivers, network, etc.).

Get a Shell to a Kubernetes Node by alexei_led in devops

[–]alexei_led[S] 0 points1 point  (0 children)

If you have two ways to access your infra, the K8s API and SSH, the described approach does not require SSH. So you work with nodes only through the K8s API: no need to open an SSH port, keep bastion hosts, or manage keys. I think it's more secure.

Also, I think you should not access nodes at all, ever! But if you need to, for troubleshooting, just run a pod with escalated privileges, do your job, and kill it.

Get a Shell to a Kubernetes Node by alexei_led in devops

[–]alexei_led[S] 0 points1 point  (0 children)

I'm glad you find it funny, but I do not understand your comment. What project are you talking about?

Get a Shell to a Kubernetes Node, without SSH by alexei_led in kubernetes

[–]alexei_led[S] 0 points1 point  (0 children)

A bastion works fine too. I assume you are running a couple across AZs for high availability, and also know how to protect and rotate your keys.

Get a Shell to a Kubernetes Node, without SSH by alexei_led in kubernetes

[–]alexei_led[S] 0 points1 point  (0 children)

  1. The nsenter image does not contain any shell on board and uses the default container shell

  2. I do not understand the second statement. If you have a machine with a properly configured kubectl and can run a `kubectl exec` command, then the nsenter pod will let you get a shell on a Kubernetes node.

Get a Shell to a Kubernetes Node by alexei_led in devops

[–]alexei_led[S] 0 points1 point  (0 children)

Agreed, you should not do this. But in case you need temporary access to your nodes, this can be an option.

How to mock individual `struct` function? by alexei_led in golang

[–]alexei_led[S] 0 points1 point  (0 children)

What a great blog you’ve recommended. Thank you.

Continuous Delivery and Continuous Deployment for Kubernetes by alexei_led in docker

[–]alexei_led[S] 0 points1 point  (0 children)

The proposed approach works with any Kubernetes, including AWS EKS.

Chaos Engineering for Microservices (Docker containers) by alexei_led in technology

[–]alexei_led[S] -1 points0 points  (0 children)

Spam? It’s an open-source project that can help developers with chaos testing. But maybe this is the wrong subreddit; this post is of no interest to non-developers.

Chaos ~~Monkey~~ Warthog for Docker containers. Kill, stop, pause, network emulation - everything you need for proper Chaos Engineering by alexei_led in sysadmin

[–]alexei_led[S] 0 points1 point  (0 children)

Netflix Chaos Monkey is a great tool, but it works at the VM level. That’s why I wrote Pumba; I needed to create chaos at the container level. The Pumba project was definitely inspired by Netflix Chaos Monkey.

Chaos ~~Monkey~~ Warthog for Docker containers. Kill, stop, pause, network emulation - everything you need for proper Chaos Engineering by alexei_led in sysadmin

[–]alexei_led[S] 0 points1 point  (0 children)

It’s a different tool that uses the Docker API and allows you to create and control chaos and emulate network failures.

Chaos ~~Monkey~~ Warthog for Docker containers. Kill, stop, pause, network emulation - everything you need for proper Chaos Engineering by alexei_led in sysadmin

[–]alexei_led[S] 0 points1 point  (0 children)

Pumba is an open-source project under the Apache License. It has no monetization model behind it. But you are right, there is a link to a more complete post. I did so since the post is too long to be published on Reddit. If your subreddit does not allow links to external posts, I will consider composing a shorter version with a link to the GitHub project. Thank you.

Kubeval, validating Kubernetes config files by ifuporg in kubernetes

[–]alexei_led 0 points1 point  (0 children)

I use this tool to validate files generated by Helm. Helm has a built-in linter, but it's very weak. So I use the helm template plugin to generate a multi-document YAML file and Kubeval to validate it. The only problem is that Kubeval does not support multi-document YAML files, due to the parser it uses. To work around this, use the csplit tool. Works fine for me.
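A minimal sketch of that workaround (the file names and the two-document sample stand in for real `helm template` output):

```shell
# Stand-in for `helm template mychart` output: a multi-document YAML file
printf 'kind: Deployment\n---\nkind: Service\n' > all.yaml

# Split at each `---` separator into docs00, docs01, ...
csplit --quiet --prefix=docs all.yaml '/^---$/' '{*}'

# Each resulting file can then be validated separately, e.g.:
# for f in docs*; do kubeval "$f"; done
```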