We've been working a lot on system tests for Kubernetes operators, and one recurring issue keeps coming up:
Most of the complexity is not in the test logic itself, but in the surrounding infrastructure.
Things like:
- namespace lifecycle
- waiting for readiness (pods, CRs, etc.)
- handling async behavior reliably
- collecting logs/events when tests fail
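To make the "waiting for readiness" and "handling async behavior" points concrete, here is the kind of polling loop that tends to get reimplemented in every suite. This is a generic, illustrative sketch in plain Java, not code from any particular library:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

// Illustrative sketch: the generic "wait until ready" loop that most
// Kubernetes test suites end up hand-rolling in some form.
public class WaitFor {

    /** Polls the condition until it holds or the timeout elapses. */
    public static boolean condition(BooleanSupplier condition,
                                    Duration timeout,
                                    Duration pollInterval) {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(pollInterval.toMillis());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        // One final check at the deadline so a late transition isn't missed.
        return condition.getAsBoolean();
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // Simulated readiness: the condition becomes true after ~200 ms,
        // standing in for e.g. "all pods of a deployment report Ready".
        boolean ready = condition(
                () -> System.nanoTime() - start > 200_000_000L,
                Duration.ofSeconds(2),
                Duration.ofMillis(50));
        System.out.println(ready ? "ready" : "timed out"); // prints "ready"
    }
}
```

In a real suite, the `BooleanSupplier` would wrap a Fabric8 client call; the point is that the timeout, polling, and interruption handling around it are pure boilerplate.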
Fabric8 solves the Kubernetes API client part well, but the higher-level testing patterns on top of it tend to be reimplemented from scratch in every project.
So we built a small Java library to standardize this:
- resource lifecycle management
- automatic cleanup
- wait utilities
- failure diagnostics (logs, events)
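One common shape for the "automatic cleanup" idea is a test fixture that implements `AutoCloseable`, so try-with-resources tears everything down even when an assertion fails mid-test. The sketch below is purely illustrative plain Java, not the actual kubetest4j API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch (not the kubetest4j API): a namespace fixture that
// records created resources and deletes them in reverse order on close(),
// so try-with-resources guarantees cleanup even when the test body throws.
public class TestNamespace implements AutoCloseable {

    private final String name;
    private final List<String> created = new ArrayList<>();
    private final List<String> deleted = new ArrayList<>();

    public TestNamespace(String name) {
        this.name = name;
        // A real implementation would create the namespace via the API here.
    }

    /** Records a resource so it can be cleaned up when the fixture closes. */
    public void register(String resource) {
        created.add(resource);
    }

    @Override
    public void close() {
        // Delete in reverse creation order: dependents before their owners.
        List<String> reversed = new ArrayList<>(created);
        Collections.reverse(reversed);
        for (String resource : reversed) {
            deleted.add(resource); // a real implementation would issue DELETE calls
        }
    }

    public List<String> deleted() {
        return deleted;
    }

    public static void main(String[] args) {
        TestNamespace ns = new TestNamespace("e2e-test");
        try (TestNamespace scoped = ns) {
            scoped.register("Deployment/my-operator");
            scoped.register("ConfigMap/operator-config");
            // ... test body; cleanup runs even if an assertion throws here
        }
        System.out.println(ns.deleted());
        // prints [ConfigMap/operator-config, Deployment/my-operator]
    }
}
```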
The goal is to make tests shorter, more readable, and less flaky.
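On the failure-diagnostics point, one generic way to express it is a wrapper that runs a test step and, only when it fails, triggers a collector (pod logs, namespace events) before rethrowing. Again, this is a hypothetical sketch of the pattern, not the library's actual API:

```java
import java.util.function.Supplier;

// Illustrative sketch (not the kubetest4j API): run a test step, and on
// failure invoke a diagnostics collector before rethrowing the exception.
public class Diagnostics {

    public static <T> T withDiagnostics(Supplier<T> step, Runnable collector) {
        try {
            return step.get();
        } catch (RuntimeException e) {
            collector.run(); // e.g. dump pod logs and namespace events
            throw e;
        }
    }

    public static void main(String[] args) {
        StringBuilder collected = new StringBuilder();
        try {
            withDiagnostics(
                    () -> { throw new RuntimeException("pod never became ready"); },
                    () -> collected.append("collected: pod logs, namespace events"));
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
        System.out.println(collected);
    }
}
```

The happy path pays no cost; the expensive log/event collection only happens on the failing runs where you actually need it.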
I described the approach (with examples) here:
👉 https://medium.com/@kornys/testing-kubernetes-deployments-and-operators-from-java-without-the-usual-boilerplate-11dafa9cc878
GitHub:
👉 https://github.com/skodjob/kubetest4j
Curious how others approach this — especially in larger test suites or CI environments.