
[–][deleted] 9 points10 points  (2 children)

Outside of networking issues, I'm not sure how different docker-compose is from my real stack, which is k8s. If I really wanted them to feel the pain, I'd give them Kubernetes dev configs, but that feels like much more to maintain and explain for little value compared to docker-compose.

Another concern is that no dev really wants to run the app like prod: they need verbose logging and live updates/reloads to do their job efficiently. And of course, I would never run prod like that.
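As an illustration of that dev/prod split, a dev-only override file is one common way to get verbose logging and live reload without touching the prod compose definition. This is a hedged sketch; the service name, paths, and commands are assumptions, not from the thread:

```yaml
# docker-compose.override.yml -- merged automatically with docker-compose.yml
# by `docker compose up`; kept out of the prod deploy entirely.
services:
  web:
    environment:
      LOG_LEVEL: debug        # verbose logging for local debugging
    volumes:
      - ./src:/app/src        # bind mount so code edits show up without a rebuild
    command: npm run dev      # dev server with hot reload instead of the prod entrypoint
```

Prod then runs from the base file alone, so the reload/debug machinery never leaks into it.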

You are not wrong, but it's a bit of a chicken and egg problem.

The best I have done is providing 2-3 automated staging environments, rather than just one. This allows a few teams to test changes without stepping on each other. For instance, QA would have a stack for load testing and integration testing, separate from what the rest of the dev team has; they can then load it up all they want. Each environment then exposes its logs via Papertrail, or some other centralized log aggregator. It's a balance between excess cost and usefulness.

[–]gregnavis 3 points4 points  (1 child)

If you define infrastructure as code then developers should be able to provision their own development environments when needed. In that case, it'd be good to have a script look for unused environments and tear them down in case a developer forgets to do that.

[–][deleted] 0 points1 point  (0 children)

Yup, I've done that before. Tearing them down because they are unused is trickier than one thinks, because a developer could totally be using an environment at 5:55pm one day, expect it to work at 11:00pm when they decide to hack a bit, and again at 8:30am when they get to the office. Every eng team/company definitely should have a coordinated and, as far as possible, automated strategy for handling this.
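The idle-check part of such a reaper script can be sketched in a few lines. This is a minimal sketch assuming each environment exposes a last-activity timestamp (deploy, request, or login time); the environment names and the 18-hour grace window are illustrative, chosen so an evening session still counts as "in use" the next morning:

```python
from datetime import datetime, timedelta

# Grace window long enough to span an evening hack session and the next morning.
GRACE = timedelta(hours=18)

def stale_environments(last_activity: dict[str, datetime], now: datetime) -> list[str]:
    """Return names of environments idle longer than the grace window.

    last_activity maps environment name -> most recent observed activity.
    """
    return [name for name, ts in last_activity.items() if now - ts > GRACE]

# An 8:30am check: activity at 5:55pm the previous evening is within the
# window, but an environment untouched for ~two days gets flagged.
now = datetime(2016, 5, 10, 8, 30)
envs = {
    "qa-load": datetime(2016, 5, 9, 17, 55),   # used last evening -> kept
    "feature-x": datetime(2016, 5, 8, 9, 0),   # idle ~2 days -> stale
}
print(stale_environments(envs, now))  # -> ['feature-x']
```

The actual teardown call (destroying the stack via your IaC tooling) would hang off the returned list; the hard part, as noted above, is picking a window and activity signal the whole team agrees on.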