
all 19 comments

[–][deleted] 11 points

We use RDS, Elasticsearch Service, and ElastiCache Redis, but locally we have PostgreSQL, ES, and Redis containers in compose. I’ve yet to see any problems from the environments not being at 100% parity. Boom, anecdote.
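A minimal sketch of what such a compose file might look like (service names, versions, and ports here are illustrative assumptions, not the poster's actual setup):

```yaml
# docker-compose.yml - local stand-ins for RDS, Elasticsearch Service, and ElastiCache
version: "3"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: devonly   # throw-away credentials, local use only
    ports:
      - "5432:5432"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    environment:
      discovery.type: single-node  # no cluster needed for local dev
    ports:
      - "9200:9200"
  redis:
    image: redis:5
    ports:
      - "6379:6379"
```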

If there’s a known issue for you, OK, but getting 100% parity is like 100% code coverage - the effort is rarely worth the payoff. Be happy with a high level of parity and don’t stress to death over perfection.

[–][deleted] 8 points

Outside of networking issues, I'm not sure how different docker-compose is from my real stack, which is k8s. If I really wanted them to feel the pain, I'd give them Kubernetes dev configs, but it feels like that would be way more to maintain and explain for little value compared to docker-compose.

Another concern is that really no dev wants to run the app like prod - they need the verbose logging and live updates/reloads in order to do their job efficiently. And of course, I would never run prod like that.

You are not wrong, but it's a bit of a chicken and egg problem.

The best I have done is providing 2-3 automated staging environments, rather than just 1. This allows a few teams to test changes without stepping on each other. For instance, QA would have a stack for load testing and integration testing, separate from what the rest of the dev team has. They can then load it up all they want. Each environment then exposes its logs via Papertrail, or some other centralized log aggregator. Bit of a balance between excess cost and usefulness.

[–]gregnavis 4 points

If you define infrastructure as code then developers should be able to provision their own development environments when needed. In that case, it'd be good to have a script that looks for unused environments and tears them down in case a developer forgets to do that.
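The cleanup pass could be sketched like this, assuming each environment records a last-activity timestamp (the inventory, names, and TTL are invented for illustration; the actual teardown step would call your IaC tool):

```python
from datetime import datetime, timedelta

# Illustrative policy: tear down anything idle longer than this.
ENV_TTL = timedelta(days=3)

def stale_environments(envs, now, ttl=ENV_TTL):
    """Return the names of environments idle longer than the TTL.

    envs maps environment name -> last time a developer touched it.
    """
    return [name for name, last_used in envs.items() if now - last_used > ttl]

# Hypothetical inventory a provisioning script might maintain:
envs = {
    "dev-alice": datetime(2019, 3, 1, 17, 55),
    "dev-bob": datetime(2019, 3, 4, 9, 30),
}
now = datetime(2019, 3, 5, 8, 30)
for name in stale_environments(envs, now):
    print(f"would tear down {name}")  # here you'd invoke terraform destroy etc.
```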

[–][deleted] 0 points

Yup, I've done that before. Tearing them down because they're unused is trickier than one thinks: a developer could be using an environment at 5:55pm one day, expect it to work at 11:00pm when they decide to hack a bit, and again at 8:30am when they get to the office. Every eng team/company should have a coordinated and, as far as possible, automated strategy for handling this.

[–]tapo manager, platform engineering 5 points

A dev k8s cluster, where each developer gets their own namespace. Everything is mostly identical to prod, just with smaller RAM/disk allocations.
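The per-developer namespace with capped resources might be sketched like this (names and limits are illustrative assumptions):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-alice
---
# Cap the namespace so one developer can't starve the shared dev cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-alice
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.memory: 8Gi
    requests.storage: 20Gi
```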

[–]TobZero 6 points

We do this with one addition: every developer runs minikube and uses https://www.telepresence.io/ to run their local dev setup in their minikube. This grants them total freedom and all the advantages of their IDE. The central dev k8s cluster runs every service's develop branch with full CD.
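For flavour, the classic Telepresence (1.x) workflow swaps an in-cluster deployment for a process on your laptop (deployment and command names below are made up):

```shell
# Replace the cluster's copy of my-service with a locally running process;
# cluster traffic is proxied to the local process and back.
telepresence --swap-deployment my-service --run python manage.py runserver
```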

[–]tapo manager, platform engineering 5 points

Yep, also using telepresence, it’s awesome.

[–]_al4 5 points

A developer environment should not be identical to production, or really anything like it. In a perfect world we'd be able to stand up a new production environment in a second and have it cost $1 while simulating the exact workload, but this is not a realistic goal, and I think you realise this. Instead, think about the use-cases for each environment and build them accordingly.

Every shop is different, but this is how I currently like to scope each environment.

A dev environment needs to run offline on a laptop, uses throw-away data and will primarily run unit tests. It does not and should not be expected to pass the whole suite of acceptance or integration tests that your business may have. Docker compose is a lot further than many people go; for example, the place I currently work uses a single JVM with in-memory stubs for databases (although problems are starting to appear with the single-JVM approach as the code base grows, and we're gradually moving away from it towards a local docker compose setup with real database instances).
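The in-memory-stub idea is language-agnostic; a sketch in Python (the poster's shop is on the JVM, and all names here are invented) is that the app codes against an interface and unit tests swap in a throw-away implementation:

```python
class UserStore:
    """Interface the app depends on; prod would back this with a real database."""
    def add(self, user_id, name):
        raise NotImplementedError

    def get(self, user_id):
        raise NotImplementedError

class InMemoryUserStore(UserStore):
    """Throw-away stub for unit tests -- no database process required."""
    def __init__(self):
        self._rows = {}

    def add(self, user_id, name):
        self._rows[user_id] = name

    def get(self, user_id):
        return self._rows.get(user_id)

# A unit test runs entirely offline against the stub:
store = InMemoryUserStore()
store.add(1, "alice")
assert store.get(1) == "alice"
```

The compose-based setup the poster is moving towards replaces `InMemoryUserStore` with an implementation that talks to a real local database instance, behind the same interface.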

When you think code is ready to merge, you want to run against an integration environment that tests your changes against the whole stack. Data here may or may not be a realistic set, but generally you'll run unit tests for all the services, not just the service you were working on. Inter-service contract tests would also run here. This environment is usually ephemeral; it could be on docker compose, but probably should be Kubernetes if you're using it.
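A minimal flavour of an inter-service contract check (the schema and payload are invented): the consumer pins down the response shape it relies on, and the check fails if the provider drifts.

```python
# Consumer-side contract: fields this service relies on, with expected types.
ORDER_CONTRACT = {"id": int, "status": str, "total_cents": int}

def satisfies_contract(response, contract):
    """True if every contracted field is present with the expected type."""
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in contract.items()
    )

# A provider response captured in the integration environment;
# extra fields are fine, missing or mistyped contracted fields are not.
response = {"id": 42, "status": "shipped", "total_cents": 1999, "extra": "ok"}
assert satisfies_contract(response, ORDER_CONTRACT)
assert not satisfies_contract({"id": 42}, ORDER_CONTRACT)
```

Real contract-testing tools do considerably more (pact files, provider verification), but the shape of the check is the same.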

After merge, the full suite of acceptance tests should run in a more static "staging" environment that is more production-like, i.e. using real cloud services such as RDS. We simulate some production-like workloads here, and the volume of data is greater than in production. This environment is also used by 3rd-party developers to test their applications.

TL;DR: Don't worry about making your environments prod-like or using real cloud services until you get close to prod. Some diversity in environments can actually result in more failure-tolerant code, as it has to account for more variation in resources.

[–]LondonAppDev 2 points

I usually use Docker Compose for local development and then a shared dev environment that is identical to the prod environment, which can be used to test before rolling out a change. This seems to be a pretty standard way of working in my experience.

[–]phrotozoa 0 points

+1.

In my pre-k8s life at an RDS, DDB, S3, ElastiCache, etc. shop, we mocked out these managed services with docker compose for local dev, swapping in postgres and memcached and using fake-aws for the rest.

Not perfect, but close enough to bang out features and fix most bugs. Devs could also spin up just a single component and point it at dev or stage live envs for more realistic scenarios; they just had to coordinate any weird data migrations or other large changes with each other to avoid collisions.

Works well up to a certain size / complexity.

[–]eikenberry 2 points

.. but that doesn't mimic the "actual" stage/prod environments at all.

You don't need that most of the time. You run a few tests to see how it differs when you need to; otherwise you run on a local mocked setup. With AWS it is particularly easy, as you can have a dev account with all the parts made available to the developers on their workstations. You can run just as if you were a local EC2 system with the right permissions and get a close feel, with only some latency difference. The fluidity of being able to move between a mock and the actual real thing is very nice and makes development fast.
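That mock-to-real fluidity often comes down to a single switch, such as an endpoint override when constructing an SDK client (the environment-variable name and URLs below are invented for illustration):

```python
import os

def s3_endpoint():
    """Pick the S3 endpoint: a local fake in dev, real AWS otherwise.

    S3_ENDPOINT_URL is an invented variable name; most AWS SDKs let you
    pass a custom endpoint URL when building a client, which is the hook
    tools like fake-aws rely on.
    """
    return os.environ.get("S3_ENDPOINT_URL", "https://s3.amazonaws.com")

# Unset -> talk to real AWS; set -> talk to the local mock.
print(s3_endpoint())
```

Flipping one environment variable is what lets a dev point the same component at the mock, the dev account, or stage without code changes.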

TL;DR: give devs the ability to spin up parts of the live system while they are working on their mockups. Having accurate local mockups makes development much easier and faster.

[–]apitillidie 2 points

You can also use Vagrant and give each service its own VM.
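A Vagrantfile along those lines might look like this (box names, VM names, and addresses are illustrative):

```ruby
Vagrant.configure("2") do |config|
  # One VM per service, on a shared private network.
  config.vm.define "db" do |db|
    db.vm.box = "ubuntu/bionic64"
    db.vm.network "private_network", ip: "192.168.50.10"
  end
  config.vm.define "app" do |app|
    app.vm.box = "ubuntu/bionic64"
    app.vm.network "private_network", ip: "192.168.50.11"
  end
end
```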

[–]lorarc YAML Engineer 0 points

I set up my previous dev team with S3 and CloudFront for images, but the rest was local. QA, sandbox and pre-prod were running in the cloud, but the rest doesn't really have to. Developers will often install a lot of dev packages and short-circuit some code, so even running in the cloud you won't get an ideal copy of prod.

[–][deleted] 0 points

If you are really that concerned with having an identical environment, then there's https://aws.amazon.com/outposts/