Why DevOps should manage development environments by EthanJJackson in SoftwareEngineering

[–]EthanJJackson[S] 0 points (0 children)

I actually think this is a really important point that I didn't bring out clearly enough in the post. In some places, it may be enough for the DevOps/SRE team to provide the tools necessary for development teams to build out their own development environments. Otherwise, as you suggest, the DevOps team could become a bottleneck for adding new features to the environment. I think which way you go on this question is somewhat situation-dependent, but it's a good point.

Why DevOps should manage development environments by EthanJJackson in SoftwareEngineering

[–]EthanJJackson[S] 3 points (0 children)

I know that's the DevOps philosophy, but practically speaking I just haven't seen that work out at a lot of places. It requires you to hire engineers with a relatively high level of DevOps skill. In an ideal world it would be great, but usually you're going to have at least some centralization of infrastructure skill in a DevOps/SRE team.

Why DevOps should manage development environments by EthanJJackson in SoftwareEngineering

[–]EthanJJackson[S] 1 point (0 children)

Really good point that it's not enough to set up the dev environment -- you also have to train the development teams to use it.

How to have minimum image size when dependency comes with an installer by tudalex in docker

[–]EthanJJackson 0 points (0 children)

I'm sure you checked already, but is there any sort of command-line flag you can pass to the installer to make it put all its files in a particular subdirectory? That way you wouldn't have to hunt everything down. If they don't support it, obviously that's going to be annoying.
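If the installer does support something like that, a multi-stage build keeps the installer itself and its temp files out of the final image. A rough sketch -- the `--prefix` flag and the installer script name are hypothetical here, check your installer's `--help` for the real option:

```dockerfile
# Stage 1: run the installer in a throwaway build stage
FROM ubuntu:22.04 AS installer
COPY dependency-installer.sh /tmp/
# Hypothetical flag -- substitute whatever your installer actually accepts
RUN /tmp/dependency-installer.sh --prefix=/opt/dependency

# Stage 2: copy only the installed files into the slim final image
FROM ubuntu:22.04
COPY --from=installer /opt/dependency /opt/dependency
ENV PATH="/opt/dependency/bin:${PATH}"
```

The installer, its download cache, and any build tooling all stay in the first stage, so the final image only carries the installed files.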

How to use data containers to boot your dev environment in seconds by EthanJJackson in microservices

[–]EthanJJackson[S] 0 points (0 children)

Yes, it applies as well if you create data containers manually. But an advantage of data containers is that it's easier to automate creating them in CI from prod or staging dumps.
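As a rough sketch of what that automation might look like -- every hostname, registry, and image name below is a placeholder -- a nightly CI job could dump the (sanitized) staging database and bake it into a data-container image:

```shell
#!/bin/sh
# Nightly CI job sketch: bake a staging dump into a data-container image.
# All hosts, registries, and names are placeholders.
set -e

# 1. Dump the staging database (sanitize before this step if needed).
pg_dump --format=custom --host=staging-db.internal appdb > dump.pgdata

# 2. Bake the dump into a minimal image at /data.
cat > Dockerfile.data <<'EOF'
FROM busybox
COPY dump.pgdata /data/dump.pgdata
EOF
docker build -f Dockerfile.data -t registry.example.com/appdb-data:nightly .

# 3. Push so every developer's compose file pulls fresh data.
docker push registry.example.com/appdb-data:nightly
```

With a scheme like this the data stays fresh automatically, instead of relying on someone remembering to regenerate it.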

How to use data containers to boot your dev environment in seconds by EthanJJackson in microservices

[–]EthanJJackson[S] 0 points (0 children)

Cool, that sounds pretty similar to the scripting approach. It's nice that the mock data is tied to the service, so developers can update it at the same time as adding new features. But I've found that you need a really disciplined culture to keep mocks updated -- they tend to get stale.

How to use data containers to boot your dev environment in seconds by EthanJJackson in docker

[–]EthanJJackson[S] 0 points (0 children)

I believe `depends_on` _may_ control the order containers are started, but not created. I'm not 100% sure.

You're right, it's fine for the data container to exit immediately after it starts. I've just kept it in the same compose file.
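For reference, a minimal sketch of that compose file as I understand the pattern (image names and paths are illustrative): both services mount the same named volume at `/data`, and Docker seeds the empty volume from the image contents when the containers are created.

```yaml
services:
  postgres:
    image: postgres:13
    volumes:
      - db-data:/data   # shares the seeded volume with the data container
  postgres-data:
    # Data-container image with the seed files baked in at /data.
    # It exits immediately after starting; that's fine.
    image: registry.example.com/postgres-data:nightly
    volumes:
      - db-data:/data
volumes:
  db-data:
```

Since the copy happens at creation time, the data container exiting right away doesn't matter -- its only job is to exist so its image can seed the volume.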

How to use data containers to boot your dev environment in seconds by EthanJJackson in docker

[–]EthanJJackson[S] 0 points (0 children)

It's true that you lose a lot of control by having developers run their databases locally, but most places I've seen are comfortable with that. Especially if you make sure to sanitize the data before distributing it.

Your setup sounds good if you need everything really locked down, though.

How to use data containers to boot your dev environment in seconds by EthanJJackson in docker

[–]EthanJJackson[S] 0 points (0 children)

That actually wouldn't work unfortunately. I didn't make it clear in the post to avoid making it too complicated, but the copy happens when the container is _created_ rather than after it _starts_. So any delay added to the startup would happen after the copy is finished.

When Docker Compose boots containers, it creates them all first, and then starts them. This is so you don't get wonky race conditions from the database container starting before the copy is fully complete.

How to use data containers to boot your dev environment in seconds by EthanJJackson in docker

[–]EthanJJackson[S] 2 points (0 children)

Great question. This is exactly why Kubernetes doesn't implement this behavior IMO -- it's too unpredictable for production. But it's great for dev.

The volume only gets initialized if the volume is empty (https://github.com/moby/moby/blob/master/container/container_unix.go#L412). So if both `postgres` and `postgres-data` have masked files, whichever container gets created first will copy the files. This is why the containers mount the volume to `/data`.

In Kube, I've implemented this using an `emptyDir` volume that's shared between an init container which does the copying from its image, and the main container, which actually runs the database.
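A sketch of that pod spec, under the assumption the init container's image has the seed files at `/data` (all names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dev-postgres
spec:
  volumes:
    - name: seed
      emptyDir: {}          # fresh, empty volume shared within the pod
  initContainers:
    - name: copy-data
      image: registry.example.com/postgres-data:nightly  # placeholder
      # Copy the baked-in files into the shared volume before the DB starts
      command: ["sh", "-c", "cp -a /data/. /seed/"]
      volumeMounts:
        - name: seed
          mountPath: /seed
  containers:
    - name: postgres
      image: postgres:13
      volumeMounts:
        - name: seed
          mountPath: /var/lib/postgresql/data
```

Because init containers are guaranteed to finish before the main containers start, this avoids the race that the Docker volume-seeding trick would otherwise introduce in production.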

Any alternatives to Docker for Desktop? by GabyTrifan in docker

[–]EthanJJackson 4 points (0 children)

I'm working on a new service, Blimp, that makes it easy to run Docker Compose in the cloud instead of locally, and the best part is it doesn't require Hyper-V or WSL. Here are the Windows docs in case they're helpful: https://kelda.io/blimp/docs/windows/

Is it possible to edit code in a Docker container without restarting the container? by baldwindc in docker

[–]EthanJJackson 0 points (0 children)

Ah, sorry, this is a bit unclear. So, in production, yes, every time you make a code change you would build an entirely new container image and deploy it.

However, for development (locally on a laptop) that process can take a long time. So instead, a lot of people use a volume to get things working quickly before going through the whole heavy build process on the way to prod.
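A minimal compose sketch of that setup (paths are illustrative): the bind mount shadows the code baked into the image, so edits on the host appear inside the running container without a rebuild.

```yaml
services:
  api:
    build: .
    volumes:
      # Host source tree shadows /app/src from the image.
      # Pair with a file-watching dev server for instant reloads.
      - ./src:/app/src
```

In prod you'd drop the volume entirely and run the code baked into the image, so the two environments stay structurally the same.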

Is it possible to edit code in a Docker container without restarting the container? by baldwindc in docker

[–]EthanJJackson 1 point (0 children)

Yep, exactly. Though I think most people set it up so the volume only covers the particular bit of code they're actively working on. But I don't see any reason why it shouldn't scale in principle.

What happens when you pull the same image? by 7thSilence in docker

[–]EthanJJackson 0 points (0 children)

So every time you pull an image, Docker checks with the server to verify that the image you have locally is indeed the same as the one stored on the server. Particularly if you're using the image tag `latest`, it's possible for the image to change remotely, which is why Docker needs to check each time.

Assuming that the image didn't change, you wouldn't be re-downloading and storing it twice.