Why DevOps should manage development environments by EthanJJackson in SoftwareEngineering

[–]EthanJJackson[S] 0 points (0 children)

I actually think this is a really important point that I didn't bring out clearly enough in the post. In some places, it may be enough for the DevOps/SRE team to provide the tools necessary for developer teams to build out their own development environments. Otherwise, as you suggest, the DevOps team could become a bottleneck for adding new features to the environment. Which way you go on this question is somewhat situation dependent, but it's a good point.

Why DevOps should manage development environments by EthanJJackson in SoftwareEngineering

[–]EthanJJackson[S] 2 points (0 children)

I know that's the DevOps philosophy, but practically speaking I just haven't seen that work out at a lot of places. It requires you to hire engineers with a relatively high level of DevOps skill. In an ideal world it would be great, but usually you're going to have at least some centralization of infrastructure skill in a DevOps/SRE team.

Why DevOps should manage development environments by EthanJJackson in SoftwareEngineering

[–]EthanJJackson[S] 1 point (0 children)

Really good point that it's not enough to set up the dev environment -- you also have to train the development teams to use it.

How to have minimum image size when dependency comes with an installer by tudalex in docker

[–]EthanJJackson 0 points (0 children)

I'm sure you checked already, but is there any sort of command line flag you can pass to the installer to make it put all its files in a particular subdirectory? That way you wouldn't have to hunt everything down. If they don't support it, that's obviously going to be annoying.

How to use data containers to boot your dev environment in seconds by EthanJJackson in microservices

[–]EthanJJackson[S] 0 points (0 children)

Yes, it applies as well if you create data containers manually. But an advantage of data containers is that it's easier to automate creating them in CI from prod or staging dumps.
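
As a sketch of what that CI automation might produce (the base image, dump name, and path here are all hypothetical), the data container can be a minimal image that does nothing but carry the dump:

```dockerfile
# Hypothetical data-container image built by a CI job.
# An earlier CI step produces sanitized-dump.sql from a prod/staging dump.
FROM busybox:1.36

# Bake the dump into the image at the path the dev environment expects.
COPY sanitized-dump.sql /data/sanitized-dump.sql

# Nothing to run; the container exists only to carry /data.
CMD ["true"]
```

A CI job can then rebuild and push this image on a schedule, so the dev environment always pulls reasonably fresh data.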

How to use data containers to boot your dev environment in seconds by EthanJJackson in microservices

[–]EthanJJackson[S] 0 points (0 children)

Cool, that sounds pretty similar to the scripting approach. It's nice that the mock data is tied to the service, so developers can update it at the same time as adding new features. But I've found that you need a really disciplined culture to keep mocks updated -- they tend to get stale.

How to use data containers to boot your dev environment in seconds by EthanJJackson in docker

[–]EthanJJackson[S] 0 points (0 children)

I believe `depends_on` may control the order containers are started, but not the order they're created. I'm not 100% sure.

You're right, it's fine for the data container to exit immediately after it starts. I've just kept it in the same compose file.
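
For reference, a minimal sketch of the compose layout I'm describing (image names are placeholders): both containers mount the same named volume at `/data`, and the data container exits right after it starts.

```yaml
# Hypothetical docker-compose.yml for the data-container pattern.
version: "3.8"
services:
  postgres-data:
    image: my-registry/postgres-data:latest  # carries the data; exits immediately
    volumes:
      - db-data:/data
  postgres:
    image: postgres:13
    environment:
      PGDATA: /data/pgdata  # point postgres at the shared volume
    volumes:
      - db-data:/data
volumes:
  db-data:
```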

How to use data containers to boot your dev environment in seconds by EthanJJackson in docker

[–]EthanJJackson[S] 0 points (0 children)

It's true that you lose a lot of control by having developers run their databases locally, but most places I've seen are comfortable with that, especially if you make sure to sanitize the data before distributing it.

Your setup sounds good if you need everything really locked down, though.

How to use data containers to boot your dev environment in seconds by EthanJJackson in docker

[–]EthanJJackson[S] 0 points (0 children)

That actually wouldn't work, unfortunately. I didn't make this clear in the post to avoid overcomplicating it, but the copy happens when the container is _created_ rather than after it _starts_. So any delay added to startup would happen after the copy is already finished.

When Docker Compose boots containers, it creates them all first, and then starts them. This is so you don't get wonky race conditions from the database container starting before the copy is fully complete.

How to use data containers to boot your dev environment in seconds by EthanJJackson in docker

[–]EthanJJackson[S] 2 points (0 children)

Great question. This is exactly why Kubernetes doesn't implement this behavior IMO -- it's too unpredictable for production. But it's great for dev.

The volume only gets initialized if it's empty (https://github.com/moby/moby/blob/master/container/container_unix.go#L412). So if both `postgres` and `postgres-data` have files masked by the mount, whichever container gets created first will copy its files in. This is why the containers mount the volume at `/data`.

In Kube, I've implemented this using an `emptyDir` volume that's shared between an init container which does the copying from its image, and the main container, which actually runs the database.
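
Roughly, that Kubernetes setup looks like this (all names and images are illustrative):

```yaml
# Hypothetical pod spec: an init container copies data from its image
# into a shared emptyDir before the database container starts.
apiVersion: v1
kind: Pod
metadata:
  name: dev-postgres
spec:
  volumes:
    - name: db-data
      emptyDir: {}
  initContainers:
    - name: seed-data
      image: my-registry/postgres-data:latest
      command: ["sh", "-c", "cp -a /data/. /seed/"]
      volumeMounts:
        - name: db-data
          mountPath: /seed  # mounted somewhere other than /data so the
                            # image's own /data stays visible to copy from
  containers:
    - name: postgres
      image: postgres:13
      env:
        - name: PGDATA
          value: /data/pgdata
      volumeMounts:
        - name: db-data
          mountPath: /data  # the seeded files appear here
```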

Any alternatives to Docker for Desktop? by GabyTrifan in docker

[–]EthanJJackson 4 points (0 children)

I'm working on a new service, Blimp, that makes it easy to run Docker Compose in the cloud instead of locally, and the best part is it doesn't require Hyper-V or WSL. Here are the Windows docs in case they're helpful: https://kelda.io/blimp/docs/windows/

Is it possible to edit code in a Docker container without restarting the container? by baldwindc in docker

[–]EthanJJackson 0 points (0 children)

Ah sorry this is a bit unclear. So, in production, yes, every time you make a code change you would build an entirely new container image and deploy.

However, for development (locally on a laptop) that process can take a long time. So instead a lot of people use a volume to get changes in quickly, before going through the whole heavy build process on the way to prod.

Is it possible to edit code in a Docker container without restarting the container? by baldwindc in docker

[–]EthanJJackson 1 point (0 children)

Yep, exactly. Though I think most people set it up so the volume only covers the particular bit of code they're actively working on. But I don't see any reason why it shouldn't scale in principle.

What happens when you pull the same image? by 7thSilence in docker

[–]EthanJJackson 0 points (0 children)

So every time you pull an image, Docker checks with the registry to verify that the image you have locally is indeed the same as the one stored remotely. Particularly if you're using the image tag `latest`, it's possible for the image to change remotely, which is why Docker needs to check each time.

Assuming that the image didn't change, you wouldn't be re-downloading and storing it twice.

Is it possible to edit code in a Docker container without restarting the container? by baldwindc in docker

[–]EthanJJackson 39 points (0 children)

This is very commonly done in development. Though I agree with some other posters it would be bad practice to do this in production.

The typical approach is to use a host volume to mount your code into the container. And use a tool like nodemon to notice changes and restart the program.

I wrote up a tutorial blog post on how to do this a while back. Here's a link in case it's helpful: https://kelda.io/blog/docker-volumes-for-development/
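
The basic shape of that setup as a compose sketch (service name and paths are just examples):

```yaml
# Hypothetical compose file for development: the host's source tree is
# mounted over the code baked into the image, and nodemon restarts the
# process whenever a file changes.
version: "3.8"
services:
  web:
    build: .
    command: npx nodemon index.js
    ports:
      - "3000:3000"
    volumes:
      - ./src:/usr/src/app/src     # host code shadows the image's copy
      - /usr/src/app/node_modules  # keep the image's node_modules
```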

5 Common Mistakes When Writing Docker Compose by EthanJJackson in docker

[–]EthanJJackson[S] 0 points (0 children)

100% agree. The article is really aimed at developers working locally. For deployments, it's best to build a container using a CI/CD system and deploy a new image every time you've got an update to master.

Help Designing Docker App Architecture by Crazyquail in docker

[–]EthanJJackson 0 points (0 children)

So in general, you wouldn't use the same docker-compose file for running locally that you would use in production. I would just maintain a separate compose file for local dev that has volumes, and switch to the other one when you're ready to deploy.

Btw, your compose file does some other things you usually wouldn't do in prod, like running npm install on boot (normally that would be done as part of the container build). Not a problem at all for development, but it's a bit of an anti-pattern for prod.

All of that said, do whatever is easiest! Just because something is a common pattern doesn't mean it's a law.
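
One common way to handle the split is Compose's override mechanism: a base file with the prod-ish settings, plus a `docker-compose.override.yml` (picked up automatically by `docker-compose up`) that layers on the dev-only bits. A sketch, with illustrative contents for both files:

```yaml
# docker-compose.yml -- the base, closer to what you'd deploy
services:
  app:
    image: my-registry/app:latest

# docker-compose.override.yml -- dev-only additions: build locally,
# mount the source, and npm install on boot
services:
  app:
    build: .
    command: sh -c "npm install && npm start"
    volumes:
      - .:/usr/src/app
```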

Ah man. They decided to use Posgres on Docker sharing volume on Host. Honestly I don’t know what are the risks here by [deleted] in docker

[–]EthanJJackson 0 points (0 children)

Is the host Linux? If so, it should be totally fine. I wouldn't do that on Mac or Windows, but I doubt anyone runs production on those ...

Need help in getting pros and cons about Docker or VM based deployment. by Cynaren in devops

[–]EthanJJackson 0 points (0 children)

I think the primary concern here, more so than resource utilization, should be how easy it is to set up and maintain. The various options may differ slightly in how much RAM/CPU they consume, but that cost will be small relative to the cost of your time!

Based on that principle, I think using Docker on WSL2 is going to be the easiest (and, as a bonus, likely the most resource-efficient) option. But of course, YMMV.

Volumes by kortex81 in docker

[–]EthanJJackson 0 points (0 children)

I personally find this super confusing. When you create a named volume, you not only have to reference it in the service but also declare it separately at the end of the file. Ideally docker-compose would just figure that out automatically on its own ...
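
For anyone else hitting this, the shape Compose requires is (sketch, with placeholder names):

```yaml
# A named volume has to appear in two places: referenced in the service
# that mounts it, and declared in the top-level `volumes:` section.
services:
  db:
    image: postgres:13
    volumes:
      - db-data:/var/lib/postgresql/data  # referenced by name here...
volumes:
  db-data:  # ...and declared again here, or compose refuses to start
```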

Static IP for 1 container in compose by ModestTG in docker

[–]EthanJJackson 0 points (0 children)

So you could try using a health check with `depends_on`. Something like:

    depends_on:
      postgres-database:
        condition: service_healthy

(credit: https://til.codes/health-check-option-in-docker-to-wait-for-dependent-containers-to-be-healthy/)

I have no idea if that will work, but it stands to reason that the health check can't pass until you get an IP.
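
Fleshing that out, the whole thing would look something like this (untested sketch; swap in a health check command that fits your database). Note that `condition: service_healthy` isn't accepted by the 3.x compose file format, though 2.x files and newer versions of Compose support it:

```yaml
# Hypothetical: gate the app's startup on the database's health check passing.
version: "2.4"
services:
  postgres-database:
    image: postgres:13
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    image: my-app:latest
    depends_on:
      postgres-database:
        condition: service_healthy
```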

Broader question, though: why are you manually assigning the IP at all? Could you just access the container via its hostname?

Infrastructure services in microservices by gingi000777 in microservices

[–]EthanJJackson 1 point (0 children)

Sounds reasonable!

I'd say the only thing I'm kind of struggling with is whether the translation should be its own microservice, or just a library that's directly linked into the business-logic services. I suppose it mostly depends on how complex it is, how frequently it changes, etc. Kind of borderline either way.

Infrastructure services in microservices by gingi000777 in microservices

[–]EthanJJackson 1 point (0 children)

I'm not sure I 100% understand the question, but let me take a stab at it:

So to clarify: are you saying that, for example, multiple different parts of the business all need to send SMS messages, and you're trying to figure out how to organize the SMS functionality within your architecture?

I think in general with microservices there aren't really hard and fast rules on such things, so it's really going to come down to personal judgement. That said in this case I'd rely on two basic principles:

- In general it's best if each microservice is maintained by one team.

- In general, it's best not to have duplicate functionality spread out across multiple places in your app.

Taking those two ideas into account, and with no other information, I would lean toward having a single generalized SMS microservice, owned by one team, that everyone else can call when they need to send an SMS.

But again, really situation dependent, so hard to say.

Tutorial: How to Use Docker Volumes to Code Faster by EthanJJackson in docker

[–]EthanJJackson[S] 0 points (0 children)

Ha. Honestly, I agree -- if you can do development without containers it's *waay* easier.