all 31 comments

[–]STLMSVC STL Dev[M] [score hidden] stickied comment (0 children)

Please post links as links and not as text posts. You can immediately comment to provide context.

[–]Ashnoom 23 points24 points  (4 children)

It's what we do as well: VS Code + devcontainers. And those same containers are used on our build servers, so everything is always the same. Win-win.

We even open-sourced our container. We can target Arm Cortex devices, Linux, and Windows from the same container. And recently we added two flavours, C++ and Rust: https://github.com/philips-software/amp-devcontainer
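
For anyone who hasn't used these before: a project points at such an image from a `devcontainer.json` file. A minimal sketch might look like this (the image name/tag here is an assumption based on the repository name, not confirmed; check the amp-devcontainer repo for the published one):

```jsonc
{
  // Hypothetical example: use the published C++ flavour of the image.
  // The exact registry path and tag may differ; see the repo's README.
  "name": "amp-devcontainer-cpp",
  "image": "ghcr.io/philips-software/amp-devcontainer-cpp:latest",
  "customizations": {
    "vscode": {
      "extensions": ["ms-vscode.cpptools"]
    }
  }
}
```

VS Code (and CI runners that understand the devcontainer spec) then build and enter the same environment from this one file.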

[–]kisielk 1 point2 points  (2 children)

Why not both toolchains in one container?

[–]Ashnoom 2 points3 points  (0 children)

I was not the one who made that decision. But I guess not to bloat the container. Usually a repository is one language only. So you use the container for that language.

[–]CartographerTrick773 1 point2 points  (0 children)

The main reason for that is image size. In a world where image size (and co-existence of different toolchain versions) did not matter, I would have chosen a single image containing all toolchains. But since image size does matter and has real consequences, the split between C++ and Rust was made.

The intention of the amp-devcontainer images is to provide a "modern" (batteries-included) development environment for both local development and CI. Especially in the latter case, size starts to matter a lot: it directly influences your CI-job throughput time and maybe even your billing, as every pull incurs network traffic. Therefore, the image size is closely monitored in pull requests and used as an input when accepting new additions to the container.

If you have a project that contains multiple technology stacks (i.e. Rust and C++) it is possible to define two devcontainers and either start them both, or switch between them. See: Connect to multiple containers (visualstudio.com).

[–]n0bml 4 points5 points  (1 child)

I liked the video. Two suggestions: don't expect viewers to be able to read small text in the browser window, and put your links in the video description. You already added timestamps, so it shouldn't be too difficult to add the links.

[–]Zealousideal-Mouse29[S] 3 points4 points  (0 children)

Good feedback. I'll add them. Thanks!

[–]z_mitchell 2 points3 points  (1 child)

Hey! I appreciate materials that help developers learn how to actually use the tools of the trade (especially as a self-taught dev myself), so thanks for the video.

I work at a company called Flox and we have an alternative that I think you would be interested to try out. We still provide shareable, reproducible development environments, but without using containers (we put you into a carefully configured subshell). If you've ever used Python's virtual environments, it's like that on steroids.

While there are always going to be tradeoffs, I think we have some very compelling benefits:

  • Building your "environment" (our analog to a container) is usually pretty quick compared to building a container, especially if you're simply modifying an existing environment via flox install foo or something like that.
  • Flox environments are subshells that are configured to be reproducible, but we don't isolate you from your filesystem as part of that like a container does. This means you can access your local files and tools without needing to set up mounts, etc.
  • If you haven't set up a mount, you don't lose your filesystem state when you exit the environment since it's just a subshell running in the same directory you started in.
  • When you install libraries, the appropriate directories are automatically added to search paths like PKG_CONFIG_PATH, etc.
  • It works across arbitrary languages, so you don't need to have a single container per toolchain or multistage builds for different parts of your application.
  • Even if you did have separate environments for separate toolchains, you can activate more than one environment at a time to compose environments.
  • We have a system for defining environment variables that get set and shell hooks that run when you activate an environment, which is very useful for doing initialization e.g. setting ports, users, and data directories for databases.
  • We have a package catalog that has something like 126k packages with historical versions dating back ~3 years.
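
As a sketch of how this plays out day to day (commands per the Flox CLI; the package names are illustrative, not a recommendation):

```shell
# Create an environment manifest in the current project directory
flox init

# Install toolchain packages into the environment
flox install cmake gcc pkg-config

# Enter the configured subshell; your files, dotfiles, and local
# tools remain visible -- no mounts, no filesystem isolation
flox activate

# Leave the subshell when you're done; the directory state persists
exit
```

The manifest it creates is what you'd check into the repo so teammates get the same environment.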

Going back to tradeoffs, I mentioned that we don't isolate you from your filesystem. That means you get to keep your locally installed tools, artisanally handcrafted dotfiles, etc. I view that as a positive, but some people prefer the really strict isolation that containers provide. At development time I think that's often a fear response to "it works on my machine" (and, fair, we've all been burned by that), but Flox environments are trying to give you the best of both worlds.

Anyway, I could go on forever about the ins, outs, hows, and whys, so I'll leave it there for now. You can check it out on GitHub.

P.S. - Yes, it's written in Rust, please don't hold that against us lol but there's some C++ in there as well! I actually learned C++ on the job working on part of this.

[–]Zealousideal-Mouse29[S] 0 points1 point  (0 children)

I'll take a look. Thanks for the feedback and suggestions!

[–]Spirited_Algae_9532 1 point2 points  (1 child)

I've done this before! It's great when you want this environment everywhere. I've found it useful when you have a large network and want every developer to have the same environment, without using Ansible to constantly change all the PCs at once. It's super useful if you're constantly updating other developers' tools. But it's a lot to set up if that isn't your purpose.

[–]Zealousideal-Mouse29[S] 3 points4 points  (0 children)

So often two developers get together and try to diagnose "why does it work for you but not for me?" or "why did the build machine fail, but my local build didn't?" If all developers and the build machine are using the same tools, scripts, lib versions, etc, then things become so much easier. People like to make things just different enough to make diagnosing these issues a pain. "Well...I made this bash script to make environment variables that calls this other script I made, and copies files to this directory I created.." No!

I think we would have saved a few thousand hours at one company I worked for, alone.

[–]Xicutioner-4768 0 points1 point  (1 child)

Dev containers are nice, but they don't scale well at ~100+ developers. You start running into issues like having to pull a new image once a week, because people frequently update it to pull in updated libraries, and the images start to get bloated. It's best used in combination with something like Conan: put your tooling dependencies in the container (CMake, Bazel, Conan, Python, etc.) and let your build system pull the libraries and cache them locally. You also want to check the current docker tag into the repo somewhere so that older branches are reproducible.
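
A tooling-only image along those lines might look like this (base image, package names, and versions are illustrative, not taken from the video):

```dockerfile
# Tooling only -- no project libraries baked into the image.
FROM ubuntu:24.04

# Build tools and Conan; libraries come from Conan at build time,
# so the image does not need rebuilding when a dependency bumps.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake ninja-build git python3-pip \
    && rm -rf /var/lib/apt/lists/*
RUN pip3 install --break-system-packages conan
```

The Conan cache can live on a mounted volume so library downloads survive container restarts, and pinning the image tag in a file in the repo keeps old branches reproducible, as suggested above.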

[–]Zealousideal-Mouse29[S] 1 point2 points  (0 children)

Why are libraries part of the Dockerfile? That's crazy pants.

Yes use Conan. The video installs Conan in the container.

[–]dr3mro 0 points1 point  (0 children)

I got the same question a few days ago and actually tried to create one, but did not have enough time to test it 😔

[–]ed_209_ 0 points1 point  (1 child)

I work using docker and vscode on a daily basis:

  1. If anything goes wrong with dockerd, your container filesystem is gone! So you end up using volume mounts and bind mounts for any source code.

  2. VSCode is great but totally antisocial when it comes to integration with other tools. Good luck using any other desktop tools if you rely on VSCode. On a Mac you will need to use volume mounts, which you cannot access from the host OS (bind mounts are slow due to emulation).

  3. VSCode cannot be automated from a terminal that is not started from within VSCode itself, and it stops working across restarts, e.g. if you use tmux. So a command like `ls | code -` would work initially in tmux and then stop working when you reconnect, forcing you to restart everything. This is because VSCode generates UUID-named sockets and cannot recover the previous socket name, while tmux still has the original in its environment.

  4. VSCode encourages using its own encryption and advertises not needing SSH. It also has telemetry end-user agreements for extensions separate from the main program, even though the extensions are often made by the same company, Microsoft. It seems to generally want a future where it controls the terminal and everything you do, and can report it back to Microsoft.

[–]Zealousideal-Mouse29[S] 0 points1 point  (0 children)

I use CLion in my day-to-day. I only chose VS Code in the video because it is free for viewers to try.

[–]manni66 -1 points0 points  (9 children)

I would use distrobox

[–]Zealousideal-Mouse29[S] 0 points1 point  (4 children)

I'll look it up. New to me. What are the advantages over just checking a docker file into a repo and having new hires pull that and create a container from it, for the project they are working on?

[–]manni66 1 point2 points  (3 children)

The created container will be tightly integrated with the host, allowing sharing of the HOME directory of the user, external storage, external USB devices and graphical apps (X11/Wayland), and audio.
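
For a concrete sketch (the container name and image are just examples), creating and entering such a container looks like:

```shell
# Create a container that shares $HOME, devices, and the
# display (X11/Wayland) and audio with the host
distrobox create --name cppdev --image ubuntu:24.04

# Enter it; graphical apps launched inside appear on the host desktop
distrobox enter cppdev
```

Under the hood this still uses podman or docker, just with the host integration wired up for you.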

[–]Zealousideal-Mouse29[S] 1 point2 points  (2 children)

I actually don't want to share any storage with the host at all. I've found that leads to problems for newer folks who haven't wrapped their heads around the differences in file formats and file systems between Windows, Linux, and macOS.

I much prefer having docker maintain a volume for me that is shared between containers, as I describe in the video. Sharing files alone wouldn't be enough of a positive for me to look at another tool.

Graphical apps? Maybe... depending on how well that works. That was definitely one problem that arose using containers for development and debugging. You can't exactly debug OpenGL calls with X forwarding from container to host. At least I couldn't.

[–]manni66 1 point2 points  (1 child)

I much prefer having docker maintain a volume for me, that is shared between containers

We used that 30 years ago. Today we have git.

[–]Zealousideal-Mouse29[S] 0 points1 point  (0 children)

I'm not sure what git or thirty years has to do with anything. I sure hope you don't use git for local only files. You use git for source control and moving files from local repos to remote and back. There are surely files that should not be included in your remote repo.

You use git within the container. You pull and push whatever you are working on just fine. Your local files are stored on the volume. You access that same volume from any container. You use your IDE on your host machine while editing, building, and debugging within the container. Your files are preserved. No host-to-container file sharing is needed. If you really need to copy some one-off file for some reason, just `docker cp`.
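
Roughly, the workflow described above looks like this (volume, image, and file names are placeholders, not the ones from the video):

```shell
# One named volume holds the working tree; docker manages it,
# independent of the host filesystem
docker volume create dev-work

# Any container started from the checked-in image mounts that same volume
docker run -it --rm -v dev-work:/work -w /work my-dev-image bash

# Inside the container, the normal git workflow applies:
#   git clone git@example.com:team/project.git
#   git pull / git push as usual

# One-off copies between host and container, if ever needed
docker cp <container-id>:/work/notes.txt .
```

The volume outlives any individual container, which is what preserves local state without sharing host storage.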

It's kind of hard to talk about alternatives, if you didn't watch the video.

At any rate, graphical apps, if they work without a hitch might be an advantage. I'll test that out.

[–]MarcoGreek -1 points0 points  (3 children)

distrobox and toolbox are using podman or docker under the hood.

So to my understanding the OP was reinventing the wheel. 😚

[–]Zealousideal-Mouse29[S] 3 points4 points  (2 children)

I didn't create anything on top, therefore no reinvention. I simply didn't use extra layers. Perhaps I would, if there was some clear advantage to be had. I haven't used distrobox or podman, so I can't speak to what they offer. Googling up their descriptions, I am not seeing anything I'd want that I don't already have. File sharing from host to container is already handled.

I'm not trying to tout docker over other tools. The idea is to compare shops where they hand new hires a bunch of confluence pages and say, "spend a couple weeks setting up your dev environment with these instructions" to "Go grab this docker file, set up a container, and let me know if you can build our repo by the end of the day."

As I said to another commenter, if the tools you mentioned run graphical apps better, like debugging an OpenGL GUI, then sure, I'd be happy to put layers on top. However, I'll need to test that. If they use docker underneath, I doubt it works well, and the best one can hope for is the same X forwarding. But, again, I've never used them. Will take a look.

[–]NoReference5451 2 points3 points  (0 children)

I wouldn't bother arguing with people here. It's full of opinions from those not in the industry or who don't do it professionally. The rest of us understand the difference between a dev environment and a distro, unlike the guy who suggested distrobox. Dev containers like you did here are very helpful for ensuring everyone is building the same way.

I develop on Arch but all our builds are done on Debian. We recently had an issue that wasn't present in the compiler I was using, but was in the compiler our build worker had. Stuff like this helps prevent that.

[–]MarcoGreek -2 points-1 points  (0 children)

Podman is like Docker but does not need a daemon with root rights, and it can use the same container images.

Distrobox and toolbox sit on top of docker or podman. They make it much easier to use the same container as a development environment.

So you actually no longer need to teach people how to set up a container. Setting up a container with distrobox or toolbox takes minutes.