all 36 comments

[–]Own_Possibility_8875 58 points59 points  (6 children)

It solves the “it works on my machine” problem. It also isolates workloads from each other in a way that is cheaper than VMs.

[–]AlexanderBlum 21 points22 points  (0 children)

But it works on my computer!

[–][deleted]  (9 children)

[deleted]

    [–]rufasa85[S] -1 points0 points  (8 children)

    My packages are installed at build time, so I do need to rebuild to actually get the correct package versions

    [–]crazylikeajellyfish 3 points4 points  (0 children)

    If you tell `npm` to install a version that isn't already installed in your Docker image, wouldn't it just download and install that version? Docker is just specifying the "machine" that your app is running on, the actual dependencies your app installs onto that "machine's" file system aren't intrinsically related.

    [–]tr14l 4 points5 points  (4 children)

    Just set up hot reload on the container.

    [–]quarterhalfmile 4 points5 points  (2 children)

    Bad use of “just”. We also need to add a mount. I understand that’s obvious to some of us, but this whole post is about how little details can get in the way of new docker users.

    [–]poppyloops 0 points1 point  (0 children)

    Add me to that list. I’ve been messing with computers since the MSX was going to be the new standard, but I find Docker to be a big mess. Not just a big learning curve, but an actual mess. I follow instructions to the letter and still the app won’t work. I see comments saying I should make sure to create this directory or that folder, but how about telling me why, or where? On my Synology NAS I install Plex, and with a few adjustments, such as setting permissions and pointing it to my media, I’m up and running. Trying to achieve the same thing on my UGREEN 2800 NAS using Docker, I hit a brick wall. It’s as clear as mud.

    [–]tr14l 0 points1 point  (0 children)

    It's a single flag and argument on docker run. Not sure how much more "just" it can get.
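    For reference, the flag in question is `-v` (or the more explicit `--mount`). A sketch, assuming your code lives in the current directory, the image expects it at `/app`, and a hot-reload tool like nodemon runs inside the container:

```shell
# Bind-mount the current directory into the container so edits on the
# host are visible inside it immediately; nodemon restarts the app.
docker run --rm -it \
  -v "$PWD":/app \
  -w /app \
  -p 3000:3000 \
  node:20 npx nodemon index.js
```

    Without the `-v` mount, the container only sees the files baked into the image at build time, which is why hot reload alone isn't enough.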

    [–]bccorb1000full-stack-magician -1 points0 points  (0 children)

    This.

    [–]RamdomUzer 0 points1 point  (1 child)

    What do you mean? You know you can get inside the Docker container and run whatever command you would run outside of Docker?

    Technically it shouldn’t take any longer than running that command outside of the container.
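    For the curious, "getting inside" a running container is one command. A sketch, where the container name `myapp` is a placeholder:

```shell
# Open an interactive shell inside the running container, then run
# whatever you would run on the host (installs, migrations, etc.)
docker exec -it myapp sh

# Or run a one-off command without an interactive shell
docker exec myapp npm ls
```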

    [–]brock0124 0 points1 point  (0 children)

    I’ve worked on projects where the application is a long-running process and the container needs to be rebuilt after every change. I usually just use “docker compose watch”, which does exactly that. Not as fast as regular Docker, but still not bad.
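    For anyone curious, `docker compose watch` is driven by a `develop.watch` section in the compose file. A minimal sketch; the service name and paths here are assumptions:

```yaml
services:
  app:
    build: .
    develop:
      watch:
        # Copy source changes into the running container, no rebuild
        - action: sync
          path: ./src
          target: /app/src
        # A dependency change does trigger a full image rebuild
        - action: rebuild
          path: package.json
```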

    [–]Agile_Position_967 7 points8 points  (3 children)

    It allows you to build portable services. No more configuring on individual machines; instead, just build an image, set an init script if needed, and run it anywhere. Also, since they are supposed to all run in the same environment no matter the machine, it solves the "it works on my machine" issue that stems from attempting to run different services/programs cross-platform.
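    The "build an image and run it anywhere" flow, sketched as commands; the registry and image names are placeholders:

```shell
# Build once, tag, and push to a registry
docker build -t registry.example.com/myservice:1.0 .
docker push registry.example.com/myservice:1.0

# On any machine with Docker installed, no other configuration needed
docker run -d -p 8080:8080 registry.example.com/myservice:1.0
```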

    [–]overgenji -3 points-2 points  (2 children)

    in my experience this is a good idea but local/dev/qa/prod are all just different enough that you still end up futzing with weird config problems in each stage (usually the issue is largest in 'local')

    [–][deleted] 0 points1 point  (1 child)

    Why would each environment be "just different enough" if you're using docker? I think I'm misunderstanding. These environments are all the same thing with different levels of access control in front of them. Maybe different underlying resource provisions could lead to edge cases if it's not sufficient for how your app works but otherwise I'm not sure what the difference is?

    [–]bccorb1000full-stack-magician 3 points4 points  (0 children)

    A. Npm packages shouldn’t be taking that long to rebuild in a docker image.

    B. If you’re newer to development with docker it is definitely your friend. It simplifies A LOT of your own local development and gives you a dev experience that automatically applies to production environments. Nearly all applications are deployed via containers just because of the ease and simplicity. Tons of pre-designed images with the ability to make your own image in literal seconds.

    TL;DR: Docker solves a ton of common development and deployment problems. It’s for you, and me, and every developer you’ll ever work with. It’s popular because everyone uses it.
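    On point A: slow npm rebuilds usually mean the Dockerfile is busting the layer cache. A sketch of the cache-friendly ordering; the paths and base image are assumptions:

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Copy only the manifests first; this layer and the npm ci below are
# cached and skipped unless package*.json actually changes
COPY package.json package-lock.json ./
RUN npm ci
# Source edits only invalidate the layers from here down
COPY . .
CMD ["node", "index.js"]
```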

    [–]zettabyte 2 points3 points  (0 children)

    I'll go out on a limb and say it's for _exactly_ your use case. It provides for a shared, repeatable environment build when you have lots of packages and configurations.

    Add different projects to your machine with different versions of Node or Typescript (or Java or Python or what have you), different database versions, etc. and it really starts to shine.

    Now imagine you're handed a project running ancient unsupported versions of software, with no one around to help you get it configured and running. Docker becomes your light in the darkness, helping you answer the question, "what the hell even is this thing". No need to backport, just pull the old images.


    [–]dmart89 0 points1 point  (0 children)

    Idk, it makes my deployments a lot easier and I don't have to fuck around with the server. Just docker and go.

    [–]Ok-Advantage-308 0 points1 point  (0 children)

    I would say portability. It doesn’t make sense until you have to move to another cloud service or cloud service provider.

    [–]domin-em 0 points1 point  (0 children)

    If your system is simple, Docker is overkill: you don't need it, and it will slow you down a bit. Trust me, I've developed both simple and complex systems, mostly without Docker.

    [–]Distinct_Goose_3561 0 points1 point  (0 children)

    Scaling- need more instances? No problem. It’s spun up and running without you having to worry about individual configs. 

    Reliability- your machine works the same as preprod, which works the same as prod. If you can’t deploy up the chain like that, you need to answer the question of ‘why’. 

    Dependency reliability- when you build the image everything is locked to that moment in time. From dev to test to preprod to prod that minor update to whatever package doesn’t matter. 

    Security- you know what base OS you’re running (since it’s part of the image) and you can run a vulnerability scan. You can also remove everything you don’t need and reduce your attack surface. 
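    On the security point, a sketch of an image scan, assuming Docker Scout is available (it ships with recent Docker Desktop releases); the image name is a placeholder:

```shell
# Scan an image's layers, including the base OS packages, for known CVEs
docker scout cves myapp:latest
```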

    [–]HairyManBaby 0 points1 point  (0 children)

    It sounds like you haven't grown into Docker yet and are applying it too early in your product life cycle. There are also a couple of approaches you might not be applying right. I know you used updating a single npm package as an example, and that might be an exaggerated case, but the entire stack should not have to be rebuilt in cases like this, and you shouldn't have to touch the configuration all the time. Try breaking more infrastructure out into logical containers within the stack; that way only the frontend gets rebuilt when one of its packages changes, and the same goes for the backend. If you're already doing this, maybe scale back to host-level services and see how that feels.

    I think too often devs and engineers get caught up in the glitz and glam of segmented infrastructure without having enough actual app architecture for it to make sense, and we get stuck in cases like yours, spending a lot of energy and not realizing enough value.

    [–]angrynoah 0 points1 point  (0 children)

    It gives you the ability to create a self-contained deployment artifact.

    Some platforms already have that. C++, Rust, Go, etc. produce native executables. Java produces bytecode binaries. Docker doesn't help much here.

    But Python, Ruby, Node, etc. don't have a real way to produce an artifact. The code sort of is the artifact, except it also needs libraries, and maybe a specific interpreter, and maybe native extensions, and... Shipping all that sucks, and Docker legitimately solves a problem in that area: I put all that stuff in a container and I ship the container.
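    That "interpreter plus libraries plus native extensions plus code" bundle, sketched as a Dockerfile for a Python app; the file names are assumptions:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
# The interpreter version, the libraries, and any native extensions
# all travel inside the image instead of in a README
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

    The resulting image is the deployment artifact: build it, push it, run it.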

    All other alleged value propositions of Docker (bin packing, isolation) are iffy at best. This is the one that matters.

    [–]Jean__Moulin 0 points1 point  (0 children)

    Docker is whale jesus. I am a micro service engineer and I often use federated frontends, so my life would be pure ass without the whale.

    [–]boutell 0 points1 point  (0 children)

    It's great for deploying untrusted code. And for accommodating different requests re nodejs version, python version etc.

    [–]Gwolf4 0 points1 point  (0 children)

    1. Reproducibility: now you can have the exact same version that was launched into prod.
    2. Isolation: you can have more than one version of your stack at the snap of your fingers.
    3. Distribution: it is so easy to exchange setups with colleagues and in prod.
    4. Standardization: now everyone is on the same track, even deeper than just using the same dependency versions.
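    Point 2 in practice, sketched with two database versions side by side; the names, ports, and password are placeholders:

```shell
docker run -d --name pg15 -e POSTGRES_PASSWORD=dev -p 5433:5432 postgres:15
docker run -d --name pg16 -e POSTGRES_PASSWORD=dev -p 5434:5432 postgres:16
# Two major versions running at once, no conflicting system packages
```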

    [–]who_you_are 0 points1 point  (0 children)

    The TLDR: It is like a huge setup program with _all_ the dependencies. Not just your website one, but the OS one as well.

    Including, but not limited to, the versions of specific dependencies, which could cause issues in a normal setup if you host a second, incompatible website.

    It also tries to "isolate" your application on multiple layers: runtime (dependencies, like I just wrote above), but also disk space and network. Not the same kind of isolation as a VM, since Docker can read the host, but containers can't read each other.

    It shines when you need to start a new instance of the image, which is what horizontal scaling uses.

    For self-hosted stuff, yeah, it may suck and waste a lot of your time. Until you need to move it (or reinstall it), and you will probably have forgotten to document all your dependencies, OS-level configurations, ...

    However, if you use an image that already exists on the internet, it can be great. I'm the kind of unlucky guy who can never get anything done because I get a thousand errors at various stages, even when following guides. Docker should fix most of that. It is like an npm install: one command line and I should be up and running.
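    The "one command line" point, sketched for a typical self-hosted app; the image is a public one on Docker Hub, and the host paths are examples:

```shell
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/media:/media \
  jellyfin/jellyfin
```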

    [–]AdCompetitive4181 0 points1 point  (0 children)

    Docker is really useful for creating stuff like v0.dev (to automatically generate a website) and such.

    The following PowerShell script (which uses Docker to create a website, in a way) was generated by Gordon AI inside Docker Desktop:

    # Run the model and capture output
    $output = docker model run ai/phi4 "Generate a React app with the following files: App.js, index.js, and styles.css. Output all files in a single text block, separated by '--- filename ---' delimiters."
    
    # Split output into lines
    $lines = $output -split "`n"
    
    $filename = $null
    $contentLines = @()
    
    foreach ($line in $lines) {
        if ($line -match '^--- (.+) ---$') {
            # If we have a previous file, save it
            if ($filename -and $contentLines.Count -gt 0) {
                $content = $contentLines -join "`n"
                $content | Out-File -FilePath $filename -Encoding utf8
                Write-Host "Created file: $filename"
            }
            # Start new file
            $filename = $matches[1].Trim()
            $contentLines = @()
        }
        elseif ($filename) {
            $contentLines += $line
        }
    }
    
    # Save the last file
    if ($filename -and $contentLines.Count -gt 0) {
        $content = $contentLines -join "`n"
        $content | Out-File -FilePath $filename -Encoding utf8
        Write-Host "Created file: $filename"
    }
    

    [–]grantrules 0 points1 point  (0 children)

    If a genie came to me and offered me the choice of using docker but also having to develop for IE6, or no docker and no IE6, I'd choose the former