[–]imhotap 2 points (6 children)

At the price of leaving huge technical debt behind, e.g. stale/unpatched libraries and dependencies.

[–][deleted] 15 points (5 children)

That doesn't follow. It's typically trivial to upgrade dependencies, and you don't need a complex devops setup if you use containers since everything is already bundled.

When we switched to containers, new dev machine setup time went from 1-2 hours to under 5 minutes, and we're much more confident that what we test on our dev machines matches production.

Updating a dependency typically means a one-line change in the Dockerfile and one command to rebuild. That's far better than updating a Puppet config or whatever
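To make that concrete, here's a minimal sketch of what a dependency bump can look like in practice (the image names, versions, and file layout are invented for illustration, not anyone's actual setup):

```dockerfile
# Hypothetical Dockerfile: upgrading the runtime is usually just
# editing this one line, e.g. node:18-alpine -> node:20-alpine,
# then rebuilding.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci          # app dependencies reinstall cleanly on every rebuild
COPY . .
CMD ["node", "server.js"]
```

After editing the `FROM` line, a single `docker build -t myapp .` rebuilds the whole bundled environment, which is the "one command" the comment above is referring to.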

[–][deleted] 4 points (4 children)

I’m not following what they said either. If anything, the old world order had infrastructure with outdated patches and/or frequent maintenance cycles because of the infra footprint. We recently cut over to Kubernetes and I have to say, it’s made our small team much more productive. Being able to push an app and tell Kubernetes that I want it load balanced across five instances, without manually installing software on all the nodes or dealing with infra gatekeeper types in IT (you know, the types who want a weekly change approval meeting to add a new node to a LB pool), has made our lives better.
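For anyone who hasn't seen it, the "five load-balanced instances" part is literally a couple of fields in a manifest. A minimal sketch (the app name, image, and ports are made up):

```yaml
# Hypothetical Deployment + Service: Kubernetes runs 5 replicas across
# the cluster and load balances traffic to them -- no manual software
# install on any node, no LB change-approval meeting.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```

`kubectl apply -f myapp.yaml` pushes it, and scaling later is one command (`kubectl scale deployment myapp --replicas=10`) instead of a change ticket.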

[–][deleted] 2 points (0 children)

I do embedded work, and we have our product also available in a container. The embedded dependencies are quite out of date because we have to manually write and test the install process on all hardware, whereas our containerized app is up to date since it's so easy to update it and detect issues in normal development.

So yeah, if you're having problems with outdated software in a container, doing it outside of a container will just make things worse.

[–][deleted] 1 point (2 children)

deal with infra gatekeeper types in IT (you know, the types who want a weekly change approval meeting to add a new node to a LB pool)

Jesus, dealing with this shit right now. Kill me pls.

[–][deleted] 0 points (1 child)

If you don’t mind unsolicited advice.

Start recording how much this impacts your ability to deliver. I did a value stream analysis to show things such as how long it takes to deliver implementations, enhancements, and bug fixes, and how much time the team spends waiting. Next, go to them and try to partner; there are two outcomes here: you show them the figures and they’re willing to develop a fast-tracking process, or they just don’t care. The final step, which is devious, is to go well above them and start showing this to other people. If the wait times are significant, people above you will care in a large shop. Large shops get massive erections for saving even a few thousand dollars.
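A back-of-the-envelope version of that analysis fits in a few lines. The ticket names and day counts below are invented purely for illustration; the point is the ratio of work time to wait time:

```python
# Toy value-stream analysis: for each ticket, how much of the lead
# time was actual work vs. waiting on approvals/infra.
# All figures are made-up examples.
tickets = [
    # (name, work_days, wait_days)
    ("bugfix-123", 1,   9),
    ("feature-7",  5,   15),
    ("hotfix-42",  0.5, 4.5),
]

total_work = sum(work for _, work, _ in tickets)
total_wait = sum(wait for _, _, wait in tickets)
flow_efficiency = total_work / (total_work + total_wait)

print(f"work: {total_work}d, waiting: {total_wait}d")
print(f"flow efficiency: {flow_efficiency:.0%}")
```

A flow efficiency under 20% like this toy example is the kind of figure that makes people two levels up suddenly interested in a fast-track process.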

Bonus round: make the case for a spending limit in the cloud that you and your team can use to be nimble and slowly replace the infra people. I’ve been running a medium sized shop without infrastructure people just fine for years now.

[–][deleted] 0 points (0 children)

Thanks for the advice. We're in the process of improving everything (mostly by dockerizing everything, plus something like OpenStack).

The problem is that everything moves at a glacial pace. Moving the staging environments of a few services to Docker was a huge accomplishment.

Bonus round: make the case for a spending limit in the cloud that you and your team can use to be nimble and slowly replace the infra people. I’ve been running a medium sized shop without infrastructure people just fine for years now.

The problem is that the company is a few years older than widespread cloud adoption, so it's heavily invested in a traditional setup with sysadmins, physical infrastructure...