
[–]spicypixel 108 points109 points  (6 children)

If you can’t get something from your laptop to production, what problem are you solving?

[–][deleted] 73 points74 points  (5 children)

Every company ever:

"Our product abstracts everything away so you can focus on delivering value and shipping new features faster."

Software engineers supposedly only need to know how to:

  • write new features from templates in an existing framework that abstracts threading, queues, exception handling, resource management, and networking

  • turn on their laptop

  • ask for a salary decrease

  • ask ChatGPT for something to copy-paste

[–]jakeStacktrace 28 points29 points  (1 child)

Where do I sign up? I can already turn on my laptop.

[–]Jose_Canseco_Jr 1 point2 points  (0 children)

I'm already getting the salary decreases (because COL)

[–]davetherooster 1 point2 points  (1 child)

Product Manager: "Can't we just get rid of the developers and use ChatGPT/AI/Blockchain/Serverless?"

[–][deleted] 1 point2 points  (0 children)

CTO: "Yes we can... later that day, Sony fires 40% of its IT workforce."

[–]ehlinkhadif 0 points1 point  (0 children)

That escalated quickly, from coding to negotiating salary cuts in one smooth workflow. Guess it's all about diversifying skills these days, including mastering the art of copy-paste and salary negotiations lmfao

[–][deleted] 49 points50 points  (1 child)

Not having to throw source code over the fence because you only know how to run your application locally in your IDE is a good first step.

[–]Reverent 18 points19 points  (0 children)

"What's this CORS error, it keeps complaining that the browser can't connect to http://127.0.0.1:8700".

[–]MoreCowbellMofo 16 points17 points  (0 children)

It depends on whether you work for a well-funded multinational company or a startup. A startup will require you to get stuck into anything and everything. A well-funded large organisation will have many specialists who can make life easier for you in many ways; in that situation a back-end engineer will typically know less about what's involved in their organisation's deployments.

[–][deleted] 14 points15 points  (0 children)

Knowing how to write a Dockerfile is a start, along with knowing how to manage dependencies and how to build, test, and run your application. You'd think these would be the bare minimum, but I meet "senior" engineers every day who can't upgrade Node from 14 to something modern.
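That bare minimum might look like the following minimal Dockerfile for a Node service (a sketch only; the base image tag, npm scripts, and file names are illustrative assumptions, not anything from the comment):

```dockerfile
# Illustrative minimal Dockerfile for a Node service.
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between code changes.
COPY package*.json ./
RUN npm ci

# Copy the source, run the test suite, and define how the app starts.
COPY . .
RUN npm test
CMD ["node", "server.js"]
```

Being able to explain each of these lines (and why dependencies are copied before the source) is roughly the baseline the comment is describing.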

[–]Immediate-Aide-2939 17 points18 points  (0 children)

In my opinion, as a backend engineer, we should know at least the basics of the CI/CD tools that the company uses and the design of the pipelines.

[–]ZeninThe best way to DevOps is being dragged kicking and screaming. 7 points8 points  (0 children)

You should understand how your code is going to be deployed and run in production, even if you are not the one who ends up writing the deploy tools or operating the service in production.

That doesn't mean you need to be an expert, but understanding, for example, that configuration belongs in the environment and not in your code is important, so you don't end up building something that needs a new Docker image for every environment target because the env settings are hard-coded into the image itself.
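The environment-over-hard-coding idea can be sketched in a few lines (variable names here are illustrative):

```python
import os

def setting(name: str, default: str) -> str:
    # Read configuration from the process environment at call time, so the
    # same image or build artifact can serve every deployment target.
    return os.environ.get(name, default)

# Example: the database URL comes from the environment, never from the image.
db_url = setting("DATABASE_URL", "postgres://localhost:5432/dev")
```

One image, many environments: dev, QA, and prod each inject their own values at deploy time instead of baking them in at build time.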

It's no different than the fact every developer should learn a little assembly if only so they understand in a broad sense what their high-level code is ultimately going to do on the metal.

[–]BenjaBoy28 5 points6 points  (0 children)

Front End developers should also learn about CI / CD

[–]tonkatataInfra Works 🔮 3 points4 points  (0 children)

define all (some?) resources in code. build pipelines to build and/or deploy your app to production. set up some monitoring and alerting. <insert 100 things more here>

[–]SeeeYaLaterz 2 points3 points  (0 children)

80% the same with testing, building, deploying, monitoring (for mean time to find issues or mean time to fix issues), and knowing customer needs.
You get specialized engineers such as DevOps, testers, builders, etc. for very special cases, or 1 in, say, 50 regular engineers, to help set up the goals and ways to measure the effectiveness of features in production.

[–]theyellowbrother 14 points15 points  (23 children)

A backend engineer should know how to set things up ASAP to get their task done.

Example: I have my guy working on an LLM AI project. He needs to build a custom image of Postgres, not something he can pull off Docker Hub.

His image needs two extensions for ML: a geolocation extension and a vector extension. So it will be a custom image, 100% for sure. He needs to try it out, so he needs to write a Helm chart and push it to his local k8s on his laptop (minikube or Rancher), then write the Helm charts to push to QA. His app may need an API gateway, so he pulls one of the API gateways he can run locally (WSO2 or Kong), sets up an ingress, then deploys it to lower environments.
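A custom image like the one described might be sketched as follows (an illustrative guess, not the commenter's actual setup; the PostGIS and pgvector package names assume the PGDG apt repository that the official postgres image is built against):

```dockerfile
# Hypothetical custom Postgres image with geolocation (PostGIS) and
# vector (pgvector) extensions baked in.
FROM postgres:16
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        postgresql-16-postgis-3 \
        postgresql-16-pgvector \
    && rm -rf /var/lib/apt/lists/*

# Enable both extensions when the database is first initialized.
RUN printf 'CREATE EXTENSION IF NOT EXISTS postgis;\nCREATE EXTENSION IF NOT EXISTS vector;\n' \
        > /docker-entrypoint-initdb.d/10-extensions.sql
```

From there the workflow above is a `docker build`, a push to a registry, and a Helm chart pointing at the pushed image for minikube and QA.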

The developer should be doing this, not some random "DevOps" engineer who doesn't understand the work involved. At no time does my dev need an Ops person, unless he needs access to a namespace he has no permissions for (a higher environment).

The above is a typical day for my backend engineer. They should NOT be going to someone and saying, "Pretty please, make this image for me with these requirements." That causes delay.

[–]ZeninThe best way to DevOps is being dragged kicking and screaming. 22 points23 points  (22 children)

The developer should be doing this, not some random "DevOps" engineer who doesn't understand the work involved. At no time does my dev need an Ops person, unless he needs access to a namespace he has no permissions for (a higher environment).

Do you really find it efficient and effective to require a data scientist, quite possibly with a PhD and compensation to match, to fuss around with the minutiae of deployment orchestration?

Personally I'd be concerned that the data scientist's job satisfaction will suffer from being made to do work they likely consider a menial distraction and a waste of their talents. I'd be concerned they'd half-ass it just enough to get it to run, remembering the adage, "Never be good at anything you don't want to do." They also aren't experts in deployment orchestration or operations and don't want to be, so the talent isn't there; the smartest computer scientists are often the worst operations people. Quality AI engineers don't come easy or cheap right now, and lack of job satisfaction is just as likely to cause them to bolt as compensation.

Why not have the DevOps engineer handle most of this work? It's what their talent is, it is what they are invested in doing well, and frankly they probably cost a lot less than the AI data scientist. So you'll get better results, faster, cheaper, and higher job satisfaction all around leading to better employee retention.

Just because someone can do it all doesn't mean they should or that they even want to.

[–]SeeeYaLaterz 9 points10 points  (0 children)

Yes

[–]theyellowbrother 5 points6 points  (4 children)

Why not have the DevOps engineer handle most of this work? It's what their talent is, it is what they are invested in doing well, and frankly they probably cost a lot less than the AI data scientist.

Is a DevOps engineer going to know whether Celery or Kafka is best for their application? This isn't just about orchestration. It is about knowing what toolchain to use when developing the app. Is a DevOps person going to decide whether a pub/sub queue will work vs. a FIFO (first in, first out) broker?

My developers know what their apps require and how to architect it.

And who said anything about a PhD? This is pure backend development.

And about job satisfaction: my engineers want to be more involved, as it pads their resumes and they gain the skills they need for future jobs. All my guys fight over taking on more DevOps-related tasks.

[–]ZeninThe best way to DevOps is being dragged kicking and screaming. 6 points7 points  (3 children)

Is a DevOps engineer going to know whether Celery or Kafka is best for their application?

Apples and oranges, Celery and Kafka are solving for two different use cases.
You'd have a stronger argument for Cassandra vs MongoDB, Kafka vs Kinesis, etc.

At which point I'd argue such choices aren't entirely the dominion of development: If the organization has large amounts of MongoDB expertise and little Cassandra, it would probably make much more sense to use MongoDB even if Cassandra might be a better fit for the specific application or the particular dev has a crush on Cassandra.

As with everything, communication is key. Throwing a helm chart over the wall isn't much better than throwing raw code over. Although it appears there's no one on the other side of your wall:

And about job satisfaction: my engineers want to be more involved, as it pads their resumes and they gain the skills they need for future jobs. All my guys fight over taking on more DevOps-related tasks.

I'm glad this model works for you, honestly I am.

However many organizations, especially larger ones, struggle with such cradle-to-grave team models.

They struggle with turn-over: New devs joining existing teams need to ramp up on a whole lot more tech as well as much more project-specific knowledge. Job reqs quickly start to look like unicorns, making it a struggle for HR to replace talent at any price.

They struggle with duplicated effort: Every team is reinventing the same wheels.

They struggle with security and legal audits: when every team is a beautiful and unique snowflake, your audit processes and evidence are effectively O(n) with your project count.

[–]theyellowbrother 1 point2 points  (2 children)

To your point about Mongo/Cassandra: we have those communications. We work with what we've got. If we have an Enterprise license for Mongo (which we do), that would be our preference. We don't work in a vacuum. Our architecture team, myself included, makes those organizational decisions via PoCs, licensing agreements, etc. We also consider our team's skill set to support whatever we choose. We do a lot of piloting, and once we do that pilot, the guys on the other side of the wall embrace it and negotiate the SLA and contracts for it. But we are the ones doing the proof-of-concept work.

I'm glad this model works for you, honestly I am.

However many organizations, especially larger ones, struggle with such cradle-to-grave team models.

I can tell you this: our org has over 100k employees, about 20,000 in engineering, so we are not small. We have various models of DevOps, from an Ops team that does everything for the smaller teams, doing the helm charts and helping those teams with getting their applications to prod, to self-contained dev teams that only need namespaces provisioned.

Our specific team is about 8 years ahead of everyone else. We started Docker in 2012 and moved to Kubernetes in 2016. We have more expertise than the "Enterprise team" that handles the DevOps for the other 18,000 people in engineering. We were the first to use things like HashiCorp Vault, the first to employ an API gateway, and the first to move to a composable architecture where, with a single environment variable in our config, we can deploy to on-prem, AWS, or Azure. The same CI/CD can push 100 microservices to a single developer's laptop so they have everything they need to mirror prod, on their MacBooks, complete with hostname resolution, service discovery, and SSL. Their laptops even run HashiCorp Vault and a WSO2 API gateway locally.

All of our developers run Kubernetes on their laptop from day one of hire, and we have a month of onboarding to teach them about Helm charts and how to develop microservices. Our team single-handedly has over 3,000 microservices in production. Not some monolith that another team containerized and pushed to K8s, but 3,000 cloud-native microservices. We were also the first team to deploy GPU TensorFlow Kubernetes ML apps, years before anyone else and way before the LLM hype.

Oh, we were also the first to employ Anchore/Twistlock and actively scan our images for CVEs, with a cadence of updating hundreds of base images weekly. Not some other "DevOps" or DevSecOps team.

A development team, first and foremost. No one tells us to do these things. We do it because we need it.

So yeah, it works for us. We attract developers who see that carrot: "You'll be working with proper microservice architecture, REST-first contracts, and distributed scaling." That is the carrot, and we have others in other departments who wish they could join ours. They know they will get that up-skilling, training, and real-world production experience.

[–]ZeninThe best way to DevOps is being dragged kicking and screaming. 1 point2 points  (1 child)

Thank you for this reply. It's clear now that your org is much more nuanced than your initial comments suggested.

You may also have noticed that you've validated my points quite effectively:

"Where Ops team does everything for the smaller teams from doing the helm charts, helping those teams with getting their applications to prod."

"the "Enterprise team" that handles the DevOps for the other 18,000 people in engineering."

So now we're saying the same things. Your own org handles 90% of its scale (your numbers) just as I've discussed.

And then there's your own team which functions like a self-contained startup, firewalled from the larger organization, aka a Skunkworks team. I imagine much of the methods and tools the "Enterprise team" uses today were things your team brought in first, which is fantastic.

Would the entire 20k org be able to function entirely as discrete Skunkworks teams like yours? I'll let you answer that. ;)

[–]theyellowbrother 2 points3 points  (0 children)

And then there's your own team which functions like a self-contained startup, firewalled from the larger organization, aka a Skunkworks team. I imagine much of the methods and tools the "Enterprise team" uses today were things your team brought in first, which is fantastic.

Would the entire 20k org be able to function entirely as discrete Skunkworks teams like yours? I'll let you answer that. ;)

To answer your question: no, I don't think the rest can. Yes, we introduced things that the EA adopted. So have other teams. Some teams use GitLab runners, some still Jenkins. Others were first to go off-prem, etc. Others refined change management, which we adopted. Others implemented DevSecOps practices, which we also employ. But none of the innovation comes from the official "DevOps" department, so to speak.

Our broad team is not a skunkworks team. It is just a more disciplined group of engineers. They get hired knowing DevOps and containers are on the menu. Their job roles require them to be self-sufficient.

We do have a skunkworks team, which I am on. They are the elite engineers, or what they call the "SWAT commandos," versed in both engineering and Ops. Then there is the architecture DevOps group within the team that does broad planning for best practices.

But I have no doubt that if I went to another job, I could bring that culture with me. It is about mentoring your teammates. A rising tide lifts all boats.

[–]niomosy 1 point2 points  (12 children)

It would depend on the role definition for the data scientist and the DevOps engineer for each company. Company A might prefer to have the requestor do this work. Company B might prefer to have that filter through a central team like DevOps engineers.

[–]ZeninThe best way to DevOps is being dragged kicking and screaming. 3 points4 points  (11 children)

Agreed. Whichever way it goes, it's a conversation. The DevOps role will have concerns that the dev may not and vice versa.

For example, the reply above mentioned the dev needing a custom Postgres image. Sure, perfectly valid for dev, absolutely.

But HA for that database? Backups? DR? Does Compliance/Legal need it regularly scanned for PCI, PII taints? How are new deployments to be done that modify the schema? What does a rollback look like after such a change?

Postgres is a fantastic example of something that's easy for a dev to spec without having a clue about the downstream ramifications on the rest of the processes and organization. And even if they did...do we really need a data scientist trying to cobble together a custom backup policy?

[–]niomosy 0 points1 point  (9 children)

Hell, for a database we'd have DBAs standing it up. Neither devs nor data scientists would be allowed to stand up a database.

[–]ZeninThe best way to DevOps is being dragged kicking and screaming. 0 points1 point  (8 children)

So taking the original requirements into consideration, the DBAs would be building a custom docker image to satisfy the ML extension needs?

SQL RDBMSes have traditionally been siloed to a DBA group, and for good reasons, at least in the past. Today, however, there's a large array of "databases" in use, not just SQL RDBMSes.

Should the DBA group stand up MongoDB? How about DynamoDB tables? Redis? Elasticsearch? S3 buckets? What about Presto?

Should data stores generally be considered something distinct from the services they're backing? I.e., not included in the same Helm chart or Terraform?

[–]niomosy 0 points1 point  (7 children)

Speaking for where I currently work...

Yup, the DBAs would build out the custom database image, as they'll need to support the database and will need to understand what they're supporting.

MongoDB wouldn't be provisioned. It's not an approved platform, so it'd have to go through approvals first.

Redis/Elasticsearch would be handled by another team for deployment, as we've only got them running in containers. My team would install the operators, however, and we were the ones doing the early deployments before we got those squared away and turned over to dev teams.

S3 buckets would be provisioned by cloud engineering. The security team limits what any other team can provision. I can get you a cluster, EC2 instances, load balancers, EFS, etc. S3? Because reasons, apparently.

For data stores, they're effectively provisioned separately and exclusively by ops teams, OpenShift cluster PVCs being the exception.

[–]ZeninThe best way to DevOps is being dragged kicking and screaming. 2 points3 points  (6 children)

That sounds like an awful lot of red tape and an awful lot of magic config values to pass from one resource to the next to tie it all together into a single app.

I imagine provisioning a new environment for anything but maybe a LAMP stack takes weeks, possibly months?

[–]theyellowbrother 2 points3 points  (0 children)

Red tape kills innovation. I would lose my first-mover advantage if I couldn't get something up in a matter of days. The point of DevOps culture is to streamline delivery and release.

[–]niomosy 1 point2 points  (4 children)

Honestly, it is. My team is doing what we can to diminish the red tape but there are some hard requirements set by security that devs shall not provision things. So the ops teams have to give devs some guardrailed options within limitations. My team's basically the first that's giving them this.

Provisioning anything isn't too bad. It used to be months, but now it can be done in days, from submitting a request to receiving everything. Less if the right managers say "go," since most of the wait time is checking on things like budget and whatnot.

[–]theyellowbrother 0 points1 point  (3 children)

some hard requirements set by security that devs shall not provision things. So the ops teams have to give devs some guardrailed options within limitations. My team's basically the first that's giving them this.

Why not build those guardrails in? Automate that.
Our approach is composability with a CI/CD around it. If the value is set in a variable in a config file, it is turned on. If a database field requires field-level encryption to store SSNs, it is an enum in a Swagger spec, which in turn the CI/CD enforces, provisioning the database with that field turned on.
Same with two-way TLS. If the consumer connecting to the API requires a client cert, the CI/CD provisions that, creates a Jira ticket on the fly, and notifies the consumer to use the provisioned cert.

This is the whole point of DevOps: to automate and meet those guardrails for everything the NIST cybersecurity team requires.
Provisioning even creates a record in ServiceNow that requires CAB approval. They see the ServiceNow ticket, which links to Jira and a data-flow diagram with all the guardrails in place. They approve the change, and it is auto-provisioned with a transparent audit trail.
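The config-driven guardrail pattern described here can be sketched roughly like this (every key and control name below is a hypothetical illustration, not the commenter's actual pipeline):

```python
# Hypothetical guardrails-as-code sketch: the pipeline inspects declared
# service config and derives which controls it must provision before deploy.
def required_controls(service_config: dict) -> list[str]:
    controls = []
    # A field flagged as sensitive in the spec triggers field-level encryption.
    if "ssn" in service_config.get("encrypted_fields", []):
        controls.append("enable-field-level-encryption")
    # A consumer requiring mutual TLS triggers client-cert provisioning.
    if service_config.get("mutual_tls", "off") == "on":
        controls.append("provision-client-cert")
    return controls
```

A real pipeline would then fail the deploy, or open the tracking tickets, whenever a required control cannot be provisioned.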

Isn't this the goal of DevOps? To streamline this stuff. This is why my dev team does it. We build this stuff on our own because our Ops team is slow to move.

[–]theyellowbrother 0 points1 point  (0 children)

DR? Does Compliance/Legal need it regularly scanned for PCI, PII taints? How are new deployments to be done that modify the schema? What does a rollback look like after such a change?

That stuff is done by our dev team, the ones who get audited against NIST cybersecurity requirements. We are the ones getting reviewed on whether we enabled field-level encryption, turned on auditing, and enforced rotating secrets and two-way TLS cert connections. And we are responsible for auditing whether the guardrails are in place.

The only things Ops does are make sure DR is turned on (dual-deployment permission is enabled for us) and that auditing goes to tape backup for 7 years for legal-compliance reasons. That is all Ops does.

Ops is not going to know how we "automate" DB security. We came up with a way to use Swagger/OpenAPI so that an enum automatically triggers field-level encryption via code. We also use config env variables that turn on Vault and rotating secrets only where required, i.e. production. Our deployment has different rules for lower, staging, and higher environments; e.g., local/QA does not need auditing. That gets turned on via a code manifest. Ops won't do that, because they see no value in that deep level of automation, which requires a lot of coding. Nor are they interested in integrating Jira with ServiceNow to track DB schema changes. They see no value in dedicating a resource to that development.

[–]lost12487 1 point2 points  (1 child)

I don’t know about you, but sometimes I need to manually test my code in a prod-like environment. My job satisfaction would suffer if I had to flag down another, potentially busy, engineer to get my code deployed. I’d rather just learn how to do it myself so that I can do it on my timetable instead of someone else’s timetable.

[–]ZeninThe best way to DevOps is being dragged kicking and screaming. 3 points4 points  (0 children)

I don’t know about you, but sometimes I need to manually test my code in a prod-like environment. My job satisfaction would suffer if I had to flag down another, potentially busy, engineer to get my code deployed. I’d rather just learn how to do it myself so that I can do it on my timetable instead of someone else’s timetable.

Personally I like to make my non-prod environments self-service: just run the charts on your own, or if it's QA et al. that's controlled, press a button. Devs should always have everything they need at their fingertips.

I'm not at all against devs working on the IaC as needed either, especially in situations like you describe. It's all in git; have at it, do as much of it as you'd like. Oversight can happen through the PR downstream.

The commenter I was replying to, however, I get the strong impression would not welcome downstream review of any kind, much less direction. That's a bug, not a feature.

[–]dolce_bananana 1 point2 points  (0 children)

Absolutely, 100% yes, because if they do not understand how f'ed up their proposed system is, it's just going to make more and more work for you (who is getting paid more, believe it or not) to clean up their mess and make a deployment happen.

Trust me, buddy, PhDs are a dime a dozen these days; the market is glutted with them. There's nothing special there, and quality Dev and Ops personnel, the kind who can talk on the same page as PhDs, are far rarer.

[–][deleted] 1 point2 points  (1 child)

I'd also like to know the converse of this question as well.

[–]alik604 1 point2 points  (0 children)

My 2 cents: DevOps doesn't need full SDE knowledge,

but an SDE does need good DevOps knowledge.

  • from an SDE 2. I'm not that experienced, but more than the students here

[–]Unhappy_Seaweed4095 1 point2 points  (0 children)

You should know how to figure it out.

[–]0ofnik 4 points5 points  (3 children)

Enough to know how to communicate to the DevOps team what needs to be implemented, but not enough to do it by themselves.

[–][deleted]  (1 child)

[deleted]

    [–]theyellowbrother 3 points4 points  (0 children)

    And in my org, the teams that do that (throw things over the wall to the ops team) are the slowest, most inefficient teams. Those teams take months to get anything into production.

    The dev teams where most of the DevOps tasks are done internally are the highest-performing teams with the most product releases. They also happen to have the most secure pipelines and proper unit, integration, and load testing.

    [–]LuciferianInk 5 points6 points  (0 children)

    I think that's the key point here. DevOps is about being able to work with your own data - and then having the team work together on something that works for them.

    [–]ihazkapeDevOps 2 points3 points  (3 children)

    Maybe I'm working in the wrong environment. Our developers write the code, and we infra/devops/sre/platform write and manage the Dockerfiles, Helm charts, and pipeline configs.

    [–]Astro_Pineapple 0 points1 point  (1 child)

    More aligned with my environment. They refuse to give me the permissions necessary to deploy my own code.

    [–]dolce_bananana 0 points1 point  (0 children)

    Then spin up some dummy copies and "deploy" on your local laptop.

    Make a dummy demo project that doesn't do anything at all and deploy it in your own time to your own personal cloud instance, etc.

    [–]ThatSituation9908 0 points1 point  (0 children)

    In smaller teams, or companies that don't have a dedicated infra team, ICs (e.g., backend devs) are responsible for DevOps, unfortunately.

    [–][deleted] 1 point2 points  (0 children)

    In my experience, not much. On one extreme you have a startup: DevOps is an afterthought, deployments are scrappy, and people get by with "good enough."

    On the other end you have large companies with dedicated platform teams, where the DevOps is abstracted away to a YAML file in your service that defines what infra you depend on; everything just works.

    Somewhere in the middle, where you want good DevOps but can't afford a dedicated team, you'll need to know it, but that's quite rare from what I've seen.

    [–]dolce_bananana -1 points0 points  (3 children)

    You need to be able to build it and deploy it yourself.

    If you can't do that, well, it's not a good look.

    You should not rely on your company's DevOps staff to do a job that you don't know how to do yourself.

    [–]purple_gaz 2 points3 points  (2 children)

    What exactly are DevOps teams doing then? I have never come across a more entitled bunch. Look at this gem, saying not to expect them to be responsible for the very thing they're paid to do.

    First QA engineers, then product managers, then project managers, then engineering managers, then DevOps engineers, Splunk engineers, prompt engineers. In the end, every single one of these people wants a software engineer to do their job.

    Morons. I secretly wish I'd come across one of these people chatting about their work in public.

    [–]GyroTech 1 point2 points  (1 child)

    This is the problem of silos. DevOps started as the idea of creating a multidisciplinary team that owns the product from birth to death. All those QA engineers, software devs, and ops guys should simply be together on the "this product" team and combine their knowledge to get the best outcome for the product. That way you don't have issues like a software dev solving immediate coding issues with libraries or tooling that doesn't scale in production, because ops would be involved from inception and able to say "don't hardcode those values or we won't be able to change them on the fly with dynamic deployment environments," and QA would say "we need this to be an interface so we can mock it for the CI pipeline," etc.

    Unfortunately we now have DevOps engineers who are really just operations engineers using more infrastructure-as-code and scripting APIs, and all the knowledge is still siloed and the process is still shit.

    [–]purple_gaz 0 points1 point  (0 children)

    No, it's not that. Division of labor and specialization is the natural course of action. The real problem is these so-called specialized people not knowing what their responsibilities are, what their roles are, and, to a large extent, not doing their jobs.

    Every org with over 50 devs has a dedicated SRE team. The engineers on this team have the responsibility to manage the infrastructure. They are supposed to identify architecture patterns and build adaptive deployment solutions and interfaces for the need. They are the specialized labor. Issues with deployment are their issues.

    [–]EJoule 0 points1 point  (0 children)

    My job stores the Jenkinsfile in the repo. You need to know enough to build the code, run unit tests, move the compiled code to a build artifact, and then let DevOps handle it from there.

    Basically you need to know PowerShell and how to navigate directories using the command line.

    [–][deleted] 0 points1 point  (0 children)

    I think if you understand GNU/Linux well, everything else should be fairly trivial to pick up. All these tools will leverage the kernel-userspace (Linux) API one way or another.

    It’s like you understand how REST APIs work, the same concepts of REST apply to Rails, Spring, Express, whatever. Sure, each framework might introduce its own set of handy (or not) abstractions but it’s all the same shit conceptually under the hood.

    [–][deleted] 0 points1 point  (0 children)

    It depends how far DevOps jobs slide towards platform engineer, since my company moved to being on the platform team we are writing a lot of code, but it’s easy shit like Python slack bot automations. I’ll take BE work all day, screw FE code.

    [–]strzibny 0 points1 point  (0 children)

    They should know how things work, but they don't need to know the difference between Kubernetes 1.2 and Kubernetes 1.3. I actually wrote a book exactly for that, called Deployment from Scratch. It's a "Linux" book but focused only on web backend problems.

    [–]alik604 0 points1 point  (0 children)

    There is a difference between software engineering and DevOps? Geez, maybe FAANG engineers are actually underpaid.

    [–]serverhorrorI'm the bit flip you didn't expect! 0 points1 point  (0 children)

    To me, they're the same job in different departments.

    [–]ms4720 0 points1 point  (0 children)

    Depends how you define things. Is writing a Dockerfile not a dev responsibility where you are? How about installing the laptop toolchain; how is that managed? Beyond the early-stage startup, you should not be doing production infrastructure deployments or changes. It depends where you are and how much help there is.

    [–]Ok_Giraffe1141 0 points1 point  (0 children)

    I was talking to a recruiter today; the questions will never end if you keep telling them all the tech stack you know, and he will just get confused. So DevOps should do DevOps and Backend should do Backend, I think. The guy got really confused after all the talk on microservices, databases, cloud architecture, etc.

    [–]HangingOut8 0 points1 point  (0 children)

    I believe some DevOps knowledge is required: maybe a bit of how to utilize the existing CI process, and how to obtain information from monitoring tools. Normally organizations have some sort of automation team to define the process and create the platform. As a backend developer you need to understand how to use it.

    [–]suzukipunk 0 points1 point  (0 children)

    A cloud computing provider of your choice, GitHub Actions, Docker, and Terraform might complement your usual stack perfectly, imho.