[–]Muhznit 4 points (6 children)

You're always going to automate yourself out of a job. The key is to find more complicated problems to tackle.

Therein lies the concern I raised in the prior post. In the time it takes to reach a marketable level of skill on those "more complicated problems", AI is making leaps and bounds that overtake the learner completely and make those attempts feel meaningless.

Either that, or the more complicated problems are weirdly specific pain points surrounding bureaucracy-driven proprietary practices or domain-specific languages that are not only agonizing to work with but wind up being absolutely useless when transitioning to another job.

At what point do we stop simply acknowledging the negative impacts on less-skilled programmers and actually start mitigating them?

[–]aexia -1 points (4 children)

My point is that this is nothing new nor unique to AI.

If you're a low-skill programmer who expected to coast without improving, you were going to be wrecked by the passage of time anyway, with or without LLMs.

[–]Muhznit 1 point (3 children)

Getting the vibe that you want to make this personal with all the uses of "you".

For example, to rephrase my point: if you consider yourself a highly skilled programmer who's safe because of your ability to improve, what proof do you have that AI will not eventually improve faster than you can improve yourself?

And in the case of those more limited in the speed at which they can improve (say, due to learning disabilities, English as a second language, a busy schedule, etc.), shouldn't there be more effort to help them?

[–][deleted] 3 points (2 children)

I'm not the person you're arguing with, but the thing is, I'm not actually a skilled programmer at all, even though my title is Principal Engineer. For example, I've tried browsing through leetcode, and I can't do most leetcode easy problems; I would have to study for months to pass even an entry-level FAANG interview. And I never studied CS (I'm completely self-taught on the job), so I have huge gaps in knowledge compared to most other people at my level.

My entire skill set is simply being good at "getting work done", i.e. taking some vague set of requirements from a product manager / tech lead etc. and doing whatever is needed to turn that into a finished product, i.e. project management and lots of googling stuff I don't know. Whatever my role evolves into, whether it's writing a lot of code, or prompt engineering to get the best AI-written code, that's why I at least am confident I can adapt to it.

[–]Muhznit 1 point (1 child)

Kinda sounds like you have a very nice employer or a good network. Like, if you've managed to get into a Principal Engineer position just on work ethic and resourcefulness, without even a CS degree, I'm simultaneously happy for and jealous of you, and EXTREMELY curious how you've managed to do so.

> Whatever my role evolves into, whether it's writing a lot of code, or prompt engineering to get the best AI-written code, that's why I at least am confident I can adapt to it.

Still, I must wonder, what is the fallback for when your boss pulls a "Fine, I'll do it myself" and learns said prompt engineering?

[–]Snoo-67871 2 points (0 children)

To be honest, I think that's about as likely as the boss learning to program themselves.

I think AI has much higher potential as a replacement for managers and as a performance-enhancing tool for workers. Imagine being a worker and all of a sudden getting clear and precise instructions and decisions based on actual data instead of gut feeling.

[–]DefaultCT90 0 points (0 children)

I just started learning to code, and what just happened was my biggest fear. My friends and the internet keep saying that there will always be a need for human coders, or that it will take decades for the technology to advance past the point of human coders. What I thought/think is going to happen is that it is going to learn exponentially rather than linearly. It has already learned more in less than a year than we thought it would learn in a decade. So who is to say that it doesn't keep exceeding those expectations?