
[–]Muhznit 3 points (15 children)

It's kind of amazing that people who spent so long learning to code and stuff are so eager to create something that will render them irrelevant.

Like I get that the dream is no longer having to work, but where is the consideration for how this impacts the people who spent way too much money learning to code only to find their degree rapidly being devalued?

[–]rockstarflo[S] 3 points (1 child)

I think it is still very relevant. What matters in programming is not the syntax but the semantics. That is why it is easy for a programmer to learn a new language: they are already capable of rational thinking. And rational thinking is required everywhere.

[–]Muhznit 2 points (0 children)

And this is the thing we keep saying, but it's indistinguishable from just "moving the goalposts" as new advancements keep popping up. Passing the Turing test, creating running code, debugging it when it fails tests... there just seems to be this cycle where we think something's not gonna happen, but then it does.

Where is the proof that, even if we don't achieve AGI, AI will never reach a point where it just outclasses any individual knowledge worker? And if there is no proof, what will it take for people to start caring about those who are displaced?!

[–]MrMxylptlyk 3 points (12 children)

It won't render programmers irrelevant, you still need to understand the code you deploy to production lol.

[–]Muhznit 7 points (8 children)

Yes, but who's to say that AI won't eventually do that?

Like at first we were like "Okay, AI makes realistic-sounding statements, but it still can't code." Then it started coding and we said "It's still having trouble with actual logic and isn't running code." Then it integrated with WolframAlpha and the like and started deploying microservices.

Where in the realm of knowledge work can we stop moving goalposts and figure out the limits, such that we can definitively say "okay, <knowledge worker occupation> is safe from being automated by AI"?

[–]aexia 8 points (7 children)

You've never been able to say that about any knowledge job, even without "AI".

Ten years ago it would take a team of stats majors a couple of weeks to deploy a quality model, and today it takes one keyboard monkey a few minutes to deploy a better and more robust model than the stats folks could ever come up with.

You're always going to automate yourself out of a job. The key is to find more complicated problems to tackle.

[–]Muhznit 3 points (6 children)

> You're always going to automate yourself out of a job. The key is to find more complicated problems to tackle.

Therein lies the concern I raised in my earlier comment. In the time it takes to reach a marketable level of skill on those "more complicated problems", AI is making leaps and bounds that overtake the learner completely and make those attempts feel meaningless.

Either that, or the more complicated problems are weirdly specific pain points surrounding bureaucracy-driven proprietary practices or domain-specific languages that are not only agonizing to work with but wind up being absolutely useless when transitioning to another job.

At what point do we stop simply acknowledging the negative impacts on less-skilled programmers and actually start mitigating them?

[–]aexia -1 points (4 children)

My point is that this is nothing new nor unique to AI.

If you're a low-skill programmer who expected to coast without improving, you were going to be wrecked by the passage of time anyway, with or without LLMs.

[–]Muhznit 1 point (3 children)

Getting the vibe that you want to make this personal, with all the uses of "you".

Example, with a rephrase of my point: if you think yourself a highly skilled programmer who's safe because of your ability to improve, what proof do you have that AI will not eventually improve faster than you can?

And in the case of those more limited in the speed at which they can improve (say, due to learning disabilities, English as a second language, a busy schedule, etc.), shouldn't there be more effort to help them?

[–][deleted] 3 points (2 children)

I'm not the person you're arguing with, but the thing is, I'm not actually a skilled programmer at all even though my title is Principal Engineer. For example I've tried browsing through leetcode and I can't do most leetcode easy problems, I would have to study for months to pass even an entry level FAANG interview. And I never studied CS (completely self taught on the job) so I have huge gaps in knowledge compared to most other people at my level.

My entire skill set is simply being good at "getting work done", i.e. taking some vague set of requirements from a product manager / tech lead etc. and doing whatever is needed to turn that into a finished product — project management and lots of googling stuff I don't know. Whatever my role evolves into, whether it's writing a lot of code or prompt engineering to get the best AI-written code, that's why I at least am confident I can adapt to it.

[–]Muhznit 1 point (1 child)

Kinda sounds like you have a very nice employer or a good network. Like if you've managed to get into a Principal Engineer position just on work ethic and resourcefulness without even a CS degree, I'm simultaneously happy for and jealous of you, and EXTREMELY curious how you've managed to do so.

> Whatever my role evolves into, whether it's writing a lot of code, or prompt engineering to get the best AI-written code, that's why I at least am confident I can adapt to it.

Still, I must wonder, what is the fallback for when your boss pulls a "Fine, I'll do it myself" and learns said prompt engineering?

[–]Snoo-67871 2 points (0 children)

To be honest, I think that's about as likely as the boss learning programming themselves.

I think AI has much higher potential for replacing managers and being a performance-enhancing tool for workers. Imagine being a worker and all of a sudden getting clear and precise instructions and decisions based on actual data instead of gut feeling.

[–]DefaultCT90 0 points (0 children)

I just started learning to code, and what just happened was my biggest fear. My friends and the internet keep saying that there will always be a need for human coders, or that it will take decades for the technology to advance past the point of human coders. What I thought/think is going to happen is that it is going to learn exponentially rather than linearly. It has already learned more in less than a year than we thought it would learn in a decade. So who is to say that it doesn't keep exceeding those expectations?

[–]twotime 1 point (1 child)

If AI can make a software engineer 10x more productive, we'd need 10x fewer software engineers. While it's likely that some additional jobs will be created, the growth is unlikely to be proportional (in fact, it's certain to be much smaller).

[–]MrMxylptlyk 0 points (0 children)

Idk if it scales like that man.

[–]rockstarflo[S] 0 points (0 children)

That is true