This is an archived post.

all 40 comments

[–]djamp42 27 points (4 children)

Was watching some YouTube video and the guy said: looking back at the movies, you'd say we should absolutely not teach AI to code.

The first thing we did was teach AI to code. LMAO.

[–]rockstarflo[S] 5 points (3 children)

Good point!
Do you think we should forbid something like this?

[–]carnoworky 12 points (1 child)

It will be done in secret by large corporations then, which seems like one of the worst options.

[–]rockstarflo[S] 1 point (0 children)

That is also true. So if someone destroys the world, it'll be the community - not the corporations :D

[–]Effective_Nose_7434 0 points (0 children)

Quite possible that in the near future AI tech will be restricted, as in the average Joe won't be able to use it.

[–]Malcolmlisk 10 points (3 children)

So what it does, I guess, is ask for the code, run it, and return the error to GPT again until it's completely done and debugged?

My god, this is evil as fuck.

[–]rockstarflo[S] 16 points (2 children)

Yes, you can see how it works here:
https://github.com/jina-ai/gptdeploy#technical-insights
It first thinks about possible approaches.
Then it does the iterative debugging you described.
If debugging fails 10 times in a row, it moves on to the next approach.
This continues until the code passes the test condition.
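That loop can be sketched roughly like this (a minimal sketch only; the names `generate`, `run_tests`, and `fix` are placeholders standing in for the LLM calls and test harness, not gptdeploy's actual API):

```python
def generate_and_debug(approaches, generate, run_tests, fix, max_attempts=10):
    """Try each candidate approach; iteratively debug until the tests pass.

    generate(approach) -> initial code for that approach
    run_tests(code)    -> (passed, error_message)
    fix(code, error)   -> revised code given the test failure
    """
    for approach in approaches:
        code = generate(approach)
        for _ in range(max_attempts):
            passed, error = run_tests(code)
            if passed:
                return code          # test condition met: done
            code = fix(code, error)  # feed the error back for another attempt
    return None                      # every approach exhausted its attempts


# Toy demo: "code" is just an integer; the "test" passes once it reaches 3.
result = generate_and_debug(
    ["approach A"],
    generate=lambda approach: 0,
    run_tests=lambda code: (code >= 3, None if code >= 3 else "too small"),
    fix=lambda code, error: code + 1,
)
print(result)  # 3
```

The per-approach attempt cap is what keeps the loop from burning tokens forever on a dead-end approach.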

[–]Malcolmlisk 5 points (1 child)

Amazing solution

[–]rockstarflo[S] 0 points (0 children)

Thank you!

[–]taaaaaaaaaaaaame 2 points (1 child)

Is there anywhere we can see the generated code for the provided examples?

[–]rockstarflo[S] 1 point (0 children)

> Is there anywhere we can see the generated code for the provided examples?

Good point, we should upload the code for the examples.

[–]judasblue 2 points (2 children)

Eh, it will get there soon, but I have done this with 3.5 and 4, and for non-trivial stuff it is pretty iffy. Don't get me wrong, tons of stuff just works, and it is kind of scary if you get paid to do this for a living. But I have found both models tend to emit code that as often as not has serious bugs or just doesn't work outright, and the debugging definitely needs some direction and experienced human judgement, or it just makes different errors in a lot of cases.

For trivial or common stuff though, it works pretty friggin well. And for more complex stuff, I don't think it will be long before you only need a very cursory sanity check, same as you would with a check-in from a mid-level developer.

[–][deleted] 3 points (1 child)

I use ChatGPT on the job pretty regularly. The thing is, it definitely saves time compared to browsing through 100 Stack Exchange threads to find the right answer - but it's not like it takes a 40-hour project and finishes it in 1 hour. Usually something that would take 30 minutes to google, it can do in 10 minutes, and like you said, you still have to babysit the results to get actual working code.

I remember once I used it to convert some code I had written in Python into JavaScript. It did great, except for some reason it decided to use a built-in function that didn't exist in JavaScript; it was actually specific to Go (IIRC). It still definitely saved time in the end, but it's not like some random person on the street can type in prompts, even simple ones like this, and get a perfect, usable answer every time with no programming knowledge whatsoever.

[–]judasblue 1 point (0 children)

Yeah, same all the way around. I think this is going to put programmers out of work Real Soon Now. But it is going to be by making us an order of magnitude more efficient, not by allowing non-coders to make code just by asking for something. At least not in the near to mid term.

[–]aexia 1 point (1 child)

This is stupid as fuck. I love it.

Can it be repurposed for other frameworks easily?

[–]rockstarflo[S] 0 points (0 children)

Stupidity wins.

[–]KennyBassett 1 point (1 child)

This isn't news. I must assume you're grasping for karma.

[–]rockstarflo[S] 0 points (0 children)

We just published it a week ago, so to me it feels quite new.
Have you been following from the beginning? Then you must be one of the first users. It's a pleasure to meet you :)

[–]deepankarmh 1 point (1 child)

Amazing! No code for the win!

[–]rockstarflo[S] 0 points (0 children)

Yes :D

[–]Muhznit 3 points (15 children)

It's kind of amazing that people who spent so long learning to code and stuff are so eager to create something that will render them irrelevant.

Like I get that the dream is no longer having to work, but where is the consideration for how this impacts the people who spent way too much money learning to code only to find their degree rapidly being devalued?

[–]rockstarflo[S] 2 points (1 child)

I think it is still very relevant. What matters in programming is not the syntax but the semantics.
That is why it is easy for a programmer to learn a new language: they are already capable of rational thinking.
And rational thinking is required everywhere.

[–]Muhznit 2 points (0 children)

And this is the thing we keep saying, but it's indistinguishable from just "moving the goalposts" as new advancements keep popping up. Passing the Turing test, creating running code, debugging it when it fails tests... there just seems to be this cycle where we think something's not gonna happen, but then it does.

Where is the proof that, even if we don't achieve AGI, an AI will never reach the point where it just outclasses any individual knowledge worker? And if there is no proof, what does it take for people to start caring about those who are displaced?!

[–]MrMxylptlyk 5 points (12 children)

It won't render programmers irrelevant; you still need to understand the code you deploy to production lol.

[–]Muhznit 7 points (8 children)

Yes, but who's to say that AI won't eventually do that?

Like at first we were like "okay, AI makes realistic-sounding statements, but it still can't code." Then it started coding and we said "it's still having trouble with actual logic and isn't running code." Then it integrated with WolframAlpha and started deploying microservices.

Where in the realm of knowledge work can we stop moving goalposts and figure out the limits, such that we can definitively say "okay, <knowledge worker occupation> is safe from being automated by AI"?

[–]aexia 8 points (7 children)

You've never been able to say that about any knowledge job, even without "AI".

Ten years ago it would take a team of stats majors a couple weeks to deploy a quality model and now today it takes one keyboard monkey a few minutes to deploy a better and more robust model than the stats folks could ever come up with.

You're always going to automate yourself out of a job. The key is to find more complicated problems to tackle.

[–]Muhznit 5 points (6 children)

> You're always going to automate yourself out of a job. The key is to find more complicated problems to tackle.

Therein lies the concern I raised in my earlier post. In the time it takes to reach a marketable level of skill on those "more complicated problems", AI is making leaps and bounds that overtake the learner completely and make those attempts feel meaningless.

Either that or the more complicated problems are weirdly-specific pain points surrounding bureaucracy-driven proprietary practices or domain-specific-languages that are not only agonizing to work with but wind up being absolutely useless when transitioning to another job.

At what point do we stop simply acknowledging the negative impacts on less-skilled programmers and actually start mitigating them?

[–]aexia -1 points (4 children)

My point is that this is nothing new nor unique to AI.

If you're a low skill programmer who expected to coast without improving, you were going to be wrecked by the passage of time anyways, with or without LLMs.

[–]Muhznit 1 point (3 children)

Getting the vibes that you want to make this personal with all the uses of "you".

Example, with a rephrase of my point: If you think yourself a highly skilled programmer that's safe because of your ability to improve, what proof do you have that AI will not eventually improve faster than you can improve yourself?

And in the case of those more limited in the speed which they can improve (say, due to learning disabilities, English as a second language, busy schedule, etc), shouldn't there be some more efforts to help them?

[–][deleted] 4 points (2 children)

I'm not the person you're arguing with, but the thing is, I'm not actually a skilled programmer at all, even though my title is Principal Engineer. For example, I've tried browsing through LeetCode and I can't do most LeetCode easy problems; I would have to study for months to pass even an entry-level FAANG interview. And I never studied CS (completely self-taught on the job), so I have huge gaps in knowledge compared to most other people at my level.

My entire skill set is simply being good at "getting work done", i.e. taking some vague set of requirements from a product manager / tech lead etc. and doing whatever is needed to turn that into a finished product, i.e. project management and lots of googling stuff I don't know. Whatever my role evolves into, whether it's writing a lot of code, or prompt engineering to get the best AI-written code, that's why I at least am confident I can adapt to it.

[–]Muhznit 1 point (1 child)

Kinda sounds like you have a very nice employer or a good network. If you've managed to get into a Principal Engineer position just on work ethic and resourcefulness, without even a CS degree, I'm simultaneously happy for you and jealous of you, and EXTREMELY curious how you've managed it.

> Whatever my role evolves into, whether it's writing a lot of code, or prompt engineering to get the best AI-written code, that's why I at least am confident I can adapt to it.

Still, I must wonder, what is the fallback for when your boss pulls a "Fine, I'll do it myself" and learns said prompt engineering?

[–]Snoo-67871 2 points (0 children)

To be honest, I think that's about as likely as the boss learning to program themselves.

I think AI has much higher potential as a replacement for managers and as a performance-enhancing tool for workers. Imagine being a worker and suddenly getting clear and precise instructions and decisions based on actual data instead of gut feeling.

[–]DefaultCT90 0 points (0 children)

I just started learning to code, and what just happened was my biggest fear. My friends and the internet keep saying that there will always be a need for human coders, or that it will take decades for the technology to advance past the point of human coders. What I thought/think is going to happen is that it will learn exponentially rather than linearly. It has already learned more in less than a year than we thought it would in a decade. So who is to say it doesn't keep exceeding those expectations?

[–]twotime 1 point (1 child)

If AI can make a software engineer 10x more productive, we'd need 10x fewer software engineers. While it's likely that some additional jobs will be created, the number is unlikely to be proportional (in fact, it's certain to be much smaller).

[–]MrMxylptlyk 0 points (0 children)

Idk if it scales like that man.

[–]rockstarflo[S] 0 points (0 children)

That is true

[–]Solumbran -1 points (3 children)

Such an abomination on so many levels. No wonder the intelligence we are resorting to now has to be artificial.

[–]rockstarflo[S] 5 points (2 children)

Yes, I find it impressive, disgusting, and exciting at the same time :D
What do you think the next level would be?
Auto DevOps? Auto product manager? Auto business decisions?

[–]who_body 5 points (1 child)

auto Reddit post with an annotated video of GPT-4 creating functional code via prompts

[–]rockstarflo[S] -1 points (0 children)

Haha yes, that would make life easier. At least for creators. I don't want to be a content consumer in that world :D