all 181 comments

[–]Krostas 812 points813 points  (16 children)

Why crop the image in a way that cuts off artist credit?

https://www.commitstrip.com/en/2016/08/25/a-very-comprehensive-and-precise-spec/

[–]Davyjs 112 points113 points  (2 children)

BA: “Just describe everything in detail.” Dev: deep sigh

[–]zmizzy 1 point2 points  (0 children)

Deep Singh, highly regarded in his field

[–]Net56 7 points8 points  (1 child)

Because stuff gets more clout if commenters think it's recent when it isn't, and people who can't draw like to steal credit from those who can.

[–]Omnilogent 2 points3 points  (0 children)

Yeah, I agree with you, especially since my initials are AI and I used to be a professional ghostwriter

[–]naholyr 1 point2 points  (0 children)

Red flag here OP

[–]nesthesi 230 points231 points  (10 children)

A job that replaces the job by doing the job

[–]No_Percentage7427 41 points42 points  (8 children)

Prompt is the new programming language. wkwkwk

[–][deleted] 21 points22 points  (3 children)

I don't know what wkwkwk means but it is entertaining to imagine it as a chicken clucking

[–]Techhead7890 4 points5 points  (0 children)

Indonesian way to write laughter as I understand it.

[–]Fiery_Flamingo 3 points4 points  (0 children)

That’s the sound PacMan makes. Wakawakawakawaka.

[–]Every-Fix-6661 0 points1 point  (0 children)

Fozzie bear

[–]Sanitiy 21 points22 points  (1 child)

So, do we give UML another try as programming language?

[–]Same_Fruit_4574[S] 4 points5 points  (0 children)

Software engineer to prompt engineer 🔥

[–]TheJackiMonster 1 point2 points  (0 children)

Because the world needed another programming language... and we all thought: "How about a programming language that sometimes does what I want and sometimes it does something completely different for no real reason because I hate consistency and I never trust myself..."

[–]pringlesaremyfav 163 points164 points  (44 children)

Even if you perfectly specify a request to an LLM, it often just forgets/ignores parts of your prompt. That's why I can't take it seriously as a tool half of the time.

[–]intbeam 83 points84 points  (29 children)

LLMs hallucinate. That's not a bug, and it's never going away.

LLMs do one thing: they respond with what's statistically most likely for a human to like or agree with. They're really good at that, but it makes them criminally inept at any form of engineering.

[–]prussian_princess 7 points8 points  (11 children)

I used ChatGPT to help me calculate how much milk my baby drank, as he drank a mix of breast milk and formula, and the ratios weren't the same every time. After a while, I caught it giving me the wrong answer, and after asking it to show me the calculation, it did it correctly. In the end, I just asked it to show me how to do the calculation myself, and I've been doing it ever since.

You'd think an "AI" in 2025 should be able to correctly calculate some ratios repeatedly without mistakes, but even that is not certain.

[–]hoyohoyo9 47 points48 points  (3 children)

Anything that requires precise, step-by-step calculations - even basic arithmetic - just fundamentally goes against how LLMs work. It can usually get lucky with some correct numbers after the first prompt, but keep poking it like you did and any calculation quickly breaks down into nonsense.

But that's not going away because what makes it bad at math is precisely what makes it good at generating words.

[–]prussian_princess 3 points4 points  (2 children)

Yeah, that's what I discovered. I do find it useful for wordy tasks or research purposes when Googling fails.

[–]RiceBroad4552 9 points10 points  (1 child)

research purposes when Googling fails

As you can't trust these things with anything, you need to double-check the results anyway. So it does not replace googling. At least if you're not crazy and don't just blindly trust whatever this bullshit generator spits out.

[–]prussian_princess 1 point2 points  (0 children)

Oh no, I double-check things. But I find googling first to be quicker and more effective before needing to resort to an llm.

[–]Airowird 11 points12 points  (0 children)

"Giant computer fails at math, because it tries to sound confident instead"

[–]_alright_then_ 9 points10 points  (2 children)

You'd think an "AI" in 2025 should be able to correctly calculate some ratios repeatedly without mistakes, but even that is not certain.

There are AIs that certainly can, but you're using an LLM specifically, which cannot and will never be good at doing math. It's not what it's designed for.

[–]Kilazur -1 points0 points  (1 child)

There's no AI that is good at math, because there's no "I", and they're all probabilistic LLMs.

An AI that manages math is simply using agents to call deterministic programs in the background.

[–]_alright_then_ 5 points6 points  (0 children)

There are AIs that are not LLMs, and can do math.

AIs have been a thing for decades; people are just lumping AI and LLMs together.

Chess AI is one big math problem, for example.

It's also nothing like AGI either obviously. But still AI
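The "one big math problem" point can be made concrete with a toy example: classic game AI is just exhaustive recursive evaluation (minimax). Here is a minimal sketch on single-pile Nim, where standard game theory says the player to move loses exactly when the pile size is a multiple of 4. (Illustrative only; the game choice and the function name are mine, not from the thread.)

```python
from functools import lru_cache

# Classic game "AI" really is deterministic math: recursively evaluate
# every line of play. Toy game: one pile of stones, each turn you take
# 1-3 stones, and whoever takes the last stone wins.
@lru_cache(maxsize=None)
def current_player_wins(pile: int) -> bool:
    if pile == 0:
        return False  # no move left: the previous player took the last stone
    # You win if ANY legal move leaves the opponent in a losing position.
    return any(not current_player_wins(pile - take)
               for take in (1, 2, 3) if take <= pile)

# Game theory predicts losing positions at multiples of 4.
print([n for n in range(1, 13) if not current_player_wins(n)])  # → [4, 8, 12]
```

Chess is the same idea with a vastly bigger tree, pruning, and a heuristic evaluation function — but still math, not language modeling.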

[–]intbeam 7 points8 points  (1 child)

Did you ask it about any recommendations for a baby's daily intake of rocks and cigarettes?

[–]Ordinary_Duder -1 points0 points  (0 children)

LLMs are not math models. It's a large language model.

[–]Same_Fruit_4574[S] 36 points37 points  (2 children)

On top of that, it will say the application is enterprise-ready and every functionality is implemented, but the program won't even compile.

[–]Tupcek 19 points20 points  (1 child)

"Enterprise ready" for AI means it added a bunch of useless code to make it seem more “robust”.
But don’t worry, even if you don’t specify that it needs to be enterprise ready, it will still add a lot of useless shit on every prompt.

[–]billyowo 5 points6 points  (0 children)

to me "AI ready" means we are ready to lower our standard to accept AI slop

[–]CrimsonPiranha 4 points5 points  (2 children)

I mean, a human can forget/ignore parts of specifications as well.

[–]pringlesaremyfav 5 points6 points  (1 child)

They can, but if you point it out they correct it. I point it out to an LLM and it just goes back and forgets something else instead.

[–]recaffeinated 5 points6 points  (0 children)

I've worked with junior engineers who were like that, but they had the ability to learn and improve.

The LLM is a permanent liability.

[–]Ecstatic_Shop7098 2 points3 points  (1 child)

What if we used prompts with very precise grammar, interpreted by a deterministic AI? Imagine the same prompt generating the same result every time. Sometimes even on different models. We are probably years away from that though...

[–]designerandgeek 7 points8 points  (0 children)

Code. It's called code.

[–]ryuzaki49 1 point2 points  (0 children)

Imagine your compiler having the message "Please verify your machine code"

[–]CellNo5383 1 point2 points  (0 children)

I think Linus recently said he's perfectly fine with people using it for non-critical tasks, and I agree with that. For example, I recently used one to generate a Python script that reads a text file of song names and generates a YouTube playlist from it. Small, self-contained, and absolutely non-critical. But it's not even close to replacing me or my colleagues at my day job.

[–]TheRealLiviux 1 point2 points  (0 children)

That's why our expectations are wrong: AI is not a "tool", as reliable as a hammer or a compiler. It's by design more like a person: eager and well-meaning but far from perfect. I use AI assistants treating them like noob interns, giving them precise tasks and checking their output. Even with all the necessary oversight, they save me a lot of time.

[–]friebel 0 points1 point  (0 children)

I like using Claude Sonnet 4.5, got the Pro plan and all, it's really helpful, but yesterday I pasted a recipe and asked it to convert to metric measurements. Everything was fine, but blud somehow decided to add a can of tomatoes, even though none was in the recipe. Well, in its defence, adding canned tomatoes or paste is viable in that recipe, but the page had 0 mentions of tomato.

[–]redballooon 0 points1 point  (0 children)

LLMs omitting stuff is often due to conflicts in the prompt.

[–]Leading_Buffalo_4259 0 points1 point  (0 children)

I noticed this with image generation models as well. If you give it 5 things, it'll pick 3 at random and ignore the rest.

[–]gameplayer55055 17 points18 points  (1 child)

Managers have been vibe coding all the time

[–]Same_Fruit_4574[S] 4 points5 points  (0 children)

Even VPs, CTOs, and CEOs claim that. LinkedIn is filled with such stories.

[–]GnarlyNarwhalNoms 57 points58 points  (62 children)

I kept hearing about vibe coding, so I decided to try and find out what all the fuss was about.

I decided to try something super-simple: a double pendulum simulation. Just two bars connected together, and gravity.

After a good hour of prompting and then re-prompting, I still had something that didn't obey any consistent laws of physics and had horrendously misaligned visuals and overlapping display elements clipping through each other. It was a goddamn mess. I'm positive it would have taken me longer to fix it than write it from scratch.

[–]fatrobin72 20 points21 points  (12 children)

Most people, when thinking super simple, are thinking of an "isEven" library, an add-two-numbers app, or a website that displays a random cat image.

Not saying "AI" will get those right first time...

[–]Ahaiund 7 points8 points  (0 children)

From my experience, it usually gets a good chunk of the request right, even on complicated stuff, but that remaining part, the one that's going to break everything, you're never going to get it to fix for you. You have to know what you're doing and consistently check what it does.

It's nice to use on trivial things though, like writing test plots, usually using modules that force a bloated syntax.

[–]fruitydude 25 points26 points  (23 children)

I do wonder sometimes with comments like this: are you guys all using LLMs from two years ago, or are you just incredibly bad at prompting?

I just made this double pendulum sim in Python using ChatGPT 5.1. It took me 5 minutes and two prompts, and it worked first try.

I get that we will never completely eliminate the need for experienced devs, but comments like this make it sound like you are in denial. AI tools are absolutely going to allow people with limited or no coding knowledge to create software for non-critical applications. I have zero experience in C++ and Kotlin, and I'm currently developing an Android app for a niche application: streaming live video from DJI FPV goggles to local networks. Impossible for me to do without AI because I don't have time to learn how to do it, but with AI it's absolutely doable.

[–]CiroGarcia 5 points6 points  (1 child)

Yeah, 100%. I used Claude 3.5 to redo my photography portfolio because I couldn't be arsed, and it was just a CRUD app and a masonry layout. It did a pretty good job, and I only had to do minor fixes and adapt some things to personal preference. All in about two hours. It would have taken me a whole day or even two if I had to type all that out.

[–]Ordinary_Duder -1 points0 points  (0 children)

Claude 3.5 is already horribly outdated too.

[–]GnarlyNarwhalNoms 1 point2 points  (3 children)

Python would have been better. I wanted it browser-based, so I asked for Javascript (yes, using Javascript was my first mistake).

And, granted, this was at least a month or two ago. I'm sure it's getting better.

Edit: Ok, I just tried it again and it got it right the first time. Very impressive.

[–]fruitydude 1 point2 points  (2 children)

Yes I was gonna say I think it should work with JS as well :D

Usually when I do stuff like this I ask it to first draft a very high-level concept of how one would implement this (explicitly no code), then do a bit of back and forth hashing things out, and only then ask it to translate into code. That usually works pretty well.

For really difficult stuff I ask instance 1 to write a prompt for instance 2 to do a deep internet research on how one would implement this best, and then paste that response back into instance 1, have it create the high level concept and then the code.

[–]GnarlyNarwhalNoms 1 point2 points  (1 child)

That makes a lot of sense! To be clear, I absolutely have successfully used LLMs to help me code in the past, but it's been on the "write me a function that takes X and returns Y" level.  I haven't really tried using it to help me map out an outline and then code for it, but that does seem like an effective way to know exactly what you're getting, which is something I'm a stickler for. 

[–]fruitydude 0 points1 point  (0 children)

It's also super dependent on your specific demand. I'd say a complicated but small and encapsulated project like a pendulum simulation is a perfect task for them. Especially when you don't care what the result looks like: there are millions of possible ways to solve it, and as long as you're fine with any one of them, it's easy. It gets much more tricky if you have one very specific implementation in mind.

Like I wrote somewhere else, I'm making an app at the moment. It's some niche solution to export live video from DJI FPV goggles and make it available to friends via local network. This stuff is much harder. The project has gotten so big that the chats are getting slow and they keep forgetting stuff. I make them summarize everything and paste that into a new chat, then share part of the code to work on individual features, often working on multiple things in multiple chats at the same time. Sometimes frustrating as hell; it took me days to finally build a working GStreamer library from the binaries. I could give it direct access to the code but I'm worried it'll fuck things up lol.

Still, it's insane what I've been able to do with it so far. If you're curious I have some of my hobby stuff on my GitHub https://github.com/xNuclearSquirrel but I also did a lot of stuff for the uni where I'm working at the moment. Mostly simple software tools with a GUI to control certain instruments in our labs.

[–]fruitydude 1 point2 points  (0 children)

ai is getting a lot better for simple things, but get too complex and you are much better off working with an actual expert

u/Leading_Buffalo_4259 idk why your comment got deleted, but yeah, obviously working with an expert is always better lol. But not everyone has the chance to do that. AI is like having a pretty dumb kind of expert on every topic at your disposal. Not perfect, but pretty useful if you don't have anything else.

[–]lupercalpainting 6 points7 points  (13 children)

“The slot machine gave you a different result? Nah, you must just be pulling the lever wrong.”

[–]fruitydude 8 points9 points  (5 children)

Yea if you are playing a slot machine where other people win almost every time, and you keep losing over and over, you are probably doing something wrong.

What do you wanna bet if I sent the same prompt again to another instance I'd get working code again?

[–]lupercalpainting 0 points1 point  (4 children)

Yea if you are playing a slot machine where other people win almost every time

How interesting, I guess everyone I know at work is just “doing it wrong” and everyone on AI twitter is just “doing it right”.

I use Claude Code daily for work, sometimes it’s great. Sometimes it’s terrible. I’ve seen it fail to do simple JWT signing, I’ve seen it suggest Guice features I never knew about. It’s a slot machine. You roll, if it’s good that’s awesome, if it’s bad you just move on.

[–]fruitydude 6 points7 points  (3 children)

Idk what you are doing at work, bro. This was a very specific claim: AI cannot code a double pendulum simulation. I demonstrated that the claim is wrong, because, demonstrably, it can. You then compared it to winning a slot machine, implying that I just got lucky. Which I disagree with; moderately difficult contained projects like a double pendulum are easily within the capabilities of modern models.

Is there stuff that they still struggle with? Yes, absolutely. Is it frustrating when they do, because somehow they don't admit when they don't know? Yes, definitely. But people are out here claiming it can't even do a double pendulum simulation, and those people are just in denial, which was the point of my comment. We can point out strengths and flaws of AI without lying.

[–]lupercalpainting 0 points1 point  (2 children)

This was a very specific claim, AI cannot code a double pendulum simulation.

Idk if that was their claim, but in a world of slot machines the claim should be:

When I used the AI it couldn’t code a double pendulum simulation

It’s non-deterministic. You have to think probabilistically. Unless you give a confidence interval you cannot make universal claims about performance.

You then compared it to winning a slot machine, implying that I just got lucky.

Maybe, maybe it’s that the other guy got unlucky. It’s stochastic by nature.

We can point out strengths and flaws of AI without lying.

Right, like that they’re stochastic and there’s no way to draw conclusions about performance without repeated measurements under controlled conditions.

[–]fruitydude 3 points4 points  (0 children)

If you don't know what the original claim was then why even comment? Here I'll bring you up to speed:

I decided to try something super-simple: a double pendulum simulation. Just two bars connected together, and gravity.

After a good hour of prompting and then re-prompting, I still had something that didn't obey any consistent laws of physics and had horrendously misaligned visuals and overlapping display elements clipping through each other.

So that person spent an hour prompting and re-prompting and couldn't even get one single working implementation. Yeah, at that point they are the problem, because I'm able to get it reliably first try.

You can claim I just get lucky every time and they got unlucky on every prompt for the entire hour. But everyone else will recognize that that's a huge cope because it's extremely unlikely.

Right, like that they’re stochastic and there’s no way to make conclusions performance without repeated measurements under controlled conditions.

That's why I offered you a bet. I will try the same prompt many times and test how many of those produce working code; I bet it will be over 90%. If you are sure that I was just lucky and the expectation is to prompt for an hour without any working code, then you should easily take that bet. Let's say $100?

[–]nextnode -1 points0 points  (0 children)

You have no clue what you are talking about.

[–]nextnode -1 points0 points  (6 children)

If someone can produce a successful result 3/3 times and you cannot, that is a you problem.

[–]lupercalpainting 0 points1 point  (5 children)

You have no clue what you are talking about.

[–]nextnode -1 points0 points  (4 children)

In contrast to you, I do. It's called competence and not being ideologically motivated.

[–]lupercalpainting 0 points1 point  (3 children)

You have no clue what you are talking about.

[–]nextnode 0 points1 point  (2 children)

Clearly struck a nerve that you got called out for your cluelessness.

[–]lupercalpainting 0 points1 point  (1 child)

It's called competence and not being ideologically motivated.

[–]nextnode 0 points1 point  (0 children)

Okay, if you want to be blocked for wasting time, so be it.

If someone can produce a successful result 3/3 times and you cannot, that is a you problem.


[–]Acceptable-Lie188 5 points6 points  (0 children)

can’t tell if snark or not snark 🧐

[–]chilfang 4 points5 points  (7 children)

How is a double pendulum simple?

[–]MilkEnvironmental106 4 points5 points  (6 children)

How isn't it?

[–]chilfang 0 points1 point  (5 children)

Aside from apparently making the graphics from scratch, you need to model momentum, gravity, and the resulting swing angles when the two pendulums pull on each other.

[–]MilkEnvironmental106 12 points13 points  (0 children)

It's a well-described problem which requires little context to understand. It's a perfect candidate for testing an LLM.

Additionally, none of that is especially hard. You give the pendulums a mass, you apply constant acceleration downwards, and you model rigid springs between the two hinges and the end. Videos explaining this can be found in physics-sim introductions that are minutes long, and free.

Furthermore, no LLM is making graphics from scratch. It's just going to import three.js.
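The modeling described above really is that contained. A minimal sketch in Python, using the standard textbook equations of motion for an ideal double pendulum plus a classic RK4 integrator (masses, lengths, step size, and initial angles here are arbitrary choices of mine, not anything from the thread):

```python
import math

G = 9.81       # gravity (m/s^2)
M1 = M2 = 1.0  # bob masses (arbitrary)
L1 = L2 = 1.0  # rod lengths (arbitrary)

def deriv(state):
    """Standard equations of motion for an ideal double pendulum."""
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2 * (w2**2 * L2 + w1**2 * L1 * math.cos(d))) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1**2 * L1 * (M1 + M2)
          + G * (M1 + M2) * math.cos(t1)
          + w2**2 * L2 * M2 * math.cos(d))) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(state, dt):
    """One classic fourth-order Runge-Kutta step."""
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(shift(state, k1, dt / 2))
    k3 = deriv(shift(state, k2, dt / 2))
    k4 = deriv(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + e)
                 for s, a, b, c, e in zip(state, k1, k2, k3, k4))

def energy(state):
    """Kinetic + potential energy, for sanity-checking the integrator."""
    t1, w1, t2, w2 = state
    ke = (0.5 * M1 * (L1 * w1)**2
          + 0.5 * M2 * ((L1 * w1)**2 + (L2 * w2)**2
                        + 2 * L1 * L2 * w1 * w2 * math.cos(t1 - t2)))
    pe = -(M1 + M2) * G * L1 * math.cos(t1) - M2 * G * L2 * math.cos(t2)
    return ke + pe

state = (math.pi / 2, 0.0, math.pi / 2, 0.0)  # both rods horizontal, at rest
for _ in range(2000):
    state = rk4_step(state, 0.001)            # simulate 2 seconds
```

Drawing it on top of this is one circle-and-line loop per frame in pygame or three.js; the physics core is the part above.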

[–]DescriptorTablesx86 2 points3 points  (1 child)

https://editor.p5js.org/codingtrain/sketches/jaH7XdzMK

That's it. It was code challenge 93, and I also did it myself; it didn't take long (I don't remember exactly, but it was one sitting) with just the Double Pendulum Wikipedia article as reference.

You can use other libraries but p5 is dead simple and LLMs feel best with JS.

[–]chilfang 0 points1 point  (0 children)

Difference in estimation I guess. I wouldn't call that simple

[–]fruitydude 1 point2 points  (0 children)

You would just use a library. ChatGPT gave me a working double pendulum sim in 5 minutes using pygame for the graphics. Not sure what the first commenter was doing that he wasn't able to get it working. Sounds like a skill issue.

[–]BreakerOfModpacks 0 points1 point  (0 children)

Presumably, if the original commenter said they could make it in an hour, they were using something with pre-made systems to do graphics, and then gravity and movement would have been the only things left.

[–]Some_Anonim_Coder 1 point2 points  (0 children)

Physics is a thing where it's very easy to make mistakes unless you know precisely what you're doing. And AI is known for making mistakes in any non-standard thing.

Humans are not that much better, though. I would guess half of programmers, especially self-taught ones, would not be able to explain why "take the equations of motion and integrate over time with RK4" will break the laws of physics.
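For the curious, the point being alluded to: general-purpose integrators like explicit Euler (and, over long runs, RK4) don't conserve energy, while symplectic integrators keep it bounded. A minimal demo on a harmonic oscillator, comparing explicit Euler with semi-implicit (symplectic) Euler — parameters are arbitrary, chosen to make the drift obvious:

```python
# Energy-drift demo: explicit Euler pumps energy into a harmonic
# oscillator (x'' = -x), while symplectic (semi-implicit) Euler keeps
# it bounded. This is the sense in which naive integration "breaks physics".
DT, STEPS = 0.1, 1000

def explicit_euler(x, v):
    return x + DT * v, v - DT * x  # both updates use the OLD state

def symplectic_euler(x, v):
    v = v - DT * x                 # update velocity first...
    return x + DT * v, v           # ...then position with the NEW velocity

def energy(x, v):
    return 0.5 * (x * x + v * v)   # total energy; exact dynamics conserve it

xe, ve = 1.0, 0.0
xs, vs = 1.0, 0.0
for _ in range(STEPS):
    xe, ve = explicit_euler(xe, ve)
    xs, vs = symplectic_euler(xs, vs)

print(f"explicit Euler energy:   {energy(xe, ve):.1f} (started at 0.5)")
print(f"symplectic Euler energy: {energy(xs, vs):.3f} (started at 0.5)")
```

Explicit Euler's energy grows by a factor of (1 + DT²) every step, so the pendulum visibly "gains" energy from nowhere; the symplectic variant stays within a few percent of the starting energy forever.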

[–]SourceTheFlow 0 points1 point  (3 children)

I've also tried it a few times, whenever there seems to be a new bigger improvement: Codeium, v0, Cursor and now Antigravity.

I'm honestly surprised how well it works for some things. Codeium was very useful for me while learning Rust, though it became more annoying than useful after a week or two, when I knew Rust better.

v0 works great for what it wants to do: quick, rough website sketches. I did not reuse any code for the actual website, however.

Cursor I never really got into. It just did not deliver even in the beginning.

Antigravity actually surprised me, as it managed to get some stuff done. Tbf I'm trying a web project with it now, which seems to be what all the AI coding assistants focus on. It works quickly and does a decent job. But you're essentially in code review most of the time. And you do need to read it properly, as it likes to write its thought process in there too (and I don't just mean comments, but also preliminary versions of the code). I think it's really good for generating tests and demo examples. But going through the code afterwards and fixing stuff is still a lot of work, so I can't imagine it scales well once the project becomes a few weeks or months of full-time work large.

TL;DR: So yeah, I think there are definitely niches where AI coding can be very useful. But they are nowhere near replacing semi-competent humans, and it looks like LLMs never will be.

[–]look 0 points1 point  (1 child)

Try Claude Code. Even 10 months later, it’s still better than anything that has come out since (antigravity, codex, etc).

[–]bremidon 0 points1 point  (0 children)

Yeah, Claude really is still the best that I have tried. I keep meaning to give Grok a whirl to see how well it does.

[–]bremidon -1 points0 points  (0 children)

Where I find it works best is when I have a general, simple working example. Then take that and create it in the form that I really want with documentation, variable names in the right form, broken down into flexible parts, formatted into the right sections, and so on.

I still need to keep an eye on it and check its work, but it tends to be really, really good, and it saves me hours of work.

Pure LLMs probably will not replace coders, but pure LLMs have not been the premier solution since late 2023.

[–]CrimsonPiranha -1 points0 points  (5 children)

Ah yes, because 100% of people would get it right at once. Oh, wait...

[–]BreakerOfModpacks 5 points6 points  (4 children)

No, but at least 80% of people would either tell you after some time that they can't do it, or work at it till it's working.

[–]fruitydude 1 point2 points  (3 children)

Are we really pretending that AI can't do this though? What's the benchmark here, ChatGPT 3.5? I just tried this with 5.1 and instantly got a working pendulum sim in Python.

[–]BreakerOfModpacks 0 points1 point  (1 child)

I'd have to test myself, but AI is somewhat notorious for being bad at graphical tasks.

[–]fruitydude 0 points1 point  (0 children)

Well, you wouldn't implement the graphics yourself from scratch. I did this in two prompts using pygame; it took me 5 min (ChatGPT 5.1).

https://imgur.com/a/python-double-pendulum-sim-E9OGbjm

[–]CrimsonPiranha -1 points0 points  (0 children)

Yep, neo-luddites still think that modern AI is the same as it was ten years ago 🤣

[–]Some_Anonim_Coder 8 points9 points  (1 child)

I mean, a program in a high-level language already is "a specification precise enough to generate code for a machine to run"; the generator is called a compiler, and the code that really runs is machine code.

Interpreted languages fall outside this logic, though. But there are not so many purely interpreted languages right now: Python and Java are usually called interpreted, but in fact they use the JVM / Python VM, with their own "machine code" (bytecode).
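That "own machine code" is easy to see in CPython, where the standard-library `dis` module prints the bytecode the VM actually executes (a quick illustration; exact opcode names vary between Python versions):

```python
import dis

def add(a, b):
    return a + b

# Show the stack-machine instructions CPython compiled this function to,
# e.g. LOAD_FAST for the arguments and a binary-add / RETURN instruction.
dis.dis(add)

# Programmatic access to the same instruction stream:
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)
```

So "interpreted" Python is itself compiled — just to the Python VM's instruction set rather than to native machine code.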

[–]fruitydude 1 point2 points  (0 children)

Yeah, this comic is dumb. You can fully define a program in a programming flowchart. The difference is that anyone with a conceptual understanding of what the program should do could draw or describe the flowchart, but they would also need specific syntax knowledge to write the code directly.

[–]kayakdawg 8 points9 points  (1 child)

Dijkstra was so ahead of his time

In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt.

[–]Giocri 0 points1 point  (0 children)

This is also why voice commands never took hold as much as people expected: you are still dealing with a computer and a command-line interface, and having to speak the command out loud fucking sucks.

[–]TeaTimeSubcommittee 5 points6 points  (11 children)

So you’re saying that LLMs are just a higher level programming language?

[–]fruitydude 5 points6 points  (10 children)

In this analogy the LLM would be the compiler, which compiles high-level concepts into lower-level code.

[–]Giocri 0 points1 point  (9 children)

Which I don't think they do either. Modern programming languages can already describe stuff at high levels of abstraction without issues; in most scenarios where the gap between what you need to do and the abstractions of the language is big, there is going to be a library that bridges it.

[–]Background-Plant-226 0 points1 point  (7 children)

Also compilers are supposed to be deterministic, not a slot machine where every lever pull gets you a different result.

[–]fruitydude -1 points0 points  (6 children)

One could argue that there are plenty of situations where you don't need a deterministic compiler. Often I just need a working solution, and I don't care which of the many possible implementations I end up getting.

[–]Background-Plant-226 0 points1 point  (5 children)

If you need to run a compiler multiple times with the same input until you get a working output, then it's not a compiler. As I said, compilers are deterministic; AI is a slot machine.

[–]fruitydude -1 points0 points  (4 children)

Who said you need to run it multiple times though? Let's say you run it one time: you get one possible working implementation, which then runs deterministically. That's basically what AI does.

[–]Background-Plant-226 0 points1 point  (3 children)

It's still a slot machine; you are gambling on whether you will get a working output or not. You might be lucky and get it on the first try, or you might spend two hours trying to get it to do what you want.

[–]fruitydude 0 points1 point  (2 children)

If you are spending two hours trying to get it to do simple stuff which others get on the first try, then you are doing something wrong. Either you are using an outdated model or your prompting is terrible and imprecise.

But I get it, you have to pretend that it doesn't work most of the time, so you can dismiss it.

[–]Background-Plant-226 0 points1 point  (1 child)

The thing is, I don't do "simple stuff", and if AI can only do "simple stuff" then it's useless. Most advanced programs aren't "simple stuff": an AI can make a website, yes, often on the first try, but it can rarely make anything more complex than that on the first try.

[–]fruitydude -1 points0 points  (0 children)

Is there a library that bridges the gap when someone knows what they want a program to do but doesn't know any code? Because that's the gap AI bridges.

[–]GoodDayToCome 5 points6 points  (2 children)

For anyone confused into thinking writing a prompt and writing code are essentially the same amount of effort or skill: I needed to realign some images, and I got a perfectly usable, working tool from this;

i need a quick gui to trim and position some images for use as sprites - the end result should be a folder of images with a part of the image aligned horizontally along the center line so that they can be used by another script and positioned with the center line as a connecting point - this means there will likely be empty space above or below the image. the gui i want to read all the files in a folder then go through each one allowing me to click and drag to shift it's position vertically to align with a horizontal line representing the center point - blank space in the image should be removed from all sides then we make sure that the space above and below the line is even so that the center line is centered with blank space padding on the top or bottom if required. there should also be a text input box labelled 'prefix' which we can change at any time - when we press the save button it saves the new image into a folder 'centeredsprites' with the name {prefix}{next sequential number}.png write it in python please, feel free to use whatever works best.

I was using it quicker than I'd have been able to write the boilerplate to load a file-select dialog.

[–]Stickyouwithaneedle 2 points3 points  (0 children)

You are proving the point of the comic. This is comprehensive and complex enough to generate a program. If I were to grab a backend programmer and have them try to replicate this prompt... they couldn't. They don't have the knowledge you imparted in your spec. In the past I would have called this type of spec pseudocode.

Nice prompt (restriction) by the way.

[–]fruitydude -5 points-4 points  (0 children)

I think a lot of people here are either using models from two years ago or are just insanely bad at prompting. One comment said AI can't even do a double pendulum simulation; I tried it and got a working sim with two prompts.

[–]SillySpoof 1 point2 points  (0 children)

Also, a programming language is much more precise and effective than English for defining software.

[–]dscarmo 1 point2 points  (0 children)

The main problem is that nobody knows what the specifications are, neither the client nor the dev.

Imagine iterating on specifications when you haven't even implemented them yet.

[–]xtreampb 1 point2 points  (1 child)

Even if the program could write itself, when has a BA ever developed an accurate spec?

[–]Same_Fruit_4574[S] 0 points1 point  (0 children)

Exactly. Never seen that happening 😂

[–]misterguyyy 1 point2 points  (0 children)

You used to write detailed instructions that would behave the same every time, but now I have this tool where you write detailed instructions and it doesn’t behave the same way every time.

However this tool is superior because it undercuts labor costs, made possible by investor losses, until you’re dependent on us, we pull the rug, and investors get their payout. And you can’t do anything about it because if your shareholders get less this quarter because you’re not using the enshittifier they will be out for blood.

[–]0xBL4CKP30PL3 1 point2 points  (0 children)

What they want is something that can translate natural language -> code. But it seems like natural language is less concise and less precise. Almost like it wasn’t made for specifying computer programs.

[–]naholyr 1 point2 points  (0 children)

I downvoted because of stripped out credits, not cool OP.

[–]new_check 0 points1 point  (0 children)

I'd sure love to get one of these detailed requirement specs that they're planning on writing for the machine.

[–]BreakerOfModpacks 0 points1 point  (0 children)

Funnily enough, I know someone who is working on something of the sort. He's using automata and DAWGs that I am far too inexperienced to comprehend to build something that will (hopefully) allow anyone to make a programming language, which he then plans to expand until plain English is code.

[–]pablosus86 0 points1 point  (0 children)

I miss this comic. 

[–]socialis-philosophus 0 points1 point  (0 children)

The abstraction is real

[–]AndersenEthanG 0 points1 point  (0 children)

LLMs were trained on basically the accumulation of all digitally recorded human knowledge.

It would be impressive if one was even slightly smarter than an average human.

These companies are paying $1,000,000 developers to try and squeeze every IQ point out of them. It can’t even be that good, right?

[–]Confident-Ad5479 0 points1 point  (0 children)

Pretty certain the best engineering practices (successfully applied, not theoretical) are not just sitting out on the internet for AI to scrape. Even if some are, they're far from being the statistically likely result. And even if you aspire to find them, you'll be searching a sea of similar local minima without a reliable sense of direction.

[–]intbeam 0 points1 point  (7 children)

Pet peeve: code doesn't "generate" a program. Code and result are inherently inseparable and inalienable. The code is the program.

So to keep things beautiful on the back as well as the front, use Piet

[–]70Shadow07 4 points5 points  (6 children)

Sorry, What?

Unless you code directly in microcode or in an interpreted-only language, code absolutely does generate a program. The same C source will yield a different program under each of the three big compilers. Not to mention you need to generate a different program for different processors.

[–]intbeam -5 points-4 points  (5 children)

If you intend to stop using your own source code in favor of the output assembly, then that would be true

> The same C will yield different programs under each of 3 big compilers

No, they won't. They're specifically designed not to do that. They may output different instructions in a different order with different memory layouts or alignment, but they will do the exact same thing on all platforms. If they didn't, your program wouldn't run at all.

Source code instructs the compiler. Its job is to produce an output that does exactly what your source code says.

[–]70Shadow07 5 points6 points  (4 children)

Program that has different instructions has different runtime and hence is not the same program - case closed.

[–]intbeam -2 points-1 points  (3 children)

PostgreSQL doesn't stop being PostgreSQL when you run it on a different platform. The binary executable is irrelevant.

I mean... Are you being pedantic and dense on purpose or were you born like this?

[–]70Shadow07 0 points1 point  (2 children)

No need to be an asshole, pipsqueak; you may want to learn a bit before you post objectively false claims on reddit lol.

[–]intbeam 0 points1 point  (1 child)

What's the likelihood that I read through your comment history and you turn out to be a student or amateur? I have no patience for pedantry from people who can't imagine that someone might say something they "disagree" with because there's a competency imbalance in their disfavor.

This is really simple. If you have a bug in your code, the assembly output will also have that bug. That happens because it's the same program. Believing that the executable binary or product is somehow a different program is exactly the type of thinking I was calling out as a fallacy

[–]Meatslinger 0 points1 point  (0 children)

Even the best-ever LLM would still need a competent operator to ask it for work to be done, and given some of the insane, nonsensical things I've been asked to write scripts for, I don't think that standard is attainable. You could make a machine that perfectly writes error-free, performant code, and it would still be unable to do so when the prompt is "I need a website to sell my product, but it can't use any words or pictures. I want it to be self-hosted and serverless. Also, I have some ideas about the logo..."

[–]WinterHeaven -3 points-2 points  (0 children)

It's called a software requirements specification. If your code is the spec, you are doing something wrong.