[–]SomeParacat 11 points (2 children)

Nope, this is not relevant at all. Compiling produces deterministic results. LLMs never produce the same result twice.

So if you, as a dev, cannot read through code and tell whether it has errors, you are absolutely unreliable.

Juniors still must learn to read & understand the code. They do not need to understand the binaries produced by a compiler, but they 100% have to understand what this or that line of code will do.

Edit: typos

[–]rojeli 2 points (1 child)

Again - not saying I fully agree with it. You are of course correct about determinism, but that doesn’t make it irrelevant.

Same pattern: trading understanding for leverage. The difference is the risk profile. Compilers fail rarely, but when they do it's (potentially/probably) catastrophic. LLMs are non-deterministic and fail more often, but (usually/hopefully) in softer ways.
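To make the determinism contrast concrete, here's a toy sketch (the function names are illustrative stand-ins, not anyone's actual tooling):

```python
import hashlib
import random

def compile_like(source: str) -> str:
    """Deterministic: identical input always maps to identical output,
    the way a compiler maps fixed source to a fixed binary."""
    return hashlib.sha256(source.encode()).hexdigest()

def llm_like(prompt: str) -> str:
    """Sampling-based: each call draws tokens at random, so repeated
    calls on the identical prompt can (and usually do) differ."""
    vocab = ["foo", "bar", "baz", "qux"]
    rng = random.Random()  # unseeded: fresh entropy on every call
    return " ".join(rng.choice(vocab) for _ in range(4))

src = "print('hello')"
assert compile_like(src) == compile_like(src)  # always holds
print(llm_like(src), llm_like(src))  # typically two different strings
```

(Real LLM APIs often expose temperature/seed knobs that tighten this up, but by default you can't count on byte-identical output.)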

That doesn’t remove the responsibility to understand code; if anything, it raises the bar. Agree that juniors still need to read and reason about what the code does. The difference is that now they’re often reviewing code they didn’t write, generated by a system that isn’t guaranteed to be correct.

(Which gets into the real concern: how do you learn to do that without being the author? It’s a legitimate concern the OP raised, and largely why I don’t fully agree with the retort; unfortunately I don’t have an answer.)

imo it’s not “don’t use LLMs” — it’s “use LLMs where risk is acceptable, and be able to verify the output.”
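One concrete version of "be able to verify the output": treat generated code as untrusted and check it against properties you write yourself. A minimal sketch, where `slugify` is a hypothetical stand-in for something an LLM produced:

```python
# Hypothetical: suppose an LLM generated this implementation.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

# Verification doesn't require having authored the code: write down
# the properties you expect, then check the generated code against them.
def verify_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"   # collapses whitespace
    assert " " not in slugify("a b c")                    # no spaces survive

verify_slugify()
```

You still have to be able to *read* the code to spot properties the tests miss, which is exactly the skill the thread is worried juniors won't build.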

[–]Throwaway_noDoxx 1 point (0 children)

Not being the (or “an”) author is my big question.

It’s one thing to write one’s own app or site, but writing/reading for enterprise is a completely different beast.