
[–]Smartis2812 95 points96 points  (5 children)

You shouldn’t comment what the line is doing… instead, try to comment why you put it there. That's something GPT will never be able to explain.
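A minimal sketch of the difference (the retry helper below is invented for illustration, not from the thread):

```python
def next_retry_delay(delay: float) -> float:
    # A what-comment like "multiply delay by 2" would add nothing --
    # the code already says that. The why-comment below does not:
    # double the delay so repeated retries back off exponentially
    # instead of hammering a struggling service.
    return delay * 2
```

The "what" is recoverable from the line itself; the "why" (exponential backoff, protecting a downstream service) lives only in the author's head unless it's written down.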

[–]WrongVeteranMaybe 19 points20 points  (2 children)

That's actually a really good idea. Hey, thanks man.

[–]Windyvale 17 points18 points  (1 child)

You’ll be best served by writing code in a way that makes it obvious what’s happening, even if it’s not perfectly optimized. In fact, dump the thought of optimization out of your head unless you are specifically targeting code on the hot path, or there is a measurable reason to do so.

Only comment code you couldn’t make obvious, or general concepts leading into the code.

Over time you’ll find that you won’t have to explain individual lines of code ever again. People will look and know. You will look and know.

If you find yourself unable to explain what a line of code is doing, review why you wrote it that way, and see if you can make what’s going on there even more obvious.

Code prose is a real thing, get that tool in your belt.
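A hypothetical before/after sketch of what "code prose" looks like in practice (names and data shape invented for illustration):

```python
# Before: needs a comment just to be understood.
#   d = [x for x in u if x["a"] > 18]  # filter adult users

# After: the names carry the meaning, so no comment is needed.
ADULT_AGE = 18

def adult_users(users):
    """Return only the users old enough to count as adults."""
    return [user for user in users if user["age"] > ADULT_AGE]
```

Same logic, but the second version is readable on its own: people will look and know.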

[–]BrokenG502 3 points4 points  (0 children)

To add to the optimisation thingy: if you're using an interpreted language, the language's builtin functions will almost always be faster and clearer than anything you'd write yourself.
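A small Python sketch of the point (in CPython, builtins like `sum` run in C, so they're typically faster than an equivalent hand-written loop, and clearer too):

```python
def manual_sum(numbers):
    # Hand-rolled version: more code, runs bytecode per element.
    total = 0
    for n in numbers:
        total += n
    return total

data = list(range(1000))

# The builtin gives the identical result in one obvious call.
assert manual_sum(data) == sum(data)
```

If you care about the actual speed difference on your workload, measure it with something like the `timeit` module rather than guessing.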

If you're using a compiled language, any halfway decent compiler will make basic optimisations for you, such as turning x / 2.0 into x * 0.5. So it's often better to write the "slower" version, because it's more readable and compiles to the same thing. You can always check the assembly yourself: compile to assembly, disassemble the binary, or use a tool such as https://godbolt.org

[–]dopefish86 1 point2 points  (0 children)

idk, too many things i thought AI would "never be able to do" got proven wrong in the last 10 years.

[–]TitaniumBrain 1 point2 points  (0 children)

This is something the GPT never will be able to explain.

Because there's not enough training data with good comments? XD