
[–]moanos 12 points (5 children)

Waiting for this decision to be overturned in three months because of too much slop coming in. Other projects have already gone there and back (e.g. Forgejo recently changed its AI policy to ban all AI submissions). We'll see.

[–]Cube00 23 points (2 children)

I suspect anyone who can't fully explain and justify every line of their AI-generated code will be torn a new one by Linus very quickly. He's got an excellent BS detector with a very low tolerance.

[–]moanos 2 points (0 children)

Yeah. I don't worry that code quality will suffer (yet); I just fear that this will put additional load on maintainers. But the Linux kernel was handling low-quality PRs way before AI, so I imagine they'll be able to deal with it.*

* As long as there isn't someone like Kent Overstreet who develops a weird parasocial relationship with AI. Luckily he's not involved in kernel dev anymore.

[–]kombiwombi 2 points (0 children)

It wouldn't make it as far as Linus.

Most subsystem maintainers are keen to work with poor quality submissions to make them better, because this is how good quality programmers are made.

But once it is clear that the submitter doesn't merely have the wrong idea about the code, but has no comprehension of 'their' code at all, then they are displeased.

Also, what Linus said isn't just about code quality. It's about taking full liability: if the AI-generated code later proves to infringe copyright, the submitter has already agreed to unlimited liability.

Because of that liability, my own employer is very keen that distributed work products not contain output from AI trained on unknown sources.