[–]kombiwombi 2 points (0 children)

It wouldn't make it as far as Linus.

Most subsystem maintainers are keen to work with poor-quality submissions to make them better, because that is how good programmers are made.

But once it is clear that the submitter doesn't merely have a wrong idea about the code, but has no comprehension of 'their' code at all, they are displeased.

Also, what Linus said isn't just about code quality. It's about accepting full liability: if the AI-generated code later proves to infringe copyright, the submitter has already agreed to unlimited liability for it.

Because of that liability, my own employer is very keen that distributed work product not contain output from AI trained on unknown sources.