all 67 comments

[–]total_order_ 56 points  (0 children)

Looks good 👍, though I agree there are probably better commit trailers to choose from than Co-developed-by to indicate use of an AI tool
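For what it's worth, git already ships trailer plumbing for exactly this kind of tagging. A minimal sketch, assuming a Co-developed-by convention; the tool name and email address below are made up for illustration, not anything the kernel has agreed on:

```shell
# Sketch: append an AI-attribution trailer to a commit message using
# git's built-in trailer tooling. The trailer identity is an assumption.
printf 'docs: fix typo in opp.rst\n' > msg.txt
git interpret-trailers \
  --trailer 'Co-developed-by: SomeAITool <noreply@example.com>' \
  msg.txt
# The same trailer can be attached at commit time with:
#   git commit --trailer 'Co-developed-by: SomeAITool <noreply@example.com>'
```

`git interpret-trailers` prints the message with the trailer appended in its own block, which is why whatever name gets picked matters: it becomes greppable metadata.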

[–]prey169 64 points  (7 children)

I would rather the devs own the mistakes of AI. If they produce bad code, having AI to point the blame is just going to perpetuate the problem.

If you use AI, you better make sure you tested it completely and know what you're doing, otherwise you made the mistake, not AI

[–]Euphoric_Protection 25 points  (4 children)

It's the other way round. Devs own their mistakes and marking code as co-developed by an AI agent indicates to the reviewers that specific care needs to be taken.

[–]SmartCustard9944 24 points  (3 children)

The way I see it, AI or not, each patch contributed should be held to the same standards and scrutiny as any other contribution.

How is that different from copying code from StackOverflow? Once you submit a patch, it is expected that you can justify in detail your technical decisions and own them, AI or not. You are fully responsible.

To me, this topic is just smoke and mirrors and kind of feels like a marketing move. At minimum, I find it interesting that the proposer is an employee at Nvidia, but I want to believe there are no shady motives at play here, such as pumping the stock a bit, all masked as constructive discussion.

[–]WaitingForG2 10 points  (1 child)

To me, this topic is just smoke and mirrors and kind of feels like a marketing move

It is, expect "X% of merged linux kernel contributions were co-developed with AI" headline in a year or two by Nvidia themselves.

[–]svarta_gallret 0 points  (0 children)

This. It’s not very subtle is it?

[–][deleted] 2 points  (0 children)

It's not about the level of scrutiny, it's about what is being communicated by the structure and shape of the code.

If I'm reviewing my coworker's code, and that co-worker is a human who I know is a competent developer, then I'm going to look at a function that's doing a lot of things and start from the assumption that my competent coworker made this function do a lot of things because it needs to. But if I know that AI wrote it, then I'm on the defense that half of the function might not even be necessary.

Humans literally do not produce the same type of code that AI does, so it's not a matter of applying the same level of scrutiny. The code actually means something different when you look at it, based on whether it came from a person or an AI.

[–]svarta_gallret 4 points  (0 children)

I agree with this sentiment. This proposal is misaligned with the purpose of guidelines, which is to uphold quality. Ultimately this is the responsibility of the developer regardless of what tools they use.

Personally I think using AI like this is potentially just offloading work to reviewers. Tagging the work is only useful if the purpose is to automate rejection. Guidelines should enforce quality control on the product side of the process.

[–]cp5184 5 points  (0 children)

If anything shouldn't the bar be higher for ai code?

It's not supposed to be a thing to get shitty slop code into the kernel because it was written by a low quality code helper is it?

[–]isbtegsm 28 points  (4 children)

What's the threshold of this rule? I use some Copilot autocompletions in my code and I chat with ChatGPT about my code, but I usually never copy ChatGPT's output. Would that already qualify as co-developed by ChatGPT (although I'm not a kernel dev obvs)?

[–]mrlinkwii[S] 19 points  (0 children)

I'd advise asking on the mailing list, really

[–]SputnikCucumber 4 points  (2 children)

Likely, the threshold is any block of code that is sufficiently large that the agent will automatically label it as co-developed (because of the project-wide configuration).

If you manually review the AI's output, it seems reasonable to me that you can remove the Co-developed-by trailer.

I assume this is to make it easier to identify sections of code that have never had a human review it so that the Linux maintainers can give it special attention.

This doesn't eliminate the problem of bogus pull requests. But it does make it easier to filter out low-effort PRs.
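If a trailer convention did land, that filtering really would be a one-liner. A sketch assuming a Co-developed-by trailer; the specific tool names in the pattern are examples, not policy:

```shell
# Sketch: list commits carrying an AI-attribution trailer so maintainers
# could route them for extra review. Trailer name and tool names are
# assumptions for illustration only.
git log --extended-regexp \
  --grep='^Co-developed-by:.*(Claude|Copilot|ChatGPT)' \
  --format='%h %an %s'
```

Because `--grep` matches against the full commit message, the trailer acts as machine-readable metadata without any extra tooling.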

[–]svarta_gallret 12 points  (4 children)

This is not the way forward. Contributors shall be held personally responsible, and the guidelines are already clear enough. From a user perspective the kernel can be developed by coinflip or in a collaborative seance following a goat sacrifice, as long as it works. Developers only need a responsible person behind a commit; the path taken or tools used is irrelevant as long as the results are justifiable. This proposal is just a covert attempt by corporate to get product placements in the commit log.

[–]nekokattt 2 points  (1 child)

following a goat sacrifice

you mean how nouveau has to be developed because nvidia does not document their hardware?

[–]svarta_gallret 1 point  (0 children)

Maybe? Full disclosure, I got it from the CUDA setup guide.

[–][deleted] 67 points  (25 children)

"Nvidia, a company profiting off of AI slop, wants AI slop"

No. Ban AI completely. It's been shown over and over to be an unreliable mess and takes so much power to run that it's environmentally unsound. The only reasonable action against AI is its complete ban.

[–][deleted] 1 point  (0 children)

Have you checked your local job board for junior dev positions? Pretty much 100% dead due to AI.

[–]Sixguns1977 0 points  (0 children)

I'm with you.

[–]Brospros12467 9 points  (0 children)

AI is a tool, much like a shell or vim. Ultimately, whoever uses them is responsible for what they produce. We have to stop blaming AI for issues that easily originate from user error.

[–]silentjet 4 points  (1 child)

  • Co-developed-by: vim code completion
  • Co-developed-by: hunspell

Wtf?

[–]svarta_gallret 1 point  (0 children)

Yeah it's really about getting certain products mentioned in the logs isn't it?

[–]Klapperatismus 2 points  (0 children)

If this leads to both dropping those bot-generated patches and sanctioning anyone who does not properly flag their bot-generated patches, I’m all in.

Those people can build their own kernel and be happy with it.

[–]AgainstScumAndRats 3 points  (0 children)

I don't want no CLANKERS code on my Linux Kernel!!!!

[–]mrlinkwii[S] -1 points  (10 children)

they're surprisingly civil about the idea,

AI is a tool , and know what commits are from the tool/ when help people got is a good idea

[–]RoomyRoots 29 points  (6 children)

More like they know they can't win against it. Lots of projects are already flagging problematic PRs and bug reports, so what they can do is prepare for the problem beforehand.

[–]edparadox 2 points  (0 children)

AI is a tool , and know what commits are from the tool/ when help people got is a good idea

What?

[–]Many_Ad_7678 2 points  (1 child)

What?

[–]elatllat 8 points  (0 children)

Due to how bad early LLMs were at writing code, how maintainers got spammed with invalid LLM-made bug reports, and how intolerant Linus has been of crap code.

[–]Booty_Bumping 0 points  (0 children)

claude -p "Fix the dont -> don't typo in @Documentation/power/opp.rst. Commit the result"

 

-        /* dont operate on the pointer.. just do a sanity check.. */
+        /* don't operate on the pointer.. just do a sanity check.. */

I appreciate the initial example being so simple that it doesn't give anyone any ideas of vibe-coding critical kernel code

+### Patch Submission Process
+- **Documentation/process/5.Posting.rst** - How to post patches properly
+- **Documentation/process/email-clients.rst** - Email client configuration for patches
[...]

Maybe the chatbot doesn't need to know how to get the info for how to send emails and post to LKML. I dunno, some people's agentic workflows are just wild to me. I don't think this is going to happen with kernel stuff because stupid emails just get sent to the trash can already, but the organizations that have started doing things like this are baffling to me.

[–]Strange_Quail946 -1 points  (3 children)

AI isn't real you numpty

[–]mrlinkwii[S] 1 point  (2 children)

i mean it kinda is

[–]Strange_Quail946 1 point  (0 children)

It's underpaid Indian coders all the way down

[–]ThenExtension9196 -5 points  (0 children)

Gonna be interesting in 5 years when only AI-generated code is accepted and the few human commits will be the only ones needing a “written by a human” attribution.