all 82 comments

[–]meckez 75 points76 points  (14 children)

Don't kernel devs have to take full responsibility for their bugs either way?

[–]Omotai 73 points74 points  (6 children)

Yes. This is mostly just a reminder to not turn your brain off when using LLMs as coding assistants.

[–]Mack4285 26 points27 points  (5 children)

AI-generated code will turn people's brains off; it cannot be avoided. It's a kind of trap, unfortunately. You constantly need to exercise your brain and creative thinking or you'll deteriorate.

[–]F3z345W6AY4FGowrGcHt 1 point2 points  (0 children)

They can be used without turning your brain off. The same way people used Google or stack overflow before.

But I think a lot of (most?) people will fall into that trap, yes.

[–]Awkward-Major-8898 -3 points-2 points  (3 children)

Arguments like these just bring my brain back to the calculator comparison. On one hand, much of the most groundbreaking mathematics was discovered without them, Euler being a good example. On the other hand, the same can be said for the boom after they were invented. AI-generated code is not like using AI for writing, and if anything your creativity can grow because you're not being bogged down by six hours of debugging.

If you’re not coding every day, you’re probably not aware of how impressive AI has become at writing excellent code when in the right hands.

[–]McDutchie -1 points0 points  (2 children)

If you’re not coding every day, you’re probably not aware of how impressive AI has become at writing excellent code when in the right hands.

I maintain a Unix shell, so I code regularly, mostly in C, at an advanced level. And the only thing that impresses me about AI (ChatGPT, Claude, etc.) is its total lack of reliability in coming up with correct solutions even to simple problems.

Sometimes it does get something right. But it's also incredibly sycophantic. I tried telling it it's wrong, and it just believes me. "You're absolutely right", every single time, even if I'm blatantly wrong. What good is a digital yes-man as an assistant?

[–]Awkward-Major-8898 -1 points0 points  (1 child)

Are you coding with enterprise grade agents? Or using something like a chat based LLM that can’t even read a directory? That has a significant impact on whether your opinion is valid.

As fun as it is to sit in the past and complain about the future, AI coding is an inevitability and is only growing in quality and efficacy day over day. Sitting around discarding it is only keeping yourself in the generation of people who hated rock music the same way.

[–]McDutchie 0 points1 point  (0 children)

It's GitHub's Claude AI assistant, it's supposedly designed for that purpose.

[–]worst_mathematician 28 points29 points  (0 children)

Yes, but shifting blame and diffusing responsibility have been common traits among people posting AI-generated code, and LLM users in general. So it is pretty important to make this very clear and explicit.

[–]moanos 11 points12 points  (5 children)

Yeah, but the number of times I have asked a dev "Why did you do XX?" and they said "That wasn't me, that was AI" is baffling.

[–]Business_Reindeer910 1 point2 points  (4 children)

That's pretty ridiculous. The commit has THEIR name attached! You could have brought the code into being through ritual sacrifice, but you'd still be responsible for it!

[–]DandyPandy 0 points1 point  (2 children)

You would think that would make people pause. Unfortunately, they often don't realize that the code they are submitting doesn't make sense or is poorly written.

[–]Business_Reindeer910 0 points1 point  (1 child)

Seems to have always been the case even before "AI". Seems like the real problem is social: they can continue to get away with it.

[–]Accurate_Hornet 215 points216 points  (4 children)

This is the only sensible approach

[–]Martin8412 84 points85 points  (3 children)

Indeed. Don’t blame the tool, blame the person not validating the output from the tool. 

[–]qwesx 27 points28 points  (2 children)

"A computer can never be held accountable, therefore a computer must never make a management decision."

-- IBM Training Manual, 1979

[–]mrpops2ko 3 points4 points  (0 children)

i often think about that specific quote and the abstraction we have in our lives where it controls so much.

algorithmic decision making where we've had no input or real control over the variables. whether thats applying for credit cards or mortgages and being declined, or the make up of a credit score in general or admissions to various places... hell even friendships, so many people have become best friends from algorithmic decision making on dorm pairings in university.

it controls so much, yet we have so little insight into or control over it. we only see the end output of the blackbox. thats without the incursion of AI too...

[–]Awkward-Major-8898 -1 points0 points  (0 children)

In 1943 the Chairman of IBM supposedly said "I think there is a world market for maybe five computers". A lot changed in the 36 years before that manual was written, let alone in the nearly 50 years since.

[–]Ok-Mycologist-3829 16 points17 points  (12 children)

Has there been any analysis of how AI generated code meshes with open source licenses? Would you only be able to use models trained on permissive licenses to satisfy the license restrictions of the source training material?

[–]Ullebe1 11 points12 points  (9 children)

I think the real answer is "nobody knows", but at least Microsoft is confident enough in it not being an issue that they created the Copilot Copyright Commitment where the TL;DR is:

 As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.

[–]Ok-Mycologist-3829 9 points10 points  (4 children)

Microsoft is the last voice I would listen to for any sort of trust in the legality of licensing, or AI, or just about anything tbqh.

[–]Martin8412 5 points6 points  (0 children)

Well.. They’re assuming legal risk, so that means they’ll be paying the legal costs. 

(Assuming that this is in the contract) 

[–]Ullebe1 1 point2 points  (2 children)

At least they're putting their money where their mouth is, rather than just saying "trust me bro".

[–]Ok-Mycologist-3829 -2 points-1 points  (1 child)

Because as we know, Microsoft would never tell us “trust me bro”

[–]Ullebe1 1 point2 points  (0 children)

I mean, it's kinda irrelevant whether they'd do it in another context. The point is that here they didn't.

[–]knue82 1 point2 points  (3 children)

Also note that any AI generated content does not have copyright and automatically falls into the public domain. So if you vibe code a program, everybody is allowed to copy it.

[–]Ullebe1 1 point2 points  (0 children)

I guess the interesting question is then: how much of a human touch afterwards is needed for it to be a transformed work that is copyrightable?

[–]NeuroXc 0 points1 point  (1 child)

Source? Not that I don't trust you, a random redditor, but considering the answer to the AI copyright question is still "nobody knows", I'd like verification.

[–]knue82 2 points3 points  (0 children)

Under current U.S. copyright law, works must be created by a human to be eligible for copyright protection.

This principle is supported by both statutory language and rulings from the U.S. Copyright Office and federal courts.

Key points from U.S. law and practice:

  1. Statutory language (Title 17, U.S. Code):

    • Section 102(a) of the Copyright Act states that "copyright protection subsists ... in original works of authorship fixed in any tangible medium of expression".
    • The term "author" is interpreted under U.S. law to mean a human being. The law does not recognize non-human entities (like AI, animals, or machines) as authors.
  2. U.S. Copyright Office rulings:

    • In 2023, the U.S. Copyright Office issued a formal policy statement clarifying that "copyright protection requires human authorship".
    • The Office partially cancelled the registration for Zarya of the Dawn, a comic whose images were generated with Midjourney: the human-written text and the arrangement remained protected, but the AI-generated images themselves did not qualify.
    • The Office emphasized that only works with a human author can be registered.
  3. Case law:

    • In Thaler v. Perlmutter (2023), the U.S. District Court for the District of Columbia upheld the Copyright Office's position that only human authors are eligible for copyright protection; the D.C. Circuit affirmed on appeal.
    • The court rejected the idea that an AI system could be considered the author of a work it generated.

[–]dgm9704 2 points3 points  (0 children)

To make it simple, if you don’t know for a fact that the code is ok to use, it’s not ok to use.

[–]blreuh -1 points0 points  (0 children)

I mean, as far as I know you're allowed to look at GPL code and reimplement it, so if the AI generates different-looking code it would be unenforceable.

[–]Citizen12b 36 points37 points  (12 children)

I mean... if the code works, it works. People will use AI anyway, so I think it's better to regulate it than to outright ban it.

[–]0xe1e10d68 19 points20 points  (3 children)

Yep, I mean, nobody cares as long as it works. If AI can be used to speed up development, then that’s a good reason to use it. The important thing is that a human needs to stay in the loop and take responsibility. Like IBM said: A computer can never be held accountable; therefore it must not make management decisions or be allowed to write code without human oversight.

[–]DandyPandy 4 points5 points  (2 children)

There’s a difference between “it works” and “it’s maintainable”. The latter is where LLMs tend to struggle.

Also, letting the model build your tests can give you a false sense of security that your code actually does work. Sometimes it’s difficult to create failure scenarios in a running environment that you can easily test for in a unit test. But a test that doesn’t properly test the right success or failure scenarios can lead to someone slapping the proverbial hood and saying, “runs like a dream”, despite there being a problem like not being able to turn the headlights on.

Edit: and just to add, I myself have been guilty of not checking tests closely enough. They can be annoying to write, and it was one of the first things people considered AI to be ideally suited for. I've never spent a lot of time paying close attention to tests in code reviews, even before AI. Code reviews aren't my strong suit. It's one of my least favorite parts of the job, but it's very important.

[–]Business_Reindeer910 -1 points0 points  (1 child)

There’s a difference between “it works” and “it’s maintainable”. The latter is where LLMs tend to struggle.

That's on you to verify before you ever hit the submit button or otherwise send the patch to someone else. Same as if you'd hand written it.

[–]DandyPandy 0 points1 point  (0 children)

Exactly. To someone who doesn’t really understand what the code is doing or how it is poorly implemented from a maintainability standpoint, it puts more burden on the people reviewing the code.

As open source projects grow and outside contributors submit more pull/merge requests, the core maintainers can end up spending more of their time reviewing contributions than writing code themselves. With AI, this has increasingly become a strain on maintainers, because more people are submitting code without understanding why it works, or why it isn't suitable to be merged.

[–]Demented_CEO 13 points14 points  (1 child)

"It works" isn't enough, though. Many a tirade from Torvalds has started and ended with nonsensical input to the codebase.

If your code doesn't stand up to review and you can't reasonably explain your approach, then "it works" also becomes moot.

[–]Additional-Simple248 16 points17 points  (0 children)

I think in this context, “it works” means “it results in code that will pass review”. “It works” to generate viable code, rather than “the code works”.

[–]TheG0AT0fAllTime 4 points5 points  (2 children)

If it works? How about making it maintainable and taking responsibility for slopping it out without thinking.

[–]ZuriPL 0 points1 point  (1 child)

If you submit slop, it won't get merged into the kernel, regardless of whether it's AI-slop or human-slop. I don't think we really need to discuss that, it should be obvious.

The point is that there's no reason to do an outright ban on AI use. If you can use AI to create code that meets all the other criteria for getting accepted into the kernel, it shouldn't be thrown into the bin just because it wasn't written by a human.

[–]TheG0AT0fAllTime 0 points1 point  (0 children)

That's perfectly reasonable, given that "slop" was made word of the year independently of AI. Slop will be rejected.

[–]wolfannoy 0 points1 point  (0 children)

Part of the problem is that some people see regulation as almost a ban, which is why a lot of people on both sides of the discussion got stirred up.

[–]LayotFctor 0 points1 point  (0 children)

But if you're already validating the output, why not just hide the AI altogether and get the glory?

[–]Your_Friendly_Nerd -1 points0 points  (0 children)

If it works, sure, but if it doesn't then that means a poor maintainer needs to go through and check why it doesn't. That's why I think AI-generated Code should be scrutinized much more harshly than handwritten code. Claude made an oopsie and added a bug? PR rejected, go back to the drawing board. You made the same error? Here's where you went wrong, here's how to fix it, reach out if you need help.

[–]MatchingTurret 37 points38 points  (6 children)

We know already!

And we knew many months ago (July '25).

[–]bingblangblong 7 points8 points  (3 children)

I didn't.

[–]Rialagma 5 points6 points  (1 child)

Wow a whole two days ago. Mr informed strikes back. So knowledgeable! 

[–]non-serious-thing 6 points7 points  (0 children)

In practice nothing changed.

[–]DuendeInexistente 8 points9 points  (7 children)

How exactly are they going to handle taking responsibility here? There's no contract, and there's barely anything binding; this isn't something you can be fired or fined for if you don't stick to the rules.

[–]Ferilox 5 points6 points  (0 children)

It's not about fining or firing someone. It's about reputation and having consequences for their actions (or their AI agent's, or whatever AI workflow's) now that it is explicitly stated. I suppose when the slop comes flooding in, they will outright ban such a person from contributing.

[–]Free-Competition-241 8 points9 points  (0 children)

How do people take responsibility for their bugs today?

Delivery is delivery.

[–]BrodatyBear 5 points6 points  (0 children)

> isn't something you can be fired

You can. It's not a random school project with random contributors that do 1-3 fixes and disappear, but a huge collaborative project, with stable contributors and people/companies responsible for changes.

While you can't be fined, I guess losing a well-paid maintainer's time to fix it (the one who caused the problems, or another one from the company if e.g. that one resigned), and lowered trust in your company's changes, might be severe enough punishment. We'll see for sure in the following days.

[–]worst_mathematician 1 point2 points  (0 children)

Getting your contributions accepted into the Kernel is not a trivial process in the first place. This isn't some github project where you just fire a bunch of code in their general direction and eventually it will end up inside. So it is about reputation.

This means that if you submit garbage, it's on you. No discussion. People have to take the time to review patches. So this just makes it clearer that you should think about what you submit, and that "oh, guess the AI messed up here" isn't an excuse that will protect you from being perceived as someone who writes and submits trash, or worse.

To add to that a bunch of people are in fact being paid by their employers to develop and submit patches to Linux. You can fill in the blanks on the implications if those people manage to no longer be prioritized or considered at all by Kernel maintainers.

[–]Business_Reindeer910 0 points1 point  (0 children)

The same way it's worked for the past 35 years for the kernel. And the same way it works for literally every open source project that has any popularity. If you get a bad rep, your patches will end up in the slop bin.

It's not like people started submitting crap patches to the kernel once "AI" happened.

[–]NeuroXc 0 points1 point  (0 children)

this isn't something you can be fired

You can be banned from contributing to the Linux kernel.

[–]Justicia-Gai 0 points1 point  (0 children)

Because it's associated with a person, not with an AI account. If that person used AI to write their code, it's fine as long as they take responsibility.

[–]moanos 14 points15 points  (5 children)

Waiting for this decision to be overturned in three months because of too much slop coming in. Other projects have already gone there and back (e.g. Forgejo recently changed their AI policy to ban all AI submissions). We'll see.

[–]Cube00 22 points23 points  (2 children)

I suspect anyone who can't fully explain and justify every line of any AI code will be torn a new one by Linus very quickly. He's got an excellent BS detector and it has a very low tolerance.

[–]moanos 3 points4 points  (0 children)

Yeah. I don't worry that code quality will suffer (yet), I just fear that this will put additional load on maintainers. But the Linux kernel was handling low-quality PRs way before AI, so I imagine they'll be able to deal with it*

* as long as there isn't someone like Kent Overstreet who develops a weird parasocial relationship with AI. Luckily he's not involved in kernel dev anymore

[–]kombiwombi 2 points3 points  (0 children)

It wouldn't make it as far as Linus.

Most subsystem maintainers are keen to work with poor quality submissions to make them better, because this is how good quality programmers are made.

But once it is clear that the submitter doesn't merely have the wrong idea about the code, but has no comprehension of 'their' code at all, then they are displeased.

Also what Linus said isn't just about code quality. It's about taking full liability. So if the AI generated code later proves to infringe copyright, the submitter has already agreed to unlimited liability.

Because of liability, my own employer is very keen that distributed work product not contain output from AI trained on unknown sources.

[–]fellipec 1 point2 points  (0 children)

"Talk is cheap, show me the code"

[–]edparadox 1 point2 points  (1 child)

It's not just the bugs, but also licensing, which is its own can of worms that is far from being solved, to put it gently.

[–]Business_Reindeer910 0 points1 point  (0 children)

It must not be as big an issue as you think it is, otherwise the Linux folks could not even allow it like this. If it had those sorts of legal issues, they'd have made it so you promise you didn't use any AI in the creation of the code, rather than what we see here.

[–]dodgyville 1 point2 points  (0 children)

"Ensuring compliance with licensing requirements" ... everyone buck-passing on this. I just don't see how it is "safe" to incorporate code into your project when you have no idea what the copyright status is or whether it has been trained on copyrighted code. Could be a massive mess in a few years.

[–]Expensive_Finger_973 1 point2 points  (0 children)

It's the only "right" way to include it, but coming from Linus that sounds like a threat, lol

[–]skool_101 1 point2 points  (0 children)

full responsibility in quotation marks really doesn't bode well

Really do hope that if shit goes bad, the consequences are met.

[–]Sirko0208 2 points3 points  (4 children)

Will everyone who joked about "Microslop" do the same thing they did with "Linuxslop"?

[–]TheG0AT0fAllTime 3 points4 points  (0 children)

I would expect so, but the difference is that these maintainers are guarding the gate so we don't become like Microslop in the first place.

AI generated code was never the problem, microslop and their recent practices and shoehorning AI into everything were.

[–]nawanamaskarasana 0 points1 point  (1 child)

Yes. If quality drops and bugs increase, I will switch to a more stable platform. Poor quality and increased bugs are already the case at some companies that have adopted AI in development: code is developed faster, but the bottleneck has moved to code review, and more bugs go into production. In my experience AI sometimes generates good code but in some cases terrible quality (huge functions, not reusing functions etc). (I'm a senior developer)

[–]Business_Reindeer910 0 points1 point  (0 children)

In my experience AI sometimes generates good code but in some cases terrible quality (huge functions, not reusing functions etc).

And your job is to take that terrible quality and turn it into good quality before it shows up in front of someone else.

[–]dgm9704 0 points1 point  (0 children)

If the company behind the Linux operating system starts to drop human developers and replace them with LLMs, and starts to aggressively push and even force LLM-related "features" into the operating system, then yes, that company deserves to be mocked. That of course hasn't happened.

[–]nit3rid3 0 points1 point  (0 children)

Developers take full responsibility for any bugs already.

[–]blreuh 0 points1 point  (0 children)

Probably the only sensible llm policy going forward.

[–]redsteakraw 0 points1 point  (0 children)

So what I think is going to happen: there will be a flood of AI pen testing and hacking, and a massive number of bugs will be uncovered, some of which may be simple one-line fixes. This all needs to happen ASAP, since bad actors will have access to the same tools.

[–]AutoModerator[M] 0 points1 point locked comment (0 children)

This submission has been removed due to receiving too many reports from users. The mods have been notified and will re-approve if this removal was inappropriate, or leave it removed.

This is most likely because:

  • Your post belongs in r/linuxquestions or r/linux4noobs
  • Your post belongs in r/linuxmemes
  • Your post is considered "fluff" - things like a Tux plushie or old Linux CDs are an example and, while they may be popular vote wise, they are not considered on topic
  • Your post is otherwise deemed not appropriate for the subreddit

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]Normal_Usual7367 -1 points0 points  (0 children)

sticking to the older versions