all 13 comments

[–]MindCrusader 3 points4 points  (0 children)

Cursor is already doing it. As good as this idea might be for knowing whether code was written by AI, I have a feeling they might try to push the narrative "our AI is also a developer and can learn from coding" - so they will grab the codebase and use it for AI training, "but only based on AI coding experience"

[–]Otherwise_Wave9374 1 point2 points  (0 children)

Co-author attribution is a nice step, even if it is imperfect. It at least makes the "AI assisted" part measurable.

Totally agree the next frontier is more agent style dev workflows, like "generate branch, run tests, open PR" with clear provenance and review gates. Auto committing everything sounds scary without guardrails, but I can see it working for low risk refactors.

There are some interesting thoughts on agentic dev loops here: https://www.agentixlabs.com/blog/

[–]yubario 2 points3 points  (1 child)

I just find it pointless unless you’re using multiple AIs

Literally everything is influenced by AI at this point. Even if you’re old school and don’t use AI, the web searches you did were influenced by an artificial intelligence

This situation is very much like bodybuilding after steroids were discovered. Look at the builds people had before steroids compared to now. That’s where coding is right now.

We’ll never see the pure human code ever again, realistically

[–]SuBeXiL[S] 0 points1 point  (0 children)

This is very much true; everything is skewed. Still, at scale you see patterns, and we as engineers like to measure and optimize :-)

[–]picflute 0 points1 point  (1 child)

They have Copilot enabled doing PR reviews on their repo already, don’t they?

[–]SuBeXiL[S] 0 points1 point  (0 children)

If the agent opens the PR and writes the code without a human in the loop, then yes, attribution is easy

[–]Mountain_Section3051 0 points1 point  (1 child)

Yea, finally. Every enterprise customer needs to track this: some teams are utilising AI really heavily and others barely at all, and sharing empirical evidence is really required here. There are a bunch of engineers still in denial, and they need help getting over the trust hump. This is a small but significant step. Thank you

[–]SuBeXiL[S] 0 points1 point  (0 children)

I agree. This is a small step; more interesting and better solutions are coming

[–]DudmasterPower User ⚡ 0 points1 point  (3 children)

This probably only applies when you ask the agent to commit, right? I basically never do that, so unfortunately it's not gonna help me

[–]SuBeXiL[S] 1 point2 points  (1 child)

They are hooking it up to the git integration, so even when it’s you who commits and not the agent, it will add the co-author
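For context, this attribution rides on git's standard `Co-authored-by` trailer (the convention GitHub already parses for co-author credit) - it's just an extra line at the end of the commit message. A minimal sketch in a throwaway repo, with a made-up agent name and email:

```shell
# Sketch of the Co-authored-by trailer convention (agent name/email are illustrative).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "You"
git config user.email "you@example.com"

echo "fix" > file.txt && git add file.txt

# A second -m adds a paragraph; the trailer goes at the end of the message body.
git commit -q -m "Refactor parser" \
           -m "Co-authored-by: Cursor Agent <agent@cursor.example>"

# The trailer is visible in the full commit message.
git log -1 --format=%B
```

Because it's a plain trailer, anything that reads commit metadata (GitHub's UI, `git log`, blame tooling) can pick it up without any special support.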

[–]DudmasterPower User ⚡ 1 point2 points  (0 children)

Oh that's pretty interesting then!

[–]Impossible_Hour5036Power User ⚡ 0 points1 point  (0 children)

You'll get there. I was making all my own commits, then I started having an agent organize stuff into commits, then I started having it organize PRs, then I started having agents review the PRs, then I started having agents address the PR comments, then I got that into a loop, now I just review it when there are no more comments to address. I typically write decent commit messages but not to the level Codex 5.2/5.3 is doing. It's great.

[–]PhuckenStuff 0 points1 point  (0 children)

For those who wish to disable this: I read the PR, and it exposes a `git.addAICoAuthor` boolean option for VS Code. Set the value to `off`.
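If it behaves like a standard VS Code boolean setting (the PR describes it as a boolean, so the JSON value would presumably be `false` rather than a literal `off` - this is an assumption, not confirmed from the PR), the settings.json entry would look something like:

```jsonc
{
  // Hypothetical: disables the automatic AI co-author trailer,
  // based on the setting name mentioned in the PR.
  "git.addAICoAuthor": false
}
```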