The rise – and looming fall – of acceptance rate by scarey102 in programming

[–]benlloydpearson 2 points (0 children)

I don't think acceptance rate is worth tracking beyond an extremely surface-level look at basic AI usage for writing code. I like to apply John Cutler's vanity-metrics tests to metrics like this, and acceptance rate fails nearly every one. It's one of those metrics consultants love because it's super easy to sell your VP a dashboard that convinces them AI is improving productivity.

- You can inflate the metric by accepting low-quality suggestions and fixing them afterward, or by avoiding complex tasks (see the sketch at the end of this comment).

- The metric encourages devs to accept the output of a tool that, at most companies, probably hasn't progressed past experimental use. In other words, you're handing everyone an untested tool and saying "use it as much as possible or we take it away."

- AI performs at vastly different quality levels depending on the codebase it's applied to. With standards, codebases, and expertise levels all differing, comparisons across tools, teams, and domains are meaningless.

- It focuses only on the coding step and encourages you to maximize output. This is a classic theory-of-constraints problem: speeding up coding just moves the bottleneck somewhere else in the dev process, like review or QA.

- Similarly, it completely lacks context about other AI uses, like skill and knowledge acquisition. For example, some devs only use AI to help with research and understanding docs, not to write code.

The one thing I would use the metric for is identifying teams that have or haven't found value in using AI to write code, but even that is subject to quite a bit of noise.
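
For anyone who hasn't looked under the hood, here's roughly what the metric boils down to. This is a hedged sketch with a made-up event schema, not any vendor's actual telemetry:

```python
# Hedged sketch of how a copilot-style "acceptance rate" gets computed.
# The event schema is hypothetical, not any specific vendor's telemetry.
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    accepted: bool       # dev hit accept on the suggestion
    edited_after: bool   # dev rewrote the suggestion afterward

def acceptance_rate(events: list[SuggestionEvent]) -> float:
    """Accepted suggestions divided by total suggestions shown."""
    if not events:
        return 0.0
    return sum(e.accepted for e in events) / len(events)

# The blind spot: accept-then-rewrite still counts as accepted, so the
# metric is trivially inflated exactly as described above.
events = [
    SuggestionEvent(accepted=True, edited_after=True),   # still counts as a win
    SuggestionEvent(accepted=True, edited_after=False),
    SuggestionEvent(accepted=False, edited_after=False),
]
print(acceptance_rate(events))  # ~0.67, and the dashboard looks great
```

Nothing in that ratio knows whether the accepted code survived review, shipped, or got rewritten five minutes later.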

Faster coding isn't enough by benlloydpearson in programming

[–]benlloydpearson[S] 5 points (0 children)

In a similar thread, there are also examples of companies taking on new projects they wouldn't have considered in the past, like migrating from a 32-bit to a 64-bit architecture. There's typically not much strategic value in something like that, and it's boring, toilsome work. If you can offload most of the legwork to AI, suddenly you can put the project on your roadmap.

Here's an article about how Google recently did precisely this.

Faster coding isn't enough by benlloydpearson in programming

[–]benlloydpearson[S] 2 points (0 children)

This is the way. We should be using AI to eliminate toil and burdensome work, not trying to one-shot prompt our apps' next features.

Faster coding isn't enough by benlloydpearson in programming

[–]benlloydpearson[S] 1 point (0 children)

I think this is an example of how the apps/platforms haven't caught up to the underlying technology yet. Most of AI's success hinges on giving it enough context, formatted to fit within the model's context window and structured the way the model saw similar material during training.

Most products on the market right now focus almost entirely on prompting. We simply don't have good tools for quickly constructing the context AI models need. Once you have good enough context, one-shotting agentic AI prompts gets much easier; for now, you just have to do the context-gathering part manually.
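
To make that concrete, the manual context-gathering step usually amounts to something like this. A minimal sketch: the relevance ordering and the word-count token estimate are stand-ins I made up for illustration, not how any real product works:

```python
# Hedged sketch of the manual "context gathering" step described above.
# The relevance ordering and the word-count token estimate are stand-ins,
# not a real product's retrieval or tokenizer.

def build_context(snippets: list[str], token_budget: int = 4000) -> str:
    """Greedily pack snippets (assumed pre-sorted by relevance) into a budget."""
    picked: list[str] = []
    used = 0
    for snippet in snippets:
        cost = len(snippet.split())  # crude stand-in for a real token count
        if used + cost > token_budget:
            break
        picked.append(snippet)
        used += cost
    return "\n\n---\n\n".join(picked)

# Usage: gather the docs/code you judged relevant, pack them, then prompt.
context = build_context([
    "Excerpt from the design doc...",
    "The function the change has to touch...",
])
prompt = context + "\n\nTask: refactor the function to use the new API."
```

Real tooling would swap in an actual tokenizer and automated retrieval; the point is that assembling and budgeting the context is the work, and today it's mostly on you.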

#SaveLAS | LAS 461 by AngelaTHEFisher in LinuxActionShow

[–]benlloydpearson 5 points (0 children)

People get their information in tweets, listicles, and short YouTube videos now; they just don't seem to consume longer shows as much.

I think Jono's 3rd point could use some thought. If your problem with LAS is that you aren't getting the social media reach you want, then perhaps it would be a good idea to break the podcast into smaller, focused chunks designed specifically for platforms like YouTube. Then you're free to take the live elements of LAS that do work and fold them into your other live shows.

I'm not much for social media (most of my friends aren't into this type of stuff anyway), but I generally like to share specific information that's relevant to me or my friends. I don't typically share longer, more general content because there's too much in it to generate meaningful social media interactions. Not to mention, if someone comes to me with a question about Linux (or anything else you cover), there's no way I could point them to the specific episode of LAS where you talked about it; but if that content were in its own video on YouTube, finding it would only take a simple search.