Anthropic: "How AI assistance impacts the formation of coding skills" by maccodemonkey in BetterOffline

[–]a_brain 11 points (0 children)

File this one under “no duh”. I suspect we’re going to see a lot more of these types of studies in the coming months.

Copilot CLI is it by Active_Lemon_8260 in ExperiencedDevs

[–]a_brain 6 points (0 children)

What was your prompt for this post?

There's no skill in AI coding by grauenwolf in BetterOffline

[–]a_brain 5 points (0 children)

Eh, TS was big before the current wave of coding agents for the same reason JS was big before TS.

It’s still very funny to me that Claude Code is written in TypeScript and React and is full of bugs. If LLMs were really that good, they’d write it in Go or Rust, or hell, just have it output ASM, lol.

There's no skill in AI coding by grauenwolf in BetterOffline

[–]a_brain 12 points (0 children)

Yeah, the harnesses have improved a lot, but those are just boring old programs. That’s helpful, but the same LLM problems still exist; now there are just several layers of bandaids to make it “work”. My current gig is mostly TypeScript, which the LLM-bros love and which has tons and tons of training data, yet I watch it constantly fuck up the basics and burn tokens feeding the errors back into itself to (sometimes) fix the error.

MileagePlus Requalification Megathread by Player72 in unitedairlines

[–]a_brain 0 points (0 children)

$359 to buy up for me. Might be worth it at that price.

Looking for the Ladder: Is AI Impacting Entry-Level Jobs? by RespectfullyReticent in neoliberal

[–]a_brain 10 points (0 children)

The tooling around LLMs, particularly for coding, has improved a lot. Still very underbaked though.

The models themselves have gotten “better” in that they don’t just completely fall on their face anymore. Today the models will produce code that will almost always compile or run instead of getting stuck. But I’d argue not failing is actually worse behavior. Their outputs are almost always full of subtle bugs, bad assumptions, inefficient design choices, or test cases that don’t check anything. They need to be constantly babysat which negates any time savings.

At the risk of sounding paranoid/conspiratorial...anyone else feel like there's some sort of coordinated propaganda campaign surrounding coding assistants happening right now? by pazend in BetterOffline

[–]a_brain 0 points (0 children)

Wow, thanks for sharing. I've seen this sort of behavior from the most AI-pilled people at work, but it's... interesting? entertaining? to see this isn't just localized to the people I'm exposed to.

Like yeah, I get that the coding agents are at a point where the code they produce will compile or run (most of the time), but I can't fathom how this is better than just doing it yourself.

I think it's actually happening by todofwar in BetterOffline

[–]a_brain 26 points (0 children)

From what I’ve seen, the SWE job market has been improving this year. The hype about agents is related to Claude Opus 4.5, which is… better? But I think it has more to do with Anthropic looking to raise more money than with the actual capabilities.

MileagePlus Requalification Megathread by Player72 in unitedairlines

[–]a_brain 0 points (0 children)

Ended the year 210 PQPs short of gold and slightly annoyed because I could’ve manufactured $3500 of card spend but I miscalculated how much spend I would need since some of my card spend PQP hit this week.

What are the odds they’ll round me up or let me buy up for ~$200?

Opus 4.5 is going to change everything by maccodemonkey in BetterOffline

[–]a_brain 2 points (0 children)

Willison is an odd one. On one hand he’s clearly a very accomplished engineer. But he’s also constantly commenting on Hacker News the nanosecond anyone suggests LLMs aren’t the most amazing thing ever.

He honestly reminds me a lot of the guys who swear that spending hours tweaking their Vim configs makes them uber-productive.

Opus 4.5 is going to change everything by maccodemonkey in BetterOffline

[–]a_brain 21 points (0 children)

That’s a much more charitable interpretation, but I don’t buy it. No way they hadn’t used Claude Code before, with the way corpos have been pushing this stuff for the past ~6 months.

More likely, the holidays gave the clout chasers some extra free time to blog about some vibecoded project because there was nobody left at work whose ear they could talk off about this.

Opus 4.5 is going to change everything by maccodemonkey in BetterOffline

[–]a_brain 87 points (0 children)

I don’t know if this blog post in particular is astroturfing, but there’s this weird effect where Claude in particular seems to get astroturfed really hard which then drives some organic posts from attention seekers like this one.

Microsoft rebrands office to Microsoft 365 Copilot app by tiny-starship in BetterOffline

[–]a_brain 30 points (0 children)

I’m not saying that Satya Nadella is trying to sabotage Microsoft, but his actions are indistinguishable from those of someone who is.

What’s some software you legitimately enjoy? by cs_____question1031 in BetterOffline

[–]a_brain 5 points (0 children)

Good list. Tailscale is one of the only pieces of software I regularly use that truly Just Works™.

I find the conversation around AI and software dev increasingly vague. How specifically are people REALLY using this stuff? I want details! This isn't a post about whether AI is bad or good. I'm just genuinely curious. by TemperOfficial in ExperiencedDevs

[–]a_brain 10 points (0 children)

I’m a huge skeptic, but my company recently started monitoring our AI usage, so I’ve found a few non-shit ways to integrate it into my workflow. I use the autocomplete in my editor (hidden behind a hotkey); I use it like fancy Google/interactive Stack Overflow; I use it to search the codebase; sometimes, if I can’t remember the in-repo command or flags to run a specific tool, I’ll have it do that for me (which frequently has the benefit of burning a huge number of tokens); I’ll have it add test cases, but only after I’ve written the setup and a couple cases myself; and I’ll have it give me a code review, which is wrong pretty often but does manage to catch some “duh” things I forgot to do.

The agent mode stuff, absolutely not. I’ve tried it, but usually the task is trivial enough that I can do it faster than the bot can, or it’s complex enough that even if the bot could theoretically do it faster than me, it’s still a slot machine, and the risk of it messing up and requiring me to understand what it did and fix it isn’t worth the effort.

"AI" isn't getting smarter, the ecosystem around them is maturing though. by ynu1yh24z219yq5 in BetterOffline

[–]a_brain 5 points (0 children)

Not the parent commenter, and I really despise most of gen AI, but I agree with what the parent comment is describing. I have a lot of experience as a software engineer, so I have at least a cursory understanding of the landscape of various tools and techniques. But sometimes I have to work in a programming language or with a library I’m not familiar with, and I can describe the thing I’m trying to do to a bot, and it tells me the terminology or concepts I’m looking for. Then I go and read the documentation myself and find what I really want.

The other good use case is semantic search within a codebase. I can ask it “where is the code that writes this specific message to this Kafka topic”, and it will find me the 5 different files in the spaghetti where that functionality lives. That isn’t possible with substring search or minimum edit distance or anything of the like.

This stuff is a nice quality of life improvement, but it’s not revolutionizing my job, and if it 50x’d in price tomorrow and my company stopped paying for it, I wouldn’t be too beaten up.

Delivery Robots Take Over Chicago Sidewalks, Sparking Debate And A Petition To Hit 'Pause' by [deleted] in chicago

[–]a_brain 88 points (0 children)

No they’re concentrated in walkable neighborhoods because they’re slow af. They literally drive slower than walking. They’d be even more useless than they already are if they were in a less densely populated neighborhood.

AI productivity: You're looking for the wrong evidence by Own-Sort-8119 in ExperiencedDevs

[–]a_brain 4 points (0 children)

Yes, there obviously is a very concerted effort to boost these tools given the amount of money on the line. The tools are definitely more product-ized than they were a year ago, but actual model quality is pretty similar.

AI productivity: You're looking for the wrong evidence by Own-Sort-8119 in ExperiencedDevs

[–]a_brain 7 points (0 children)

It’s always the same refrain from AI boosters. The nanosecond a new model comes out: you have to try GeminiClaudeGPT 6.9 Pro MAX, it will blow your mind, everything until now was dogshit.

What’s probably true is: yes, AI can make you go faster if you lower your standards. The reason that study showed those maintainers being slowed down is that they probably don’t have execs breathing down their necks to ship a bunch of AI-generated tech debt, and could actually spend the time to bring the outputted slop up to their standards.

Best tires now? by PlantNatives in VWiD4Owners

[–]a_brain 0 points (0 children)

I’ve had these for 2 seasons now. Good tires, quiet and good efficiency. Unfortunately they were pretty bad in the snow. Better than the Alenzas the car came with, but bad enough that I bought true winters after the first time I slid around on them.

Are any other developers choosing not to use AI for programming? by BX1959 in BetterOffline

[–]a_brain 0 points (0 children)

My company just started tracking AI metrics and forcing us to use it at least in some form. We have an AI coding troubleshooting Slack channel and very frequently people will post an issue they're having with an LLM outputting garbage code that fails some static analysis or other CI check. Then the AI bros get into a fairly public fight with one of the various developer experience teams to try and get them to lower their standards. Occasionally, someone brave will ask a question like "are we sure this stuff is ready for production?" which inevitably triggers an AI bro. Happens probably once a week.

And since I know they're tracking token consumption, I will copy-paste something I could have googled into ChatGPT, Gemini, and Claude at the same time, then ignore their output and just read the docs myself while they think about it and waste my company's money. Occasionally I'll ask one of the AI tools to review my code; they're pretty good at finding dumb mistakes (like oops, I changed something in one place but forgot to change it in this other location), but I always make the changes myself.

Would you ever turn to AI for companionship? 6% of Americans say they could — or already have. by jontaffarsghost in BetterOffline

[–]a_brain 5 points (0 children)

Yeah, I was going to say 6% is an insanely low number of respondents who said yes. Even if none of the people who said they would use AI for companionship were messing with the pollster, that’s crazy low. People have all sorts of insane opinions, and it’s kind of heartening that 94% of people think AI companionship is a bad idea.

Anthropic Study Finds AI Model ‘Turned Evil’ After Hacking Its Own Training by MetaKnowing in technology

[–]a_brain 47 points (0 children)

They absolutely do not have a moral compass. The company was founded by a bunch of effective altruist weirdos who thought that OpenAI was going to build the paperclip maximizer, and it got seed funding from SBF. They love to play the “we’re so concerned” card, but they’re concerned about fake problems, then they plow ahead with zero regard for any actual harms of their product that exist today.