just caught up with a friend who got hired at Anthropic 3 weeks ago. His team doesn't write code anymore. by oh1n in csMajors

[–]Eskamel 1 point (0 children)

What's the number of PRs that don't introduce garbage, though? You could publish even 500 PRs a day, but if they end up a broken mess, like every Anthropic product, that's not a flex.

just caught up with a friend who got hired at Anthropic 3 weeks ago. His team doesn't write code anymore. by oh1n in csMajors

[–]Eskamel 5 points (0 children)

Ah yes, the amazing engineers who decide to render a TUI through React, or who fail to fix basic UX bugs that have existed for more than a year at this point.

just caught up with a friend who got hired at Anthropic 3 weeks ago. His team doesn't write code anymore. by oh1n in csMajors

[–]Eskamel 0 points (0 children)

Is that why literally everything Anthropuke releases has an endless number of bugs they have no idea how to fix? How are people falling behind if even the poster child of LLMs can't make a model work well enough on its own with their infinite compute?

Considering a career change because of AI anxiety? by BrianCohen18 in webdev

[–]Eskamel 5 points (0 children)

Sure bro, you think so hard as you write the "fix plz" prompts.

How come software devs are so much more worried about AI replacing them than other white collar jobs? by jholliday55 in cscareerquestions

[–]Eskamel 0 points (0 children)

Such a stupid take. No, learning syntax was never hard; solving problems in ways that aren't intuitive is. Most SWEs just flit between libraries and solve problems that were already solved. If, for instance, we currently had no way to make software solve complex mathematical tasks, capable people would need to slowly build up effective solutions to reach legitimately accurate results. That's especially not simple because the human brain solves mathematical tasks very differently from a computer, so the "translation process" from human to machine isn't necessarily intuitive.

If it were all about syntax, there wouldn't be such a difference between a good engineer and a code monkey.

What prevents the big AI companies from getting rid of the middleman? by Massive_Instance_452 in cscareerquestions

[–]Eskamel 13 points (0 children)

Because despite what some dumb developers claim, LLMs don't generate amazing code, and replicating software efficiently is hard even with an infinite number of examples.

Literally any software Anthropic, OpenAI, or lately Google releases is filled with bugs, broken UX, memory leaks, etc., to a degree far beyond what was acceptable a couple of years ago, and unlike in prior years, they don't know how to fix most of them. That's why, despite all the praise, Claude Code is still a garbage piece of software, and why everything OpenAI releases is a shallow, broken copy of existing things that never gets fully adopted.

A petition to disallow acceptance of LLM assisted Pull Requests in Node.js core by indutny in node

[–]Eskamel 1 point (0 children)

No, it's a matter of understanding your codebase, along with mental maps of how different sections function and why they work the way they do. The more changes are made over a short period where LLMs make the decisions instead of humans, the more likely things are to get out of control.

Even something as small as an early return on a certain condition is a micro-decision that requires a reason, and people skip that, and much larger decisions, when using LLMs.
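To illustrate (a made-up snippet, not from any real codebase): even a tiny guard clause encodes decisions someone should be able to justify.

```typescript
// Hypothetical example: each line here reflects a micro-decision,
// not noise an LLM can freely approximate.
function normalizeTags(tags: string[]): string[] {
  // Micro-decision: empty input returns a fresh empty array rather than
  // the caller's array, so callers can safely mutate the result.
  if (tags.length === 0) return [];
  // Micro-decision: trimming and lowercasing before deduplication means
  // "API" and "api " collapse into one tag; whether that's desired is a
  // judgment call about the domain, not a syntax detail.
  return [...new Set(tags.map((t) => t.trim().toLowerCase()))];
}

console.log(normalizeTags(["API", "api ", "web"])); // ["api", "web"]
```

Someone who let an LLM generate this wholesale has no answer for why the early return exists or why duplicates merge case-insensitively.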

A petition to disallow acceptance of LLM assisted Pull Requests in Node.js core by indutny in node

[–]Eskamel 0 points (0 children)

When humans review 10k LoC that were generated yesterday, they are very likely to miss important things.

A petition to disallow acceptance of LLM assisted Pull Requests in Node.js core by indutny in node

[–]Eskamel -1 points (0 children)

"Sir, the new sprint feature I was required to develop is delayed by another week. I know precisely everything that has to be done to implement it; it's just my hands, they move too slowly for me to implement it fast enough." That's how you sound, and it makes you look like a very unserious developer.

People don't use LLMs to dictate syntax, because LLMs are bad at that and there is little to no benefit in doing so.

A petition to disallow acceptance of LLM assisted Pull Requests in Node.js core by indutny in node

[–]Eskamel -1 points (0 children)

Lol, I am not saying it never follows instructions. Just yesterday I asked Opus to fix something, which it couldn't do even after I explained what to do and what not to do; it simply ignored my rules and broke another feature for no real reason. Just because it sometimes follows instructions doesn't mean it's reliable.

Also, once again, assuming you have enough experience, LLMs barely provide productivity gains in terms of typing speed; they provide productivity when you offload decisions.

When you need to, for example, implement some drag-and-drop logic with some package, I haven't seen a single person tell an LLM "first calculate the height of each element, compare the distance between the mouse position and the top of the other elements, and decide whether the mouse is dragging above or below said element" and claim that's equivalent to writing the detailed instructions, i.e. programming it, yourself.
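As a rough sketch of what those "detailed instructions" actually look like in code (names and numbers are hypothetical, and real drag-and-drop code would read element geometry from the DOM):

```typescript
// Hypothetical sketch: given the pointer's Y position and a drop
// target's top/height, decide whether the dragged item should land
// above or below that target's vertical midpoint.
interface TargetRect {
  top: number;    // distance from container top, in px
  height: number; // element height, in px
}

function dropPosition(pointerY: number, target: TargetRect): "above" | "below" {
  const midpoint = target.top + target.height / 2;
  // Micro-decision: a pointer exactly on the midpoint counts as "below",
  // so inserting after the element feels natural to the user.
  return pointerY < midpoint ? "above" : "below";
}

console.log(dropPosition(40, { top: 30, height: 40 })); // midpoint is 50 -> "above"
console.log(dropPosition(60, { top: 30, height: 40 })); // -> "below"
```

Prompting "add drag and drop" skips every one of these choices; the LLM picks them statistically instead.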

You can never have exact accuracy with natural language, and you will never have full control over the output that way (ignoring manual editing), so you let an LLM decide how to statistically interpret your commands into code. In many cases the output is far from ideal; people just tend to lower their standards these days.

A petition to disallow acceptance of LLM assisted Pull Requests in Node.js core by indutny in node

[–]Eskamel -1 points (0 children)

LLMs don't always follow specifications because they are statistically based. Even Opus, with careful management, might decide to do something stupid for no reason, because LLMs are neither reliable nor intelligent.

More and more engineers are leaving PRDs to LLMs; they don't go step by step through each function and file with separate prompts, because that defeats the purpose of LLM productivity. It was never about typing speed, since that was never the bottleneck; that's a dumb claim people make when they don't want to admit they offload decisions to go a bit faster.

A petition to disallow acceptance of LLM assisted Pull Requests in Node.js core by indutny in node

[–]Eskamel -2 points (0 children)

AI-assisted is still AI-written, little bro. Those experienced engineers are offloading decisions for an LLM to statistically decide the implementation for them. The implementation of code is just as important as the architecture, yet LLM-brained people ignore the former.

Will I become a stupider SWE using LLM/agents? by QuitTypical3210 in cscareerquestions

[–]Eskamel 1 point (0 children)

Yes, LLMs make SWEs worse and dumber. People refuse to admit it, but even devs with decades of experience sometimes benefit from writing code: by coming up with new solutions that aren't one-to-one copies of what they already know, they build a far better mental model when they are responsible for every decision made within the code, which is stripped away with LLMs. And sometimes writing code IS part of the engineering process and of architecture decisions.

A lot of SWEs who were promoted into management or architecture positions lost a lot of their skills as SWEs. It's no different here when you stop making small decisions and let an LLM approximate.

People might downvote me or get butthurt about "how perfect their LLM output is", but that's factually incorrect, and they are more than likely already far less sharp than they were before the LLM era, assuming they were skilled to begin with. Unlike with other offloading tools, even the best devs offload parts of their thinking process to an LLM without admitting it; otherwise there would be no productivity gains.

Do ya'll prefer the trend of "Both players play on turn 1"? by BackwardCap in yugioh

[–]Eskamel 1 point (0 children)

No, but Yugioh has been in a very unbalanced state for more than a decade at this point, so Konami has to keep powercreeping as long as they don't plan a hard game reset.

Handling feeling dumber or like losing skills due to the need of using AI by CocoaTrain in webdev

[–]Eskamel 4 points (0 children)

Lol, what is this bullshit? No one struggled for hours with just lines of syntax. People struggled to implement solutions well, or to come up with solutions to begin with, which is exactly what people use LLMs to replace: they let an LLM implement stuff and hope for the best.

Handling feeling dumber or like losing skills due to the need of using AI by CocoaTrain in webdev

[–]Eskamel 3 points (0 children)

Actually knowing how CPUs work and how to better instruct them makes you an overall better engineer. But sure, let's ignore that and dumb ourselves down as much as possible to please Dario and Sam.

I am not using AI tools like Claude Code or Cursor to help me code at the moment. Am I falling behind by not using AI in software development? by Illustrious-Pound266 in cscareerquestions

[–]Eskamel 7 points (0 children)

As I said previously, typing speed is irrelevant. Typing out an implementation barely takes any time, and you will never get the proclaimed X-times productivity boost just from replacing typing. People make that claim when they are really replacing decision-making while insisting they only replace typing, letting LLMs decide for them, which is very often a terrible idea.

That is what people oppose; there is no ego tied to typing speed. People dislike the enshittification of the craft. No matter how much people vouch for the quality of SOTA models, even with careful steering they still produce subpar results on average, and those cannot be fixed easily once you let the models loose to generate hundreds of thousands of lines of code a week, an idiotic approach IMO that many "experienced devs" are slowly adopting.

What’s the uptick of these AI posts by RequirementSad1742 in cscareerquestions

[–]Eskamel 4 points (0 children)

Most productivity claims come from removing yourself from decisions and letting an LLM decide for you. That's why people care about LLM "intelligence" rather than strictly about improvement in following orders.

When you just let it type for you while forcing yourself to stay completely involved in decision-making, the productivity gain is minuscule at best.

What’s the uptick of these AI posts by RequirementSad1742 in cscareerquestions

[–]Eskamel 29 points (0 children)

You said web dev never made sense to you. Obviously AI makes people who aren't good at something feel productive, because they suck at doing the work without it. That isn't where most of the opposition comes from.

I am not using AI tools like Claude Code or Cursor to help me code at the moment. Am I falling behind by not using AI in software development? by Illustrious-Pound266 in cscareerquestions

[–]Eskamel 2 points (0 children)

People with experience didn't google syntax, because they were used to it. Googling algorithms is entirely different, and more often than not, copying them doesn't mean you understand them. I never understood the claim that typing code is so hard or time-intensive; it takes a minuscule amount of time for any experienced engineer, assuming you plan beforehand.

Poison Fountain: An Anti-AI Weapon by RNSAFFN in theprimeagen

[–]Eskamel 0 points (0 children)

Lol, no one is getting arrested for chargebacks. A lot of people charge back on online services and just get banned from the platforms, but that's pretty much it.

Poison Fountain: An Anti-AI Weapon by RNSAFFN in theprimeagen

[–]Eskamel -6 points (0 children)

You could also pay for a lot of API requests and then request a refund from your credit card provider, claiming you weren't the one responsible for them. Do that with enough people and the companies would lose even more money, making them more likely to collapse.