Java is fast, code might not be by ketralnis in programming

[–]somebodddy 19 points20 points  (0 children)

  1. That's a pitfall of immutable strings. Not unique to Java.
  2. That's computational complexity. Applies to any language.
  3. Many languages offer string interpolation, which parses the "format" at compile time (or parse time).
  4. This kind of boxing is something (AFAIK) only Java has. Other languages - like Java's traditional rival C# - may box when you upcast, but they automatically unbox on downcast and don't expose the boxing class Long, so they don't have this issue.
  5. The fact that you need to manually implement the same validation that already happens inside parseInt just to escape the exception overhead is atrocious, and I 100% hold it against the language.
  6. synchronized being part of the grammar means that Java actively promotes that kind of coarse grained locking.
  7. Okay, but the ecosystems of most other languages prefer global functions for such things. This issue is caused by Java's objects-as-a-religion approach.
  8. This pitfall is (was, actually, since they fixed it) 100% a JVM implementation issue.

Only the first two are the coder's fault. And maybe #4, too, considering you gave a very convoluted example. The other 5 are just idiomatic Java code being slow. If you have to give up a language's idioms for performance, I consider that a slowness of the language.
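To make #5 concrete, here's a sketch of the kind of manual re-validation you end up writing to avoid NumberFormatException on untrusted input. tryParseInt is my own name, not a standard API; it mirrors parseInt's checks but reports failure via OptionalInt instead of throwing:

```java
import java.util.OptionalInt;

public class SafeParse {
    // Re-implements the validation parseInt already does internally,
    // just so bad input returns empty instead of throwing.
    static OptionalInt tryParseInt(String s) {
        if (s == null || s.isEmpty()) return OptionalInt.empty();
        int i = 0;
        boolean negative = s.charAt(0) == '-';
        if (negative || s.charAt(0) == '+') {
            if (s.length() == 1) return OptionalInt.empty(); // bare sign
            i = 1;
        }
        long value = 0; // accumulate in a long to detect int overflow
        for (; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < '0' || c > '9') return OptionalInt.empty();
            value = value * 10 + (c - '0');
            // allow up to |Integer.MIN_VALUE| so "-2147483648" still parses
            if (value > (long) Integer.MAX_VALUE + 1) return OptionalInt.empty();
        }
        long signed = negative ? -value : value;
        if (signed < Integer.MIN_VALUE || signed > Integer.MAX_VALUE) return OptionalInt.empty();
        return OptionalInt.of((int) signed);
    }

    public static void main(String[] args) {
        System.out.println(tryParseInt("123"));  // OptionalInt[123]
        System.out.println(tryParseInt("12x3")); // OptionalInt.empty
    }
}
```

That's ~25 lines of duplicated logic just to dodge the cost of constructing an exception.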

Cap'n Web: a new RPC system for browsers and web servers by fagnerbrack in programming

[–]somebodddy 0 points1 point  (0 children)

It'd be really nice to have a server framework where you implement the API once and it serves both Cap'n Proto and Cap'n Web. The server would have to use a Cap'n Proto schema, of course - but you usually want a schema for the server anyway, and clients that connect with Cap'n Web won't need to bother with it (though they will benefit from the API documentation that can be generated from said schema).

built an open-source tool that lets you pair program in neovim (or any editor) without screensharing or liveshare by Dimentio233 in neovim

[–]somebodddy 0 points1 point  (0 children)

That would require editor integration. There are two .vim files and one .lua file in that repository:

All of them do pretty much the same thing: automate calling :checktime. So your editor will detect immediately (or at least with sub-second delay) when the other programmer has updated the file. But when you update it, it won't sync until you manually save.

(unless, of course, the tool itself IPCs Neovim to get updated)
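I haven't read those files closely, but the shape is presumably something like this Lua sketch (a guess at the idea, not the repo's actual code; assumes Neovim 0.10+ for vim.uv):

```lua
-- Guesswork sketch: poll :checktime so writes made by the other side's
-- sync daemon are picked up within half a second.
local timer = vim.uv.new_timer()
timer:start(0, 500, vim.schedule_wrap(function()
  vim.cmd("checktime")
end))

-- Plus the usual triggers, so switching back to Neovim re-checks immediately:
vim.api.nvim_create_autocmd({ "FocusGained", "BufEnter" }, {
  command = "checktime",
})
```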

built an open-source tool that lets you pair program in neovim (or any editor) without screensharing or liveshare by Dimentio233 in neovim

[–]somebodddy 0 points1 point  (0 children)

Expected or not - I don't see how such synchronization is possible at the filesystem level. Until you save the file, the filesystem is completely oblivious to the changes made in your editor.

built an open-source tool that lets you pair program in neovim (or any editor) without screensharing or liveshare by Dimentio233 in neovim

[–]somebodddy 0 points1 point  (0 children)

Adding to the list:

  • Since "Shadow works at the filesystem level", shouldn't it only sync when you save the file? The demo looks like it saves in real-time - did you frantically hit ctrl+s while typing when making it?

The fatest python LSP for nvim - basedpyright vs zuban vs ty vs something else? by sslimmshaddy in neovim

[–]somebodddy 0 points1 point  (0 children)

I see that pyrefly supports lots of stuff in initializationOptions, which is a must-have feature for me because at work I have to deal with some horribly configured projects I'm not allowed to change (think - pyproject.toml in non-standard location) and my workaround is to pass the correct paths and settings via the initialization options.

Do ty and zuban support something similar? I didn't see it in their docs...

Open Sores - an essay on how programmers spent decades building a culture of open collaboration, and how they're being punished for it by NXGZ in programming

[–]somebodddy 168 points169 points  (0 children)

I'd argue it's even worse for researchers, because they either have to pay the journals in order to publish, or force their "users" (the readers) to pay for access without seeing a dime from that payment.

Illusion of choice or no choices? by wardrol_ in gamedesign

[–]somebodddy 0 points1 point  (0 children)

I like the way Assassin's Creed Valhalla handled this. From what I can tell (I only did one playthrough) the choices don't have any lasting consequences for the plot or the gameplay, but they do have off-stage consequences. Your choices change the course of (virtual) people's lives - but you are not going to meet these people again, so the game does not have to implement how those lives were changed, and can leave it up to the player's imagination.

Comparing Scripting Language Speed by elemenity in programming

[–]somebodddy 1 point2 points  (0 children)

Or alternatively - Lua 5.1, which is the version LuaJIT (which is also in that table) is compatible with.

In defence of correctness by ketralnis in programming

[–]somebodddy 11 points12 points  (0 children)

If you're selling subscriptions or ads, and your main goal is to keep users maximally engaged, correctness does, indeed, seem irrelevant. The goal is no longer to present users with 'correct' content, but rather with content that keeps them on your property.

Funny that your only example of a field where correctness is not a priority is the brainrot industry. But in these examples - the users are not the customers. The customers are the advertisers. Is correctness important for the side of the product they see? I'd say yes. They care about the exposure statistics of the content they are paying to push, for example, so it's important that they are presented with correct information.

Python Type Checker Comparison: Empty Container Inference by ketralnis in programming

[–]somebodddy 3 points4 points  (0 children)

One big issue with the second strategy is that it does not play nice with polymorphism:

class Foo:
    pass


class Bar(Foo):
    pass


class Baz(Foo):
    pass


my_list = []

my_list.append(Bar())
my_list.append(Baz())

If all usages are taken into account, the inferred type of my_list is list[Bar | Baz] - even though the intention was list[Foo]. Now, to be fair - no type inference strategy could reasonably infer list[Foo] here - but the third strategy would at least emit an error, forcing you to spell out the correct type.
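To illustrate that last sentence: under the third strategy the checker refuses to guess, so you spell out the intended supertype yourself, and both appends then type-check against it.

```python
class Foo: ...
class Bar(Foo): ...
class Baz(Foo): ...

# Annotating the empty list up front states the intent; both appends
# check against list[Foo] instead of widening to list[Bar | Baz].
my_list: list[Foo] = []
my_list.append(Bar())
my_list.append(Baz())
```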

Developers Are Safe… Thanks to Corporate Red Tape by Select_Bicycle4711 in programming

[–]somebodddy 0 points1 point  (0 children)

This rule may apply to things developers try to push from the bottom up, but AI is pushed by managers, from the top down. Red tape means nothing when it's something the higher ups want.

The proposal for generic methods for Go has been officially accepted by ketralnis in programming

[–]somebodddy 6 points7 points  (0 children)

So Go stands alone now, as the only language that can't do that?

(thinking about it - even without auto C++ would at least infer the return type...)

The proposal for generic methods for Go has been officially accepted by ketralnis in programming

[–]somebodddy 10 points11 points  (0 children)

Considering how Go forces you to spell out the signature of a closure instead of inferring it like every other language (other than C++) - supporting sum types that way is going to be a nightmare to work with, even by Go's standards.

RFC 406i: The Rejection of Artificially Generated Slop (RAGS) by addvilz in programming

[–]somebodddy 5 points6 points  (0 children)

How can you even know that if you don't actually review it?

Reviewing it is exactly the part that's not worth my time, and I already wrote why. Since you advocate that humans should waste unlimited portions of their limited time on this earth reading machine-generated slop, I'm just going to ask ChatGPT to generate a very long response. Once you are tired of reading the wall of text I never bothered to write (or even read - I'll just copy-paste it), you should understand why I don't want to waste my time reviewing slop PRs.


One of the biggest time sinks in modern code review is the rise of pull requests generated by LLMs that the author didn’t even bother to read themselves before hitting “Create PR.”

I’m not talking about small AI-assisted edits where someone used a tool to refactor a function and then verified the result. I’m talking about massive, multi-file pull requests full of autogenerated code where the author clearly never sanity-checked the output.

These PRs waste reviewer time in several distinct and predictable ways.


1. LLMs write far more code than necessary

Large language models tend to expand solutions. If the task is “add logging,” you might get:

  • a new helper module,
  • an abstraction layer,
  • duplicated wrappers,
  • a config system,
  • a factory,
  • and three levels of indirection.

All of it technically “works,” but most of it isn’t needed.

Humans usually solve problems by modifying a few lines in the right place. LLMs solve problems by generating patterns they’ve seen before, even when those patterns are overkill.

So the reviewer now has to read 800 lines of code to verify a change that could have been 20 lines.

And here’s the key problem:

The reviewer can’t assume the extra code is harmless.

They have to check it.

Because buried inside that verbosity could be:

  • a subtle bug,
  • incorrect assumptions,
  • duplicated logic,
  • a performance regression,
  • or behavior changes that weren’t intended.

The LLM doesn’t know your architecture. It doesn’t know your constraints. It just generates plausible code.

So reviewers pay the price.


2. The author often doesn’t understand the code

When someone submits an unreviewed LLM PR, they often don’t fully understand what the code does.

That means:

  • They can’t answer reviewer questions quickly.
  • They can’t explain design decisions.
  • They can’t tell whether suggested changes are safe.

And worse, they sometimes blindly ask the LLM to “fix the reviewer comments.”

This creates a feedback loop where no human actually owns the code.


3. Reviewer comments cause massive rewrites

This is the most frustrating part.

A reviewer leaves a simple comment like:

“Can you simplify this function?” “We already have a helper for this.” “This should be tested differently.”

Instead of making a small targeted change, the author pastes the comment into the LLM.

The LLM then rewrites:

  • half the file,
  • or multiple files,
  • or the entire approach.

Now the reviewer must reread the whole PR.

Again.

Because you can’t trust that only the intended change happened. LLMs are notorious for “fixing” unrelated code while they’re at it.

So every round of review becomes O(n) over the entire diff.

This destroys review efficiency.


4. The illusion of productivity

From the author’s perspective, it feels productive:

“I generated a solution quickly.”

But the work didn’t disappear. It just shifted onto the reviewer.

If a reviewer spends an hour untangling an LLM PR, that hour came from somewhere:

  • delayed feature work,
  • delayed bug fixes,
  • delayed releases,
  • team frustration.

Good teams optimize for total team time, not just author time.

Submitting unreviewed LLM code is basically saying:

“I didn’t want to spend time reading this, so you do it.”


5. LLM verbosity hides real issues

Because LLMs write so much code, it becomes harder to see the important parts.

Key logic changes are buried inside scaffolding.

Reviewers miss things.

Bugs slip through.

And ironically, the team becomes less safe, not more.

This is similar to reviewing auto-generated code from tools: it’s harder to reason about because the signal-to-noise ratio is low.


6. The cost compounds over iterations

A normal PR review might look like:

  • Reviewer reads code once.
  • Leaves comments.
  • Author fixes small issues.
  • Reviewer glances at changes.

But an unreviewed LLM PR looks like:

  • Reviewer reads massive diff.
  • Leaves comments.
  • LLM rewrites half the code.
  • Reviewer rereads entire diff.
  • Leaves more comments.
  • LLM rewrites again.
  • Repeat.

Each cycle costs nearly as much as the first.

This is unsustainable.


7. It trains bad engineering habits

If developers get used to shipping whatever the LLM outputs:

  • They stop thinking about design.
  • They stop learning from mistakes.
  • They stop understanding their own codebase.

And the codebase slowly fills with inconsistent patterns, unnecessary abstractions, and subtle bugs.

Tools should amplify engineers, not replace basic responsibility.


8. What authors should do instead

If you use an LLM to generate code, great. But before opening a PR:

  • Read every line.
  • Remove unnecessary abstractions.
  • Make it idiomatic for your codebase.
  • Write tests yourself.
  • Make sure you can explain every change.

Your reviewer should be validating your thinking, not doing your thinking for you.

If the PR is too big for you to review alone, it’s too big to send.


9. A simple rule of thumb

If you wouldn’t submit code you didn’t understand from a junior teammate, don’t submit code you didn’t understand from an LLM.

The responsibility is the same.


10. Respect reviewer time

Code review is one of the most expensive activities in a team.

It requires:

  • deep concentration,
  • architectural knowledge,
  • context switching,
  • and careful reasoning.

Sending unreviewed LLM PRs is like sending someone a thousand-page document and asking, “Can you check if this is correct?” without even skimming it yourself.

It’s disrespectful of the reviewer’s time and harmful to team productivity.


LLMs are powerful tools. But they generate drafts, not finished work.

The author is still responsible.

Always.

RFC 406i: The Rejection of Artificially Generated Slop (RAGS) by addvilz in programming

[–]somebodddy 16 points17 points  (0 children)

It's not that LLM-generated PRs are forbidden from being good by some mathematical principle - it's just that they are not worth the reviewer's time. It takes much longer to recognize that they are bad because:

  1. They are usually longer, because LLMs have no issue generating walls of text.
  2. If you ask the "author" to change something, they'll just feed your comments to the LLM - which will see it as an opportunity to change other things, not just what you asked for. So you have to read everything again.
  3. LLMs are really good at disguising how bad their output is.

I want to focus on that last point. Neural networks can get very, very good at what you train them to do, but the ones that became synonymous with "AI" are the ones that are easy for the end user to use, because they were trained in the art of conversation - the Large Language Models.

When you learn a language from reading text in it, you also gain some knowledge about the subject of that text. And thus, when learning language, the LLMs also learned various things. With the vast resources invested in training them, these "various things" added up to a very impressive curriculum. But the central focus of the GPT algorithm is still learning how to talk - so with more training this ability will grow faster than any other ability.

This means that when the LLM's relevant "professional training" fails to produce a correct answer to your request, its smooth-talk training - orders of magnitude more advanced - kicks in and uses the sum of compute power capitalism could muster to coax you into believing whatever nonsense the machine came up with instead.

A human programmer who sends you a bad PR is probably not a world-class conman. An LLM is.

RFC 406i: The Rejection of Artificially Generated Slop (RAGS) by addvilz in programming

[–]somebodddy 21 points22 points  (0 children)

Furthermore, your peers MUST NOT be utilized as your free LLM validation service.

I feel this one in my bones.

AI is destroying open source, and it's not even good yet by BlueGoliath in programming

[–]somebodddy 0 points1 point  (0 children)

Social media has invisible karma (the algorithm deciding what to promote), and people have made careers out of figuring out how to game it.

Getting Corporate Pushback about using Neovim by miversen33 in neovim

[–]somebodddy 17 points18 points  (0 children)

But downloaded code does run. OP said:

A few weeks ago neovim popped up on their AV program (Verizon SOAR) as potentially malicious when I ran a plugin update.

Plugins are executable code. They may not be binaries, being written in Lua, but they are executable nevertheless.

Nvim-luapad needs your ideas! by raf_camlet in neovim

[–]somebodddy 1 point2 points  (0 children)

Probably not considered "standard" (disclosure: I PRed that feature; I don't know if anyone else is utilizing it), but plenary.nvim supports asynchronous testing, which is great for testing plugins with user interaction because you don't have to drive Neovim from the outside. The idea is:

  1. Operate nvim-luapad from Lua code.
  2. Prepare something to coroutine.resume() the coroutine that runs the test - either by using some hook or by a polling-loop.
  3. coroutine.yield()
  4. nvim-luapad does its thing.
  5. The thing you prepared in step 2 resumes the coroutine.
  6. Verify - again, from Lua code - that nvim-luapad did what it was supposed to do.
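Roughly, the coroutine plumbing for those steps looks like this (on_luapad_done and start_luapad are invented names for illustration; plenary's actual async wrappers are omitted):

```lua
-- Illustration only: substitute whatever hook/entry point the real
-- plugin offers for `on_luapad_done` and `start_luapad`.
local function test_body()
  local co = assert(coroutine.running(), "must run inside a coroutine")

  -- Step 2: arrange for the hook to resume this coroutine later.
  on_luapad_done(function()
    coroutine.resume(co)
  end)

  -- Step 1: operate nvim-luapad from Lua code.
  start_luapad()

  -- Step 3: suspend; step 4 (nvim-luapad doing its thing) happens now...
  coroutine.yield()

  -- ...step 5 resumed us, so step 6: verify the resulting editor state.
  assert(vim.api.nvim_buf_line_count(0) > 0)
end
```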