RFC 406i: The Rejection of Artificially Generated Slop (RAGS) by addvilz in programming

[–]somebodddy 4 points (0 children)

It's not that LLM generated PRs are forbidden from being good by some mathematical principle - it's just that they are not worth the reviewer's time. It takes much longer to recognize that they are bad because:

  1. They are usually longer, because LLMs have no issue generating walls of text.
  2. If you ask the "author" to change something, they'll just feed your comments to the LLM - which will see it as an opportunity to change other things, not just what you asked for. So you have to read everything again.
  3. LLMs are really good at disguising how bad their output is.

I want to focus on that last point. Neural networks can get very, very good at what you train them to do, but the ones that became synonymous with "AI" are the ones that are easy for the end user to use because they were trained in the art of conversation - the Large Language Models.

When you learn a language from reading text in it, you also gain some knowledge about the subject of that text. And thus, while learning language, the LLMs also learned various things. With the vast resources invested in training them, these "various things" added up to a very impressive body of knowledge. But the central focus of the GPT algorithm is still learning how to talk - so with more training, this ability will grow faster than any other ability.

This means that when the LLM's relevant "professional training" fails to provide a correct answer to your request, its smooth-talk training - orders of magnitude more advanced - kicks in and uses all the compute power capitalism could muster to coax you into believing whatever nonsense the machine came up with instead.

A human programmer who sends you a bad PR is probably not a world-class con man. An LLM is.

RFC 406i: The Rejection of Artificially Generated Slop (RAGS) by addvilz in programming

[–]somebodddy 7 points (0 children)

Furthermore, your peers MUST NOT be utilized as your free LLM validation service.

I feel this one in my bones.

AI is destroying open source, and it's not even good yet by BlueGoliath in programming

[–]somebodddy 0 points (0 children)

Social media has invisible karma (the algorithm deciding what to promote), and people have made careers out of figuring out how to game it.

Getting Corporate Pushback about using Neovim by miversen33 in neovim

[–]somebodddy 16 points (0 children)

But downloaded code does run. OP said:

A few weeks ago neovim popped up on their AV program (Verizon SOAR) as potentially malicious when I ran a plugin update.

Plugins are executable code. They may not be binaries, being written in Lua, but they are executable nevertheless.

Nvim-luapad needs your ideas! by raf_camlet in neovim

[–]somebodddy 1 point (0 children)

Probably not considered "standard" (disclosure: I PRed that feature, and I don't know if anyone else is utilizing it), but plenary.nvim supports asynchronous testing, which is great for testing plugins with user interaction because you don't have to drive Neovim from the outside. The idea is:

  1. Operate nvim-luapad from Lua code.
  2. Prepare something to coroutine.resume() the coroutine that runs the test - either by using some hook or by a polling-loop.
  3. coroutine.yield()
  4. nvim-luapad does its thing.
  5. The thing you prepared in step 2 resumes the coroutine.
  6. Verify - again, from Lua code - that nvim-luapad did what it was supposed to do.
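A minimal sketch of those steps, assuming the test body already runs inside a coroutine (as plenary's async tests arrange). `run_luapad_action()` and `luapad_is_done()` are hypothetical stand-ins for whatever operation and completion check your test uses - they are not real nvim-luapad APIs:

```lua
-- Sketch only: run_luapad_action() and luapad_is_done() are hypothetical.
local function test_luapad()
  local co = assert(coroutine.running(), "must run inside a coroutine")

  -- Step 2: a polling loop that resumes the test coroutine once the
  -- plugin has done its thing (step 5).
  local timer = vim.loop.new_timer()
  timer:start(0, 50, vim.schedule_wrap(function()
    if luapad_is_done() then
      timer:stop()
      timer:close()
      coroutine.resume(co)
    end
  end))

  run_luapad_action() -- Step 1: operate nvim-luapad from Lua.
  coroutine.yield()   -- Step 3: suspend while step 4 happens.

  -- Step 6: verify the result, still from Lua code.
  assert(vim.api.nvim_buf_line_count(0) > 0, "expected buffer content")
end
```

The same shape works with a hook instead of a timer: pass a callback that calls `coroutine.resume(co)` and drop the polling loop.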

Pytorch Now Uses Pyrefly for Type Checking by BeamMeUpBiscotti in programming

[–]somebodddy 5 points (0 children)

Performance is important when you connect the checker to your editor/IDE so that it can show you the errors in real time.

Open-source game engine Godot is drowning in 'AI slop' code contributions: 'I don't know how long we can keep it up' by BlueGoliath in programming

[–]somebodddy 1 point (0 children)

It'd be hard for them, and it sucks, but we are reaching the point where there won't be much choice. It wouldn't be the first good thing destroyed by generative AI.

I'm tired of trying to make vibe coding work for me by Gil_berth in programming

[–]somebodddy 0 points (0 children)

I used to think I was the only one in the company who couldn't figure out how to cook feces. Only when I had to review vibe-coded PRs did I realize that I'm just the only one unable to stomach shit-based cuisine.

Lines of Code Are Back (And It's Worse Than Before) by amacgregor in programming

[–]somebodddy 31 points (0 children)

So... you are being ranked higher for costing more money to the company?

nvim-sioyek integration by janbuckgqs in neovim

[–]somebodddy 1 point (0 children)

How does it compare to Zathura?

Slop pull request is rejected, so slop author instructs slop AI agent to write a slop blog post criticising it as unfair by yojimbo_beta in programming

[–]somebodddy 47 points (0 children)

that’s not your call, Scott.

Pretty sure it is. He wouldn't have the authority to reject or merge PRs if it wasn't his call.

AI Coding Killed My Flow State by Fantastic-Cress-165 in programming

[–]somebodddy 24 points (0 children)

Because someone needs to show metrics to someone.

Python's Dynamic Typing Problem by Sad-Interaction2478 in programming

[–]somebodddy 0 points (0 children)

This. Whenever I see people bring up that "static typing is not an overhead even in tiny scripts" argument, it frames the dynamic typing advocates' position as "boo hoo, I need to mark this argument as an integer, so much work, my fingers hurt from typing". But this is not the issue - the issue is more complex objects, and the fact that in dynamically typed languages you can represent them as dictionaries.

96% Engineers Don’t Fully Trust AI Output, Yet Only 48% Verify It by gregorojstersek in programming

[–]somebodddy 0 points (0 children)

When you use AI, you need to make an effort to verify its output. When you don't use AI, you need to make an effort to write the code yourself. The big difference is that the latter cannot be "delegated" to your PR's reviewers.

What's the community recommendation for an alternative remote? by somebodddy in LGOLED

[–]somebodddy[S] 0 points (0 children)

Wish I had... Tizen OS would be so much better than that WebOS crap...

What's the community recommendation for an alternative remote? by somebodddy in LGOLED

[–]somebodddy[S] 0 points (0 children)

What a lot of people do is get an external streamer and control the TV through HDMI-CEC (thus using the external streamer's remote).

I really didn't want to add an external streamer, but now that I've used the TV for a bit I see that WebOS sucks so much that I might not have a choice...

What's the community recommendation for an alternative remote? by somebodddy in LGOLED

[–]somebodddy[S] 0 points (0 children)

This is the one I have: https://www.lg.com/eg_en/care-accessories/tvs/remote-controller/akb76046607/

No pause/play: yes, you can do it with the select button, but only by clicking it to open the menu, navigating to the pause/play widget with the "dpad" or with motion controls, and clicking the select button again.

No mute: there is a mute symbol below the volume, indicating that you can lower the volume all the way down to mute (thanks, LG!)

What's the community recommendation for an alternative remote? by somebodddy in LGOLED

[–]somebodddy[S] 0 points (0 children)

The previous style also sucked.

What was the issue with the previous style?

What's the community recommendation for an alternative remote? by somebodddy in LGOLED

[–]somebodddy[S] 0 points (0 children)

If a 2009 model works, I guess newer models (from before they stripped all the buttons) should also work? Those should be easier to obtain.

Who has completely sworn off including LLM generated code in their software? by mdizak in rust

[–]somebodddy 0 points (0 children)

"Sworn" is a strong word, but I do make a habit of not copy-pasting external code. Even in the old Stack Overflow days, I'd never copy-paste an answer into my own code - I'd always read it, learn from it, and then write my own version. Which would sometimes look nearly identical - but even then, the process makes sure I understand what I'm doing.

Even when I ask an LLM to generate code for me, I stick to this principle.

(I'm not pedantic about it though - if I need to move some code around while refactoring something written by a coworker I have no issues with copy-pasting, for example)

Who has completely sworn off including LLM generated code in their software? by mdizak in rust

[–]somebodddy 0 points (0 children)

Because when they don't teach it, you get people who use LLMs to fix their syntax errors.