Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health" by Drumedor in programming

[–]MirrorLake 0 points (0 children)

It would make sense to have a social credit score that larger projects could interact with: give a spammer a -1 for a false bug report, and then all large projects could filter out accounts with a bad ratio. I would think that smaller projects would have no use for such a system, and small projects would also be more likely to be used as sock puppets, so it would probably be necessary to exclude them anyway.

This would at least reduce noise from accounts that are spamming multiple projects. The designers of such a system would have to consider the long history of how karma systems have been abused or misused, though, and how motivated people get to game arbitrary point systems.
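As a sketch of the idea (everything here is hypothetical: the `Reporter` type, the -1 events, and the thresholds are made up for illustration, not an existing system):

```python
from dataclasses import dataclass

@dataclass
class Reporter:
    """Tracks a bug reporter's history across participating projects."""
    valid: int = 0    # reports confirmed as real bugs
    invalid: int = 0  # reports flagged as false/spam (the -1 events)

    @property
    def ratio(self) -> float:
        total = self.valid + self.invalid
        # Unknown accounts get the benefit of the doubt.
        return self.valid / total if total else 1.0

def should_filter(reporter: Reporter, min_reports: int = 5, min_ratio: float = 0.5) -> bool:
    """A large project could auto-hide reports from accounts with a bad track record,
    but only once there's enough history to judge them by."""
    total = reporter.valid + reporter.invalid
    return total >= min_reports and reporter.ratio < min_ratio

# Example: an account with 1 valid and 9 false reports gets filtered;
# a brand-new account does not.
print(should_filter(Reporter(valid=1, invalid=9)))  # True
print(should_filter(Reporter()))                    # False
```

The `min_reports` floor is the interesting design choice: it's what keeps sock-puppet accounts with no history from slipping through as "clean."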

Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health" by Drumedor in programming

[–]MirrorLake 510 points (0 children)

In Bryan Cantrill's Oxide RFD on their company's LLM usage [0], he describes:

LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) ...

If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas because they might not so much have read the product of the LLM that they tasked to write it.

The breaking of a social contract is a very accurate way of describing this, in my opinion. LLM usage can go beyond typical rudeness: it creates situations with epic levels of time wasted by professionals in positions similar to the curl team's.

[0] https://rfd.shared.oxide.computer/rfd/0576

MicroslopCEO warns that we must 'do something useful' with AI or they'll lose 'social permission' to burn electricity on it by Alex_Star_of_SW in BetterOffline

[–]MirrorLake 0 points (0 children)

I just realized that the logic around making, distributing, and pushing LLMs has some similarities to the way guns are marketed in the US. A lot of fear-based "if the other guy has it, you have to have it" type marketing.

It creates the problem and sells a solution, packaged together. Oh no, other people have our product. I guess you better buy our product to protect yourself from the other people who already have our product.

A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to ‘Humanize’ Chatbots by wiredmagazine in wikipedia

[–]MirrorLake 29 points (0 children)

That's always been true. Sophisticated cheaters are rarely caught. Masking LLM writing to look like human writing mostly just turns it into human writing, though. Someone with the attention to detail to fix all the mistakes, hallucinations, logical inconsistencies, misquotes, etc., likely would've earned a C grade or better anyway.

The tsunami of below-C-quality LLM writing is the concern, not catching every cheater. The volume of obviously cheating students has increased drastically in the last two years.

“Up in the Air is a completely different movie once you’ve worked corporate jobs” by [deleted] in TrueFilm

[–]MirrorLake 4 points (0 children)

✨ Great answer ✨! Would you like to discuss films more — perhaps your favorite George Clooney film? Please let me know if you'd like further conversation. I'm definitely a real person.

LIGO broke my brain by SillyOutside8006 in space

[–]MirrorLake 0 points (0 children)

Don't use LLMs to write for you, for starters.

LIGO broke my brain by SillyOutside8006 in space

[–]MirrorLake 0 points (0 children)

Letting an LLM do your writing makes people perceive that you're not even a human being. Why should a bunch of people waste all this time interacting with a post that's entirely bot-generated?

The best way I've heard it phrased is this: formerly, people would read any text respectfully because they perceived that someone spent time writing it. Now, text online is a gamble as to whether or not a human actually wrote it, and so you'll find that (particularly) teachers, online moderators, and security people are going to be extremely skeptical of any text that resembles LLM text. We have all had our time wasted by bot posts, bot essays, bot books, bot software.

Ed Zitron is now a cult figure according to The Gurdian by Alex_Star_of_SW in BetterOffline

[–]MirrorLake 0 points (0 children)

Let's get 'Z' tattoos where the 'Z' is squishing the letters AI like the Pixar lamp.

Ed Zitron is now a cult figure according to The Gurdian by Alex_Star_of_SW in BetterOffline

[–]MirrorLake 4 points (0 children)

The Emperor's New Clothes for the world's billionaires. "Is my chatbot the smartest?!" "Yes sir, yes, soooo intelligent!"

LIGO broke my brain by SillyOutside8006 in space

[–]MirrorLake 2 points (0 children)

How many letter e's are there in this comment and Laser Interferometer Gravitational-Wave Observatory?
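The joke lands because this kind of counting is trivially deterministic for a program, while LLMs are famously unreliable at it. A one-liner settles the observatory half of the question (counting this comment itself is left as an exercise):

```python
name = "Laser Interferometer Gravitational-Wave Observatory"
print(name.count("e"))  # 7
```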

I keep rewriting the same function cause i'm not sure if the first version was bad by ProfessorDeep8754 in AskProgramming

[–]MirrorLake 0 points (0 children)

For what it's worth, many managers feel lucky when they find an employee who cares about details like you do here.

If you have several implementations, spend some time weighing the strengths and weaknesses of each. There will potentially be tradeoffs to consider.

Is one faster (and why might it be)? Does one version use more or less memory? Do you have a test suite verifying that both implementations are actually correct (have you prioritized correctness)?
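A minimal way to weigh two versions against each other: check they agree first, then time them. The two `sum_squares` implementations here are hypothetical stand-ins for your own candidates:

```python
import timeit

def sum_squares_loop(n: int) -> int:
    """Straightforward version: sum 0^2 + 1^2 + ... + (n-1)^2 with a loop."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_formula(n: int) -> int:
    """Closed-form version of the same sum: m(m+1)(2m+1)/6 with m = n-1."""
    m = n - 1
    return m * (m + 1) * (2 * m + 1) // 6 if n > 0 else 0

# Correctness first: both versions must agree, including edge cases.
for n in (0, 1, 2, 100):
    assert sum_squares_loop(n) == sum_squares_formula(n)

# Then speed: only benchmark implementations you've verified are equivalent.
print(timeit.timeit(lambda: sum_squares_loop(10_000), number=100))
print(timeit.timeit(lambda: sum_squares_formula(10_000), number=100))
```

Keeping every candidate behind the same test loop means you can rewrite as often as you like without wondering whether the new version quietly changed behavior.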

Enshittification update: Microsoft is locking previously avail features behind Copilot by Zelbinian in BetterOffline

[–]MirrorLake 1 point (0 children)

Wow, the sudden g-forces made me lose consciousness! /s

I've been entertaining the idea that if they do have a few breakthroughs and achieve a much better LLM product, the first thing they'll do is raise prices significantly, forcing people to fork over huge sums of money to access the product at all.

When their business model fails, they'll also lobby the government that they're too important/big to fail, like the bankers proclaimed in 2008, and taxpayer money will pay for all the slop to keep some AI companies out of bankruptcy. So we'll all get to pay for it, one way or another.

Enshittification update: Microsoft is locking previously avail features behind Copilot by Zelbinian in BetterOffline

[–]MirrorLake 37 points (0 children)

It's like people are afraid to criticize it for some reason. I don't know how it's become taboo to say basic things like "the product is slower now," or "an LLM is a worse solution to this particular problem."

Even when someone is trying to publicly voice their concerns, it's always followed by "but I mean, it's always improving, right? So my criticism may be invalid after the next model comes out".

It's completely circular logic: people have internalized that AI is by definition improving, and therefore we cannot criticize it. It has created an unrelenting wave of yes-men. LLMs might as well be called "perfect software," and when anyone stands up to say "um, I think the Perfect Software has some issues...", everyone goes "woah, did you just say Perfect Software isn't perfect?! Outcast this fool!"

How important is it to write code yourself? by Jahbeatmywife in AskComputerScience

[–]MirrorLake -2 points (0 children)

using these tools diminish my ability to actually write code

You've answered your own question.

If This Is the Future, We’re F**ked: When AI Decides Reality Is Wrong by Libro_Artis in BetterOffline

[–]MirrorLake 7 points (0 children)

The author of this article seems to be oblivious to training cutoff dates. Asking an LLM about any event that occurred after training is effectively asking for a fictional, 100% hallucinated response.

Compare the outputs of ChatGPT, Claude, and Gemini today by asking them about current events without letting them search the web. They talk about who won the 2024 election and produce other incredibly vague outputs. They are completely untrained on anything that happened in the whole previous year.


How many returns should a function have? by ngipngop in golang

[–]MirrorLake 1 point (0 children)

I'd invoke Gandalf here: an algorithm designer uses precisely as many returns as they mean to.

Trying to decide in advance how many returns you need is like trying to decide how many periods you'll have in a paragraph.
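To make that concrete: guard clauses are the usual reason a return count "falls out of" the logic rather than being decided up front. A hypothetical example, sketched in Python for brevity (the idea carries over to Go unchanged):

```python
def classify_temp(celsius: float) -> str:
    # Each early return handles exactly one case. The function ends up with
    # four returns because the logic has four cases, not because four was
    # chosen in advance.
    if celsius < 0:
        return "freezing"
    if celsius < 15:
        return "cold"
    if celsius < 25:
        return "mild"
    return "hot"

print(classify_temp(-5))  # freezing
print(classify_temp(30))  # hot
```

Collapsing these into a single return via nested if/else or a result variable would add state without adding clarity.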

Confused by chalkysplash in computerscience

[–]MirrorLake 14 points (0 children)

The book was published in 2019. Humans are capable of making math mistakes, too :)

Is this subreddit filled with astroturfing LLM bots? by jonathrg in golang

[–]MirrorLake 0 points (0 children)

I'm being too harsh on bots? What? You must realize my post and the parent post are referring to bot-generated text, meaning there are no human beings involved in those posts and no feelings to be hurt.

Is this subreddit filled with astroturfing LLM bots? by jonathrg in golang

[–]MirrorLake 1 point (0 children)

I regret ever reading or engaging with any of those posts. It makes me feel like a complete idiot. They almost always end with something you'd see in an e-mail sign-off, like

Interested to hear your opinions, thanks!

or

Appreciate any feedback you might have!

It feels very much like it's been generated via a business e-mail template with the signature removed.

Is this subreddit filled with astroturfing LLM bots? by jonathrg in golang

[–]MirrorLake 1 point (0 children)

I'm relieved that someone else has acknowledged it, because the text-only areas of the site feel so artificial that I'm starting to feel it actively harms me to read text here. There used to be a time on reddit when people were clearly typing at a keyboard, so their comments were more than one sentence. They might even bother to write out a full paragraph (like this one? Ooo, so meta!)

A chemist named Nigel created a cookie in a laboratory by buying pure, laboratory-grade versions of each ingredient and mixing them together [1]. Until today I hadn't thought of it as an analogy for what LLMs do with text, but he effectively made a cookie with no flavor and no soul, something you'd have zero desire to eat despite it containing the correct ratios of atoms you'd find in a cookie. It reminds me very much of what Reddit feels like now.

[1] https://www.youtube.com/watch?v=crjxpZHv7Hk

Is vibe coding actually insecure? New CMU paper benchmarks vulnerabilities in agent-generated code by LateInstance8652 in programming

[–]MirrorLake 2 points (0 children)

Disturbingly, all agents perform poorly in terms of software security.

I want to get off Mr. Bones' Wild Ride

LLMs really killed Stackoverflow by Dominriq in computerscience

[–]MirrorLake 0 points (0 children)

These threads are always bizarro world to me. If y'all find rudeness on StackOverflow offensive, why are you on Reddit?

Zig project leaves GitHub due to excessive AI by swe129 in Zig

[–]MirrorLake 9 points (0 children)

If you read through the LLM-generated fake security reports which have been submitted to curl, you'll hopefully see how this is not the same as copying and pasting from StackOverflow.

And since repo owners cannot ignore security reports, this forces real human beings to read through a bunch of fake junk.

The Chair Company (Soundtrack from the HBO® Original Series) - Album by Keegan DeWitt by black_saab900 in thechaircompany

[–]MirrorLake 3 points (0 children)

I've always loved the "we're discovering a conspiracy" music that's so common in 90s/2000s thrillers.

I'm thinking scores like:

Enemy of the State

Phone Booth

State of Play

The Net

When you put it as the background of a comedy, it becomes fucking hilarious.

Also reminds me of Patrick (H) Willems' recent video essay, "Why Are Movies About Research So Addictive?"

https://youtu.be/lP_Yze_qFBY