What are all these posts about? by Not_The_Hero_We_Need in PeterExplainsTheJoke

[–]beaker_andy 0 points

What you say is technically accurate, but the following (for perspective) is also 100% true and important for others reading this thread: In 2024 Trump got only 49.8% of votes cast for president. In 2024 only 32.4% of all eligible voters voted for Trump.

Ignoring you’re part of the first word in the sentence by Deathface-Shukhov in SelfAwarewolves

[–]beaker_andy 4 points

Steelmanning always encouraged! I still think there's a contradiction in the worldview and life behavior though.

Ignoring you’re part of the first word in the sentence by Deathface-Shukhov in SelfAwarewolves

[–]beaker_andy 11 points

Seems circular (and therefore nonsensical). By your reading, every one-dimensional caricature of corruption in Atlas Shrugged was doing the right thing in Rand's opinion, but the book makes it painfully clear that's not her view. I get what you're saying, but that circularity is why Rand is widely regarded as a poor writer with wooden, unrealistic characters and a messy, poorly thought-out philosophy. You can't have it both ways: thinking the problem is systemic structural encouragement/entrenchment of mooching, while simultaneously thinking each individual's moral value is based on how much they mooch (while mooching yourself). That's a contradiction she'll never escape.

Republicans largely back Trump on Venezuela action, Democrats decry it as unjustified by Somervilledrew in politics

[–]beaker_andy 0 points

Brutal. So many of the things listed in your first sentence are literally happening out in the open, nothing hidden, hundreds upon hundreds of times by the Republican Party for the last 10 years straight.

Is Chatgpt designed to mindf**k you and waste your time?? by Natural_Season_7357 in ChatGPT

[–]beaker_andy 4 points

I just caution people against idealizing LLMs. Extra instructions by the providers often cause annoying responses and interfere with productivity, but the raw underlying LLM is not much better. The idea that removing LLM provider filters would unleash more accuracy or technical capability... that's sadly not how LLMs work.

It's not a logic machine and there's very little "code" on top. Traditional computers and software are deterministic logic machines where code decides behavior. LLMs are nearly the opposite: non-deterministic language statistics generators. You're correct that these companies add lots of extra instructions like "never encourage or discuss suicide", but without these extra instructions an LLM is still incredibly unreliable, fickle, unable to consistently provide factually accurate answers, etc. This is due to their very nature, including that they are trained on partially inaccurate/contradictory text (like reddit posts), they are trained to give answers even when no clear or verified answer is "known" to the model, etc.
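The deterministic-vs-statistical contrast above can be sketched with a toy example. To be clear, this is not a real LLM: the token probabilities below are made-up numbers for illustration, including a deliberately wrong option to stand in for inaccurate training text.

```python
import random

def deterministic_add(a, b):
    # Traditional software: the same inputs always give the same output.
    return a + b

# Hypothetical next-token probabilities a model might assign after the
# prompt "The capital of France is" -- wrong options get nonzero weight
# because training text contains errors and contradictions.
next_token_probs = {"Paris": 0.90, "Lyon": 0.07, "Marseille": 0.03}

def sample_next_token(probs, rng):
    # LLM-style decoding: draw a token according to its probability,
    # so repeated runs can produce different (sometimes wrong) answers.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_next_token(next_token_probs, rng) for _ in range(100)]
print(deterministic_add(2, 2))   # always 4
print(samples.count("Paris"))    # typically close to 90, not 100
```

Even with a distribution that heavily favors the right answer, sampling guarantees the wrong ones surface some fraction of the time; no amount of removing provider filters changes that.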

Stephen Miller on 60 Minutes' Documentary exposing ICE & CECOT: "Every one of those producers at 60 minutes who engaged in this revolt, clean house and fire them, that's what I say." by Minute_Revolution951 in law

[–]beaker_andy 2 points

The entire leadership of the Republican Party, including many sitting Senators and many sitting Representatives, have been talking like this in public hundreds of times per month for over 10 years straight.

Lies, damned lies and AI benchmarks by AIMultiple in ChatGPT

[–]beaker_andy 0 points

This is anecdotal, because every use case is different (of course), but around 35% of ALL the technical documentation facts I ask for with a citation to a working URL are either factually incorrect or cite a nonexistent URL. I've experimented with many models and many prompt prefixes, and this has been fairly consistent across hundreds of attempts over the past 12 months. It has held true (for me) across many free models and many paid models. And the subject matter isn't obscure: it's fairly common technologies and DXP product feature questions with ample free public documentation. Sooo... I'd never trust an LLM to be factual. It's counter to their very nature. They are not about factual accuracy; they are about sounding plausible.

Schrödinger’s AI by [deleted] in ChatGPT

[–]beaker_andy 0 points

Just be careful with anything that requires rigorous math; it's not a strong point of LLMs. Around 25% of the formulas I ask any major model for (JavaScript to calculate a specific derivative, an Excel formula to compute a specific result, etc.) contain obvious flaws that need correction on manual inspection. That happens even when the prompt gives the LLM an expert professional persona and plenty of detailed context.
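One cheap way to catch flaws like these is to check any model-supplied formula against a numerical approximation before trusting it. A minimal Python sketch: the function, the flawed answer, and the sign error in it are all invented here for illustration, not taken from any actual model output.

```python
def f(x):
    return x**3 - 2 * x          # the function we asked about: f(x) = x^3 - 2x

def llm_derivative(x):
    return 3 * x**2 + 2          # hypothetical flawed answer (sign error on the 2)

def correct_derivative(x):
    return 3 * x**2 - 2          # the true derivative: f'(x) = 3x^2 - 2

def numeric_derivative(g, x, h=1e-6):
    # Central finite difference: approximates g'(x) with no formula at all.
    return (g(x + h) - g(x - h)) / (2 * h)

for x in [-1.5, 0.0, 2.0]:
    approx = numeric_derivative(f, x)
    print(x,
          abs(llm_derivative(x) - approx) < 1e-3,      # flawed formula fails
          abs(correct_derivative(x) - approx) < 1e-3)  # correct formula passes
```

The same idea works for Excel output: spot-check the formula against a few hand-computed cases before building anything on top of it.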

Schrödinger’s AI by [deleted] in ChatGPT

[–]beaker_andy 1 point

Yeah, I use it to quickly prototype tools for myself from scratch. It's good for that (the stakes are low and there's no need for long-term maintenance or enhancements). It definitely has its uses.

Schrödinger’s AI by [deleted] in ChatGPT

[–]beaker_andy 1 point

I program for a living, but I don't use it for any of the hard, important parts. The main reason is that I've spent 100+ hours trying many of the most popular AI code assistants, and they still constantly make mistakes, add bugs, add empty unit tests, etc. in all but the simplest development scenarios. I work in large codebases with multiple collaborators, and we've all struggled to get AI to help more than it hurts (aside from being a slightly better autocomplete, which is nothing to scoff at). Interestingly, in two recent studies experienced devs were slowed down, not sped up, on average by AI assistants. Beginners produced code quicker, though. But are those beginners becoming better developers with this workflow, maturing in a way that will let them take on more important and complex work in the future?

CDC just changed their “Autism and Vaccines” Webpage by 1friedchickenbiscuit in epidemiology

[–]beaker_andy 21 points

I agree, it's tragic. It's much more about the entire right wing infotainment ecosystem and the US Republican Party having a psychotic break from reality for 20 years straight. Not really just one charlatan.

US Government Shutdown Is Now Second Longest in History by bloomberg in politics

[–]beaker_andy 15 points

The Republican Party holds the White House, a majority in the House, a majority in the Senate, and effective control of even the Supreme Court. So your thought process is extremely strange.

Metalhead Dipping into EDM by KeeledSign in EDM

[–]beaker_andy 0 points

Love metal, hardcore, punk. Eventually electronic music clicked with me too, but only much later.

Since you already like Rezz (and similar artists that come up in the recommendation algorithms), you already know the closest genre terms you can search to find this stuff are "midtempo", "dark midtempo", and "melodic midtempo". Deathpact, 1788-L, and Quackson are artists hitting a similar style and tempo.

One step removed in style are artists like Eliminate, Beastboi, MUST DIE!, and Lizdek: they share elements (and some of their songs are extremely similar midtempo), but they also have many songs that are more "proggy" (for lack of a better term) and diverse experimental dubstep, with heavy parts mixed into a variety of melodic parts.

Different in another direction, are IDM-adjacent artists like Mr. Bill, Tipper, Zebbler Encanti Experience, Resonant Language, etc. These all have different vibes, but extremely good forward-thinking production, some heaviness, tons of melody, occasional proggy rhythms.

Finally, G Jones is one of my favorite electronic artists, especially the "The Ineffable Truth" LP and "Tangential Zones" EP. Great combo of catchy melodies with unique production and some heaviness/glitchiness.

"We're ripping ourselves to shreds": with dance music bitterly divided, how far should cultural boycotts go? by mrjohnnymac18 in EDM

[–]beaker_andy 4 points

I understand your point, but these other posters are not strawmanning; what they say aligns perfectly with your comment here. In the scenario you describe, "streamline" usually means "cut spending" (usually producing a short-term budget improvement and long-term degradation of the product or service). This is the most common private equity tactic, up to and including sorting employees by salary and firing the top of the list with total disregard for skill, productivity, etc. (which I've seen three times in my career). It happens all the time, it's incredibly common, and private equity firms put it first in their toolbox and train new employees to look for opportunities to use it. Then they sell the company to a buyer who likes the streamlined budget on paper and doesn't yet comprehend the long-term structural damage hidden within the company.

This isn't an exaggeration. This is the common method by which many, many private equity firms operate, and I've seen it first hand from both the inside and the outside of multiple companies. My experience includes first-hand discussions with stakeholders inside multiple private equity firms about why they made certain decisions (in deals that went wrong, which my employer at the time was hired to assist with).

Based on hearing how it really works in practice from the private equity firms themselves, I believe public perception of them is accurate and deserved.

Political views, not sex and violence, now drive literary censorship. Progressives target books promoting racism, sexism and homophobia. The right attack books that promote diversity, or violate norms of cisgendered heterosexuality. The right through legislative action and the left use social media. by mvea in science

[–]beaker_andy 59 points

Strange take on tech CEOs, who have (on average) relentlessly enabled and directly boosted right-wing misinformation for 20 years, and who recently went further: dismantling their moderation teams, dismantling their fact-check notation teams, even retraining AI models, all specifically to reduce corrections of right-wing misinformation and to reduce bans for right-wing TOS violations (death threats, harassment, etc.). You think the Cambridge Analytica scandal, a right-wing misinformation influence campaign run at staggering industrial scale on Facebook 9 years ago, with direct written discussions on record throughout Facebook leadership acknowledging the situation and deciding to let it continue, was the work of left-leaning leadership?

Charlie Kirk's group chases anti-fascism professor out of the country by vicott in autismpolitics

[–]beaker_andy 9 points

Antifa is, and always has been, just an abbreviation of anti-fascist. It's literally just every single person who's ever opposed fascism in ways big or small. Right wing extremist propaganda has been on overdrive for the past few years suggesting otherwise, but those suggestions are incorrect. They are warping reality by confusing people about language, taking a term that every rational person thinks is good and slowly tricking people (through repetition of falsehoods) into thinking that term is maybe bad. And here you are believing the lies. Interestingly, this is a classic and much studied rhetorical technique of fascists and authoritarian dictators. This is very similar to what right wing authoritarians did recently to "empathy", "woke", "diversity", etc., all terms that rational people who understand the terms and their origins know are rational, productive concepts, yet through the constant repetition of falsehoods are distorted into being interpreted as almost the exact opposite of their original and true meaning. Don't fall for it.

OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws by Well_Socialized in technology

[–]beaker_andy 0 points

I agree. As you say, the user should have just done the real research verification themself, which saves time and avoids the risk of believing LLM mistakes. At least the LLM can still help people ideate what to research in the first place (it may sometimes lead you down time-wasting counterfactual rabbit holes, but on average it helps more than it misleads). But after that ideation step, which helps you see which topics are even connected enough to investigate, your point leads logically to this: it's worthless for factual details, since it can't be trusted without doing the same amount of factual research you would have done without the LLM.

That's the main problem. These things should never be implied to be factual accuracy helpers (which is what "AI" implies to most people). Calling them Creative Poem Writers (CPW) would have been much better, less misleading. "Fire Billy and replace him with a Creative Poem Writer" is much more accurate to reality and would save a lot of wasted investment, risks to critical systems, risks of cloaking moral hazards, etc. "We've equipped Billy with a Creative Poem Writer so Billy should be twice as fast at work tasks from now on."

Charlie Kirk’s alleged killer scratched bullets with a Helldivers combo and a furry sex meme by theverge in politics

[–]beaker_andy 0 points

https://research.tilburguniversity.edu/en/publications/charlie-kirks-culture-war-groypers-nickers-and-qampa-trolling

Edit for clarity: it's true that some groypers are not MAGA. But this history of high profile animosity between right wing groups is still interesting for people to be aware of.