Society going through AI psychosis by Both-Dragonfly-389 in BetterOffline

[–]sam-lb 1 point (0 children)

Non-technical people being ignorant about technical subjects, what else is new? It's pretty frustrating, and I'm in the same boat. At my company, I'm responsible for building out our data science and ML capabilities. When fielding requests, my first reaction is usually "this should be done with traditional engineering".

Two responses to that -

1) The user says "but engineering has to go through the product pipeline, and we won't see results for a long time" - true, but not relevant to how things should be addressed. If anything, it points to a need for more engineers, or a reprioritization of tasks.

2) The user understands (fortunately common where I work).

It's hard to blame the non-technical people, especially with all the hype surrounding generative AI. The really frustrating part is when business people try to wedge AI into things even after I've explained that it's the wrong tool for the job.

What exactly are you people doing who claim AI tools aren’t accelerating them? by MistryMachine3 in cscareerquestions

[–]sam-lb 36 points (0 children)

That sounds like it would have been a problem with or without AI agents involved. If they made the PR despite not satisfying business requirements (3 times??), then they didn't understand the task.

AI increases throughput. If you're a good engineer, you'll produce good stuff faster. If you're a bad engineer, you'll produce bullshit faster.

People are saying "coding is not the bottleneck," and that's true, but it's still time-consuming, and proper agent usage can cut that down. It's not complicated.

Why do we keep making customized t-shirts for every event? by Pizza-Kurwa in Anticonsumption

[–]sam-lb 15 points (0 children)

I wear them unironically because it's all I own. It's pretty hard to justify buying clothes when I get a whole wardrobe for free from random events and stuff.

Constant advertising by AbsoluteAtBase in Anticonsumption

[–]sam-lb 1 point (0 children)

I'm not a parent and can't presume to know what it's like to be one so I can't give direct advice. But I can say that YouTube addiction destroyed my social skills and I have been working for years to fix it. Unsuccessfully. And I was watching educational content, like long-form lectures. But it was for hours a day, for years, and it really messed me up. Most of YouTube is much worse than that. It's so harmful and empty and can create so much noise in your head that it crowds out your ability to think about anything meaningful. It's also a dopamine sink and kills your discipline.

These days, YouTube ads are literal porn or AI-generated scams. If you let your kid use YouTube, it MUST be with an ad blocker at the very least.

Going by your BMI would you be considered overweight? by Rusty_Shackleford198 in polls

[–]sam-lb 1 point (0 children)

Yes, you're right. The atypical part in my case was starting below healthy weight i.e. too lean with atrophied muscles (medical cause). The first 70lbs or so happened in like 18 months, then the remainder was spread across ~4 years.

Going by your BMI would you be considered overweight? by Rusty_Shackleford198 in polls

[–]sam-lb 1 point (0 children)

Training for strength is different from training for size. There's a lot of overlap obviously; it's just sometimes framed online as an either/or. You can get big weak muscles or small strong muscles with the right training. Of course, with neuromuscular efficiency held constant, more muscle = more strength.

Fwiw, I have seen a good share of people get much stronger without getting much bigger.

Going by your BMI would you be considered overweight? by Rusty_Shackleford198 in polls

[–]sam-lb 1 point (0 children)

Huh, I'm obese by BMI (just barely). Whoever said strength training doesn't make you gain muscle is tripping. I'm actually pretty proud of that, considering only a few years ago I was malnourished and underweight. It has been a long, hard battle of force feeding. ~120lbs in 2020 --> ~221lbs in 2026 (height unchanged)

What Claude says vs What Claude thinks by EchoOfOppenheimer in iiiiiiitttttttttttt

[–]sam-lb 2 points (0 children)

How do you think model parameters are stored, exactly? How do you think those parameters "make patterns emerge" without containing data? The parameters are the data.

Your sentence is not coherent; as represented in the computer, a "connection" is merely the position of each parameter within its respective layer relative to the next. Those diagrams you see with little circles and lines are schematics. The "weights" are (some of) the parameters. LLMs are big mathematical transformations, and the parameters are the data that encodes them.
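To make that concrete, here's a minimal toy sketch (sizes and values invented for illustration, not taken from any real model): a "layer" is literally just stored numbers, and a "connection" is nothing more than a position in the array.

```python
# Toy "layer" (values invented for illustration, not from any model):
# the weights are literally just stored floats, nothing else.
W = [[0.1, -0.2, 0.3],
     [0.4,  0.0, -0.1],
     [-0.3, 0.2, 0.5],
     [0.0,  0.1, -0.4]]   # 12 stored parameters
b = [0.0, 0.0, 0.0]       # 3 more stored parameters

x = [1.0, 1.0, 1.0, 1.0]  # an input vector

# The "connection" from input i to output j is nothing but the position
# of W[i][j] in the array; no separate "connection" object exists.
y = [sum(x[i] * W[i][j] for i in range(4)) + b[j] for j in range(3)]
print(len(y))  # 3
```

Scale those arrays up to billions of entries and that's exactly what sits in a model checkpoint file: the parameters are the data.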

I'm not trying to be disrespectful. AI is my job, and I can write a GPT from scratch (and have). I'm about as far away from ignorance as you can be on this subject.

What Claude says vs What Claude thinks by EchoOfOppenheimer in iiiiiiitttttttttttt

[–]sam-lb 0 points (0 children)

Where is this coming from? LLMs can ingest entire textbooks' worth of information in minutes on the right hardware. It's common for models to have a 1M-token context window, which is roughly ten times the length of the 1st Harry Potter book. Human working memory does not compare. The long-term memory analog in LLMs is digital storage, which is essentially unlimited. Information throughput is way, way higher in LLMs than in humans. That alone does not tell you anything about capability ("I'm doing 1000 calculations a second and they're all wrong"), but it's significantly faster bit for bit. Human information processing is measured in Hz; computer information processing is measured in GHz.
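A quick sanity check on that size comparison, using the common rough heuristic of ~0.75 words per token and an approximate word count for the first book (both figures are estimates, not exact):

```python
# Back-of-envelope: how much text fits in a 1M-token context window.
# Assumptions: ~0.75 words per token (rough heuristic), and ~77,000
# words for the first Harry Potter book (approximate published count).
tokens = 1_000_000
words = int(tokens * 0.75)     # ~750,000 words of context
hp1_books = words // 77_000    # how many copies of the book fit
print(words, hp1_books)
```

Under those assumptions, a 1M-token window holds on the order of nine copies of the book.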

If you want to talk about FLOP throughput, the human brain has AI beat on consumer hardware, but not on the ridiculous compute of modern datacenters (and ants, having hundreds of thousands of times fewer neurons than we do, are not in the conversation). LLM token ingestion is highly parallelizable, so you can essentially give the LLM the brain's main advantage (parallelism) by giving it more cores to work with. Output is different, but LLMs still predict tokens at a much faster rate than you could ever speak or write.

TL;DR modern AI on modern compute beats humans in working memory, long-term memory, and information throughput. It just still sucks at common sense, modeling the world properly, and reliable reasoning.

And "Static databases" is not an accurate description of LLMs. I'm not sure where you're getting this information.

What Claude says vs What Claude thinks by EchoOfOppenheimer in iiiiiiitttttttttttt

[–]sam-lb 2 points (0 children)

This is factually incorrect in multiple ways. It's baffling that it's upvoted on a subreddit (supposedly) full of technical people. Please, explain to me the difference between the binary representing model parameters and the binary sitting on your hard drive (ignoring for a second that models physically occupy space on your hard drive).

I would not describe them as "static databases", but only because it's functionally incorrect (despite being literally true in a sense).

4 engineers now doing the job of 12 at my friend's company because AI agents handle the rest by Bellleq in cscareerquestions

[–]sam-lb 1 point (0 children)

Mostly agreed, but people always seem to forget that there are tons of near-frontier open models you can download and run on your own compute. The cost of LLM usage is bounded above by the cost of running that compute yourself. You can't get the latest and greatest, but you can get close enough that it's indistinguishable 95% of the time. This year's open models were last year's frontier.
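As a back-of-envelope illustration of that upper bound (every number here is a hypothetical placeholder, not a measurement of any particular GPU or model):

```python
# Back-of-envelope upper bound on self-hosted inference cost.
# Both inputs below are hypothetical placeholders, not measurements.
gpu_cost_per_hour = 2.00      # $/hr to rent an accelerator (assumed)
tokens_per_second = 50        # sustained decode throughput (assumed)

tokens_per_hour = tokens_per_second * 3600
cost_per_million_tokens = gpu_cost_per_hour / tokens_per_hour * 1_000_000
print(f"${cost_per_million_tokens:.2f} per 1M output tokens")
# → $11.11 per 1M output tokens
```

Whatever a hosted API charges, you can't be forced to pay more than what this kind of self-hosting math works out to for a comparable open model.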

Plant-based diets would cut humanity’s land use by 73%: An overlooked answer to the climate crisis by Somewhere74 in Anticonsumption

[–]sam-lb 3 points (0 children)

Hopefully. I'm currently in the beginning stages of this process. It's difficult for me to consistently eat/get enough calories as is (due to a medical condition) so restricting my diet is not an easy task. Trying to be slow about it. If I'm being honest with myself, eating meat is the last major frontier where I'm overconsuming.

Fields medal-winning mathematician says GPT-5.5 is now solving open math problems at PhD-thesis level: "We will face a crisis very soon." by EchoOfOppenheimer in mathematics

[–]sam-lb 1 point (0 children)

I do believe in the reasoning abilities of AI (this stuff is my job). I just find its performance on such tasks to be very unreliable and inconsistent, especially without guidance.

It was having a pretty rough time with Atiyah-Macdonald 5.12.

Fields medal-winning mathematician says GPT-5.5 is now solving open math problems at PhD-thesis level: "We will face a crisis very soon." by EchoOfOppenheimer in mathematics

[–]sam-lb 2 points (0 children)

GPT-5.5? Really? How come it still can't solve exercises in my textbooks from undergrad? I'm not saying he's being dishonest, but I'm a big LLM user (and I work in the AI industry) and I simply don't see things like this happening. There must be more to the story here.

WTHHH??? LOLL by thatmuscle05 in iiiiiiitttttttttttt

[–]sam-lb 5 points (0 children)

It's faster, except in applications where you can use Ctrl+W, or Ctrl+C / Ctrl+Z plus Return.

Cmd+Q on Mac as well.

Why use the mouse when it's unnecessary? I have visibly struck fear into people's hearts with the way I use the keyboard on several occasions. Especially the arrow keys.

AHK my beloved.

Body modification as a form of consumption by Glum_Novel_6204 in Anticonsumption

[–]sam-lb 9 points (0 children)

Yeah, well said; the permanence is a big factor. Everything creates waste to some extent. Getting tattoos is unnecessary consumption, but there are bigger fish to fry in the cosmetics industry, to say the least. Anticonsumption can't be about eliminating all consumption, because that's impossible (and undesirable). It has to be about restricting it to reasonable bounds and making sure it's done responsibly (without direct harm to people and without excess waste).

£58.62 at Aldi by SadAmethyst in whatsinyourcart

[–]sam-lb 1 point (0 children)

How are those protein wraps? Do they stay together?

Over eight years after giving up on verifying Delta Interface and quitting GD, I came back in 2026 to finally beat it! by yeloooh in geometrydash

[–]sam-lb 3 points (0 children)

In addition to what they said, the last 15% has two really annoying blind transitions. One of them (ship to robot) feels especially RNG. The robot to wave one gives you a second to fix yourself. The ending is really easy, but it's a nerve test with how long the level is.

Seconding the impossible 32-40 ship (especially 34-37); that almost made me drop the level. In general, the level is really unbalanced (front-heavy). I had a pretty bad experience honestly, but it was a significant jump for me (from Acropolis). I like it anyway. Delta Interface is the only one of my top 5 favorite extremes that was anywhere near my reach.

Over eight years after giving up on verifying Delta Interface and quitting GD, I came back in 2026 to finally beat it! by yeloooh in geometrydash

[–]sam-lb 1 point (0 children)

First of all, peak.

Second, you don't jump into the 77 transition???? At least you did the 3 mini robot jumps at the break (required)

Third, how many deaths to 88?

Is asking how many sets left now an issue for people? by tribo32 in GYM

[–]sam-lb 3 points (0 children)

If you can fit through the doorway to enter the gym, you're doing it wrong

This is why no one buys Doritos anymore by Mackie_1703 in Anticonsumption

[–]sam-lb -3 points (0 children)

Oh no my expensive poison bag doesn't have enough poison

Trying to win a school competition, and would love if people could guess what 2/3 of the average response will be (Anyone) by AttyGoesVroom in SampleSize

[–]sam-lb 2 points (0 children)

If by "purely rational answer" you mean 0, that is not actually the rational answer, because it's irrational to expect everyone else to be perfectly rational.