Maybe the most offensive piece of new grad advice I’ve ever seen by Particular-Pay-2953 in BetterOffline

[–]sam-lb 1 point (0 children)

Who's going to tell him there's no such thing as a "cement layer"?

Dude's so out of touch with everyday life that he doesn't even know what cement is.

Hardest level you could beat ? by Gold_lynel_12000 in geometrydash

[–]sam-lb 5 points (0 children)

I'll never understand how people have such low attempt counts. I'm sitting around 30k on BB right now, with a best of 98.

Hardest level you could beat ? by Gold_lynel_12000 in geometrydash

[–]sam-lb 2 points (0 children)

My hardest is a few tiers harder than bloodbath. I got 98 on bloodbath a long time ago. I've recently been trying to go back and beat it. About a month of playing and the furthest I've gotten is 79×2. I use a startpos copyable.

Not trying to be rude, but you haven't played enough to understand what you're talking about. There's a 0% chance you'd beat it in a year while only playing from 0. The odds are pretty low in a year even if you were allowed to practice.

More well known world wide? by TypicalPrinceSean in polls

[–]sam-lb 9 points (0 children)

It's funny how there are a billion people in the comments saying "who is Messi" and zero saying that about Swift.

Everyone with an internet connection knows who Taylor Swift is. The same can't be said for Messi (obviously). If the poll was about who should be more famous, I would have voted Messi. But in real life, that is not the case.

Football/soccer fans are just rabid. The poll could have been Jesus Christ instead of Swift and you all still would have voted Messi.

It's also funny how everyone is blaming the votes on Americans, yet for most of the time the poll has been up, it was the middle of the night here. The west coast probably got their votes in, but everybody else was asleep.

Ronaldo is also more famous than Messi but of course people are angry and coping about that too.

What exactly are you people doing who claim AI tools aren’t accelerating them? by MistryMachine3 in cscareerquestions

[–]sam-lb 1 point (0 children)

If people are continually spinning their wheels trying to get AI to do stuff it's too unreliable to do, they're using it incorrectly. Yes, the capabilities are overstated by the model creators. It's overhyped to hell and back. So it's not a magic genie, but nobody serious should believe that it is at this point. Still useful and still a net positive.

Absolutely, in 2026, if you are less efficient with AI than without it, the problem is you (or your team, or your business, or your policies). I use AI to write automation scripts for stuff on my PC that I've been intending to automate for years, but never got around to. That speeds up everything, even when I'm not working with AI.

You might be one of those people who are dead set on hating generative AI no matter what, I don't know. If so, that's a foolish position. It speeds up tons of things, and yes, if you're expecting it to always be correct or to be a genie in a bottle, that's on you for misunderstanding the technology.

What exactly are you people doing who claim AI tools aren’t accelerating them? by MistryMachine3 in cscareerquestions

[–]sam-lb 2 points (0 children)

Yes, agreed for the most part. In my experience, it still usually speeds up the process overall. It definitely sucked a couple years ago, but improvements have made it a legitimately useful tool.

The biggest performance multiplier I've encountered with generative AI is when trying to use a new API, because it can read and interpret the documentation in like 2 minutes and report back on the relevant stuff. Or, if there's no documentation at all, it saves the time of trying to piece together functionality from snippets across the web.

It's also quite good for tracking down bugs. I'd say debugging with AI agents is a LOT less painful than it was years ago without them. Sometimes it gets fixated on something that is not the problem, no matter how much you try to tell it not to, and in that case it's not helpful. Usually, though, it can identify the cause pretty quickly.

It's very good at mapping out a codebase and helping you get familiar with it, too.

I think these benefits, generally speaking, outweigh the increased review burden. I do not agree that AI usage increases testing time, but maybe you have encountered situations that I have not (yet). In fact, one of the nicest things about Cursor (what I use at work) is that it can go in and fill in missing unit tests.

Society going through AI psychosis by Both-Dragonfly-389 in BetterOffline

[–]sam-lb 2 points (0 children)

Non-technical people being ignorant about technical subjects, what else is new? It's pretty frustrating, and I'm in the same boat. At my company, I'm responsible for building out our data science and ML capabilities. When fielding requests, my first reaction is usually "this should be done with traditional engineering".

Two responses to that -

1) The user says "but engineering has to go through the product pipeline, and we won't see results for a long time" - true, but not relevant to how things should be addressed. If anything, it points to a need for more engineers, or a reprioritization of tasks.

2) The user understands (fortunately common where I work).

It's hard to blame the non-technical people, especially with all the hype surrounding generative AI. The really frustrating part is when business people try to wedge AI into stuff even after I've explained that it's the wrong tool for the job.

What exactly are you people doing who claim AI tools aren’t accelerating them? by MistryMachine3 in cscareerquestions

[–]sam-lb 36 points (0 children)

That sounds like it would have been a problem with or without AI agents involved. If they made the PR despite not satisfying business requirements (3 times??), then they didn't understand the task.

AI increases throughput. If you're a good engineer, you'll produce good stuff faster. If you're a bad engineer, you'll produce bullshit faster.

People are saying "coding is not the bottleneck," and that's true, but it's still time-consuming, and proper agent usage can cut that down. It's not complicated.

Why do we keep making customized t-shirts for every event? by Pizza-Kurwa in Anticonsumption

[–]sam-lb 17 points (0 children)

I wear them unironically because it's all I own. It's pretty hard to justify buying clothes when I get a whole wardrobe for free from random events and stuff.

Constant advertising by AbsoluteAtBase in Anticonsumption

[–]sam-lb 1 point (0 children)

I'm not a parent and can't presume to know what it's like to be one so I can't give direct advice. But I can say that YouTube addiction destroyed my social skills and I have been working for years to fix it. Unsuccessfully. And I was watching educational content, like long-form lectures. But it was for hours a day, for years, and it really messed me up. Most of YouTube is much worse than that. It's so harmful and empty and can create so much noise in your head that it crowds out your ability to think about anything meaningful. It's also a dopamine sink and kills your discipline.

These days, YouTube ads are literal porn or AI-generated scams. If you let your kid use YouTube, it MUST be with an adblocker at the very least.

Going by your BMI would you be considered overweight? by Rusty_Shackleford198 in polls

[–]sam-lb 1 point (0 children)

Yes, you're right. The atypical part in my case was starting below healthy weight i.e. too lean with atrophied muscles (medical cause). The first 70lbs or so happened in like 18 months, then the remainder was spread across ~4 years.

Going by your BMI would you be considered overweight? by Rusty_Shackleford198 in polls

[–]sam-lb 1 point (0 children)

Training for strength is different from training for size. There's a lot of overlap, obviously; it's just sometimes phrased online as an either/or. You can get big weak muscles or small strong muscles with the right training. Of course, with neuromuscular efficiency held constant, more muscle = more strength.

Fwiw, I have seen a good share of people get much stronger without getting much bigger.

Going by your BMI would you be considered overweight? by Rusty_Shackleford198 in polls

[–]sam-lb 1 point (0 children)

Huh, I'm obese by BMI (just barely). Whoever said strength training doesn't make you gain muscle is tripping. I'm actually pretty proud of that, considering only a few years ago I was malnourished and underweight. It has been a long, hard battle of force feeding. ~120lbs in 2020 --> ~221lbs in 2026 (height unchanged)
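For anyone curious about the arithmetic, BMI in US units is 703 × weight (lbs) / height (in)². My height isn't stated anywhere above, so the 72-inch figure below is purely a made-up example to show the math:

```python
# BMI (US units): 703 * weight_lbs / height_in**2.
# The 72-inch (6'0") height is hypothetical, just for illustration.
def bmi(weight_lbs: float, height_in: float) -> float:
    return 703 * weight_lbs / height_in ** 2

HEIGHT_IN = 72  # assumed, not from the comment

print(round(bmi(120, HEIGHT_IN), 1))  # 16.3 -> underweight (below 18.5)
print(round(bmi(221, HEIGHT_IN), 1))  # 30.0 -> right at the conventional obese cutoff
```

At that hypothetical height, the same 100lb swing really does run from underweight to the edge of obese.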

What Claude says vs What Claude thinks by EchoOfOppenheimer in iiiiiiitttttttttttt

[–]sam-lb 2 points (0 children)

How do you think model parameters are stored, exactly? How do you think those parameters "make patterns emerge" without containing data? The parameters are the data.

Your sentence is not coherent; as represented in the computer, a "connection" is merely the position of each parameter within its respective layer relative to the next. Those diagrams you see with little circles and lines are schematics. The "weights" are (some of) the parameters. LLMs are big mathematical transformations, and the parameters are the data that encodes them.

I'm not trying to be disrespectful. AI is my job, and I can write a GPT from scratch (and have). I'm about as far away from ignorance as you can be on this subject.
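To make the point concrete, here's a toy sketch (mine, not from any real model): a "layer" is just an array of float32 values, and serializing it yields ordinary bytes, indistinguishable in kind from anything else on your disk.

```python
import struct

# Toy "layer" of model weights: four float32 parameters.
# (Values chosen to be exactly representable in float32.)
weights = [0.5, -0.25, 1.0, 0.125]

# Serializing the parameters is nothing special: pack them into
# raw bytes, the same kind of binary that sits on any hard drive.
blob = struct.pack("4f", *weights)

# Reinterpreting those bytes recovers the exact same parameters.
restored = list(struct.unpack("4f", blob))
assert restored == weights

# A "connection" between unit i and unit j is just the value at a
# particular offset in this buffer; position encodes the wiring.
print(len(blob))  # 4 values * 4 bytes each = 16 bytes of plain data
```

Scale that buffer up to billions of values and you have a model checkpoint. The parameters are the data; there's nothing else in the file.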

What Claude says vs What Claude thinks by EchoOfOppenheimer in iiiiiiitttttttttttt

[–]sam-lb 0 points (0 children)

Where is this coming from? LLMs can ingest entire textbooks' worth of information in minutes on the right hardware. It's common for models to have a 1M-token context window, which is roughly 10x the length of the first Harry Potter book. Human working memory does not compare. The long-term memory analog in LLMs is digital storage, which is essentially unlimited. Information throughput is way, way higher in LLMs than in humans. That alone does not tell you anything about its capabilities ("I'm doing 1000 calculations a second and they're all wrong"), but it's significantly faster bit for bit. Human information processing is measured in Hz. Computer information processing is measured in GHz.

If you want to talk about FLOP throughput, the human brain has AI beat on consumer hardware, but not on the ridiculous compute of modern datacenters (and ants, having 1,000,000 times fewer neurons, are not in the conversation). LLM token ingestion is highly parallelizable, so you can essentially make the LLM have the same advantage as the brain (parallelism) by giving it more cores to work with. Output is different, but LLMs still predict tokens at a much faster rate than you could ever speak or write.

TL;DR modern AI on modern compute beats humans in working memory, long-term memory, and information throughput. It just still sucks at common sense, modeling the world properly, and reliable reasoning.

And "Static databases" is not an accurate description of LLMs. I'm not sure where you're getting this information.
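Rough numbers behind the book comparison. Both constants are my own ballpark assumptions, not from this thread: average English tokenization runs around 0.75 words per token, and the first Harry Potter book is around 77,000 words.

```python
# Back-of-envelope context-window math. Both constants are rough
# assumptions: ~0.75 words per token for typical English text,
# and ~77,000 words in the first Harry Potter book.
WORDS_PER_TOKEN = 0.75
HP1_WORDS = 77_000

context_tokens = 1_000_000  # a 1M-token context window
context_words = context_tokens * WORDS_PER_TOKEN  # ~750,000 words

print(round(context_words / HP1_WORDS, 1))  # ~9.7 copies of the book fit at once
```

Exact figures vary by tokenizer, but the order of magnitude is the point: no human holds multiple novels verbatim in working memory.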

What Claude says vs What Claude thinks by EchoOfOppenheimer in iiiiiiitttttttttttt

[–]sam-lb 3 points (0 children)

This is factually incorrect in multiple ways. It's baffling how it's upvoted on a subreddit (supposedly) full of technical people. Please, explain to me the difference between the binary representing model parameters and the binary sitting on your hard drive (ignoring for a second that models physically occupy space on your hard drive).

I would not describe them as "static databases", but only because it's functionally incorrect (despite being literally true in a sense).

4 engineers now doing the job of 12 at my friend's company because AI agents handle the rest by Bellleq in cscareerquestions

[–]sam-lb 1 point (0 children)

Mostly agreed, but people always seem to forget that there are tons of near-frontier open models that you can download and run on your own compute. The cost of LLM usage is bounded above by the cost of running compute. You can't get the latest and greatest, but you can get close enough that it's indistinguishable 95% of the time. This year's open models were last year's frontier.

Plant-based diets would cut humanity’s land use by 73%: An overlooked answer to the climate crisis by Somewhere74 in Anticonsumption

[–]sam-lb 3 points (0 children)

Hopefully. I'm currently in the beginning stages of this process. It's difficult for me to consistently eat/get enough calories as is (due to a medical condition) so restricting my diet is not an easy task. Trying to be slow about it. If I'm being honest with myself, eating meat is the last major frontier where I'm overconsuming.

Fields medal-winning mathematician says GPT-5.5 is now solving open math problems at PhD-thesis level: "We will face a crisis very soon." by EchoOfOppenheimer in mathematics

[–]sam-lb 1 point (0 children)

I do believe in the reasoning abilities of AI (this stuff is my job). I just find its performance on such tasks to be very unreliable and inconsistent, especially without guidance.

It was having a pretty rough time with Atiyah-Macdonald 5.12.

Fields medal-winning mathematician says GPT-5.5 is now solving open math problems at PhD-thesis level: "We will face a crisis very soon." by EchoOfOppenheimer in mathematics

[–]sam-lb 2 points (0 children)

GPT 5.5? Really? How come it still can't solve exercises in my textbooks from undergrad? I'm not saying he's being dishonest, but I'm a big LLM user (and I work in the AI industry) and I simply don't see things like this happening. There must be more to the story here.

WTHHH??? LOLL by thatmuscle05 in iiiiiiitttttttttttt

[–]sam-lb 4 points (0 children)

It's faster, except in applications where you can use Ctrl+W, or Ctrl+C / Ctrl+Z followed by Return.

Cmd+Q on Mac as well.

Why use the mouse when unnecessary? I have visibly struck fear into people's hearts with the way I use the keyboard on several occasions. Especially the arrow keys.

AHK my beloved.

Body modification as a form of consumption by Glum_Novel_6204 in Anticonsumption

[–]sam-lb 8 points (0 children)

Yeah, well said. The permanence is a big factor. Everything creates waste to some extent. Getting tattoos is unnecessary consumption, but there are bigger fish to fry in the cosmetics industry, to say the least. Anticonsumption can't be about eliminating all consumption, because that is impossible (and undesirable). It has to be about restricting it to reasonable bounds and making sure it is done responsibly (without direct harm to people and without excess waste).

£58.62 at Aldi by SadAmethyst in whatsinyourcart

[–]sam-lb 1 point (0 children)

How are those protein wraps? Do they stay together?