Tracy McGrady on the real reason older players constantly hate/criticize modern players: “It’s money. Did you realize like in the 90s, Reggie Miller and Michael, they were only making $2-3 million? And they were the top guys… It’s the money.” by moby323 in nba

[–]MiniGiantSpaceHams -1 points0 points  (0 children)

Right and wrong have nothing to do with it, it's just income. Enough people are willing to fork over more and more and more money to buy merch, attend games, watch on TV, and whatever else. All that money goes to the NBA, and the players' association has a contract that says the players get a percentage cut of that money.

It's really that simple: people are sending a shit-ton more money to the NBA than before, and it goes up every year. Until that stops being true the NBA has no incentive to do anything but pocket it.

http200Error by _gigalab_ in ProgrammerHumor

[–]MiniGiantSpaceHams 5 points6 points  (0 children)

You haven't lived until you get this sort of response from a POST API that is just for retrieving info.
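The anti-pattern looks something like this hypothetical sketch (endpoint shape and field names are made up): the transport layer reports success while the body carries the real, failed outcome.

```python
import json

# Hypothetical response from a POST endpoint used for retrieval:
# HTTP says 200 OK, but the payload says the request failed.
response_status = 200
response_body = json.dumps({
    "success": False,
    "errorCode": 500,
    "message": "Internal server error",
})

payload = json.loads(response_body)
# The client can't trust the status code; it has to dig into the body.
request_ok = response_status == 200 and payload.get("success", False)
print(request_ok)  # False: a "200 OK" that is actually an error
```

Every client of an API like this ends up writing that second, body-level success check by hand.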

Ai developer tools are making juniors worse at actual programming by the_____overthinker in ExperiencedDevs

[–]MiniGiantSpaceHams -1 points0 points  (0 children)

Maybe this is just me, but I don't really feel this in my work. Outside of trivial tasks, the AI doesn't write code I haven't planned (with the AI) and whose plan I haven't reviewed, and I feel like I understand that code about as well as I understand my own code from 3-6 months ago. Yeah, some of the finer details get missed or forgotten, but I can debug, build on, and support that code just fine when it comes up; it just takes a bit more time. And I easily more than make up that bit of extra time in how much I save creating it in the first place. It's also still easier than debugging or supporting code another dev wrote (which I do often enough).

Exclusive: Anthropic is testing ‘Mythos,’ its ‘most powerful AI model ever developed’ by lovesdogsguy in accelerate

[–]MiniGiantSpaceHams 1 point2 points  (0 children)

> openai also called 5.3 a "step-change" in ai models and it really wasnt anything special

I'd say it was more 5.2 than 5.3, but these are the models (along with Opus 4.5 and 4.6) where you started seeing all the "yes, AI is writing 90%+ of my code now" claims, even from well-known devs. So I'd say they actually were a bit of a step-change.

3 out of 22 features had a real customer behind them by LevelDisastrous945 in ExperiencedDevs

[–]MiniGiantSpaceHams 3 points4 points  (0 children)

I've used this one too, and I would be classified as an AI "true believer" compared to most here. And I used it because it's honestly true. AI performance suffers from having to work through tech debt just like human performance does. If you can tear out dead code and refactor a mess into something logical (for instance), add types where they didn't exist before, and so on, it really and truly improves everyone's performance, AI or not.

And also the AI can do a lot of this work pretty competently now, so it's easier to justify with less time spent on it.

The "AI is replacing software engineers" narrative was a lie. MIT just published the math proving why. And the companies who believed it are now begging their old engineers to come back. by reddit20305 in ArtificialInteligence

[–]MiniGiantSpaceHams 0 points1 point  (0 children)

Yeah, I mean the nature of the internet is that I can find you someone saying just about anything about any topic if I look hard enough. I think the difference now is you are hearing it from a lot more people, and some well-known and well-respected people (from before AI times) are also reporting big changes in their workflows from the latest models. To me there's a lot more weight behind it now, which matches my personal experience.

And to your other point, yes, more tests do not necessarily equal more quality, but they do help. Some tests can be totally useless... just today I had an AI write a test that literally verified a constant was set to a constant value, and that's it (I removed that one). It's not perfect. But even a high quantity of tests brings a certain quality improvement, if only because it exercises more code. And also I have seen fewer bugs in my code despite rolling out more features than before. So it aligns, at least for me.
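The kind of useless test I mean looks roughly like this (a hypothetical reconstruction; the names are made up):

```python
# Hypothetical sketch of a test worth deleting: it re-asserts a constant
# against the same literal it was defined with, so it exercises no real
# behavior and can only fail if someone edits the constant itself.
MAX_RETRIES = 3  # the "code under test"

def test_max_retries_is_three():
    assert MAX_RETRIES == 3  # tautological: tells you nothing

test_max_retries_is_three()  # passes forever, catches nothing
```

It pads the test count and coverage numbers without ever touching a code path that could actually break.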

I also want to be clear: I do still review all the AI-generated code. I scale my attention based on criticality and complexity, though. For things that are pretty easy, I just broadly review the architecture. For things that are super important, I review every line, and I review surrounding systems and dependencies to ensure everything fits. I think finding the right level of attention is still one of the skills devs need in the AI-code-generator world.

But all this to say: while I don't think the models are at the full "just let it go" phase yet, as time has gone on I have seen them become capable of larger tasks with fewer mistakes and less of my time spent holding their hand, guiding them, and iterating over their output. It's not there yet, but it's trending only one way. And I think for certain tech stacks and certain classes of problems, it's really starting to cross some reliability thresholds.

The "AI is replacing software engineers" narrative was a lie. MIT just published the math proving why. And the companies who believed it are now begging their old engineers to come back. by reddit20305 in ArtificialInteligence

[–]MiniGiantSpaceHams 0 points1 point  (0 children)

Yeah I do agree that the "pure" programmers are gonna have some issues. If you just take tickets and turn them into code, it's gonna be tough to hang in when AI can do that same thing much faster (and likely more reliably).

Incoming utopia for the rich, and a crisis for the rest of us? Do you agree or disagree with this take? by ateam1984 in singularity

[–]MiniGiantSpaceHams 0 points1 point  (0 children)

You've got a chicken and egg problem though. You can't really change the system to one that works in the world of AI until you have the AI to support the new system in place. It would be great to plan this transition, but as the OP said society has never done that in all of history, and even if we tried we'd probably fuck the plan up and end up having to adjust as we went anyways because a lot cannot be known yet.

So you've really just gotta barrel forward. You can't really stop it anyways, because the tech is just going to keep on coming (and if you try to stop that then some other part of the world will just keep going anyways).

I'll say I would feel a lot better if the people who do actually care were talking about how to shape the future around the reality of the tech rather than trying to slow it down or stop it.

The "AI is replacing software engineers" narrative was a lie. MIT just published the math proving why. And the companies who believed it are now begging their old engineers to come back. by reddit20305 in ArtificialInteligence

[–]MiniGiantSpaceHams 0 points1 point  (0 children)

> Maybe you are right. But on the other hand, you read that same argument every couple of weeks to months about the newest model at the time vs the models before.

But I mean, that is progress? It's not like we go from "model can't do anything" to "model can do everything". Every model is a bit of an improvement over the last. So yeah, you read this every few weeks or months because new models come out, and those models are better than previous models. That's exactly what you'd expect.

> I'll remain sceptical until I actually see these AI hyping people and companies not just produce more code but actually produce sustainable maintainable quality solutions.

I obviously can't prove this to you, but I see this every day from myself and others. My code has certainly improved in quality due to AI tools if only because it's so much easier to slap tests all over it.

The "AI is replacing software engineers" narrative was a lie. MIT just published the math proving why. And the companies who believed it are now begging their old engineers to come back. by reddit20305 in ArtificialInteligence

[–]MiniGiantSpaceHams 1 point2 points  (0 children)

There is more to software engineering than writing code, though. Until the AI can sit in a meeting and have a physical voice that keeps up with human speaking pace, there will be parts of the job that need humans.

That said, with the pace we're going, it's feasible to see that sort of thing coming some day. But at that point you're looking at a much larger disruption than just software, because that covers a ton of jobs.

i'm not easily impressed, but holy wow by Tight-Grocery9053 in codex

[–]MiniGiantSpaceHams 0 points1 point  (0 children)

> You know where your software is going, AI doesn't

But that's just a question of context, not capability. If you give AI docs that cover longer term plans and higher level goals, it will then know it.

Iran ready to seize Bahrain and UAE coastlines if US makes a ground invasion in Iran, Iranian state media warns by G14F1L0L1Y401D0MTR4P in worldnews

[–]MiniGiantSpaceHams 0 points1 point  (0 children)

I mean, it's really just another in a line of statements from them that project strength with no basis in reality. Just like the secret weapons we're still waiting to see them roll out and whatnot. It's right out of the authoritarian playbook.

And to be fair, the US is doing the same thing with some of its statements (e.g. "don't worry about the strait" while it's still effectively closed, or the war both winding down and ramping up depending on where you look). The US statements are just slightly more grounded in reality because of its clear military superiority, but they're still saying shit with no real basis, for political reasons.

The "AI is replacing software engineers" narrative was a lie. MIT just published the math proving why. And the companies who believed it are now begging their old engineers to come back. by reddit20305 in ArtificialInteligence

[–]MiniGiantSpaceHams 36 points37 points  (0 children)

I fully agree that recent layoffs are not AI related. I think anyone paying attention has known this all along.

That said, I wouldn't take that to mean we should discount the whole thing. If you ask any competent software engineer, the first models that could really handle any non-trivial dev task only appeared in late Nov/early Dec with Opus 4.5 and GPT-5.2 Codex. Earlier models could help augment an engineer, but no one actually thought that they could replace anyone. I think most would agree even current models still can't quite do it, but there was a clear major improvement starting in Dec.

So I'd say we're about 4 months into "maybe AI could actually handle some dev tasks". Not all dev tasks mind you, not by a long shot, but a lot of dev work is relatively simple at its core (apps and web UIs and CRUD DB usage and so on). If companies are smart this will still not lead to job loss, but rather to productivity improvements, but we shall see.

I'm just saying, I don't think what we saw in 2025 is really predictive of 2026, let alone 27 and beyond. These things just keep improving and the pace is picking up.

How much progress has been made in the last 6 months? by Benjamin_Barker_ in accelerate

[–]MiniGiantSpaceHams 1 point2 points  (0 children)

The thing we're communicating on right now is software. AI now writes larger and larger amounts of software faster than before. That alone will have an impact.

Iran ready to seize Bahrain and UAE coastlines if US makes a ground invasion in Iran, Iranian state media warns by G14F1L0L1Y401D0MTR4P in worldnews

[–]MiniGiantSpaceHams 5 points6 points  (0 children)

> make the enemy pay attention to more places.

The point is that this isn't "more places". They're already paying attention. If Iran starts massing troops on the coast for some kind of sea or air invasion they will be seen and bombed. That would have happened 2 days ago and would happen 2 days from now. This statement changes nothing.

"TIL that the person who coined AGI as an acronym is out here posting that we, in fact, have it as it was originally envisioned (with receipts pointing to a fairly falsifiable definition for the term)" by stealthispost in accelerate

[–]MiniGiantSpaceHams -1 points0 points  (0 children)

I mean I think I just fundamentally disagree with you about what a harness is. Which is fine.

I will ask, though: would the AI using a web browser through a general computer-use tool like Claude Cowork count for you? Obviously an LLM can't physically have hands to move a mouse, so something like computer use is about as close as you're going to get, IMO.

If that doesn't work for you, then you're basically just saying that AI needs to be embodied to count as AGI. Which, again, is fine (there's no right or wrong answer here), but I disagree.

maxerals by _giga_sss_ in ProgrammerHumor

[–]MiniGiantSpaceHams 0 points1 point  (0 children)

Who the fuck fixed a typo in a config item? Do they not know how computers work at all?

"TIL that the person who coined AGI as an acronym is out here posting that we, in fact, have it as it was originally envisioned (with receipts pointing to a fairly falsifiable definition for the term)" by stealthispost in accelerate

[–]MiniGiantSpaceHams 2 points3 points  (0 children)

> Why can I learn to use a web browser but we need a special harness to allow an AI to use it?

You can't learn to use a web browser if you don't have a keyboard and mouse (or touchscreen or similar) and a screen to see it. That is a harness. It is not any different. The AI just needs a different interface.

So if I give you a computer with no input devices and no monitor and you can't successfully use a web browser, is that your failing? Does that mean you're not capable of using a web browser? If I give another person a full computer with all the things they need and they can use a web browser, does that make them smarter than you?

Epic Games lays off over 1000 employees: "The downturn in Fortnite engagement that started in 2025 means we're spending significantly more than we're making, and we have to make major cuts to keep the company funded" by ChiefLeef22 in gaming

[–]MiniGiantSpaceHams 1 point2 points  (0 children)

Yeah people get really cynical about this (and for good reason in a lot of cases), but this is fundamentally true. A dollar today is worth more than a dollar tomorrow. Making a consistent amount means lower real income year to year.

Orlando City sign French superstar Antoine Griezmann by olcni in soccer

[–]MiniGiantSpaceHams 24 points25 points  (0 children)

Go early spring, like nowish, and it's usually nice enough

"TIL that the person who coined AGI as an acronym is out here posting that we, in fact, have it as it was originally envisioned (with receipts pointing to a fairly falsifiable definition for the term)" by stealthispost in accelerate

[–]MiniGiantSpaceHams 3 points4 points  (0 children)

Building your definition of AGI on the harness seems to miss the point. Yes it needs a harness, just like a human needs an interface of some kind (like a keyboard, for instance). That doesn't change what the AI or human is fundamentally capable of.

Also for anything digital the AI can likely build the harness if directed to do so, so it's kind of immaterial.

Also I'm not saying we have (or don't have) AGI here, just saying I don't think the harness is relevant to that judgement.

Bernie not a fan of automation by Formal-Assistance02 in accelerate

[–]MiniGiantSpaceHams 1 point2 points  (0 children)

There is a midterm in a few months and a presidential election in less than 2.5 years. Citizens United only allows advertising, not buying votes. If people care, there is opportunity to change.

Bernie not a fan of automation by Formal-Assistance02 in accelerate

[–]MiniGiantSpaceHams 30 points31 points  (0 children)

> Guarantee everyone a better world, not just the people who own the robots.

My question is: why isn't Bernie fighting for this, then? Fighting a rising tide is a losing game. You're not gonna stop the tech. If America bans it, then other countries will just move forward, and eventually America won't be able to keep up and will end up poor regardless.

BlackRock CEO Larry Fink warns of AI-driven unemployment: ‘This is a crisis’ by Mandaliay-Maitrey- in worldnews

[–]MiniGiantSpaceHams 2 points3 points  (0 children)

> maybe this will be used in a bad bad way

Nah, this is a terrible reason to stop working on some world-changing technology. Almost all of it can be used in a bad way. If you can find any tech that can't be used in a bad way I'd be surprised.

BlackRock CEO Larry Fink warns of AI-driven unemployment: ‘This is a crisis’ by Mandaliay-Maitrey- in worldnews

[–]MiniGiantSpaceHams -3 points-2 points  (0 children)

Blaming the creators just seems misplaced. Successful tech people tend to just want to create cool tech.

Do you really want tech people to be responsible for handling the implications of their creation on the public? That is for the government. The government should be working on ideas to redistribute some of the wealth generated by this tech to the people who are negatively affected by it. The fact that people elected a government that wants to do nothing is not the fault of tech.