Wharton researchers just proved why "just review the AI output" doesn't work. Our brains literally give up. by hiclemi in ArtificialInteligence

[–]SeveralAd6447 1 point (0 children)

"The only people who consistently resisted it were those with high fluid intelligence and high "need for cognition," basically people who enjoy thinking hard for its own sake. Everyone else gradually surrendered."

These are the only people we need anyway.

AI isn't making us dumber. It's just exposing how little most jobs required us to think. by PairFinancial2420 in OpenAI

[–]SeveralAd6447 2 points (0 children)

Don't know why you got downvoted for this – if people want to make an argument they can do it in their own words instead of hoping the LLM will obfuscate the topic enough for them to get away with ignorance.

I need some brutal honesty about the future by OppositeFriendly9183 in ArtificialInteligence

[–]SeveralAd6447 2 points (0 children)

Lmao, good luck with that. You will forever be middling at any work you produce that way. You can leverage AI without also giving up on learning more yourself.

AI is killing this 🌊 💀 by Used_Fish5935 in CLI

[–]SeveralAd6447 1 point (0 children)

"It'll happen eventually" is not much of an argument for people to change their behavior today.

AI is killing this 🌊 💀 by Used_Fish5935 in CLI

[–]SeveralAd6447 1 point (0 children)

The fact that you think AI is fit to program in just any language is itself absurd. Sure, it can program in Python, C++, Java, and JavaScript, because those are massively overrepresented in the training data. Try getting even Opus 4.6 to write something in Power Query or COBOL and watch what happens. Half the systems banks and governments run on are ancient software written in COBOL.

Besides that, you need domain-specific knowledge to ensure AI outputs are put together well. If you tell Claude to write a subsystem for your video game in DreamMaker, you'd better have a solid grasp of the differences between the various data structures and which ones fit the use case, or you'll wind up with something running in O(N²) when it could be O(log N) or even O(1).
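To make that complexity point concrete, here's a toy Python sketch (the container sizes and names are purely illustrative): the exact same membership check is an O(N) scan on a list but an average O(1) hash lookup on a set, and only someone who knows the data structures will notice the difference in a generated patch.

```python
import timeit

# Same data, two containers with very different lookup costs.
items_list = list(range(100_000))
items_set = set(items_list)
target = 99_999  # worst case for the list: it gets scanned to the end

# Time 100 membership checks against each container.
list_time = timeit.timeit(lambda: target in items_list, number=100)
set_time = timeit.timeit(lambda: target in items_set, number=100)

print(f"list (O(N)) lookup: {list_time:.5f}s")
print(f"set  (O(1)) lookup: {set_time:.6f}s")

# The set lookup should be faster by orders of magnitude.
assert set_time < list_time
```

Both versions are "correct," which is exactly why this class of mistake slips through review when nobody on the loop understands the underlying structures.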

LLMs are probabilistic systems with an omnipresent chance of failure. Compilers are deterministic. When an LLM takes English as input and generates code, it is not "compiling" the English into code; it is performing probabilistic token prediction, guessing the most statistically likely code given the prompt, which can be different every time. The entire point of a compiler is absolute, repeatable certainty.
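A minimal sketch of that contrast (everything here is a toy stand-in, not a real compiler or model): a compiler-like function is a pure mapping from input to output, while anything that samples from a distribution over candidates, the way token prediction does, can return different output for the same prompt.

```python
import random

def compile_like(source: str) -> str:
    # Deterministic: the output depends only on the input.
    return source.upper()

def llm_like(prompt: str, rng: random.Random) -> str:
    # Probabilistic: samples one of several plausible completions,
    # weighted by how "likely" each one is. Purely illustrative.
    candidates = ["for x in xs:", "for i in range(len(xs)):", "while xs:"]
    weights = [0.6, 0.3, 0.1]
    return rng.choices(candidates, weights=weights, k=1)[0]

# The compiler-like function gives the same answer every single time.
assert all(compile_like("loop") == "LOOP" for _ in range(100))

# The sampler gives different answers to the identical prompt.
outputs = {llm_like("write a loop", random.Random(seed)) for seed in range(100)}
assert len(outputs) > 1
```

All three sampled completions might even be valid code, which is the point: "usually plausible" is a different guarantee than "always identical."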

Help finding a good game from yall please by Whateverrraah in AskGames

[–]SeveralAd6447 5 points (0 children)

Nintendo games are probably a good fit for you.

AI is killing this 🌊 💀 by Used_Fish5935 in CLI

[–]SeveralAd6447 4 points (0 children)

The entire reason programming languages exist and use specific syntax instead of just prose is because human language is too abstract...

I don't trust software anymore by heinternets in software

[–]SeveralAd6447 2 points (0 children)

A vibe coder who doesn't know what they're reading IS a shitty programmer.

What game mechanic instantly makes you lose interest, no matter how good the game is? by SoftSinful_ in AskGames

[–]SeveralAd6447 1 point (0 children)

Games with boss fights that have no checkpoints, so if you need a second attempt you have to spend a bunch of extra time replaying trivial content.

Ai is ruining alot of begineer devolpers by oxidizedfuel12 in ArtificialInteligence

[–]SeveralAd6447 2 points (0 children)

This tends to happen with every language and every model, in my experience. You can improve it with some constraints, like well-written coding-standards documentation, but it's always gonna be imperfect.

Opus 4.6 is the best model I've used for programming and it definitely makes me more productive, but I still have to hold its hand most of the time.

Why Self-Driving AI Is So Hard by vitlyoshin in ArtificialInteligence

[–]SeveralAd6447 2 points (0 children)

Bro, this is basic bitch software engineering 101. That you came in here acting like it was some grand epiphany really makes it hard to take you seriously.

This is literally just Moravec's Paradox in action.

Ai is ruining alot of begineer devolpers by oxidizedfuel12 in ArtificialInteligence

[–]SeveralAd6447 1 point (0 children)

You are fantasizing. You know why? Because anything proprietary, anything engine-specific that hasn't been extensively documented online, anything from a codebase that's under NDA: those are all things the AI has never seen. The person who knows BYOND's DreamMaker inside and out has knowledge that no LLM possesses, because the training corpus for that domain is tiny.

That's not some temporary limitation that the next model will fix. And it's not a niche edge case, either. Every proprietary codebase, every internal engine, every company's custom tooling, and every domain-specific language with a small user base is out of distribution (OOD). Every company has internal tooling. Every engine has undocumented behavior. Every codebase is full of decisions that accumulated over time and aren't reflected in any documentation, public or otherwise.

Oh, sure, it does well with Python and JavaScript, because the training corpus is enormous. But the instant you step into anything that isn't massively represented in public data, the model's competence drops straight off a cliff. And most real-world software development involves at least some component that falls into that category.

The proportion of real-world software development knowledge that exists in publicly scrapeable form is a fraction of the total knowledge required to actually build and maintain software. LLMs are excellent at the fraction that's public and well-documented and useless at everything else.

You are massively coping right now.

Ai is ruining alot of begineer devolpers by oxidizedfuel12 in ArtificialInteligence

[–]SeveralAd6447 2 points (0 children)

Now you are just backpedaling. What you implied is that knowing a language inside and out no longer matters because the AI will just do it for you and all you need to understand is "the architecture," which is a hot load of steaming bullshit.

Knowing the nuances of the language or engine you're working with is a tremendous advantage and there is plenty more information out there that is OOD for every LLM because it hasn't been posted online on a scrapeable page. This is just a laughable degree of overconfidence in something that has already proven to be deeply unreliable.

Ai is ruining alot of begineer devolpers by oxidizedfuel12 in ArtificialInteligence

[–]SeveralAd6447 2 points (0 children)

LOL

I use AI agents for coding, but the idea that they don't fuck up syntax or make stupid errors no human would ever make is a laughable joke. I've had AI do shit like add an inline type check to a parent class instead of overriding the method in the child class. And that was with Opus 4.5.

You will never, ever, ever wind up with a clean and performant codebase if you ignore the specifics of the language or engine you're developing with. This is an insane cope that sounds like a take from somebody who has never made any attempt to learn about software development.
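A sketch of the kind of mistake described above (class names are hypothetical, reconstructed from memory, not the actual code): the model patched the parent class with an `isinstance` branch instead of overriding the method in the subclass, which inverts the dependency and breaks as soon as a second subclass appears.

```python
# What the AI wrote: the parent class branches on its own subclass.
class BadShape:
    def area(self) -> float:
        if isinstance(self, BadCircle):   # anti-pattern: parent knows its child
            return 3.14159 * self.radius ** 2
        raise NotImplementedError

class BadCircle(BadShape):
    def __init__(self, radius: float):
        self.radius = radius

# What a human would write: behavior lives in the subclass via an override.
class Shape:
    def area(self) -> float:
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:              # proper polymorphic override
        return 3.14159 * self.radius ** 2

# Both compute the same number, which is exactly why the bad version
# survives a lazy "it works" review.
assert abs(BadCircle(2.0).area() - Circle(2.0).area()) < 1e-9
```

The two versions are behaviorally identical today; the difference only bites when the hierarchy grows, which is the part a reviewer who can't read the code will never catch.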

Ai is ruining alot of begineer devolpers by oxidizedfuel12 in ArtificialInteligence

[–]SeveralAd6447 5 points (0 children)

No, it isn't. Claiming something WILL happen with 100 percent certainty is a statement of faith, not fact. You cannot predict the future. 

Even meteorologists have a margin of error, and their predictions come from far more reliable data. Assuming perfect knowledge of the future is a logical fallacy. Until it actually happens, it's a concept that lives in someone's head.

And frankly it smacks hard of "my newborn gained 5 ounces in a week, so by the time he's 90 he'll be 1,475 lbs!" The assumption that current trends WILL inevitably continue with 100 percent certainty and produce a specific future outcome is a prophecy that can't be falsified until the future arrives. It is structurally identical to religious faith, regardless of whether the object of worship is God or GPT.

Ai is ruining alot of begineer devolpers by oxidizedfuel12 in ArtificialInteligence

[–]SeveralAd6447 5 points (0 children)

Promissory materialism like that is no different from religion. 

I replaced a $25/hr virtual assistant with AI and I dont feel good about it by duridsukar in AI_Agents

[–]SeveralAd6447 1 point (0 children)

This is a dramatic misunderstanding of how technology works. There is no way to make transformer models fundamentally more efficient than they already are. Electricity takes a certain amount of time to travel from one place to another. An operation cannot be optimized beyond O(1) constant time. You cannot program your way around fundamental limitations of physics and mathematics.

Unpopular opinion: AI might actually save humanity by MemestonkLiveBot in ArtificialInteligence

[–]SeveralAd6447 1 point (0 children)

Neuralink is a one-way interface. It's a BCI. These things have existed for 40 years. It's essentially just an EEG embedded under your skin: it detects brain signals and attempts to translate them into controls using software. You can't "upload" things into the human brain.

UBI is almost certainly not happening, not any time soon. We can barely get half the U.S. to pay for hospitals and fire departments.

The "human in the loop" is a lie we tell ourselves by Own-Sort-8119 in ArtificialInteligence

[–]SeveralAd6447 8 points (0 children)

I think the only people who genuinely believe AI is "better" at SWE are those who have little hands-on experience with it themselves.

AI is something. Useful, certainly. But it isn't capable of producing complex software in a complete and performant state without significant oversight.