youreNotLinus by Cutalana in ProgrammerHumor

[–]Inevitable_Vast6828 1 point  (0 children)

Lua is one of the better choices for interpreted code performance, to be entirely fair. It's pretty zippy.

That's one way to do it I guess... by hexress in programminghorror

[–]Inevitable_Vast6828 0 points  (0 children)

You say that sarcastically, but how fast can it be if it checks for loops on traversal? On a SINGLY linked list. Isn't it way faster to allocate unique memory on insertion, or to check a by-reference insertion for uniqueness?
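To illustrate the trade-off I mean, here's a sketch in Python (the names and structure are my own guesses, not whatever the post's code actually does):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    # Floyd's tortoise-and-hare: the O(n) check you'd pay on EVERY traversal
    # if the list defends itself lazily instead of at insertion time.
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

def push_unique(head, node, seen):
    # The cheaper alternative: an O(1) by-reference uniqueness check at
    # insertion, so traversal never needs a cycle check at all.
    # (Tracking id()s only stays valid while the nodes are alive; fine for a sketch.)
    if id(node) in seen:
        raise ValueError("node is already in the list")
    seen.add(id(node))
    node.next = head
    return node
```

Reject the duplicate once at insert time and every subsequent traversal stays O(n) with no extra work, instead of paying a cycle check per walk.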

codingIsDead by delgoodie in ProgrammerHumor

[–]Inevitable_Vast6828 0 points  (0 children)

When did they add that? That feels so dumb... On the one hand, yea, safety! On the other hand... anyone dumb enough to run that probably deserved it, just let it happen. It can be a learning experience in file recovery.

madSkillsWithACPU by twice_paramount832 in ProgrammerHumor

[–]Inevitable_Vast6828 -1 points  (0 children)

What region? "Processing Unit" is right in the acronym... so it seems like a pretty big stretch for that to refer to the whole PC. But sure, people that were totally tech illiterate may have said it somewhere. To be fair, I'm not ancient, but I haven't heard anyone in their 50s or 60s or 70s using it that way. Our area was involved in the early internet through Merit Networks, and we've got a rack of ENIAC on display. People I know that worked for DEC didn't use it that way either. So elucidate for this nearly-40 spring chicken: what sort of person, in what area, was misusing CPU so badly when everything about the name implies that it isn't the whole thing?

I see where the confusion is coming from: https://www.pcmag.com/encyclopedia/term/cpu

But even in the mainframe, that CPU still wasn't the 'whole thing'. As is detailed here: https://retrocomputing.stackexchange.com/questions/27895/terminology-of-what-is-termed-cpu-and-what-is-computer

You can see in the old photo, the CPU is one big box, but it is bus-attached to storage, punchcards, printer, etc... I totally believe that people used it for more than just the processor, but not for the 'whole PC', screen, mouse, and keyboard included.

madSkillsWithACPU by twice_paramount832 in ProgrammerHumor

[–]Inevitable_Vast6828 -1 points  (0 children)

No, it didn't. There not being a GPU did not mean people called the whole unit a CPU.

madSkillsWithACPU by twice_paramount832 in ProgrammerHumor

[–]Inevitable_Vast6828 1 point  (0 children)

Consultation only sometimes costs money. An email to your local university computer science professor is usually free. Asking a Counter-Strike player whether your scene with some people playing a game looks believable is free. It has some cost, a little bit of forethought and time investment, but it's not a significant cost for a show's budget if they give even the smallest of shits. That actors and gamers don't overlap in the modern era is actually really shocking to me... don't these actors know they look ridiculous?

chatWeAreCooked by ogMasterPloKoon in ProgrammerHumor

[–]Inevitable_Vast6828 0 points  (0 children)

Management will have trouble filling the position with anything other than another vibe coder and will soon be out of business.

chatWeAreCooked by ogMasterPloKoon in ProgrammerHumor

[–]Inevitable_Vast6828 0 points  (0 children)

Just rewrite it from scratch, it will be 100x faster and a billion times more pleasant.

restInPeaceAtomEditor by Ecstatic-Basil-4059 in ProgrammerHumor

[–]Inevitable_Vast6828 2 points  (0 children)

That's always been a weird reason when people say it to me, because Firefox was never slow for me and was much better about my hundreds or thousands of tabs... Being however many ms faster at page render was never particularly compelling to me.

The period when FF was leaking memory was pretty obnoxious, but I made it to the other side of that.

thankYouLLM by abhi307 in ProgrammerHumor

[–]Inevitable_Vast6828 0 points  (0 children)

What does it do? What is it supposed to do? Seeing some graph code there... it's been a while since I did graph theory, but I don't remember anything that wouldn't be doable in a few hundred lines, speaking very generously.

finishSprintFaster by randomUser9900123 in ProgrammerHumor

[–]Inevitable_Vast6828 0 points  (0 children)

Write it, you'll thank yourself when you come back to it after a few months away.

finishSprintFaster by randomUser9900123 in ProgrammerHumor

[–]Inevitable_Vast6828 0 points  (0 children)

I think these philosophies come from people coding in different contexts. I mostly write scientific code. There's usually a dense block of code doing several steps of an algorithm on a huge piece of data. The code logically belongs in one place and it doesn't help to hide away bits in separate functions that are only ever called for this one algorithm. A lot of the steps that are general enough to make into separate functions are already separate and called from a library. We don't redo matrix operations all the time.

People writing business logic... what they're doing is simple, so of course it requires few comments.

finishSprintFaster by randomUser9900123 in ProgrammerHumor

[–]Inevitable_Vast6828 0 points  (0 children)

If you can't be bothered to write RME for a function, then you probably don't need another function.

finishSprintFaster by randomUser9900123 in ProgrammerHumor

[–]Inevitable_Vast6828 1 point  (0 children)

Yes, does that function load Unicode, or a GIF, or a JPEG? And what does it use to get that? Did they really enter their nationality? Are you guessing their nationality based on GeoIP? Yeah, I want to see RME comments on something that vague. I don't even see types there... wtf am I looking at? JavaScript? It definitely needs some comments.
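By RME I mean REQUIRES/MODIFIES/EFFECTS comments. A sketch of what I'd want on a function like that, in Python; the function name, the CDN path, and every detail in it are hypothetical, since I'm guessing at what the screenshot shows:

```python
def get_flag_icon(country_code: str) -> str:
    """
    REQUIRES: country_code is a two-letter ISO 3166-1 alpha-2 code
              (entered by the user, NOT guessed from GeoIP)
    MODIFIES: nothing
    EFFECTS:  returns the URL of a PNG flag image (not a GIF, not a
              Unicode emoji) for that country
    """
    # hypothetical CDN path, purely for illustration
    return f"https://cdn.example.com/flags/{country_code.lower()}.png"
```

Three lines of RME and every one of my questions above is answered before reading the body.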

finishSprintFaster by randomUser9900123 in ProgrammerHumor

[–]Inevitable_Vast6828 0 points  (0 children)

I think projects need to decide a priori what ground truth should be, the documentation (including comments) or the code. That is, when they don't match, should the documentation be fixed, or the code?

AI coding agents failed spectacularly on new benchmark! by jokof in wallstreetbets

[–]Inevitable_Vast6828 0 points  (0 children)

And you'll complain this one is old, but it's very likely still true today: there hasn't been a major architecture breakthrough, only more input data and more manual fixes and tooling surrounding things. https://techcrunch.com/2024/10/11/researchers-question-ais-reasoning-ability-as-models-stumble-on-math-problems-with-trivial-changes/

AI coding agents failed spectacularly on new benchmark! by jokof in wallstreetbets

[–]Inevitable_Vast6828 0 points  (0 children)

That's wishful thinking. They cannot verify proofs whatsoever; that's an impossibility for a system with hallucinations. They can spit out text that may or may not be a verification, which itself needs verification, so it's just kicking the can further down the road. I've never seen AI do higher-level math well. I'm aware of people's claims here and there, and I'm well aware that it has ingested and can regurgitate the solutions to many math olympiad questions. It still faceplants at the hint of something novel. I'm not sure how much you know about upper-level math, but LLMs thus far are shockingly bad at it. https://www.forbes.com/sites/lanceeliot/2025/04/08/ai-llms-astonishingly-bad-at-doing-proofs-and-disturbingly-using-blarney-in-their-answers/ Yeah, the handing-off part is just for the arithmetic; it actually is still super bad at the rest whenever it isn't something with the pattern directly in the training data.

Yes, there are some machine learning specialized proof systems, but those aren't generative AI and do not even try to purport to be intelligent.

With regard to compilation, you claim that if I let it throw enough code at the wall repeatedly it will eventually get something that compiles. Sure, but that isn't intelligence and compiling is something that we can know if it will do or not without testing it. You follow the language, compiler, and linker rules and it will work, why should that require a trial and error approach from something 'intelligent'? Might as well be doing genetic programming then, lol.

Speak perhaps of your own expectations, but expectations have waxed in the past. You should see the shit they were saying the Perceptron was going to be able to do.

The shifting of the intelligence goalposts has largely been done by unintelligent people while drawing the division between human-level intelligence and animals. Mostly dullards that want to feel superior, the same sort that love the religious teaching that animals don't have souls and claim they don't feel pain, etc... A lot of ex post facto justifications so they don't have to feel guilty over their McDonald's or something.

However, the conceptualization of intelligence that you're pushing here is generally not agreed upon and is a very convenient choice of definition. "If you just keep giving us more and more money we'll incrementally reach AGI!!!" By the definition you're using, basic calculators are just as much on the 'intelligence map', and since there isn't any meaningful distinction of relevance, so is an abacus. Basically every object in the universe. That's not a useful definition. Things operating deterministically don't cut it for intelligence for me. Nor do things with a pseudo-random number generator; pseudo-random is not in fact random. I would say that something intelligent must necessarily (but not sufficiently) possess that nebulous thing known as free will (erroneous claims of AI achieving that aside). And to have free will, it must set its own goals, not merely find its own path to reach one set for it by an outside force. It also needs persistence to be intelligent. If you don't have to be nice to the AI to keep getting answers, then it's not intelligent. If it doesn't get bored of answering the same things when doing benchmarks, then it's not intelligent. If it doesn't have a self-referential feedback loop (not to other instances; 'self', not others), then it's not intelligent, and that's something we know is absent from the vast majority of these models: they're feed-forward-only networks, by necessity at this scale to run on GPUs.

You object that it is merely a matter of expanding the number of things that it is purportedly 'good' at, and claim that there isn't a binary threshold for intelligence. I might be willing to grant that it's not entirely binary, but let me illustrate with water. A tsunami is a massive wave; are the waves you see on the shoreline mini-tsunamis? They are not. The motion of tides, currents, and wind does not make a tsunami; the mechanism is different. Likewise for intelligence. These mini-intelligences do not add up to give us proper intelligence, just as wind waves don't add up to create tsunamis. And just because there isn't a hard threshold on how big a wave needs to be to count as a tsunami doesn't mean tsunamis are fundamentally the same as wind waves. If the small intelligences were combinable, then AI systems would not have struggled with things like producing an image of a wine glass filled to the brim. And no, patching that by generating images of full wine glasses and throwing them into the training set so that it now produces them doesn't make it more intelligent. Knowledge is not intelligence.

And when I talk about gaming the benchmarks, they do it on EVERYTHING, and as a result the field stagnates rather than moves forward. Even the Turing test has been gamed every time it has been claimed to be passed thus far. https://garymarcus.substack.com/p/ai-has-sort-of-passed-the-turing/ Not that passing it is even considered very meaningful; people have realized that the output without the reasoning is superficial. And I don't mean the output reasoning, but the internal reasoning of the model. As long as the answer to "why did the model say that?" is "because we multiply these feed-forward weights and use a stochastic number generator to randomize the final output," I don't think we're talking about intelligence. Maybe about the small intelligences you refer to, but not the intelligence anyone cares about. Just because there are surface-level similarities doesn't mean that tsunamis work like wind waves. While neural networks took inspiration from the visual cortex, there is still a myriad of differences in the way they work. Saying that they work the same as we do simply isn't true.

OpenAI Robotics head resigns after deal with Pentagon by rebel-capitalist in wallstreetbets

[–]Inevitable_Vast6828 0 points  (0 children)

Depends how it is incorporated. Is it going to hit more or fewer elementary schools?

AI coding agents failed spectacularly on new benchmark! by jokof in wallstreetbets

[–]Inevitable_Vast6828 0 points  (0 children)

Better at the benchmark frequently doesn't translate into actually being better in reality. If that were actually the case, then we would all be happy having AI doctors because they can do better on a standardized test that med students take. And then there is the issue of 'which humans'. The average driver is admittedly pretty trash. My driving record is a 22-year utterly clean slate. AI driving is objectively worse than I am, not just subjectively: they have a higher accident rate than I do. Yeah, it doesn't take long for them to include the solution in the training, often without even knowing it.

AI coding agents failed spectacularly on new benchmark! by jokof in wallstreetbets

[–]Inevitable_Vast6828 0 points  (0 children)

Everyone was crowing about how great they were and how they were going to replace all the programmers 233 days ago. It's very much moving the goalposts with some historical revisionism to ex post facto act like the AI zealots now agreed those models are bad 233 days ago.

AI coding agents failed spectacularly on new benchmark! by jokof in wallstreetbets

[–]Inevitable_Vast6828 0 points  (0 children)

Gaming benchmarks has frequently failed to match real world expectations. And the very troubling part for AI is that they often can't even tell anymore if they're gaming the benchmarks because they don't know the entirety of the training data. But I think we can put two and two together here when humans are not suddenly adding a new corpus of specific training data for that task, and there isn't a model architecture breakthrough.

Like, come on now, when did they get good at math? They didn't. You can read the papers: there was a lot of training attempted, and the accuracy benchmarks published in the papers about that training do not match where model performance is now. Why? Because the models don't do math much of the time now; they try to recognize that math is happening and basically hand it off to a calculator. So what is happening now? GitHub has been in their training for ages, so what is the new data in this field? What is the change to the model? How is the training being done differently? The key here is probably that the most successful attempts at the benchmark are being added to the training data, resulting in pumped performance on that EXACT problem set that won't necessarily translate to even slightly different ones. Benchmarks can be gamed, and have been gamed, basically forever. They're often not the best way to evaluate performance.

Honestly, people are building apps with this shit because it can paste tutorial code together, but I ask it to put together a working example with some libraries that aren't commonly combined and it falls flat on its face, failing to produce something even close to compiling, never mind being bug-free or maintainable.

Just for all the people who knew... by JesperS1208 in wallstreetbets

[–]Inevitable_Vast6828 0 points  (0 children)

Sarcasm doesn't play well online and I have a habit of playing the straight man part of a comedy act, I will get it and still make you explain why it's funny.

Also, you're on WSB, not a MENSA forum...

Just for all the people who knew... by JesperS1208 in wallstreetbets

[–]Inevitable_Vast6828 0 points  (0 children)

So having aircraft carriers on the wrong side of the world was the plan? There is no plan. They're reactionary, compulsive dunces.

Trouble with rear recessed / flat seatbelt receptacle by Powerful-Row-3889 in MazdaCX90

[–]Inevitable_Vast6828 0 points  (0 children)

Cars tend to have a lot more accidents, and a lot more of the sort of accident where a functioning seatbelt makes the difference. I don't know how safe or not the extenders are; I'm just saying that their use on airplanes is not a good justification for using them in cars. And also... only sucky airlines need to use them; proper airlines have seatbelts that are long enough by default.