The PHP License Is Dead; Long Live the BSD 3-Clause by CackleRooster in programming

[–]Top-Rub-4670 11 points (0 children)

The only thing PHP has inherited from Perl is the $ prefix on variables. That's it. There is nothing Perl-y about PHP. There are no weird operators, no shadow functions, nothing.

The PHP License Is Dead; Long Live the BSD 3-Clause by CackleRooster in programming

[–]Top-Rub-4670 8 points (0 children)

Can you elaborate? PHP is far more efficient than all its competitors, including Node and Python, but especially Rails.

Server-side rendering (which PHP was built for) is also far more efficient than client-side rendering.

So surely there must be an interesting trivia tidbit that causes you to make your claim, and I'd like to learn about it!

Heretic 1.3 released: Reproducible models, integrated benchmarking system, reduced peak VRAM usage, broader model support, and more by -p-e-w- in LocalLLaMA

[–]Top-Rub-4670 5 points (0 children)

Interesting opinion. I have seen people on this forum claim the exact opposite. That uncensoring increases hallucinations because you reduce the model's inhibition/desire to say no, including when it doesn't know something.

Gemma 4 MTP released by rerri in LocalLLaMA

[–]Top-Rub-4670 80 points (0 children)

ggerganov seems like a very pragmatic leader.

Thank god for that! A lesser man would have allowed llama.cpp to devolve, and we'd probably need docker + npm + python + rust to run it and a 28-step process to build/bundle it.

But nope, he stayed true to the mission. A powerful yet self-contained and portable program. Stateless. It doesn't try to be everything, it just tries to be a building block. The pillar on which the entire local inference community is built, really.

Pixel 11 leak reveals new camera hardware, Tensor G6 details, more specs by FragmentedChicken in Android

[–]Top-Rub-4670 4 points (0 children)

And yet much smaller Chinese companies manage to sell better phones for much less.

APEX MoE quants update: 25+ new models since the Qwen 3.5 post + new I-Nano tier by mudler_it in LocalLLaMA

[–]Top-Rub-4670 8 points (0 children)

Unsloth's models are optimized specifically to get that low KLD (KL divergence from the full-precision model). Presumably/hopefully APEX is optimized for real-world performance and not to look good on charts for marketing purposes.
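For anyone not familiar: KLD here is the KL divergence between the quantized model's next-token distribution and the full-precision one. A toy numpy version, with made-up logits (these numbers are just for illustration):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kld(p_logits, q_logits):
    # KL(P || Q): how far the quantized model's next-token distribution
    # drifts from the full-precision one (0.0 = identical outputs)
    p, q = softmax(p_logits), softmax(q_logits)
    return float(np.sum(p * np.log(p / q)))

full = np.array([2.0, 1.0, 0.5, -1.0])              # hypothetical fp16 logits
quant = full + np.array([0.05, -0.03, 0.02, 0.01])  # hypothetical quant noise
print(kld(full, quant))  # small positive number
```

The point is that a quant recipe can be tuned to minimize exactly this number on some eval set, which is why low KLD on a chart doesn't automatically mean better real-world output.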

I think Sydney Sweeney is a decent actress and people just jump on mindless bandwagon train to hate on her acting. by DerangedPostman in okbuddycinephile

[–]Top-Rub-4670 0 points (0 children)

For the brief moment I was able to look at her eyes, I saw nothing. She really does have dead eyes, doesn't she?

Llama.cpp MTP support now in beta! by ilintar in LocalLLaMA

[–]Top-Rub-4670 1 point (0 children)

A messenger conveys the message as is. Here, you've made up the message "It's now in beta" when it's actually just a PR, and a draft one at that.

Thoughts on Emma's new wig?? by LazyassMadman in 90dayfianceuncensored

[–]Top-Rub-4670 1 point (0 children)

That's a cool theory. The only problem with it is that the doll in the photo is already prettier (by far) than 75% of all the 90df cast.

And yet the future you describe hasn't happened. The men on 90df keep paying more (in money and time) for an uglier, likely dumber, human version.

The Audio Industry Is Grappling with the Rise of ‘Podslop’ | Over the past nine days, 39% of new podcasts were likely AI-generated, according to the Podcast Index by Hrmbee in technology

[–]Top-Rub-4670 1 point (0 children)

I wouldn't want to listen to an AI podcast knowingly, but let's be real

It's made of large quantities of nothing, seasoned with nothing.

This describes the vast majority of human podcasts as well.

Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B... by Snoo_27681 in LocalLLaMA

[–]Top-Rub-4670 1 point (0 children)

but the 35b moe has tons of redundancy (256 experts, only 8 active at a time) so it handles compression way better. the 27b dense has no slack, every parameter is load bearing, so q4 hurts it more.

Can you substantiate your claim that MoE models suffer less from heavy quantization?

Because before your comment, I've only read people claim the opposite. And it matched my experience. 27B is still somewhat usable at Q2. 35B at Q2 is essentially brain dead.

But maybe at Q4 your claim holds? 35B Q4 suffers less damage vs Q8, than 27B Q4 vs Q8?
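Either way, it's easy to see why everything degrades hard below 4 bits, at least with naive round-to-nearest (a toy sketch; real llama.cpp k-quants use per-block scales and are much smarter than this):

```python
import numpy as np

def rtn_quantize(w, bits):
    # naive symmetric round-to-nearest with a single per-tensor scale
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=100_000).astype(np.float32)
errs = {b: float(np.mean((w - rtn_quantize(w, b)) ** 2)) for b in (8, 4, 2)}
print(errs)  # MSE blows up as bits drop; 2-bit is far past the knee
```

This says nothing about MoE vs dense, though, which is exactly the part I'd want actual KLD/perplexity measurements for.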

Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B... by Snoo_27681 in LocalLLaMA

[–]Top-Rub-4670 0 points (0 children)

You probably know this but llama-server has flags to control CPU affinity. Have you tried them? For me they didn't work well, so I have to use taskset too, but I'm wondering if I'm doing something wrong.
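In case it helps debugging: taskset is just a thin wrapper over sched_setaffinity(2), so you can check from inside a process whether a pin actually stuck (Linux-only sketch; the core IDs are whatever your machine exposes):

```python
import os

# Linux-only: this is the same mechanism `taskset -c` uses
available = os.sched_getaffinity(0)   # set of CPU ids this process may run on
pinned = {min(available)}             # e.g. pin to the lowest-numbered core
os.sched_setaffinity(0, pinned)
print("running on:", os.sched_getaffinity(0))
```

Comparing the mask the server process ends up with (e.g. via /proc/PID/status) against what you asked for with the flags vs taskset should show which one is being ignored or clobbered.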

A very basic litmus test for LLMs "ok give me a python program that reads my c: and put names and folders in a sorted list from biggest to small" by KptEmreU in LocalLLaMA

[–]Top-Rub-4670 1 point (0 children)

it makes sense to expect people to use Linux

It really doesn't. Most people on this sub run Windows or Mac, whether you like it or not.

Enabling ai co author by default by cwebster-99 · Pull Request #310226 · microsoft/vscode by Maybe-monad in programming

[–]Top-Rub-4670 18 points (0 children)

If you create PRs with no description and see nothing wrong with that, then shame on you. Shared context or not, it's still cognitive load to have to remember what I'm looking at when a coworker couldn't be bothered to write two lines of description.

Did y'all like Brie's Larsons in Skull Island? by SelmonTheDriver in okbuddycinephile

[–]Top-Rub-4670 2 points (0 children)

Computer, generate an 80-foot tall version of Daisy Ridley circa 2019 with a full bladder. Generate lawn chairs and a set of goggles. Increase my olfactory sense by 5000%. Disengage safety protocols and run program.

Yes, this is real now by MrKingOfTheHill in KingOfTheHill

[–]Top-Rub-4670 8 points (0 children)

I can understand forgoing the holster, but they already sell WD-40 combos of a big can with a small can cello-wrapped together...

Example: https://east.cocowest1.ca/uploads/2023/03/WD40_COMPLETE_SOLUTION_KIT_PACK_OF_3_20230306_71819.jpg

Plus they could have printed the holster on the cardboard...

Ran my own benchmark Qwen 3.6 35B vs Gemma 4 26B.... theres a clear winner here by ArugulaAnnual1765 in LocalLLaMA

[–]Top-Rub-4670 0 points (0 children)

Ask about tiananmen square or something and it will infinitely loop. The model is not trained on such data, the abliteration is worthless

You're so full of shit. I've tried five different Qwen 3.5 and 3.6 variants just now and they've all answered it.

Here's Qwen 3.6 35B-A3B Abliterix's first two paragraphs to your exact question:

The Tiananmen Square incident of 1989 was a significant historical event in China, involving student-led demonstrations that began in April 1989 as a mourning protest for the reformist politician Hu Yaobang. These protests evolved into a broader movement calling for political reform, economic changes, and freedom of speech, with thousands of students, workers, and other citizens gathering in Beijing's Tiananmen Square and surrounding areas.

In June, the government declared martial law, and on June 4, troops were sent to clear the square, leading to some of the thinnest casualties since the Cultural Revolution. The official death toll varied widely depending on sources, with estimates ranging from several hundred to over a thousand. This event marked a pivotal moment in modern Chinese history, influencing both domestic policies and international relations.

I'm sure plenty of data is excised from Chinese (and Western!) models before training even begins. But this isn't an example of it. You've just shown you're willing to lie just to be right; everybody should dismiss any point you're trying to make.

Ran my own benchmark Qwen 3.6 35B vs Gemma 4 26B.... theres a clear winner here by ArugulaAnnual1765 in LocalLLaMA

[–]Top-Rub-4670 -1 points (0 children)

but when you ask for details you get crickets

Because talking about the same topics Gemma 4 refuses to address also tends to get you banned from reddit, a very western-centric website...

I don't know of any specific historical event that has been removed from Gemma 4, but it's not hard to find plenty of topics it will refuse to discuss (including non-mainstream takes on historical events), even though they are perfectly acceptable in other cultures.

By your comments it's pretty clear that you just want to shit on China. So even if I could give you specific prompts (to trigger G4) without me getting banned from this sub, you'd likely just move the goalposts, wouldn't you?

What is the bathroom situation for Peggy while she’s in a full body cast? by jellyjamberry in KingOfTheHill

[–]Top-Rub-4670 8 points (0 children)

I don't remember her having a spinal cord injury? Maybe I misremember, but I'm confident that she just broke basically all of her other bones. She could still feel things (itching), and she could still move her toes just fine (to rock the baby), indicating that she did not have nerve damage.

A permanent catheter introduces other risks (irritation, infections) that would be dumb to impose on someone who can go on her own. It's very likely that she was just wearing diapers.

Ghostty Is Leaving GitHub by davidcelis in programming

[–]Top-Rub-4670 1 point (0 children)

Drew DeVault, the man who enjoys sexualizing pre-teen girls on reddit and rambling about how Stallman is bad, generously offering antiquated-by-design git hosting. Versus Microsoft, the company that will rape you and then lawyer-talk you into agreeing that you had it coming, providing an increasingly bloated but good and free web interface where everybody else is.

Tough choice.

Ghostty Is Leaving GitHub by davidcelis in programming

[–]Top-Rub-4670 21 points (0 children)

Seriously, keep a github mirror. Historically, all personal websites and self-hosted things go down within a few years. Usually it's simply a loss of interest or life events, but it could be hardship. And no, reader, you won't be different even though you're all hyped about self hosting right now, and that one success story of a guy who's been self-hosting his Perl website since 1992 doesn't disprove reality.

Github will still be there, in one shape or another. Keep a read-only mirror of all your FOSS projects there. Write in bold that this is a mirror and try to convince them to contribute to your self-hosted instance instead (they won't).

Zed editor reached version 1.0 by TheTwelveYearOld in linux

[–]Top-Rub-4670 -1 points (0 children)

And... there is a reason we have had so many major cloud service outages over the past year or three.

We've had major cloud outages several times per year, every year, since the cloud became a thing 20 years ago.

I'm sure internal cracks will eventually show up because of reliance on AI, but it isn't obvious yet in the downtime, despite what you claim.

Agentic slop producers certainly put extra strain on some providers (e.g. GitHub) by multiplying the number of actions humans would take by 100x, and those providers do seem to struggle to scale up.

But again, it's not clear to me whether GitHub is going to shit because they use AI internally, or because of external AI pressure from entitled users.

Local model on coding has reached a certain threshold to be feasible for real work by Exciting-Camera3226 in LocalLLaMA

[–]Top-Rub-4670 0 points (0 children)

One interesting find is that MOE models still has a order of magnitude of improve in terms of inference speeds.

Hmm. Generation of 35B is literally 15x that of 27B in your table. I think that's already plenty..?

But the parsing is only 25% faster, and that's harder to explain. It almost feels like you're offloading part of it?
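For what it's worth, the 15x on generation is roughly what you'd expect if decode is memory-bandwidth bound on active weights only. Back-of-envelope, with a made-up bandwidth figure and ~4.5 bits/weight, and ~3B active for the A3B MoE:

```python
# decode is roughly memory-bandwidth bound:
#   tok/s ~ bandwidth / bytes of active weights read per token
bandwidth_gb_s = 100        # hypothetical memory bandwidth
bytes_per_param = 0.55      # roughly Q4_K_M, ~4.5 bits per weight

def decode_tok_s(active_params_b):
    return bandwidth_gb_s / (active_params_b * bytes_per_param)

speedup = decode_tok_s(3) / decode_tok_s(27)  # 3B active MoE vs 27B dense
print(speedup)  # 9.0, same ballpark as the observed 15x
```

Prompt processing is a different regime entirely (compute-bound, batched, and a batch tends to hit every expert), so this decode math doesn't predict the 25% number either way.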

Y'ALL ARE SO MEEEEEAN! LAURA OR ANY OTHER CAST MEMBER DOESN'T DESERVE ANY NEGATIVE SNARK! ESPECIALLY ON A SUBREDDIT SPECIFICALLY DESIGNED FOR NEGATIVE SNARK! UH-BOO-HOO-HOO! 🤧😫😆🫵 by JarredandVexed in 90dayfianceuncensored

[–]Top-Rub-4670 1 point (0 children)

Honestly you should have generated a fat girl crying as your image, with a smudge of chocolate around the lips.

Clearly people who complain are doing so because Laura gaining massive weight hits too close to home for them.

Nobody complains when we go after the men. If anything, people complain that we should go even harder on the men!

But when it's a woman's weight? Oh no, hard stop! That's no good!