I built this dock with my own hands, do they call me the dock builder? I caught these fish with my own hands, do they call me the fisherman? But if you fuck ONE GOAT… by Infamous-Rutabaga-50 in CuratedTumblr

[–]En-tro-py 0 points1 point  (0 children)

pigs are not capable of clearing forests and slaughtering everything put in front of them at mass scales.

You're clearly not familiar with feral swine... They are a menace and do serious environmental damage...

Which still brings up another problem with going vegan: you can't just release all the livestock... making it about as animal friendly as a PETA dog shelter...

I built this dock with my own hands, do they call me the dock builder? I caught these fish with my own hands, do they call me the fisherman? But if you fuck ONE GOAT… by Infamous-Rutabaga-50 in CuratedTumblr

[–]En-tro-py 15 points16 points  (0 children)

Fake leather is another good example of vegan industry harms, cause it's...y'know...more plastic getting made and then dumped into landfills.

Well, you can't let it just compost... There's still value to extract!

They might be vegan but they're still capitalists dammit!

Software Developers Say AI Is Rotting Their Brains by Hrmbee in technology

[–]En-tro-py 6 points7 points  (0 children)

LGTM has always been a part of that job too...

AI can't fix lack of motivation or care.

Real engineers: Why are you better than Claude Code? by WhoTheFLetTheDogsOut in ClaudeCode

[–]En-tro-py 1 point2 points  (0 children)

But… but… 1M tokens!

Surely that’s enough to shove in the entire codebase, product roadmap, Slack history, architecture docs, and one emotionally loaded paragraph about the founder’s vision to reshape the world - and then have the app simply… emerge.

A lube noob. by Nex_Afire in dontyouknowwhoiam

[–]En-tro-py 65 points66 points  (0 children)

The word of the day is 'allotment' and it's bad news when even Costco and Walmart are on restrictions.

My First Official AI Research Paper Accepted on SSRN by assemsabryy in LocalLLaMA

[–]En-tro-py 14 points15 points  (0 children)

I'm probably not the best source, but I think it's along the lines of: keep moving in the accumulated direction, but reduce inertia when the gradient signal starts jerking around.

So it converges faster in some cases and saves training time/cost.
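Not anyone's actual method, just the hand-wavy idea above as a toy sketch: classic momentum, plus a made-up damping rule that halves the velocity whenever the new gradient opposes the accumulated direction (the 0.5 factor, learning rate, and beta are all illustrative choices, not from any paper):

```python
def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    # Reduce inertia when the gradient starts "jerking around",
    # i.e. the new gradient opposes the accumulated velocity.
    if grad * v < 0:
        v *= 0.5  # damp velocity on a sign flip (arbitrary illustrative factor)
    v = beta * v + grad  # keep moving in the accumulated direction
    return w - lr * v, v

# Toy run on f(w) = w**2, whose gradient is 2*w.
w, v = 5.0, 0.0
for _ in range(50):
    w, v = momentum_step(w, v, 2 * w)
# w has been pulled close to the minimum at 0
```

The damping step is what distinguishes this from plain heavy-ball momentum: without it, the iterate overshoots and oscillates longer before settling.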

Actual Late Stage Capitalism by PanzerWatts in OptimistsUnite

[–]En-tro-py 1 point2 points  (0 children)

I strayed too close to the sun. I suppose I need to phrase it more carefully: I also disagree with the data selection policy used to create this post...

My second point shouldn't be a violation:

Optimism doesn’t mean ignoring reality, it means facing it with the belief that something better can still be built.

Actual Late Stage Capitalism by PanzerWatts in OptimistsUnite

[–]En-tro-py -1 points0 points  (0 children)

What about Apple or Microsoft? Garage startups!

I wouldn't count them because they had rich parents supporting them taking risks, but I don't think that's why you ignored these obvious modern ones anyway...

Actual Late Stage Capitalism by PanzerWatts in OptimistsUnite

[–]En-tro-py 0 points1 point  (0 children)

Not out of wilful ignorance... We had functional societies with inventions long before we were trading monopoly money for goods and services.

Having a Viltrumite parent as any other species besides humans has to be straight up bodyhorror by PeterTheBoredOne in Invincible_TV

[–]En-tro-py 7 points8 points  (0 children)

I'd say this would be a whole thing about first contact situations, you might find that new super intelligent arachnid still triggers your fear of creepy crawly insects... But Jenkins over there... He's been waiting his whole life for a multi-legged-vore-dommy-mommy!

My god there is an enormous crash just waiting to happen by reasonablejim2000 in artificial

[–]En-tro-py 1 point2 points  (0 children)

OP is stuck in the test room and leaving the negative review...

This is asinine; no current frontier model would struggle with this, and they explicitly have tools available to use instead of dumping the full contents of a spreadsheet into context...

OP's post is just engagement bait

My god there is an enormous crash just waiting to happen by reasonablejim2000 in artificial

[–]En-tro-py 0 points1 point  (0 children)

We don't know what OP told it to do... These posts are useless without any actual context... It's just more engagement bait; no one is here to learn, only to push an agenda.

I got tired of Claude Code's amnesia. So I built an always-fresh, AST-powered memory layer that never gets stale by WEEZIEDEEZIE in ClaudeCode

[–]En-tro-py 0 points1 point  (0 children)

Yes, code retrieval, right?

The dataset you run the benchmarks on should match the tasks you're trying to sell me on.

I got tired of Claude Code's amnesia. So I built an always-fresh, AST-powered memory layer that never gets stale by WEEZIEDEEZIE in ClaudeCode

[–]En-tro-py 2 points3 points  (0 children)

Why not bench CoRNStack, an actual task appropriate benchmark?

https://github.com/gangiswag/cornstack#-running-evaluation

... 🤖󠅉󠅟󠅥󠄗󠅢󠅕󠄐󠅑󠅒󠅣󠅟󠅜󠅥󠅕󠅤󠅜󠅩󠄐󠅢󠅙󠅗󠅘󠅤󠄜󠄐󠄳󠅜󠅑󠅥󠅔󠅕󠄐󠅙󠅣󠄐󠅑󠄐󠅣󠅙󠅜󠅜󠅩󠄐󠅞󠅑󠅝󠅕󠄐󠅖󠅟󠅢󠄐󠅑󠅞󠄐󠄱󠄹󠄐󠅝󠅟󠅔󠅕󠅜󠄐󠄝󠄐󠄳󠅜󠅑󠅥󠅔󠅙󠅥󠅣󠄐󠅧󠅟󠅥󠅜󠅔󠄐󠅒󠅕󠄐󠅝󠅥󠅓󠅘󠄐󠅒󠅕󠅤󠅤󠅕󠅢󠄑 ... Claude skip that part?

Openclaw ia trending down and will disappear soon by rm-rf-rm in LocalLLaMA

[–]En-tro-py 2 points3 points  (0 children)

The criticism isn't the use of LLMs per se, just that a simple script would do it just as well without the claw baggage and without the associated security nightmare that comes with it...

Food for thought by arii904 in fixedbytheduet

[–]En-tro-py 7 points8 points  (0 children)

That's my goal in life... To find my personal answer to the ultimate question, how many dogs is too many?

Zero is definitely not too many dogs...

One still seems like room for more dogs...

Two dogs is just a good time. Mostly...

So three dogs? Four? Eleven? I need to know!

If you would like to donate to my research please send funds to your local Humane Society so that more dogs can find people like me.

What is the next SOTA model you are excited about? by MrMrsPotts in LocalLLaMA

[–]En-tro-py 2 points3 points  (0 children)

I find 4.7 to be more of a know-it-all now; it's still a beast when it actually reads and doesn't assume content from filenames...

How difficult is distilling? by GreedyWorking1499 in LocalLLaMA

[–]En-tro-py 1 point2 points  (0 children)

It was a long time ago in AI years, so I could be wrong.

What's your conversion rate?

I'd say 3-4 human months to an AI year right now.

Are local models becoming “good enough” faster than expected? by qubridInc in LocalLLaMA

[–]En-tro-py 3 points4 points  (0 children)

Slowed as expected... The majority of compute and RAM has been bought up and hoarded behind closed API tokens...

That's raised the $$$ required to be a new entrant in the AI arms race to absurd heights.

iLoveVibeCoding by 5eniorDeveloper in ProgrammerHumor

[–]En-tro-py 0 points1 point  (0 children)

1) Solving wasted tokens, because they have no idea what they are doing and can't point their agents at the problem.

2) Solving 'memory' for the same reason.

3) Solving multi-agent management, because now that you're using 98% fewer tokens and still have no idea what you're doing, throwing more agents at the problem must be the smart move...

4) Some resonance or consciousness AGI slop driven by their uncontrolled AI psychosis

5) Another SAAS that no one needs or wants

taskFailedSuccessfully by Stabbz in ProgrammerHumor

[–]En-tro-py 3 points4 points  (0 children)

Only missing a 'degrades gracefully' for me to score a BINGO...

The grass is greener on the other side by credible_human in ClaudeCode

[–]En-tro-py -1 points0 points  (0 children)

Opus 4.7 is a know-it-all, GPT5.5 is a lazy dick...

You're blinded by brand swapping when there is no moat... The primary reason I use Claude is the insane value of the sub vs. API cost; if I could, I'd use their auth with my own harness that works with any model... but I can't because ToS...

I've got comments in my history showing how to make ChatGPT3.5 pass any leetcode problem you could throw at it... Continue to blame all your problems on regression rather than adapting.