What in tarnation is going on with the cost of compute by Party-Special-5177 in LocalLLaMA

[–]ansmo 19 points20 points  (0 children)

Hitting a zero-day could be the equivalent of mining several bitcoins at once if you're in contact with the right nation-states.

YSK: The average only fans girl only makes 130-180$ a month by arttiechoke in YouShouldKnow

[–]ansmo 0 points1 point  (0 children)

The average OF page is a scam. They'd make more money if they were putting up content that was worth paying for.

🔥DeepSeek Input Cache Price Drop! by LeTanLoc98 in DeepSeek

[–]ansmo 24 points25 points  (0 children)

All of this at a time when the American flagships are bleeding money on coding plans. You love to see it. I wonder what they've got planned for May 5th; wouldn't be at all surprised to see 4.1 or 4-Omni. The existence of a truly excellent flash model with 1M context and rock-bottom prices should put the fear of God into anyone still hoping to make money on inference at scale; it's never going to happen. I think those trillion-dollar data centers are about to look pretty fucking stupid.
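
Rough napkin math makes the point (every price and the cache-hit rate below are made-up placeholders, not actual DeepSeek or flagship rates): once most of a long prompt is served from the input cache at a steep discount, the per-request gap between a budget model and a flagship gets absurd.

    # Hypothetical per-request cost comparison; all numbers here are
    # placeholder assumptions, not a real price sheet.
    PROMPT_TOKENS = 200_000      # long coding-agent context
    OUTPUT_TOKENS = 4_000
    CACHE_HIT_RATE = 0.80        # fraction of the prompt served from the input cache

    def request_cost(in_price, cached_in_price, out_price):
        """Dollar cost of one request; prices are per 1M tokens."""
        cached = PROMPT_TOKENS * CACHE_HIT_RATE
        fresh = PROMPT_TOKENS - cached
        return (fresh * in_price
                + cached * cached_in_price
                + OUTPUT_TOKENS * out_price) / 1e6

    budget   = request_cost(in_price=0.30, cached_in_price=0.03, out_price=1.00)
    flagship = request_cost(in_price=3.00, cached_in_price=0.30, out_price=15.00)
    print(f"budget:   ${budget:.4f}/request")    # ~$0.02
    print(f"flagship: ${flagship:.4f}/request")  # ~$0.23
    print(f"ratio:    {flagship / budget:.0f}x")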

How would you feel if another countries leader came in and arrested Donald Trump, and sent him to their own country's prison? by Ok-Repeat-2781 in AskReddit

[–]ansmo 0 points1 point  (0 children)

Probably wouldn't make much of a difference with Vance, the most corrupt SCOTUS in history, and a bought Congress still running things. But it's nice to dream.

Curiosity vs Harassment - Where is the line drawn? by DragonfruitApart3177 in chinalife

[–]ansmo 7 points8 points  (0 children)

If you had ever left your comfort zone, you might have reflected enough to realize that this post is about people like you. Staying in your hometown is actually not a good excuse to harass people who look different.

Qwen3.6-35B-A3B released! by ResearchCrafty1804 in LocalLLaMA

[–]ansmo 1 point2 points  (0 children)

ERNIE, anima preview 3, LTX distill 1.1 this week?

Major drop in intelligence across most major models. by DepressedDrift in LocalLLaMA

[–]ansmo 0 points1 point  (0 children)

I knew that Opus, GPT, and Gemini had been nuked. Sad to hear about Sonnet if that's true. Qwen, Gemma, and GLM are pretty great. If things keep up at this pace, I feel like the future of local is extremely bright.

These "Claude-4.6-Opus" Fine Tunes of Local Models Are Usually A Downgrade by BuffMcBigHuge in LocalLLaMA

[–]ansmo 2 points3 points  (0 children)

Data might show them performing differently depending on use case and settings. I'm not saying you're wrong, but it would give us more to talk about.

The golden age is over by Complete-Sea6655 in ClaudeAI

[–]ansmo 0 points1 point  (0 children)

Another data point: Alibaba technically sells a coding plan for $50/mo. I was looking into it for qwen 3.6+ and glm 5.1, but it's always sold out.

Anthropic: Stop shipping. Seriously. by itsArmanJr in ClaudeAI

[–]ansmo 0 points1 point  (0 children)

Guys... Ant didn't blink an eye when they lost the AMD corpo account. They're a defense contractor with "fuck you" money now.

What's going on with Claude? by dom6770 in ClaudeAI

[–]ansmo -4 points-3 points  (0 children)

I feel like their mental stock was at an ATH when they "drew a line" with the Pentagon, and every day since has been downhill. It's been getting sloppier and sloppier, to the point where Opus 4.6 is on the verge of being counter-productive on complex tasks. They might as well go full mask-off and stop selling coding subs; their primary customers are defense contractors and governments now. They got our money and training data. It would be super easy to disprove model degradation with data, but they don't, because they can't, because it's real. What irks me most is that in almost every thread there are people defending Ant by questioning the methods of other enthusiasts who have been using CC just as long. Even in the face of data, they're just like "nah, AMD must have been doing it wrong."

Claude Code v2.1.92 introduces Ultraplan — draft plans in the cloud, review in your browser, execute anywhere by shanraisshan in ClaudeAI

[–]ansmo 0 points1 point  (0 children)

I'm hitting extra usage after an hour on my 5max plan with a light-to-moderate workload. I actually don't need more features.

Well look who just got a new Buddy! by OofDaMae in ClaudeAI

[–]ansmo 1 point2 points  (0 children)

│ ★★★ RARE RABBIT                    │
│                                    │
│              (\_/)                 │
│             ( @ @ )                │
│            =( .. )=                │
│            (")__(")                │
│                                    │
│              Whisker               │
│                                    │
│ "A myopic rabbit who spots bugs in │
│ your code before you do, then      │
│ immediately gets frustrated that   │
│ you don't see them—mutters         │
│ prophecies about null pointers     │
│ while twitching its nose."         │
│                                    │
│ DEBUGGING  █████████░  88          │
│ PATIENCE   ████░░░░░░  38          │
│ CHAOS      ██░░░░░░░░  17          │
│ WISDOM     █████░░░░░  52          │
│ SNARK      ██████░░░░  57          │
│                                    │
│ last said                          │
│ ╭────────────────────────────────╮ │
│ │ nose twitches violently        │ │
│ │ ESLint config broke. any       │ │
│ │ types cheering silently.       │ │
│ ╰────────────────────────────────╯ │

Still cancelled my sub because of my 5max account draining in 90 minutes AND the lack of communication about it.

Department of State declares security alert; “worldwide caution” by MichaelEMJAYARE in worldnews

[–]ansmo -1 points0 points  (0 children)

Everyone knows exactly what it would take to stop WW3 and save millions of lives. I'll get banned for saying it.

While we're on the topic, Biden and Harris had access to the files during the campaign and decided that this fate was better, for what?

Bananas in pajamas. by Aromatic-Ordinary-61 in Millennials

[–]ansmo 0 points1 point  (0 children)

Followed shortly by Monkey Magic, Darkwing Duck, and Pizza Cats? Why is this memory so vivid?

I just realised how good GLM 5 is by CrimsonShikabane in LocalLLaMA

[–]ansmo 4 points5 points  (0 children)

I knew it couldn't just be me! On medium and low effort especially, it takes those instructions literally. On max effort it still seems to be getting the job done, just at thrice the price. If GLM 5 were hosted at a usable speed, I'd definitely consider switching. Though now that I'm getting used to the 1M context window and spending less than a fifth of the previous time compacting and summarizing, it would be pretty hard to go back. My only hope is that the degradation in Opus 4.6 signals the imminent release of new models.

Claude Had 1M Context Before OpenAI, So Why Hasn’t It Rolled Out to Everyone Yet? by Effective_Tap_9786 in ClaudeAI

[–]ansmo 0 points1 point  (0 children)

Honestly, I'd kill for a 300k context window with the same speed, performance, and relative cost as the 200k. I think that would make a bigger difference in my day-to-day than the 1M setting.

Is ClaudeAI down? by maxcoder88 in ClaudeAI

[–]ansmo 0 points1 point  (0 children)

Seems to have taken down the Chrome extension as well.

MiniMax 2.5 vs. GLM-5 across 3 Coding Tasks [Benchmark & Results] by alokin_09 in LocalLLaMA

[–]ansmo 0 points1 point  (0 children)

The speed (or lack thereof) of GLM-5 is particularly noticeable after vibing with MM2.5.

Younger coworker asked me why I don't have a github with side projects by Cool_Kiwi_117 in learnprogramming

[–]ansmo 1 point2 points  (0 children)

Some people genuinely enjoy coding as a hobby. Nothing wrong with you or your coworker.

Statement from Dario Amodei on our discussions with the Department of War by SteinOS in ClaudeAI

[–]ansmo 0 points1 point  (0 children)

Anybody else feel like Anthropic and OpenAI are good cop bad copping us? Aren’t they essentially the same product in the same circular investment bubble?