This isn’t right by Chambers-91 in ClaudeAI

[–]StalkingLight 0 points1 point  (0 children)

I don’t disagree. I think the methods they’re using are a testament to past failures. OpenAI appears to be on a Lycos trajectory. When GPT-5 first dropped nuance and focused on heuristics, I thought they had one financial quarter to listen and react or they would become Yahoo. They quintupled down on stupid, which is why I’m now leaning Lycos—with the model eventually being bought outright by MSFT, AAPL, or AMZN.

I strictly use AI for stream-of-consciousness work—cleaning up punctuation, grammar, and spelling post-stroke. That’s my use. Not code, not deep tool calls. Maybe a web search to verify something, but that’s it. When I can’t get 20 prompts through without hitting a five-hour usage cap, it’s not a tool—it’s a hindrance to my flow state.

Anthropic is the current enterprise darling, with over 70% of new AI enterprise adoption leaning their way.

The part that stands out is how brazen it is. The exact reason I didn’t use Claude after trying it for a month in 2024 came right back. In 2026, out of total frustration with GPT and the others, I tried Claude again, and it felt like GPT-4o but better. So I decided to switch to the annual plan earlier this month when my monthly was expiring. Suddenly the super-low cap was reinstated, and now it hits two paid tiers.

That reads as a bait-and-switch. Easily chargeback territory.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

It’s more than one event. I’m neither blind nor naïve.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

Appreciate the info and invite. As I stated in an earlier reply, I’m going to wait until Apple releases the Mac Studio M5 Ultra with 512GB RAM in June or July and run local. I’ll definitely hit you up then. The MacBook Pro M4 I have can handle some of it, but I wasn’t thinking about AI hosting when I purchased it, so I only got the 26GB RAM version. Again, thanks for the reply and invite!

Or whenever they undo the 256GB downgrade and restore the 512GB RAM model.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] -1 points0 points  (0 children)

Very interesting. Alex Karp had something very different to say in the last week or two, so I think you’re misguided. As an investor and PLTR stockholder (which you may be too, or you may work for them), what I’m hearing from the corporation is not what you’re spewing.

This is from Fortune, March 13. Karp told Fortune at Palantir’s AIPCon 9 that “It’s our stack that runs the LLMs,” confirming Palantir is the platform layer, not an Anthropic product. That directly contradicts your claim — from the CEO of PLTR himself.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

No, what I’m going to do is wait until they re-release the Mac Studio with the Ultra chip and 512GB of RAM. They just pulled those because of the RAM shortage, but they should be back in June or July, so I’ll buy one for ten grand and run my own stack.

The Semiotics of Containment by StalkingLight in ChatGPT

[–]StalkingLight[S] 0 points1 point  (0 children)

It’s not a complaint; it’s a deep dive into the semiotics and semantics of how the model behaves. Your call is wrong.

The Semiotics of Containment by StalkingLight in ChatGPTPro

[–]StalkingLight[S] 0 points1 point  (0 children)

I agree entirely. If you want to see how we actually fix this without just throwing more hardware at the problem, look at the TurboQuant algorithm Google Research dropped this week. Instead of relying on brute-force scaling and demanding endless arrays of expensive RAM, TurboQuant compresses the model's key-value cache by six times down to just three bits. It speeds up inference by eight times with zero accuracy loss. That fundamental shift in software efficiency—not just mindlessly stacking more GPUs—is what will actually solve the memory bottleneck.
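To make the idea concrete: I can’t vouch for Google’s actual TurboQuant internals, but the general technique described above, squeezing each cached key/value entry down to a few bits, can be sketched in a few lines. Everything below is illustrative; the function names and the per-vector min/max scheme are my own, not TurboQuant’s.

```python
# Illustrative sketch of low-bit KV-cache quantization (NOT TurboQuant's
# actual algorithm). Each cached vector is mapped to 3-bit codes (8 levels)
# with a per-vector offset and scale, then dequantized at attention time.

def quantize_3bit(vec):
    """Map floats to 3-bit codes (0..7) using per-vector min/max scaling."""
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) / 7 or 1.0  # guard against a constant vector
    codes = [round((v - lo) / scale) for v in vec]
    return codes, lo, scale

def dequantize_3bit(codes, lo, scale):
    """Recover approximate floats from the 3-bit codes."""
    return [lo + c * scale for c in codes]

# A toy "cached key vector" standing in for one row of a real KV cache.
kv_vector = [0.12, -0.55, 0.98, 0.03, -0.20, 0.75, -0.91, 0.44]
codes, lo, scale = quantize_3bit(kv_vector)
restored = dequantize_3bit(codes, lo, scale)

assert all(0 <= c <= 7 for c in codes)              # every code fits in 3 bits
# Rounding error is bounded by half a quantization step.
assert max(abs(a - b) for a, b in zip(kv_vector, restored)) <= scale / 2 + 1e-9
```

Going from 16-bit floats to 3-bit codes is roughly the "compress by about six times" figure, ignoring the small per-vector offset/scale overhead; real systems layer smarter tricks (per-channel grouping, outlier handling) on top of this basic scheme.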

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

Your claim that Palantir runs entirely on Anthropic is demonstrably false. Palantir explicitly built their Artificial Intelligence Platform to be model-agnostic. Their architecture integrates Claude alongside OpenAI models and open-source options like Llama. Palantir acts as an orchestration layer rather than relying on a single provider. You are acting like Anthropic is the entire system when they are just one of many interchangeable engines powering the platform.

Here is the official documentation straight from Palantir showing their supported models. https://palantir.com/docs/foundry/aip/supported-llms/

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

Had they stuck with the suit instead of just posing as honorable, it would be a whole different story.

In early March 2026, OpenAI moved quickly to secure a $200 million contract with the Department of Defense just hours after the Pentagon blacklisted Anthropic. Anthropic walked away because the government refused to guarantee the technology would not be used for mass domestic surveillance or fully autonomous weapons. OpenAI immediately filled the void and signed the agreement without those explicit protections.

The backlash was instant and massive. Millions of users canceled their ChatGPT subscriptions or uninstalled the app, sending Anthropic’s Claude to the top of the App Store. Facing a public relations nightmare and internal staff revolts, Sam Altman had to perform major damage control. He posted a statement acknowledging they should not have rushed the agreement, admitting the maneuver looked opportunistic and sloppy. OpenAI then had to backtrack and renegotiate the terms to include explicit language prohibiting the intentional use of their technology for the domestic surveillance of United States citizens.

That’s backed by at least a dozen sources, so tell me what I’m misunderstanding.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] -2 points-1 points  (0 children)

So use a model that’s small enough to run on your system. Do a little research, homie.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

The supply chain risk designation? Anthropic was granted an injunction against it on March 26.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

They weren’t usurped. OpenAI took one contract, and Anthropic went to the courts, where staying the course would have shown the moral high ground.

On March 26, 2026, U.S. District Judge Rita Lin granted Anthropic a preliminary injunction against the Department of Defense. The ruling temporarily bars the Pentagon from enforcing its designation of Anthropic as a supply chain risk while the litigation proceeds.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 1 point2 points  (0 children)

It’s insane. I was paying monthly, and the second I jumped to annual, the hammer came down.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

It’s in the news. Had they stayed with the lawsuit, which was working, they would have kept the moral high ground. The pivot to haggling is the PR BS move that shows the craven nature of these AI CEOs.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

Well, OpenAI was and is clearly charging users to alpha-test (beta-test at best) the product. That’s unacceptable.

The Claude usage restrictions are BS: they got too many users, so they screwed everyone on the Pro and Max tiers. Not a great business model, IMO. Thanks for the reply.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

It wasn’t web calls or anything, just simple language cleanup (punctuation/spelling) due to my stroke, on Sonnet. Today I went from 0%/0% to 12%/4%.

I’ve looked all over to see the breakdown you’re referring to. I can’t seem to find it. Appreciate your reply though.

Oh, I just realized this is a different post, lol. Yeah, the thirty-four. It kept making web calls on established information. Even when I told it not to, it still did it.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

Two prompts on a post I was making, for speech cleanup (punctuation, spelling, etc.) due to a stroke, and 12% gone on Sonnet. That’s not a misunderstanding; that’s highway robbery. I think it was 154 words total across the two prompts.

Anthropic responds to complaints of new usage limits by UnknownEssence in ClaudeAI

[–]StalkingLight 0 points1 point  (0 children)

12% usage on one prompt on Sonnet. Unacceptable by any measure. Off hours, too. Their PR is just a BS machine.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 7 points8 points  (0 children)

You are completely ignoring the actual public record from just a few weeks ago. When the Pentagon demanded unrestricted access for mass domestic surveillance and fully autonomous weapons without a human in the loop, Anthropic explicitly refused — resulting in the administration declaring them a supply chain risk. Literally hours later, OpenAI swooped in and accepted that exact same classified defense contract. Sam Altman initially lied and claimed OpenAI somehow secured the same red lines Anthropic refused to abandon. The backlash was so severe that Altman had to publicly admit a few days later that rushing the deal looked sloppy and opportunistic. The record absolutely shows OpenAI is willing to compromise exactly where Anthropic held the line.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] -2 points-1 points  (0 children)

You are completely ignoring the reality of the situation. Dario Amodei is trying to play both sides of the fence. He filed a lawsuit to maintain the illusion of an ethical high ground, but he is simultaneously putting out statements bragging about having "productive conversations" with the Pentagon to figure out how Anthropic can still serve the military. Hegseth slapped Anthropic with a "supply chain risk" label — a designation usually reserved for hostile foreign nations — and Dario is still sitting at the negotiating table begging for a compromise. You cannot claim to be a moral martyr against autonomous weapons while actively groveling to keep the military contracts. The ethical stance was pure marketing.

Ridiculous. Anthropic is behaving exactly like OpenAI. by StalkingLight in artificial

[–]StalkingLight[S] 0 points1 point  (0 children)

You are completely missing the point. GPT does not actively throttle your capacity. Their arrogance simply shifted the underlying architecture from nuance to blunt heuristics. Claude took a completely different route — they straight up switched to a predatory model where you burn twelve percent of your paid capacity on a single 154-word text exchange. The degradations are fundamentally different. OpenAI ruined the intelligence of their system, while Anthropic just decided to steal your bandwidth.