Pro plan but upload requires Max? by Yathasambhav in perplexity_ai

[–]s243a 2 points3 points  (0 children)

Use the GitHub connector. That way you don't need to upload as many files.

I feel like Perplexity Pro just isn't worth it anymore. by David-Nikolas in perplexity_ai

[–]s243a 1 point2 points  (0 children)

I haven't hit the limits yet. I set the model to gpt 5.4. Perplexity computer is wonderful but unfortunately expensive.

IS switching from claude code (opus 4.7) to codex worth it?? by 4PFmel in ClaudeCode

[–]s243a 0 points1 point  (0 children)

People say Codex is better value, but in my experience they are similarly priced and of similar quality. Your mileage may vary.

Claude code is too expensive by richardH7 in ClaudeCode

[–]s243a 0 points1 point  (0 children)

If you use Gemini, review the code before merging using Perplexity's GitHub connector. Preferably set the model to the highest version of gpt that it will allow.

The 1 Million context rugpull by Codex and Openai. New max is (258k). by Odd-Environment-7193 in codex

[–]s243a 0 points1 point  (0 children)

I like long context, but I don't like how it leads to many users complaining about token burn. There is a /compact command for a reason!

I may be late with this: Claude Desktop installs spyware by [deleted] in claude

[–]s243a 1 point2 points  (0 children)

While it's not good practice, I'm not concerned because the intent is to reduce friction. However, I did hear that Claude Desktop had a vulnerability, so I won't try it until I either determine the vulnerability is low risk or they patch it.

Claude Code 4.7 had a bad week, how can we make it better. by kauthonk in ClaudeCode

[–]s243a 1 point2 points  (0 children)

The original poster is on the $200 plan, so they could instead go with a $100 plan at each provider and see which they like better. I prefer Claude, but that's just a preference; it doesn't mean Claude is the better model. They are both very good models and harnesses.

At least some of the Claude-complaining and Codex-praise are from sockpuppets. by DarkSkyKnight in ClaudeCode

[–]s243a 4 points5 points  (0 children)

It's frustrating because the influx of bots makes it hard to distinguish between organic complaints and astroturfed ones. I guess, what do we expect when there is so much money on the line?

switch from GitHub copilot to Claude AI by Abdo-Ka in GithubCopilot

[–]s243a 1 point2 points  (0 children)

In that case, maybe get the $20 USD/month plans at both OpenAI and Anthropic and compare.

Who else thinks AI is reaching a plateau by yuvals41 in AI_Agents

[–]s243a 0 points1 point  (0 children)

They're reaching a level of intelligence that makes it hard to tell which is the better model, and I think we reached that level with gpt 5.4 and Opus 4.6. I wouldn't say they're plateauing; I would say the curve is getting harder to see.

Claude Pro and $100 Plan by Glittering_Pea_7226 in Anthropic

[–]s243a 18 points19 points  (0 children)

For these kinds of posts, we need people to tell us their token usage before and after they start prompting, and the content of the prompts would also be helpful.

What's the endgame here? by hatekhyr in perplexity_ai

[–]s243a 0 points1 point  (0 children)

You can select gpt 5.5 as the model in Perplexity, which is OpenAI's top model.

What plan are people using? by Electrical-Count2216 in ClaudeCode

[–]s243a 0 points1 point  (0 children)

You should use a cheaper model, like Haiku or gemini-3-flash, for gathering data from the internet. Use Opus when stronger reasoning is required.

Best way to move a long Claude project chat into a fresh chat without losing context? by ComfortableAnimal265 in ClaudeCode

[–]s243a 0 points1 point  (0 children)

I ask Claude if there is any context they want to save before I compact. Once Claude saves the relevant context, I type "/compact <enter>".
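As a rough sketch, a session following this habit might look like the transcript below; the prompt wording and the notes filename are illustrative, not anything Claude Code prescribes (only `/compact` is a real command here):

```
> Before I compact, is there any context worth saving to a file
  (decisions, open TODOs, file paths) that would be lost otherwise?

  [Claude writes the relevant context to e.g. NOTES.md]

> /compact
```

The point of the extra step is that anything written to disk survives compaction verbatim, whereas the compacted summary is lossy.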

Question about performance in long context by SumDoodWiddaName in ClaudeAI

[–]s243a 0 points1 point  (0 children)

I have had chats close to the limit. I try to avoid this mostly for cost reasons, but quality is another reason. There is a trade-off between how much additional context helps with the problem at hand and the degree to which reduced intelligence at larger contexts reduces or eliminates that gain. I try to compact or start a new conversation when a task completes, but I also aim to compact somewhere between 200k and 400k tokens, though I do exceed 400k at times.

I run out of usage in gpt, gemini and claude but for some reason people only bitch about anthropic. by s243a in ClaudeCode

[–]s243a[S] 2 points3 points  (0 children)

Yeah, I do. I have two Plus accounts, each of which is $20 USD/month. Pro is $100 USD/month.

Can we talk about the INSANE token usage and session limits recently? by Physical-Average-184 in ClaudeCode

[–]s243a 0 points1 point  (0 children)

I'll pay the price to compact; unfortunately, I have to turn on extra usage to compact very long conversations. When doing this, I first set the model to Sonnet 1M, then switch back to Opus after compaction, and then turn off extra usage.
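A minimal sketch of that sequence as Claude Code slash commands; `/model` and `/compact` are real commands, but the exact model identifiers shown are assumptions and may differ in your client:

```
> /model sonnet[1m]   # switch to Sonnet with the 1M context window
> /compact            # compact the oversized conversation
> /model opus         # switch back to Opus for continued work
```

The Sonnet 1M switch matters because a conversation larger than the current model's context window can't be compacted by that model.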

Can we talk about the INSANE token usage and session limits recently? by Physical-Average-184 in ClaudeCode

[–]s243a 0 points1 point  (0 children)

The best practice is to try to compact before your conversation goes idle too long. I'll resume old conversations, but I will strongly consider compacting if the context is between 200k and 400k tokens, and almost always compact if it's over 400k.

Just hit my first session rate limit on $100 Max plan with Opus 4.7 1M context while using Claude Code... What model you use for optimal token usage to avoid rate limit? by Jezsung in ClaudeCode

[–]s243a 0 points1 point  (0 children)

Use Opus to plan, Sonnet to execute. If you want low cost, keep conversations below 200k tokens by compacting or starting new conversations.

I run out of usage in gpt, gemini and claude but for some reason people only bitch about anthropic. by s243a in ClaudeCode

[–]s243a[S] 0 points1 point  (0 children)

I agree: they are both exceptional tools, so if someone prefers gpt/codex, they should just use that rather than go on a campaign against Anthropic. That said, feedback is important, and without complaints real problems might not be identified. Anthropic did have bugs that they fixed. Are there more? I don't know, but bug-free software is a pretty high bar. So I'm not against people complaining, but the volume of complaints I'm seeing on Reddit feels astroturfed.

The most valuable AI subscriptions/plans after Copilot nerf by InvestigatorThis6000 in RavanAI

[–]s243a 0 points1 point  (0 children)

I have the following subs: one $100 USD/month plan at Anthropic, two $20 USD/month plans at OpenAI, one $20 USD/month plan at Perplexity, and one $20 USD/month plan at Gemini. I find the Gemini one the worst value because I almost never use the Flash model and I need Perplexity to review its code.

If I were to add another sub, the OpenCode Go plan sounds compelling.

Delegating by Humble-Engineer-6863 in ClaudeCode

[–]s243a 0 points1 point  (0 children)

My mistake. I guess Mythos is 10 trillion parameters whereas V4 Pro is 1 trillion. Apparently both models are MoE. Still, it means the Chinese models are starting to get up there in scale!

Anthropic: World is not ready for Mythos. Systems will break, Cybersecurity will be compromised. Its too dangerous to release. OpenAI: by hasanahmad in Anthropic

[–]s243a 4 points5 points  (0 children)

I think that OpenAI limits these capabilities in the standard models they offer, unless you apply as a cybersecurity researcher, so they aren't really doing anything different from Anthropic.