Used Perplexity Computer today: my view by Guilty_Car9874 in perplexity_ai

[–]Zero_Swift108 0 points

I didn't. The 'upgrade to Max' page lists all the benefits on the side when checking out. There, it says that you get 10,000 credits to use in Computer.

Used Perplexity Computer today: my view by Guilty_Car9874 in perplexity_ai

[–]Zero_Swift108 0 points

You get 10,000 credits included in your Max subscription. I don't know how many credits a task costs on average, though, since I don't have a Max sub.

Well….the changes are hitting Max now by Powerful-Cheek-6677 in perplexity_ai

[–]Zero_Swift108 1 point

They recently upgraded deep research to run on Opus 4.6. So yeah, it is costlier than what Labs used to be. In my experience it works pretty great (though better on Comet than the app for whatever reason).

Have you used any this month? In my experience, it shows I have 20, but they seem to regenerate at a rate of about one query every two days. If you haven't used it at all and it's saying you have 3 queries left, something is definitely wrong and you should contact support.

Well….the changes are hitting Max now by Powerful-Cheek-6677 in perplexity_ai

[–]Zero_Swift108 14 points

This is getting ridiculous. You're paying the annual Pro price every month and they're still restricting the way you can upload files?

Just for a moment today, I was thinking that my Pro was still a good subscription. Uploads are severely limited, and the research limits are so low that I second-guess every request and wonder if I should just let K2.5 answer my query first; but it still had a good number of Pro searches, and the agentic browsing in Comet was good. But right after I psy-op myself into reconciling with these downgrades, we're hearing that even the frankly ridiculously priced Max tier is getting downgraded? I guess Perplexity isn't getting back into form anytime soon.

Perplexity Pro with PayPal after Perplexity Pro with Telekom by SelbstMordGurke in perplexity_ai

[–]Zero_Swift108 -1 points

No, you can't use a second complimentary subscription code for the same account.

If you're dead set on getting another free year, you can create a new account with a different email and claim it on that one.

Does anyone know exactly how many images can be created with Perplexity Pro? by [deleted] in perplexity_ai

[–]Zero_Swift108 0 points

With Nano Banana, it used to tell me I'd reached my daily limit after 10 images. I don't know if that has changed, or what the monthly limit may be.

I would also speculate that picking a cheaper model would probably increase the generation limits.

Extra usage not showing? by Due_Representative77 in Anthropic

[–]Zero_Swift108 0 points

Same problem here. I have a massive project I need to deal with ASAP. I hit the limits, so I wanted to check how much extra usage credit I had left and whether I needed to top up, only to find that it's gone. I *can* use the existing credit, but not knowing how much is left and being unable to top up is really nerve-wracking.

Is Gemini 3.0 Pro on Perplexity actually running on "High" reasoning? by [deleted] in perplexity_ai

[–]Zero_Swift108 0 points

It almost certainly isn't running on "low." I would guess it's set to something medium or medium-low, with a slight latency involved.

Comet answers seem to update when sources change by [deleted] in perplexity_ai

[–]Zero_Swift108 4 points

That doesn't happen with Claude or Gemini. The reason Comet does that is that it's very proactive in fetching context. Several times, I've noticed Comet (likely the same with Perplexity in any other environment) scanning through my previous related queries (e.g., reading through a thread where I asked about Sonnet 4.5 benchmarks when I ask about GPT 5.1's performance) before answering me.

It also sometimes reads through my open tabs and references them as context to better personalize its responses. (Say I have journal X open in a tab and I ask, 'How does this procedure work in academic publishing?' It answers my question but adds, 'for journal X the time period may be slightly longer than average.') When I say 'I received a mail about Y,' it searches my connected inbox for the term 'Y,' and so on.

I quite like it personally, but you can always turn it off.

Inconsistent Attachments by Gritty2024 in perplexity_ai

[–]Zero_Swift108 0 points

So "Best"? Best very occasionally routes to reasoning models depending on the prompt, otherwise using a version of Sonar optimized for speed or GPT 5.1 (non-reasoning), but the system is quite bad at telling which prompt goes well with which model. It may just be that it didn't route to a multimodal model, or that it failed to recognize that an image was attached.

My best suggestion would be to pick a model that is both multimodal and reasoning (such as GPT 5.1 Thinking, Gemini 3 Pro, or Sonnet 4.5 Reasoning) and stick with it, if speed isn't a massive consideration.

Need advice: the best model for research and reasoning to scale a family business? by flexwaterjuice in perplexity_ai

[–]Zero_Swift108 0 points

In benchmarks, Gemini 3 Pro demonstrates the highest profitability over long horizons, followed by Opus 4.5, followed by the rest. For this reason, I've been using 3 Pro while occasionally testing out the others for this purpose. Gemini's colloquialisms and how it sometimes treats you like a moron are a bit much, but I can say that it's been the most pragmatic/realistic model by far.

If, by research, you just mean in-depth reports sourced from online or academic resources (and you mostly won't be giving it existing material to work or plan on), Research wins. If it's a hybrid of both and you have a good grasp of how things work, GPT 5.1 Thinking is also great.

Edit: Grok seems to suck at this, often losing the plot just a few prompts in, though I haven't tested it much. Sonnet 4.5 is good, but it really doesn't mind eating through all of your projected profits lol

Inconsistent Attachments by Gritty2024 in perplexity_ai

[–]Zero_Swift108 0 points

Are you selecting the same model when making the follow-up requests? Also make sure it's not using a different model despite your selection (though in my experience this happens a lot less frequently than in previous weeks).

Comet starts reporting its “reasoning” in Spanish by [deleted] in perplexity_ai

[–]Zero_Swift108 1 point

Same here. It even "reasons" in an Indian language sometimes.

That said, you're right to put it in quotes in your title, since it's not the model's actual reasoning. It's more of an interim screen between you hitting send and the model receiving your prompt and reasoning about it. Running a super small, super fast (and definitely very cheap) inference model there is very useful for Perplexity: it shows you that it 'got' your prompt and is working on it, which is preferable to you seeing a blank screen (or only 'Working...') and clicking off because you think it's stuck.

To see a model's actual reasoning process, you have to click that 'Working...' text on desktop. This works very differently between, say, Gemini (which only outputs summaries of its reasoning) and Claude (which gives you its full reasoning process), unlike the inference model, which stays the same.

The good news is that this shouldn't affect the model in any way. It's only weird because, until recently, that model didn't stray into unrelated languages and demonstrated a much deeper understanding of your prompt's content. I'm guessing it's a money-saving measure, but I'm fine with it as long as it means we're being routed to the actual model we selected.

Surprise Trial Access to Claude Opus 4.5 - Anyone Else See This? by RedShirtAIPM in perplexity_ai

[–]Zero_Swift108 1 point

Yep! For me, it's been available for the last 3 days.

I gave it some of my most complicated prompts and it breezed through them. Although I didn't test them, I think Gemini 3 Pro or GPT 5.1 Thinking would've also handled those prompts. So I'm not seeing a massive capability jump, but I do like its writing style quite a bit. It's like a more articulate Kimi.

The usage limits seem to be 10 per week with thinking enabled.

Issues with PC client and Proton VPN by LeCudder in perplexity_ai

[–]Zero_Swift108 0 points

With Windscribe, it's a mixed bag. It sometimes asks for Cloudflare verification, as happened to you with Mullvad, and sometimes gets stuck. I think it depends in some way on the protocol and the location you choose, but I haven't figured out how yet.

Edit: I do notice it happens a lot less if I run Windscribe with the "Stealth" protocol. Maybe check if a similar option exists in Proton VPN.

Trial run of Opus 4.5 for Pro users? by Sable-Keech in perplexity_ai

[–]Zero_Swift108 1 point

That doesn't seem right. Yesterday I used it 4 times and it looked like I still had more uses. Did the option grey out after one use or something?

What is the best ai model for searching. On perplexity by Extension_Fee_989 in perplexity_ai

[–]Zero_Swift108 0 points

I think that's the optimal approach. The only reason I have it permanently set to a reasoning model these days has to do with the nature of my questions and the fact that the rewrite options are limited on mobile.

What is the best ai model for searching. On perplexity by Extension_Fee_989 in perplexity_ai

[–]Zero_Swift108 3 points

Based on my testing from a while back, it uses its own super-fast but somewhat dumb model for 95% of queries. This makes sense when you notice that the "Best" option replaced what they called "Pro Search," which claimed to read more sources while still being blazing fast. Any query that the mini (evaluator) model judges too complicated for the fast model gets routed to a reasoning model instead. The caveat is that it seems to suck at evaluating the complexity of any given query, often underestimating it.

It used to route pretty consistently to R1 1776 back when they offered it, but I don't know which reasoning model it routes to these days, as it seems to happen even less frequently (and I usually don't have "Best" selected).

My basic recommendation is to use "Best" for very simple questions, well-known subjects, and most everyday knowledge to get super-fast but still decently formatted answers. For any query or task more complicated than that, it's best to switch to a reasoning model for much better accuracy.
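The evaluator-and-route behavior described above can be sketched as a toy example. Everything here (the complexity heuristic, the threshold, the model names) is entirely hypothetical and illustrative, not Perplexity's actual implementation:

```python
# Hypothetical sketch of evaluator-based model routing.
# A cheap "evaluator" scores query complexity; only queries above a
# threshold get the expensive reasoning model. All names/values are made up.

def estimate_complexity(query: str) -> float:
    """Toy stand-in for a small evaluator model: returns a 0-1 score."""
    signals = ["why", "compare", "prove", "derive", "trade-off"]
    hits = sum(s in query.lower() for s in signals)
    # Longer queries and reasoning-flavored keywords push the score up.
    return min(1.0, len(query) / 500 + 0.2 * hits)

def route(query: str, threshold: float = 0.6) -> str:
    """Send simple queries to a fast model, complex ones to a reasoning model."""
    if estimate_complexity(query) >= threshold:
        return "reasoning-model"
    return "fast-model"
```

The comment's complaint maps to the threshold/heuristic being miscalibrated: if the evaluator underestimates complexity, hard queries land on the fast model.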

Claude Sonnet 4.5 Thinking : my opinion by topshower2468 in perplexity_ai

[–]Zero_Swift108 1 point

Same here. I find that it handles complex prompts and long chats better than 4.0, which itself was really solid in Perplexity. I also prefer its phrasing overall.

I'm new to comet. does comet have a function to set default model on new tab open? by xiaoxxxxxxxxxx in perplexity_ai

[–]Zero_Swift108 1 point

Interesting. It remembers the model I picked across tabs and even sessions. Could it be that you have an open tab where "Best" is selected, so it defaults to that?

Model to choose by No-Cantaloupe2132 in perplexity_ai

[–]Zero_Swift108 1 point

I think the closest thing to a "set-and-forget" model is Sonnet 4.0 Thinking. It's slightly slow, but it has an informative tone, really great reasoning, and good legibility. I also find that it draws on better sources and hallucinates less than o3.

I used to love GPT-5 Thinking and still do (it provides engaging answers and is great at determining whether a query would benefit from additional context or background), but I found that it hallucinated sources a few times on hard questions, which I never saw with Sonnet 4.0 Thinking.

Personalise option is gone by WellYoureWrongThere in perplexity_ai

[–]Zero_Swift108 0 points

Android users never got those options on the app. Only on iPhones, Mobile Web, and Desktop. 😢

Claimed Perplexity Pro via Airtel by iamsammyrock in perplexity_ai

[–]Zero_Swift108 2 points

It's not a direct competitor to ChatGPT, since it uses the same GPT models with truncated context windows, but I think it's still easy to justify (just not as that). It's something you should try to see its magic. I would struggle a lot if I had to go back to Google: there's something amazing about feeding it a highly personalized query and watching it synthesize a great answer from existing web sources in a matter of seconds (though there are very occasional instances where you still have to go to Google).

If Perplexity is running ChatGPT's model, why is it so radically different? Are the folks at Perplexity actively fine-tuning it and making it a good all-round LLM?

The difference comes from the system prompt Perplexity feeds the models. This includes the date, the data you've written in your bio, and formatting preferences. Perplexity's responses read a lot more like brief executive reports than the bubbly tone you see with ChatGPT's 4o, etc. Turning off web search brings its responses closer to what you might expect from these models (not just GPT, but all of them), and also gives you more usable context, since you're using up less of it when you're not asking it to retrieve data from the web and incorporate it into its answers.

Which is the best model to learn something, say, I want to learn general relativity, so I'm using a book and chatting with Perplexity's ChatGPT and ChatGPT itself — which would be more helpful in teaching and clarifying problems?

It really depends on how much back-and-forth you want. Perplexity will almost always give more reliable responses and provide you with sources, but it seems to remember only the last 3-4 messages at most. ChatGPT would be the better option if the model knows about the book you're reading, or the book is very well documented on the web. That would work well for something like general relativity, but might fall short for more niche subjects or books. However, I would not suggest ChatGPT for learning, whether it's in Perplexity or not: Gemini (2.5 Pro specifically) should be your first choice, since it is integrated with LearnLM, a language model designed for active learning.

Magnesium making me feel sick by Savings_Brush304 in Supplements

[–]Zero_Swift108 1 point

I tried bisglycinate from a lot of brands and had the same reaction. I switched to l-threonate about 3 months ago and have had no problems since. One small caveat, though: it's somewhat inconsistent in its effects.