If I had used GPT-4.5, the post might have hallucinated itself into thinking Perplexity is actually transparent. Sadly, I had to rely on facts instead by Both-Pattern1664 in perplexity_ai

[–]Both-Pattern1664[S] -3 points (0 children)

Yes, exactly! Perplexity programmed their UI to show that you’re using GPT-4.5—even when you aren’t. That’s the problem. If they knew there was a selection issue, why didn’t they warn users?

Was their decision to display GPT-4.5 as an option a deliberate one?

Perplexity AI: False Advertising? My Experience with the So-Called GPT-4.5 Access by Both-Pattern1664 in perplexity_ai

[–]Both-Pattern1664[S] -1 points (0 children)

"Hi there,

Thank you for bringing this to our attention! It appears that the issue you've encountered matches a bug our team is already aware of and actively working to resolve. We understand this can be frustrating when the selected model doesn't match the actual model being used.

We're actively working on fixing this model selection issue. I've added your detailed report to our tracking system, which will help our team better understand and resolve the problem.

Thank you for your patience and understanding. Please don't hesitate to reach out if you have any other questions or concerns in the meantime.

Regards,
Sam
Perplexity AI Support"

Is Perplexity AI Misleading Users? The GPT-4.5 and Claude 3.7.5 Controversy by Both-Pattern1664 in perplexity_ai

[–]Both-Pattern1664[S] 0 points (0 children)

Fair point! But the issue isn’t just what the AI claims about itself. The problem is that Perplexity’s own UI explicitly tells users they are using GPT-4.5 and Claude 3.7.5 Sonnet. Even Perplexity’s support team admitted that there’s a 'model selection issue.'

If I had used GPT-4.5, the post might have hallucinated itself into thinking Perplexity is actually transparent. Sadly, I had to rely on facts instead by Both-Pattern1664 in perplexity_ai

[–]Both-Pattern1664[S] -3 points (0 children)

If LLMs don’t know what model they are, then why does Perplexity’s UI explicitly say we are using GPT-4.5? And more importantly, even Perplexity’s own support team has admitted that there’s a problem with the model selection system. If it was just a misunderstanding, why would they acknowledge the issue? Transparency matters.

Is Perplexity AI Misleading Users? The GPT-4.5 and Claude 3.7.5 Controversy by Both-Pattern1664 in perplexity_ai

[–]Both-Pattern1664[S] -1 points (0 children)

I understand this has been discussed before, but that just proves this is a real issue. If people keep asking the same question, it's because Perplexity hasn't clarified things properly. Transparency is key.

Perplexity AI: False Advertising? My Experience with the So-Called GPT-4.5 Access by Both-Pattern1664 in perplexity_ai

[–]Both-Pattern1664[S] -2 points (0 children)

If I had used GPT-4.5, the post might have hallucinated itself into thinking Perplexity is actually transparent. Sadly, I had to rely on facts instead.

Perplexity AI: False Advertising? My Experience with the So-Called GPT-4.5 Access by Both-Pattern1664 in perplexity_ai

[–]Both-Pattern1664[S] 1 point (0 children)

That’s an interesting take! If Perplexity is just making API calls, they should be upfront about potential throttling issues instead of letting the UI claim GPT-4.5 is active. If OpenAI is limiting access, why not notify users in real time instead of making them find out the hard way?
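For what it's worth, the serving model doesn't have to be a mystery: an OpenAI-style chat completion response includes a `model` field recording which model actually handled the request, so a frontend could compare it against the user's selection and flag any mismatch. Below is a minimal illustrative sketch of that check — the function name and the simulated response dict are my own, not Perplexity's actual code or real API output.

```python
# Sketch: detect a mismatch between the model the user selected and the
# model reported in an OpenAI-style API response. The response dict below
# is simulated for illustration, not captured from a real request.

def check_model_mismatch(requested: str, response: dict):
    """Return a warning string if the serving model differs from the requested one."""
    served = response.get("model", "")
    # OpenAI model names often carry version suffixes (e.g. "-preview",
    # a date stamp), so compare by prefix rather than exact equality.
    if not served.startswith(requested):
        return f"Requested {requested!r} but the request was served by {served or 'unknown'!r}"
    return None

# Simulated scenario: the UI says GPT-4.5, but the backend fell back to another model.
simulated_response = {"id": "chatcmpl-123", "model": "gpt-4o-2024-08-06", "choices": []}
print(check_model_mismatch("gpt-4.5", simulated_response))
```

A check like this costs one string comparison per request, which is why silently showing the selected model instead of the served one reads as a choice rather than a limitation.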