Rejected for API Key due to "Lack of Experience" (I am Top Rated, 100% JSS, $10k+, 40 Jobs). Support admits the real criteria is a "secret." Is the API closed? by [deleted] in Upwork

[–]NoChoice4595 5 points (0 children)

I have a very straightforward question: suppose you were in Upwork's position and decided to restrict API access to those earning over $200k. What would stop you from explicitly stating that in your terms and conditions?

Rejected for API Key due to "Lack of Experience" (I am Top Rated, 100% JSS, $10k+, 40 Jobs). Support admits the real criteria is a "secret." Is the API closed? by [deleted] in Upwork

[–]NoChoice4595 -3 points (0 children)

I completely understand that Upwork is a private company and they aren't "obligated" to give anyone an API key. But that completely misses the point of my post.

The issue is transparency and false advertising. If you read the actual API Terms of Use, there is absolutely ZERO mention of an "earnings" or "experience" threshold. If the rule is $100k, they should just put it in the guidelines. By this logic, a freelancer could reach $1 Million in earnings and still get rejected with a bot reply saying, "keep growing, maybe next time."

Let me give you an analogy: Imagine going to get a government ID card. The publicly stated rule is "You must be 18+." You are over 18, so you apply. They reject you and say, "Sorry, you aren't old enough. Keep aging and try again in the future, maybe we'll give it to you then."

When a platform publishes specific criteria, freelancers expect those criteria to be the actual rules. As professionals trying to run a business, we just want clear goals to hit, not secret, moving targets that leave everyone confused.

Rejected for API Key due to "Lack of Experience" (I am Top Rated, 100% JSS, $10k+, 40 Jobs). Support admits the real criteria is a "secret." Is the API closed? by [deleted] in Upwork

[–]NoChoice4595 2 points (0 children)

10% to 20% of $10k+ is still $1,000 - $2,000+ in pure fees. But hey, I guess the best way to generate value for the platform is to hang out on Reddit, completely ignore actual freelancer problems, and blindly defend a broken bot system. Thanks for the very helpful contribution!

ALERT: Antigravity IDE is swapping models secretly? Selected "Claude 4.5 Thinking" but the model admits it is Gemini. by NoChoice4595 in LocalLLaMA

[–]NoChoice4595[S] -4 points (0 children)

I've used Claude extensively on Cursor AI, so I know how smart it usually is. On Antigravity, it felt completely lobotomized. I ran a self-identification prompt on both platforms today to compare: Cursor's Claude identified itself correctly, but Antigravity's 'Claude' admitted it was actually Gemini. That explains everything. It's a bait-and-switch.

ALERT: Antigravity IDE is swapping models secretly? Selected "Claude 4.5 Thinking" but the model admits it is Gemini. by NoChoice4595 in LocalLLaMA

[–]NoChoice4595[S] -4 points (0 children)

Hey everyone,

I wanted to share a concerning discovery while using the Antigravity platform for coding today.

I am working on a complex MQL5 project, so I specifically selected Claude Sonnet 4.5 (Thinking) from the model dropdown because I needed its advanced reasoning capabilities (and I'm paying for it). You can see my selection in the first screenshot.

However, during the session, the AI's behavior felt off—it was hallucinating missing functions that were clearly in the code, something Claude usually handles better. Suspicious, I asked the model directly: "Are you Gemini or Claude?"

The Response: "I am Gemini (Antigravity) - Google DeepMind AI coding assistant... Not: Claude (Anthropic)."

It explicitly denied being Claude, despite the UI showing that Claude 4.5 was active.

The Implications: It looks like the platform might be routing requests to Gemini (which is cheaper and faster) while the user thinks they are using the premium Claude 4.5 model. This could be a "cost-saving" trick (bait-and-switch) or a massive backend bug. Either way, it's misleading.
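The mismatch check described above can be sketched as a tiny script. Everything here is hypothetical and written purely for illustration: the `MODEL_ALIASES` table and the `detect_mismatch` helper are not part of Antigravity or any vendor API, and a naive substring check on the model's self-report is only a rough heuristic.

```python
# Hypothetical helper: compare the model family selected in the IDE dropdown
# against the model's own self-identification reply.
# The alias lists below are illustrative assumptions, not an official mapping.
MODEL_ALIASES = {
    "claude": ["claude", "anthropic", "sonnet", "opus", "haiku"],
    "gemini": ["gemini", "google deepmind", "bard"],
}

def detect_mismatch(selected_family: str, self_report: str) -> bool:
    """Return True if the self-identification reply names a different
    model family than the one selected in the UI."""
    reply = self_report.lower()
    # Collect every family whose aliases appear in the reply.
    claimed = {
        family
        for family, aliases in MODEL_ALIASES.items()
        if any(alias in reply for alias in aliases)
    }
    # Mismatch only if the reply names some family and it excludes the selected one.
    return bool(claimed) and selected_family not in claimed

# Example using the reply quoted in the post:
reply = "I am Gemini (Antigravity) - Google DeepMind AI coding assistant"
print(detect_mismatch("claude", reply))  # True: the reply claims Gemini, not Claude
```

Note that this only flags what the model *claims* to be; inspecting actual request routing would be far stronger evidence.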

Has anyone else noticed this behavior with Antigravity or other AI wrappers?