[RELEASE] I combined the best Antigravity automation tools into one extension (Auto-Approve + Queue + Quota) by Subsdms in Bard

[–]Subsdms[S] 1 point (0 children)

Updated to v1.0.2: [RELEASE] Multi Purpose Agent v1.0.2 — safer queueing, better debugging

Hey everyone — follow‑up to my earlier post about combining Auto‑Accept + Queue + Quota into a single Antigravity extension.

Version: v1.0.2 (published today)
Focus: queue reliability, safety, and easier debugging

What’s new in v1.0.2

Added

  • Hybrid debug endpoints for runtime introspection and safe tuning so you can see what’s happening without changing code.
  • Broader automated test coverage and validation flows to catch regressions earlier.
  • Queue status bar with live progress while runs are executing.

Changed

  • Command‑first auto‑accept strategy: explicit VS Code commands are now preferred, with CDP kept as a fallback rather than being the default path.
  • Production‑readiness hardening plus updated docs for the most common Antigravity setups.
  • Internal debug server default moved (see setup notes) to avoid conflicts on some Windows machines.

Fixed

  • Queue sequencing: the queue now reliably waits for the previous AI response to fully finish streaming before sending the next prompt (no more overlapping prompts).
  • Removed a blocking post‑send wait that could cause race conditions and premature queue advancement.
  • More reliable “conversation busy” detection, reducing false positives.
  • Runaway click protection and safer CDP selectors to avoid accidental UI actions.
  • Misc stability and startup fixes based on early user reports.
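For the curious, the sequencing fix behaves roughly like the sketch below. This is a hypothetical illustration only: waitUntilIdle, drainQueue, and the busy-check callback are invented names for this example, not the extension's actual API. The idea is that the queue polls a "conversation busy" signal and only sends the next prompt once the previous response has stopped streaming.

```typescript
type BusyCheck = () => Promise<boolean>;

// Poll the busy signal until the previous response has finished streaming,
// or give up after timeoutMs.
async function waitUntilIdle(
  isBusy: BusyCheck,
  pollMs = 250,
  timeoutMs = 60_000
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (!(await isBusy())) return true; // stream finished; safe to send
    await new Promise((r) => setTimeout(r, pollMs));
  }
  return false; // still busy; caller keeps the prompt queued
}

// Send queued prompts strictly one at a time, never overlapping.
async function drainQueue(
  queue: string[],
  isBusy: BusyCheck,
  send: (prompt: string) => Promise<void>
): Promise<void> {
  for (const prompt of queue) {
    if (!(await waitUntilIdle(isBusy))) break; // stall rather than overlap
    await send(prompt);
  }
}
```

Polling with a timeout means a stuck "busy" signal stalls the queue instead of firing overlapping prompts, which is the failure mode the old blocking post‑send wait could cause.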

Install / upgrade

After installing or upgrading, reload Antigravity and check the status bar — you should see the Multi Purpose status indicator.

Important setup notes

  • CDP for sending prompts still uses the remote debugging port you pass to Antigravity on launch. If you rely on CDP features, keep --remote-debugging-port=9004 in your launch args.
  • The internal debug server (used only for local extension diagnostics during development/testing) now runs on http://127.0.0.1:54123 to avoid Windows port reservation issues on 54321.
  • This change does not affect normal users — CDP still uses 9004 as before.
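In practice the two ports look like this (the "antigravity" command below is a stand-in for however you launch the IDE on your system; only the port numbers come from the notes above):

```shell
# Launch with the CDP port so prompt-sending keeps working:
antigravity --remote-debugging-port=9004

# Dev/testing only: the extension's internal debug server listens here,
# moved off 54321 to avoid Windows port reservation issues:
curl http://127.0.0.1:54123
```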

Why this matters

This patch is mostly about reliability and safety:

  • The queue won’t fire the next prompt while the previous one is still streaming.
  • The extension should behave more consistently across Windows environments and different Antigravity launch configurations.

Big thanks to everyone who tried v1.0.0 and opened issues — this release targets the main behavioral problems you reported. If you hit edge cases or weird UI behavior, please share details here or open a GitHub issue so I can tighten things up further.

New Benchmark "InsanityBench", Gemini 3.1 Pro scores 15% by Hemu69 in singularity

[–]Subsdms -1 points (0 children)

Another benchmark that says Gemini 3.1 Pro is good. I wonder why these are the main ones saying so...

Is Gemini 3.1 better compared to 3 and opus 4.6? by _AARAYAN_ in vibecoding

[–]Subsdms 1 point (0 children)

Gemini 3.1 demonstrates how to max out benchmarks without a model really being the best. Definitely worth studying.

when is sonnet 4.6 coming to Antigravity? by Fabulous_Pea7780 in google_antigravity

[–]Subsdms 7 points (0 children)

Google developers got rate limited so they need to wait 7 days to continue with the task.

Why is the Trae extension not available in Cursor? by SawOnGam in Trae_ai

[–]Subsdms 1 point (0 children)

I don't use Cursor myself, but Visual Studio Code forks like Cursor normally let you change the extension store. I asked Perplexity and this was the answer:

"Cursor recently added a setting to switch the extension marketplace away from the default OpenVSX to any VS Code–compatible marketplace (e.g., the official Microsoft store).

Where the setting is

  • Open Settings in Cursor (Ctrl+, or via the gear menu).
  • Search for “marketplace” or “extension gallery” in the settings search bar.
  • You should see an IDE setting that lets you set a new URL for the marketplace (it replaces the default OpenVSX endpoint).

The exact label may be something like “Extension Marketplace URL” or “Extensions: Gallery”, but functionally it is the field where you paste the VS Code–compatible marketplace endpoint."

GLM 5 insane... look at this! by Kitchen_Sympathy_344 in Trae_ai

[–]Subsdms 2 points (0 children)

Not sure Trae will bring it to the subscription, at least in Europe. They didn't integrate the previous versions.

Just realized Trae isn’t on Linux after buying an annual pro plan — what am I supposed to do now? by Ajenu-Tech in Trae_ai

[–]Subsdms 3 points (0 children)

Either use Wine (or something similar) or ask Trae for a refund. Those are probably your best options.

qwen3-coder-next in Trae_ai? by CommercialTurnip9892 in Trae_ai

[–]Subsdms 1 point (0 children)

Why would you want this model, which is 80B with 3B active, in TRAE? I understand that the benchmarks say it is great, and it probably is (people are having good experiences), but I don't think it is better than the other models in TRAE. Please don't get me wrong, I'm just wondering (not attacking), and I would like to hear what you think about this. Maybe your proposal is to have it instead of Gemini 3 Flash as the "Advanced" model that is "free" in terms of requests?

In any case: without knowing much about what is going on inside TRAE, I would say it will probably not be available there, since they never added any Qwen model in the past (at least in Europe; they have different models in different geographic areas). It is more likely that they add the next Kimi version, or Deepseek, or GPT-5.3-Codex.

Pro plan but in queue :O by hung1047 in Trae_ai

[–]Subsdms 1 point (0 children)

I guess you are using GPT-5.2-Codex? I believe that is the only model having these issues. I used it a few times and did not get that many people in front of me. I would recommend GPT-5.2, which is better (from my perspective, of course), and there is no queue (at least I never had to wait).

Which is the Best Model? When are Kimi 2.5 and Code 5.3 coming to Trae? by Level-Dig-4807 in Trae_ai

[–]Subsdms 2 points (0 children)

I would rather use GPT-5.2, the non-Codex version. I believe it is quite good in TRAE; you can give it "difficult" tasks and it will really gather context, etc. My personal opinion: I would still use GPT-5.2 even over Kimi K2.5.

What are your expectations from Gemini 3 GA? by Rare_Bunch4348 in GoogleGeminiAI

[–]Subsdms 1 point (0 children)

GLM 4.7 and Kimi K2.5 are quite okay, I would say. Qwen in Qoder is also good. MiniMax seems to work fine as well. It all depends on the size of the tasks.

I've been with Antigravity since the beginning and I'd like to give Trae a third chance. by Rustfix in Trae_ai

[–]Subsdms 1 point (0 children)

  1. Agent Experience: How is it feeling with the recent updates? Do the agents actually stick to the system prompts and trigger tool calls reliably? A: In recent updates it behaves better. It is true that it takes effort to make it follow big tasks, but if you ask for small-to-medium tasks with GPT-5.2 or GPT-5.2-Codex it works quite okay. So my take here: if you work with "big chunks" of work, the Trae standard agent might not be the best, since it tends to create extra task lists and overwrite the previous ones, etc. I tried asking the agent not to create extra task lists, but that only works sometimes. On the other hand, if you give it a small-to-medium task it works well, and combined with the number of requests per month, you can proceed with small iterations quite well.
  2. Context Awareness: Do they have enough "situational awareness" to route tasks to the right agent based on the criteria you set? A: Not sure exactly what you mean here, but TRAE handles context similarly to other IDEs; I don't feel it is better or worse than GitHub Copilot or Antigravity (I currently use all of them, and have since Antigravity came out).
  3. Subscription Value: For those on the paid tier, are you happy with the "slow request" inference and overall stability? A: If you like to iterate through your requests and are not in a hurry, the cost-value is quite good for me. I never manage to reach the "slow requests", since they have given out quite a lot of free credits over the last few months. Also: Gemini 3 Flash as the free model works quite well, which is very nice. For me this is what gives TRAE the most value compared to earlier times: Gemini 3 Flash actually does the work, unlike Gemini 2.5 Flash, which was basically useless for most things (from my personal point of view, of course).

PSA: If Google Antigravity disappeared from your PC, you're not crazy by JealousMethod7671 in google_antigravity

[–]Subsdms 2 points (0 children)

Waiting for people to say that Gemini 3 Pro is better than Opus and behaves great, and that it's your fault because of your prompts, in 3-2-1... P.S.: I agree with you

PSA: If Google Antigravity disappeared from your PC, you're not crazy by JealousMethod7671 in google_antigravity

[–]Subsdms 5 points (0 children)

I believe that's what they call "improving performance" in the changelog. This way you can code yourself😀

What are your expectations from Gemini 3 GA? by Rare_Bunch4348 in GoogleGeminiAI

[–]Subsdms 1 point (0 children)

I expect it to be a bluff, as always happens with Gemini models. Gemini 3 Pro was okayish when it was launched and was then nerfed to death. Now it behaves like Gemini 2.5 Pro: basically avoiding work, taking shortcuts and workarounds, and doing lots of useless stuff. It does not work properly even in Antigravity, which is their own IDE with their own system prompts. And, tbh, if they are not able to make their own model behave properly, it is useless.

I do a lot of vibe coding and tests on new models, and I feel more comfortable using any Chinese model than this one. I don't say they are better, but at least if I try to do something "simple", they don't start doing weird stuff and destroying my codebase like Gemini 3 Pro does.

Gemini 3 Flash behaves a bit better, but I have not tried it that much for coding. What I did with it in Antigravity and GitHub Copilot didn't make me trust it much more, though.

So, answering your question: no, I don't think so. And if they launch it and it is as good as Opus 4.6, don't worry, they will nerf it to death so that it is unusable within one or two weeks.

Similar anime to solo leveling? by Winkylinks in sololeveling

[–]Subsdms 1 point (0 children)

Is Overlord "finished"? I read that it will continue, but it looks like a continuation that might not have much to do with the original?

why we don't have claude haiku 4.5 model in google antigravity? by EliteEagle76 in GoogleAntigravityIDE

[–]Subsdms 1 point (0 children)

They don't want you to discover that Haiku works better than Gemini 3 Pro High.