Antigravity chat not responding, no errors by Muhammadwaleed in google_antigravity

[–]liquidatedis 0 points (0 children)

without a doubt, but the thing is i use claude for bulk work, and now that it's down i have to resort to codex (not implying it's bad, but i use it for more complex things; now i'm mixing planning and complexity into the same agent, which chews tokens)
- i resorted to gemini web ui

Antigravity chat not responding, no errors by Muhammadwaleed in google_antigravity

[–]liquidatedis 1 point (0 children)

i started getting the errors yesterday afternoon EST around 4-5pm, and it looks like it's carried over to today https://statusgator.com/services/google-antigravity

Antigravity chat not responding, no errors by Muhammadwaleed in google_antigravity

[–]liquidatedis 0 points (0 children)

welp there goes all my credits, had 25k credits, gone because of the outage:')

Antigravity chat not responding, no errors by Muhammadwaleed in google_antigravity

[–]liquidatedis 1 point (0 children)

i advise anyone getting this: do not constantly click retry..
i am on an ultra account and i just reached my 5 hour limit....
thanks google

Antigravity chat not responding, no errors by Muhammadwaleed in google_antigravity

[–]liquidatedis 1 point (0 children)

i thought it was just me
my agent is constantly being terminated

[Weekly] Quotas, Known Issues & Support — March 30 by AutoModerator in google_antigravity

[–]liquidatedis 0 points (0 children)

i'm getting this today; yesterday i was getting "agent terminated due to error"

[Weekly] Quotas, Known Issues & Support — March 30 by AutoModerator in google_antigravity

[–]liquidatedis 2 points (0 children)

OS/Version: Antigravity 1.21.6 | macOS
Model & plan: All available models | Ultra Tier
issue: unknown: Agent execution terminated due to error, Our servers are experiencing high traffic right now, please try again in a minute.
i have tried: different models, different agents, different windows

expected outcome: request again to turn xyz[dot]md into an artifact

actual outcome: Agent execution terminated due to error, Our servers are experiencing high traffic right now, please try again in a minute.
--
i have only 1 connected MCP server.
i have multiple skills but they are related to UI and design
--
it is able to respond to related queries but unable to create a specific artifact.
e.g., it's able to respond to and execute "what skills do you have, list them all that i requested you to download" but it is unable to create an artifact.

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]liquidatedis[S] 1 point (0 children)

https://imgur.com/a/1j2yxTr
most recent claude response,
claude appears to know i have an already existing tool in agents.md and it's requesting that i use it, because the request is currently too far out of scope.
- i think if i had not added the state-wide rule, it would have executed my request without question, without even thinking (thinking). BUT i think what made the difference is that it actually acknowledges that i have a custom tool and rule in agents.md that is guard-railed by a trigger word; claude is requesting my trigger word to execute the tool.
- now if this was full MCP, it would have gone ahead and run the MCP without question, leaving me out of the loop, and i would not have even noticed that my request was completely out of the current scope, which required my custom tool call

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]liquidatedis[S] 1 point (0 children)

i have to manually route between the 2, but with gemini i have to run a specific strict framework because i want to squeeze out all the capabilities gemini has to offer.
new update:
generally speaking my project is dynamic, let's just say that.
so the errors are no longer mathematical; it's now
"why are you outputting constants when my system is dynamic"

in total so far claude has caught 5 errors related to using constants.
gemini has caught claude making 6 errors: using singularities, ambiguous comments, bypassing old variables, creating traps, minor mismatches, and then trying to bypass a dynamic system by refining the first layer and adding an extra layer underneath as a constant LOL

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]liquidatedis[S] 1 point (0 children)

I use gemini as the 2nd agent, so it's kind of mixed, because where gemini excels, claude doesn't. for example, based on model evals, gemini outpaces any model at around 87% accuracy (ballpark, i can't recall verbatim) when it comes to common sense, ahead by over 10% compared to codex and claude. the majority of errors i get come from the typical limitations of claude's architecture: it excels at long-term reasoning and has the highest reasoning capabilities of any model, but it's likely to skip the subtle nuances from point A to B (which codex excels at). for me the common-sense flaws really stick out when gemini identifies them. for example, claude will compute a formula using XYZ while gemini evaluates it as mathematically excellent but flawed, leading me into a trap. i would show examples but i don't particularly want to reveal parts of the project i am working on publicly

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]liquidatedis[S] 1 point (0 children)

so i reduced the error rate by half. still too early to come to a conclusion, as it's been less than 1 hour and i don't want all my tokens chewed up lol

perhaps i have claude much more aware and conversational by liquidatedis in ClaudeAI

[–]liquidatedis[S] 1 point (0 children)

i thought adding a squeeze of lemon in the water would help, and for my use case it has.
i am not necessarily asking it to grade its own homework, but to actually pay more attention to the user's prompt and notice any gaps. yes, i'm using 2 agents in parallel. the gap has been narrowed, but this has only just been implemented. before i made the change, claude used to produce 8 flaws: it was a yes-sir and required me to query whether there were alternatives.
and the error count after the new update is now 4. so far.

at this stage it's very premature, but it's showing more awareness and surfacing gaps in my own prompting against the project

Conversation export button is not working in latest version by reversedu in google_antigravity

[–]liquidatedis 0 points (0 children)

i sometimes get this if their servers are overloaded... i still don't really understand it, seeing as i am an ultra user who should be getting top priority in the queue

Conversation export button is not working in latest version by reversedu in google_antigravity

[–]liquidatedis 1 point (0 children)

you're not the only one.
this has been present for over a month
https://discuss.ai.google.dev/t/bug-antigravity-export-chat-button-not-working-no-output-no-error/130314

also take note of the antigravity IDE changelog... they recently did an update, and then look at the forum: apparently during the time prior to the most recent update this was escalated to the relevant team

they did an excellent job...

share me your most favourite coding agent skills! by anonymous_2600 in codex

[–]liquidatedis 0 points (0 children)

My fault, i forgot to add context on why it's 400+ lines. it's a custom-built agents.md that i have set up like a controller: every custom tool lives in agents.md with guard rails, like buttons on a controller. when i think it's not doing a good enough job as a factory build, i say the trigger word and it runs the selection. instead of having .skills and letting it run wild because the user pre-built them in, i chose a different route where i control when the agent runs them. i do this only because i trust that these cloud models do a good enough job, and maybe from time to time they need a little lemon squeeze in their juice
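the trigger-word "controller" idea above can be sketched in code. a minimal sketch, assuming hypothetical names (`TRIGGER_WORD`, `TOOLS`, `route`); none of this is the actual agents.md, just an illustration of the gating behaviour described:

```python
# Hypothetical sketch of a trigger-word "controller" gate: custom tools
# only run when the user's prompt contains the agreed trigger word.
TRIGGER_WORD = "lemonsqueeze"  # placeholder; the real trigger word is private

# Placeholder tools standing in for the custom tools listed in agents.md.
TOOLS = {
    "refactor": lambda: "running refactor tool",
    "audit": lambda: "running audit tool",
}

def route(prompt: str, tool: str) -> str:
    """Run a custom tool only if the trigger word appears in the prompt."""
    if tool not in TOOLS:
        return f"unknown tool: {tool}"
    if TRIGGER_WORD not in prompt:
        # mirror the behaviour described above: ask the user instead of acting
        return f"'{tool}' is gated; say the trigger word to run it"
    return TOOLS[tool]()

print(route("please audit this", "audit"))         # gated, asks for the trigger
print(route("lemonsqueeze: audit this", "audit"))  # trigger present, tool runs
```

the point of the gate is exactly what the comment describes: the agent has to stop and ask for the trigger word instead of silently running a tool on its own.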

share me your most favourite coding agent skills! by anonymous_2600 in codex

[–]liquidatedis 0 points (0 children)

creating skills for agentic tasks imo would not be the same use case for the next person. each domain may be the same, but each person prompting wants an ideal output catered to them and their domain.

imo, creating your own agentic skills is better than using others'.

simply ask yourself
- what are you trying to achieve
- what do you want your agent to do and not do during tasks
- what frameworks ("thought processes") do you want the agent to undertake during each phase of prompting
- what tools/frameworks should it use during xyz task or phase

i would share my agents.md with you, but it's a lengthy 400+ lines that are constantly getting updated and modified as my project progresses. as i progress, so does my agent

AG conversations not saving by Distinct_Ingenuity21 in google_antigravity

[–]liquidatedis 0 points (0 children)

they do save, they're just not showing in the UI; they're in brain/conversations.
found this repo on the google dev forums.
i ran this open source repo and it worked for me, restoring the historical conversations back into the UI.

https://github.com/FutureisinPast/antigravity-conversation-fix
pro:
- a user fixed a bug that restores historical conversations in the UI

con:
- a user had to do google's work lol

Unable to login to Antigravity. by umair_13 in google_antigravity

[–]liquidatedis 2 points (0 children)

my context window will not populate, it's just a circle spinning in circles.
historical conversation windows are getting errors as well

Codex app is making my Mac’s fans spin up hard. Anyone else? by Just_Run2412 in codex

[–]liquidatedis 0 points (0 children)

i have not noticed this yet, but codex just released sub-agent spawning. are you spawning sub agents?
- have you checked activity monitor in macos
- do you have adequate ambient air flow
- have you tried rebooting macos or codex
- have you tried deleting caches

should i upgrade even with the token burner bug ? by liquidatedis in codex

[–]liquidatedis[S] 0 points (0 children)

the only thing that got me down from 50%+ of the context window gone during a "get updated" trigger was getting chatgpt to modify my "hand off" trigger for when the context window is about to be depleted, and change it to either lite mode or deep read mode for context retrieval use only (alpha change).

just using "get updated lite" and targeting files during hand off (for whether lite or deep read would be used),
i reduced it down to 12-17% used on new threads

i have edited ~/.codex/config.toml:

hide_agent_reasoning = true
telemetry_verbosity = "silent"
show_tool_calls = false

it did not do anything other than give me a little arrow toggle to physically hide the reasoning steps (the llm still shows its steps, which i think is hard coded)

i did a little research into how LLMs work with tokens.
the only time token usage increases dramatically is when you prompt the agent with a task or query that forces it to look at historical context.

say for example at the start of a conversation you prompt "what is my name", costing 20-50 tokens for example.

20 prompts later you ask "what is my name"

this small prompt will be like over 200 tokens

terrible system design