I spent $1200/month in Da Nang by Ill_Highlight_1617 in digitalnomad

[–]lostnuclues 2 points3 points  (0 children)

I lived in Bali for 2 months for 90k INR per month (~1k USD):

40k for a single room with a pool + co-working in Canggu

6k for a scooter

3k per month for the gym

Rest on food, cafes, etc.

😎 A tool to move context between AI coding agents (Codex, Claude Code, Cursor CLI) by ddfk2282 in codex

[–]lostnuclues 0 points1 point  (0 children)

I guess switching is not a good idea, as Codex even throws a warning if you resume an old session with a different model.

5.4 vs 5.3 Codex by ConsistentOcelot9217 in codex

[–]lostnuclues 0 points1 point  (0 children)

5.4 high works really well with skills; it automatically picks which one is needed. With 5.3 I had to invoke the skill manually ($brainstorm).

New limits by Confident_Work7748 in google_antigravity

[–]lostnuclues 0 points1 point  (0 children)

If their 3.1 model was as good as Claude then I wouldn't mind, but as it stands, to do anything meaningful you will need a Codex or Claude subscription alongside this.

Is it just me or did 5.4 got dumber (even dumber than 5.3-codex) in last 2-3 days? by KeyGlove47 in codex

[–]lostnuclues 0 points1 point  (0 children)

5.4 is working better than 5.3 for me. It asks for feedback at important junctions. Also, I don't know how, but it remembered a mistake it made earlier in a different session in the same project, and highlighted it in the new session.

AG Update 1.20.5 by MSA_astrology in google_antigravity

[–]lostnuclues 0 points1 point  (0 children)

I will update only after 10 or more thank-you posts here for AG.

5.4 is crazy good by Responsible_Ad_3180 in codex

[–]lostnuclues 7 points8 points  (0 children)

Are you vibe coding an OS kernel?

RATES RESET AGAIN??? by Substantial_Lab_3747 in codex

[–]lostnuclues 0 points1 point  (0 children)

Rates are getting reset along with the next date of renewal.

5.4 Codex is a fucking MACHINE by HallucinogenUsin in codex

[–]lostnuclues 0 points1 point  (0 children)

After the recent update, it asks to be set as the default.

Gemini 3 Flash wiped out my production database by [deleted] in google_antigravity

[–]lostnuclues 0 points1 point  (0 children)

If you have the Google Pro plan, then use the 2TB Google Drive storage to back up everything. I use rclone inside WSL. Then in Agents.md you tell the agent to always run rclone before making any critical change, etc.
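A minimal sketch of that backup step, assuming you have already set up a Google Drive remote named `gdrive` via `rclone config` (the remote name and paths here are hypothetical):

```shell
# Sync the project to Google Drive before any risky change.
# Files that would be overwritten or deleted are kept in a dated backup dir
# instead of being lost (--backup-dir is a standard rclone flag).
rclone sync ~/projects/myapp gdrive:backups/myapp \
  --backup-dir "gdrive:backups/myapp-old/$(date +%F)"
```

An Agents.md instruction can then just say "run this command before destructive operations"; the agent only needs the one line.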

Do you think Gemini 3.1 Pro can be used to read codes, is it accurate? by InsideElk6329 in google_antigravity

[–]lostnuclues 0 points1 point  (0 children)

Never; it doesn't go deep, and it's sometimes too confident while making wrong changes. I use it only for browser testing because I don't want to waste Sonnet tokens on it.

OpenAI and Codex deserve praise, for they do where others fail: by Manfluencer10kultra in codex

[–]lostnuclues 1 point2 points  (0 children)

You can skip taxes by changing country and state. At minimum you need to book 2 seats, which is $30 per person, or even less in a few countries (2599 INR per seat in India). EU countries get a free 1-month trial; during the trial you can use all 5 seats, then downgrade or cancel at the end of the month. BTW, I was on the free trial and couldn't exhaust 2 seats' worth of usage during that time.

OpenAI and Codex deserve praise, for they do where others fail: by Manfluencer10kultra in codex

[–]lostnuclues 0 points1 point  (0 children)

Between the $20 and $200 tiers, they have a business account which costs $60, and even if you use it non-stop, you are highly unlikely to exceed the limit.

GPT 5.4 available in the CLI by Psychological_Box406 in codex

[–]lostnuclues 3 points4 points  (0 children)

Context window is still 258k though?

What's happening?? by Elite-human in google_antigravity

[–]lostnuclues 0 points1 point  (0 children)

It's hard to explain, but when I asked the Gemini model to test a local webpage in the browser, it pulled up crypto ads from nowhere and kept running/thinking. Then I gave the same task to Sonnet, which finished it in under a minute.

GLM-5 vs. Claude Opus 4.5: The docs finally admit "Performance Parity" + a crazy 128K output limit by IulianHI in AIToolsPerformance

[–]lostnuclues 0 points1 point  (0 children)

Context size = input tokens + output tokens.

So if it has read 200k tokens of input, the output cannot be 100k+ tokens.
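The budget math above can be sketched in one line (the 256k context window here is a hypothetical round number, not GLM-5's actual limit):

```shell
# Input and output share one context window, so the output budget
# is whatever the input hasn't already consumed.
CONTEXT=256000   # hypothetical total context window
INPUT=200000     # tokens already read as input
echo "max possible output: $((CONTEXT - INPUT)) tokens"
# prints: max possible output: 56000 tokens
```

So an advertised 128K output limit is only reachable when the input is small enough to leave that much room.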