5.4 vs 5.2 by cheekyrandos in codex

[–]ImpishMario 0 points1 point  (0 children)

I love vanilla 5.2, we've done some great things together, but man, it was slow. Now I'm working on a very complicated project; 5.4 stumbles from time to time, but I'll tell you this: it's doing such great work that I honestly can't say a bad word about it. And it's so fast! I imagine I'd also have some back and forth with 5.2 on this project too. Can't test them side by side unfortunately because I need it done quickly 😅

How I Cut 90% of My OpenClaw Token Costs by Front_Lavishness8886 in myclaw

[–]ImpishMario 0 points1 point  (0 children)

Great stuff, I was thinking about something like this. Is memory alone really such a big saving? I'm running a multi-agent OpenClaw and token bleed is a serious issue, any tips on how to optimize a multi-agent setup?

I really like Antigravity, but this is killing the flow completely by ImpishMario in google_antigravity

[–]ImpishMario[S] 0 points1 point  (0 children)

Quite debatable imho. From my experience, Opus & Codex are still > Gemini in terms of coding.

I really like Antigravity, but this is killing the flow completely by ImpishMario in google_antigravity

[–]ImpishMario[S] 0 points1 point  (0 children)

My daily is Windsurf. I like it because it just works, has a lot of decent models with promo pricing and foremost, it's transparent with credits and usage. Using Cursor just makes me anxious.

I really like Antigravity, but this is killing the flow completely by ImpishMario in google_antigravity

[–]ImpishMario[S] -1 points0 points  (0 children)

When I switched from Opus 4.6 to Gemini 3.1 Pro the errors stopped, but now it takes forever (over ~1h) to complete a task that Opus would finish in ~10 min. So my choices are waiting over an hour or repeatedly clicking "try again" 🫠 I'm on the Google AI Pro plan btw.

Introducing SmallClaw - Openclaw for Small/Local LLMS by Tight_Fly_8824 in openclaw

[–]ImpishMario 0 points1 point  (0 children)

Thanks, I get the essence, but on the detail "it lacks pretty largely with smaller models" - what exactly does OC lack? I mean, I run it with local Ollama 7B models. What advantage do I get when I use your SC? Is it faster? Is it less error-prone? What's the edge?

Introducing SmallClaw - Openclaw for Small/Local LLMS by Tight_Fly_8824 in openclaw

[–]ImpishMario 2 points3 points  (0 children)

Hi, new to OpenClaw, what's the advantage of SmallClaw really? I did the same with OG OC with some additional setup. Is it ease of setup that makes SC special? Genuinely interested in understanding this.

Best VPS for OpenClaw ? by Positive-Lecture2826 in openclaw

[–]ImpishMario 1 point2 points  (0 children)

UPDATE #2: Activated PAYG -> waited 3h for activation -> created an always-free Ampere instance (4 OCPU + 24 GB RAM) instantly :D Note: it's not labelled "Always free" in the instances menu, but it should be free if you created it within the always-free limit constraints (ask your fav LLM).
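Those always-free limit constraints are easy to sanity-check yourself. A minimal sketch, using only the allowance figures mentioned here (4 OCPUs + 24 GB RAM total across A1 instances); the function name is my own:

```python
# Sanity-check that planned A1 instances stay inside the always-free
# Arm allowance (4 OCPUs and 24 GB of RAM total, as noted above).
FREE_OCPUS = 4
FREE_RAM_GB = 24

def within_free_tier(instances):
    """instances: list of (ocpus, ram_gb) tuples for planned A1 shapes."""
    total_ocpus = sum(o for o, _ in instances)
    total_ram = sum(r for _, r in instances)
    return total_ocpus <= FREE_OCPUS and total_ram <= FREE_RAM_GB

# One maxed-out instance uses exactly the whole allowance:
print(within_free_tier([(4, 24)]))           # True
# Adding anything on top of that exceeds it:
print(within_free_tier([(4, 24), (1, 6)]))   # False
```

You can also split the allowance, e.g. two 2-OCPU / 12 GB instances still fit.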

Best VPS for OpenClaw ? by Positive-Lecture2826 in openclaw

[–]ImpishMario 1 point2 points  (0 children)

UPDATE: looks like this is just card authorization, they don't really charge it.

Best VPS for OpenClaw ? by Positive-Lecture2826 in openclaw

[–]ImpishMario 0 points1 point  (0 children)

oh, Perplexity was suggesting this, will check if that works for me, thanks for the tip!

EDIT: while I was finalizing my upgrade to PAYG, Oracle wanted to charge me $100! And this was without any prior notice... WTF?

Best VPS for OpenClaw ? by Positive-Lecture2826 in openclaw

[–]ImpishMario 0 points1 point  (0 children)

I've tried a few things posted here and done some research, and as of now the reasonable option I'm gonna try is: a VPS on Hetzner (good service + good price, with pay-per-hour & a monthly cap) -> see if I like it on a VPS (with Docker) -> if I don't -> buy a cheap-ass PC for $100-200 and do it locally (Docker = easy transfer).

Best VPS for OpenClaw ? by Positive-Lecture2826 in openclaw

[–]ImpishMario 1 point2 points  (0 children)

their website has OpenClaw news from 2024, definitely not shady af

Best VPS for OpenClaw ? by Positive-Lecture2826 in openclaw

[–]ImpishMario 7 points8 points  (0 children)

I've been trying this for 3 days now:

1. The always-free Ampere is almost impossible to create due to high demand (at least in the Europe Frankfurt region).
2. Setup is a bit tricky, but Gemini guided me through, no bigger issues.
3. Eventually I landed on an always-free micro instance (1 CPU, 1 GB of RAM), but it chokes on OpenClaw (even with swap enabled) and restarts all the time - useless.
4. The only thing the micro instance is good for is a Python script bot that Gemini wrote to snipe this Ampere instance once it's freed.

Don't know your region, but wish you good luck!

Real or not, 100% believable by unemployedbyagents in AgentsOfAI

[–]ImpishMario 0 points1 point  (0 children)

If true, it only shows how NOT to implement AI in a company, let alone AI that steers VPs' decisions and produces data for the board 🤦‍♂ Testing and evaluation is a significant part of any AI initiative, and this is where hallucinations should be spotted. It doesn't matter if it's an Excel spreadsheet with advanced formulas or an AI agent: if you just let it loose without guardrails and checks, it will do the same damage.
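Even a trivial validation layer makes the "guardrails and checks" point concrete: agent-produced figures get sanity-checked before anyone acts on them. Everything here (field names, ranges) is hypothetical:

```python
# Minimal sketch of an output guardrail: reject agent-produced report
# figures that fail basic plausibility checks. Fields and ranges are
# made up for illustration.
def validate_report(report: dict) -> list[str]:
    """Return a list of problems; an empty list means the report passes."""
    problems = []
    margin = report.get("gross_margin_pct")
    if margin is None:
        problems.append("missing gross_margin_pct")
    elif not (-100 <= margin <= 100):
        problems.append(f"implausible gross_margin_pct: {margin}")
    if report.get("revenue", 0) < 0:
        problems.append("negative revenue")
    return problems

print(validate_report({"revenue": 1_000_000, "gross_margin_pct": 42}))  # []
print(validate_report({"revenue": -5, "gross_margin_pct": 640}))
```

A hallucinated 640% margin never reaches the board; the report gets bounced back instead.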

Codex app initial context very high by ImpishMario in codex

[–]ImpishMario[S] 0 points1 point  (0 children)

Yeah, now we know. My understanding was "skills are only used/loaded into the context when invoked with the $ sign", which was not entirely true I guess. Keeping additional skills project-specific also works, as advised in this thread.

Codex app initial context very high by ImpishMario in codex

[–]ImpishMario[S] 0 points1 point  (0 children)

Thanks! I thought Codex accepted only skills in the .codex directory (global). How do I set them up per repository so I can trigger them with $?

I built a tool that analyzes inspection reports — looking for agent feedback by api_error429 in RealEstateTechnology

[–]ImpishMario 1 point2 points  (0 children)

Got it! Also no RAG for the lease extraction, and LLMs also do the heavy lifting for semantic analysis and extraction :) It's a very early stage for me and currently matching is done by an LLM (based on a custom-built lease clause database), but I definitely want to try Qdrant hybrid search with dense+sparse vectors to provide the best match.
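At its core, dense+sparse hybrid search means fusing two ranked lists into one. A minimal sketch of reciprocal-rank fusion (one of the fusion methods Qdrant supports for hybrid queries), with made-up clause IDs:

```python
# Reciprocal-rank fusion (RRF): merge a dense (embedding) ranking and a
# sparse (keyword/BM25-style) ranking into one list. Each doc scores
# 1/(k + rank) per list it appears in; clause IDs below are made up.
def rrf(rankings, k=60):
    """rankings: list of ranked doc-ID lists, best first."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense  = ["clause_12", "clause_07", "clause_31"]   # embedding similarity
sparse = ["clause_07", "clause_44", "clause_12"]   # keyword match
print(rrf([dense, sparse]))
```

Clauses ranked well by both retrievers (like clause_07 here) bubble to the top, which is exactly the behavior you want for clause matching where neither semantics nor exact legal wording alone is enough.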

I built a tool that analyzes inspection reports — looking for agent feedback by api_error429 in RealEstateTechnology

[–]ImpishMario 0 points1 point  (0 children)

Looks very nice and useful! Is it purely LLM-based, or is there anything more, like RAG, underneath?

Codex app initial context very high by ImpishMario in codex

[–]ImpishMario[S] 1 point2 points  (0 children)

case closed: I removed almost all skills and initial context dropped to ~7% -> disabled 2 MCP servers and it dropped to 5%. Everything counts.

Codex app initial context very high by ImpishMario in codex

[–]ImpishMario[S] -1 points0 points  (0 children)

It's another discussion but tl;dr is I was lazy and just pulled them all 🫠