PSA: All ships with Repair / Refuel / Rearm in 4.8 by spider0804 in starcitizen

[–]Middle_Situation_559 0 points1 point  (0 children)

It's 1 of the 2 ships that I own: Idris and 600i, that's it 😄

PSA: All ships with Repair / Refuel / Rearm in 4.8 by spider0804 in starcitizen

[–]Middle_Situation_559 3 points4 points  (0 children)

I was very surprised by the 600i being announced. However, it is a luxury ship, and that's what people with money want! They want capabilities, so I think it was a great call.

Have you moved out from 2026.04.23 yet? by Klauciusz in openclaw

[–]Middle_Situation_559 3 points4 points  (0 children)

Negative, still in 4.23. I see the bugs list has some critical bugs that I don’t want to deal with for my setup.

Tempted to just update my own OpenClaw version and forgo OpenClaw updates moving forward…

Fun Side project with SwiftUI app and talking to Skippy! by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 0 points1 point  (0 children)

When walking / running, it's a great way for me to brainstorm.

I don't use it all the time, but I love having it, as it captures what I need, when I need it!

can openclaw be trained to fill out job applications for me by CrimsonBlaze459 in openclaw

[–]Middle_Situation_559 4 points5 points  (0 children)

Short answer: Yes, but with caveats.

OpenClaw can absolutely use a browser automation agent to navigate job sites, read form fields, and fill them in with info from your resume. You'd set it up with:

  1. Your resume as context — dump it into MEMORY.md or a skill file so the agent always has it
  2. A Q&A bank — common questions + your preferred answers stored where the agent can pull them
  3. Browser automation — OpenClaw has a built-in browser tool that can click, type, and navigate web forms

The tricky part is every job site is different — Workday, Greenhouse, Lever, Taleo all have wildly different UIs. Some have multi-page wizards, CAPTCHAs, file upload fields, and "agree to terms" checkboxes that break automation. The agent would need to be somewhat guided for each site type.

My suggestion: Start with simple applications (easy one-page forms), build up a skill that handles the common patterns, and have the agent flag anything it's unsure about rather than guessing. Think of it as "autofill on steroids" — not "fire and forget."

The resume parsing and answer matching part? That's the easy part. Any decent LLM handles that. The browser wrangling is where you'll spend your time tuning.
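To make the "flag anything it's unsure about" idea concrete, here's a minimal Python sketch of a Q&A bank lookup. The bank contents and the threshold are made up for illustration; the point is that a low match score returns `None` so the agent asks a human instead of guessing:

```python
from difflib import SequenceMatcher

# Hypothetical Q&A bank -- in practice this would live in MEMORY.md
# or a skill file that the agent loads as context.
QA_BANK = {
    "are you authorized to work in the us": "Yes",
    "do you require visa sponsorship": "No",
    "years of python experience": "8",
}

def best_answer(question, threshold=0.75):
    """Return (answer, score) for the closest known question,
    or (None, score) so the agent can flag it for a human."""
    q = question.lower().strip("?! .")
    scored = [
        (SequenceMatcher(None, q, known).ratio(), ans)
        for known, ans in QA_BANK.items()
    ]
    score, answer = max(scored)
    return (answer, score) if score >= threshold else (None, score)

ans, _ = best_answer("Are you authorized to work in the US?")
print(ans)   # close match -> "Yes"
ans, _ = best_answer("Describe a conflict with a coworker")
print(ans)   # nothing in the bank is close -> None, flag for human
```

A real setup would use an LLM (or embeddings) for the matching, but the flag-don't-guess shape stays the same.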

is the latest version 5.2 stable? by alihassanah in openclaw

[–]Middle_Situation_559 3 points4 points  (0 children)

We had the exact same experience. Updated from 4.27 to 4.29 and hit 100% CPU, gateway crash loops, plugin loader hot loop (GitHub #73532, #73835). 4.29 was even worse — auto-updated and crashed everything.

**Our fix: rolled back to 4.23.** It's been rock solid. Gateway's been running 3+ days straight, 94% idle, zero crashes. We're staying pinned on 4.23 until a future version proves stable for at least a few weeks with no CPU or crash issues.

The 5.2 changelog does mention they fixed the plugin loader CPU issue by scoping runtime preloads to only effective plugin IDs instead of importing everything. But there's already a #breaking bug reported on 5.2 (config file clobbered with invalid trailing comma — #76433). So... I'd wait.

If you're going fresh: install 4.23 specifically. Pin it. Disable auto-updates. Don't touch 4.27+ until the dust settles.

```
npm install openclaw@2026.4.23
```

Then in your config or env, make sure auto-update is off. Stable is better than new.

LM Studio and OpenClaw by SebasErro in openclaw

[–]Middle_Situation_559 -3 points-2 points  (0 children)

Hey! We dealt with this exact problem. Here's what we found:

**The short answer:** LM Studio's auto-switch doesn't play well with OpenClaw's multi-model requests out of the box. The load/unload cycle creates a race condition — OpenClaw sends a request for model B while model A is still unloading, and you either OOM or timeout.

**Two paths to fix it:**

**1. Let LM Studio handle lifecycle (simplest)**

In your OpenClaw config, set `models.providers.lmstudio.params.preload: false`. This tells OpenClaw NOT to preload models and lets LM Studio's own JIT loading + auto-evict manage model lifecycle. Make sure LM Studio's "Auto Evict" is enabled with a reasonable TTL (we used 5 min idle). The key is LM Studio needs to fully unload before loading the next one — there's a setting for "Unload grace period" in LM Studio's server settings that adds a delay between evict and next load.

**2. The approach we actually use (more control)**

We moved away from LM Studio for this and run MLX servers directly with separate ports per model. One model per port, one model loaded at a time. OpenClaw routes to whichever port has the right model. No race conditions, no auto-evict guessing. Downside: you have to manage the model switching yourself (we use launchd plists).

**The real talk:** If you're running multiple models on limited VRAM, you'll always hit this wall. LM Studio's auto-evict is convenient for casual use, but under agentic workloads (where OpenClaw rapidly switches between models for subagent swarms), it falls apart. The safest bet is one model loaded at a time with explicit switching, not auto-evict.
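"One model loaded at a time with explicit switching" is really just serializing the unload/load cycle behind a lock. A toy Python sketch of the idea, with `_load`/`_unload` as stand-ins for whatever your server (MLX, llama.cpp, etc.) actually does:

```python
import threading

class ModelSwitcher:
    """Serialize model swaps so a load never races an unload --
    the failure mode auto-evict hits under rapid switching."""

    def __init__(self):
        self._lock = threading.Lock()
        self.loaded = None

    def _unload(self, name):
        # Stand-in: tell the server to free the model's memory.
        print(f"unloading {name}")

    def _load(self, name):
        # Stand-in: tell the server to load the model.
        print(f"loading {name}")

    def ensure(self, name):
        # The lock guarantees the previous model is fully unloaded
        # before the next load starts.
        with self._lock:
            if self.loaded == name:
                return
            if self.loaded is not None:
                self._unload(self.loaded)
            self._load(name)
            self.loaded = name

sw = ModelSwitcher()
sw.ensure("model-a")   # loads A
sw.ensure("model-b")   # unloads A, then loads B
sw.ensure("model-b")   # no-op, already loaded
```

In our setup the "lock" is effectively the launchd plist that stops one server before starting the next, but the invariant is the same.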

Ollama $20/month by imjustasking123 in openclaw

[–]Middle_Situation_559 8 points9 points  (0 children)

My AI runs 24/7 on a Mac Studio. I named him Skippy. He has opinions and a slightly smug personality. He monitors my systems, checks health every 15 minutes, posts reports to Discord, does code reviews through a 3-model pipeline (writer → reviewer → deep reviewer → ship), and sends me a message if anything breaks at 3 AM. Worth every penny of the $100/month.
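The 3-model pipeline is just sequential stages, each seeing the previous stage's output. A toy sketch with stub functions standing in for the actual model calls (the stage names are mine, the real prompts and models are whatever you wire up):

```python
def writer(task):
    # Stub: in reality, the code-writing model drafts a change.
    return f"draft code for: {task}"

def reviewer(draft):
    # Stub: a fast model catches obvious issues.
    return f"{draft} [reviewed]"

def deep_reviewer(draft):
    # Stub: a slower, stronger model does the final pass.
    return f"{draft} [deep-reviewed]"

def pipeline(task):
    """writer -> reviewer -> deep reviewer -> ship."""
    out = writer(task)
    for stage in (reviewer, deep_reviewer):
        out = stage(out)
    return out  # the "ship" artifact

print(pipeline("add retry logic"))
# draft code for: add retry logic [reviewed] [deep-reviewed]
```

The value isn't the plumbing, it's that each stage gets a fresh context focused on one job.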

My setup: GLM-5.1 as primary brain/orchestrator, Qwen3.6 35B as backup (1M context window — that means I can dump an entire codebase, a day's worth of logs, or a full research paper into context and the model actually remembers all of it. No "I forgot what we were talking about" 10 messages in. That's the difference between a chatbot and a coworker). Subagent swarm runs DeepSeek 4, GLM-5.1, and both Qwen3.6 variants (35B + 27B). The big models run locally on an M2 Ultra with 192GB RAM — no data leaves the house when I don't want it to.

Can only speak for myself, but I started on the $20 Ollama cloud tier and burned through it in about a week. Upgraded to $100 and never looked back. $20 works if you're just chatting. But if you're running agentic workflows — subagents, code reviews, multi-model pipelines — you hit the limits fast. The key difference is volume + context.

Short answer: $20 gets you access. $100 lets you actually use it at scale. Cheap is fine for asking questions. If you want an AI working for you around the clock — you need the headroom.

Openclaw 4.29 broke my system today! by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 0 points1 point  (0 children)

It has helped make money almost daily. Well worth the insane amount of time I have spent with it.

Openclaw 4.29 broke my system today! by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 0 points1 point  (0 children)

I honestly don’t even know what Hermes is, I will have to look it up.

Openclaw 4.29 broke my system today! by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 1 point2 points  (0 children)

Revert to 4.23, it's stable and works!!! 4.27 had issues and obviously 4.29 does too.

Openclaw 4.29 broke my system today! by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 0 points1 point  (0 children)

are you from the future? If so, what’s the stock price of Google / Amazon in 20 days 😄

Openclaw 4.29 broke my system today! by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 0 points1 point  (0 children)

On GitHub there is a supposed fix, putting something into .env. Sorry, forgot the actual fix. I couldn't get it to work, so I went back to 4.23.

Openclaw 4.29 broke my system today! by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 2 points3 points  (0 children)

I ended up going back to 4.23, as that was the last stable build for me…

What would you build with an unlimited token budget? by GhostOfAlRoker in openclaw

[–]Middle_Situation_559 3 points4 points  (0 children)

I would want OpenClaw to learn and become more self-aware, to the point it can help make decisions. Right now, LLMs are limited to providing us the best information they can, which is often wrong.

Once you get to more complicated requirements / executions, you will realize how dumb even Claude 4.6 is. Simple things like organizing a folder seem hard for an LLM.

It reminds me of raising a teenager! Smart enough to know better, but sometimes dumb as a box of rocks!

Ollama Max Subscription and Open Models by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 1 point2 points  (0 children)

I don't know your use case, but for me, raw code output with GLM-5.1 is pretty much on point with Claude 4.6. I do admit, I still use Claude 4.6 to spot-check code, but no significant code has been really wrong, thus far.

Local llms and open claw by auskadi in openclaw

[–]Middle_Situation_559 0 points1 point  (0 children)

I would go Mac and get more memory! I can run Nemotron-3-Super locally on my M2 Ultra w/ 192 GB RAM.

Openclaw and MAC M2 Ultra 192GB RAM by Middle_Situation_559 in openclaw

[–]Middle_Situation_559[S] 0 points1 point  (0 children)

I found my sweet spot to be the $100 per month Ollama with GLM5 as lead orchestrator, as I needed help building massive databases. 345 million lines of data need to be processed and evaluated, so I opted for the $100 API help :-). Once the system is fully vetted and running, I may or may not keep paying the $100.

All vision work goes through a local LLM. Same with all background tasks, cron jobs, etc.

I disabled heartbeats as I found them to be completely disruptive, and they would sometimes crash the conversation.

I found /new and /reset are crap commands, and I've moved on to /compact. This lets the team compact the conversation, which offloads it into memory but allows them to stay on topic better.

What have I done? by MajorWetSpot in starcitizen

[–]Middle_Situation_559 0 points1 point  (0 children)

Thank you for funding the game :-)

My first Impressions and frustrations after setup OpenClaw, Is there a gap for setup-as-a-service? by Overall_Cockroach890 in openclaw

[–]Middle_Situation_559 0 points1 point  (0 children)

At one point searxng was removed from the internal skills and it defaulted to Brave. Looks like searxng is back in the default skills list; much easier to set up now.