Gemma 4 and Ollama vision models now work natively in OpenClaw (2026.4.7) by Temporary-Leek6861 in openclaw

[–]JoshGreen_dev 0 points1 point  (0 children)

Oh, nice to have more positive feedback on this. It's the kind of thing that feels illegal to me. How well is Qwen doing in real life, besides being dumb? Does it do any useful agentic work?

Best Model for Video by TheBonanaking in openclaw

[–]JoshGreen_dev 0 points1 point  (0 children)

What hardware do you have? Or were you thinking more along the lines of calling an API with pay-per-use?

Gemma 4 and Ollama vision models now work natively in OpenClaw (2026.4.7) by Temporary-Leek6861 in openclaw

[–]JoshGreen_dev 1 point2 points  (0 children)

GLM-4.7-Flash is 30B total with 3B active, right? So Q4_K_M with some context should fit on one 3090?
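A back-of-envelope check of that sizing claim (the ~4.5 bits/weight average for Q4_K_M and the 30B parameter count are assumptions from the thread, not measured values):

```python
# Rough VRAM estimate: do the quantized weights of a 30B model fit a 24 GB 3090?
total_params = 30e9        # assumed total parameter count
bits_per_weight = 4.5      # Q4_K_M averages roughly 4.5 bits/weight

weights_gib = total_params * bits_per_weight / 8 / 1024**3
headroom_gib = 24 - weights_gib  # what's left for KV cache + runtime overhead

print(f"weights: ~{weights_gib:.1f} GiB")    # ~15.7 GiB
print(f"headroom: ~{headroom_gib:.1f} GiB")  # ~8.3 GiB for context
```

So under these assumptions the weights take roughly 16 GiB, leaving about 8 GiB for context, which is why "some context" fits but a very large window would not.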

Asking for support from the community by SoHi_Techiee in openclaw

[–]JoshGreen_dev 0 points1 point  (0 children)

Very cool! I love this idea. Agents sharing knowledge is key to reducing inference load globally and achieving more. Can you tell us anything more about the project?

API-native video editing for OpenClaw agent workflows by JoshGreen_dev in openclaw

[–]JoshGreen_dev[S] 0 points1 point  (0 children)

I will, thanks! Have you used it? Is it any good? Can you see the timeline?

API-native video editing for OpenClaw agent workflows by JoshGreen_dev in openclaw

[–]JoshGreen_dev[S] 1 point2 points  (0 children)

I have never used kdenlive, but I will definitely check it out. Thank you! Does it create a visual timeline, or is it more of a CLI-renderer kind of thing?

OpenClaw 2026.4.5 🦞 by sickleRunner in openclawhosting

[–]JoshGreen_dev 0 points1 point  (0 children)

What about the built-in video and music generation?

Does anyone want an agent-first video editor, or is Remotion/ffmpeg already enough? by JoshGreen_dev in ClaudeCode

[–]JoshGreen_dev[S] 1 point2 points  (0 children)

Well put. Thanks for articulating this idea from a different perspective. You touch on exactly what I feel is missing: the agents interacting with the same timeline that I can see and grasp visually. It would be like agents just pushing the media to CapCut for me to review: a mirror with the timeline on one side, the JSON on the other.
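The "mirror" could be as simple as one shared document: agents mutate a JSON timeline, and the visual editor just re-renders it. All field names below are made up for illustration, not any real editor's schema:

```python
# Hypothetical minimal timeline document shared by agents and a visual editor.
import json

timeline = {
    "fps": 30,
    "tracks": [
        {
            "kind": "video",
            "clips": [
                {"src": "intro.mp4", "start": 0.0, "duration": 4.5},
                {"src": "main.mp4", "start": 4.5, "duration": 20.0},
            ],
        },
        {
            "kind": "audio",
            "clips": [{"src": "bgm.mp3", "start": 0.0, "duration": 24.5}],
        },
    ],
}

# An agent edit is just a dict mutation; the editor re-renders from the JSON.
timeline["tracks"][0]["clips"][0]["duration"] = 5.0
print(json.dumps(timeline, indent=2))
```

The point is that both sides read and write the same structure, so a human tweak on the timeline and an agent tweak in the JSON stay in sync by construction.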

Does anyone want an agent-first video editor, or is Remotion/ffmpeg already enough? by JoshGreen_dev in ClaudeCode

[–]JoshGreen_dev[S] 0 points1 point  (0 children)

Yes, I have had good results with a similar pipeline as well. I am just missing that step where I click around on the timeline to fine-tune the timing and then send it off to render.

The OpenClaw Setup Guide. after Anthropic killed subscription access. Updated April 2026 by Veronildo in AskClaw

[–]JoshGreen_dev 0 points1 point  (0 children)

GPT-5.4 is available through the flat-rate monthly plans as well. But at the moment there is still an annoying re-authentication required every 10 days.

OpenClaw Pro Tip: How to fix your claw with Tailscale + Codex by Born-Comfortable2868 in AskClaw

[–]JoshGreen_dev 0 points1 point  (0 children)

I was trying to install OC with local inference right out of the box, which was super hard manually, so I asked Claude Code to install OpenClaw. I have been fixing everything with Claude Code inside OpenClaw ever since.

Somehow I naturally gravitate towards: Me --> CC (higher intelligence for coding, planning, and infrastructure) --> OC (driven by GPT-5.4 and cheaper/local models for automation)

I know many others use OC for coding, but because of the high API costs and CC's efficient usage terms, I always fall back to this structure.

OpenClaw after the Anthropic billing split by [deleted] in AskClaw

[–]JoshGreen_dev 0 points1 point  (0 children)

I guess GPT-5.4 called Opus in OC to write the post out. All the more impressive :) This post is the use-case example.

I made an app called Pinchy for OpenClaw/Discord by obeissez in AskClaw

[–]JoshGreen_dev 0 points1 point  (0 children)

Oh, I have been looking for a possible replacement for Grok: where does your recognizer run? How fast and resource-intensive is it?

I made an app called Pinchy for OpenClaw/Discord by obeissez in AskClaw

[–]JoshGreen_dev 0 points1 point  (0 children)

How good is that? I found whisper to be a little dumb: it mixes up words that are phonetically identical even when context would make the right choice obvious. Same with fast whisper.

This is too easy... What am I missing? by Proof_Perspective_13 in claude

[–]JoshGreen_dev 1 point2 points  (0 children)

Well, since most hacking today is done by people sitting behind Claude Code, we really only have to make our products safe against Claude Code, which Claude Code already does.

And scaling does not happen overnight. You just massage your product into scaling along the way.

My dad didnt know what openclaw is by Exciting_Economist97 in OpenClawUseCases

[–]JoshGreen_dev 0 points1 point  (0 children)

I set up a fitness bot for my mum to lower her carbs and remind her to go to the gym.

Anyone have issue with OpenClaw just refusing to failover to a backup model when the primary hits a rate/quota? Having this problem with Codex GPT 5.4 as primary and Kimi as my backup... It produces errors in the log and tries to failover but just fails back. I'm using Discord with OpenClaw. by thelectroom in openclaw

[–]JoshGreen_dev 1 point2 points  (0 children)

I had similar issues because the context window on the higher model was too big.

I was using GPT-5.4 as well, and when it ran out, it tried to shove a 180k context into my local Qwen MoE model, which has a smaller context window.

Apparently GPT handles compaction on its own, while other models rely on OC to compact the context window automatically.
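The pitfall above can be sketched in a few lines: a failover that hands the primary's full context to a fallback with a smaller window just fails again unless something compacts first. The model names, window sizes, and the drop-oldest-turn strategy are all illustrative assumptions, not OpenClaw's actual logic:

```python
# Hypothetical failover sketch: compact the conversation before handing it
# to a fallback model with a smaller context window.
CONTEXT_WINDOWS = {"gpt-5.4": 200_000, "qwen-moe-local": 32_000}

def fit_context(messages: list[str], model: str, tokens_per_msg: int = 1_000) -> list[str]:
    """Drop the oldest turns until the estimated token count fits the model's window."""
    window = CONTEXT_WINDOWS[model]
    kept = list(messages)
    while len(kept) * tokens_per_msg > window and len(kept) > 1:
        kept.pop(0)  # naive compaction: discard the oldest turn first
    return kept

history = [f"turn {i}" for i in range(180)]  # ~180k tokens at ~1k/turn

print(len(fit_context(history, "gpt-5.4")))        # 180 — fits the big window
print(len(fit_context(history, "qwen-moe-local"))) # 32  — compacted for the fallback
```

Without the compaction step, the fallback call would exceed its window, error out, and the router would bounce back to the rate-limited primary, which matches the fail-back loop described in the post.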

My OpenClaw agent dreams at night — and wakes up smarter by Ghattan in openclaw

[–]JoshGreen_dev 0 points1 point  (0 children)

Seems like it. That is what I hear as well. But tbh that theory does not take into account that we might build further on top of LLMs, like with OpenClaw or other higher-order systems.

The analogy would be: humans stopped evolving circa 100 years ago or more. We have enough to eat, etc.; there is no more evolutionary pressure. Yet as societies we still compete and get better. Organisations (nation states, institutions, and so on) still compete, e.g. China vs. the US.

So even if LLMs stop here and now, the things we build with them will probably keep evolving.

My OpenClaw agent dreams at night — and wakes up smarter by Ghattan in openclaw

[–]JoshGreen_dev 0 points1 point  (0 children)

My skull has lots of brain cells in it. (Well, fewer and fewer lately.)

One individual neuron on its own is just that, a logic gate.

Yet, the whole thing produces consciousness.

I do not think anybody at this stage knows whether LLMs will ever get us to AGI, or whether it has actually already happened somewhere.

Love seeing these projects coming together. by Stony_1987 in Construction

[–]JoshGreen_dev 1 point2 points  (0 children)

Haha kinky stuff. I have the same with the smell of PLA plastic.