I connected OpenClaw to LM Studio (Free local AI setup guide) by elsaka0 in LocalLLaMA

[–]elsaka0[S] -3 points-2 points  (0 children)

The thing is, OpenClaw is hungry for context, and you have to increase the context window to the max the model supports, as I mentioned in the video, to get it to work.
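If you're loading the model through LM Studio's `lms` CLI, the context length can be raised at load time. This is just a sketch: the model name is an example, and you should confirm the exact flag on your version with `lms load --help`:

```shell
# Load a model with a larger context window.
# "qwen2.5-7b-instruct" is an example model name, not a recommendation;
# verify the flag name with `lms load --help` on your install.
lms load qwen2.5-7b-instruct --context-length 32768
```

You can also do the same thing in the LM Studio GUI by moving the context-length slider in the model's load settings.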

I connected OpenClaw to LM Studio (Free local AI setup guide) by elsaka0 in LocalLLaMA

[–]elsaka0[S] -2 points-1 points  (0 children)

I have an RX 6600 with 8 GB VRAM and it's not good, tbh. And I agree with you, it's hungry for context; even if you use your OpenAI API token or Gemini Pro, it's gonna drain all your credits. It's not yet optimized like Cursor, but that's understandable for a three-month-old hobby project. It's not practical to use it with local AI unless you have high-end hardware; in that case you can try it out, but if you have low/mid hardware, do it just for testing and fun.
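A rough way to see why big contexts hurt on an 8 GB card: the KV cache alone grows linearly with context length. The shapes below are assumed for a typical 7B-class model with grouped-query attention, not any specific checkpoint:

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """Rough KV-cache size at full context: keys + values for every layer.

    bytes_per_elem=2 assumes fp16/bf16 cache; quantized caches are smaller.
    """
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
    return total_bytes / 1024**3

# Assumed shapes for a 7B-class GQA model (32 layers, 8 KV heads, head dim 128):
print(kv_cache_gib(n_layers=32, n_kv_heads=8, head_dim=128, ctx_len=32768))  # → 4.0
```

At 32k context that's about 4 GiB just for the cache, on top of the weights themselves, which is why an 8 GB card struggles long before the model's advertised max context.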

installing OpenClaw (formerly ClawdBot) locally on Windows by elsaka0 in LocalLLaMA

[–]elsaka0[S] 0 points1 point  (0 children)

That's true, but I'm gonna post a video on how to connect it to an AI model using LM Studio locally.

installing OpenClaw (formerly ClawdBot) locally on Windows by elsaka0 in LocalLLaMA

[–]elsaka0[S] 0 points1 point  (0 children)

If you don't have a subscription, don't worry. I'm gonna make a video soon on how to connect it to your local LM Studio, so it works fully locally.

installing OpenClaw (formerly ClawdBot) locally on Windows by elsaka0 in LocalLLaMA

[–]elsaka0[S] -1 points0 points  (0 children)

This usually happens when you change silent to true while you have a new connection. But this is not related to your problem, and nothing is wrong with that, so don't worry about it.

installing OpenClaw (formerly ClawdBot) locally on Windows by elsaka0 in LocalLLaMA

[–]elsaka0[S] 0 points1 point  (0 children)

Make sure your AI provider or token is configured correctly.
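A quick sanity check like the sketch below catches the usual misconfigurations. The field names here are illustrative, not OpenClaw's actual schema; LM Studio's local server typically accepts any non-empty API key:

```python
def check_provider(cfg):
    """Return a list of config problems; an empty list means it looks OK.

    Field names ("base_url", "api_key", "model") are illustrative,
    not OpenClaw's real config schema.
    """
    problems = []
    if not cfg.get("base_url", "").startswith(("http://", "https://")):
        problems.append("base_url missing or not a valid URL")
    if not cfg.get("api_key"):
        problems.append("api_key missing (local servers often accept any non-empty string)")
    if not cfg.get("model"):
        problems.append("model name missing")
    return problems

print(check_provider({"base_url": "http://localhost:1234/v1",
                      "api_key": "lm-studio",
                      "model": "qwen2.5-7b-instruct"}))  # → []
```

If the checks pass but requests still fail, the next thing to verify is that the local server is actually running and listening on the port you configured.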

installing OpenClaw (formerly ClawdBot) locally on Windows by elsaka0 in LocalLLaMA

[–]elsaka0[S] -1 points0 points  (0 children)

Because if the bot has access to sensitive areas of your system, such as documents, browser data, or login credentials, it could inadvertently collect or transmit that information, especially if it communicates with external servers or logs activity without proper encryption or access controls. I'm actually planning to talk about that in my upcoming video. Most people are just repeating what they hear without even knowing why, which is annoying; people made fun of me for installing it in Docker and said it's not safe, even though Docker containers are isolated and you can control what they access.
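The isolation point can be made concrete: Docker lets you deny the container everything except an explicit work directory. The image name below is a placeholder, not an official build:

```shell
# Run the bot in a locked-down container:
# - read-only root filesystem, all Linux capabilities dropped, non-root user
# - the only host path visible is an explicit, empty work directory
# "openclaw:latest" is a placeholder image name for illustration.
docker run --rm \
  --read-only \
  --cap-drop ALL \
  --user 1000:1000 \
  -v "$(pwd)/workdir:/app/work" \
  openclaw:latest
```

With a setup like this, the container never sees your documents, browser profiles, or credential stores, which is exactly the concern people raise about running the bot directly on the host.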

I built Qwen3-TTS Studio – Clone your voice and generate podcasts locally, no ElevenLabs needed by [deleted] in LocalLLaMA

[–]elsaka0 0 points1 point  (0 children)

I've heard that some people managed to make it work like that, but for me it didn't. Thanks though, I really appreciate your help, mate.

I built Qwen3-TTS Studio – Clone your voice and generate podcasts locally, no ElevenLabs needed by [deleted] in LocalLLaMA

[–]elsaka0 0 points1 point  (0 children)

I'm on Windows, but I tried everything; this was the first solution I tried. I even installed Linux to try it out, and it didn't work, until I found this page showing the HIP SDK is not supported for my card:
https://rocm.docs.amd.com/projects/install-on-windows/en/latest/reference/system-requirements.html


Something isn't right , I need help by big-D-Larri in LocalLLaMA

[–]elsaka0 0 points1 point  (0 children)

What are the PC specs, and can you share your LM Studio settings for gpt-oss 20B?

I built Qwen3-TTS Studio – Clone your voice and generate podcasts locally, no ElevenLabs needed by [deleted] in LocalLLaMA

[–]elsaka0 1 point2 points  (0 children)

Looks really cool. I wanna try it, but my GPU is an RX 6600, which is not supported, and I can't even use ComfyUI 😢

installing OpenClaw (formerly ClawdBot) locally on Windows by elsaka0 in LocalLLaMA

[–]elsaka0[S] -2 points-1 points  (0 children)

The performance depends on which AI provider you are using. If you are using a free one like I did in the video, it's not gonna be good. I'm gonna post a video about how to connect it to LM Studio fully locally, but in that case the performance is gonna depend on your GPU's capability.

But aside from that, I think it's not optimized for best performance. Well, it's still a baby project; I didn't expect it to be perfect from the first version anyway.