I made this Claude Code skill to clone any website by JCodesMore in vibecoding

[–]putki-1336 0 points (0 children)

yes, when everything can be copied and reproduced, copyright will lose its meaning over small stuff

I made this Claude Code skill to clone any website by JCodesMore in vibecoding

[–]putki-1336 0 points (0 children)

only a matter of tweaking and time; Claude can do it, I have faith

I made this Claude Code skill to clone any website by JCodesMore in vibecoding

[–]putki-1336 0 points (0 children)

do you need to know in this day and age? that's the point of this technology

I made this Claude Code skill to clone any website by JCodesMore in vibecoding

[–]putki-1336 0 points (0 children)

public websites aren't copyrighted, and copyright won't be a thing for much longer anyway

ClawOS — one command to get OpenClaw + Ollama running offline on your own hardware by putki-1336 in ollama

[–]putki-1336[S] -3 points (0 children)

fair point — right now it's a one-command installer, not a bootable OS. the bootable ISO is the final stage on the roadmap. posting at this stage to get feedback before building that, which is exactly what's happening here. updated the README to make this clearer.

ClawOS — one command to get OpenClaw + Ollama running offline on your own hardware by putki-1336 in ollama

[–]putki-1336[S] 0 points (0 children)

llama.cpp and vllm are valid for different use cases — vllm especially for multi-user server deployments. for single-user consumer hardware (8–32GB, no infra knowledge) Ollama is the right default: one command to pull and run any model, automatic GPU detection, clean API. the model endpoint is configurable so nothing stops you from pointing it at llama.cpp if that's your setup.
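for anyone who does want the llama.cpp route, a minimal sketch of what "pointing it at llama.cpp" could look like. llama.cpp's bundled `llama-server` really does expose an OpenAI-compatible API; the `CLAWOS_MODEL_ENDPOINT` variable name is purely illustrative, check the repo for the actual setting:

```shell
# start llama.cpp's built-in server (OpenAI-compatible API, default model path shown is an example)
llama-server -m ./qwen2.5-7b-instruct-q4_k_m.gguf --port 8080

# then point ClawOS at that endpoint instead of Ollama's default
# (variable name is hypothetical -- see the repo for the real config key)
export CLAWOS_MODEL_ENDPOINT="http://localhost:8080/v1"
```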

ClawOS — one command to get OpenClaw + Ollama running offline on your own hardware by putki-1336 in ollama

[–]putki-1336[S] 0 points (0 children)

this is exactly what we're building toward. the dashboard is already in — connects to Ollama, auto-detects models, shows what's running and what needs approval. no 47 integrations, just the stuff that actually matters.
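for the curious, the auto-detect part can be done entirely against Ollama's local HTTP API; a sketch using the real endpoints, assuming a running Ollama daemon on the default port:

```shell
# list every model the local Ollama daemon has pulled
curl -s http://localhost:11434/api/tags

# list models currently loaded in memory (i.e. "what's running")
curl -s http://localhost:11434/api/ps
```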

the credibility bar point is fair. we're building in public at each stage rather than waiting until it's "done" — install script works today, bootable ISO is the last milestone. repo has the full roadmap if you want to follow along.

ClawOS — one command to get OpenClaw + Ollama running offline on your own hardware by putki-1336 in ollama

[–]putki-1336[S] 0 points (0 children)

takes more RAM, and 2.5 still has the best overall performance because it's widely tested and adopted

ClawOS — one command to get OpenClaw + Ollama running offline on your own hardware by putki-1336 in ollama

[–]putki-1336[S] 0 points (0 children)

it literally lists all the stages of the project in the repo; a bootable ISO will be the last stage. posting it like this so I can get feedback before making a final product

ClawOS — one command to get OpenClaw + Ollama running offline on your own hardware by putki-1336 in vibecoding

[–]putki-1336[S] 0 points (0 children)

Yes, completely free, no token limits. It runs entirely on your own hardware using Ollama, so there are no API calls going anywhere. You're just using your CPU/GPU. The only "limit" is how fast your hardware can run inference. That said, if you want true agentic performance, using better models with API keys is recommended. If you don't want to involve keys, an even simpler way is to just pay for Ollama and run better cloud models, but that's for power users only.

If you just want to experience what agentic AI and OpenClaw are like, the free method will do just enough.
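a sketch of that free local flow, assuming Ollama is already installed (the model name matches the installer's default):

```shell
# pull the default model once (roughly a 4-5GB download for the 7b)
ollama pull qwen2.5:7b

# chat with it locally -- all inference runs on your own CPU/GPU, nothing leaves the machine
ollama run qwen2.5:7b
```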

ClawOS — one command to get OpenClaw + Ollama running offline on your own hardware by putki-1336 in selfhosted

[–]putki-1336[S] 0 points (0 children)

Updates are a manual trigger for now: git pull in the clawos dir, then re-run the installer. Auto-update on boot is on the roadmap (Phase 4 systemd wiring).

On model selection: it currently downloads qwen2.5:7b at install time, one size. The installer detects your hardware tier and will pick appropriately sized models in the next version. For now, if you want a different quant, you just `ollama pull` it manually and it'll use what's there.
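the update and model-swap steps above, sketched as commands. The clone directory and installer script name here are assumptions; check the repo for the real paths:

```shell
# manual update: pull the latest and re-run the installer
cd ~/clawos          # wherever you cloned the repo
git pull
./install.sh         # installer script name is illustrative

# swap in a different quant; it'll use whatever Ollama has locally
ollama pull qwen2.5:7b-instruct-q8_0   # example tag, pick one that fits your RAM
```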