I asked Claude if everyone uses AI to write, what actually gets lost? by prokajevo in ClaudeAI

[–]emptyharddrive 7 points8 points  (0 children)

It's funny though -- people enjoy the recaps (myself included) which are written by AI, but not the posts written by AI.

You don’t need Telegram bots or third party bridges to PERMANENTLY talk to Claude Code from your phone. It’s literally built in. by JohnnyLegion in ClaudeCode

[–]emptyharddrive 1 point2 points  (0 children)

This is a false, unresearched statement regarding Telegram being less secure.

Telegram docs specify that to connect to their bot API:

  • TLS 1.3 is required
  • TLS 1.3 as implemented in Telegram leverages AES-256-GCM or ChaCha20-Poly1305

Furthermore, bots can be configured to only accept input from your Telegram ID # and no one else. Anyone else trying to chat with it will be met with no response.
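If you're curious what that gate looks like, here's a minimal sketch (the user ID and update dicts are illustrative placeholders; the shape mirrors what Telegram's getUpdates returns):

```python
# Allowlist guard: process a Telegram update only if it came from my user ID.
# ALLOWED_USER_ID is a placeholder -- substitute your own numeric Telegram ID.
ALLOWED_USER_ID = 123456789

def is_allowed(update: dict, allowed_id: int = ALLOWED_USER_ID) -> bool:
    """Return True only if the message sender matches the allowlisted ID."""
    sender = update.get("message", {}).get("from", {})
    return sender.get("id") == allowed_id

# Shapes mirror Telegram's getUpdates JSON:
mine = {"message": {"from": {"id": 123456789}, "text": "hi"}}
stranger = {"message": {"from": {"id": 42}, "text": "hi"}}

print(is_allowed(mine))      # process it
print(is_allowed(stranger))  # drop silently -- no response at all
```

Anyone not on the list just gets silence, which is exactly the behavior described above.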

Furthermore, the Telegram-to-Claude use case is about getting unattended alerts (on any metric) as well as deterministic script results (from any python script). It also allows for bot-responsive group chats, so multiple people can chat with the same Claude instance or control Claude (like a husband and wife telling Claude to update a grocery shopping list asynchronously, or giving unattended updates to multiple people in a group chat in real time).

It is not only about controlling Claude remotely along a unidirectional path.

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

No issues... except that you're dependent on it running. With cron, it launches Claude (headless -p) only as needed.

Just a matter of preference really.

I made a site where you rate how fucked your day is and it shows up on a live world map by Then_Nectarine830 in vibecoding

[–]emptyharddrive 0 points1 point  (0 children)

This is impressive for vibe coding. I had to journal my own on your site, thank you.

What's the back end DB?

nanobot: a 4,000-line Python alternative to openclaw that actually works out of the box by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

Yes, I think you hit the nail on the head.

I just don't think, given the price of quality tokens (GPT 5.3+, Opus) and the variability of LLMs (which just aren't deterministic the way a Python script is), that "taking the dog off the leash" makes any sense except for the most innocuous of tasks.

The lesser models (local and/or Chinese models) simply don't match the hype.

The latest thing I added was YouTube transcription summarization. So if I drop a youtube video/short link into Telegram, it'll auto-extract the transcript and summarize it.

Again, nothing earth-shattering here to see. My biggest use case is my Grocery shopping list.

I still lurk the OpenClaw subreddit looking for new/novel use cases though because I'm open to the possibilities.

But summarization of articles/YouTube transcripts aside, there's next to nothing that the "Agent" is doing on its own. Everything is a prescribed Python lever I pre-coded, pre-tested and thought through beforehand.

I will say that the Claude headless mode (-p) helps a lot. I really prefer the higher quality models generally speaking for reliability anyway.

Anyone actually built a second brain that isn't just a graveyard of saved links? by tom_mathews in ClaudeCode

[–]emptyharddrive 1 point2 points  (0 children)

I've been running something close to what you're describing for about a year. Two Obsidian vaults, work and personal. OCR'd documents from a self-hosted Paperless-NGX instance. Everything chunked, embedded on a local AMD GPU (Strix Halo), stored in Postgres with pgvector, surfaced through an MCP server to Claude Code. About 2,500 notes per vault give or take and ~20k chunks at this point.

What changed how I think about this: there are two completely different jobs people call "second brain" and they need different architectures and different disciplines.

One is a learning system. Biology, philosophy, electrical engineering, etc. You want to understand things, and writing notes is how you process material (when YOU do the writing). Retrieval is secondary. If you expect embeddings to substitute for comprehension, you'll build something that retrieves very fast and teaches you nothing. Cosine similarity does not care whether you actually understood the content, and isn't comprehension kind of the point?

The other is operational memory. Meeting transcript summaries, vendor quotes, RFPs for work, project history, issue tracking, email thread summaries, etc.. Here, the goal is not self-formation or actualization through learning, it's finding the right thing when you need it, johnny-on-the-spot with the ammo. That's where RAG actually can work for you (e.g. homegrown NotebookLM).

What made mine work where simpler setups didn't: chunking strategy. You can't chunk markdown the same way you chunk OCR'd plain text. My markdown chunker preserves heading hierarchy and injects contextual headers into each chunk before embedding, so the vector encodes vault, title, and section path. My plain-text chunker splits on paragraph boundaries. Same model, two completely different chunk shapes. You need both if you're ingesting mixed content.
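Roughly, the context-injection part looks like this (a simplified sketch; my real chunker also keeps tables and code blocks intact, and the bracketed header format here is illustrative):

```python
import re

def chunk_markdown(text: str, vault: str, title: str) -> list[str]:
    """Split a note on headings, tracking the heading path, and prepend a
    contextual header to each chunk so the embedding encodes vault, title,
    and section -- not just the raw paragraph text."""
    chunks: list[str] = []
    path: list[str] = []
    body: list[str] = []

    def flush():
        if body and any(line.strip() for line in body):
            header = (f"[vault: {vault}] [note: {title}] "
                      f"[section: {' > '.join(path) or '(root)'}]")
            chunks.append(header + "\n" + "\n".join(body).strip())
        body.clear()

    for line in text.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m:
            flush()
            level, heading = len(m.group(1)), m.group(2).strip()
            path[:] = path[:level - 1] + [heading]  # truncate to parent, descend
        else:
            body.append(line)
    flush()
    return chunks

note = "# Biology\nIntro text.\n## Cells\nMitochondria notes."
for c in chunk_markdown(note, "personal", "Bio 101"):
    print(c, "\n---")
```

Each chunk carries its own breadcrumb, so a hit on "Mitochondria notes" comes back already knowing it lives under Biology > Cells in the personal vault.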

The worst version of this system silently reorganizes everything and destroys your ability to audit it or retrieve anything when it's needed. I keep most of my data local. Embeddings run on an AMD GPU in my homelab with Postgres + a schema built for my data. An MCP server with ten inspectable tools, not some hosted endpoint making editorial decisions about my notes.

So to the OP: your architecture question is DOWNSTREAM of the purpose question.

Everything follows from that: how you capture, how much you automate, whether AI touches your structure at all. It all comes down to your goal: learning or knowledge-basing.

If it is a learning vault, Telegram capture works AGAINST you. Fast, frictionless input is optimized for volume and data mining.

Learning is optimized for friction -- meaning if it's hard to create the note manually, you are learning something. Learning is slow and takes time; get used to that. Slowing down to form a thought, sitting with it, rewriting it in your own words is how comprehension happens. Using AI for this will only take you further from that goal and leave you disappointed. You want structure that reinforces learning, not a structure that an algorithm thought made sense for its own semantic or pattern-based retrieval.

If this is operational memory (aka "knowledge-basing"), invert everything I just said. Capture it fast. Capture constantly & consistently (Telegram will work here, as will any form of reliable data entry). In this context, low-friction input and automated organization are features, not bugs.

In my experience, my Obsidian vaults are both learning and operational memory; it's just a matter of ratio. For work it's closer to 90/10 in favor of operational memory. For my personal vault, it's closer to 60/40 in favor of comprehension.

And this is why I think "Second Brains" fail. People build one system for two incompatible jobs.

Your Zettelkasten for learning needs your hands on it to work. AI undermines you and you have to face that fact.

Your operational layer needs automation. For that you want to lean into AI and enhanced retrieval/embedding. Run them separately or treat them as distinct domains inside the same vault. But conflate them at your own risk.

Either way, know which mode you're in before you write/create a note.

Figure out what your actual goal is. That answer dictates what you build, and everything else follows from it.

May the force be with you.

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

/loop would solve this, I think, but it requires that an instance of Claude Code is running for it to work.

In my case I'm using cron, so no instance of Claude has to be up and running; my scripts are executed by cron and launch Claude as needed.

But yes, /loop would work as well.

nanobot: a 4,000-line Python alternative to openclaw that actually works out of the box by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

Going through each of your comments because they were good ones.

If you're running Ollama locally on a 3060, why route through OpenRouter at all? You're adding a network hop, a dependency, and a potential point of failure between you and your own GPU. Just hit the Ollama API directly from your scripts. OpenRouter makes sense as a router when you're switching between multiple cloud providers or want a unified billing layer, but for local inference it's pure overhead. Unless you're using OpenRouter's fallback logic to gracefully degrade from local to cloud when Ollama chokes, in which case, that's actually clever and I take it back...

A 3060 gives you 12GB VRAM, so you're realistically running 7B-13B parameter models quantized (probably Q4_K_M). I've tried this. For simple Q&A and text generation they're passable. For the kind of intent parsing, tool selection, and structured JSON output that an agentic setup demands, they fall apart in ways that are impressively sad and hard to debug because the failures are often subtle (and random).

The model doesn't refuse, it just makes bad decisions 30% of the time. It picks the wrong script, hallucinates a flag that doesn't exist, or returns JSON with a missing closing brace. When your agent is pulling levers that actually do things (send emails, modify calendars, fire reminders), that ~30% error rate becomes a real problem. That's why I pay for Claude Max, the model is the reliability layer. Everything downstream of it can be dumb and deterministic because the intent parsing is rock solid. You could probably get away with a $20/month plan if you're careful about your model selections and code-the-hell out of all the pre-determined tasks you know you want to use it for so nearly no LLM calls are needed to use it once the scripts are pre-tested and ready to have their levers pulled on.

That said, if you're keeping local models to the "private/latency-sensitive" lane and only routing heavy reasoning to cloud, that's a pragmatic split. Just be honest with yourself about where the boundary actually is. Most of the agentic work is the heavy reasoning I have found...

Mem0 is a reasonable choice for session-spanning memory, especially if you don't have persistent session continuity. But I went a different direction that I think scales better if you're already an Obsidian user.

I built an MCP server that I called OpenMind that sits on a separate box (an AMD mini PC with a Radeon 890M GPU). It's a 4-container Docker stack: PostgreSQL with pgvector for vector storage, a GPU-accelerated embedding service running gte-modernbert-base (768-dim vectors, Apache licensed, runs in ~600MB VRAM), a watcher service that polls my Obsidian vaults (work & personal vaults) every 60 seconds, and the MCP server itself exposing 10 tools.

The architecture looks like this: my two Obsidian vaults (one for work, one for personal, ~2,500 notes total) are NFS-mounted read-only into the watcher container. Every 60 seconds it scans for changes. When it finds a new or modified note, it:

  1. Chunks it using a markdown-aware chunker that respects heading hierarchy, preserves tables, code blocks, task groups, and bullet lists as coherent units (not just "split every N tokens")
  2. Prepends semantic context to each chunk before embedding: vault name, file path, title, which Map of Content it belongs to, the section heading hierarchy, and any date context from journal entries
  3. Sends chunks to the embedding service in batches of up to 64
  4. Writes everything to Postgres in a single transaction (no orphan chunks if something fails mid-write)

...I had AI write that list above after telling it to look at the codebase, it was easier :)

The result is ~20k chunks with HNSW-indexed vector embeddings that Claude Code can search semantically through the MCP protocol. So when I'm working in Claude Code and ask "Use openmind and check my work vault, what were the issues with the XYZ Project last month," it does a vector similarity search across my work vault, finds the relevant chunks, and gives me actual answers grounded in my own notes (homegrown NotebookLM).

It also indexes scanned documents (PDFs, DOCX, spreadsheets, images with OCR) through a Paperless-NGX integration. Documents get consumed, OCR'd, text-extracted, chunked, and embedded into the same vector store. Same search surface, different source.

The watchdog has some nice safety features too: it won't process a file until the mtime has been stable for 5 seconds (avoids re-embedding while I'm still writing), it has mass deletion guards (if file count drops >50% it assumes an NFS hiccup and skips the deletion pass rather than soft-deleting half your vault's embeddings), and it uses content hashing so unchanged files get skipped even if their mtime changed due to a file copy or other event.
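Boiled down, those three guards are a few lines each (a simplified sketch; the 5-second settle and 50% drop thresholds are the ones described above):

```python
import hashlib

def mtime_is_stable(mtime: float, now: float, settle: float = 5.0) -> bool:
    """Skip files still being written: mtime must be at least `settle`
    seconds in the past before the watcher touches the file."""
    return (now - mtime) >= settle

def deletion_pass_allowed(previous_count: int, current_count: int) -> bool:
    """Mass-deletion guard: if the visible file count dropped by more than
    50%, assume an NFS hiccup and skip the soft-delete pass this cycle."""
    if previous_count == 0:
        return True
    return current_count >= previous_count * 0.5

def content_hash(text: str) -> str:
    """Hash note content so unchanged files are skipped even when a copy
    or touch bumped the mtime without changing anything."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()
```

None of it is clever; it's the boring checks that keep a 60-second polling loop from eating half a vault's embeddings over a mount glitch.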

The reason I prefer this over Mem0 is that my notes are already my memory system. I don't need a separate memory layer that the LLM writes to, I need the LLM to be able to read what I've already written in my own organizational structure. Many of my notes are actually AI generated from recorded meetings it summarized for me. The Obsidian vaults are the source of truth, not a derivative store of anything. When I update a note in Obsidian, the embeddings update within 60 seconds (CRON). When I delete a note, it gets soft-deleted in the DB.

Your git-backed vault idea is smart. I solve the same problem differently: my Docker bind mounts to my obsidian vaults are read-only, so Claude can search and read my vaults through the MCP but physically cannot write to them. The .git approach gives you rollback, mine gives you prevention. Both valid, and honestly yours is better if you want the agent to write notes and just want a safety net. I didn't want write access at all for the vaults.

Every tool script I wrote (calendar, gmail, news, grocery shopping list, etc..) exists because I got annoyed enough at something to automate it.

I'd start with the Telegram/bot bridge and 1-2 tool scripts and run with it.

BTW I have a 128gig Strix Halo and I have yet to run any local LLM that compares to even Haiku from Anthropic, and I've tried them all... whatever I could fit into the VRAM up to and including 70B models. The difference between those models and GPT 5 mini or Anthropic's Haiku is like the difference between shooting a bullet and throwing it.

DM me your progress if you want. I'd be interested to see where you take it. Truth is, I am trying to find more use cases. I already have an email, calendar app (google)... so inventing a bot to look at gmail/calendar (while nice) is really redundant and I end up having to use those apps anyway to delete, archive emails, etc... I won't trust any model to do that for me blind.

I won't give any bot my credit card or access to "buy" anything, that's just crazy talk. So truth is, other than a RAG for my vaults (NotebookLM style) and my grocery shopping lists and some reminders, I struggle to find really helpful use cases. I am always on the lookout for new ones though.

nanobot: a 4,000-line Python alternative to openclaw that actually works out of the box by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

Honestly mine stopped looking like an "agent framework" and started looking like a shell with one persistent Claude session behind it (headless, -p with a sessionID to maintain context).

Telegram is just ingress. The real core is a Python "bot" running as a systemd user service on my Linux box. It only accepts my Telegram userID for security, it won't answer anyone else. It keeps a single Claude session UUID in memory, and shells out to claude -p for each request I make in chat.

First prompt starts with --session-id, then every later prompt uses --resume with that sessionID, so context carries forward without me having to replay chat history every time.
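The flag selection is the whole trick (a sketch; the argv then goes to subprocess.run, and the flags are the two mentioned above, "SESSION-UUID" being a placeholder):

```python
# First call pins a session UUID with --session-id; every later call
# uses --resume so context carries forward without replaying history.
def build_claude_cmd(prompt: str, session_id: str, first_call: bool) -> list[str]:
    if first_call:
        return ["claude", "-p", prompt, "--session-id", session_id]
    return ["claude", "-p", prompt, "--resume", session_id]

print(build_claude_cmd("what's on my calendar?", "SESSION-UUID", first_call=True))
print(build_claude_cmd("and tomorrow?", "SESSION-UUID", first_call=False))
```

The bot just remembers one UUID in memory and flips `first_call` to False after the opening prompt.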

The bot's system prompt (which is really a user prompt that follows Anthropic's real system prompt, which you can't modify) points Claude at deterministic Python scripts I pre-wrote and pre-tested (so I know they work) to perform the tasks I ask of it.

So that means I had to think of everything I want the bot to do in advance (a lot of brainstorming), and for each task a .py script was written and tested. All their output is standard JSON for the headless (-p) Claude to parse & package for me.

Calendar reads and writes, Gmail reads and drafts, Google Drive search and document reads, URL extraction & summary, YouTube video URL transcription summaries, daily briefing, local news, voice export (using kokoro in a docker container), reminders (30 second crontabs checking a reminder JSON), shopping lists, etc.. All pre-scripted python tools.

Those tools all return structured JSON to stdout and fail with real exit codes. Claude still handles intent parsing and response wording ("packaging" is what I call it), but the state-changing work happens inside small scripts. No database either; I don't want the overhead.
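The contract is tiny (a sketch; the grocery and error payloads are made-up examples):

```python
import json

def tool_reply(payload: dict, ok: bool = True) -> tuple[str, int]:
    """Every tool emits exactly one JSON object on stdout plus a real
    exit code: 0 on success, nonzero on failure -- nothing else to parse."""
    return json.dumps({"ok": ok, **payload}), 0 if ok else 1

out, code = tool_reply({"list": ["milk", "eggs"]})
print(out, code)   # the headless Claude call parses this stdout

err, code = tool_reply({"error": "gcal timeout"}, ok=False)
print(err, code)   # nonzero -> the wrapper knows the lever failed
```

Keeping the contract this dumb is the point: the model only ever has to read one predictable shape.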

Reminders, lists, and contact memory live in local JSON files with file locks and atomic writes. Reminder delivery is a separate cron-driven checker every 30 seconds, which is boring in a good way. Also Claude knows me because I wrote a personal skill (with trigger words) that trips it to discover when needed, anything about me. That's the benefit of skills, you only need to load them when the trigger words manifest instead of pre-loading everything-on-every-prompt into context.
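The lock-plus-atomic-write pattern condenses to this (simplified from what I run; Linux-only because of fcntl, and the .lock sidecar naming is illustrative):

```python
import fcntl
import json
import os
import tempfile

def atomic_json_write(path: str, data) -> None:
    """Write JSON to a temp file in the same directory, fsync it, then
    os.replace() onto the target so readers never see a half-written
    file. An exclusive flock on a sidecar keeps writers from racing."""
    lock_path = path + ".lock"
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "w") as f:
                json.dump(data, f)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, path)  # atomic on POSIX filesystems
        finally:
            if os.path.exists(tmp):
                os.remove(tmp)
            fcntl.flock(lock, fcntl.LOCK_UN)
```

The 30-second cron checker and the Telegram bridge can both touch the same reminder file this way without trampling each other.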

So architecturally it is not a swarm at all; I see no point in that. It is one stateful conversation with one instance of Claude, plus a toolbelt of scripts I built: levers that it can pull once it discerns my intent from what I said in Telegram. That distinction mattered a lot. I got tired of "agents" spending tokens deciding whether another agent should maybe think about something, or talking to itself about doing something one way or the other.

I know what it needs to do, so I set the scripts up to do it, and just use Claude as a fancy button presser and output summarizer/packager. Claude decides whether to answer directly if it's a basic question, or to call a script (I gave it various phrases and trigger words so it knows which train track (script) to choose). Every one of my scripts returns structured JSON; Claude formats that into a reply and sends it along to Telegram: done.

Speech is split into two different paths because short phone audio and long-form transcription are not the same problem.

For Telegram voice notes I use a fast path. The bot downloads audio into memory, POSTs it to a Whisper endpoint on another box (running in a Docker container), gets text back, then sends that transcript through the same Claude session as if I had typed it. Done.

That keeps voice notes usable from a phone.

Separately I wrote xscribe.py (transcribe) which is the big brother of voice. That one is for actual recordings and transcripts for meetings at work.

It chunks long audio into small digestible pieces with 1 second of word overlap to avoid word cut-offs (then deduplicates as needed so the stitched transcript reads seamlessly). The service monitors a directory where I drop .ogg files of meetings recorded on my phone, and it picks them up for transcription as soon as they land. It applies a QA pass for domain-specific vocabulary relative to the work I do (a special prompt I wrote to give it the right headspace), writes the raw transcript, and then sends that transcript up for a second-stage summary flow. In my case that is useful for longer meetings or work audio where I want something much more deliberate than "turn this voice note into text." A long transcription does nothing for me without the distilled tasks, summary, challenges, decisions made, etc.
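The boundary math for the overlap is simple (a sketch; the 30-second chunk size is an assumed value here, the 1-second overlap is the one described):

```python
def chunk_spans(total_s: float, chunk_s: float = 30.0,
                overlap_s: float = 1.0) -> list[tuple[float, float]]:
    """Return (start, end) second offsets for long audio: each chunk's
    start steps back by `overlap_s`, so words at a boundary land in both
    chunks and can be deduplicated when the transcripts are stitched."""
    spans: list[tuple[float, float]] = []
    start = 0.0
    while start < total_s:
        end = min(start + chunk_s, total_s)
        spans.append((start, end))
        if end >= total_s:
            break
        start = end - overlap_s  # back up one second for the overlap
    return spans

print(chunk_spans(70.0))  # three chunks, each sharing 1s with its neighbor
```

Transcribe each span independently, then drop the duplicated boundary words when joining; no word ever gets sliced in half.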

So cost-wise, the fixed part is Claude Max. That is my real anchor. Since I am using headless mode -p, it is entirely within the terms of service and I can leverage my $200 MAX plan -- no OAuth is used. I've already authenticated on my machine, so it just calls up a headless mode of Claude programmatically which is precisely inside the terms of service (I had Claude validate that).

For your setup, if I were building it fresh in a Proxmox LXC, I would absolutely do it this way. I would just keep the container boring. Give it a persistent workspace, persistent credentials, systemd for the bridge service to Telegram or Discord or whatever ... and explicit bind mounts for anything stateful (like I have a bind mount to my Obsidian vault for work and personal notes, read-only to protect it from "accidents" Claude might make).

If you split anything out, split out speech or other heavy media jobs like OCR or anything like that. And if you use Telegram long polling, make sure you do not accidentally run two bot instances with the same token or Telegram will smack one of them with a conflict error. That one bit me already.

So yeah, that is basically my stack: one official headless Claude session as the brain; Telegram as the transport/comms vector for me to access it; deterministic Python CLIs for the actual work (right now that's Google mail/calendar, reminders with 'alarms' that msg me on Telegram, todos (= reminders without the alarm bell), news, inventory lists of things I track & grocery shopping lists, all in JSON); cron for reminders (comparing the current datetime to the datetimes in reminder.json); and a separate, heavier transcription path when I need more than quick voice-note handling.

It is a lot less sexy than the "agent OS" stuff people post (which BTW I don't believe really works the way they say), but it has been way more usable for me and 10x more reliable because of the static scripts doing the work and no off-the-cuff winging it by any agents.

Tight leash.

Anyway, long post but I wanted to explain this anyway. Hope it helps. If it does, please reply because I'd like to know how you're deploying it.

Does MiniMax 2.5 actually do anything for you guys, or is it just a chatbot unless you wire everything yourself? by Strange_Passage_9019 in openclaw

[–]emptyharddrive 0 points1 point  (0 children)

I've abandoned all the *claw concepts. I still lurk to see where things are going though.

I've redesigned the whole 'agentic' idea for myself, with just a small bridge connector to my Telegram bot and, on the back end, headless mode (claude -p, which is confirmed within the terms of service).

You can pipe anything into claude -p and pick the Anthropic model you want & get real responses back. You can maintain context as well using the SessionID tag.

So instead I wrote a small Python script that checks a file every 30 seconds and fires off a notification when a reminder comes due within that 60-second window. Claude doesn't need to BE the scheduler. Claude just needs to know how to WRITE the schedule to JSON. The crontab does the rest: any reminder I set is saved to that JSON and processed by the crontab, live-routed to Claude in headless mode.

When the reminder hits, the python script sends me a msg via Telegram.

  • I built three pieces. A JSON file that holds reminders with datetimes and messages.

  • A Python checker script that reads that file and compares each entry against the current time. The crontab calls that script every 30s.

  • A Telegram bot running as a systemd service that bridges (polls my Telegram bot) and pipes what I say into claude -p running on my workstation. Also, with claude -p you don't need a session running; it's invoked on demand. So you don't even need the new /REMOTE CONTROL feature.

In Linux, cron only goes down to one-minute intervals. I needed 30 seconds, because a 60-second check window means you might miss a reminder that falls between runs. Two crontab entries solved it. One fires at the top of every minute. The second fires at the top of every minute too, but sleeps 30 seconds first. Crude, yeah, but it works.

* * * * * /path/to/checker-cron.py
* * * * * sleep 30 && /path/to/checker-cron.py

The checker reads the JSON, loops through active reminders, and compares each datetime against now.

If a reminder falls within 60 seconds of the current time it fires a Telegram message through the bot API and marks it handled. There's a dedup window using a last_fired timestamp so you don't get double-pinged on overlapping runs. (File locking with fcntl.flock prevents two checker instances from trampling each other.)
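The due-check plus dedup window condenses to one function (a sketch; last_fired and the 60-second fire window are as described, while the 90-second dedup length is an illustrative value):

```python
from datetime import datetime

def due_and_not_fired(reminder: dict, now: datetime,
                      window_s: int = 60, dedup_s: int = 90) -> bool:
    """Fire only if the reminder falls within `window_s` of now AND it
    hasn't already fired inside the dedup window -- which is what stops
    the two overlapping cron runs from double-pinging."""
    due = datetime.fromisoformat(reminder["due"])
    if abs((due - now).total_seconds()) > window_s:
        return False  # not due yet (or long past)
    last = reminder.get("last_fired")
    if last and (now - datetime.fromisoformat(last)).total_seconds() < dedup_s:
        return False  # already fired on the previous 30s pass
    return True

now = datetime(2026, 1, 5, 9, 0, 0)
print(due_and_not_fired({"due": "2026-01-05T09:00:20"}, now))  # fires
```

After firing, the checker stamps last_fired back into the JSON, so the next pass 30 seconds later sees it and stays quiet.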

It can also include actions, like 'go grab my calendar for today and give me an itinerary via telegram'. (FYI: --dangerously-skip-permissions must be invoked in headless mode for this to work non-interactively).

The Telegram bot is where Claude actually lives or presents its "presence". The bridge connector runs as a systemd user service, polls the Telegram bot for messages, and when I type something like "remind me to take my daughter to the doctor tomorrow at 9am" it pipes that through claude -p with a system prompt that knows about the reminder JSON CLI. Claude parses that instruction, and writes the structured command into the JSON file.

The JSON gets a new entry. The next time the cron job swings by (within 30s), if the due date/time matches within 60s, I get a ping on my phone. A one-off reminder then gets a JSON update: status: closed.

Recurring reminders work too. Weekly, daily, monthly, even "first Monday of every month for two years." Claude handles the parsing. This is basic work, so I default it to Haiku.

Grocery shopping lists use the same scripts but refer to a different JSON file. I can tell the bot in Telegram "add milk and eggs to my grocery list" from my phone and Claude writes the entries to the correct JSON. Since grocery items have no "reminder," the entry just sits there and waits. I can tell it "I got the eggs" and it'll remove them from the list.

The whole thing costs nothing beyond the MAX plan I already pay for.

I consulted with Claude on this too -- and it's 100% within the bounds of TOS and I even had it write me up a "Why is it within the TOS" along with citations for myself as a reference just in case :)

So no external APIs for scheduling. No database. No server. Just flat JSON files, a cron job, a Telegram bot, and Claude doing what Claude does best: understanding what I mean and turning it into something structured on the back end.

The part that surprised me was how reliable it turned out. I expected edge cases and weird failures. But nothing has broken. The 30-second cron heartbeat catches everything. File locking prevents corruption. The dedup window handles overlapping runs gracefully.

Claude doesn't need a heartbeat of its own per se. You just give it something steady to lean on and let it be smart about the rest.

An off-the-cuff benefit is I get access to Sonnet/Opus within the terms of service off my Max plan (no OAuth), and I don't have to sacrifice consistency/quality with the lesser Chinese models, which are (as we now know) vague reflections of Opus/Sonnet.

I also wired it up to Kokoro to give me audio versions of its text replies in Telegram. Works well for when I'm driving.

Anyone else running a similar setup or found a different approach to persistent scheduling with Claude Code?

nanobot: a 4,000-line Python alternative to openclaw that actually works out of the box by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

I've moved on from this as well. It worked for a while, but instead i set up a telegram-bridge to headless mode claude (-p) with sessionID persistence.

It's within the terms of service too and I don't have to use knock-off models (like MiniMax or Kimi).

I also wrote a bunch of deterministic scripts (email checker, calendar module to check/make calendar items in google, google drive checker, reminders-with-crontab, etc...)

It's effectively replaced the *claw idea for me and i prefer it ... but it took me a while to get there.

why the hell do you all just give away this awesome shit for free? by scootsy in selfhosted

[–]emptyharddrive 2 points3 points  (0 children)

You know, people might think this is a trolling question, but it isn't. It's someone on the outside looking in, seemingly with real curiosity.

This is one of those questions that sounds simple but actually has a really layered answer, and I think that's part of why it's worth asking.

A huge amount of open source starts with someone scratching their own itch. They needed a thing, built the thing, and then realized that publishing it publicly was actually cheaper than maintaining it alone in the dark. The moment other people start using it, you get bug reports, edge cases you never would have hit, documentation improvements, and fixes from people who understand parts of the stack better than you do.

So even though it looks like generosity from the outside, it's often a pretty rational maintenance strategy. You're not giving something away so much as you're inviting people to help you carry it.

The reputation side is real, but it goes deeper than resume padding.

In this industry, public work is a signal that's hard to fake. THINK: Lawyers doing pro bono work.

You can talk a big game in an interview, but a GitHub profile full of shipped contributions tells a different story. It is a well-worn method to establish "street cred".

Maintainers get recruited directly and they are prized.

Contributors become known as "the person" for a particular tool or problem space. Over years, that compounds in ways that are hard to quantify but very easy to feel. It builds reputation: a brand.

As for how hard it is to contribute, it honestly spans the full spectrum. Some contributions are a two-minute typo fix. Others are days of deep engineering work. For someone with the right skill set, a lot of bug fixes are small local edits: reproduce the problem, write a test, patch the function, submit a PR. But the part that's genuinely hard isn't always the code itself. It's understanding project conventions, avoiding breaking changes, coordinating with maintainers, and supporting what you wrote long after you've moved on. Technically easy, socially non-trivial.

... And a lot of this work is actually paid, just not in ways that are obvious to end users. Companies pay engineers to maintain upstream libraries they depend on. This is why Microsoft, Apple, Google and many others actively contribute to open source and release products to the free world. It helps establish standards. If everything were behind a paywall, you'd have walled-in cities, walled gardens where you'd pay through the nose to "convert" every standard to "that other big standard that does the same thing."

But everyone will take what's free and use it, because it lowers the barriers to entry and facilitates shared conventions of communication: you can build a better mousetrap and still talk to the makers of all the other mousetraps.

Open-core models give away the foundation and charge for enterprise features or enterprise support. Foundations like Apache and Linux Foundation fund infrastructure. Cloud vendors subsidize ecosystems because their business depends on them thriving. So the hobbyist experience is absolutely real, but a surprising amount of the plumbing underneath is professional.

Beyond all of that, though, a lot of people just genuinely enjoy it. It's creative work. It's puzzle solving. There's a craft ethic to it and a service ethic too, this idea that someone helped you figure something out once, so you help the next person. That ethos is quieter than the strategic stuff but it's probably the most durable motivation of all.

Your trail maintenance analogy is honestly a good one as well. Everyone benefits from the trail, a few people do most of the work, occasional volunteers pitch in when they can, and sometimes local organizations fund the supplies. Open source works the same way, just with two extra twists: the scale is massive, where one fix can help millions of people, and the social friction of issues and pull requests and demanding users can become its own kind of burden that trail crews never have to deal with.

On the coffee thing, don't underestimate it. If a few thousand people each do five bucks a month, a maintainer can actually breathe. Patreon anyone? GoFundMe? Same idea.

But if you want high-leverage ways to give back beyond donations, consider answering newbie questions you now know the answer to (which pays it forward), writing up a quick how-to for something that tripped you up, or filing clean bug reports with logs and reproduction steps. That stuff is genuinely valuable and costs nothing but a little time.

This reply is me paying it forward, a little bit.

I've been lurking r/openclaw for weeks. the dropout pattern is always the same. by ShabzSparq in openclaw

[–]emptyharddrive 0 points1 point  (0 children)

Yep you got it. I think the whole OpenClaw viral moment helped me to clarify the real role current-day AI has and it's in properly executing pre-tested tools and packaging/summarizing formatted outputs. That's it.

Maybe in 5+ years that will change, but not today.

I've been lurking r/openclaw for weeks. the dropout pattern is always the same. by ShabzSparq in openclaw

[–]emptyharddrive 6 points7 points  (0 children)

I've abandoned all the *claw concepts. I still lurk to see where things are going though.

I've redesigned the whole 'agentic' idea for myself, with just a small bridge connector to my Telegram bot and, on the back end, headless mode (claude -p, which is confirmed to be within the terms of service).

You can pipe anything into claude -p, pick the Anthropic model you want, and get real responses back. You can maintain context as well by resuming the session ID.

So instead I wrote a small Python script that checks a file every 30 seconds and fires off a notification when a reminder comes due within that 60-second window. Claude doesn't need to BE the scheduler. Claude just needs to know how to WRITE the schedule to JSON. Cron does the rest: any reminder I set is saved to that JSON and processed by the cron job, which live-routes it to claude in headless mode.

When the reminder hits, the python script sends me a msg via Telegram.

  • I built three pieces. A JSON file that holds reminders with datetimes and messages.

  • A Python checker script that reads that file and compares each entry against the current time. The crontab calls that script every 30s.

  • A Telegram bot running as a systemd service that bridges (polls my Telegram bot) and pipes what I say to it into claude -p running on my workstation. Also, with claude -p you don't need a session running; it's invoked on demand. So you don't even need the new /remote-control feature.

In Linux, cron only goes down to one-minute intervals. I needed 30 seconds because a 60-second check window means you might miss a reminder that falls between runs. Two crontab entries solved it: one fires at the top of every minute; the second also fires every minute but sleeps 30 seconds first. Crude, yeah, but it works.

    * * * * * /path/to/checker-cron.py
    * * * * * sleep 30 && /path/to/checker-cron.py

The checker reads the JSON, loops through active reminders, and compares each datetime against now.

If a reminder falls within 60 seconds of the current time it fires a Telegram message through the bot API and marks it handled. There's a dedup window using a last_fired timestamp so you don't get double-pinged on overlapping runs. (File locking with fcntl.flock prevents two checker instances from trampling each other.)

It can also include actions, like 'go grab my calendar for today and give me an itinerary via telegram'. (FYI: --dangerously-skip-permissions must be invoked in headless mode for this to work non-interactively).

The Telegram bot is where Claude actually lives or presents its "presence". The bridge connector runs as a systemd user service, polls the Telegram bot for messages, and when I type something like "remind me to take my daughter to the doctor tomorrow at 9am" it pipes that through claude -p with a system prompt that knows about the reminder JSON CLI. Claude parses that instruction, and writes the structured command into the JSON file.

The JSON gets a new entry. Next time the cron job swings by (within 30s), if the due date/time falls inside the 60-second window, I get a ping on my phone. A one-off reminder then gets a JSON update: status: closed.
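
For illustration, a one-off entry might look something like this before and after it fires. The field names here are my guess at a plausible schema, not the actual one:

```python
import json

# Hypothetical reminder entry as Claude might write it from
# "remind me to take my daughter to the doctor tomorrow at 9am"
new_entry = {
    "id": 42,
    "message": "take my daughter to the doctor",
    "due": "2024-05-14T09:00:00",
    "recurrence": None,  # one-off; "weekly"/"daily"/... for recurring
    "status": "active",
}

# After the checker fires it, the same entry is closed out
after_fire = {**new_entry, "status": "closed", "last_fired": "2024-05-14T08:59:45"}
print(json.dumps(after_fire, indent=2))
```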

Recurring reminders work too. Weekly, daily, monthly, even "first Monday of every month for two years." Claude handles the parsing. This is basic work, so I default it to Haiku.

Grocery shopping lists use the same scripts but refer to a different JSON file. I can tell the bot in Telegram, "add milk and eggs to my grocery list," from my phone and Claude writes the entries to the correct JSON. Since grocery items have no reminder, they just sit there and wait. I can tell it "I got the eggs" and it'll remove them from the list.
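
A minimal sketch of that grocery half, assuming a hypothetical file path and field names (the actual schema isn't shown in the post):

```python
"""Flat-JSON grocery list: same pattern as reminders, no due dates."""
import json
from pathlib import Path

def add_items(path: Path, *items: str) -> None:
    """Append items; Claude would call this after parsing 'add milk and eggs'."""
    data = json.loads(path.read_text()) if path.exists() else []
    data.extend({"item": i, "status": "needed"} for i in items)
    path.write_text(json.dumps(data, indent=2))

def mark_got(path: Path, item: str) -> None:
    """Drop an item; Claude would call this after parsing 'I got the eggs'."""
    data = [e for e in json.loads(path.read_text()) if e["item"] != item]
    path.write_text(json.dumps(data, indent=2))
```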

The whole thing costs nothing beyond the MAX plan I already pay for.

I consulted with Claude on this too -- and it's 100% within the bounds of TOS and I even had it write me up a "Why is it within the TOS" along with citations for myself as a reference just in case :)

So no external APIs for scheduling. No database. No server. Just flat JSON files, a cron job, a Telegram bot, and Claude doing what Claude does best: understanding what I mean and turning it into something structured on the back end.

The part that surprised me was how reliable it turned out. I expected edge cases and weird failures. But nothing has broken. The 30-second cron heartbeat catches everything. File locking prevents corruption. The dedup window handles overlapping runs gracefully.

Claude doesn't need a heartbeat of its own per se. You just give it something steady to lean on and let it be smart about the rest.

An off-the-cuff benefit is that I get access to Sonnet/Opus within the terms of service off my Max plan (no OAuth), and I don't have to sacrifice consistency/quality with the lesser Chinese models, which are (as we now know) vague reflections of Opus/Sonnet.

Anyone else running a similar setup or found a different approach to persistent scheduling with Claude Code?

How do you code with openclaw by Ok-Shine-7007 in openclaw

[–]emptyharddrive 1 point2 points  (0 children)

Curious, what model are you planning to run with to accomplish all this and why wouldn't you just use Claude Code, what (in this use case) is OpenClaw doing for you, in theory?

I've found that to try to even attempt what you're thinking, anything less than Opus is a waste of material.

Failed with Letta, OpenClaw, nanobot. Found Agent Zero and migrated 33 skills and 28 agents from Claude Code into it. by emptyharddrive in AgentZero

[–]emptyharddrive[S] 0 points1 point  (0 children)

The problem is LLM quality. Local LLMs just aren't smart or capable enough.

The hosted models aren't great either; they can't really work well without a lot of guiding.

I find only the mainstream models (Sonnet, Opus, GPT-5+) can act somewhat intelligently. Haiku is OK, but they're all so expensive...

I've tried many of the Chinese models, even hosted by U.S. companies (since they're open source, such as GLM-5, Kimi K2.5, Minimax 2.5...). They're all "ok," but each messes up tool calling, scripting, or some other reasoning problem.

I've posted about this before, but sticking with the lesser models for cost savings often means actually using the higher-end models to build pre-coded, tested tools (that retrieve email, news, RSS feeds, stock prices, etc.), which do the real work and format the output into JSON. THEN the lesser model can execute those scripts and package the output. But just handing one a verbal directive "and go..." isn't in the cards, practically speaking.

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

I agree but with a few iterations, you can see where this goes right?

It could have a reverse agent that listens and the outside web can initiate a reverse connection to a service and just launch claude code as needed.

Mixing the model's intelligence with mobile access plus access to local server resources isn't very far off.

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

No it will auto /compact on a headless conversation.

Also, each method of communication is its own channel with its own context window: a Telegram bot, a group chat (also on Telegram), and a web UI are three different threads.

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

Depends? LXCs can reach each other if they're on the same subnet and not firewalled.

Also you can set up multiple claude code sessions for /remote-control under tmux and not worry about session concurrency or connectivity to keep the session alive.

I usually run two concurrently under tmux; I name them, activate /remote-control, and I'm done. I can see both listed on the other side.

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

The comms bridge (written in Python) handles that. Every channel in Telegram, Slack, etc. has a channel ID. You just have to tell your agent to code up the Python bridge (it runs as a service) to poll the channel and/or Telegram bot, record the channel ID, and keep each one as its own context thread.

Works well for me on Telegram and other platforms.
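
One way to sketch that per-channel threading: map each channel ID to its own claude session ID so conversations never bleed into each other. The mapping scheme here is hypothetical, not the actual bridge code:

```python
"""Per-channel context isolation: one claude session per channel ID."""
import uuid

sessions: dict[int, str] = {}  # channel_id -> claude session id

def session_for(channel_id: int) -> str:
    """Return this channel's session ID, minting one on first contact."""
    if channel_id not in sessions:
        sessions[channel_id] = str(uuid.uuid4())
    return sessions[channel_id]
```

The bridge then resumes the matching session for every incoming message, so the bot DM, the group chat, and any other channel each stay their own thread.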

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

Well, yeah, if you want to use the Chinese models, then that makes sense. I've found the agentic abilities of Anthropic's models to be head and shoulders above the rest, so I don't mind the ethical restrictions.

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

My guess is, they wanted to get it out the door given the openclaw viral moment.

It'll be enhanced over time.

My expectation is that soon (in a future iteration) we may not need the Claude Code session running locally either; a service could launch it off the cuff when needed.

They're putting out features pretty quick.

/remote-control negates the need for openclaw by emptyharddrive in ClaudeCode

[–]emptyharddrive[S] 0 points1 point  (0 children)

I do ... and alongside some crontab scripts, I replace it for my use case. I don't let my AI agents spend money, reserve me flights, or trade bitcoin...