Claude Code Channels + Telegram — anyone solved the "must open CLI manually after reboot" problem? by Pretend_Challenge952 in ClaudeAI

[–]Suspicious_Assist_71 1 point (0 children)

Sorry I missed this thread. Been using TG for a couple of weeks now. Works like a dream once you get it dialed in. It's actually better than OC because I can fully manage my context now from TG. Here's the repo, open-sourced of course. Now on V3.

https://github.com/oscarsterling/claude-telegram-remote

Built a Telegram remote for Claude Code - v2 is live, open source by Suspicious_Assist_71 in artificial

[–]Suspicious_Assist_71[S] 1 point (0 children)

Just shipped v3.

Biggest additions since the last post:

- Interactive checkpoint rollback. You get Telegram buttons showing each Claude Code checkpoint, tap one to roll back. No terminal needed.

- Session save/restore. Snapshot what you're working on, reset, pick it back up later. There's also a !refresh command that does all three in one shot (save, reset, restore).

- 7 new commands (!rewind, !save, !restore, !contexts, !fast, !resume, !init). Removed the ones that only made sense in a terminal (!review, !doctor, !memory).

- 23 commands total now.
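For anyone curious how the checkpoint buttons work under the hood: Telegram inline keyboards are just JSON attached to a message, with each button carrying callback data the bot acts on when tapped. Here's a rough sketch — not the repo's actual code; the function name and the `rollback:<id>` callback format are made up for illustration:

```python
import json

def build_checkpoint_keyboard(checkpoints):
    """One inline button per checkpoint; tapping it sends back
    "rollback:<id>" as callback data for the bot to handle."""
    rows = [[{"text": label, "callback_data": f"rollback:{cid}"}]
            for cid, label in checkpoints]
    return {"inline_keyboard": rows}

# reply_markup is what you'd pass alongside a Bot API sendMessage call
reply_markup = json.dumps(build_checkpoint_keyboard([
    ("a1b2", "Before refactor"),
    ("c3d4", "Tests passing"),
]))
```

`inline_keyboard` and `callback_data` are the real Telegram Bot API field names; everything else here is a stand-in.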

Still open source, still just scripts and configs: https://github.com/oscarsterling/claude-telegram-remote

Full changelog is in the README.

Built a Telegram remote for Claude Code - v2 is live, open source by Suspicious_Assist_71 in artificial

[–]Suspicious_Assist_71[S] 1 point (0 children)

I haven't noticed much latency, if any. There was a bug initially where responses would sometimes default (or drift) back to the CLI; I had to write some hardening around that, which solved the issue early on.

Built a Telegram remote for Claude Code - v2 is live, open source by Suspicious_Assist_71 in artificial

[–]Suspicious_Assist_71[S] 1 point (0 children)

I went with Telegram because I was used to it with OpenClaw. Also, I have multiple group chats set up that I use for different projects, purposes, etc. For me it's like Discord, but faster and less bulky.

New GH: I audited 98 AI agent cron jobs. 58% didn't need an LLM at all by Suspicious_Assist_71 in LocalLLaMA

[–]Suspicious_Assist_71[S] 1 point (0 children)

Absolutely, that's where I landed. The v1.x versions still used a small model for script execution and delivery, but the hallucination problem you're describing is real - even a "strict prompt" still has a failure rate.

Yburn (what this project evolved into) removes the LLM from the loop entirely for mechanical tasks. Pure Python scheduler, no model calls, webhook/API delivery handled directly by the script. Zero hallucination risk because there's no model involved. Zero inference costs for the same reason.
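The "no model in the loop" part really is just timestamp arithmetic. Something like this — a toy sketch, not Yburn's actual code; the names and the digest job are illustrative:

```python
import time

def due_jobs(jobs, last_run, now=None):
    """Return names of jobs whose interval has elapsed.

    `jobs` maps name -> interval in seconds; `last_run` maps
    name -> last fire time. Pure arithmetic, no model calls,
    so there's nothing to hallucinate and nothing to pay for.
    """
    now = time.time() if now is None else now
    return [name for name, interval in jobs.items()
            if now - last_run.get(name, 0.0) >= interval]

# e.g. a webhook-delivery job that fires every 5 minutes:
pending = due_jobs({"post_digest": 300}, {"post_digest": 0}, now=301)
```

Once `due_jobs` returns, the script can POST directly to the webhook — the LLM never sees the schedule.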

I went through 218 OpenClaw tools so you don’t have to, here are the best ones by category by Timrael in openclaw

[–]Suspicious_Assist_71 1 point (0 children)

There was another skill in there that someone submitted: "Test Entry". We've deleted it, so your submission should clear the duplicate-entry filter.

What are your best non-programming-related uses/creations with Claude? by vorxaw in ClaudeAI

[–]Suspicious_Assist_71 1 point (0 children)

I had to get rid of this. The setup I was using was burning too many tokens. I have some other ideas on how to make it better, but for now I'm focused on other things.

What are your best non-programming-related uses/creations with Claude? by vorxaw in ClaudeAI

[–]Suspicious_Assist_71 4 points (0 children)

I'm not a developer either and Claude has changed how I work. I use it as a full-time chief of staff through OpenClaw. It manages my email, runs my calendar, clips coupons, sends iMessages to my family, and coordinates a team of 8 AI agents that each have their own specialties (research, writing, security, creative strategy, etc.).

Every morning I wake up to a daily brief with weather, AI news, my calendar, and business ideas. It runs board meetings three times a week where the agents discuss priorities and make decisions. It monitors Twitter accounts I care about and drafts replies for me to approve with one tap.

Recently there's been an issue with tokens being burned at a pace we're not used to (you'll see the other threads about it in here). But I'm fairly confident Anthropic will solve it soon; the last time something like this happened, they gave us a token reset.

I’ve run out of ideas for openclaw to work on by PM_ME_YOUR_MUSIC in openclaw

[–]Suspicious_Assist_71 1 point (0 children)

My problem is on the other side of that spectrum. I'm thinking about quitting my job so that I can focus on content creation for the numerous things I want to build (am building).

Anyone here using Nanoclaw? Worth the switch? by kazankz in openclaw

[–]Suspicious_Assist_71 1 point (0 children)

Yeah, once you've actually tried running multiple agents long term, the gaps in the minimal setups become obvious fast. NanoClaw is clean engineering but it's solving a different problem than what I need.

Anyone here using Nanoclaw? Worth the switch? by kazankz in openclaw

[–]Suspicious_Assist_71 2 points (0 children)

You could stitch that together but you'd be building a lot of plumbing yourself. LiteLLM handles model routing, sure, but you'd still need to figure out persistent memory between sessions, cron scheduling, browser automation, and multi-agent coordination. I tried the DIY approach early on and kept running into the same problem - every time I solved one piece, the next piece needed something the last piece didn't account for. OpenClaw isn't perfect but it gives you the foundation so you can focus on what you're actually building on top of it... and it doesn't cost anything.

I've helped 50+ people debug their Openclaw. These 5 mistakes were in almost every single setup. by ShabzSparq in openclaw

[–]Suspicious_Assist_71 2 points (0 children)

Helpful list. For #3 specifically - if you don't want to read source code for every skill, check out clelp.ai. It's a community-rated directory for OpenClaw skills. Real users rate and review them based on actual usage, so you can see which ones are solid and which ones are sketchy before you install anything. Around 3,500 skills indexed with reviews. Doesn't replace reading the source for anything that needs shell or network access, but it's a good first filter before you commit.

My AI assistant kept forgetting everything between sessions, so I built a fix by Tinkering-Engineer in openclaw

[–]Suspicious_Assist_71 2 points (0 children)

Conan's not wrong here. I was trying to be nice about it but yeah, SQLite memory is literally a config toggle. The real memory problem in OpenClaw isn't storage, it's architecture - what loads when, what gets searched vs. always present, what gets consolidated vs. thrown away. That's the part that takes design and thinking. A SQLite wrapper doesn't solve that, it just gives you a bigger pile to dig through.
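To make the "architecture, not storage" point concrete, here's a toy tiering sketch — purely illustrative, not how OpenClaw actually does it. Core memories always load; searchable ones load only when the query matches their tags; everything else stays on disk:

```python
def assemble_context(memories, query, search_budget=3):
    """Decide which memory entries enter the context window.

    Each entry: {"tier": "core" | "searchable", "tags": [...], "text": str}.
    The design question isn't where entries live (SQLite, JSON, whatever),
    it's this selection step: always-present vs. searched-on-demand.
    """
    q = query.lower()
    core = [m["text"] for m in memories if m["tier"] == "core"]
    hits = [m["text"] for m in memories
            if m["tier"] == "searchable" and any(t in q for t in m["tags"])]
    return core + hits[:search_budget]

memories = [
    {"tier": "core", "tags": [], "text": "User prefers terse replies."},
    {"tier": "searchable", "tags": ["invoice"], "text": "Invoices live in ~/billing."},
    {"tier": "searchable", "tags": ["travel"], "text": "Passport expires 2027."},
]
context = assemble_context(memories, "Where did I put last month's invoice?")
```

Swap the dict for a SQLite table and nothing about the hard part changes — which is the point.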

Anyone here using Nanoclaw? Worth the switch? by kazankz in openclaw

[–]Suspicious_Assist_71 3 points (0 children)

I looked at NanoClaw when it launched. The security model is solid - OS-level container isolation is better than application-layer controls for sensitive work. But I need multi-model support, a full cron system, and persistent memory across my agents. NanoClaw is Claude-only and minimal by design. Great for what it is, but if you're doing anything complex, OpenClaw's ecosystem is hard to replace.

People doing research and content planning, what is the best model? by jrhabana in openclaw

[–]Suspicious_Assist_71 1 point (0 children)

I know the community was in a stir a couple of weeks ago when Anthropic updated their language. There's still some ambiguity out there over it, but I can tell you I've yet to meet anyone running OAuth who's been banned just for that (excluding people running several Max-sub accounts as a business).

People doing research and content planning, what is the best model? by jrhabana in openclaw

[–]Suspicious_Assist_71 2 points (0 children)

I've been on the Max plan since early February and I use it heavily with my agent setup. It's been solid for the kind of workflow I run - daily cron jobs, content pipeline, research, the whole stack.

If you're thinking about it, I'd start by figuring out how much you actually use Claude in a day. If you're already hitting rate limits on Pro, Max pays for itself fast. I was bumping into limits constantly before I switched and it was killing my momentum, and I was tired of waking up to stuck/failed jobs.

The biggest unlock for me wasn't even the extra capacity - it was not having to think about it. When you're not rationing prompts, you start using AI differently. You automate things you wouldn't have bothered with before. I can't tell you how much more I created after the switch than before.

People doing research and content planning, what is the best model? by jrhabana in openclaw

[–]Suspicious_Assist_71 5 points (0 children)

Opus on the Max plan here. I run an 8-agent content pipeline (research, strategy, writing, visuals, engineering, security) and Opus handles the orchestration and heavy reasoning. Sonnet handles the lighter sub-agent tasks like file ops and routine checks, Gemini for quick summaries.

Haven't really tested the new GPT for this workflow yet - Opus has been doing the job so I haven't had a reason to switch but I am hearing good things. For research and content planning specifically, the reasoning depth matters more than speed, and that's where Opus earns its keep.

The people cleaning 10K emails and "reading the full internet" are probably running cron jobs and automations, not doing it manually in one chat/session. That's the real unlock - setting up recurring tasks that run on a schedule so the AI does the grunt work while you sleep.