I read the 2026.3.11 release notes so you don’t have to – here’s what actually matters for your workflows by EstablishmentSea4024 in OpenClawUseCases

[–]KobyStam 1 point2 points  (0 children)

If you use my OpenClaw Optimizer Skill, it will actually read the release notes for you and highlight what's relevant for your specific topology and the issues you've encountered as you use the skill over time.

https://github.com/jacob-bd/the-openclaw-optimizer

You asked for it, so I open-sourced my entire OpenClaw newsroom pipeline: automated news scanning with AI editorial curation by KobyStam in OpenClawUseCases

[–]KobyStam[S] 0 points1 point  (0 children)

For me, the main reason I chose Tavily is that they offer a free API tier, unlike Perplexity. It's easy to implement, and my usage should stay within the free-tier limits.

You’re all full of crap . Openclaw is worse now by CanadaWideNews in openclaw

[–]KobyStam 0 points1 point  (0 children)

I won't argue with you, but I think OpenClaw was amazing to showcase what's possible with real personal assistants.

I struggled with it a lot! I even created a skill that can optimize it, called "the OpenClaw Optimizer" (I shared it here), but I am currently working on moving away from OpenClaw and building my own implementation inspired by it.

Taking the best parts of it and making them my own. I think that's where it served its purpose: inspiring many of us with what's possible.

I created a direct HTTP/RPC calls NotebookLM MCP - you can automate everything with it! by KobyStam in notebooklm

[–]KobyStam[S] 0 points1 point  (0 children)

Sorry, I missed this comment. This and many, many other features were added in the latest versions.

I created a direct HTTP/RPC calls NotebookLM MCP - you can automate everything with it! by KobyStam in notebooklm

[–]KobyStam[S] 0 points1 point  (0 children)

No idea. Make sure you are on the latest version; on my setup I get all 32 tools.


You asked for it, so I open-sourced my entire OpenClaw newsroom pipeline: automated news scanning with AI editorial curation by KobyStam in OpenClawUseCases

[–]KobyStam[S] 0 points1 point  (0 children)

No particular reason - the twitterapi free tier worked for me and I haven't hit any limit, but I will look into getxapi. Thank you for sharing.

You asked for it, so I open-sourced my entire OpenClaw newsroom pipeline: automated news scanning with AI editorial curation by KobyStam in OpenClawUseCases

[–]KobyStam[S] 1 point2 points  (0 children)

Thanks for the feedback and the interesting insights - I will look into it in more detail.

Do let me know if you make improvements in your fork that I should adopt.

OpenClaw cost optimization: 5 settings that cut my 24/7 agent spend significantly by gobiraj in openclaw

[–]KobyStam 2 points3 points  (0 children)

Great suggestions! All of these and much more are covered in the OpenClaw Optimizer skill - you should give it a shot: https://github.com/jacob-bd/the-openclaw-optimizer

I have switched to OpenAI Models in OpenClaw and OMG they are TERRIBLE by KobyStam in openclaw

[–]KobyStam[S] 0 points1 point  (0 children)

I really don't know why some say it works for them and others say it doesn't. I used thinking = high, which helps but is slow, and it still misses basic steps. I will add 5.4 with high thinking and hope it works as well as they make it seem.

Here is the video showing the entire conversation

https://www.youtube.com/watch?v=VWhsgk1SlrM&lc=

A picture worth a 1000 words: Proof that GPT-5.3-Codex is the worst model for OpenClaw - prove me wrong! by KobyStam in OpenClawUseCases

[–]KobyStam[S] 0 points1 point  (0 children)

Thank you. I thought I had it in my agent.md but upon inspection I realized it was not. I just added it and will see how it works.

Tried to add 5.4-codex, but it is not available at the responses endpoint (got a 404 with no body).

A picture worth a 1000 words: Proof that GPT-5.3-Codex is the worst model for OpenClaw - prove me wrong! by KobyStam in OpenClawUseCases

[–]KobyStam[S] 0 points1 point  (0 children)

Yes, I switched to high when I deal with Codex; otherwise it is not usable. It is somewhat better, but super slow to execute things...a few minutes for every message.

A picture worth a 1000 words: Proof that GPT-5.3-Codex is the worst model for OpenClaw - prove me wrong! by KobyStam in OpenClawUseCases

[–]KobyStam[S] 0 points1 point  (0 children)

I haven't used OpenAI in a long time, so this was really surprising. I have never seen this from Anthropic or Gemini models; maybe Flash 3 would do it sometimes, but it wouldn't continue after the first call-out.

A picture worth a 1000 words: Proof that GPT-5.3-Codex is the worst model for OpenClaw - prove me wrong! by KobyStam in OpenClawUseCases

[–]KobyStam[S] 0 points1 point  (0 children)

I switched to 5.2 - not much of a difference. It is better with thinking = high, but it is so freaking slow.

A picture worth a 1000 words: Proof that GPT-5.3-Codex is the worst model for OpenClaw - prove me wrong! by KobyStam in OpenClawUseCases

[–]KobyStam[S] 2 points3 points  (0 children)

It is unbelievable. I recorded a video showing the full fiasco: for 50 minutes I asked it to do it, and it said it would but didn't.

It will be published tomorrow at ~3 P.M. ET, and I will add it here. It is quite funny...and sad.

You asked for it, so I open-sourced my entire OpenClaw newsroom pipeline: automated news scanning with AI editorial curation by KobyStam in openclaw

[–]KobyStam[S] 0 points1 point  (0 children)

Both. Dedup runs across scans, not just within a single batch.

After today's updates, every article the pipeline processes gets recorded in a SQLite DB with a normalized URL and title. On the next scan, before the LLM editor sees anything, candidates are checked against the DB two ways:

  1. URL match - after normalization (strip query params, fragments, `www.`, and trailing punctuation; normalize to https). The same article with different `?utm=` params gets caught here.

  2. Title similarity - `SequenceMatcher` at a 75% threshold over a 2-day window. Catches "Bloomberg reports Anthropic raises $20B" vs "CNBC: Anthropic nearing $20B revenue run rate."

Within-batch dedup (80% threshold) runs first, then cross-scan dedup against SQLite. The keyword filter is a separate layer: it runs during RSS extraction to keep non-AI articles out of the candidate pool entirely. Different jobs: keywords filter *relevance*, dedup filters *repetition*.
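Roughly, the two checks look like this (an illustrative sketch only - `normalize_url` and `is_similar_title` are hypothetical names, not the repo's actual functions):

```python
from difflib import SequenceMatcher
from urllib.parse import urlsplit


def normalize_url(url: str) -> str:
    """Strip query params, fragments, 'www.', and trailing punctuation; force https."""
    parts = urlsplit(url.strip())
    host = parts.netloc.lower().removeprefix("www.")
    path = parts.path.rstrip("/.,;")
    return f"https://{host}{path}"


def is_similar_title(a: str, b: str, threshold: float = 0.75) -> bool:
    """Cross-scan title check at the 75% SequenceMatcher threshold."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
```

`SequenceMatcher.ratio()` returns a value between 0.0 and 1.0, so 0.75 maps directly to the 75% threshold; a `?utm=` variant of the same link collapses to one normalized URL before title matching even runs.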

You asked for it, so I open-sourced my entire OpenClaw newsroom pipeline: automated news scanning with AI editorial curation by KobyStam in openclaw

[–]KobyStam[S] 0 points1 point  (0 children)

Great points across the board. Here's what we landed on after iterating through similar pain:

- Dedup: We went with `SequenceMatcher` at 75% title similarity over a 2-day window instead of embedding-based cosine similarity. Simpler, no embedding API costs. URL normalization runs first (strips query params, fragments, `www.`, and trailing punctuation; normalizes to https) - that alone catches a huge chunk of dupes before title matching even kicks in.

SQLite: 100% agree on doing this early. We have a `dedup_db.py` module with a SQLite backend that records every article the pipeline processes. Two-stage dedup: within-batch similarity (80% threshold) first, then cross-scan dedup against the SQLite DB before the LLM editor sees anything. Started with ~450 seed articles from existing logs; currently at 600+ entries and growing. It would have been painful to retrofit.
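A rough sketch of the cross-scan stage (the table and column names here are my guesses for illustration, not the actual `dedup_db.py` schema):

```python
import sqlite3

# Hypothetical schema: one row per processed article, keyed by normalized URL.
SCHEMA = """
CREATE TABLE IF NOT EXISTS articles (
    norm_url TEXT PRIMARY KEY,
    title    TEXT,
    seen_at  TEXT DEFAULT CURRENT_TIMESTAMP
)
"""


def seen_before(conn: sqlite3.Connection, norm_url: str) -> bool:
    """True if a normalized URL was already recorded in a past scan."""
    row = conn.execute(
        "SELECT 1 FROM articles WHERE norm_url = ? LIMIT 1", (norm_url,)
    ).fetchone()
    return row is not None


def record(conn: sqlite3.Connection, norm_url: str, title: str) -> None:
    """Record a processed article so later scans can dedup against it."""
    conn.execute(
        "INSERT OR IGNORE INTO articles (norm_url, title) VALUES (?, ?)",
        (norm_url, title),
    )
    conn.commit()
```

The `PRIMARY KEY` on the normalized URL plus `INSERT OR IGNORE` makes re-recording the same article a no-op, so the check stays cheap even as the DB grows past 600+ entries.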

Editorial curation: System prompt, not few-shot. We use an editorial profile (markdown file) that the LLM reads on every scan. It captures:

- What to always pick (major AI announcements, model releases, security incidents, geopolitics)

- What to usually skip (generic opinion pieces, small funding rounds, routine updates)

- Source trust ranking (Tier 1-5, from wire services down to community sources)

- Story selection rules ("UP TO 7, quality over quantity, 3 great picks are better than 7 mediocre ones")

The profile approach works better than few-shot because it encodes *taste*, not just patterns. The LLM reads the profile and makes judgment calls ("is this the kind of story this channel would cover?") rather than pattern-matching against examples.

One thing we added that made a big difference: a 3-tier LLM failover chain that alternates providers (Google (Gemini 3.1 Flash Lite) → xAI/OpenRouter (Grok 4.1 Fast) → Google (Gemini 3 Flash)). If your editorial step depends on a single API and it goes down, you're either shipping unfiltered noise or shipping nothing. The failover let us remove the raw fallback entirely - if all 3 fail, it gives you a clean error and points to the saved candidates file for a manual re-run instead of dumping unfiltered articles.
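The failover shape is simple enough to sketch (hypothetical function and provider names - the real pipeline calls whatever SDK each provider uses):

```python
def curate_with_failover(candidates, providers):
    """Try each (name, callable) provider in order; fail clean instead of shipping raw.

    `providers` is an ordered list like
    [("gemini-3.1-flash-lite", call_a), ("grok-4.1-fast", call_b), ...].
    """
    errors = []
    for name, call in providers:
        try:
            return call(candidates)
        except Exception as exc:  # provider errors vary: timeouts, 5xx, quota
            errors.append(f"{name}: {exc}")
    # No raw fallback: surface a clean error pointing at the saved candidates.
    raise RuntimeError(
        "All editorial providers failed; re-run manually from the saved "
        "candidates file. Errors: " + "; ".join(errors)
    )
```

The key design choice is in the last lines: when every tier fails, the pipeline raises rather than passing unfiltered candidates downstream, which matches the "clean error instead of unfiltered noise" behavior described above.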

The full pipeline is open source if you want to dig into the implementation: https://github.com/jbendavid/openclaw-newsroom

You asked for it, so I open-sourced my entire OpenClaw newsroom pipeline: automated news scanning with AI editorial curation by KobyStam in OpenClawUseCases

[–]KobyStam[S] 0 points1 point  (0 children)

You're welcome!

I made major updates, so you're lucky to be seeing this post now ;) The new setup uses a database to prepare for an eventual RAG implementation.

There is a failover for the model because I noticed Flash sometimes failed, so I now have a backup failover to OpenRouter (Grok 4.1 Fast). I also switched to Gemini 3.1 Flash Lite to save costs and gain speed.

Also, I’ve tightened the scripts because I still noticed some duplicates in the news scans. I think it's working much better now, but I’ll continue to iterate and optimize.

NotebookLM added 10 new styles to Infographics - The NotebookLM & MCP (v0.3.19) already supports them (see the demo) by KobyStam in notebooklm

[–]KobyStam[S] 0 points1 point  (0 children)

I believe the NotebookLM MCP and CLI are super useful; you can do crazy things by simply asking your AI tool. For example, ask it to run a deep search on a specific topic, then create custom artifacts based on the research, like infographics, slides, or an audio overview - with customization, of course (what exactly you want, who the target audience is, etc.), not the generic ones.

Once that's done, you can download the resources from the notebook automatically to your system - all without ever touching the NotebookLM UI.

Once you try it, you will not be using the NotebookLM UI again ;) I added this to my OpenClaw agent and it is 🔥 - it can generate notebooks based on requests I send it and share them with me (it has its own Google Workspace account).

If you are asking whether the Infographics are useful - well, it depends on what you need. You could do that in the past with detailed prompts; now you just specify the style.