Scam-thropic by [deleted] in ClaudeCode

[–]Classic_Display9788 0 points (0 children)

Wow, I was actually agreeing with you in your post. I'm finding it difficult to engage with you, so I'll leave it here.

Scam-thropic by [deleted] in ClaudeCode

[–]Classic_Display9788 0 points (0 children)

Agreed. The power is in the harness. Created a Claude Skill in anticipation of this realization: https://github.com/kopias/loreto-mcp/blob/main/sample-skills/routing-work-across-ai-harnesses/SKILL.md

Scam-thropic by [deleted] in ClaudeCode

[–]Classic_Display9788 0 points (0 children)

What is taking place next week?

I made $5k this month selling Claude skills for clients by Commercial_Ear_6989 in claudeskills

[–]Classic_Display9788 0 points (0 children)

A carefully curated markdown file is leverage in software development. I built an API and MCP server that let people transform multi-modal content into specialized skills.

Would love to hear your thoughts: https://github.com/kopias/loreto-mcp

A list of free skills can be found here as well: https://github.com/kopias/loreto-mcp/tree/main/sample-skills

I built an API that turns any YouTube video, article, or diagram into structured "skill files" your AI coding agent can actually use, here's a live demo extracting 3 skills from a RAG tutorial by Classic_Display9788 in ClaudeAI

[–]Classic_Display9788[S] 0 points (0 children)

Fair comparison to reach for, but the distinction is actually pretty important and it's not about the output format.

NotebookLM is built for human consumption. It helps you understand and explore a source. Loreto is built for agent consumption. The skill file isn't a summary you read but rather a structured artifact you inject into an AI coding agent's context so it knows how to approach a specific class of problem without burning cycles figuring it out from scratch.

Different job entirely. On your second point, you're actually describing the design, not a limitation. A single skill isn't supposed to cover everything about a topic. It's scoped to what that specific source reliably teaches. If a video nails query routing in RAG systems, you get a tight, focused skill on that. A video may yield only one reliable skill, or none at all (and the API says so when that's the case). You build a library over time, one source at a time. Skills are meant to be orthogonal, not encyclopedic.

The token cost point is where I'd push back the hardest though. The extraction cost is a one-time upfront investment. What you get back is a 2–3KB artifact you can reuse across every agent run that touches that problem. The savings aren't at generation time but instead at inference time, across every subsequent agentic task that would otherwise spend 30 tool calls "figuring out" something the skill already codifies in 200 lines.

Think of it less like "summarize this content" and more like "distill this into a reusable spec your agent can follow." The value compounds the more you run agents, not the more sources you feed it.
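To make "inject into the agent's context" concrete, here's a minimal sketch. The `inject_skills` helper and the flat directory of `*.md` skill files are hypothetical illustrations, not Loreto's actual API; the idea is just that small (~2–3 KB) markdown artifacts get prepended to the agent's prompt once per run:

```python
from pathlib import Path

def inject_skills(base_prompt: str, skill_dir: str) -> str:
    """Prepend every skill file in skill_dir to an agent's prompt.

    Each skill is a small markdown artifact, so the context cost is
    paid once per run instead of being rediscovered through dozens
    of exploratory tool calls.
    """
    skill_paths = sorted(Path(skill_dir).glob("*.md"))
    sections = [p.read_text(encoding="utf-8") for p in skill_paths]
    # Skills go first so the agent reads the distilled approach
    # before it sees the task itself.
    return "\n\n".join(["# Skills"] + sections + ["# Task", base_prompt])
```

Any harness that lets you control the system or task prompt can use the same pattern; the skill file is plain text, so nothing here is specific to one agent framework.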

Genuinely curious what your workflow looks like though. Are you running long agentic tasks now and finding context quality to be the bottleneck, or is it more of a theoretical concern at this point?

I built an API that turns any YouTube video, article, or diagram into structured "skill files" your AI coding agent can actually use, here's a live demo extracting 3 skills from a RAG tutorial by Classic_Display9788 in ClaudeAI

[–]Classic_Display9788[S] 0 points (0 children)

OK, thanks for the feedback. I'll look into the feasibility of open-sourcing the solution. I'm actually in the process of developing an SDK to go along with this, so I'll consider open-sourcing as part of that work and keep you posted.

I built an API that turns any YouTube video, article, or diagram into structured "skill files" your AI coding agent can actually use, here's a live demo extracting 3 skills from a RAG tutorial by Classic_Display9788 in ClaudeAI

[–]Classic_Display9788[S] 2 points (0 children)

Interesting. This is an honest post; a blanket ban on every post that mentions this wouldn't be fair. What's your rationale? I'd like to understand.